TX2 Inference Server

  • 24x NVIDIA Jetson TX2 modules (1 TFLOPS each), 6,144 total GPU CUDA cores with NVIDIA® Pascal™ architecture
  • 2x 10G SFP+, 2x 1G SFP uplink capability
  • 3x 2.5″ SATA drives
  • 1U ATX-style redundant power supply
  • Operating Temperature: 0°C to +60°C (+32°F to +140°F)

Part #: UTX2AS-01, UTX2AS-02, UTX2AS-03, UTX2AS-04

More Information

Developed in partnership with USES Integrated Solutions, the TX2 Inference Server is an extremely low-wattage, high-performance deep learning inference server powered by the NVIDIA Jetson platform. Housed in a compact 1U chassis, the TX2 Inference Server can be outfitted with up to 24x NVIDIA Jetson TX2 modules. Within the server, three processor module carriers house up to eight Jetson TX2 modules each, all connected via a Gigabit Ethernet fabric through a specialized managed Ethernet switch developed by Connect Tech (XDG201) with 10G uplink capability.

The TX2 Inference Server includes Out-of-Band Management (OOBM) via an ARM-based SMARC module that controls and monitors each of the Jetson TX2 modules. The OOBM not only monitors the health and boot status of each module, but can also remotely hard-power each one on or off individually.

Specifications

Processor Module Carriers

• Each module carrier accepts up to 8x NVIDIA Jetson TX2 modules
• 3x module carriers can be installed in the system, for a total of 24 modules

Out-of-band Management Module

• ARM-based OOBM (SMARC module)
• Provides serial console access, power-status monitoring, and power control (on/off) for all 24x TX2 modules
• OOBM accessible via Ethernet or via its own integrated USB-to-serial console
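As a rough illustration of the serial-console workflow above, the sketch below sends a power command for one module slot over a file-like byte stream. The command strings and prompt format are hypothetical assumptions for illustration only; the real OOBM command set is documented in the product manual. In a live deployment the stream could be a pyserial port opened on the OOBM's USB UART.

```python
import io

def set_module_power(console, slot, state):
    """Send a hypothetical power on/off command for one TX2 slot (1-24).

    `console` is any writable binary file-like object; in practice it
    would be a serial port connected to the OOBM's USB-to-serial console.
    """
    if not 1 <= slot <= 24:
        raise ValueError("the UTX2AS holds at most 24 Jetson TX2 modules")
    if state not in ("on", "off"):
        raise ValueError("state must be 'on' or 'off'")
    # Hypothetical command syntax; check the UTX2AS manual for the real one.
    console.write(f"power {state} {slot}\r\n".encode("ascii"))
    console.flush()

# Demonstrate against an in-memory buffer instead of real hardware.
buf = io.BytesIO()
set_module_power(buf, 3, "on")
print(buf.getvalue())  # b'power on 3\r\n'
```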

Internal Storage

Up to 3x 2.5″ SATA drives (6TB total)

Internal Array Communication

• 24x Gigabit Ethernet / 1000BASE-T / IEEE 802.3ab channels
• All TX2 modules can communicate with all other TX2 modules
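Because every module sits on the same internal GbE fabric, a head node can fan inference work out across the whole array. A minimal round-robin sketch, assuming each module is reachable at an address like 10.0.0.&lt;slot&gt; (an illustrative addressing scheme, not the shipping switch configuration):

```python
from itertools import cycle

# Hypothetical address map: one GbE endpoint per TX2 slot (1-24).
# The 10.0.0.x scheme is an assumption for illustration only.
MODULES = [f"10.0.0.{slot}" for slot in range(1, 25)]

def assign_jobs(jobs):
    """Round-robin a list of inference jobs across the 24-module array."""
    assignments = {}
    for job, module in zip(jobs, cycle(MODULES)):
        assignments.setdefault(module, []).append(job)
    return assignments

# 48 frames spread over 24 modules: two frames per module.
plan = assign_jobs([f"frame-{i}" for i in range(48)])
print(len(plan))  # 24
```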

Internal Embedded Ethernet Switch

• Vitesse/Microsemi VSC7448 Managed Ethernet Switch Engine (XDG201)
• CPU: 500 MHz MIPS 24KEc
• Memory: 4Gb DDR3 SDRAM
• Storage: 128Mb Serial NOR Flash
• Multiple 1G/10G uplinks, with 24x 1G downstream module connections

Misc / Additional IO

1x 1GbE OOB management port via RJ-45; 1x USB UART management port; status LEDs

Input Power

100–240 VAC dual redundant power supplies, 650W output each

Cooling

• Passive heatsinks installed on each TX2 module
• Forced convection via fans on front and rear of system
• Fans: 8x Delta 40 x 40 x 28mm, 29 CFM / 2.4 in H2O static pressure

Operating Temperature

0°C to +60°C (+32°F to +140°F)

Dimensions

• Standard 1U rackmount height (1.75 inch / 44.45mm)
• 25 inch / 635mm depth

Downloads for Jetson™ TX2 Inference Server – UTX2AS

Downloads support: UTX2AS-01, UTX2AS-02, UTX2AS-03, UTX2AS-04

Ordering Information

Main Products
Part Number Description
UTX2AS-01 Jetson™ TX2 Inference Server (North American Version), No storage
UTX2AS-02 Jetson™ TX2 Inference Server (North American Version), 3x 1TB SSDs
UTX2AS-03 Jetson™ TX2 Inference Server (European Version), No storage
UTX2AS-04 Jetson™ TX2 Inference Server (European Version), 3x 1TB SSDs

Custom Design

Looking for a customized NVIDIA® Jetson™ TX2 Product? Click here to tell us what you need.
