Inference Server, Powered by NVIDIA® Jetson Orin™ NX

  • 24x NVIDIA Jetson Orin NX modules, each delivering 100 TOPS from a 1024-core NVIDIA GPU with 32 Tensor Cores
  • 4x 10G SFP+ uplink capability
  • Supports External NVMe Storage
  • 2U ATX style redundant power supply
  • Operating Temperature: 0°C to +50°C (+32°F to +122°F)

More Information

Housed in a 2U chassis, the Orin NX Inference Server can be outfitted with up to 24x NVIDIA Jetson Orin NX modules.

Developed in partnership with USES Integrated Solutions, the Orin NX Inference Server is a low-power, high-performance deep learning inference server powered by the NVIDIA Jetson Orin NX 16GB module. Inside the server, three processor module carriers each house up to eight Jetson Orin NX modules, all connected over a Gigabit Ethernet fabric through a specialized Managed Ethernet Switch (XDG205) developed by Connect Tech, with 10G uplink capability.

Explore how the Inference Server optimizes edge AI performance across multiple industries and applications.

Specifications

Processing Modules

• 24x NVIDIA® Jetson Orin™ NX
• GPU: 1024-core NVIDIA CUDA GPU with 32 Tensor Cores, 3.7 TFLOPS (FP16), 100 TOPS (INT8)
• CPU: 8-core Arm Cortex-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3
• Memory: 16 GB 128-bit LPDDR5, 102.4GB/s
• Storage: Supports External Storage (NVMe) via x4 PCIe
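The headline figures above are easy to sanity-check. A quick calculation, assuming the standard LPDDR5-6400 data rate of 6.4 GT/s (the transfer rate itself is not stated in this datasheet):

```python
# Sanity-check the quoted per-module and array-level figures.

BUS_WIDTH_BITS = 128       # 128-bit LPDDR5 interface (from the spec above)
TRANSFER_RATE_GTPS = 6.4   # assumption: LPDDR5-6400 (6.4 GT/s)

# Bandwidth = bus width in bytes x transfer rate
bandwidth_gbps = BUS_WIDTH_BITS / 8 * TRANSFER_RATE_GTPS
print(bandwidth_gbps)      # 102.4 GB/s, matching the spec

# Aggregate INT8 throughput of a fully populated chassis
MODULES = 24
TOPS_PER_MODULE = 100
print(MODULES * TOPS_PER_MODULE)  # 2400 TOPS across the array
```

The 102.4 GB/s result matches the quoted memory bandwidth, which is consistent with the LPDDR5-6400 assumption.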

Out-of-band Management Module

• Arm-based out-of-band management (OOBM) module
• Provides serial console access, power status monitoring, and power control (on/off) for all 24x Orin NX modules
• Accessible via Ethernet or its own integrated USB-to-Serial console
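Attaching to the OOBM's USB-to-Serial console from a host machine can be sketched as follows. The device node and baud rate here are assumptions (typical USB-serial defaults), not values documented in this datasheet; confirm them in the OOBM manual.

```shell
# Hedged sketch: device node and baud rate are assumed defaults, not
# documented values -- check the OOBM documentation for the real settings.
OOBM_DEV=/dev/ttyUSB0   # where the integrated USB-to-Serial console enumerates
OOBM_BAUD=115200        # common console baud rate; confirm before use

# Attach an interactive console session (requires the 'screen' utility):
# screen "$OOBM_DEV" "$OOBM_BAUD"
echo "console: $OOBM_DEV @ $OOBM_BAUD"
```

From the same console the OOBM exposes per-module power status and on/off control, so a single host connection can manage all 24 modules.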

Processor Module Carriers

• Each module carrier holds up to 8x Orin NX modules
• Up to 3x module carriers can be installed, for a system total of 24 modules

Internal Embedded Ethernet Switch

• Vitesse/Microsemi SparX-5i VSC7558TSN-V/5CC Managed Ethernet Switch Engine (XDG205)
• CPU: 1 GHz VCore
• Memory: 8Gb DDR4 SDRAM
• Storage: 1Gbit Serial NOR Flash
• Multiple 10G uplinks, with 12x 1G downstream ports
• Complete Time-Sensitive Networking (TSN) feature set

Internal Array Communication

• 24x Gigabit Ethernet / 1000BASE-T / IEEE 802.3ab channels
• All Orin NX modules can communicate with all other Orin NX modules
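Because every module is reachable from every other module over the internal fabric, a host can spread inference requests across the array. A minimal round-robin sketch, assuming a hypothetical 10.0.0.x addressing scheme (actual module addressing depends on your switch and DHCP configuration):

```python
# Minimal sketch: round-robin dispatch of inference requests across the
# 24-module array over the internal GbE fabric. The 10.0.0.x addresses are
# assumptions for illustration only.
from itertools import cycle

# One entry per Orin NX module in a fully populated chassis.
MODULE_ADDRS = [f"10.0.0.{10 + i}" for i in range(24)]

_ring = cycle(MODULE_ADDRS)

def next_module() -> str:
    """Return the address of the next module to receive a request."""
    return next(_ring)

# The first three dispatches walk the ring in order.
print([next_module() for _ in range(3)])  # ['10.0.0.10', '10.0.0.11', '10.0.0.12']
```

Round-robin is the simplest policy; a real deployment would likely weight dispatch by per-module queue depth or use the 10G uplinks to feed modules directly from an external load balancer.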

Misc / Additional IO

• 1x 1GbE OOB management port via RJ-45
• 1x USB UART management port
• Status LEDs

Input Power

100~240 VAC (dual redundant) with 1000W output each

Internal Storage

Each Orin NX module has its own M.2 NVMe interface (x4 PCIe)

Operating Temperature

0°C to +50°C (+32°F to +122°F)

Dimensions

Standard 2U rackmount height (3.5 inch / 88.9mm), 25 inch / 635mm depth

Ordering Information

Main Products
Part Number Description
UNGX2U-07 Orin NX Inference Server – 2U Array, with 24x NVIDIA Jetson Orin NX 16GB, 24x 1TB NVMe SSD
UNGX2U-08 Orin NX Inference Server – 2U Array, with 16x NVIDIA Jetson Orin NX 16GB, 16x 1TB NVMe SSD
UNGX2U-09 Orin NX Inference Server – 2U Array, with 8x NVIDIA Jetson Orin NX 16GB, 8x 1TB NVMe SSD
UNGX2U-10 Orin NX Inference Server – 2U Array, with 24x NVIDIA Jetson Orin NX 16GB, 24x 2TB NVMe SSD
UNGX2U-11 Orin NX Inference Server – 2U Array, with 16x NVIDIA Jetson Orin NX 16GB, 16x 2TB NVMe SSD
UNGX2U-12 Orin NX Inference Server – 2U Array, with 8x NVIDIA Jetson Orin NX 16GB, 8x 2TB NVMe SSD

Custom Design

Looking for a customized NVIDIA® Jetson Orin™ NX product? Click here to tell us what you need.
