GPU Systems: Featured Server Models
Competitive Advantages
  • Unmatched density, flexibility, and customizability, optimized for specific compute, GPU acceleration, networking, and storage requirements.
  • A+ Multi-GPU systems with advanced thermal designs support the highest-TDP (280W) processors and the latest GPUs in dense 2U and 4U form factors.
  • The unique AIOM provides fast and flexible networking for large-scale and disaggregated infrastructure that requires massive dataflow between systems.

Supermicro offers the broadest portfolio of GPU Systems, powered by dual 3rd Gen AMD EPYC™ processors. Leveraging our advanced thermal designs, including liquid cooling and custom heatsinks, Supermicro 2U and 4U GPU systems feature NVIDIA's latest A100 or AMD Instinct™ MI200 series GPUs in hyper-dense multi-GPU, multi-node configurations. Supermicro's flexible Advanced I/O Module (AIOM) form factor further enhances multi-GPU communication for data-hungry AI applications.

A+ GPU Systems
Maximum Acceleration for AI, Deep Learning, and HPC, Powered by AMD and NVIDIA
GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs

Input/Output: 8 PCI-E 4.0 x16 via PCI-E switch, supporting the HGX A100 8-GPU's 1:1 connection to 8 NICs

Drives: Up to 24 hot-swap 2.5" SATA/SAS; up to 6x NVMe

Processors: Dual AMD EPYC™ 7003/7002 Series Processors

Management: IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watchdog

Power: 2200W Redundant Platinum Level Power Supplies

Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem

Form Factor: 4U Rackmountable

Product Features
  • Supports AIOM
  • 32 DIMM slots supporting up to 8TB of DDR4-3200; up to 10 U.2 NVMe drives (8 via PCI-E switch, 2 via CPU) plus 2 M.2 drives
  • 2200W cost-effective Platinum-level redundant power supplies or 3000W true redundant Titanium-level power supplies
  • High GPU peer-to-peer communication with 3rd Gen NVLink and NVSwitch
  • Dual 3rd or 2nd Gen AMD EPYC™ top-of-the-line processors (max. 280W TDP)
  • 1:1 GPUDirect RDMA for large-scale deep learning model training and inference
Key Applications:
  • AI / deep learning
  • High performance computing

4U GPU with NVLink Servers
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications

A+ Server 4124GO-NART

Features the latest Ampere-generation NVIDIA HGX A100 8-GPU with Supermicro's unique AIOM support, enhancing 8-GPU communication and data flow between systems with NVIDIA NVLink and NVSwitch, 1:1 GPUDirect RDMA, GPUDirect Storage, and NVMe-oF over InfiniBand.
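To illustrate how an 8-GPU NVLink/NVSwitch system of this kind is typically driven, here is a minimal, hypothetical data-parallel training sketch, assuming a PyTorch installation with NCCL support; the model, dataset, and hyperparameters are placeholders and not part of the product specification.

```python
# Minimal multi-GPU data-parallel sketch (assumes PyTorch built with NCCL).
# On an HGX A100 8-GPU board, NCCL carries the gradient all-reduce traffic
# over NVLink/NVSwitch; the linear model and random data are placeholders.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int):
    # One process per GPU; NCCL is the backend that uses NVLink/NVSwitch.
    dist.init_process_group("nccl", init_method="tcp://127.0.0.1:29500",
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                       # placeholder training loop
        x = torch.randn(64, 1024, device=rank)
        loss = model(x).sum()
        opt.zero_grad()
        loss.backward()                       # gradients all-reduced across GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    n = torch.cuda.device_count()             # 8 on an HGX A100 8-GPU board
    mp.spawn(worker, args=(n,), nprocs=n)
```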
GPU: NVIDIA HGX A100 4-GPU with NVLink

Input/Output: 4 PCI-E Gen4 x16 (LP) slots supporting the HGX A100 4-GPU's 1:1 connection to 4 NICs; up to 10 PCI-E Gen4 x16 slots

Drives: 4 hot-swap 2.5" drive bays (SATA/NVMe hybrid, or SAS with optional HBA)

Processors: Dual AMD EPYC™ 7003/7002 Series Processors

Management: IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watchdog

Power: 2200W Redundant Power Supplies with PMBus

Memory: 32 DIMM slots; up to 8TB 3DS ECC DDR4-3200MHz SDRAM

Form Factor: 2U Rackmountable

Product Features
  • Dual AMD EPYC™ 7003/7002 Series Processors (7003 Series processor drop-in support requires BIOS version 2.0 or newer)
  • Up to 8TB Registered ECC DDR4-3200MHz SDRAM in 32 DIMMs
  • 4 PCI-E Gen 4 x16 (LP) slots, 1 PCI-E Gen 4 x8 (LP) slot
  • 4 hot-swap 2.5" drive bays (SAS/SATA/NVMe hybrid)
  • 2x 2200W Platinum-level power supplies with Smart Power Redundancy
  • High-density 2U system with NVIDIA® HGX™ A100 4-GPU; high-bandwidth GPU-to-GPU communication using NVIDIA® NVLink™; 4 NICs for GPUDirect RDMA (1:1 GPU ratio)
  • Supports HGX A100 4-GPU 40GB (HBM2) or 80GB (HBM2e)
  • Direct-connect PCI-E Gen4 platform with NVIDIA® NVLink™ 3.0 up to 600GB/s interconnect
  • Onboard BMC supports integrated IPMI 2.0 + KVM with dedicated 10G LAN
Key Applications:
  • Autonomous vehicle technologies
  • Research laboratory / national laboratory
  • Cloud computing
  • High-performance computing (HPC)
  • AI / ML, deep learning training and inference

2U GPU with NVLink Servers
Next Generation of Accelerated Computing with NVIDIA HGX A100 4-GPU

A+ Server 2124GQ-NART

Features NVIDIA HGX A100 4-GPU with Supermicro's advanced thermal heatsink design for the best performance and flexibility in a compact 2U form factor, enabling high GPU peer-to-peer communication without I/O bottlenecks for data-hungry workloads.
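As a quick way to confirm that all GPUs on an HGX A100 4-GPU board can reach one another over NVLink, the following sketch (assuming PyTorch with CUDA is installed) checks peer-to-peer access between every GPU pair; the same topology can also be inspected from the command line with `nvidia-smi topo -m`.

```python
# Peer-to-peer access check (assumes PyTorch with CUDA on the target system).
# On a fully NVLink-connected HGX A100 4-GPU board, every pair should report
# peer access.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU{i} -> GPU{j}: peer access {'yes' if ok else 'no'}")
```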
GPU: 8 double-width/single-width PCI-E 3.0/4.0 x16 cards (full height, full length); NVIDIA A100, V100, A40, T4, Quadro RTX; AMD Instinct MI100 and MI210

Input/Output: Up to 9 PCI-E 4.0 x16 slots, or 10 PCI-E 4.0 x16 slots without NVMe devices

Drives: Up to 24x 2.5" SAS/SATA drive bays; 2x 2.5" SATA supported natively*; 4x 2.5" NVMe supported natively

Processors: Dual AMD EPYC™ 7003/7002 Series Processors

Management: IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watchdog

Power: 2200W Redundant Power Supplies with PMBus

Memory: 32 DIMM slots; up to 8TB 3DS ECC DDR4-3200MHz RDIMM/LRDIMM

Form Factor: 4U Rackmountable

Product Features
  • Up to 24 hot-swap 2.5" drive bays
  • 2 GbE LAN ports
  • 8 hot-swap 11.5K RPM cooling fans
  • 2000W (2+2) redundant power supplies, Titanium level (96%+)
  • Supports up to 8 double-width GPUs, direct-connect for maximum performance
  • Dual AMD EPYC™ 7003/7002 Series Processors
  • 8TB Registered ECC DDR4-3200MHz SDRAM in 32 DIMMs
  • 9 PCI-E 4.0 x16 slots (option: 10 PCI-E 4.0 x16 slots without NVMe devices)
Key Applications:
  • Molecular dynamics simulation
  • Cloud gaming
  • High performance computing (HPC)
  • AI / deep learning

A+ 4U 8 PCI-E 4.0 GPU System
Flexible Platform with 8 Directly Attached GPUs in a Dual-Root Balanced Architecture

A+ Server AS-4124GS-TNR+

Featuring a dual-root topology optimized for HPC applications, the 160 PCI-E 4.0 lanes provided by the dual AMD EPYC™ CPU sockets drive 8 directly attached PCI-E 4.0 GPUs, 200G networking, and U.2 NVMe storage at full speed.
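A rough, illustrative lane budget shows how 160 usable PCI-E 4.0 lanes can be split across GPUs, networking, and storage. The exact allocation is board-specific; the NIC and NVMe counts below are assumptions made only for the arithmetic.

```python
# Back-of-the-envelope PCIe 4.0 lane budget for a dual-root, dual-socket
# EPYC layout (illustrative allocation only; real slot mapping is
# defined by the motherboard design).
LANES_AVAILABLE = 160          # usable PCIe 4.0 lanes from the two sockets

gpu_lanes     = 8 * 16         # 8 GPUs, each on a x16 link
nic_lanes     = 1 * 16         # one 200G NIC on a x16 link (assumed)
nvme_lanes    = 4 * 4          # four U.2 NVMe drives, x4 each (assumed)

used = gpu_lanes + nic_lanes + nvme_lanes
print(f"{used} of {LANES_AVAILABLE} lanes allocated")   # 160 of 160
```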
GPU: 3 double-width or 6 single-width GPUs; NVIDIA A100, A40; AMD Instinct MI210

Input/Output: 6 PCI-E Gen 4 x16 slots (4 internal, 2 external); 1 PCI-E Gen 4 x8 AIOM networking slot

Drives: 2 front hot-swap 2.5" U.2 NVMe Gen4 drive bays

Processors: Single 2nd or 3rd Gen AMD EPYC™ Processor per node

Management: IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watchdog

Power: 2600W Redundant Power Supplies with PMBus

Memory: 8 DIMM slots; up to 2TB 3DS ECC DDR4-3200MHz SDRAM

Form Factor: 2U Rackmountable

Product Features
  • Integrated IPMI 2.0 + KVM with dedicated LAN
  • 2 front hot-swap 2.5" U.2 NVMe Gen4 drive bays
  • AST2600 BMC
  • 2600W redundant (1+1) power supplies, Titanium level (96%)
  • Supports up to 6 single-width GPUs, direct-connect for maximum performance
  • Single AMD EPYC™ 7003/7002 Series Processor (7003 Series processor drop-in support requires BIOS version 2.0 or newer)
  • 2TB Registered ECC DDR4-3200MHz SDRAM in 8 DIMMs
  • Up to 6 PCI-E Gen 4 x16 slots (4 internal, 2 external), 1 PCI-E 4.0 x8 AIOM slot, 2x M.2 NVMe (form factors 2280/22110, M-key)
Key Applications:
  • Industrial automation, retail, smart medical expert systems
  • Cloud gaming
  • AI inference and machine learning
  • Media / video streaming

A+ 2U 2-Node Multi-GPU
High-Performance, Resource-Saving, Large-Scale Data Center GPU Solution

A+ Server AS-2114GT-DNR

Setting a new standard for dense, energy-efficient, and resource-saving multi-node, multi-GPU systems, the AS-2114GT-DNR supports up to 3 double-width or 6 single-width PCI-E 4.0 GPUs per node, making it ideal for multi-instance high-end cloud gaming and many other compute-intensive data center applications. Available with AMD EPYC™ 7003/7002 Series or AMD Ryzen™ Threadripper™ PRO processors.
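Before scheduling cloud-gaming or inference instances, an orchestration layer would typically inventory the GPUs visible in each node. A minimal sketch, assuming the NVML Python bindings (pynvml) are installed on the node:

```python
# Per-node GPU inventory sketch (assumes pynvml, e.g. `pip install nvidia-ml-py`).
import pynvml

pynvml.nvmlInit()
count = pynvml.nvmlDeviceGetCount()          # up to 6 single-width GPUs per node
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):              # older pynvml returns bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total // 2**30} GiB")
pynvml.nvmlShutdown()
```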
Key Features:
  • Dual AMD EPYC™ 7003 Series processors
  • Supports the new AMD Instinct™ MI250 OAM accelerator
  • 4U or 5U configuration
  • 32 DIMM slots per node supporting DDR4-3200MHz
  • Flexible storage configuration with 10 hot-swap 2.5" U.2 NVMe drives
  • 4U chassis with an optional 1U extension for a 5U system, providing PCIe slot expansion with Supermicro AIOM support
  • Supports next-generation GPUs in a variety of form factors
  • Universal GPU server with an OCP standards-based design
  • Modular design for flexibility and future-proofing
  • Optimized thermal capability for 500W/700W GPUs
A+ Universal GPU System
Modular Platform for HPC Applications and Advanced Data Center AI Infrastructure

4U and 5U Universal GPU Systems

Supermicro A+ Universal GPU systems are open, modular, standards-based servers that provide superior performance and serviceability with dual AMD EPYC™ 7003 Series processors, support for the AMD Instinct™ MI250 OAM accelerator and a variety of other GPU and accelerator form factors, and a hot-swappable, tool-less design. The future-proofed design allows data centers to standardize on one GPU platform, with multiple configurations and optimized thermal management to cover all deployment needs.