Modular, standards-based platforms designed for large-scale AI, deep learning, and HPC workloads
Leading 2U TwinPro Architecture with 2 Nodes
Flexible Platform with 8 Directly Attached GPUs in a Dual-Root Balanced Architecture
Next Generation of Accelerated Computing with NVIDIA HGX A100 4-GPU
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications
- Unmatched density, flexibility, and customizability that can be optimized for specific compute, GPU acceleration, networking, and storage requirements.
- A+ Multi-GPU systems with advanced thermal designs support the highest TDP (280W) processors and latest GPUs in a dense form factor in 2U and 4U.
- The unique AIOM enhances fast and flexible networking capabilities for large-scale/disaggregated infrastructure that requires massive dataflow between systems.
Supermicro offers the broadest portfolio of GPU Systems, powered by dual 3rd Gen AMD EPYC™ Processors. Leveraging our advanced thermal design, including liquid cooling and custom heatsinks, Supermicro 2U and 4U GPU systems feature NVIDIA’s latest A100 or AMD Instinct™ MI200 GPUs in hyper-dense multi-GPU, multi-node systems. Supermicro’s flexible Advanced I/O Module (AIOM) form factor further enhances multi-GPU communication for data-hungry AI applications.
NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
8 PCI-E 4.0 x16 via PCI-E switch - supporting HGX A100 8-GPU's 1:1 connection to 8 NICs
Up to 24 Hot-swap 2.5" SATA/SAS drive bays
Up to 6x NVMe
Dual AMD EPYC™ 7003/7002 Series Processors
IPMI 2.0
KVM with dedicated LAN
SSM, SPM, SUM
SuperDoctor® 5
Watchdog
2200W Redundant Platinum Level Power Supplies
Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
4U Rackmountable
- Supports AIOM
- 32 DIMM slots supporting DDR4-3200 up to 8TB, up to 10 (8 - PCI-E switch and 2 - CPU) U.2 NVMe drives + 2 M.2 drives
- 2200W cost-effective Platinum level redundant power supplies or 3000W true redundant Titanium level power supplies
- High GPU peer-to-peer communication with 3rd Gen NVLink and NVSwitch
- Dual 3rd or 2nd Gen AMD EPYC top-of-the-line processors (Max. 280W)
- 1:1 GPUDirect RDMA for large-scale deep learning model training and inference
AI / deep learning
High performance computing
The latest Ampere-generation NVIDIA HGX A100 8-GPU with Supermicro's unique AIOM support, enhancing 8-GPU communication and data flow between systems with NVIDIA NVLink and NVSwitch, 1:1 GPUDirect RDMA, GPUDirect Storage, and NVMe-oF over InfiniBand.
NVIDIA HGX A100 4-GPU with NVLink
4 PCI-E Gen4 x16 (LP) slots - supporting HGX A100 4-GPU's 1:1 connection to 4 NICs
Up to 10 PCI-E Gen4 x16 slots
4 Hot-swap 2.5" drive bays (SATA/NVMe Hybrid or SAS with optional HBA)
Dual AMD EPYC™ 7003/7002 Series Processors
IPMI 2.0
KVM with dedicated LAN
SSM, SPM, SUM
SuperDoctor® 5
Watchdog
2200W Redundant Power Supplies with PMBus
32 DIMM slots
Up to 8TB 3DS ECC DDR4 3200 MHz SDRAM
2U Rackmountable
- Dual AMD EPYC™ 7003/7002 Series Processors (7003 Series Processor drop-in support requires BIOS version 2.0 or newer)
- 8TB Registered ECC DDR4 3200MHz SDRAM in 32 DIMMs
- 4 PCI-E Gen 4 x16 (LP), 1 PCI-E Gen 4 x8 (LP)
- 4 Hot-swap 2.5" drive bays (SAS/SATA/NVMe Hybrid)
- 2x 2200W Platinum Level power supplies with Smart Power Redundancy
- High Density 2U System with NVIDIA® HGX™ A100 4-GPU; Highest GPU communication using NVIDIA® NVLINK™, 4 NICs for GPUDirect RDMA (1:1 GPU Ratio)
- Supports HGX A100 4-GPU 40GB (HBM2) or 80GB (HBM2e)
- Direct connect PCI-E Gen4 Platform with NVIDIA® NVLink™ v3.0 up to 600GB/s interconnect
- On board BMC supports integrated IPMI 2.0 + KVM with dedicated 10G LAN
Autonomous vehicle technologies
Research laboratory / national laboratory
Cloud computing
High-performance computing (HPC)
AI / ML, deep learning training and inference
Features NVIDIA HGX A100 4-GPU with Supermicro's advanced thermal heatsink design for best performance and flexibility in a compact 2U form factor, enabling high GPU peer-to-peer communication without I/O bottlenecks for data-hungry workloads.
8 Double-Width/Single-Width PCI-E 3.0/4.0 x16 Cards (Full Height Full Length)
NVIDIA A100, V100, A40, T4, Quadro RTX
AMD Instinct MI100 and MI210
Up to 9 PCI-E 4.0 x16 slots or 10 PCI-E 4.0 x16 slots without NVMe devices
Up to 24x 2.5" SAS/SATA drive bays
2x 2.5" SATA supported natively*
4x 2.5" NVMe supported natively
Dual AMD EPYC™ 7003/7002 Series Processors
IPMI 2.0
KVM with dedicated LAN
SSM, SPM, SUM
SuperDoctor® 5
Watchdog
2200W Redundant Power Supplies with PMBus
32 DIMM slots
Up to 8TB 3DS ECC DDR4 3200MHz RDIMM/LRDIMM
4U Rackmountable
- Up to 24 Hot-swap 2.5" drive bays
- 2 GbE LAN ports
- 8 Hot-swap 11.5K RPM cooling fans
- 2000W (2+2) Redundant Titanium Level (96%+) Power Supplies
- Supports up to 8 double-width GPUs Direct connect for maximum performance
- Dual AMD EPYC™ 7003/7002 Series Processors
- 8TB Registered ECC DDR4 3200MHz SDRAM in 32 DIMMs
- 9 PCI-E 4.0 x16 (Option: 10 PCI-E 4.0 x16 slots without NVMe devices)
Molecular dynamics simulation
Cloud gaming
High performance computing (HPC)
AI / deep learning
Featuring a dual-root topology optimized for HPC applications, the maximum of 160 PCI-E 4.0 lanes provided by the dual AMD EPYC™ CPU socket configuration drives 8 directly attached PCI-E 4.0 GPUs, 200G networking, and U.2 NVMe storage at full speed.
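The lane arithmetic behind that claim can be sketched as a quick sanity check. The 160-lane total and the 8 x16 GPUs come from the description above; the split of the remaining lanes between the NIC and NVMe drives is an illustrative assumption, not a statement of the actual board layout:

```python
# Hypothetical PCI-E Gen4 lane-budget sketch for a dual-root, dual-socket system.
PCIE_GEN4_LANES_TOTAL = 160   # usable lanes in the dual AMD EPYC configuration
GPU_LANES = 8 * 16            # 8 directly attached PCI-E 4.0 x16 GPUs

remaining = PCIE_GEN4_LANES_TOTAL - GPU_LANES
# The remaining lanes serve 200G networking and U.2 NVMe storage,
# e.g. one x16 NIC plus four x4 NVMe drives (illustrative split only).
print(remaining)  # 32
```

Because every GPU keeps its full x16 link with lanes to spare for I/O, no PCI-E switch oversubscription is needed in this topology.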
3 double-width or 6 single-width GPUs
NVIDIA A100 and A40; AMD Instinct MI210
6 PCI-E Gen 4 x16 (4 internal, 2 external) slots
1 PCI-E Gen 4 x8 AIOM networking slot
2 front Hot-swap 2.5" U.2 NVMe Gen4 drive bays
Single 2nd or 3rd Gen AMD EPYC™ Processor per node
IPMI 2.0
KVM with dedicated LAN
SSM, SPM, SUM
SuperDoctor® 5
Watchdog
2600W Redundant Power Supplies with PMBus
8 DIMM slots
Up to 2TB 3DS ECC DDR4-3200MHz SDRAM
2U Rackmountable
- Integrated IPMI 2.0 + KVM with dedicated LAN
- 2 front Hot-swap 2.5" U.2 NVMe Gen4 drive bays
- AST2600 BMC
- 2600W Redundant (1+1) Titanium Level (96%) Power Supplies
- Supports up to 6 single-width GPUs Direct connect for maximum performance
- Single AMD EPYC™ 7003/7002 Series Processor (7003 Series Processor drop-in support requires BIOS version 2.0 or newer)
- 2TB Registered ECC DDR4 3200MHz SDRAM in 8 DIMMs
- Up to 6 PCI-E Gen 4 x16 (4 internal, 2 external) slots, 1 PCI-E 4.0 x8 AIOM slot, 2x NVMe M.2 (Form Factors: 2280, 22110; M.2 Key: M-key)
Industrial automation, retail, smart medical expert systems
Cloud gaming
AI inference and machine learning
Media / video streaming
Setting a new standard for dense, energy-efficient, and resource-saving multi-node, multi-GPU systems, supporting up to 3 double-width or 6 single-width PCI-E 4.0 GPUs per node, ideal for multi-instance high-end cloud gaming and many other compute-intensive data center applications. Available with AMD EPYC™ 7003/7002 Series or AMD Ryzen™ Threadripper™ PRO processors.
- Dual AMD EPYC™ 7003 series processors
- Supports the new AMD Instinct™ MI250 OAM Accelerator
- 4U or 5U configuration
- 32 DIMM slots per node supporting DDR4-3200MHz
- Flexible storage configuration with 10 hot-swap 2.5" U.2 NVMe drives
- 4U with optional 1U extension for a 5U system providing PCIe slots expansion with Supermicro AIOM support
- Supports next-generation GPUs in a variety of form factors
- OCP standards-based Universal GPU server design
- Modular design for flexibility/future-proofing
- Optimized thermal capability for 500W/700W GPUs
Supermicro A+ Universal GPU systems are open, modular, standards-based servers that provide superior performance and serviceability with dual AMD EPYC™ 7003 Series processors, supporting the AMD Instinct™ MI250 OAM accelerator and various GPU and accelerator form factors, and featuring a hot-swappable, tool-less design. The system's future-proofed design allows data centers to standardize on one GPU platform, with multiple configurations and optimized thermal management covering all their needs.