
AIME A8000 - Multi GPU HPC Rack Server

The AIME A8000 is the enterprise Deep Learning server based on the ASUS ESC8000A-E11 barebone, configurable with up to 8 of the most advanced deep learning accelerators and GPUs.

Enter the petaFLOPS HPC era with more than 8 peta TensorOps of deep learning performance. The A8000 is the ultimate multi-GPU server: dual EPYC CPUs with up to 2 TB of main memory, the fastest PCIe 4.0 bus speeds and up to 100 GbE network connectivity.

Built to perform 24/7 for highly reliable high-performance computing, whether in your in-house data center, at a co-location facility or as a hosted solution.


AIME A8000 - Deep Learning Server

If you are looking for a server specialized in maximum deep learning training and inference performance, built for the highest demands in HPC computing, the AIME A8000 multi-GPU 4U rack server delivers.

The AIME A8000 is based on the new ASUS ESC8000A-E11 barebone, which is powered by two AMD EPYC™ Milan processors with up to 64 cores each, totaling up to 256 parallel CPU threads.

Its GPU-optimized design with high-airflow cooling allows the use of eight high-end double-slot GPUs such as the NVIDIA A100, RTX 3090 Turbo, RTX A6000, Tesla or Quadro models.

Definable GPU Configuration

Choose the desired configuration among the most powerful NVIDIA GPUs for deep learning:

Up to 8x NVIDIA A100

The NVIDIA A100 is the flagship of the NVIDIA Ampere processor generation and the current successor to the legendary NVIDIA Tesla accelerator cards. It is based on the GA100 processor, manufactured in 7 nm, with 6912 CUDA cores, 432 third-generation Tensor cores and 40 or 80 GB of HBM2 memory with the highest data transfer rates. A single NVIDIA A100 already breaks the peta-TOPS performance barrier, and four such accelerators add up to more than 1000 teraFLOPS of combined fp16 tensor performance. The NVIDIA A100 is currently the most efficient and fastest deep learning accelerator card available.
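The headline figures can be sanity-checked with simple arithmetic. The sketch below uses per-GPU peak numbers as published in NVIDIA's A100 datasheet (theoretical maximums, not sustained application performance) and scales them to a multi-GPU system:

```python
# Back-of-the-envelope aggregate throughput for a multi-A100 configuration.
# Per-GPU peak figures are taken from NVIDIA's public A100 datasheet.
A100_PEAK = {
    "fp32 (TFLOPS)": 19.5,
    "tf32 tensor (TFLOPS)": 156.0,  # 312 with structured sparsity
    "fp16 tensor (TFLOPS)": 312.0,  # 624 with structured sparsity
    "int8 tensor (TOPS)": 624.0,    # 1248 with structured sparsity
}

def aggregate(num_gpus: int, sparsity: bool = False) -> dict:
    """Scale per-GPU peak figures to a multi-GPU system."""
    factor = num_gpus * (2 if sparsity else 1)
    return {metric: value * factor for metric, value in A100_PEAK.items()}

for metric, value in aggregate(8, sparsity=True).items():
    print(f"8x A100, {metric}: {value:,.0f}")
```

With structured sparsity, eight A100s reach roughly 10,000 int8 TOPS, which is consistent with the "more than 8 peta TensorOps" figure quoted above.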

Up to 8x NVIDIA RTX 3090 Turbo

Built on the 2nd-generation NVIDIA Ampere RTX architecture, the GeForce RTX™ 3090 doubles AI performance with 10496 CUDA cores and 328 Tensor cores. It offers performance previously only available from NVIDIA Titan-class GPUs. The RTX 3090 features 24 GB of GDDR6X memory. The AIME A8000 comes with server-capable Turbo versions of the RTX 3090.

Up to 8x NVIDIA RTX 3080 Ti Turbo

The GeForce RTX™ 3080 Ti is the slightly smaller sibling of the RTX 3090, with 10240 CUDA cores, 320 Tensor cores and 12 GB of GDDR6X memory. It is the direct successor to the still widespread RTX 2080 Ti. For tasks focused on high compute density rather than GPU memory, such as inference, it is a less expensive alternative.

Up to 8x NVIDIA RTX A6000

The NVIDIA RTX A6000 is the Ampere-based successor to the NVIDIA Quadro series. It features the same GPU processor (GA102) as the RTX 3090, but with all cores of the GA102 enabled. With 10752 CUDA cores and 336 third-generation Tensor cores, it outperforms the RTX 3090. Equipped with 48 GB of GDDR6 ECC memory, it has twice the GPU memory of its predecessor, the Quadro RTX 6000, and of the RTX 3090. The NVIDIA RTX A6000 is currently the second-fastest NVIDIA GPU available, beaten only by the NVIDIA A100, and is best suited for memory-demanding tasks.

Up to 8x NVIDIA RTX A5000

With its 8192 CUDA cores and 256 third-generation Tensor cores, the NVIDIA RTX A5000 is less powerful than an RTX 3090. However, with its 230 W power consumption and 24 GB of memory, it is a very efficient accelerator card and an interesting alternative, especially for inference tasks.

All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.
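A quick way to confirm that a framework actually sees the GPUs and the cuDNN stack is a short check like the following. This is a sketch assuming PyTorch as the framework; on a machine without it, the check simply reports that it is missing:

```python
def cuda_report() -> str:
    """Report which CUDA devices (and cuDNN version) PyTorch can see."""
    try:
        import torch  # assumption: PyTorch is the installed framework
    except ImportError:
        return "PyTorch not installed"
    if not torch.cuda.is_available():
        return "PyTorch installed, no CUDA device visible"
    n = torch.cuda.device_count()
    names = ", ".join(torch.cuda.get_device_name(i) for i in range(n))
    return f"{n} CUDA device(s): {names} (cuDNN {torch.backends.cudnn.version()})"

print(cuda_report())
```

On a fully configured A8000 this would list all installed accelerators; the same pattern works for other frameworks via their own device-query APIs.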

Dual EPYC CPU Performance

The high-end AMD EPYC CPU, designed for servers, delivers up to 64 cores and 128 threads per CPU at an unbeaten price-performance ratio.

The available 2x 128 PCIe 4.0 lanes of the AMD EPYC CPUs allow the highest interconnect and data transfer rates between the CPUs and the GPUs and ensure that all GPUs are connected with full x16 PCIe 4.0 bandwidth.

A large number of available CPU cores can improve performance dramatically when the CPU is used to preprocess and deliver data, keeping the GPUs optimally fed with workloads.
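The idea of fanning preprocessing out over many cores can be sketched as follows. `decode_and_augment` is a hypothetical stand-in for real CPU-heavy work (JPEG decoding, augmentation, tokenization), which typically runs in native code that releases the GIL, so a thread pool scales across cores:

```python
from concurrent.futures import ThreadPoolExecutor

def decode_and_augment(sample: int) -> int:
    """Placeholder for CPU-heavy preprocessing of one sample."""
    return sample * 2

def preprocess_batch(samples, workers: int = 16):
    """Preprocess a batch in parallel so the GPUs never wait for input."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_and_augment, samples))

print(preprocess_batch(range(8)))  # -> [0, 2, 4, 6, 8, 10, 12, 14]
```

In practice, deep learning frameworks provide this pattern out of the box (e.g. worker processes in a data loader), sized to the number of available CPU cores.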

Up to 16 TB High-Speed SSD Storage

Deep learning usually involves large amounts of data to be processed and stored. High throughput and fast access times to that data are essential for short turnaround times.

The AIME A8000 can be configured with up to two exchangeable U.2 NVMe triple-level cell (TLC) SSDs with a capacity of up to 8 TB each, for a total of 16 TB of fast SSD storage.

Since each SSD is connected directly to the CPU and main memory via PCIe 4.0 lanes, they achieve consistently high read and write rates of 3000 MB/s.
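A rough sequential-throughput check can be done with nothing but the standard library, as in the sketch below. Note this is only indicative: the OS page cache can inflate results, and serious benchmarks use direct I/O and much larger files:

```python
import os
import tempfile
import time

def measure_write_read(size_mb: int = 64, block_kb: int = 1024):
    """Write then read a temp file sequentially; return (write, read) MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
        write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_s = time.perf_counter() - t0
    os.remove(path)
    return size_mb / write_s, size_mb / read_s

w, r = measure_write_read()
print(f"write ~{w:.0f} MB/s, read ~{r:.0f} MB/s")
```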

As usual in the server sector, the SSDs have an MTBF of 2,000,000 hours and a 5-year manufacturer's warranty.

High Connectivity and Management Interface

In addition to the standard 2x 1 Gbit/s and 1x 10 Gbit/s SFP+ LAN ports, the A8000 can be fitted with up to 2x 100 Gbit/s (GbE) network adapters for the fastest interconnect to NAS resources and large data collections. For data exchange in a distributed computing setup, the highest available LAN connectivity is also a must-have.

The AIME A8000 is fully remotely manageable via IPMI/BMC (AST2600), which makes integration of the AIME A8000 into larger server clusters straightforward.

Optimized for Multi GPU Server Applications

The AIME A8000 offers energy efficiency with redundant Titanium-grade power supplies, which enable long-term fail-safe operation.

Its thermal control technology provides more efficient power consumption for large-scale environments.

Everything is set up, configured and tuned by AIME for optimal multi-GPU performance.

The A8000 comes with a preinstalled Linux OS configured with the latest drivers and frameworks such as TensorFlow, Keras, PyTorch and MXNet. It is ready after boot-up to start accelerating your deep learning applications right away.
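Which of the preinstalled frameworks are importable in a given Python environment can be checked with a few lines of standard-library code:

```python
import importlib.util

# Framework package names as imported in Python (not the product names).
FRAMEWORKS = ["tensorflow", "keras", "torch", "mxnet"]

def installed_frameworks(names=FRAMEWORKS) -> dict:
    """Map each framework name to whether its package can be found."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

for name, present in installed_frameworks().items():
    print(f"{name:12s} {'installed' if present else 'missing'}")
```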

Technical Details

Type Rack Server 4U, 80cm depth
CPU (configurable) EPYC Milan
2x EPYC 7313 (16 cores, 3.0 / 3.7 GHz)
2x EPYC 7413 (24 cores, 2.85 / 4.0 GHz)
2x EPYC 7543 (32 cores, 2.8 / 3.7 GHz)
2x EPYC 7713 (64 cores, 2.0 / 3.6 GHz)
RAM 128 / 256 / 512 / 1024 / 2048 GB ECC memory
GPU Options 2 to 8x NVIDIA A100 80GB or
2 to 8x NVIDIA RTX 3080 Ti 12GB or
2 to 8x NVIDIA RTX 3090 24GB or
2 to 8x NVIDIA RTX A5000 24GB or
2 to 8x NVIDIA RTX A6000 48GB or
2 to 8x Tesla V100S 32GB
Cooling Dedicated CPU fans. GPUs are cooled by an air stream from 10 high-performance fans (>100,000 h MTBF)
Storage M.2 NVMe PCIe 4.0 boot SSD
Up to 2 x 8TB U.2 NVMe SSDs
Triple Level Cell (TLC) quality, 3000 MB/s read / 3000 MB/s write
MTBF of 2,000,000 hours and 5 years manufacturer's warranty

8 x 3.5" HDD Storage Bays SATA (SAS RAID optional)
Network 1 x IPMI LAN
2 x 10 GBit LAN RJ45 or
2 x 10 GBit LAN SFP+ or
2 x 100 GbE QSFP28
USB 2 x USB 3.2 ports (front)
PSU 2+2x 2200W redundant power
80 PLUS Titanium certified (96% efficiency)
Noise Level 88 dBA
Dimensions (WxHxD) 440mm x 176mm x 800mm (4U)
17.6" x 6.92" x 31.5"
Operating Environment Operating temperature: 10℃ ~ 35℃
Non-operating temperature: -40℃ ~ 70℃

AIME A8000 featured technologies