
AIME A4000 - Multi GPU HPC Rack Server

The AIME A4000 is an enterprise deep learning server based on the ASUS ESC4000A-E10, configurable with up to four of the most advanced deep learning accelerators and GPUs, entering the petaFLOPS HPC computing realm with more than 4 peta tensor operations per second of deep learning performance. It packs EPYC CPU performance, the fastest PCIe 4.0 bus speeds and 10 Gbit/s network connectivity into a dense 2U form factor.

Built to perform 24/7 for highly reliable high-performance computing, whether in your in-house data center, at a co-location facility or as a hosted solution.


AIME A4000 - Deep Learning Server

If you are looking for a server specialized in maximum deep learning training and inference performance, and built for the highest demands in HPC computing, the AIME A4000 multi-GPU 2U rack server delivers.

The AIME A4000 is based on the new ASUS ESC4000A-E10 barebone, which is powered by an AMD EPYC™ 7002-series processor with up to 64 cores and 128 threads.

Its GPU-optimized design with high-airflow cooling allows the use of four high-end double-slot GPUs such as the NVIDIA A100, Tesla or Quadro models.

Selectable GPU Configuration

Choose the desired configuration from the most powerful NVIDIA GPUs for deep learning:

2-4x NVIDIA A100

The NVIDIA A100 is the flagship of the NVIDIA Ampere processor generation and the current successor to the legendary NVIDIA Tesla accelerator cards. The NVIDIA A100 is based on the GA100 processor, manufactured in 7 nm, with 6912 CUDA cores, 432 third-generation Tensor Cores and 40 or 80 GB of HBM2 memory with the highest data transfer rates. A single NVIDIA A100 GPU already breaks the peta-TOPS performance barrier, and four accelerators of this type add up to more than 1 petaFLOPS of combined tensor compute performance. The NVIDIA A100 is currently the most efficient and fastest deep learning accelerator card available.

2-4x NVIDIA RTX 3090 Turbo

Built on the 2nd generation NVIDIA Ampere RTX architecture, the GeForce RTX™ 3090 doubles AI performance with 10496 CUDA cores and 328 Tensor Cores. It offers performance previously only available from NVIDIA Titan-class GPUs. The RTX 3090 features 24 GB of GDDR6X memory. The AIME A4000 uses server-capable Turbo versions of the RTX 3090.

2-4x NVIDIA RTX 3080 Ti Turbo

The GeForce RTX™ 3080 Ti is the slightly smaller sibling of the RTX 3090, with 10240 CUDA cores, 320 Tensor Cores and 12 GB of GDDR6X memory. It is the direct successor to the still-widespread RTX 2080 Ti. For tasks where the focus is on high compute density rather than GPU memory, such as inference, it is a less expensive alternative.

2-4x NVIDIA RTX A6000

The NVIDIA RTX A6000 is the Ampere-based successor to the NVIDIA Quadro series. It features the same GPU processor (GA102) as the RTX 3090, but with all cores of the GA102 enabled, outperforming the RTX 3090 with its 10752 CUDA cores and 336 third-generation Tensor Cores. It is equipped with 48 GB of GDDR6 ECC memory, twice the GPU memory of its predecessor, the Quadro RTX 6000, and of the RTX 3090. The NVIDIA RTX A6000 is currently the second-fastest NVIDIA GPU available, beaten only by the NVIDIA A100, and is best suited for memory-demanding tasks.

2-4x NVIDIA RTX A5000

With its 8192 CUDA cores and 256 third-generation Tensor Cores, the NVIDIA RTX A5000 is less powerful than an RTX 3090. However, with its 230 W power consumption and 24 GB of memory, it is a very efficient accelerator card and an interesting alternative, especially for inference tasks.

All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which powers nearly all popular deep learning frameworks.
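As a sketch of how such a 2-4 GPU configuration is used in practice, assuming PyTorch (one of the frameworks supported via CUDA-X) and a hypothetical toy model: `torch.nn.DataParallel` splits each batch across all installed GPUs, and the code falls back to a single device when fewer are present.

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for a real training workload.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Use CUDA if present; fall back to CPU so the sketch also runs elsewhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# With 2-4 GPUs installed, DataParallel replicates the model and splits
# each input batch across all visible devices.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

batch = torch.randn(64, 128, device=device)
out = model(batch)
print(out.shape)  # torch.Size([64, 10])
```

For multi-node clusters, `torch.nn.parallel.DistributedDataParallel` is the usual next step; the single-node `DataParallel` wrapper shown here needs no launcher and illustrates the idea.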

EPYC CPU Performance

The high-end AMD EPYC CPU designed for servers delivers up to 64 cores with a total of 128 threads per CPU at an unbeaten price-performance ratio.

The 128 available PCIe 4.0 lanes of the AMD EPYC CPU allow the highest interconnect and data transfer rates between the CPU and the GPUs and ensure that all GPUs are connected with full x16 PCIe 4.0 bandwidth.

A large number of available CPU cores can dramatically improve performance when the CPU is used for preprocessing and delivering data to keep the GPUs optimally fed with workloads.
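The effect of many CPU cores is easiest to see in the input pipeline. A minimal sketch, assuming PyTorch and a synthetic stand-in dataset: the `num_workers` parameter of `DataLoader` spawns that many CPU worker processes, which load and preprocess batches in parallel so the GPUs are not starved; on a 64-core EPYC, values well above the default of 0 are usually worthwhile.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic dataset standing in for real preprocessed training data.
data = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))

# num_workers > 0 runs loading/preprocessing in parallel CPU processes;
# pin_memory speeds up host-to-GPU transfers when CUDA is available.
loader = DataLoader(
    data,
    batch_size=64,
    num_workers=4,
    pin_memory=torch.cuda.is_available(),
)

x, y = next(iter(loader))
print(x.shape, y.shape)
```

The right `num_workers` value depends on the preprocessing cost per sample; it is typically tuned empirically against GPU utilization.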

Up to 32 TB High-Speed SSD Storage

Deep learning is most often linked to large amounts of data to be processed and stored. High throughput and fast access times to the data are essential for short turnaround times.

The AIME A4000 can be configured with up to four exchangeable U.2 NVMe triple-level-cell (TLC) SSDs with a capacity of up to 8 TB each, which adds up to a total capacity of 32 TB of the fastest SSD storage.

Since each SSD is connected directly to the CPU and main memory via PCIe 4.0 lanes, they achieve consistently high read and write rates of 3000 MB/s.

As is usual in the server sector, the SSDs have an MTBF of 2,000,000 hours and a 5-year manufacturer's warranty.

High Connectivity and Management Interface

With the available 2 x 1 Gbit/s RJ45 and 1 x 10 Gbit/s SFP+ LAN ports, the fastest connections to NAS resources and big data collections are achievable. The highest available LAN bandwidth is also a must-have for data interchange in a distributed compute cluster.

The AIME A4000 is fully manageable via ASMB9 (out-of-band) and the ASUS Control Center (in-band), which makes integration of the AIME A4000 into larger server clusters straightforward.

Optimized for Multi GPU Server Applications

The AIME A4000 offers energy efficiency with redundant Platinum-rated power supplies, which enable long-term fail-safe operation.

Its thermal control technology provides more efficient power consumption for large-scale environments.

Everything is set up, configured and tuned for optimal multi-GPU performance by AIME.

The A4000 comes with a preinstalled Linux OS configured with the latest drivers and frameworks such as TensorFlow, Keras, PyTorch and MXNet. It is ready after boot-up to start accelerating your deep learning applications right away.
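A quick way to verify the preinstalled stack after the first boot, sketched with PyTorch (the queries are standard `torch.cuda` calls; the printed device list depends on the GPUs configured in your system):

```python
import torch

# Post-boot sanity check: confirm the driver, CUDA and cuDNN stacks
# are visible to the preinstalled framework.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("cuDNN available:", torch.backends.cudnn.is_available())

# Enumerate the installed accelerators (2-4 GPUs in a typical A4000).
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}:", torch.cuda.get_device_name(i))
```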

Technical Details

Type Rack Server 2U, 80cm depth
CPU (configurable) Rome
EPYC 7232 (8 cores, 3.1 GHz)
EPYC 7302 (16 cores, 3.0 GHz)
EPYC 7402 (24 cores, 2.8 GHz)
EPYC 7502 (32 cores, 2.5 GHz)
EPYC 7642 (48 cores, 2.3 GHz)
EPYC 7742 (64 cores, 2.25 GHz)
Milan
EPYC 7313 (16 cores, 3.0 / 3.7 GHz)
EPYC 7443 (24 cores, 2.85 / 4.0 GHz)
EPYC 7543 (32 cores, 2.8 / 3.7 GHz)
EPYC 7713 (64 cores, 2.0 / 3.6 GHz)
RAM 64 / 128 / 256 / 512 / 1024 GB ECC memory
GPU Options 2 to 4x NVIDIA A100 80GB or
2 to 4x NVIDIA RTX 3080 Ti 12GB or
2 to 4x NVIDIA RTX 3090 24GB or
2 to 4x NVIDIA Quadro RTX 6000 24GB or
2 to 4x NVIDIA RTX A5000 24GB or
2 to 4x NVIDIA RTX A6000 48GB or
2 to 4x Tesla V100 16GB or
2 to 4x Tesla V100S 32GB
Cooling CPU and GPUs are cooled by an air stream provided by 7 high-performance fans (> 100,000 h MTBF)
Storage Up to 4 x 8TB U.2 NVMe SSD
Triple Level Cell (TLC) quality
3000 MB/s read, 3000 MB/s write
MTBF of 2,000,000 hours and 5 years manufacturer's warranty
Network 2 x 1 Gbit/s LAN RJ45
optional: 1 x 10 Gbit/s LAN SFP+ or RJ45
1 x IPMI LAN
USB 4 x USB 3.0 ports (front)
2 x USB 3.0 ports (back)
PSU 2 x 1600 Watt redundant power
80 PLUS Platinum certified (94% efficiency)
Noise Level 80 dB(A)
Dimensions (D x W x H) 800 mm x 440 mm x 88.9 mm (2U)
31.50" x 17.32" x 3.5"
Operating Environment Operating temperature: 10℃ ~ 35℃
Non-operating temperature: -40℃ ~ 70℃

AIME A4000 featured technologies