• AIME R500 Multi GPU Rack Server Front

AIME R500 - 4 GPU Rack Server

Your deep learning server, configurable with four high-end deep learning GPUs that deliver up to 500 trillion tensor FLOPS of AI performance and 64 GB of high-speed GPU memory. Built to perform 24/7 in your in-house server room or at a co-location facility, driven by a server-grade AMD EPYC CPU.

This product is currently unavailable.


AIME R500 - Machine Learning Server

Deep learning calls for a new kind of server: the multi-GPU AIME R500 takes on the task of delivering maximum deep learning training and inference performance.

Its high-airflow cooling design keeps the GPUs operating at their highest performance levels even under full load in 24/7 scenarios.

Seven high-performance fans work in alignment to deliver high airflow through the system. This setup keeps the system cooler, more performant, and more durable than a collection of many small fans on each individual component.

Configurable Quad GPU Setup

Choose the configuration you need from among the most powerful NVIDIA GPUs for deep learning:

4 x NVIDIA RTX 2080 Ti

Each NVIDIA RTX 2080 Ti trains AI models with 544 NVIDIA Turing mixed-precision Tensor Cores, delivering 107 tensor TFLOPS of AI performance, and comes with 11 GB of ultra-fast GDDR6 memory.

With the AIME R500 you can combine the power of four of these cards, adding up to more than 400 trillion tensor FLOPS of AI performance.

4 x NVIDIA Quadro RTX 6000

The Quadro RTX 6000 is the professional version of the NVIDIA Titan RTX, with blower-style ventilation improved for multi-GPU setups, additional virtualization capabilities, and ECC memory. It is powered by the same Turing core as the Titan RTX, with 576 Tensor Cores delivering 130 tensor TFLOPS of performance and 24 GB of ultra-fast GDDR6 ECC memory.

With the AIME R500 you can combine the power of four of these cards, adding up to more than 500 trillion tensor FLOPS of AI performance.
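As a quick sanity check on these aggregate figures, the per-card tensor performance quoted above can be scaled across the four GPU slots. Note this assumes ideal linear scaling; real training workloads typically scale somewhat sub-linearly across GPUs.

```python
# Peak tensor TFLOPS per card, from the figures quoted above
per_gpu_tflops = {"RTX 2080 Ti": 107, "Quadro RTX 6000": 130}

# Ideal linear scaling across the four GPU slots of the R500
for card, tflops in per_gpu_tflops.items():
    total = 4 * tflops
    print(f"4x {card}: {total} tensor TFLOPS")
```

This reproduces the "more than 400" (4 x 107 = 428) and "more than 500" (4 x 130 = 520) trillion tensor FLOPS claims.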

4 x NVIDIA Quadro GV100

The Quadro GV100 is based on the same Volta GPU as the NVIDIA Tesla V100, but adds display outputs and more reliable fan-based cooling than the Tesla V100. With its 640 Tensor Cores and 32 GB of high-bandwidth HBM2 memory, the Quadro GV100 is one of the fastest GPUs currently available, delivering beyond 100 teraFLOPS (TFLOPS).

All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.

EPYC CPU Performance

The high-end AMD EPYC CPU, designed for servers, delivers up to 32 cores with a total of 64 threads per CPU at an unbeatable price/performance ratio.

The 128 available PCIe 3.0 lanes of the AMD EPYC CPU allow the highest interconnect and data-transfer rates between the CPU and the GPUs, and ensure that every GPU is connected with full x16 PCIe 3.0 bandwidth.
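A rough lane budget shows why 128 lanes suffice for full-bandwidth GPU links. The x4 allocation per NVMe drive below is a typical assumption, not a published spec of this machine:

```python
total_lanes = 128        # PCIe 3.0 lanes on a single-socket AMD EPYC
gpu_lanes   = 4 * 16     # four GPUs, each at full x16
nvme_lanes  = 2 * 4      # two NVMe SSDs at x4 each (assumed, typical)
remaining   = total_lanes - gpu_lanes - nvme_lanes

print(gpu_lanes, nvme_lanes, remaining)
```

Even with all four GPUs at x16 and both NVMe drives attached, over 50 lanes remain for networking and other peripherals.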

A large number of available CPU cores can improve performance dramatically when the CPU is used for preprocessing and delivering data, keeping the GPUs optimally fed with work.

Up to 8TB High-Speed SSD Storage

Deep learning most often involves large amounts of data to be processed and stored. High throughput and fast access times to the data are essential for fast turnaround times.

The AIME R500 can be configured with two NVMe SSDs, which are connected via PCIe lanes directly to the CPU and main memory. We offer the following three SSD classes:

  • QLC type: high read rates, average write speed - best suited for reading static data libraries or archives
  • TLC type: highest read and high write speed - best suited for fast read/write file access
  • MLC type: highest read and write speed - best suited for high-performance databases, data streaming, and virtualization
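To put these classes in perspective, here is the time needed to sequentially read a full 4 TB drive at each class's rated read speed (rates taken from the technical details below, using decimal units, 1 TB = 10^6 MB):

```python
read_mb_s = {"QLC": 1500, "TLC": 3500, "MLC": 3500}  # rated sequential read
drive_mb = 4 * 1_000_000  # one 4 TB drive in MB

for cls, rate in read_mb_s.items():
    minutes = drive_mb / rate / 60
    print(f"{cls}: {minutes:.1f} min to scan the full drive")
```

A full sequential pass over a 4 TB dataset takes roughly 44 minutes on QLC versus about 19 minutes on TLC or MLC.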

High Connectivity and Management Interface

The two available 10 Gbit/s LAN ports enable the fastest connections to NAS resources and big data collections. High-bandwidth LAN connectivity is also a must-have for data interchange in a distributed compute cluster.

The AIME R500 is equipped with a dedicated IPMI LAN interface; through its advanced BMC, the server can be remotely monitored and controlled (wake-up/reset). These features make successful integration of the AIME R500 into a server rack cluster possible.

Well Balanced Components

All of our components have been selected for their energy efficiency, durability, compatibility, and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance or reliability.

Developed for Machine Learning Applications

The AIME R500 was first designed for our own machine learning server needs and evolved through years of experience with deep learning frameworks and custom PC hardware building.

Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks such as TensorFlow, Keras, PyTorch, and MXNet. Just log in and start right away with your favourite machine learning framework.
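After logging in, a quick way to confirm which of the mentioned frameworks an image actually ships is to probe their import names (`torch` is PyTorch's import name; the others match their package names):

```python
import importlib.util

# Import names of the frameworks listed above
frameworks = ["tensorflow", "keras", "torch", "mxnet"]
installed = {name: importlib.util.find_spec(name) is not None
             for name in frameworks}
print(installed)
```

Each entry reports True or False depending on whether the framework is importable on the machine at hand.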

Technical Details

Type: Rack server, 4U, 65 cm depth
CPU (configurable): EPYC 7261 (8 cores, 2.5 GHz)
  EPYC 7232 (8 cores, 3.1 GHz)
  EPYC 7351 (16 cores, 2.4 GHz)
  EPYC 7302 (16 cores, 3.0 GHz)
  EPYC 7402 (24 cores, 2.8 GHz)
  EPYC 7502 (32 cores, 2.5 GHz)
RAM: 64 / 128 / 256 GB ECC memory
GPU options: 4x NVIDIA RTX 2080 Ti or
  4x NVIDIA Quadro RTX 6000 or
  4x NVIDIA Quadro RTX 8000 or
  4x NVIDIA Quadro GV100
Cooling: CPU and GPUs are cooled by an air stream provided by 7 high-performance 120 mm fans (> 100,000 h MTBF)
Storage: Up to 2 x 4 TB NVMe SSD
  Config options:
  QLC: 1500 MB/s read, 1000 MB/s write
  TLC: 3500 MB/s read, 1750 MB/s write
  MLC: 3500 MB/s read, 2700 MB/s write
Network: 2 x 10 Gbit LAN
  1 x IPMI LAN
USB: 2 x USB 3.0 ports (front)
  2 x USB 3.0 ports (back)
PSU: 2000 W power supply, 80 PLUS Platinum certified (94% efficiency)
Noise level: 50-55 dBA
Dimensions (W x H x D): 440 x 180 x 650 mm

AIME R500 featured technologies