AIME R400 - 4 GPU Rack Server
Your deep learning server. Built to perform 24/7 in your in-house data center or co-location. Configurable with 4 high-end deep learning GPUs that deliver the fastest deep learning power available: up to 500 Tensor TFLOPS of AI performance and 64 GB of high-speed GPU memory.
AIME R400 - Deep Learning Server
Deep learning calls for a new kind of server: the multi-GPU AIME R400 takes on the task of delivering maximum deep learning training and inference performance.
With its liquid-cooled CPU and high-airflow cooling design, it keeps operating at its highest performance level even under full load in 24/7 scenarios.
Six high-performance fans work together to deliver a high air flow through the system. This setup keeps the system cooler, more performant and more durable than a collection of many small fans for each component.
Definable Quad GPU Configuration
Choose the configuration you need from among the most powerful NVIDIA GPUs for deep learning:
4x NVIDIA RTX 2080 Ti
Each NVIDIA RTX 2080 Ti trains AI models with 544 NVIDIA Turing mixed-precision Tensor Cores delivering 107 Tensor TFLOPS of AI performance and 11 GB of ultra-fast GDDR6 memory.
With the AIME R400 you can combine the power of 4 of those cards, adding up to more than 400 Tensor TFLOPS of AI performance.
4x NVIDIA Titan RTX
Powered by the award-winning Turing™ architecture, the Titan RTX brings 130 Tensor TFLOPS of performance, 576 Tensor Cores and 24 GB of ultra-fast GDDR6 memory.
With the AIME R400 you can combine the power of 4 of those cards, adding up to more than 500 Tensor TFLOPS of AI performance.
4x NVIDIA Tesla V100
With 640 Tensor Cores, the Tesla V100 was the world's first GPU to break the 100 teraFLOPS (TFLOPS) barrier of deep learning performance, and it includes 16 GB of high-bandwidth HBM2 memory.
The Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, a single Tesla V100 GPU delivers up to 47x higher inference performance than a CPU server.
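The headline numbers above are simple per-card multiples; a quick sanity check of the arithmetic, using only the per-card figures quoted on this page and four cards per R400:

```python
# Per-card Tensor TFLOPS and GPU memory (GB), as quoted on this page.
gpu_options = {
    "RTX 2080 Ti": {"tensor_tflops": 107, "memory_gb": 11},
    "Titan RTX": {"tensor_tflops": 130, "memory_gb": 24},
}

NUM_GPUS = 4  # the R400 holds four cards

# Aggregate compute and memory per configuration.
totals = {
    name: (NUM_GPUS * spec["tensor_tflops"], NUM_GPUS * spec["memory_gb"])
    for name, spec in gpu_options.items()
}

for name, (tflops, mem) in totals.items():
    print(f"4x {name}: {tflops} Tensor TFLOPS, {mem} GB GPU memory")
# 4x RTX 2080 Ti: 428 Tensor TFLOPS, 44 GB GPU memory
# 4x Titan RTX: 520 Tensor TFLOPS, 96 GB GPU memory
```

This matches the "more than 400" and "more than 500" Tensor TFLOPS claims for the respective configurations.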
All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.
Threadripping CPU Performance
The high-end AMD Threadripper CPU, designed for workstations and servers, delivers up to 32 cores with a total of 64 threads per CPU at an unbeaten price/performance ratio.
The 64 available PCIe 3.0 lanes of the AMD Threadripper CPU allow the highest interconnect and data transfer rates between the CPU and the GPUs.
A large number of available CPU cores can dramatically improve performance when the CPU is used for preprocessing and delivering data, to optimally feed the GPUs with workloads.
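A hedged sketch of that pattern in plain Python (stand-in preprocessing, no real GPU work): CPU worker threads preprocess samples in parallel and push results into a bounded queue that the training loop drains, so the accelerator never starves waiting for data.

```python
import queue
import threading

def preprocess(sample):
    """CPU-side work: decode / augment / normalize (stand-in: square the value)."""
    return sample * sample

def worker(samples, batches):
    # Each worker preprocesses its share of the data and queues the results.
    for s in samples:
        batches.put(preprocess(s))

# Bounded queue: workers block instead of racing far ahead of the consumer.
batches = queue.Queue(maxsize=8)
samples = list(range(16))

# Two CPU workers, each preprocessing every other sample.
threads = [
    threading.Thread(target=worker, args=(samples[i::2], batches))
    for i in range(2)
]
for t in threads:
    t.start()

# Consumer (stand-in for the GPU training loop) drains the queue.
results = [batches.get() for _ in samples]
for t in threads:
    t.join()

print(sorted(results))  # every sample preprocessed exactly once
```

Real frameworks provide the same idea out of the box (e.g. parallel data-loader workers), but the principle is identical: more CPU cores means more preprocessing throughput feeding the GPUs.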
Up to 4TB High-Speed SSD Storage
Deep learning usually involves large amounts of data to be processed and stored. High throughput and fast access times to that data are essential for fast turnaround times.
The AIME R400 can be configured with two NVMe SSDs, which are connected via PCIe lanes directly to the CPU and main memory. We offer the following three classes of SSD:
- QLC type: high read rates, average write speed; best suited for reading static data libraries or archives
- TLC type: highest read and high write speed; best suited for fast read/write file access
- MLC type: highest read and write speed; best suited for high-performance databases, data streaming and virtualization
Well Balanced Components
All of our components have been selected for their energy efficiency, durability, compatibility and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.
Tested with Real Life Deep Learning Applications
The AIME R400 was originally designed for our own deep learning server needs and evolved through years of experience with deep learning frameworks and custom PC hardware building.
Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks such as TensorFlow, Keras, PyTorch and MXNet. Just log in and start right away with your favourite deep learning framework.
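As an illustration, a minimal, framework-agnostic sketch of the first thing you might run after logging in, to confirm which frameworks are importable (the module names below are the usual pip import names, an assumption about the installed packages):

```python
import importlib.util

def installed_frameworks(names=("tensorflow", "keras", "torch", "mxnet")):
    """Report which deep learning frameworks are importable on this machine."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, present in installed_frameworks().items():
        print(f"{name}: {'available' if present else 'missing'}")
```

Checking the import spec rather than importing each framework keeps the probe fast and avoids initializing GPU contexts just to see what is installed.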
| Specification | Details |
|---|---|
| Type | Rack server 6U, 45 cm depth |
| CPU (configurable) | Threadripper 1920X (12 cores, 4 GHz), Threadripper 1950X (16 cores, 4 GHz), Threadripper 2920X (12 cores, 4.3 GHz), Threadripper 2950X (16 cores, 4.3 GHz), Threadripper 2970WX (24 cores, 4.2 GHz), Threadripper 2990WX (32 cores, 4.2 GHz) |
| RAM | 64 or 128 GB |
| GPU options | 4x NVIDIA RTX 2080 Ti, 4x NVIDIA Titan RTX or 4x NVIDIA Tesla V100 |
| Cooling | Liquid-cooled CPU; GPUs cooled by a high air flow stream; high-performance case fans (> 100,000 h MTBF) |
| Storage | Up to 2x 2 TB NVMe SSD. QLC: 1500 MB/s read, 1000 MB/s write; TLC: 3500 MB/s read, 1750 MB/s write; MLC: 3500 MB/s read, 2700 MB/s write |
| Network | 2x 1 Gbit/s LAN |
| USB | 2x USB 3.1 Gen 1 (front), 1x USB 3.1 Gen 2 Type-C™, 1x USB 3.1 Gen 2 Type-A, 6x USB 3.1 Gen 1 |
| PSU | 2000 W, 80 PLUS Platinum certified (94% efficiency) |
| Dimensions (W x H x D) | 440 x 265 x 430 mm |