AIME G500E - Deep Learning Workstation
The AIME G500E is designed as a maintainable and efficient high-end workstation with enough cooling and PSU capacity to host up to four high-end GPUs. It is the 5th generation of AIME workstations, supporting the latest efficient AMD EPYC CPU generation with DDR5 memory and PCIe 5.0 bus technology.
The workstation is designed for optimal airflow, backed by powerful, temperature-controlled high-airflow fans that keep the GPUs and CPU at their optimal operating temperature.
CPU and GPUs exhaust hot air directly outside the case, preventing heat from building up inside. This protects the GPU array from overheating and maintains high performance under full load in 24/7 scenarios.
Configurable GPU Options
Choose the desired configuration among the most powerful NVIDIA GPUs for Deep Learning and Rendering:
Up to 2x NVIDIA RTX Pro 6000 Blackwell Workstation 96GB
The first GPU of NVIDIA's next-generation Blackwell architecture is the RTX Pro 6000 Workstation Edition, with an unbeaten 24,064 CUDA cores and 752 fifth-generation Tensor cores.
With its GPU memory increased to 96 GB of GDDR7 delivering an impressive 1.6 TB/s memory bandwidth, it sets a new standard in GPU computing. The NVIDIA RTX Pro 6000 Blackwell Workstation Edition is currently the most powerful workstation GPU available.
Up to 4x NVIDIA RTX Pro 6000 Blackwell Max-Q 96GB
The NVIDIA RTX Pro 6000 Blackwell Max-Q has the same technical specifications as the Workstation Edition with two important differences: it is limited to an efficient 300 W power draw and comes in an active-blower GPU format. The dense 2-slot format of the RTX Pro 6000 Blackwell Max-Q allows packing up to four GPUs into an AIME G500E workstation.
For GPU-memory-demanding tasks such as large language models, a 4x RTX Pro 6000 Blackwell Max-Q setup provides a total of 384 GB of blazing-fast GPU memory. This allows running LLMs with up to 320B parameters in FP8 or even 640B parameters in FP4 precision.
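As a rough sanity check of these figures, here is a back-of-the-envelope sketch. The 20% headroom for KV cache and activations is our assumption for illustration, not a published sizing rule:

```python
# Rough GPU-memory estimate for serving an LLM (illustrative only).
def model_memory_gb(params_billion: float, bytes_per_param: float,
                    overhead: float = 0.2) -> float:
    """Weight memory plus ~20% headroom for KV cache and activations
    (the headroom factor is an assumption)."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes ~ GB
    return weights_gb * (1.0 + overhead)

total_gpu_memory_gb = 4 * 96  # 4x RTX Pro 6000 Blackwell Max-Q = 384 GB

# 320B parameters in FP8 (1 byte/param) lands right at capacity:
print(model_memory_gb(320, 1.0))   # ~384 GB including headroom
# 640B parameters in FP4 (0.5 byte/param) needs the same amount:
print(model_memory_gb(640, 0.5))   # ~384 GB including headroom
```

The same arithmetic explains why halving the precision doubles the parameter count that fits into a fixed memory budget.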
Up to 4x NVIDIA RTX Pro 4500 Blackwell 32GB
The next-generation NVIDIA RTX Pro 4500 Blackwell GPU is a great and efficient entry-level card. With 10,496 CUDA cores, 328 fifth-generation Tensor cores and 32 GB of GDDR7 memory, it beats all previous NVIDIA RTX 4500 and even RTX 5000 generation cards in performance, price/performance and memory capacity.
Up to 2x NVIDIA RTX 5090 32GB
The GeForce™ RTX 5090 is the flagship of the NVIDIA GeForce Blackwell GPU generation and the direct successor of the RTX 4090. The RTX 5090 combines 680 fifth-generation Tensor Cores and 21,760 next-gen CUDA® cores with 32 GB of GDDR7 graphics memory for unprecedented rendering, AI, graphics, and compute performance. Thanks to its triple-fan architecture, the noise level of the RTX 5090 makes it a suitable solution for running such powerful GPUs in an office environment.
Up to 4x NVIDIA RTX 6000 Ada 48GB
The RTX™ 6000 Ada is built on the NVIDIA Ada Lovelace GPU architecture and is the direct successor of the RTX A6000 and the Quadro RTX 6000. The RTX 6000 Ada combines 568 fourth-generation Tensor Cores and 18,176 next-gen CUDA® cores with 48 GB of graphics memory for unprecedented rendering, AI, graphics, and compute performance.
All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.
Highly efficient EPYC CPU Performance
The AMD EPYC 8004 CPU series is designed for efficient yet powerful workstations and delivers up to 64 cores with a total of 128 threads per CPU. It supports the latest DDR5 memory and PCIe 5.0 technology.
The EPYC 8004 CPU is designed for maximum performance within a constrained power limit of 200 watts, delivering high and very efficient performance at a great price.
If you are looking for maximum CPU performance without compromise, please have a look at the AIME G500.
The 96 available PCIe 5.0 lanes of the AMD EPYC 8004 CPU allow high-bandwidth interconnects and optimal data transfer rates between all GPUs and the CPU.
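To illustrate how that lane budget could be spent in a fully loaded build, here is a simple tally. The per-device lane assignments are our assumptions for illustration, not the board's actual routing:

```python
# Illustrative PCIe 5.0 lane budget for a fully loaded G500E-style build
# (lane assignments are assumptions, not taken from the spec sheet).
LANES_AVAILABLE = 96  # AMD EPYC 8004 series

devices = {
    "4x GPU @ x16":    4 * 16,  # each GPU on a full x16 link
    "2x M.2 SSD @ x4": 2 * 4,
    "2x U.2 SSD @ x4": 2 * 4,
    "10 GbE NIC @ x4": 4,       # Intel X710-AT2 on one x4 link (assumption)
}

used = sum(devices.values())
print(f"lanes used: {used} / {LANES_AVAILABLE}")  # lanes used: 84 / 96
```

Even with four GPUs on full x16 links plus all storage and networking, the budget stays within the CPU's 96 lanes, which is why no lane-sharing switches are needed.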
More than 40 TB High-Speed SSD Storage
Deep learning usually involves large amounts of data to be processed and stored. High throughput and fast access times to the data are essential for short turnaround times.
The AIME G500E can be configured with up to two onboard M.2 PCIe NVMe SSDs of up to 8 TB each, plus two U.2 NVMe SSDs with up to 15.36 TB each in the front bays. In total, more than 40 TB of high-speed SSD storage is possible.
All SSDs are connected via PCIe lanes directly to the CPU and main memory, with up to 6000 MB/s read and 5000 MB/s write rates.
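To put those rates in perspective, a quick back-of-the-envelope calculation shows how fast a full training dataset can be streamed from disk. The dataset size is chosen purely for illustration:

```python
# Time to stream a training dataset from NVMe at the quoted read rate
# (simple arithmetic, ignoring filesystem and decoding overhead).
dataset_gb = 1200   # e.g. a 1.2 TB image dataset (illustrative assumption)
read_mb_s = 6000    # quoted sequential read rate

seconds = dataset_gb * 1000 / read_mb_s
print(f"full pass: {seconds:.0f} s (~{seconds/60:.1f} min)")  # full pass: 200 s (~3.3 min)
```

At these rates a full epoch's worth of raw data can be read in minutes, so storage is unlikely to be the bottleneck of a training run.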
A Workstation suitable for Office and Server Room
The AIME G500E was designed as an office-compatible PC workstation with server-grade hardware. For use in an office environment, we recommend limiting the configuration to a maximum of two GPUs.
When set up in a ventilated, dedicated server room, up to four GPUs can be used without restrictions. The G500E supports IPMI LAN and a BMC (Baseboard Management Controller) for remote control and monitoring of the hardware, essential for serious server setups.
Well Balanced Components
All of our components have been selected for their energy efficiency, durability, compatibility and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.
Tested with Real Life Deep Learning Applications
The AIME G500E was originally designed for our own deep learning application needs and evolved through years of experience with deep learning frameworks and custom PC hardware building.
Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks such as PyTorch and TensorFlow. Just log in and start right away with your favorite deep learning framework.
AIME G500E Technical Details
Type | Tower Workstation |
CPU Options | AMD EPYC 8004 Series: EPYC 8534 (64 cores, Siena, 2.3 / 3.1 GHz), EPYC 8434 (48 cores, Siena, 2.5 / 3.1 GHz), EPYC 8324 (32 cores, Siena, 2.65 / 3.0 GHz), EPYC 8224 (24 cores, Siena, 2.55 / 3.0 GHz), EPYC 8124 (16 cores, Siena, 2.45 / 3.0 GHz) or EPYC 8024 (8 cores, Siena, 2.4 / 3.0 GHz) |
RAM | 64 to 1024 GB DDR5 4800 MHz ECC |
GPU Options | Up to 2x NVIDIA RTX Pro 6000 Blackwell Workstation 96 GB, up to 4x NVIDIA RTX Pro 6000 Blackwell Max-Q 96 GB, up to 4x NVIDIA RTX Pro 4500 Blackwell 32 GB, up to 2x NVIDIA RTX 5090 32 GB Triple Fan, up to 4x NVIDIA RTX 6000 Ada 48 GB, up to 2x NVIDIA RTX 4090 24 GB Triple Fan or up to 4x NVIDIA RTX 5000 Ada 32 GB |
Cooling | CPU and GPUs high-airflow cooled; 3 in-case high-power fans, > 100,000 h MTBF |
Storage | Onboard: up to 2x 8 TB M.2 NVMe SSD PCIe 4.0 (7000 MB/s read, 5000 MB/s write) or up to 2x 4 TB M.2 NVMe SSD PCIe 5.0 (14800 MB/s read, 13400 MB/s write); Front bays: up to 2x 15.36 TB U.2 NVMe SSD PCIe 4.0 (6800 MB/s read, 4000 MB/s write) |
Network | 2x 10 GbE LAN RJ45 (Intel X710-AT2), 2x 1 GbE LAN RJ45, 1x IPMI LAN RJ45 |
USB | Front: 2x USB 3.2 Gen 1 Type-A; Back: 2x USB 3.2 Gen 1 Type-A |
PSU | 2x 2000 W redundant PSUs, 80 PLUS Titanium certified (96% efficiency) |
Noise Level | Idle < 48 dBA, full load < 65 dBA |
Operating Environment | Operating temperature: 10℃ ~ 30℃; non-operating temperature: -40℃ ~ 60℃ |
Dimensions (WxHxD) | 175 x 438 x 680 mm |