AIME G500 - Deep Learning Workstation
The AIME G500 is designed as a maintainable high-end workstation with enough cooling and PSU capacity to host up to four high-end GPUs. It is the 5th generation of AIME workstations, supporting the latest AMD Threadripper CPU generation with DDR5 memory and PCIe 5.0 bus technology.
The workstation is designed for optimal airflow, backed by powerful, temperature-controlled high-airflow fans that keep the GPUs and CPU at their optimal operating temperature.
The CPU and GPUs exhaust their hot air directly out of the case, preventing heat from building up inside. This avoids overheating of the GPU array and maintains high performance under full load in 24/7 scenarios.
Definable GPU Configuration
Choose the desired configuration from among the most powerful NVIDIA GPUs for deep learning and rendering:
Up to 2x NVIDIA RTX Pro 6000 Blackwell Workstation 96GB
The first GPU of NVIDIA's next-generation Blackwell architecture is the RTX Pro 6000 Workstation Edition, with an unbeaten 24,064 CUDA cores and 752 Tensor cores of the 5th generation.
With its memory increased to 96 GB of GDDR7 and an impressive 1.6 TB/s memory bandwidth, it marks the inception of a new standard in GPU computing. The NVIDIA RTX Pro 6000 Blackwell Workstation Edition is currently the most powerful workstation GPU available.
Up to 4x NVIDIA RTX Pro 6000 Blackwell Max-Q 96GB
The NVIDIA RTX Pro 6000 Blackwell Max-Q has the same technical specifications as the Workstation Edition with two important differences: it is limited to an efficient 300 W power intake and comes in an active-blower GPU format. The dense 2-slot format of the RTX Pro 6000 Blackwell Max-Q allows packing up to four GPUs into an AIME G500 workstation.
For GPU-memory-demanding tasks like large language models, a 4x RTX Pro 6000 Blackwell Max-Q setup provides a total of 384 GB of fast GPU memory. This allows running LLMs with up to 320B parameters in fp8 or even 640B parameters in fp4 precision.
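The headroom behind these figures can be checked with a rough back-of-the-envelope calculation. The sketch below assumes that the model weights dominate GPU memory usage and reserves an illustrative 10% for KV cache, activations and framework overhead; it is an estimate, not a benchmark.

```python
# Rough estimate of the largest LLM that fits into a given amount of GPU memory.
# Assumption (illustrative): model weights dominate, ~10% is reserved for
# KV cache, activations and framework overhead.

def max_params_billion(total_gpu_mem_gb: float, bytes_per_param: float,
                       overhead_fraction: float = 0.10) -> float:
    """Approximate model size (in billions of parameters) that fits into memory."""
    usable_bytes = total_gpu_mem_gb * 1e9 * (1.0 - overhead_fraction)
    return usable_bytes / bytes_per_param / 1e9

total_memory_gb = 4 * 96  # 4x RTX Pro 6000 Blackwell Max-Q = 384 GB

print(f"fp8 (1 byte/param):   ~{max_params_billion(total_memory_gb, 1.0):.0f}B parameters")
print(f"fp4 (0.5 byte/param): ~{max_params_billion(total_memory_gb, 0.5):.0f}B parameters")
# ~346B (fp8) and ~691B (fp4); with extra headroom for longer context windows,
# this lands in the region of the 320B / 640B figures quoted above.
```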
Up to 2x NVIDIA RTX 5090 32GB
The GeForce™ RTX 5090 is the flagship of NVIDIA's GeForce Blackwell GPU generation and the direct successor of the RTX 4090. The RTX 5090 combines 680 fifth-generation Tensor Cores and 21,760 next-gen CUDA® cores with 32 GB of GDDR7 graphics memory for unprecedented rendering, AI, graphics, and compute performance. Thanks to its triple-fan architecture, the noise level of the RTX 5090 makes it a suitable choice for running such powerful GPUs in an office environment.
Up to 4x NVIDIA RTX 6000 Ada 48GB
The RTX™ 6000 Ada is built on the NVIDIA Ada Lovelace GPU architecture. It is the direct successor of the RTX A6000 and the Quadro RTX 6000. The RTX 6000 Ada combines 568 fourth-generation Tensor Cores and 18,176 next-gen CUDA® cores with 48 GB of graphics memory for unprecedented rendering, AI, graphics, and compute performance.
Up to 4x NVIDIA RTX Pro 4500 Blackwell 32GB
The RTX Pro 4500 of NVIDIA's latest Blackwell GPU generation is a strong and efficient entry-level card. With its 10,496 CUDA cores, 328 Tensor cores of the 5th generation, 32 GB of GDDR7 memory and a low energy profile of 200 watts, it is well ahead of the previous RTX A4500/4500 Ada and even RTX A5000/5000 Ada series cards in performance, performance per price and memory capabilities.
All NVIDIA GPUs are supported by NVIDIA's CUDA-X AI SDK, including cuDNN and TensorRT, which power nearly all popular deep learning frameworks.
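Whether the CUDA stack is visible to a framework can be verified in a few lines. A minimal sketch with PyTorch, assuming a CUDA-enabled PyTorch build is installed:

```python
# Minimal check that PyTorch sees the CUDA runtime and cuDNN on this machine.
import torch

print("CUDA available: ", torch.cuda.is_available())
print("CUDA version:   ", torch.version.cuda)            # CUDA version PyTorch was built with
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:  ", torch.backends.cudnn.version())
```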
Threadripping Pro CPU Performance
The high-end AMD Threadripper Pro 7000 and Pro 9000 CPU series, designed for workstations, delivers up to 96 cores and 192 threads per CPU at an unbeaten price-performance ratio, supporting the latest DDR5 memory and PCIe 5.0 technology.
The 128 available PCIe 5.0 lanes of the AMD Threadripper Pro CPU allow the highest interconnect and data transfer rates between all GPUs and the CPU.
A large number of available CPU cores can improve performance immensely when the CPU handles tasks like preprocessing and delivering data to optimally feed the GPUs with workloads.
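In practice this typically means running data loading and preprocessing in parallel CPU worker processes. A minimal PyTorch sketch, where the dummy dataset, batch size and worker count are illustrative placeholders and a CUDA-capable GPU is assumed:

```python
# Sketch: feeding the GPU from many parallel CPU data-loading workers.
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for real, preprocessing-heavy training data.
dataset = TensorDataset(torch.randn(4096, 3, 64, 64), torch.randint(0, 10, (4096,)))

loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=min(os.cpu_count() or 1, 32),  # parallel CPU workers for loading/preprocessing
    pin_memory=True,                            # pinned host memory speeds up host-to-GPU copies
)

for images, labels in loader:
    images = images.cuda(non_blocking=True)     # overlap transfer with GPU compute
    labels = labels.cuda(non_blocking=True)
    # ... forward/backward pass on the GPU would go here ...
    break
```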
More than 40 TB High-Speed SSD Storage
Deep learning usually involves large amounts of data to be processed and stored. High throughput and fast access times to the data are essential for short turnaround times.
The AIME G500 can be configured with up to two 8 TB onboard M.2 NVMe SSDs and, additionally, two U.2 NVMe SSDs with up to 15.36 TB each in the front bays. In total, more than 40 TB of high-speed SSD storage is possible.
All SSDs are connected via PCIe lanes directly to the CPU and main memory, with read rates of up to 6000 MB/s and write rates of up to 5000 MB/s.
A Workstation Suitable for Office and Server Room
The AIME G500 was designed as an office-compatible PC workstation with server-grade hardware. For use in an office environment, we recommend limiting the configuration to a maximum of two GPUs.
When set up in a dedicated, air-ventilated server room, up to four GPUs can be used without restraints. The G500 supports IPMI LAN and a BMC (Baseboard Management Controller) for remote control and monitoring of the hardware, which is essential for serious server setups.
Well Balanced Components
All of our components have been selected for their energy efficiency, durability, compatibility and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.
Tested with Real-Life Deep Learning Applications
The AIME G500 was originally designed for our own deep learning application needs and has evolved through years of experience with deep learning frameworks and custom PC hardware building.
Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks like PyTorch and TensorFlow. Just log in and start right away with your favorite deep learning framework.
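For instance, a first sanity check after logging in could look like the following PyTorch sketch, which lists the installed GPUs and runs a small matrix multiplication on each of them (TensorFlow users could similarly call tf.config.list_physical_devices('GPU')):

```python
# List all installed NVIDIA GPUs and run a quick compute smoke test on each.
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

    x = torch.randn(4096, 4096, device=f"cuda:{i}")
    y = x @ x                        # small matmul to exercise the GPU
    torch.cuda.synchronize(i)
    print(f"  matmul OK, result norm: {y.norm().item():.1f}")
```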
AIME G500 Technical Details
Type | Tower Workstation |
CPU Options | Threadripper Pro 9000WX Series: 9985WX (64 cores, 3.2 GHz - 5.4 GHz) or 9995WX (96 cores, 2.5 GHz - 5.4 GHz); Threadripper Pro 7000WX Series: 7955WX (16 cores, 4.5 GHz - 5.3 GHz), 7965WX (24 cores, 4.2 GHz - 5.3 GHz), 7975WX (32 cores, 4.0 GHz - 5.3 GHz), 7985WX (64 cores, 3.2 GHz - 5.1 GHz) or 7995WX (96 cores, 2.5 GHz - 5.1 GHz) |
RAM | 64 to 1024 GB DDR5 5600 MHz ECC |
GPU Options | Up to 2x NVIDIA RTX Pro 6000 Blackwell Workstation 96 GB or up to 4x NVIDIA RTX Pro 6000 Blackwell Max-Q 96 GB or up to 4x NVIDIA RTX Pro 4500 Blackwell 32 GB or up to 2x NVIDIA RTX 5090 32 GB Triple Fan or up to 4x NVIDIA RTX 6000 Ada 48 GB or up to 2x NVIDIA RTX 4090 24 GB Triple Fan or up to 4x NVIDIA RTX 5000 Ada 32 GB or up to 4x NVIDIA RTX A6000 48 GB or up to 4x NVIDIA RTX A5000 24 GB |
Cooling | CPU and GPUs high-airflow cooled, 3 in-case high-power fans (> 100,000 h MTBF) |
Storage | Onboard: up to 2x 8 TB M.2 NVMe SSD PCIe 4.0 (7000 MB/s read, 5000 MB/s write) or up to 2x 4 TB M.2 NVMe SSD PCIe 5.0 (14800 MB/s read, 13400 MB/s write); Front bays: up to 2x 15.36 TB U.2 NVMe SSD PCIe 4.0 (6800 MB/s read, 4000 MB/s write) |
Network | 2 x 10 GBit LAN RJ45 (Intel X550), 1 x 1 GbE LAN RJ45, 1 x IPMI LAN RJ45 |
Audio | Realtek ALC4080 7.1 Surround Sound HD, 3 audio jack ports (Audio in / Audio out / Mic) |
USB | Front: 2 x USB 3.0 Type-A; Back: 1 x USB 3.2 Gen 2 Type-C, 1 x USB 3.2 Gen 2 Type-A, 4 x USB 3.2 Type-A |
PSU | 2 x 2400 Watt redundant PSUs, 80 PLUS Titanium certified (96% efficiency) |
Noise Level | Idle < 50 dBA, Full Load < 70 dBA |
Operating Environment | Operating temperature: 10℃ ~ 30℃; Non-operating temperature: -40℃ ~ 60℃ |
Dimensions (WxHxD) | 175 x 438 x 680 mm |