Artificial Intelligence Machines

Server | Workstations | Cloud

Built for Deep Learning & High Performance Computing

Save up to 90% by switching from your current cloud provider to AIME products. Our multi-GPU accelerated HPC computers come with preinstalled frameworks like TensorFlow, Keras, PyTorch and more. Start computing right away!

Trusted by customers all over Europe

Throughout Europe, researchers and engineers at universities, in start-ups, large companies, public agencies and national laboratories use AIME products for their work on the development of artificial intelligence.

and many others...

AIME is a member of the AI Federal Association

AIME Servers

4U Server

AIME A8000

Unleash multiple peta-TensorOps of Deep Learning performance with up to eight GPUs or accelerators, dual EPYC CPUs, fastest PCIe 4.0 bus speeds and up to 100 Gbit/s network connectivity. A multi-GPU workhorse built to perform at your data center or co-location.

  • GPU: 4-8x NVIDIA A100 80 GB or
    2-8x NVIDIA H100 80 GB or
    4-8x NVIDIA RTX 6000 Ada 48 GB or
    4-8x NVIDIA RTX 3090 24 GB or
    4-8x NVIDIA RTX A6000 48 GB or
    4-8x NVIDIA RTX A5000 24 GB or
    4-8x NVIDIA Tesla V100S 32 GB
  • CPU: Dual AMD EPYC, 8-64 Cores
  • RAM: 128-2048 GB DDR4 ECC
  • SSD: 2-30 TB, U.2 NVMe SSD
  • RAID: 2-6x 20 TB HDD SATA RAID 0/1/5/10/50 or
    2-4x 15 TB SSD NVMe RAID 0/1/5/10/50

If you are looking for a server specialized in maximum Deep Learning training and inference performance and for the highest demands in HPC computing, the AIME A8000 multi-GPU 4U rack server delivers. You get:

  • Definable GPU Configuration
  • Dual EPYC CPU Performance
  • Up to 60 TB High-Speed NVMe SSD Storage
  • High Connectivity and Management Interface
  • Optimized for Multi GPU Server Applications

2U Server

AIME A4004

The Deep Learning server powered by the latest EPYC Genoa generation, with up to four GPUs or accelerators packed into two height units, fastest PCIe 5.0 bus speeds and up to 100 Gbit/s network connectivity. Built to perform 24/7 at your data center or co-location for highly reliable high performance computing.

  • GPU: 2-4x NVIDIA H100 80 GB | A100 40 GB/80 GB | RTX 6000 Ada 48 GB | RTX A6000 48 GB | RTX A5000 24 GB
  • CPU: AMD EPYC 16-96 Cores
  • RAM: 96-768 GB DDR5 ECC
  • SSD: 1-60 TB, NVMe U.2 SSD (with optional RAID)

AIME A4000

The predecessor of the A4004, based on the EPYC Rome and Milan CPU generations, with fast PCIe 4.0 bus speeds and up to 100 Gbit/s network connectivity. Your high performance computing node, built to perform reliably 24/7.

  • GPU: 1-4x NVIDIA H100 80 GB or
    2-4x A100 40 GB/80 GB | RTX 6000 Ada 48 GB | RTX A6000 48 GB | RTX A5000 24 GB | Tesla V100S 32 GB
  • CPU: AMD EPYC 8-64 Cores
  • RAM: 64-1024 GB DDR4 ECC
  • SSD: 1-32 TB, NVMe U.2 SSD

AIME Workstations



The perfect workstation for Deep Learning development. Own the power of multiple GPUs directly under your desk. Its elaborate cooling concept makes it suitable for use in office environments.

  • GPU: 2x NVIDIA RTX 4090 24 GB or
    2-4x NVIDIA RTX A5000 24 GB or
    1-4x NVIDIA RTX A6000 48 GB or
    1-4x NVIDIA RTX 6000 Ada 48 GB
  • CPU: AMD Threadripper Pro 16-64 Cores (Zen3)
  • RAM: 64-512 GB DDR4 ECC
  • SSD: 2x 1-8 TB, NVMe TLC
  • HDD: 1-4x 16 TB 3.5'' SATA 7,200 RPM 512 MB Cache


GPU Cloud Server

Rent a Multi GPU Server

Rent an AIME server, hosted in our AI cloud, on a weekly or monthly basis for as long as you need it.
Get full remote access to a bare-metal high-end multi-GPU server, specifically designed for your compute-intensive tasks.

No setup time, no extra costs, and the option to try before you buy.

  • Basic: Up to 8x NVIDIA RTX A5000 24 GB, 24 cores, 256 GB memory
  • Professional: Up to 8x NVIDIA RTX 6000 Ada 48GB, 10-40 vCores / 64 cores, 60-512 GB memory
  • Enterprise: Up to 8x NVIDIA A100 80 GB, 14-56 vCores / 64 cores, 120-1024 GB memory

Customized solutions

Are you missing your configuration? We also offer custom-made solutions. We have machine learning hardware solutions for:

  • Development Teams
  • Inhouse Company Services
  • Data Centers
  • Cloud Hosting


Optimized for Deep Learning

Our machines are designed and built to perform on Deep Learning applications.

Deep Learning applications require fast memory, high interconnectivity and lots of processing power. Our multi-GPU design achieves the highest currently possible throughput within this form factor.

Well Balanced Components

All of our components have been selected for their energy efficiency, durability and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.

Tested with Real Life Deep Learning Applications

Our hardware was first designed for our own Deep Learning application needs and evolved through years of experience with Deep Learning frameworks and custom PC hardware building.

AIME ML Container Manager

Our servers and workstations ship with the AIME ML Container Manager preinstalled: a comprehensive software stack that enables developers to easily set up AI projects and switch between projects and frameworks.

The necessary libraries and GPU drivers for each Deep Learning framework are bundled in preconfigured Docker containers and can be initiated with just a single command.

The most common frameworks such as TensorFlow, Keras, PyTorch and MXNet are preinstalled and ready to use.

The AIME ML Container Manager makes life easier for developers so they do not have to worry about framework version installation issues.
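The workflow can be illustrated with plain Docker, which this kind of container setup builds on. The commands below are generic Docker usage with NVIDIA's public TensorFlow image as an example of a preconfigured framework container; they are not AIME-specific commands.

```shell
# Pull a framework image that bundles CUDA libraries and the framework
# (tensorflow/tensorflow:latest-gpu is a public Docker Hub image, used
# here only as an example of a preconfigured container).
docker pull tensorflow/tensorflow:latest-gpu

# Start an interactive container with all GPUs passed through.
# --gpus requires the NVIDIA Container Toolkit on the host.
docker run --gpus all -it --rm \
    -v "$PWD":/workspace -w /workspace \
    tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

A container manager wraps steps like these into a single command per project, so the matching driver and framework versions never have to be assembled by hand.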

Benefits for your Deep Learning Projects

Iterate Faster

Waiting unproductively for a result is frustrating. The maximum acceptable waiting time is an overnight run: the machine works through the night, so you can check the results the next morning and keep on working.

Extend Model Complexity

If you have to limit your models because of processing time, you certainly don't have enough processing power. Unleash the extra achievable accuracy to see where the real limits are.

Train with more data, learn faster what works and what does not with the ability to make full iterations, every time.

Explore Without Regrets

Errors happen as part of the development process. They are necessary to learn and refine. It is frustrating when every mistake translates into a measurable amount of money lost to external service plans. Free yourself from the running cost counter and use your own machine without losing performance!

Protect Your Data

Are you working with sensitive data, or data that may only be processed inside your company? Protect your data: instead of uploading it to cloud service providers, process it on your own hardware.

Start Out Of The Box

Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks like TensorFlow, Keras, PyTorch and MXNet. Just log in and start right away with your favourite Deep Learning framework.

Read more: AIME Machine Learning Framework Container Management

Save Money

Cloud services offering comparable performance charge hourly rates of €14 or more. Running 24/7, a single instance at that rate already costs well over €100,000 per year.
Our hardware is available for a fraction of this cost and offers the same performance as cloud services. The total cost of ownership (TCO) is very competitive and can save you thousands of euros in service costs every month.
If you prefer not to buy your own hardware, check out our competitive hosted bare-metal server rental service.
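The order of magnitude is easy to check with a back-of-envelope calculation. Only the €14/hour cloud rate is taken from the text above; the hardware purchase price below is a hypothetical placeholder, not an AIME quote.

```python
# Back-of-envelope cloud-vs-on-premise cost comparison.
# The EUR 14/hour rate is the figure quoted in the text; the
# hardware price is a purely illustrative placeholder.

HOURS_PER_YEAR = 24 * 365  # 8760

cloud_rate_eur_per_hour = 14.0
cloud_cost_per_year = cloud_rate_eur_per_hour * HOURS_PER_YEAR
print(f"Cloud, one instance 24/7 for a year: EUR {cloud_cost_per_year:,.0f}")
# 14 * 8760 = EUR 122,640 per year for a single continuously running instance

hardware_price = 50_000.0  # hypothetical on-premise purchase price
break_even_hours = hardware_price / cloud_rate_eur_per_hour
print(f"Break-even after roughly {break_even_hours:,.0f} hours of 24/7 use")
# 50,000 / 14 is about 3,571 hours, i.e. around five months of continuous use
```

Even under these rough assumptions, continuous workloads amortize owned hardware within months rather than years.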

Read more: CLOUD VS. ON-PREMISE - Total Cost of Ownership Analysis

Contact us

Call us or send us an email if you have any questions. We would be glad to help you find the most suitable compute solution.