Artificial Intelligence Machines

HPC Servers | Workstations | Cloud

Built for Deep Learning & High Performance Computing

Save up to 90% by switching from your current cloud provider to AIME products. Our multi-GPU accelerated HPC servers come with preinstalled frameworks like TensorFlow, Keras, PyTorch and more. Start computing right away!

Trusted by customers all over Europe

Throughout Europe, researchers and engineers at universities, start-ups, large companies, public agencies and national laboratories use AIME products for their work on the development of artificial intelligence.


AIME is a member of the AI Federal Association

8x GPU Servers

4U Server

AIME A8004

The AIME A8004 is the ultimate multi-GPU server, optimized for maximum deep learning training and inference performance and for the highest demands in HPC computing: dual EPYC Genoa or Bergamo CPUs, the fastest PCIe 5.0 bus speeds, up to 90 TB of RAID NVMe SSD storage and 100 GbE network connectivity.

  • GPU: 1-8x NVIDIA H100 NVL 94 GB or
    1-8x NVIDIA H100 80 GB or
    1-8x NVIDIA A100 80 GB or
    1-8x NVIDIA L40S 48 GB or
    1-8x NVIDIA RTX 6000 Ada 48 GB or
    1-8x NVIDIA RTX 5000 Ada 32 GB or
    1-8x NVIDIA RTX A5000 24 GB
  • CPU: Dual AMD EPYC 16-96 Cores (Genoa), 112-128 Cores (Bergamo)
  • RAM: 256-3072 GB DDR5 ECC
  • SSD: 2-30 TB, U.2 NVMe SSD
  • RAID: 2-6x 15 TB NVMe SSD RAID 0/1/5/10/50

4U Server

AIME A8000

With the AIME A8000 you enter peta-FLOPS HPC and Deep Learning performance: up to eight GPUs or accelerators, dual EPYC CPUs, 128 PCIe 4.0 lanes and up to 100 GbE network connectivity. A multi-GPU workhorse built to perform at your data center or co-location.

  • GPU: 1-8x NVIDIA A100 80 GB or
    1-8x NVIDIA L40 48 GB or
    1-8x NVIDIA RTX 6000 Ada 48 GB or
    1-8x NVIDIA RTX A6000 48 GB or
    1-8x NVIDIA RTX 3090 24 GB or
    1-8x NVIDIA RTX A5000 24 GB
  • CPU: Dual AMD EPYC 8-64 Cores (Milan)
  • RAM: 128-2048 GB DDR4 ECC
  • SSD: 2-30 TB, U.2 NVMe SSD
  • RAID: 2-6x 20 TB SATA HDD RAID 0/1/5/10/50 or
    2-4x 15 TB NVMe SSD RAID 0/1/5/10/50

4x GPU Servers

2U Server

AIME A4004

The Deep Learning server AIME A4004 is powered by the latest EPYC Genoa generation, with up to four GPUs or accelerators packed into 2 height units, the fastest PCIe 5.0 bus speeds and up to 100 GbE network connectivity. Built to perform 24/7 at your data center or co-location for highly reliable high performance computing.

  • GPU: 1-4x NVIDIA H100 NVL 94 GB | H100 80 GB | A100 80 GB | L40S 48 GB | RTX 6000 Ada 48 GB | RTX A6000 48 GB | RTX 5000 Ada 32 GB | RTX A5000 24 GB
  • CPU: AMD EPYC 16-96 Cores (Genoa), 112-128 Cores (Bergamo)
  • RAM: 96-1536 GB DDR5 ECC
  • SSD: 1-60 TB, U.2 NVMe SSD (with optional RAID)

2U Server

AIME A4000

The predecessor of the AIME A4004, built on the EPYC Rome and Milan CPU generations with fast PCIe 4.0 bus speeds and up to 100 GbE network connectivity. Your high performance computing node, built to perform reliably 24/7.

  • GPU: 1-4x NVIDIA A100 40 GB / 80 GB or
    1-4x L40S 48 GB | RTX 6000 Ada 48 GB | RTX A6000 48 GB | RTX A5000 24 GB | Tesla V100S 32 GB
  • CPU: AMD EPYC 8-64 Cores (Milan)
  • RAM: 64-1024 GB DDR4 ECC
  • SSD: 1-32 TB, U.2 NVMe SSD

AIME Workstations

Workstation

AIME G400

The perfect workstation for Deep Learning development. Harness the power of multiple GPUs and stay mobile. The workstation can also be rack-mounted with the optional SlideRails kit.

  • GPU: 2x NVIDIA RTX 4090 24 GB or
    1-4x NVIDIA RTX A5000 24 GB or
    1-4x NVIDIA RTX A6000 48 GB or
    1-4x NVIDIA RTX 6000 Ada 48 GB
  • CPU: AMD Threadripper Pro 16-64 Cores (Zen3)
  • RAM: 64-512 GB DDR4 ECC
  • SSD: 2x 1-8 TB, NVMe TLC
  • HDD: 1-4x 16 TB 3.5'' SATA, 7,200 RPM, 512 MB Cache

AIME HPC Cloud

GPU Cloud Server

Rent a Multi-GPU Server

Rent an AIME server, hosted in our AI cloud, on a weekly or monthly basis for as long as you need it.
Get full remote access to a bare-metal high-end multi-GPU server, specifically designed for your compute-intensive tasks.

No setup time, no extra costs, and the option to try before you buy.

  • Basic / Inference: Up to 8x NVIDIA RTX A5000 24 GB, 24 cores, 256 GB memory
  • Professional: Up to 8x NVIDIA RTX 6000 Ada 48 GB, 10-40 vCores / 64 cores, 60-512 GB memory
  • Enterprise: Up to 8x NVIDIA A100 80 GB | H100 NVL 94 GB, 14-56 vCores / 64 cores, 120-1536 GB memory

Customized solutions

Missing your configuration? We also build custom-made solutions. We have machine learning hardware solutions for:

  • Development Teams
  • In-house Company Services
  • Data Centers
  • Cloud Hosting

Features

Optimized for Deep Learning

Our machines are designed and built to excel at Deep Learning applications.

Deep Learning applications require fast memory, high interconnectivity and lots of processing power. Our multi-GPU design reaches the highest currently possible throughput within this form factor.

Well Balanced Components

All of our components have been selected for their energy efficiency, durability and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability.

Tested with Real Life Deep Learning Applications

Our hardware was first designed for our own Deep Learning application needs and has evolved through years of experience with Deep Learning frameworks and custom PC hardware building.

AIME ML Container Manager

for Model Training

Our servers and workstations ship with the AIME ML Container Manager preinstalled, a comprehensive software stack that enables developers to easily set up AI projects and switch between projects and frameworks.

The necessary libraries and GPU drivers for each Deep Learning framework are bundled in preconfigured Docker containers and can be initiated with just a single command.

The most common frameworks such as TensorFlow, Keras, PyTorch and MXNet are preinstalled and ready to use.

The AIME ML Container Manager makes life easier for developers so they do not have to worry about framework version installation issues.
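
For illustration, here is a minimal sanity check you could run inside one of the preconfigured PyTorch containers. This is a generic sketch: it assumes only the bundled framework and GPU drivers, nothing AIME-specific:

  # Run inside a PyTorch container: the framework and the matching CUDA
  # stack are preinstalled, so this works without any manual setup.
  import torch

  print(torch.__version__)          # framework version bundled in the container
  print(torch.version.cuda)         # CUDA version the container was built against
  print(torch.cuda.device_count())  # number of GPUs visible inside the container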

Read more: AIME Machine Learning Framework Container Management

AIME API Server

for Model Inference

Do you have a deep learning model running in your console or Jupyter notebook and would like to make it available to your company, or deploy it to the world? AIME API is the easy and scalable solution to do so.

With AIME API you deploy deep learning models (PyTorch, TensorFlow) through a job queue as a scalable API endpoint capable of serving millions of model inference requests.

Turn a console Python script into a secure and robust web API that acts as your interface to the mobile, browser and desktop world.

The AIME API server solution implements a distributed server architecture: a central AIME API server communicates through a job queue with a scalable GPU compute cluster. The compute cluster can be heterogeneous and distributed across different locations without requiring an interconnect. The model compute jobs are processed by so-called compute workers, which connect to the AIME API server through a secure HTTPS interface. The location of the compute workers is independent of the API server; they only need internet access to request jobs and send back the compute results.
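
To make the flow concrete, the following is a schematic sketch of the worker side. It is not the actual AIME API worker interface: the endpoint paths, field names and token are assumptions for illustration only:

  # Schematic compute worker: polls the central API server's job queue over
  # HTTPS and posts results back. Endpoints and fields are illustrative.
  import requests

  API_SERVER = "https://api.example.com"             # hypothetical API server
  AUTH = {"Authorization": "Bearer <worker-token>"}  # placeholder token

  def run_inference(params):
      # Your model call (PyTorch, TensorFlow, ...) goes here.
      return {"text": "echo: " + params.get("prompt", "")}

  while True:
      # Only outbound HTTPS is needed; the worker never accepts incoming traffic.
      job = requests.get(API_SERVER + "/worker/job_request",
                         headers=AUTH, timeout=60).json()
      if job.get("job_id"):
          result = run_inference(job.get("params", {}))
          requests.post(API_SERVER + "/worker/job_result", headers=AUTH,
                        json={"job_id": job["job_id"], "result": result},
                        timeout=60)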

Read more: AIME API - The Scalable AI Model Inference Solution

Benefits for your Deep Learning Projects

Iterate Faster

Waiting unproductively for a result is frustrating. The longest acceptable waiting time is an overnight run: the machine works while you sleep, and you check the results the next morning and keep on working.

Extend Model Complexity

If you have to limit your models because of processing time, you certainly don't have enough processing power. Unleash the extra possible accuracy and see where the real limits are.

Train with more data, learn faster what works and what does not with the ability to make full iterations, every time.

Explore Without Regrets

Errors happen as part of the development process. They are necessary to learn and refine. It is annoying when every mistake translates into a measurable amount of money lost to external service plans. Free yourself from the running cost meter and run your own machine without losing performance!

Protect Your Data

Are you working with sensitive data, or data that is only allowed to be processed inside your company? Protect your data by not uploading it to cloud service providers; process it on your own hardware instead.

Start Out Of The Box

Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks such as TensorFlow and PyTorch. Just log in and start right away with your favourite Deep Learning framework.
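
"Start right away" can be as simple as the following sketch with the preinstalled PyTorch (the toy model and batch are illustrative, not part of any AIME setup):

  # Pick the GPU if the preinstalled drivers expose one, else fall back to CPU.
  import torch
  import torch.nn as nn

  device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
  model = nn.Linear(128, 10).to(device)        # toy model, illustrative only
  batch = torch.randn(32, 128, device=device)  # dummy input batch
  print(model(batch).shape)                    # torch.Size([32, 10])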

Read more: AIME Machine Learning Framework Container Management

Read more: AIME API - The Scalable AI Model Inference Solution

Save Money

Cloud services offering comparable performance charge hourly rates of €14 or more. These costs can quickly grow to hundreds of thousands of euros per year for just a single instance.
Our hardware is available for a fraction of this cost and offers the same performance as cloud services. The TCO is very competitive and can save you thousands of euros in service costs every month.
If you prefer not to buy your own hardware, check out our competitive hosted bare-metal server rental service.
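
A quick back-of-the-envelope check on the cloud side, assuming the €14 hourly rate above runs around the clock:

  €14/h × 24 h × 365 days ≈ €122,640 per year, per instance

A handful of always-on instances therefore reaches the hundreds of thousands quickly.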

Read more: CLOUD VS. ON-PREMISE - Total Cost of Ownership Analysis

Contact us

Call us or send us an email if you have any questions. We would be glad to help you find the most suitable compute solution.