
Efficient deep learning training - using the practical example of training the ResNet50 model on the ImageNet data set

To achieve good results with the shortest possible training time when training deep learning models, it is essential to find suitable values for training parameters such as learning rate and batch size. The search for suitable values depends on the model to be trained, the amount of data used, and also the available hardware, and can therefore prove quite time-consuming, since a single training run can take very long (up to several days), depending on the model and the training data used.
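As a rough, hedged illustration of the parameters in question, the following minimal PyTorch sketch sets up ResNet-50 with SGD and scales the learning rate linearly with the batch size (a common heuristic; the concrete values are placeholders, not the ones determined in the article):

```python
import torch
import torchvision

# Placeholder values: the batch size is limited by the available GPU memory,
# and the base learning rate of 0.1 per 256 samples is a common heuristic,
# not a result from the article.
batch_size = 512
base_lr = 0.1
lr = base_lr * batch_size / 256   # linear learning-rate scaling

model = torchvision.models.resnet50(num_classes=1000)
optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                            momentum=0.9, weight_decay=1e-4)
# Step-wise learning-rate decay, e.g. by a factor of 10 every 30 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```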


Multi GPU training with PyTorch

Training deep learning models consists of a large amount of numerical calculations that can, to a great extent, be performed in parallel. Since graphics processing units (GPUs) offer far more cores than central processing units (CPUs), GPUs (>10,000 cores) outperform CPUs (≤64 cores) in most deep learning applications by a large factor. The next level of performance is reached by scaling the calculations across multiple GPUs, which is why AIME servers can be equipped with up to eight high-performance GPUs. To utilize the full power of the AIME machines, it is important to ensure that all installed GPUs participate effectively in the deep learning training.
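As a minimal, hedged sketch of what data-parallel multi-GPU training can look like in PyTorch (using DistributedDataParallel with one process per GPU; the script name and GPU count are placeholders, not taken from the article):

```python
import os

import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU, launched e.g. with:
    #   torchrun --nproc_per_node=8 train_ddp.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Replicate the model on each GPU; DDP averages the gradients
    # across all processes during the backward pass.
    model = torchvision.models.resnet50().cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # A real training loop would use a DataLoader with a DistributedSampler
    # so that every GPU sees a different shard of the training data.

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```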


Deep Learning GPU Benchmarks 2021

An overview of current high-end GPUs and compute accelerators best suited for deep and machine learning tasks, including the latest offerings from NVIDIA: the Ampere GPU generation. The performance of multi-GPU setups, such as a quad RTX 3090 configuration, is also evaluated. Although we only tested a small selection of all available GPUs, we think we covered all GPUs that are currently best suited for deep learning training and development, due to their compute and memory capabilities and their compatibility with current deep learning frameworks.
Tags: Machine Learning, Deep Learning, Benchmarks, AIME A4000, NVIDIA A100, RTX 3080, RTX 3080 Ti, RTX A5000, RTX A6000


Convenient PyTorch and TensorFlow development on AIME GPU Servers

Modern AI development, especially the training of deep learning models, quickly reaches the performance limits of standard PC and notebook hardware. AIME GPU servers, with their optimized multi-GPU hardware, are the perfect solution for this task and enable the fastest possible turnaround times. Development on a local machine is straightforward, but how can the power of an AIME GPU server be used conveniently? To run your code with the full performance of the AIME GPU servers, you first need to get the code and data onto the storage of the remote machine. The most basic way to achieve this is to copy your finished code and data to the remote machine.
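A minimal sketch of such a transfer, here wrapped in Python and using rsync over SSH; the user name, host name and paths are placeholders, and the article may well use a different tool:

```python
import subprocess

# Hypothetical example: synchronize a local project directory to an AIME server
# via rsync over SSH. "user@aime-server" and the paths are placeholders.
subprocess.run(
    [
        "rsync", "-avz",        # archive mode, verbose, compressed transfer
        "--exclude", ".git",    # skip files not needed for training
        "./my_project/",        # local source (trailing slash: copy contents)
        "user@aime-server:/home/user/my_project/",
    ],
    check=True,                 # raise an error if the transfer fails
)
```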


Deep Learning GPU Benchmarks 2020

An overview of current high-end GPUs and compute accelerators best suited for deep and machine learning tasks, including the latest offerings from NVIDIA: the Ampere GPU generation. The performance of multi-GPU setups, such as a quad RTX 3090 configuration, is also evaluated.
Tags: Deep Learning, RTX 3090, RTX 3080, NVIDIA A100, NVIDIA RTX A6000, NVIDIA Quadro RTX 6000, Quadro RTX 8000, Multi GPU Server, Machine Learning, Benchmarks


How to Setup a Remote Desktop Connection to an AIME-Server

A description of how to establish a Remote Desktop Connection to our AIME servers. The setup is demonstrated via the command line as well as with several graphical clients for Linux, Windows and macOS.

