Training deep learning models consists of a large number of numerical calculations that can, to a great extent, be performed in parallel. Since graphics processing units (GPUs) offer far more cores than central processing units (CPUs), GPUs (>10,000 cores) outperform CPUs (≤64 cores) in most deep learning applications by a large factor.
The next level of performance is reached by scaling the calculations across multiple GPUs, which is why AIME servers can be equipped with up to eight high-performance GPUs.
To utilize the full power of the AIME machines, it is important to ensure that all installed GPUs actually participate in the deep learning training.
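A first sanity check is to verify that the training framework can see every installed GPU. Here is a minimal sketch, assuming PyTorch with CUDA support is installed; if the reported count is lower than the number of physical GPUs, the training job cannot use them all:

```python
import torch

# List the CUDA devices PyTorch can see. On a fully utilized
# eight-GPU server, this should report eight devices.
num_gpus = torch.cuda.device_count()
print(f"PyTorch sees {num_gpus} GPU(s):")
for i in range(num_gpus):
    props = torch.cuda.get_device_properties(i)
    print(f"  cuda:{i} - {props.name}, {props.total_memory / 1024**3:.1f} GiB")
```

During a running training, a tool like `nvidia-smi` can additionally show whether each GPU's utilization stays consistently high rather than idling.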