
Nvidia to build the ‘world’s fastest AI supercomputer’

2020-10-17 15:00:52

Nvidia and Cineca, an Italian inter-university consortium and major supercomputing centre, have announced plans to build ‘the world’s fastest AI supercomputer.’

The upcoming Leonardo system will use nearly 14,000 Nvidia A100 GPUs for a wide range of high-performance computing tasks. The system’s peak performance is expected to hit 10 FP16 ExaFLOPS.

The supercomputer will be based on Atos’ BullSequana XH2000 supercomputer nodes, each carrying one as-yet-unspecified Intel Xeon processor, four Nvidia A100 GPUs and a Mellanox HDR 200Gb/s InfiniBand card for connectivity. The blades are water-cooled, and each HPC cabinet holds 32 of them.

(Image credit: Atos)

The BullSequana XH2000 architecture is very flexible and can house any CPU and GPU, so for now we can only guess which Intel Xeon processor will be used for Leonardo.

(Image credit: Atos)

Scientists from Italian universities plan to use Leonardo for drug discovery, space exploration and research, and weather modelling.

Traditionally, such applications rely on high-performance simulation and data-analytics workloads that require FP64 precision. But Nvidia says that many HPC tasks today lean on artificial intelligence and machine learning, and for such workloads FP16 precision is enough.

Naturally, such a large number of GPUs can also handle high-resolution visualizations. Nvidia’s A100 GPU was designed primarily for compute, so it supports every kind of precision, including ‘supercomputing’ FP64 and ‘AI’ FP16.
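To see why simulation codes insist on FP64 while AI workloads tolerate FP16, consider a long running sum. The NumPy sketch below is purely illustrative (it is not from the announcement): the FP16 accumulator stalls once each increment falls below half a unit in the last place, while the FP64 accumulator stays accurate.

```python
import numpy as np

# Add 0.1 to an accumulator 10,000 times in FP16 and in FP64.
n = 10_000
acc16 = np.float16(0.0)
acc64 = np.float64(0.0)
for _ in range(n):
    acc16 = np.float16(acc16 + np.float16(0.1))  # rounds to FP16 every step
    acc64 += 0.1                                  # FP64 accumulation

print(acc16)  # stalls far short of 1000 (around 256): increments vanish in rounding
print(acc64)  # ~1000.0, as expected
```

Error-tolerant training workloads absorb this kind of rounding noise, which is why Nvidia can pitch FP16 throughput as the headline AI metric.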

14,000 Nvidia A100 GPUs can achieve up to 8.736 FP16 ExaFLOPS (624 TFLOPS per GPU with structural sparsity enabled × 14,000). Meanwhile, the same number of GPUs provides 135,800 FP64 TFLOPS, slightly below Summit’s 148,600 FP64 TFLOPS.
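To make that arithmetic explicit, here is a small Python sketch reproducing the figures above. The per-GPU numbers are Nvidia’s published A100 peaks (624 sparse FP16 Tensor Core TFLOPS, 9.7 standard FP64 TFLOPS); the 14,000 GPU count is the approximate figure from the announcement.

```python
# Back-of-the-envelope peak-throughput estimate for Leonardo's A100 partition.
NUM_GPUS = 14_000
FP16_TFLOPS_PER_GPU = 624   # FP16 Tensor Core peak with structural sparsity
FP64_TFLOPS_PER_GPU = 9.7   # standard FP64 peak (non-Tensor Core)

fp16_exaflops = NUM_GPUS * FP16_TFLOPS_PER_GPU / 1_000_000  # TFLOPS -> ExaFLOPS
fp64_tflops = NUM_GPUS * FP64_TFLOPS_PER_GPU

print(f"Peak FP16 (sparse): {fp16_exaflops:.3f} ExaFLOPS")  # 8.736
print(f"Peak FP64:          {fp64_tflops:,.0f} TFLOPS")     # 135,800
```

Note that the quoted ‘10 ExaFLOPS’ headline figure sits above this 8.736 ExaFLOPS estimate, so it presumably accounts for more than the A100 partition alone.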

Nvidia believes AI and ML are crucial for today’s supercomputers, so in the case of the A100-powered Leonardo the company prefers to quote peak FP16 performance with structural sparsity enabled.

“With the advent of AI, we have a new metric for measuring supercomputers. As a result, the performance of our supercomputers has exploded, as their computational power has increased exponentially with the introduction of AI,” Ian Buck, VP and GM of Accelerated Computing at Nvidia, told TechRadar Pro.

“Today’s modern supercomputers must be AI supercomputers in order to be a vital tool for science. Nvidia is setting a new trend by combining HPC and AI. Only AI supercomputers can deliver 10 ExaFLOPS of AI performance, featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs.”

Sources: Nvidia press release, Nvidia blog post
