AMD unveils Radeon Instinct MI60 and MI50 accelerators for AI and HPC

AMD on Tuesday unveiled the Radeon Instinct MI60 and MI50, a pair of accelerators designed for next-generation deep learning, HPC, cloud computing and rendering applications. AMD says they are the world’s first 7nm data center GPUs.

[Image: AMD Radeon Instinct MI60]

The MI60 delivers up to 7.4 TFLOPS of peak FP64 performance, which should allow scientists and researchers to more efficiently process HPC applications. The use cases, AMD notes, span a range of industries including life sciences, energy, finance, automotive, aerospace and defense. The MI50 delivers up to 6.7 TFLOPS of FP64 peak performance.

The accelerators provide flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities for dynamic workloads, such as training complex neural networks or running inference against those trained networks.
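
To make the mixed-precision idea concrete, here is a minimal, illustrative PyTorch sketch of FP16 inference; it is not AMD sample code. The toy model and tensor shapes are invented for the example, and it assumes a GPU-enabled PyTorch build (ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda interface).

```python
# Illustrative mixed-precision (FP16) inference sketch -- not AMD sample code.
# Assumes a GPU-enabled PyTorch build; ROCm builds surface AMD GPUs via torch.cuda.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A made-up toy network standing in for a trained model.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)
model.eval()

if device.type == "cuda":
    model.half()                                   # cast weights to FP16 in place
    x = torch.randn(8, 1024, device=device, dtype=torch.float16)
else:
    x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    logits = model(x)                              # matrix math runs in FP16 on the accelerator
    probs = logits.float().softmax(dim=-1)         # reduce in FP32 for numerical stability

print(probs.shape)
```

Training flows look similar, but typically keep an FP32 master copy of the weights and scale the loss to preserve small gradients, which is where flexible FP16/FP32 hardware paths matter.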

“Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads,” David Wang, SVP of engineering for the Radeon Technologies Group at AMD, said in a statement.

The accelerators feature two Infinity Fabric Links per GPU, delivering up to 200 GB/s of peer-to-peer bandwidth, which AMD says is up to 6X faster than PCIe Gen 3 interconnect speeds. This also enables the connection of up to four GPUs in a hive ring configuration.

AMD also says they are the first GPUs capable of supporting the next-generation PCIe 4.0 interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies.

The MI60 provides 32GB of HBM2 (second-generation High Bandwidth Memory) with error-correcting code (ECC) support, while the MI50 provides 16GB of HBM2 ECC memory.

The MI60 is expected to ship to data center customers by the end of 2018, while the MI50 is expected to begin shipping by the end of Q1 2019.

AMD also announced a new version of ROCm, its open-source, programming-language-independent HPC/hyperscale-class platform for GPU computing. ROCm 2.0 supports the new Radeon Instinct accelerators and provides updated math libraries for the new optimized deep learning operations (DLOPS). It also offers support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu, and it supports the latest versions of popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others. The ROCm 2.0 software platform is expected to be available by the end of 2018.
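
As a rough sketch of what that framework support looks like from the user's side, the snippet below checks for an available accelerator from a ROCm-enabled PyTorch build and runs a quick matrix multiply on it. The assumption that AMD devices appear through the standard torch.cuda interface reflects how ROCm builds of PyTorch are packaged; nothing here comes from the announcement itself.

```python
# Hypothetical device check on a ROCm-enabled PyTorch install (not from AMD docs).
import torch

if torch.cuda.is_available():
    # On ROCm builds, torch.cuda enumerates AMD GPUs such as a Radeon Instinct card.
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))

    # Run a quick matrix multiply on the first device to confirm it works.
    a = torch.randn(2048, 2048, device="cuda:0")
    b = torch.randn(2048, 2048, device="cuda:0")
    c = a @ b
    torch.cuda.synchronize()
    print("matmul ok:", c.shape)
else:
    print("No ROCm/CUDA device visible to this PyTorch build.")
```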
