Nvidia’s A100 GPUs keep rolling into the cloud as Amazon Web Services becomes the latest public cloud vendor to adopt the technology. The A100 GPUs are powering AWS’ refreshed P-series instances, which can be harnessed to create EC2 “UltraClusters” spanning 4,000+ GPUs.
Announced today, AWS’ new P4d instances are backed by eight A100 “Ampere” GPUs, connected by NVLink, along with 48 Intel Cascade Lake processor cores (96 vCPUs). The new instances are the first with 400 Gbps networking, according to AWS, leveraging Elastic Fabric Adapter (EFA) and Nvidia GPUDirect RDMA (remote direct memory access).
Each 8-GPU instance delivers up to 2.5 petaflops of 16-bit tensor performance (77.6 teraflops at traditional 64-bit precision) and 320 GB of high-bandwidth GPU memory, along with 1.1 terabytes of instance memory and 8 terabytes of local NVMe-based SSD storage capable of up to 16 gigabytes per second of read throughput.
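As a sanity check, those headline figures follow directly from Nvidia’s published per-GPU A100 (40 GB) specs multiplied across the instance’s eight GPUs. The short sketch below, in plain Python with the per-GPU values hardcoded as assumptions from Nvidia’s datasheet rather than AWS-published numbers, reproduces them.

```python
# Rough sanity check of the P4d per-instance figures from per-GPU A100 specs.
# The per-GPU values are assumptions taken from Nvidia's public A100 (40 GB)
# datasheet, used here only for illustration.
A100_FP16_TENSOR_TFLOPS = 312   # dense FP16/BF16 Tensor Core throughput
A100_FP64_TFLOPS = 9.7          # standard (non-Tensor Core) FP64 throughput
A100_HBM_GB = 40                # HBM2 capacity per GPU
GPUS_PER_INSTANCE = 8           # p4d.24xlarge

print(f"FP16 tensor: {A100_FP16_TENSOR_TFLOPS * GPUS_PER_INSTANCE / 1000:.1f} petaflops")  # ~2.5
print(f"FP64:        {A100_FP64_TFLOPS * GPUS_PER_INSTANCE:.1f} teraflops")                # 77.6
print(f"GPU memory:  {A100_HBM_GB * GPUS_PER_INSTANCE} GB")                                # 320
```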
According to AWS, compared with the previous-generation V100-based P3 instances, P4d delivers 2.5x the deep learning performance, twice the double-precision floating point performance, 2.5x the memory, 16x the network bandwidth, and 4x the local NVMe-based SSD storage, while reducing cost by up to 60 percent.
The 400 Gbps networking is provided by “four 100 Gbps network connections over a dedicated, petabit-scale, non-blocking network fabric, accessible via EFA.” AWS Chief Evangelist Jeff Barr describes the solution, which supports 19 Gbps of EBS burst bandwidth at up to 80,000 IOPS, as custom-designed for the P4 instances.
Elastic Fabric Adapter with Nvidia GPUDirect RDMA enables high-throughput, low-latency GPU-to-GPU communication between instances, bypassing the CPU to scale out distributed ML training and HPC workloads, AWS said.
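In practice, applications do not program EFA directly; collective libraries such as NCCL carry the GPU-to-GPU traffic over the fabric, on AWS typically through the aws-ofi-nccl libfabric plugin. The sketch below is a minimal multi-node PyTorch data-parallel setup that leans on that path; the environment variables in the launch comment are those commonly set for EFA and GPUDirect RDMA and are shown as an illustrative assumption, not a definitive recipe.

```python
# Minimal multi-node data-parallel sketch: NCCL handles the gradient
# all-reduce, and on P4d that traffic can ride over EFA with GPUDirect RDMA
# (via the aws-ofi-nccl plugin). Launch with torchrun on each node, e.g.:
#   FI_PROVIDER=efa FI_EFA_USE_DEVICE_RDMA=1 \
#   torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d \
#            --rdzv_endpoint=<head-node>:29500 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                            # toy training loop
        x = torch.randn(64, 1024, device=f"cuda:{local_rank}")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()                            # all-reduce over NCCL (EFA between nodes)
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```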
P4d instances can be configured into EC2 UltraClusters that can scale to “4,000 or more” GPUs, and connect into AWS services such as S3, Amazon FSx for Lustre, and AWS ParallelCluster.
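The wiring of an UltraCluster is handled on AWS’ side, but the building blocks are visible in the EC2 API: P4d instances go into a cluster placement group with EFA-enabled network interfaces. A hypothetical boto3 sketch follows; the AMI, subnet, security group, and key pair identifiers are placeholders, and a production cluster (multiple EFA interfaces, FSx for Lustre mounts, ParallelCluster definitions) would look different.

```python
# Hypothetical boto3 sketch: launch two p4d.24xlarge instances into a
# cluster placement group with an EFA-enabled network interface.
# All resource identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_placement_group(GroupName="p4d-demo", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # e.g. a Deep Learning AMI (placeholder)
    InstanceType="p4d.24xlarge",
    MinCount=2,
    MaxCount=2,
    KeyName="my-key-pair",                  # placeholder
    Placement={"GroupName": "p4d-demo"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder
        "Groups": ["sg-0123456789abcdef0"],       # placeholder
        "InterfaceType": "efa",             # request an Elastic Fabric Adapter
    }],
)
```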
“These clusters can take on your toughest supercomputer-scale machine learning and HPC workloads: natural language processing, object detection & classification, scene understanding, seismic analysis, weather forecasting, financial modeling, and so forth,” said Barr in a blog post.
Use cases run the gamut from medical to automotive to advanced analytics, reflected in a customer roster that includes GE Healthcare, Toyota Research Institute (TRI), OmniSci and Zenotech Ltd.
“[At TRI,] we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, technical lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”
The new P4 instances are currently available in one size: p4d.24xlarge.
Instances can be accessed in the US East (N. Virginia) and US West (Oregon) regions, and can be purchased on-demand, as spot instances, as reserved instances, as dedicated hosts, or through AWS Savings Plans.
On-demand pricing starts at $32.77 per hour and drops as low as $11.57 per hour for three-year reserved instances.
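For a rough sense of scale, those list prices come out to about $4.10 per GPU-hour on demand and roughly $1.45 per GPU-hour at the three-year reserved rate, a discount of around 65 percent; the arithmetic below simply divides the published hourly figures across the instance’s eight GPUs.

```python
# Quick arithmetic on the published p4d.24xlarge list prices.
ON_DEMAND_HOURLY = 32.77      # USD per instance-hour, on-demand
RESERVED_3YR_HOURLY = 11.57   # USD per instance-hour, 3-year reserved
GPUS = 8

print(f"On-demand per GPU-hour: ${ON_DEMAND_HOURLY / GPUS:.2f}")       # ~$4.10
print(f"Reserved per GPU-hour:  ${RESERVED_3YR_HOURLY / GPUS:.2f}")    # ~$1.45
print(f"3-year reserved discount: {1 - RESERVED_3YR_HOURLY / ON_DEMAND_HOURLY:.0%}")  # ~65%
```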
The P4 debut marks a decade of AWS providing GPU-equipped instances, starting with the Nvidia Tesla M2050 “Fermi” GPGPUs. As GPUs have become ubiquitous for demanding datacenter workloads, the cadence for new launches has contracted. As detailed by Barr, “the first-generation Cluster GPU instances were launched in late 2010, followed by the G2 (2013), P2 (2016), P3 (2017), G3 (2017), P3dn (2018), and G4 (2019) instances.”
AWS is the latest major public cloud vendor to embrace Nvidia’s A100 “Ampere” GPUs. Google Cloud introduced its A2 family, based on A100 GPUs, in July, less than two months after Ampere’s arrival. Microsoft Azure launched its A100-powered NDv4 instances in preview mode in August. The following month, Oracle Cloud announced general availability of GPU4.8, its bare metal A100-fueled instance.