HPC data centers need to support the ever-growing demands of scientists and researchers while staying within a tight budget. The old approach of deploying large numbers of commodity compute nodes incurs vast interconnect overhead that substantially increases costs without proportionally increasing data center performance.
The NVIDIA Tesla P100 is the most advanced data center GPU accelerator ever built, designed to boost throughput and save money for HPC and hyperscale data centers. Powered by the new NVIDIA Pascal™ architecture, Tesla P100 for PCIe-based servers enables a single node to replace up to half a rack of commodity CPU nodes by delivering lightning-fast performance across a broad range of HPC applications.
| Specification | Tesla P100 for PCIe |
|---|---|
| Product Series | Tesla P100 |
| Core Type | NVIDIA CUDA |
| Host Interface | PCI Express 3.0 x16 |
| CUDA Cores | 3584 |
| PCIe x16 Interconnect Bandwidth | 32 GB/s |
| CoWoS HBM2 Stacked Memory Capacity | 16 GB |
| CoWoS HBM2 Stacked Memory Bandwidth | 720 GB/s |
| Peak Double-Precision Performance | 4.7 TFLOPS |
| Peak Single-Precision Performance | 9.3 TFLOPS |
| Peak Half-Precision Performance | 18.7 TFLOPS |
| NVIDIA CUDA™ Technology | Yes |
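The peak-throughput figures above follow directly from the core count and clock speed: each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle at single precision, with the Pascal GP100 chip running double precision at half the FP32 rate and half precision at twice it. A minimal sketch of the arithmetic, assuming the publicly documented ~1303 MHz boost clock of the PCIe card (a figure not listed in the table above):

```python
# Rough derivation of the Tesla P100 (PCIe) peak-throughput figures.
# Assumption: ~1303 MHz boost clock -- published for the PCIe card,
# but not part of the spec table above.
CUDA_CORES = 3584
BOOST_CLOCK_HZ = 1.303e9
OPS_PER_FMA = 2  # a fused multiply-add counts as two floating-point ops

fp32_tflops = CUDA_CORES * OPS_PER_FMA * BOOST_CLOCK_HZ / 1e12
fp64_tflops = fp32_tflops / 2   # GP100 runs FP64 at half the FP32 rate
fp16_tflops = fp32_tflops * 2   # and FP16 at twice the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~9.3
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~4.7
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~18.7
```

The results match the table's 9.3 / 4.7 / 18.7 TFLOPS figures, which is why the FP16 and FP64 numbers are exactly 2× and 0.5× the FP32 number.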