NVIDIA® A100 GPU Computing Accelerator - 80GB HBM2 - PCIe 4.0 x16 - Passive Cooling

NVIDIA part #: 900-21001-0020-000

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform. The A100 provides up to 20X the performance of the prior generation and can be partitioned into as many as seven GPU instances to adjust dynamically to shifting demands. Available in 40GB and 80GB memory versions, the A100 80GB delivers class-leading memory bandwidth of nearly 2 terabytes per second (TB/s) to run the largest models and datasets.
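The seven-way partitioning mentioned above uses NVIDIA's Multi-Instance GPU (MIG) feature, configured through the `nvidia-smi` tool. A minimal setup sketch is below; the profile ID `19` (the 1g.10gb profile on the A100 80GB) is an assumption and should be confirmed with `nvidia-smi mig -lgip` on the actual system:

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset or reboot to take effect)
sudo nvidia-smi -i 0 -mig 1

# List the MIG profiles this GPU supports, with their IDs and memory sizes
nvidia-smi mig -lgip

# Create seven GPU instances (profile ID 19 assumed to be 1g.10gb here),
# with -C also creating a compute instance inside each GPU instance
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify: each MIG device should now appear with its own UUID
nvidia-smi -L
```

Each resulting MIG instance presents to CUDA applications as an independent device with its own memory slice and compute resources.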

Price: $14,999.00

Main Specifications
Product Series: NVIDIA A100
Host Interface: PCI Express 4.0 x16
GPU Architecture: NVIDIA Ampere
Detailed Specifications
PCIe x16 Interconnect Bandwidth: PCIe Gen4, 64 GB/s
Max Memory Size: 80 GB
Max Memory Bandwidth: 1,935 GB/s
Peak FP64: 9.7 TFLOPS
Peak FP64 Tensor Core: 19.5 TFLOPS
Peak FP32: 19.5 TFLOPS
Peak TF32 Tensor Core: 156 TFLOPS
Peak BFLOAT16 Tensor Core: 312 TFLOPS
Peak FP16 Tensor Core: 312 TFLOPS
Peak INT8 Tensor Core: 624 TOPS
NVIDIA NVLink™ Interconnect Bandwidth: 600 GB/s (via NVLink Bridge, up to 2 GPUs)
Multi-Instance GPU: up to 7 MIG instances at 10 GB each
Dual Slot: Yes
Max Graphics Card Power: 300 W
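The 1,935 GB/s peak memory bandwidth figure follows directly from the HBM2e configuration. The bus width (5,120 bits across five stacks) and per-pin data rate (about 3.02 Gbps) used below are assumptions based on commonly published A100 80GB PCIe specifications, not values stated on this page:

```python
# Sanity-check the peak memory bandwidth spec from assumed HBM2e parameters.
bus_width_bits = 5120    # assumed: five HBM2e stacks, 1024-bit interface each
data_rate_gbps = 3.024   # assumed per-pin transfer rate in Gbit/s

# Bandwidth in GB/s = total bits transferred per second / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # → 1935 GB/s
```

The same arithmetic with the SXM variant's higher pin rate yields that model's 2,039 GB/s figure, which is why the 80GB family is marketed as approaching 2 TB/s.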