
NVIDIA A100 80 GB - GPU computing processor - A100 Tensor Core - 80 GB

Mfg # NVA100TCGPU80NC-KIT CDW # 7306203 | UNSPSC 43201401

Quick tech specs

  • GPU computing processor
  • 80 GB HBM2E
  • fanless
  • A100 Tensor Core
  • PCIe 4.0 x16

Know your gear

A100 accelerates workloads big and small. Whether using MIG to partition an A100 GPU into smaller instances, or NVLink to connect multiple GPUs to accelerate large-scale workloads, the A100 easily handles different-sized application needs, from the smallest job to the biggest multi-node workload.
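
For illustration only, here is a minimal Python sketch of how a single process might be pinned to one MIG slice. It assumes a driver with MIG enabled and PyTorch installed; the MIG UUID shown is a placeholder (list the real identifiers with `nvidia-smi -L`).

```python
import os

# Placeholder MIG instance identifier; substitute a real UUID reported by `nvidia-smi -L`.
MIG_UUID = "MIG-00000000-0000-0000-0000-000000000000"

# Restrict this process to a single MIG slice before any CUDA context is created.
os.environ["CUDA_VISIBLE_DEVICES"] = MIG_UUID

import torch  # imported after setting the env var so the restriction takes effect

if torch.cuda.is_available():
    # The MIG slice appears to the framework as an ordinary single CUDA device.
    print("Visible device:", torch.cuda.get_device_name(0))
```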

First introduced in the NVIDIA Volta architecture, NVIDIA Tensor Core technology has brought dramatic speedups to AI training and inference, cutting training times from weeks to hours and delivering massive acceleration for inference. The NVIDIA Ampere architecture builds on these innovations by providing up to 20x higher FLOPS for AI. It does so by improving the performance of existing precisions and bringing new precisions - TF32, INT8, and FP64 - that accelerate and simplify AI adoption and extend the power of NVIDIA Tensor Cores to HPC.
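
As a hedged illustration of using these precisions from a framework, the short PyTorch sketch below opts FP32 matrix math into TF32 Tensor Core execution on an Ampere-class GPU such as the A100. The flags are standard PyTorch settings, but their defaults have changed across releases, so setting them explicitly is the safest assumption.

```python
import torch

# Allow FP32 matmuls and cuDNN convolutions to run on TF32 Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b  # executed via TF32 Tensor Cores when the flags above are enabled
```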

As AI networks and datasets continue to expand exponentially, their computing appetite grows with them. Lower-precision math has brought huge performance speedups, but it has historically required some code changes. A100 brings a new precision, TF32, which works just like FP32 while providing up to 20x higher FLOPS for AI without requiring any code change. And NVIDIA's automatic mixed precision feature enables a further 2x boost to performance with just one additional line of code using FP16 precision. A100 Tensor Cores also include support for BFLOAT16, INT8, and INT4 precision, making A100 an incredibly versatile accelerator for both AI training and inference.
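
The following sketch shows the kind of minimal change automatic mixed precision asks for, assuming PyTorch on a CUDA-capable GPU; the model, optimizer, and data are hypothetical stand-ins.

```python
import torch

# Hypothetical tiny model and data, purely for illustration.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss so FP16 gradients stay representable

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops (matmul, conv, ...) in FP16 on Tensor Cores
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```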

A100 brings the power of Tensor Cores to HPC, representing the biggest milestone since the introduction of double-precision GPU computing. The third generation of Tensor Cores in A100 enables matrix operations in full, IEEE-compliant, FP64 precision. Through enhancements in NVIDIA CUDA-X math libraries, a range of HPC applications that need double-precision math can now see a boost of up to 2.5x in performance and efficiency compared to prior generations of GPUs.
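
A brief, hedged example of what this looks like in practice: an ordinary double-precision matrix multiply in PyTorch, which cuBLAS can route to the A100's FP64 Tensor Cores with no application code change. The matrix sizes and timing harness are illustrative only.

```python
import torch

# Double-precision GEMM; on A100, the FP64 work can be dispatched to Tensor Cores
# by the math libraries, so using float64 tensors is the only requirement here.
a = torch.randn(8192, 8192, dtype=torch.float64, device="cuda")
b = torch.randn(8192, 8192, dtype=torch.float64, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
c = a @ b
end.record()
torch.cuda.synchronize()
print(f"FP64 GEMM time: {start.elapsed_time(end):.1f} ms")
```
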
Availability: Item Backordered

Price: $22,422.99