Dell NVIDIA A100 40GB HBM2 PCIe G4 x16 Model P1001B CUDA Core GPU 0RH1X7 RH1X7
In-store pickup available in Dallas, TX
Specifications
| Manufacturer Part Number | RH1X7 |
| Manufacturer | Dell NVIDIA |
| Condition | Refurbished |
| Standard Warranty | 90 Day Replacement Warranty |
| Item Category | Graphics Cards |
| Model | P1001B |
| Interface | PCIe 4.0 x16 |
| Memory Total Capacity | 40GB |
| Memory Type | HBM2 |
Description
The Most Powerful Compute Platform for Every Workload
The NVIDIA A100 Tensor Core GPU delivers acceleration at every scale, powering the highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC). As the engine of the NVIDIA data center platform, the A100 offers up to 20 times the performance of the previous NVIDIA Volta™ generation. It can scale up for demanding workloads or be partitioned into as many as seven isolated GPU instances using Multi-Instance GPU (MIG) technology, giving data centers a flexible platform that adapts to shifting workload requirements.
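As a rough illustration of how the card and its MIG state can be inspected from software, the following is a minimal sketch using the nvidia-ml-py (pynvml) bindings. It assumes an installed NVIDIA driver and that the A100 is visible as device index 0; the printed values are examples only.

```python
# Minimal sketch: query the A100 and its MIG mode via nvidia-ml-py (pynvml).
# Assumes the NVIDIA driver (NVML) is present and this card is GPU index 0.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU in the system
    name = pynvml.nvmlDeviceGetName(handle)              # e.g. "NVIDIA A100-PCIE-40GB"
    if isinstance(name, bytes):                          # older pynvml returns bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # total/free/used in bytes
    print(f"{name}: {mem.total / 1024**3:.0f} GiB of HBM2 on board")

    # MIG mode: 0 = disabled, 1 = enabled (current vs. pending after reset)
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"MIG mode: current={current}, pending={pending}")
    except pynvml.NVMLError:
        print("MIG mode not reported for this device")
finally:
    pynvml.nvmlShutdown()
```

Creating or destroying the MIG instances themselves is an administrative step, typically performed on the host with the driver's `nvidia-smi mig` tooling rather than from application code.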
A100 Tensor Cores support a wide range of math precisions (FP64, TF32, BF16, FP16, and INT8), providing a single accelerator for every workload. The A100 80GB variant doubles GPU memory and delivers memory bandwidth of 2 terabytes per second (TB/s) to accelerate the largest models and datasets; this listing is for the 40GB HBM2 PCIe model.
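To show what using those precisions looks like in practice, here is a minimal sketch in PyTorch, assuming a build with CUDA support and this GPU as the current device; the matrix sizes are illustrative, not a benchmark.

```python
# Minimal sketch: exercising the A100's reduced-precision Tensor Core paths
# from PyTorch. Assumes a CUDA-enabled PyTorch build; sizes are illustrative.
import torch

assert torch.cuda.is_available(), "requires a CUDA-capable GPU"
device = torch.device("cuda")

# TF32 is the default Tensor Core path for FP32 matmuls on Ampere GPUs,
# but it can be toggled explicitly.
torch.backends.cuda.matmul.allow_tf32 = True

x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)

# FP32 inputs; the matmul runs on Tensor Cores using TF32 math.
y32 = x @ w

# Autocast runs the same matmul in bfloat16, reducing memory traffic.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y16 = x @ w

print(y32.dtype, y16.dtype)  # torch.float32 torch.bfloat16
```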
The A100 is part of NVIDIA’s comprehensive data center solution, which includes components across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC™ catalog. As the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to achieve real-world results and deploy solutions at scale.