
NVIDIA

NVIDIA H100 80GB H100 SXM / A100 80GB H100 SXM / PCIe / NVLink - NVH100XTCGPUCA-KIT

Product code: UK-NVI-NVH100XTCGPUCA-KIT
UPC: 7.51493E+11
Condition: New
Availability: Vendor Direct Ship | ETA - Request A Quote | Email: sales@hssl.uk | Call Us: +44 330 043 2524

Request A Quote



  • An order-of-magnitude leap for accelerated computing.

     

    Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With the NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, along with a dedicated Transformer Engine to solve trillion-parameter language models. H100's combined technology innovations can speed up large language models by an incredible 30X over the previous generation to deliver industry-leading conversational AI.

     

  • Ready for Enterprise AI?

    Enterprise adoption of AI is now mainstream, and organizations need end-to-end, AI-ready infrastructure that will accelerate them into this new era.

    H100 for mainstream servers comes with a five-year subscription, including enterprise support, to the NVIDIA AI Enterprise software suite, simplifying AI adoption with the highest performance. This ensures organizations have access to the AI frameworks and tools they need to build H100-accelerated AI workflows such as AI chatbots, recommendation engines, vision AI, and more.

  • Securely accelerate workloads from enterprise to exascale.

    Transformational AI training.

    H100 features fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, providing up to 9X faster training over the prior generation for mixture-of-experts (MoE) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; the NVLink Switch System, which accelerates communication for every GPU across nodes; PCIe Gen5; and NVIDIA Magnum IO™ software delivers efficient scalability from small enterprises to massive, unified GPU clusters.
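Those interconnect figures can be put in rough perspective with a back-of-the-envelope sketch. This is a hypothetical illustration, not vendor data: the bandwidths are the peak numbers quoted above, the 10GB buffer size is an arbitrary example, and real transfers add latency and protocol overhead.

```python
# Idealized lower-bound transfer times for a GPU-to-GPU copy at the
# quoted peak bandwidths (real transfers are slower than this floor).
NVLINK_GBPS = 900   # fourth-generation NVLink, GB/s (figure from the text)
PCIE5_GBPS = 128    # PCIe Gen5 x16, GB/s (figure from the text)

def transfer_time_ms(size_gb: float, bandwidth_gbps: float) -> float:
    """Idealized time in milliseconds to move `size_gb` gigabytes."""
    return size_gb / bandwidth_gbps * 1000

# Example: a 10GB gradient buffer exchanged between two GPUs.
size_gb = 10
nvlink_ms = transfer_time_ms(size_gb, NVLINK_GBPS)
pcie_ms = transfer_time_ms(size_gb, PCIE5_GBPS)
print(f"NVLink: {nvlink_ms:.1f} ms, PCIe Gen5: {pcie_ms:.1f} ms "
      f"({pcie_ms / nvlink_ms:.1f}x slower)")
```

At peak rates the same buffer moves in roughly 11 ms over NVLink versus roughly 78 ms over PCIe Gen5, which is where the roughly 7X interconnect advantage comes from.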

    Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.

  • Real-time deep learning inference.

    Up to 30X higher AI inference performance on the largest models.

    AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility to accelerate these networks.

    H100 extends NVIDIA's market-leading inference leadership with several advancements that accelerate inference by up to 30X and deliver the lowest latency. Fourth-generation Tensor Cores speed up all precisions, including FP64, TF32, FP32, FP16, and INT8, and the Transformer Engine uses FP8 and FP16 together to reduce memory usage and increase performance while still maintaining accuracy for large language models.

  • Exascale high-performance computing.

    Up to 7X higher performance for HPC applications.

    The NVIDIA data center platform consistently delivers performance gains beyond Moore's law, and H100's new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world's most important challenges.

    H100 triples the floating-point operations per second (FLOPS) of double-precision Tensor Cores, delivering 60 teraFLOPS of FP64 computing for HPC. AI-fused HPC applications can leverage H100's TF32 precision to achieve one petaFLOP of throughput for single-precision matrix-multiply operations, with zero code changes.
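As a rough illustration of what one petaFLOP of TF32 throughput means for the matrix-multiply operations mentioned above, here is a sketch using the standard 2n³ FLOP count for a square matmul. The matrix size is an arbitrary example, and real kernels run below peak.

```python
# FLOP count of an n x n square matrix multiply is ~2*n^3
# (n^3 multiplications plus n^3 additions).
PETAFLOP = 1e15  # TF32 throughput quoted in the text above, FLOP/s

def matmul_flops(n: int) -> int:
    """Total floating-point operations in an n x n x n matmul."""
    return 2 * n**3

def ideal_time_ms(n: int, flops_per_s: float = PETAFLOP) -> float:
    """Idealized (peak-throughput) matmul time in milliseconds."""
    return matmul_flops(n) / flops_per_s * 1000

# A 16,384 x 16,384 matmul is ~8.8 teraFLOPs of work:
print(f"{matmul_flops(16384) / 1e12:.1f} teraFLOPs, "
      f"~{ideal_time_ms(16384):.2f} ms at peak")
```

Even a matrix this large completes in single-digit milliseconds at a sustained petaFLOP, which is why TF32 acceleration matters for iterative HPC solvers.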

    H100 also features DPX instructions that deliver 7X higher performance over NVIDIA A100 Tensor Core GPUs and 40X speedups over traditional dual-socket CPU-only servers on dynamic programming algorithms such as Smith-Waterman for DNA sequence alignment.
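For readers unfamiliar with the workload, Smith-Waterman is a classic dynamic-programming recurrence. The minimal pure-Python reference below (illustrative scoring values, not a DPX implementation) shows the integer max/add inner loop that DPX instructions accelerate in hardware:

```python
def smith_waterman(a: str, b: str, match: int = 2,
                   mismatch: int = -1, gap: int = -2) -> int:
    """Best local-alignment score between sequences a and b.

    Classic O(len(a) * len(b)) dynamic program with a linear gap
    penalty; the scoring values here are illustrative only.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # score matrix, zero-padded
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at zero.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```

Every cell is a handful of adds and a max over four candidates, repeated across the whole score matrix; that max/add pattern is exactly what DPX exposes as fused instructions.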

  • Accelerated data analytics.

    Data analytics often consumes the majority of time in AI application development. Since large datasets are scattered across multiple servers, scale-out solutions with commodity CPU-only servers get bogged down by a lack of scalable computing performance.

    Accelerated servers with H100 deliver the compute power, along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch, to tackle data analytics with high performance and scale to support massive datasets. Combined with NVIDIA Quantum-2 InfiniBand, Magnum IO software, GPU-accelerated Spark 3.0, and NVIDIA RAPIDS™, the NVIDIA data center platform is uniquely able to accelerate these huge workloads with unparalleled levels of performance and efficiency.

  • Enterprise-ready utilization.

    IT managers seek to maximize utilization (both peak and average) of compute resources in the data center. They often employ dynamic reconfiguration of compute to right-size resources for the workloads in use.

    Second-generation Multi-Instance GPU (MIG) in H100 maximizes the utilization of each GPU by securely partitioning it into as many as seven separate instances. With confidential computing support, H100 allows secure end-to-end, multi-tenant usage, ideal for cloud service provider (CSP) environments.

    H100 with MIG lets infrastructure managers standardize their GPU-accelerated infrastructure while retaining the flexibility to provision GPU resources with greater granularity, securely providing developers the right amount of accelerated compute and optimizing usage of all their GPU resources.

  • Built-in confidential computing.

    Today’s confidential computing solutions are CPU-based, which is too limited for compute-intensive workloads like AI and HPC. NVIDIA Confidential Computing is a built-in security feature of the NVIDIA Hopper™ architecture that makes H100 the world’s first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs. It creates a hardware-based trusted execution environment (TEE) that secures and isolates the entire workload running on a single H100 GPU, multiple H100 GPUs within a node, or individual MIG instances. GPU-accelerated applications can run unchanged within the TEE and don’t have to be partitioned. Users can combine the power of NVIDIA software for AI and HPC with the security of a hardware root of trust offered by NVIDIA Confidential Computing.

  • Unparalleled performance for large-scale AI and HPC.

    The Hopper Tensor Core GPU will power the NVIDIA Grace Hopper CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC. The NVIDIA Grace CPU leverages the flexibility of the Arm® architecture to create a CPU and server architecture designed from the ground up for accelerated computing. The Hopper GPU is paired with the Grace CPU using NVIDIA’s ultra-fast chip-to-chip interconnect, delivering 900GB/s of bandwidth, 7X faster than PCIe Gen5. This innovative design will deliver up to 30X higher aggregate system memory bandwidth to the GPU compared to today’s fastest servers and up to 10X higher performance for applications running terabytes of data.

Supercharge Large Language Model Inference

For LLMs of up to 175 billion parameters, the PCIe-based H100 NVL with NVLink bridge uses the Transformer Engine, NVLink, and 188GB of HBM3 memory to provide optimum performance and easy scaling across any data center, bringing LLMs to the mainstream. Servers equipped with H100 NVL GPUs increase GPT-175B model performance up to 12X over NVIDIA DGX™ A100 systems while maintaining low latency in power-constrained data center environments.
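The 188GB figure can be put in context with simple weight-memory arithmetic. This is a sketch only: the 175-billion-parameter count comes from the text above, the byte-widths are the standard precision sizes, gigabytes are decimal, and activations, KV cache, and runtime overhead come on top of the weights.

```python
# Weight storage for a large language model scales with parameter count
# times bytes per parameter (weights only; runtime memory is higher).
def weight_gb(params_billion: float, bytes_per_param: int) -> float:
    """Decimal gigabytes needed to store the model weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

P = 175  # billion parameters, GPT-175B-class model from the text above
print(f"FP32: {weight_gb(P, 4):.0f} GB")  # 4 bytes per parameter
print(f"FP16: {weight_gb(P, 2):.0f} GB")  # 2 bytes per parameter
print(f"FP8:  {weight_gb(P, 1):.0f} GB")  # 1 byte per parameter
```

In FP32 the weights alone need 700GB; at the FP8 precision the Transformer Engine targets, they shrink to 175GB, which is what brings a 175B-parameter model within reach of a single 188GB H100 NVL pair.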

Datasheet

Product Specifications

| Specification | H100 SXM | H100 PCIe | H100 NVL |
|---|---|---|---|
| FP64 | 34 teraFLOPS | 26 teraFLOPS | 68 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| TF32 Tensor Core* | 989 teraFLOPS | 756 teraFLOPS | 1,979 teraFLOPS |
| BFLOAT16 Tensor Core* | 1,979 teraFLOPS | 1,513 teraFLOPS | 3,958 teraFLOPS |
| FP16 Tensor Core* | 1,979 teraFLOPS | 1,513 teraFLOPS | 3,958 teraFLOPS |
| FP8 Tensor Core* | 3,958 teraFLOPS | 3,026 teraFLOPS | 7,916 teraFLOPS |
| INT8 Tensor Core* | 3,958 TOPS | 3,026 TOPS | 7,916 TOPS |
| GPU memory | 80GB | 80GB | 188GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300-350W (configurable) | 2x 350-400W (configurable) |
| Multi-Instance GPU | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
| Form factor | SXM | PCIe, dual-slot air-cooled | 2x PCIe, dual-slot air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs | Partner and NVIDIA-Certified Systems with 2–4 pairs |
| NVIDIA AI Enterprise | Add-on | Included | Included |

* Tensor Core figures shown with sparsity.
(Manufacturer standard warranty.) Not sure which product you need, or can't find your specific product/service part number? Call us on +44 330 043 2524, or use the quote request page.

We can ship to virtually any address in the world. Note that there are restrictions on some products, and some products cannot be shipped to international destinations.

Please also note that the shipping rates for many items we sell are weight-based. The weight of any such item can be found on its detail page. To reflect the policies of the shipping companies we use, all weights will be rounded up to the next full pound.

Manufacturer Warranties

Wide Product Range · Trusted Quality · Scalable Growth · Secure & Efficient · Expert Guidance

Driven by Innovation and Integrity

We strive to deliver technology solutions that set new standards in quality and reliability. Guided by innovation and integrity, our focus is on creating lasting value for our customers while building trust through every product and service we provide.

Exceptional Quality

Trusted by 25k+ Happy Customers

Premium Solutions, Exceptional Performance

We offer a carefully curated range of top-tier products designed to deliver unmatched performance and reliability. Each solution is selected to ensure your business stays efficient, secure, and ahead of the competition.

HSSL © 2026. All Rights Reserved.