Rent H200
H200 141GB · live marketplace — from $2.95 / hr

Rent an H200
by the minute.
Not the month.

Rent an H200 when memory capacity or bandwidth is the bottleneck. 141 GB HBM3e at 4,800 GB/s — fits Llama-3 405B INT4 across 4 cards instead of 8, runs 70B FP16 single-card with a 64-request KV cache, serves DeepSeek-V3 at native 1M-token contexts without paging to host memory. The serving and long-context card of 2026. Billed per minute, paid in BTC, USDT/USDC, or CLORE.

Per-minute billing · SSH + Docker + Jupyter · Spot & on-demand · 12 regions
DEPLOYMENT MANIFEST REQ-H200-2026-04-29-#82144
01 INSTANCE CONFIGURED
GPU: NVIDIA H200 SXM5
COUNT: ×8 · NVLink
VRAM: 1,128 GB HBM3e
FABRIC: NVSwitch · 900 GB/s
02 REGION & SLA VERIFIED
REGION: US-EAST-2 · Tier III+
UPTIME: 99.95% SLA
NETWORK: 10 Gbit · <2 ms
ATTEST.: SOC 2 · ISO 27001
03 COMMERCIAL PRICED
SPOT: $2.40/hr
ON-DEMAND: $2.95/hr
◊ EXECUTABLE CLR · clore.ai/h200 · per-minute billing 29.04.2026 · 14:22 UTC
$2.95/hr
Starting on-demand price
141GB
HBM3e VRAM per card
12
Regions with H200 hosts
<90s
Cold-start to ready
workloads

Training-grade silicon,
by the minute.

H200s are what you rent when minutes matter. FP8 cuts memory and doubles throughput on Transformer workloads; HBM3e keeps the SMs fed; NVLink lets eight cards train as one.

1M-token contexts native, no offload

141 GB HBM3e at 4.8 TB/s eliminates KV-cache offload for everything up to 1M-token contexts and fits 405B INT4 in 4 cards instead of 8. Same compute as the H100, but the memory delta transforms the economics of 70B+ serving and long-context inference. This is the card for bandwidth-bound workloads.

Llama-3 70B BF16 ~4.5× vs A100
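
The card-count arithmetic is easy to check. A back-of-envelope sketch in Python (weights only; KV cache, activations, and allocator overhead come on top, and the model sizes are nominal):

# Weight footprint in GB for a given parameter count and quantization width.
# Back-of-envelope only: ignores KV cache, activations, allocator overhead.
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weight_gb(405, 4))    # ~202.5 GB -> fits in 4 x 141 GB H200s with KV headroom
print(weight_gb(405, 16))   # ~810 GB   -> needs a full 8-card node at FP16
print(weight_gb(70, 8))     # ~70 GB    -> 70B FP8 leaves half an H200 for KV cache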

Production LLM serving

TensorRT-LLM, vLLM, SGLang — all tuned for Hopper. Serve 70B FP8 from a single card with margin to spare for KV-cache.

Llama-3 70B FP8 380 tok/s/user
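
For illustration, a minimal vLLM sketch of that single-card FP8 setup (the checkpoint name and settings are assumptions, not a verified recipe; requires a vLLM build with Hopper FP8 support):

from vllm import LLM, SamplingParams

# Single H200: FP8 weights (~70 GB) leave roughly half the card for KV blocks.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # illustrative checkpoint
    quantization="fp8",                          # Hopper FP8 path in vLLM
    gpu_memory_utilization=0.95,                 # keep a sliver for the CUDA context
    max_model_len=8192,
)
out = llm.generate(["What does HBM3e change for serving?"],
                   SamplingParams(temperature=0.7, max_tokens=256))
print(out[0].outputs[0].text)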

Long-context inference

141 GB HBM3e + FlashAttention-3 means 128k-token contexts run without offload. Ideal for long-doc RAG and agent loops with tool use.

128k ctx, batch-8 no offload
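
The "no offload" claim is ultimately a KV-cache sizing question. The standard per-token formula, sketched below with an assumed Llama-style 8B geometry (32 layers, 8 KV heads via GQA, head_dim 128) purely for illustration:

# KV cache per token = 2 (K and V) x layers x kv_heads x head_dim x dtype bytes
def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int, dtype_bytes: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * dtype_bytes

per_tok = kv_bytes_per_token(layers=32, kv_heads=8, head_dim=128)  # assumed 8B-class GQA
print(per_tok, per_tok * 128_000 / 1e9)  # ~131 kB/token -> ~16.8 GB per 128k sequence
# An FP8 KV cache (dtype_bytes=1) halves that; GQA is why it isn't 8x larger.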
why H200

The fastest GPU
you can rent by the minute.

Hopper is the architecture most of today's 70B and 405B foundation models were trained on. Specs are from NVIDIA's SXM5 datasheets; pricing reflects the lowest live on-demand floor.

GPU | H200 141GB | A100 80GB | RTX 5090 | RTX 4090
Architecture | Hopper | Ampere | Blackwell | Ada Lovelace
CUDA cores | 16,896 | 6,912 | 21,760 | 16,384
VRAM | 141 GB HBM3e | 80 GB HBM2e | 32 GB GDDR7 | 24 GB GDDR6X
Memory bandwidth | 4,800 GB/s | 1,935 GB/s | 1,792 GB/s | 1,008 GB/s
FP16 / BF16 (dense) | 989 TFLOPS | 312 TFLOPS | ~210 TFLOPS | ~165 TFLOPS
From / hr (on-demand) | $2.95 | $1.20 | $0.39 | $0.31

// prices are live on-demand floors · refreshed every 60 s

pricing

Two ways to rent.
Pay only for the minutes you use.

Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.

Spot

$2.40 / hr
≈ 0.0000218 BTC · 1600 CLORE
  • Lowest possible rate
  • Per-minute billing
  • Can be interrupted by on-demand renter
  • Best for batch training, rendering
Browse spot H200s
MOST RENTED

On-demand

$2.95 / hr
≈ 0.0000268 BTC · 1967 CLORE
  • Guaranteed availability
  • No preemption, ever
  • Per-minute billing
  • Best for inference, dev work, demos
Rent on-demand
Pay with
Bitcoin on-chain
CLORE native token
USDT / USDC ERC-20 · BEP-20
workflow

Four steps to a running H200.

No sales call. No quota request. No three-week procurement. These four steps are all you need.

01 / FILTER

Pick your card

Filter the marketplace by H200 141GB, country, GPU count, reliability score, network speed.

02 / RENT

Click rent

Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.

$ clore rent --gpu "H200 141GB"
03 / CONNECT

SSH or Jupyter

You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.

04 / STOP

Stop anytime

Per-minute billing rounds to the second. Stop the instance and the meter stops with it.
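
The proration is simple enough to sanity-check. A toy cost calculation using this page's floor rates (illustrative; not the actual billing code):

# Per-second proration: cost = hourly rate x elapsed seconds / 3600, nothing else.
def cost_usd(hourly_rate: float, seconds: int) -> float:
    return round(hourly_rate * seconds / 3600, 2)

print(cost_usd(2.95, 47 * 60))   # 47 min of on-demand H200 -> $2.31
print(cost_usd(2.40, 6 * 3600))  # 6 h spot batch job       -> $14.40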

faq

Questions hosts and renters ask.

When does the H200 beat the H100?

Whenever memory bandwidth or VRAM is the bottleneck — 141 GB HBM3e at 4.8 TB/s eliminates KV-cache offload, fits 405B INT4 across 4 cards instead of 8, and runs 1M-token contexts native. Same compute as H100, but the memory upgrade is significant for serving.

Can I pretrain a 70B model from scratch on this GPU?

Single-card, no - 70B pretraining needs an 8-GPU node minimum. CLORE.AI lists 8x H100, 8x H200, and 8x B200 pods with NVLink fabric for exactly this. A100 80GB pods run 70B FSDP training but at lower throughput than Hopper-class. For multi-week training, contact host operators for reserved-instance terms - listed in the marketplace under 'Reserved'.

What's the FP8 throughput here vs A100 80GB?

A100 80GB has no FP8 - peak is BF16/TF32. H100 introduces FP8 with Transformer Engine and roughly 4x the BF16 training throughput at 2x the rental price - so ~2x perf-per-dollar on FP8-eligible workloads. H200 matches H100 compute but adds 141 GB HBM3e. B200 doubles H100 FP8 again with 192 GB HBM3e. Pick by VRAM and bandwidth ceiling, not just sticker FLOPS.
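
Putting rough numbers on the perf-per-dollar point, using this page's spot floors and datasheet FLOPS (a sticker-number upper bound; realized training throughput lands nearer the ~2x figure above):

# TFLOPS per dollar-hour, from this page's spot prices and datasheet peaks.
def perf_per_dollar(tflops: float, usd_per_hr: float) -> float:
    return tflops / usd_per_hr

print(perf_per_dollar(312, 0.92))    # A100 80GB BF16 -> ~339
print(perf_per_dollar(1979, 1.89))   # H100 FP8       -> ~1,047 (~3x A100 on paper)
print(perf_per_dollar(1979, 2.40))   # H200 FP8       -> ~825, plus 141 GB HBM3e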

Does this support NVLink-Switch / NVSwitch fabric?

8-GPU H100 SXM, H200 SXM, and B200 nodes ship with NVSwitch fabric - 900 GB/s peer bandwidth on H100/H200, 1.8 TB/s 5th-gen NVLink on B200. PCIe variants (H100 PCIe, A100 PCIe) have NVLink Bridge in pairs only. Multi-node fabric (NVLink-Switch across racks) is available on B200 hyperscale pods - filter by 'NVSwitch' in the marketplace.
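
After renting, you can verify the fabric you actually got. A quick PyTorch sketch (running nvidia-smi topo -m on the instance gives the full topology):

import torch

# On an NVSwitch SXM board, every GPU pair should report peer (P2P) access.
n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} -> GPU {j}: no P2P (PCIe-only path?)")
print(f"checked all pairs across {n} GPUs")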

Are 8-GPU pods available for FSDP and DeepSpeed training?

Yes. Multi-GPU listings expose all cards in a single rental as a coherent node with NVSwitch (where present), shared NVMe scratch, and InfiniBand or 100 GbE fabric for multi-node training. The standard PyTorch torchrun, DeepSpeed, and Megatron-LM launchers run unmodified. Filter the marketplace by GPU count to find 8x A100, 8x H100, 8x H200 nodes.
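
As a concreteness check, a minimal FSDP step that runs unmodified on such a node - a toy model, launched with torchrun --nproc_per_node=8 train.py:

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# torchrun sets RANK/WORLD_SIZE; NCCL rides the NVSwitch fabric where present.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

layer = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
model = FSDP(torch.nn.TransformerEncoder(layer, num_layers=4).cuda())  # shards params, grads, optim state

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(4, 256, 1024, device="cuda")
model(x).pow(2).mean().backward()   # dummy objective, just to drive one full step
opt.step()
dist.destroy_process_group()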

What's the HBM bandwidth comparison vs the predecessor?

V100 (HBM2, 900 GB/s) -> A100 40GB (HBM2e, 1,555 GB/s) -> A100 80GB (HBM2e, 1,935 GB/s) -> H100 (HBM3, 3,350 GB/s) -> H200 (HBM3e, 4,800 GB/s) -> B200 (HBM3e, 8,000 GB/s). Each generation roughly doubles bandwidth or VRAM; KV-cache-bound serving and bandwidth-bound training scale almost linearly with this number.
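
Why that number dominates serving: memory-bound decode has to stream the full weight set from HBM for every generated token, so single-stream throughput is capped near bandwidth / model size. A toy ceiling (ignores KV-cache reads, batching, and kernel overhead):

# Upper bound on single-stream decode steps/s: HBM bandwidth / weight bytes.
def decode_ceiling_tok_s(weight_gb: float, bw_gb_s: float) -> float:
    return bw_gb_s / weight_gb

for name, bw in [("A100 80GB", 1935), ("H100", 3350), ("H200", 4800), ("B200", 8000)]:
    print(f"{name}: ~{decode_ceiling_tok_s(70, bw):.0f} tok/s ceiling for a 70 GB (70B FP8) model")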

workload spotlight

Real numbers on the H200.

141 GB HBM3e at 4.8 TB/s — same compute as H100, but the memory upgrade transforms 70B+ serving and 1M-token contexts.

Llama-3 405B INT4 across 4 cards
vLLM + GPTQ 4-bit + tensor parallel
~620 tok/s aggregated, 16 concurrent

141 GB per card fits 405B INT4 in 4 cards instead of 8 H100s — cuts node count in half for the same model.

Read the guide →
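
A minimal vLLM sketch of that tensor-parallel layout (the checkpoint name is a stand-in; a real GPTQ-quantized 405B build and 4 visible H200s are assumed):

from vllm import LLM, SamplingParams

# ~202 GB of INT4 weights sharded across 4 H200s; the remaining VRAM becomes KV blocks.
llm = LLM(
    model="meta-llama/Llama-3.1-405B-Instruct",  # stand-in; point at a GPTQ build
    quantization="gptq",
    tensor_parallel_size=4,       # one shard per card
    gpu_memory_utilization=0.92,
)
print(llm.generate(["ping"], SamplingParams(max_tokens=8))[0].outputs[0].text)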
Llama-3 70B FP16 single-card
vLLM + Flash Attn 3
~2,100 tok/s aggregated, 64 concurrent

141 GB fits 70B FP16 + 64-request KV cache — no quantization, no model splitting.

Read the guide →
DeepSeek-V3 1M-token serving
vLLM + paged attention + FP8
~720 tok/s @ 1M-token contexts

4.8 TB/s HBM3e bandwidth and 141 GB enable native 1M-token contexts without offload — the H200's signature workload.

Read the guide →
datacenter comparison

Datacenter-tier comparison.

Side-by-side specs across the datacenter tier. Click any row to see that GPU.

GPU | HBM | Mem BW (GB/s) | FP8 TFLOPS | BF16 TFLOPS | NVLink BW | Transformer Engine | Spot $/hr
Tesla V100 | 32 GB HBM2 | 900 | n/a | n/a | 300 GB/s | no | $0.28
A100 40GB | 40 GB HBM2e | 1,555 | n/a | 312 | 600 GB/s | no | $0.78
A100 80GB | 80 GB HBM2e | 1,935 | n/a | 312 | 600 GB/s | no | $0.92
H100 | 80 GB HBM3 | 3,350 | 1,979 | 989 | 900 GB/s | yes | $1.89
H200 (this page) | 141 GB HBM3e | 4,800 | 1,979 | 989 | 900 GB/s | yes | $2.40
B200 | 192 GB HBM3e | 8,000 | ~4,500 | ~2,250 | 1,800 GB/s | yes | $3.40
workload guides

Run these on your rented H200.

Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the docker image, ship in minutes.

Language Models
vLLM serving
High-throughput LLM serving with PagedAttention.
Training
DeepSpeed multi-GPU training
ZeRO-2/3 training across multiple cards.
Language Models
DeepSeek-V3
671B MoE model serving guide.
Language Models
Llama 3.3 on CLORE.AI
Run Meta's flagship Llama-3.3 on your rented card.
Training
LLM fine-tuning
LoRA / QLoRA fine-tuning workflow.
Vision Models
Llama 3.2 Vision
Meta's multimodal Llama 3.2.
Advanced
Multi-GPU setup
Configure NVLink, NCCL, and distributed training.
See all guides →
other gpus

Compare with similar cards.

H100
80 GB · from $1.89/hr
Rent →
A100 80GB
80 GB · from $0.92/hr
Rent →
B200
192 GB · from $3.40/hr
Rent →

Your training run
is 60 seconds away.

Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.