A100 80GB · live marketplace — from $0.92 / hr

Rent an A100 80GB
by the minute.
Not the month.

Rent an A100 80GB for the cheapest HBM + NVLink path in 2026. 80 GB HBM2e at 1,935 GB/s — pretrain 7B–13B from scratch on 8-GPU nodes at ~3,800 tok/s per card, fine-tune 70B with FSDP across 16 cards, serve 70B FP16 across two cards via tensor parallel. Standard FSDP + DeepSpeed pipelines run unmodified on this silicon. Billed per-minute, paid in BTC, USDT/USDC or CLORE.

Per-minute billing SSH + Docker + Jupyter Spot & on-demand 12 regions
DEPLOYMENT MANIFEST REQ-A100 80GB-2026-04-29-#82144
01 INSTANCE CONFIGURED
GPU NVIDIA A100 80GB SXM4
COUNT ×8 · NVLink
VRAM 640 GB HBM2e
FABRIC NVSwitch · 600 GB/s
02 REGION & SLA VERIFIED
REGION US-EAST-2 · Tier III+
UPTIME 99.95% SLA
NETWORK 10 Gbit · <2 ms
ATTEST. SOC 2 · ISO 27001
03 COMMERCIAL PRICED
SPOT $0.92/hr
ON-DEMAND $2.40/hr
◊ EXECUTABLE CLR · clore.ai/a100 · per-minute billing · 29.04.2026 · 14:22 UTC
$0.92/hr
Starting spot price
80GB
HBM2e VRAM per card
12
Regions with A100 80GB hosts
<90s
Cold-start to ready
workloads

Training-grade silicon,
by the minute.

A100 80GBs are what you rent when minutes matter. Third-gen Tensor Cores push BF16/TF32 throughput on Transformer workloads; HBM2e keeps the SMs fed; NVLink lets eight cards train as one.

Cheapest HBM + NVLink in 2026

When you need 80 GB of HBM and NVLink fabric for 70B FSDP fine-tuning or two-card 70B FP16 serving, an A100 80GB typically rents for 50–60% of an H100 and runs the same FSDP + DeepSpeed pipelines unchanged. It's the default training card for budget-conscious ML teams.
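The 16-card FSDP fine-tune claim can be sanity-checked with standard mixed-precision accounting. A back-of-envelope sketch — it assumes a bf16 weights/grads + fp32 Adam recipe (16 bytes per parameter), which is an assumption about your training setup, not a measured CLORE figure:

```python
# Back-of-envelope VRAM check for a 70B full-shard (FSDP / ZeRO-3) fine-tune.
# Assumed mixed-precision recipe: bf16 weight + bf16 grad + fp32 master
# weight + fp32 Adam momentum + fp32 Adam variance = 16 bytes per parameter.
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4

def training_state_gb(params: float) -> float:
    """Total weight + grad + optimizer state in GB (1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM / 1e9

def fits(params: float, num_gpus: int, vram_gb: int = 80):
    """Full-shard splits state across all cards, so compare against pooled HBM."""
    state = training_state_gb(params)
    budget = num_gpus * vram_gb
    return state, budget, state < budget

state, budget, ok = fits(70e9, 16)
print(f"{state:.0f} GB of state vs {budget} GB pooled HBM -> fits: {ok}")
# 16 x 80 GB leaves ~160 GB of headroom for activations and buffers;
# a single 8 x 80 GB node (640 GB) cannot hold the full state.
```

Activation memory is workload-dependent and not counted here, which is why the headroom matters.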

Llama-3 70B FSDP ~520 tok/s/GPU

Production LLM serving

TensorRT-LLM, vLLM, SGLang — all support Ampere. Serve 70B FP16 across two cards via tensor parallel, with margin to spare for KV-cache.

Llama-3 70B FP16 ~480 tok/s aggregate
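A rough sketch of why two 80 GB cards are enough for 70B FP16 serving with concurrent requests. The layer/head geometry below assumes a Llama-3-70B-like model (80 layers, 8 GQA KV heads, head dim 128) — illustrative arithmetic, not a benchmark:

```python
# KV-cache budget for 70B FP16 served tensor-parallel across 2 x 80 GB.
# Geometry is an assumption (Llama-3-70B-like): 80 layers, 8 KV heads
# (GQA), head dim 128, fp16 cache.
def kv_bytes_per_token(layers=80, kv_heads=8, head_dim=128, dtype_bytes=2):
    return 2 * layers * kv_heads * head_dim * dtype_bytes  # K and V planes

weights_gb = 70e9 * 2 / 1e9          # 140 GB of fp16 weights
kv_budget_gb = 2 * 80 - weights_gb   # ~20 GB left across both cards
total_tokens = int(kv_budget_gb * 1e9 / kv_bytes_per_token())
print(total_tokens, total_tokens // 16)  # pooled KV tokens, per-request share
```

At batch 16 that budget works out to roughly a 3,800-token context per request, which is why the two-card setup is quoted for moderate-context serving rather than long-document work.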

Long-context inference

80 GB HBM2e + FlashAttention-2 means 128k-token contexts run without offload. Ideal for long-doc RAG and agent loops with tool use.

128k ctx, batch-8 no offload
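One plausible reading of the batch-8 / 128k figure is at node scale. A hedged sketch, assuming a Llama-3-70B-like KV geometry (80 layers × 8 KV heads × head dim 128, fp16) on an 8-card 640 GB node — your model's actual cache layout will differ:

```python
# KV-cache footprint for batch-8 inference at 128k context.
kv_per_token = 2 * 80 * 8 * 128 * 2          # K+V bytes/token, assumed geometry
kv_gb = 8 * 131_072 * kv_per_token / 1e9     # batch 8, 128k tokens each
total_gb = kv_gb + 140                       # plus fp16 weights
print(f"KV {kv_gb:.0f} GB, total {total_gb:.0f} GB vs 640 GB on the node")
```

The ~344 GB cache plus 140 GB of weights stays inside the node's pooled HBM, which is what "no offload" buys you.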
why A100 80GB

The cheapest HBM + NVLink
you can rent by the minute.

Ampere is the architecture much of the 2022–2024 foundation-model wave was trained on. Specs from Nvidia's SXM4 datasheet; pricing reflects the lowest live spot floor.

 A100 80GB H100 RTX 5090 RTX 4090
Architecture Ampere Hopper Blackwell Ada Lovelace
CUDA cores 6,912 16,896 21,760 16,384
VRAM 80 GB HBM2e 80 GB HBM3 32 GB GDDR7 24 GB GDDR6X
Memory bandwidth 1,935 GB/s 3,350 GB/s 1,792 GB/s 1,008 GB/s
FP16 / BF16 (dense) 312 TFLOPS 989 TFLOPS ~210 TFLOPS ~165 TFLOPS
From / hr (spot) $0.92 $1.89 $0.39 $0.31

// prices are spot-market lows · refreshed every 60 s

pricing

Two ways to rent.
Pay only for the minutes you use.

Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.

Spot

$0.92 / hr
≈ 0.0000084 BTC · 613 CLORE
  • Lowest possible rate
  • Per-minute billing
  • Can be interrupted by on-demand renter
  • Best for batch training, rendering
Browse spot A100 80GBs
MOST RENTED

On-demand

$2.40 / hr
≈ 0.0000219 BTC · 1,600 CLORE
  • Guaranteed availability
  • No preemption, ever
  • Per-minute billing
  • Best for inference, dev work, demos
Rent on-demand
Pay with
Bitcoin on-chain
CLORE native token
USDT / USDC ERC-20 · BEP-20
workflow

Four steps to a running A100 80GB.

No sales call. No quota request. No three-week procurement. The four steps below are all you need.

01 / FILTER

Pick your card

Filter the marketplace by A100 80GB, country, GPU count, reliability score, network speed.

02 / RENT

Click rent

Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.

$ clore rent --gpu "A100 80GB"
03 / CONNECT

SSH or Jupyter

You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.

04 / STOP

Stop anytime

Per-minute billing rounds to the second. Stop the instance and the meter stops with it.
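Second-rounded billing makes cost a pure function of elapsed time. A minimal sketch using this page's floor prices (the exact rounding your host applies may differ):

```python
def cost_usd(seconds: int, usd_per_hr: float) -> float:
    """Billing meters elapsed time, so cost is just time x hourly rate."""
    return round(seconds / 3600 * usd_per_hr, 4)

print(cost_usd(47 * 60, 0.92))   # 47 spot minutes
print(cost_usd(90, 2.40))        # a 90-second on-demand smoke test
```

A sub-two-minute smoke test on on-demand hardware costs cents, which is the point of per-minute billing.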

faq

Questions hosts and renters ask.

Is A100 80GB still relevant against H100?

Yes — it's typically 50–60% of H100 rental price with 80 GB HBM2e and supports the same FSDP + DeepSpeed pipelines. For training without FP8 / TransformerEngine, A100 80GB remains the cheapest way to get HBM and NVLink in 2026.

Can I pretrain a 70B model from scratch on this GPU?

Single-card, no: 70B pretraining needs an 8-GPU node minimum. CLORE.AI lists 8× A100, 8× H100, 8× H200, and 8× B200 pods with NVLink fabric for exactly this. A100 80GB pods run 70B FSDP training, but at lower throughput than Hopper-class. For multi-week training, contact host operators for reserved-instance terms, listed in the marketplace under 'Reserved'.
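To see why pretraining is a multi-week affair even for small models, combine this page's ~3,800 tok/s per card with a Chinchilla-style token budget (~20 tokens per parameter — a rule of thumb, not a marketplace figure):

```python
# Wall-clock estimate for 7B pretraining on one 8-card A100 node.
tokens = 20 * 7e9            # Chinchilla-style budget: ~140B training tokens
tok_per_s = 8 * 3800         # node throughput from this page's figures
days = tokens / tok_per_s / 86_400
print(f"~{days:.0f} days on a single 8x A100 80GB node")
```

Around 53 days single-node, which is why multi-node pods and reserved-instance terms exist for serious pretraining runs.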

What's the FP8 story here vs H100 and newer cards?

A100 80GB has no FP8; its peak is BF16/TF32. H100 introduces FP8 with TransformerEngine and roughly 3× the BF16 training throughput (989 vs 312 dense TFLOPS) at about 2× the rental price, so better perf-per-dollar on FP8-eligible workloads. H200 matches H100 compute but adds 141 GB HBM3e. B200 roughly doubles H100 FP8 again with 192 GB HBM3e. Pick by VRAM and bandwidth ceiling, not just sticker FLOPS.
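The perf-per-dollar comparison can be made concrete with the dense BF16 TFLOPS and spot floors quoted on this page (illustrative arithmetic only; live prices move):

```python
# Dense BF16 TFLOPS per spot dollar-hour, using this page's quoted floors.
perf_per_dollar = {
    "A100 80GB": round(312 / 0.92),   # TFLOPS per $/hr
    "H100": round(989 / 1.89),
}
print(perf_per_dollar)
```

On sticker FLOPS per dollar the H100 leads even before FP8; the A100's case rests on HBM capacity, NVLink, and availability per dollar, as the answer notes.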

Does this support NVLink-Switch / NVSwitch fabric?

8-GPU A100 SXM, H100 SXM, H200 SXM, and B200 nodes ship with NVSwitch fabric: 600 GB/s peer bandwidth on A100, 900 GB/s on H100/H200, and 1.8 TB/s 5th-gen NVLink on B200. PCIe variants (H100 PCIe, A100 PCIe) have NVLink Bridge in pairs only. Multi-node fabric (NVLink-Switch across racks) is available on B200 hyperscale pods; filter by 'NVSwitch' in the marketplace.

Are 8-GPU pods available for FSDP and DeepSpeed training?

Yes. Multi-GPU listings expose all cards in a single rental as a coherent node with NVSwitch (where present), shared NVMe scratch, and InfiniBand or 100 GbE fabric for multi-node training. The standard PyTorch torchrun, DeepSpeed, and Megatron-LM launchers run unmodified. Filter the marketplace by GPU count to find 8x A100, 8x H100, 8x H200 nodes.
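"Run unmodified" works because torchrun, DeepSpeed, and similar launchers hand each worker its coordinates through standard environment variables (RANK, LOCAL_RANK, WORLD_SIZE). A minimal launcher-agnostic sketch of the device-selection step — a pattern illustration, not CLORE-specific code:

```python
import os

def init_device():
    """Read the rendezvous env vars that torchrun-style launchers export."""
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    return f"cuda:{local_rank}", world_size

# Simulate what `torchrun --nproc_per_node 8` exports for worker 3:
os.environ["LOCAL_RANK"], os.environ["WORLD_SIZE"] = "3", "8"
print(init_device())  # ('cuda:3', 8)
```

Because the contract lives in the environment, the same script runs on one card locally and on an 8× rented node without edits.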

What's the HBM bandwidth comparison vs the predecessor?

V100 (HBM2, 900 GB/s) → A100 40GB (HBM2e, 1,555 GB/s) → A100 80GB (HBM2e, 1,935 GB/s) → H100 (HBM3, 3,350 GB/s) → H200 (HBM3e, 4,800 GB/s) → B200 (HBM3e, 8,000 GB/s). Each generation roughly doubles bandwidth or VRAM; KV-cache-bound serving and bandwidth-bound training scale almost linearly with this number.
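Why the bandwidth number is the one to watch: single-stream decode must stream every weight once per generated token, so memory bandwidth sets a hard ceiling. A sketch for a 140 GB (70B fp16) model — an upper bound, not a benchmark, and batching pushes aggregate throughput far past it:

```python
def decode_ceiling_tok_s(weights_gb: float, bw_gb_s: float) -> float:
    """Single-stream decode upper bound: each token streams all weights once."""
    return bw_gb_s / weights_gb

WEIGHTS_GB = 140  # 70B at fp16
for gpu, bw in [("A100 80GB", 1935), ("H100", 3350), ("H200", 4800)]:
    print(gpu, round(decode_ceiling_tok_s(WEIGHTS_GB, bw), 1), "tok/s max")
```

The ceilings track the bandwidth chain almost exactly, which is the "scale almost linearly" claim above in numbers.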

workload spotlight

Real numbers on the A100 80GB.

80 GB HBM2e + 1.93 TB/s — the 2022–2024 training silicon and still the cheapest HBM + NVLink path in 2026.

Pretrain Llama-7B from scratch
DeepSpeed ZeRO-3 + Flash Attn 2
~3,800 tokens/s/GPU on 8× node

An 8× A100 80GB pod is the de facto reference for 7B–13B pretraining — ~50% the rental of an H100 node.

Read the guide →
Llama-3 70B FSDP fine-tune
FSDP full-shard + bf16 + Flash Attn 2
~520 tokens/s/GPU on 16× node

70B SFT across 16 cards (2 nodes via NVLink + IB) — standard reference for 70B fine-tunes in 2026.

Read the guide →
70B FP16 inference 2-card
vLLM + tensor parallel
~480 tok/s aggregated, 16 concurrent

Two 80 GB A100s fit 70B FP16 + 16-request KV cache — the cheapest FP16 70B serving setup with NVLink.

Read the guide →
datacenter comparison

Datacenter-tier comparison.

Side-by-side specs across the datacenter tier. Click any row to see that GPU.

GPU | HBM | Mem BW (GB/s) | FP8 TFLOPS | BF16 TFLOPS | NVLink BW | Transformer Engine | Spot $/hr
Tesla V100 | 32 GB HBM2 | 900 | — | — | 300 GB/s | — | $0.28
A100 40GB | 40 GB HBM2e | 1,555 | — | 312 | 600 GB/s | — | $0.78
A100 80GB · this page | 80 GB HBM2e | 1,935 | — | 312 | 600 GB/s | — | $0.92
H100 | 80 GB HBM3 | 3,350 | 1,979 | 989 | 900 GB/s | yes | $1.89
H200 | 141 GB HBM3e | 4,800 | 1,979 | 989 | 900 GB/s | yes | $2.40
B200 | 192 GB HBM3e | 8,000 | ~4,500 | ~2,250 | 1,800 GB/s | yes | $3.40
workload guides

Run these on your rented A100 80GB.

Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the docker image, ship in minutes.

Training
DeepSpeed multi-GPU training
ZeRO-2/3 training across multiple cards.
Training
LLM fine-tuning
LoRA / QLoRA fine-tuning workflow.
Language Models
vLLM serving
High-throughput LLM serving with PagedAttention.
Language Models
Llama 3.3 on CLORE.AI
Run Meta's flagship Llama-3.3 on your rented card.
Training
HF Transformers training
Train and fine-tune with the Trainer API.
Language Models
Mistral / Mixtral
Run Mistral 7B and Mixtral 8x7B / 8x22B.
Advanced
Multi-GPU setup
Configure NVLink, NCCL, and distributed training.
See all guides →
other gpus

Compare with similar cards.

A100 40GB
40 GB · from $0.78/hr
Rent →
H100
80 GB · from $1.89/hr
Rent →
H200
141 GB · from $2.40/hr
Rent →

Your training run
is 60 seconds away.

Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.