A100 40GB · live marketplace — from $0.78 / hr

Rent an A100 40GB
by the minute.
Not the month.

Rent an A100 40GB as the canonical 7B pretraining and 13B–34B fine-tuning workhorse. 40 GB HBM2e at 1,555 GB/s, 3rd-gen NVLink, MIG up to 7 instances per card. 8-GPU NVSwitch pods sustain ~27K tok/s aggregate for Mistral-7B-class pretraining runs. The exact training silicon that FSDP, DeepSpeed ZeRO-3, and Megatron-LM were originally tuned against. Billed per-minute, paid in BTC, USDT/USDC, or CLORE.

Per-minute billing · SSH + Docker + Jupyter · Spot & on-demand · 12 regions
DEPLOYMENT MANIFEST REQ-A100-40GB-2026-04-29-#82144
01 INSTANCE CONFIGURED
GPU      NVIDIA A100 40GB SXM4
COUNT    ×8 · NVLink
VRAM     320 GB HBM2e
FABRIC   NVSwitch · 600 GB/s
02 REGION & SLA VERIFIED
REGION   US-EAST-2 · Tier III+
UPTIME   99.95% SLA
NETWORK  10 Gbit · <2ms
ATTEST.  SOC 2 · ISO 27001
03 COMMERCIAL PRICED
SPOT       $0.78/hr
ON-DEMAND  $2.40/hr
◊ EXECUTABLE CLR · clore.ai · per-minute billing · 29.04.2026 · 14:22 UTC
$0.78/hr
Starting spot price
40GB
HBM2e VRAM per card
12
Regions with A100 40GB hosts
<90s
Cold-start to ready
workloads

Training-grade silicon,
by the minute.

A100 40GBs are what you rent when minutes matter. TF32 and BF16 Tensor Cores keep Transformer workloads moving; HBM2e keeps the SMs fed; NVLink lets eight cards train as one.

The reference 7B pretraining card

FSDP, DeepSpeed ZeRO-3, and Megatron-LM were tuned against this exact silicon. 40 GB HBM2e + 1.55 TB/s + NVLink — the spec on which the public 7B-class pretraining recipes were originally validated, still the cheapest HBM + NVLink path with full MIG support in 2026.

Mistral-7B BF16 ~3,400 tok/s/GPU

Production LLM serving

TensorRT-LLM, vLLM, SGLang — all mature on Ampere. Serve 7B–13B models in FP16 from a single card with margin to spare for KV-cache, or a quantized 34B via AWQ/GPTQ (serving sketch below).

Llama-3 8B FP16 · single card + KV headroom
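
A minimal serving sketch, assuming a recent vLLM install and access to the gated Llama weights; the model name and context length are illustrative, not a verified benchmark config:

$ pip install vllm
$ vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --dtype bfloat16 \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.90   # leaves ~4 GB of the 40 GB for CUDA overhead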

Long-context inference

40 GB HBM2e + FlashAttention-2 means 32k-token contexts on 7B-class models run without offload. Ideal for long-doc RAG and agent loops with tool use.

32k ctx, batch-4, no offload
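
The KV-cache arithmetic behind that stat, assuming a Llama-3-8B-style layout (32 layers, 8 GQA KV heads, head dim 128, bf16):

# bytes/token = 2 (K+V) × layers × kv_heads × head_dim × 2 bytes (bf16)
$ python3 -c "print(2*32*8*128*2, 'bytes per token')"                    # ≈ 128 KB
$ python3 -c "print(2*32*8*128*2 * 32768 / 2**30, 'GiB per 32k seq')"    # 4.0 GiB
# 16 GB of weights + 4 sequences × 4 GiB KV ≈ 32 GB → fits in 40 GB without offload
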
why A100 40GB

The cheapest HBM + NVLink card
you can rent by the minute.

Ampere is the architecture the first wave of open 7B–70B foundation models was trained on. Specs from Nvidia's SXM4 datasheet; pricing reflects the lowest live spot floor.

                      A100 40GB      A100 80GB      RTX 5090       RTX 4090
Architecture          Ampere         Ampere         Blackwell      Ada Lovelace
CUDA cores            6,912          6,912          21,760         16,384
VRAM                  40 GB HBM2e    80 GB HBM2e    32 GB GDDR7    24 GB GDDR6X
Memory bandwidth      1,555 GB/s     1,935 GB/s     1,792 GB/s     1,008 GB/s
FP16 / BF16 (dense)   312 TFLOPS     312 TFLOPS     ~210 TFLOPS    ~165 TFLOPS
From / hr (spot)      $0.78          $0.92          $0.39          $0.31

// prices are spot-market lows · refreshed every 60 s

pricing

Two ways to rent.
Pay only for the minutes you use.

Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.

Spot

$0.78 / hr
≈ 0.0000071 BTC · 520 CLORE
  • Lowest possible rate
  • Per-minute billing
  • Can be interrupted by on-demand renter
  • Best for batch training, rendering
Browse spot A100 40GBs
MOST RENTED

On-demand

$2.40 / hr
≈ 0.0000218 BTC · 1,600 CLORE
  • Guaranteed availability
  • No preemption, ever
  • Per-minute billing
  • Best for inference, dev work, demos
Rent on-demand
Pay with
Bitcoin on-chain
CLORE native token
USDT / USDC ERC-20 · BEP-20
workflow

Four steps to a running A100 40GB.

No sales call. No quota request. No three-week procurement. Four steps are all it takes.

01 / FILTER

Pick your card

Filter the marketplace by A100 40GB, country, GPU count, reliability score, and network speed.

02 / RENT

Click rent

Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.

$ clore rent --gpu "A100 40GB"
03 / CONNECT

SSH or Jupyter

You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.
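
Connecting is plain SSH; the host, port, and key path below are placeholders for the values shown on your rental's details page:

$ ssh -i ~/.ssh/clore_key -p 34022 root@203.0.113.42                       # endpoint from the dashboard
$ ssh -i ~/.ssh/clore_key -p 34022 -N -L 8888:localhost:8888 root@203.0.113.42
# then open http://localhost:8888 for Jupyter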

04 / STOP

Stop anytime

Billing is metered per minute. Stop the instance and the meter stops with it.

faq

Questions hosts and renters ask.

When should I pick the 40 GB over the 80 GB A100?

When you're pretraining 7B from scratch or fine-tuning 13B with offload — 40 GB is plenty. Step up to 80 GB for 34B+ pretraining, 70B fine-tuning, or LongRoPE / 128k-context work that exhausts the smaller card's VRAM.
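
A rough sizing check behind that advice, assuming mixed-precision Adam at ~16 bytes per parameter (bf16 weights and grads, fp32 master copy plus two optimizer moments), before activations:

$ python3 -c "print(f'{13e9 * 16 / 1e9:.0f} GB of training state for 13B')"   # ≈ 208 GB
# ZeRO-3/FSDP across 8 cards → ~26 GB per GPU, which is why sharding or offload makes 13B fit on 40 GB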

Can I pretrain a 70B model from scratch on this GPU?

Single-card, no: 70B pretraining needs an 8-GPU node minimum. CLORE.AI lists 8× H100, 8× H200, and 8× B200 pods with NVLink fabric for exactly this. A100 80GB pods run 70B FSDP training, but at lower throughput than Hopper-class. For multi-week training, contact host operators for reserved-instance terms, listed in the marketplace under 'Reserved'.

Does the A100 40GB support FP8, and how does it compare to Hopper?

No. Ampere has no FP8; peak Tensor Core precision is BF16/TF32. H100 introduces FP8 with TransformerEngine and roughly 4× the BF16 training throughput at roughly 2× the rental price, so ~2× perf-per-dollar on FP8-eligible workloads. H200 matches H100 compute but adds 141 GB HBM3e. B200 doubles H100 FP8 again with 192 GB HBM3e. Pick by VRAM and bandwidth ceiling, not just sticker FLOPS.

Does this support NVLink-Switch / NVSwitch fabric?

Yes. 8-GPU A100 SXM nodes ship with NVSwitch fabric at 600 GB/s per-GPU peer bandwidth; H100/H200 SXM raise that to 900 GB/s, and B200 hits 1.8 TB/s on 5th-gen NVLink. PCIe variants (A100 PCIe, H100 PCIe) have NVLink Bridge in pairs only. Multi-node fabric (NVLink-Switch across racks) is available on B200 hyperscale pods. Filter by 'NVSwitch' in the marketplace.
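
On the node itself, stock NVIDIA tooling confirms which fabric you actually got; nothing here is CLORE-specific:

$ nvidia-smi topo -m
# NV12 between GPU pairs = 12 NVLink lanes over NVSwitch (A100 SXM)
# PIX / PXB / SYS = PCIe-only paths, no NVLink fabric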

Are 8-GPU pods available for FSDP and DeepSpeed training?

Yes. Multi-GPU listings expose all cards in a single rental as a coherent node with NVSwitch (where present), shared NVMe scratch, and InfiniBand or 100 GbE fabric for multi-node training. The standard PyTorch torchrun, DeepSpeed, and Megatron-LM launchers run unmodified. Filter the marketplace by GPU count to find 8× A100, 8× H100, 8× H200 nodes.
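
A minimal launch sketch, assuming your own train.py with a standard DDP or FSDP setup:

$ torchrun --standalone --nproc_per_node=8 train.py   # one process per A100
# multi-node variant: --nnodes=2 --node_rank=0 --rdzv_backend=c10d --rdzv_endpoint=head:29500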

What's the HBM bandwidth comparison vs the predecessor?

V100 (HBM2, 900 GB/s) → A100 40GB (HBM2e, 1,555 GB/s) → A100 80GB (HBM2e, 1,935 GB/s) → H100 (HBM3, 3,350 GB/s) → H200 (HBM3e, 4,800 GB/s) → B200 (HBM3e, 8,000 GB/s). Each generation roughly doubles bandwidth or VRAM; KV-cache-bound serving and bandwidth-bound training scale almost linearly with this number.
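
Why that ladder matters: decode re-reads every weight for every generated token, so single-stream throughput is capped near bandwidth divided by model bytes. A sketch for a 7B model in bf16 (~14 GB of weights):

$ python3 -c "print(f'A100 40GB ceiling: {1555/14:.0f} tok/s')"   # ≈ 111
$ python3 -c "print(f'H100 ceiling:      {3350/14:.0f} tok/s')"   # ≈ 239
# real decoders land below these; batching amortizes the weight reads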

workload spotlight

Real numbers on the A100 40GB.

40 GB HBM2e + 1.55 TB/s + NVLink — the canonical 7B pretraining and 13B–34B fine-tuning workhorse.

Pretrain Mistral-7B from scratch
DeepSpeed ZeRO-3 + Flash Attn 2 + bf16
~3,400 tokens/s/GPU, 8K context

8× A100 40GB node hits ~27K tok/s aggregate — a compact-corpus Mistral-7B-class pretraining run completes in weeks on spot pricing.

Read the guide →
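
A launch sketch for that recipe, assuming your own pretrain.py is wired for DeepSpeed; the config keys are standard ZeRO-3 + bf16 options, and the batch sizes are placeholders:

$ cat > ds_zero3.json <<'EOF'
{
  "bf16": { "enabled": true },
  "zero_optimization": { "stage": 3, "overlap_comm": true },
  "gradient_accumulation_steps": 8,
  "train_micro_batch_size_per_gpu": 1
}
EOF
$ deepspeed --num_gpus=8 pretrain.py --deepspeed ds_zero3.json
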
Llama-2 13B FSDP fine-tune
Accelerate + FSDP + bf16 + Flash Attn 2
~1,100 tokens/s/GPU on 8× node

Standard 13B SFT pipeline — 40 GB fits FSDP-sharded weights + activations at 4K context.

Read the guide →
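
A comparable Accelerate launch, assuming an sft.py built on the HF Trainer or a bare training loop; the flags are standard accelerate launch options:

$ accelerate launch \
    --num_processes 8 \
    --mixed_precision bf16 \
    --use_fsdp \
    --fsdp_sharding_strategy FULL_SHARD \
    sft.py
# older accelerate versions take --fsdp_sharding_strategy 1 instead of FULL_SHARD
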
MIG 7-way inference partition
MIG + Triton + TensorRT
7× ~5 GB instances, mixed models

Hard isolation for multi-tenant ML platforms — each MIG slice gets dedicated SMs and HBM.

Read the guide →
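
The stock NVIDIA commands for that 7-way split; profile 19 is the 1g.5gb slice on the A100 40GB, but confirm with nvidia-smi mig -lgip on your node:

$ sudo nvidia-smi -i 0 -mig 1                              # enable MIG mode (may need a GPU reset)
$ sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C    # create 7× 1g.5gb instances
$ nvidia-smi -L                                            # lists the 7 MIG devices with UUIDs
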
datacenter comparison

Datacenter-tier comparison.

Side-by-side specs across the datacenter tier. Click any row to see that GPU.

GPU                     HBM            Mem BW (GB/s)   FP8 TFLOPS   BF16 TFLOPS   NVLink BW    Transformer Engine   Spot $/hr
Tesla V100              32 GB HBM2       900           —            —             300 GB/s     no                   $0.28
A100 40GB · this page   40 GB HBM2e    1,555           —            312           600 GB/s     no                   $0.78
A100 80GB               80 GB HBM2e    1,935           —            312           600 GB/s     no                   $0.92
H100                    80 GB HBM3     3,350           1,979        989           900 GB/s     yes                  $1.89
H200                    141 GB HBM3e   4,800           1,979        989           900 GB/s     yes                  $2.40
B200                    192 GB HBM3e   8,000           ~4,500       ~2,250        1,800 GB/s   yes                  $3.40
workload guides

Run these on your rented A100 40GB.

Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the docker image, ship in minutes.

Training
LLM fine-tuning
LoRA / QLoRA fine-tuning workflow.
Training
DeepSpeed multi-GPU training
ZeRO-2/3 training across multiple cards.
Language Models
vLLM serving
High-throughput LLM serving with PagedAttention.
Training
HF Transformers training
Train and fine-tune with the Trainer API.
Language Models
Llama 3.3 on CLORE.AI
Run Meta's flagship Llama-3.3 on your rented card.
Language Models
Qwen 2.5
Alibaba's Qwen 2.5 family.
Advanced
Multi-GPU setup
Configure NVLink, NCCL, and distributed training.
See all guides →
other gpus

Compare with similar cards.

A100 80GB
80 GB · from $0.92/hr
Rent →
Tesla V100
32 GB · from $0.28/hr
Rent →
H100
80 GB · from $1.89/hr
Rent →

Your training run
is 60 seconds away.

Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.