List your A100 80GB on Clore.ai. The 2022–2024 training silicon still sees strong on-demand demand: the 70B FSDP fine-tuning workload remains canonical, and not every team has frontier-card budget. Net around $795/month per card before MFP staking, paid per-minute in BTC, USDT, USDC or CLORE. NVSwitch-fabric pod listings command top premiums. Stake CLORE for up to +200% daily emission.
If you've got an A100 80GB — or a whole rack of them — you've already paid for the silicon, the power contract, the rack space. Clore turns that capex into per-minute revenue in your choice of crypto, no sales calls, no minimum commitment.
70B fine-tuning runs across 16 A100 80GB cards last days at minimum — and that's the workload with the most consistent funding right now. Hosts with proper NVSwitch + InfiniBand topology consistently book full pod rentals weeks in advance. Top revenue density in the datacenter tier.
Earnings credit to your wallet balance every minute the rental runs. Withdraw to BTC, USDT, USDC or CLORE as often as you want — no daily caps.
One A100 80GB in a closet, or up to 192 servers per account onboarded via API. Same console, same fees, same flow.
Stake some CLORE behind your machine and the network pays you a daily bonus on top of every rental — about half of what the renter pays you, again in CLORE. Skip it and you still get paid the normal way; this just stacks more on.
Per-minute, in BTC / USDT / USDC / CLORE. Withdraw any time.
Activated by staking CLORE behind your machine. Paid daily, on top of normal earnings.
Clore takes a small cut of every rental. Half is paid by you, half by the renter — so the number below is the full marketplace fee, not what comes out of your pocket.
You pay 1.25%, the renter pays the other 1.25%. Hold CLORE to cut your share even more.
You pay 5%, the renter pays the other 5%. Hold CLORE to cut your share even more.
// Want the full breakdown of fee reductions and edge cases? Read the fee docs →
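The split above is simple arithmetic. A minimal sketch of it, using the two fee tiers quoted on this page — the function name and the flat `clore_discount` parameter are illustrative only, not a Clore API:

```python
def host_payout(hourly_price, hours, total_fee_rate, clore_discount=0.0):
    """What the host nets after paying its half of the marketplace fee.

    total_fee_rate: full marketplace fee (e.g. 0.025 or 0.10 per the copy);
    the host pays half, and holding CLORE can cut that half by up to 50%.
    """
    gross = hourly_price * hours
    host_fee_rate = (total_fee_rate / 2) * (1 - clore_discount)
    return gross * (1 - host_fee_rate)

# A100 80GB at $2.40/hr, 30 days fully booked:
print(host_payout(2.40, 24 * 30, 0.025))        # 2.5% tier, no CLORE held
print(host_payout(2.40, 24 * 30, 0.025, 0.5))   # 2.5% tier, max fee discount
print(host_payout(2.40, 24 * 30, 0.10))         # 10% tier
```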
All numbers below assume an A100 80GB listed at $2.40/hr. Real numbers depend on demand, your price, and your power costs.
List your card, accept rentals. No CLORE required, no setup beyond the host software. Get paid per minute.
Hold CLORE in your wallet — no lock, no contract. Your half of the marketplace fee drops by up to 50%.
Stake CLORE behind your server to unlock a daily network bonus on top of your rental — adds about half your rental income again, paid in CLORE.
// Numbers are a rough monthly estimate, not a guarantee — bonus depends on competition. Full host guide →
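The three paths above reduce to a back-of-envelope calculation. A sketch under stated assumptions — the 70% utilization factor is a placeholder, not a figure from this page, and the MFP rate parameter covers both the "about half again" typical case and nothing at all:

```python
def monthly_estimate(hourly=2.40, utilization=0.7, host_fee=0.0125, mfp_rate=0.0):
    """Rough (rental_usd, mfp_bonus) per month; bonus is paid in CLORE."""
    booked_hours = 24 * 30 * utilization
    rental = hourly * booked_hours * (1 - host_fee)  # after host's fee share
    bonus = hourly * booked_hours * mfp_rate         # 0.0 = no staking
    return round(rental, 2), round(bonus, 2)

print(monthly_estimate())              # just list, no CLORE
print(monthly_estimate(mfp_rate=0.5))  # stake CLORE, ~+50% typical bonus
```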
A Linux box, the Clore hosting software, a stable internet connection. One A100 80GB or a tier-3 facility — same flow.
Boot from the Clore Linux image (USB or PXE). Pair the host with your account using your initialization token.
Configure SSH, Docker, and per-card settings. Flip the server to public when it passes the auto-attestation.
List both, or just one. Adjust live — the floor for A100 80GB spot is $0.92 / hr right now.
Lock 52,000 CLORE for Tier 1 (or up to 260,000 for Tier 2). 24 h warm-up, then up to +200% rental price as daily rewards.
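The locking step above can be sketched as a tiny lookup. The thresholds (52,000 / 260,000 CLORE), the 24 h warm-up, and the +200% cap come from this page; paying the full cap at both tiers is a simplification here, not Clore's actual reward schedule:

```python
def mfp_daily_bonus(staked_clore, hours_since_lock, daily_rental_usd,
                    reward_rate=2.0):
    """Return (tier, daily bonus paid in CLORE at USD-equivalent value)."""
    tier = 2 if staked_clore >= 260_000 else 1 if staked_clore >= 52_000 else 0
    if tier == 0 or hours_since_lock < 24:   # 24 h warm-up before rewards
        return tier, 0.0
    return tier, daily_rental_usd * reward_rate  # up to +200% of rental price

print(mfp_daily_bonus(52_000, 25, 2.40 * 24))   # Tier 1, warmed up
print(mfp_daily_bonus(300_000, 5, 2.40 * 24))   # Tier 2, still warming up
```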
Yes — it typically rents at 50–60% of H100 price, carries 80 GB HBM2e, and supports the same FSDP + DeepSpeed pipelines. For training without FP8 / TransformerEngine, A100 80GB remains the cheapest way to get HBM and NVLink in 2026.
Datacenter-class power (208/415 V three-phase recommended for 8-GPU SXM pods at 5.6 kW for H100/H200 and 8 kW for B200), liquid cooling for B200 SXM, redundant 100 Gbps networking, and a host that can saturate it. PCIe variants run on dual-socket x86 servers with 1,500–2,000 W PSUs. Most CLORE.AI datacenter hosts deploy these in colo cages, not home racks.
CLORE.AI is a marketplace — SLAs are negotiated host-to-renter through on-demand reserved listings, not centrally enforced. Operators running 16+ datacenter GPUs commonly offer 99.9% uptime contracts directly to enterprise renters. Talk to the platform team for fleet onboarding tooling and bulk-listing API access if you're deploying 32+ cards.
MFP daily emission reward is up to +200% of the server's rental price, paid in CLORE on top of fiat-equivalent rental income. On an H100 earning $1,479/month, the maximum MFP bonus tracks rental revenue directly — so the absolute bonus on datacenter cards dwarfs consumer-tier numbers. Stake CLORE behind your fleet at the same ratio across all servers; the bonus also zeroes the non-CLORE host fee.
Yes. Multi-week pretraining runs on 8-GPU pods are economically catastrophic to interrupt — a checkpoint loss on a 7-day H100 run costs the renter low five figures in lost compute. Datacenter hosts with 99.9%+ verified 30-day uptime command premium pricing and clear fleet-scale demand. Reliability score on your listing is the primary signal training renters filter by.
For B200 SXM, yes — 1,000 W TDP per GPU, 8 kW per node, liquid-cooled rear-door heat exchangers or direct-to-chip loops are effectively required. H100 and H200 air-cool in well-designed colo (700 W TDP each), but rack density lands 2–4 GPUs per U at most. A100 (400 W) and earlier are easier — any datacenter with 30 kW/rack handles them comfortably.
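The TDP figures above translate into a quick rack-budget check. A rough sketch — the 1.5 kW per-node overhead (CPUs, fans, NICs) is an assumption for illustration, not a figure from this page:

```python
def nodes_per_rack(rack_kw, gpu_node_kw, overhead_kw=1.5):
    """How many GPU nodes fit a rack power budget (GPU draw + node overhead)."""
    return int(rack_kw // (gpu_node_kw + overhead_kw))

print(nodes_per_rack(30, 3.2))   # 8x A100 SXM: 8 x 400 W of GPU per node
print(nodes_per_rack(30, 5.6))   # 8x H100/H200: 5.6 kW of GPU per node
print(nodes_per_rack(30, 8.0))   # 8x B200: cooling limits before power does
```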
80 GB HBM2e + 1.93 TB/s — the 2022–2024 training silicon and still the cheapest HBM + NVLink path in 2026.
8× A100 80GB pod is the de facto reference for 7B–13B pretraining — ~50% the rental of an H100 node.
70B SFT across 16 cards (2 nodes via NVLink + IB) — standard reference for 70B fine-tunes in 2026. Read the guide →
Two 80 GB A100s fit 70B FP16 + 16-request KV cache — the cheapest FP16 70B serving setup with NVLink. Read the guide →
Side-by-side specs across the datacenter tier. Click any row to see that GPU. Read the guide →
Read the host onboarding guide, MFP staking mechanics, and marketplace fee schedule.
Rent one by the minute from $0.92/hr. Spin up in 90 s with full SSH + Docker.
List your card, pick spot or on-demand, and start collecting per-minute earnings in BTC, USDT, USDC, or CLORE. Lock MFP whenever you're ready.