List your A100 40GB on Clore.ai. The canonical training silicon — ML teams pretraining 7B from scratch and fine-tuning 13B with FSDP book A100 nodes by the week, not the hour. Net around $610/month per card before MFP staking, paid per minute in BTC, USDT, USDC or CLORE. NVSwitch pod listings command material premiums. Stake CLORE for up to +200% daily emission.
If you've got an A100 40GB — or a whole rack of them — you've already paid for the silicon, the power contract, the rack space. Clore turns that capex into per-minute revenue in your choice of crypto, no sales calls, no minimum commitment.
ML teams running 7B pretraining or 13B FSDP fine-tunes hold the pod for the whole run — multi-day to multi-week. That translates to fewer cold-starts and predictable monthly net per card. NVSwitch-fabric pod listings book at the highest premiums in the datacenter tier.
Earnings credit to your wallet balance every minute the rental runs. Withdraw to BTC, USDT, USDC or CLORE as often as you want — no daily caps.
One A100 40GB in a closet, or up to 192 servers per account onboarded via API. Same console, same fees, same flow.
Stake some CLORE behind your machine and the network pays you a daily bonus on top of every rental — about half of what the renter pays you, again in CLORE. Skip it and you still get paid the normal way; this just stacks more on.
Per-minute, in BTC / USDT / USDC / CLORE. Withdraw any time.
Activated by staking CLORE behind your machine. Paid daily, on top of normal earnings.
Clore takes a small cut of every rental. Half is paid by you, half by the renter — so the number below is the full marketplace fee, not what comes out of your pocket.
You pay 1.25%, the renter pays the other 1.25%. Hold CLORE to cut your share even more.
You pay 5%, the renter pays the other 5%. Hold CLORE to cut your share even more.
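Both schedules above follow the same split; a rough sketch in code (the function name and the holder-discount parameter are illustrative, and the discount caps at 50% of your half per the tier above):

```python
def host_fee_usd(rental_usd: float, total_fee_pct: float,
                 holder_discount_pct: float = 0.0) -> float:
    """Host's share of the marketplace fee on one rental.

    total_fee_pct is the FULL marketplace fee (2.5 or 10 above); host and
    renter each pay half. holder_discount_pct (0-50) models the reduction
    from holding CLORE, applied to the host's half only.
    """
    host_half_pct = (total_fee_pct / 2.0) * (1.0 - holder_discount_pct / 100.0)
    return rental_usd * host_half_pct / 100.0

# On a $100 rental: 2.5% schedule -> you pay $1.25; 10% schedule -> $5.00;
# 2.5% schedule with the max 50% holder discount -> $0.625.
```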
// Want the full breakdown of fee reductions and edge cases? Read the fee docs →
All numbers below assume an A100 40GB listed at $2.40/hr. Real numbers depend on demand, your price, and your power costs.
List your card, accept rentals. No CLORE required, no setup beyond the host software. Get paid per minute.
Hold CLORE in your wallet — no lock, no contract. Your half of the marketplace fee drops by up to 50%.
Stake CLORE behind your server to unlock a daily network bonus on top of your rental — adds about half your rental income again, paid in CLORE.
// Numbers are a rough monthly estimate, not a guarantee — bonus depends on competition. Full host guide →
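For a back-of-envelope version of these estimates, a minimal sketch (utilization and the bonus ratio are assumptions, not platform figures):

```python
def monthly_net_usd(hourly_usd: float = 2.40, utilization: float = 0.36,
                    host_fee_pct: float = 1.25, mfp_bonus_ratio: float = 0.0) -> float:
    """Rough monthly net for one card over a 30-day month.

    utilization and mfp_bonus_ratio are assumed inputs; the MFP bonus
    (paid in CLORE, valued here at face value) runs from 0.0 (no stake)
    up to 2.0 (the +200% cap).
    """
    gross = hourly_usd * 24 * 30 * utilization
    net = gross * (1.0 - host_fee_pct / 100.0)
    return round(net + gross * mfp_bonus_ratio, 2)

# At $2.40/hr, the headline "~$610/month" corresponds to roughly 36%
# utilization after the 1.25% host fee: monthly_net_usd() ≈ 614.30
```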
A Linux box, the Clore hosting software, a stable internet connection. One A100 40GB or a tier-3 facility — same flow.
Boot from the Clore Linux image (USB or PXE). Pair the host with your account using your initialization token.
Configure SSH, Docker, and per-card settings. Flip the server to public when it passes the auto-attestation.
List both, or just one. Adjust live — the floor for A100 40GB spot is $0.78/hr right now.
Lock 52,000 CLORE for Tier 1 (or up to 260,000 for Tier 2). 24 h warm-up, then up to +200% of the rental price as daily rewards.
When you're pretraining 7B from scratch or fine-tuning 13B with offload — 40 GB is plenty. Step up to 80 GB for 34B+ pretraining, 70B fine-tuning, or LongRoPE / 128k-context work whose KV cache outgrows the smaller card.
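The sizing rule of thumb above can be sanity-checked with rough memory math; a sketch assuming bf16 weights and gradients plus fp32 Adam moments, fully sharded across the node (activations and KV cache come on top, so real headroom matters):

```python
def fsdp_mem_per_gpu_gb(params_b: float, n_gpus: int = 8,
                        bytes_weights: int = 2,   # bf16 weights (assumed)
                        bytes_grads: int = 2,     # bf16 gradients (assumed)
                        bytes_optim: int = 8) -> float:  # fp32 Adam m+v (assumed)
    """Per-GPU memory for fully sharded weights/grads/optimizer states.

    params_b is billions of parameters; 1e9 params x N bytes ~= N GB.
    Ignores activations and KV cache, so real usage is higher.
    """
    total_gb = params_b * (bytes_weights + bytes_grads + bytes_optim)
    return total_gb / n_gpus

# 13B sharded over an 8x A100 40GB node: 19.5 GB of states per card,
# leaving room for activations. 70B over the same node: 105 GB per card,
# far past 40 GB — hence the step up to 80 GB parts.
```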
Datacenter-class power (208/415V three-phase recommended for 8-GPU SXM pods at 5.6 kW for H100/H200 and 8 kW for B200), liquid cooling for B200 SXM, redundant 100 Gbps networking, and a host that can saturate it. PCIe variants run on dual-socket x86 servers with 1500-2000W PSUs. Most CLORE.AI datacenter hosts deploy these in colo cages, not home racks.
CLORE.AI is a marketplace: SLAs are negotiated host-to-renter through on-demand reserved listings, not centrally enforced. Operators running 16+ datacenter GPUs commonly offer 99.9% uptime contracts directly to enterprise renters. Talk to the platform team for fleet onboarding tooling and bulk-listing API access if you're deploying 32+ cards.
MFP daily emission reward is up to +200% of the server's rental price, paid in CLORE on top of fiat-equivalent rental income. On an H100 earning $1,479/month, the maximum MFP bonus tracks rental revenue directly, so the absolute bonus on datacenter cards dwarfs consumer-tier numbers. Stake CLORE behind your fleet at the same ratio across all servers; the bonus also zeroes the non-CLORE hoster fee.
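The cap arithmetic in that example, sketched (the function name is illustrative; CLORE-to-USD conversion and the realized percentage depend on market price, stake tier, and competition):

```python
def mfp_bonus_usd_equiv(monthly_rental_usd: float,
                        emission_pct: float = 200.0) -> float:
    """Max daily-emission bonus, expressed as USD-equivalent per month.

    Paid in CLORE; emission_pct is the cap (+200%), not a guaranteed rate.
    """
    return monthly_rental_usd * emission_pct / 100.0

# The H100 example above: $1,479/month rental -> up to $2,958/month
# equivalent in CLORE at the +200% cap.
```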
Yes. Multi-week pretraining runs on 8-GPU pods are economically catastrophic to interrupt: a checkpoint loss on a 7-day H100 run costs the renter low five figures in lost compute. Datacenter hosts with 99.9%+ verified 30-day uptime command premium pricing and clear fleet-scale demand. Reliability score on your listing is the primary signal training renters filter by.
For B200 SXM, yes: 1,000 W TDP per GPU, 8 kW per node, and liquid-cooled rear-door heat exchangers or direct-to-chip loops are effectively required. H100 and H200 air-cool in a well-designed colo (700 W TDP each), but rack density lands at 2-4 GPUs per U at most. A100 (400 W) and earlier are easier: any datacenter with 30 kW/rack handles them comfortably.
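To see how those TDPs translate into rack density, a rough sketch (the 1.5 kW per-node host overhead for CPUs, fans, and NICs is an assumed figure):

```python
def nodes_per_rack(rack_kw: float, gpu_w: int, gpus_per_node: int = 8,
                   host_overhead_w: int = 1500) -> int:
    """How many 8-GPU nodes fit a rack power budget.

    host_overhead_w (CPUs, fans, NICs per node) is an assumed figure;
    real overhead varies by chassis.
    """
    node_kw = (gpu_w * gpus_per_node + host_overhead_w) / 1000.0
    return int(rack_kw // node_kw)

# In a 30 kW rack: A100 SXM (400 W) -> 6 nodes, H100 (700 W) -> 4 nodes,
# B200 (1,000 W) -> 3 nodes, before cooling limits bite.
```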
40 GB HBM2e + 1.55 TB/s + NVLink — the canonical 7B pretraining and 13B–34B fine-tuning workhorse.
8× A100 40GB node hits ~27K tok/s aggregate — a Mistral-7B-class pretraining run completes in weeks of spot rental.
Standard 13B SFT pipeline — 40 GB fits FSDP-sharded weights + activations at 4K context. Read the guide →
Hard isolation for multi-tenant ML platforms — each MIG slice gets dedicated SMs and HBM. Read the guide →
Side-by-side specs across the datacenter tier. Click any row to see that GPU. Read the guide →
Read the host onboarding guide, MFP staking mechanics, and marketplace fee schedule.
Rent one by the minute from $0.78/hr. Spin up in 90 s with full SSH + Docker.
List your card, pick spot or on-demand, and start collecting per-minute earnings in BTC, USDT, USDC, or CLORE. Lock MFP whenever you're ready.