List your H200 on Clore.ai. 141 GB HBM3e is the spec long-context inference and 405B serving teams chase — and they pay accordingly. Net around $1,818/month per card before MFP staking, paid per minute in BTC, USDT, USDC or CLORE. Multi-card NVLink-Switch pods clear the highest premiums on the platform. Stake CLORE for up to +200% daily emission.
If you've got an H200, or a whole rack of them, you've already paid for the silicon, the power contract, and the rack space. Clore turns that capex into per-minute revenue in your choice of crypto: no sales calls, no minimum commitment.
Inference teams running Llama-3 405B INT4 cut their node count in half on H200 vs H100, and they're willing to pay the per-card premium because total cost still drops. The same goes for teams serving 1M-token contexts natively. Demand currently outstrips supply on the H200 listings.
Earnings credit to your wallet balance every minute the rental runs. Withdraw to BTC, USDT, USDC or CLORE as often as you want — no daily caps.
One H200 in a closet, or up to 192 servers per account onboarded via API. Same console, same fees, same flow.
Stake some CLORE behind your machine and the network pays you a daily bonus on top of every rental — about half of what the renter pays you, again in CLORE. Skip it and you still get paid the normal way; this just stacks more on.
Per-minute, in BTC / USDT / USDC / CLORE. Withdraw any time.
Activated by staking CLORE behind your machine. Paid daily, on top of normal earnings.
Clore takes a small cut of every rental. Half is paid by you, half by the renter — so the number below is the full marketplace fee, not what comes out of your pocket.
You pay 1.25%, the renter pays the other 1.25%. Hold CLORE to cut your share even more.
You pay 5%, the renter pays the other 5%. Hold CLORE to cut your share even more.
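The split above works out like this — a minimal sketch using the percentages quoted here (`fee_split` is an illustrative helper, not a platform API; CLORE-holder reductions are in the fee docs):

```python
# Illustrative fee-split arithmetic. Percentages are the figures quoted
# above; actual reductions for CLORE holders are in the fee docs.
def fee_split(rental_usd: float, total_fee_pct: float) -> dict:
    """Split the marketplace fee evenly between host and renter."""
    half = total_fee_pct / 2 / 100
    return {
        "host_pays": round(rental_usd * half, 2),
        "renter_pays": round(rental_usd * half, 2),
        "host_net": round(rental_usd * (1 - half), 2),
    }

print(fee_split(100.0, 2.5))   # CLORE rentals: 1.25% each side
print(fee_split(100.0, 10.0))  # other currencies: 5% each side
```

On a $100 rental paid in CLORE, the host keeps $98.75; in other currencies, $95.00.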
// Want the full breakdown of fee reductions and edge cases? Read the fee docs →
All numbers below assume an H200 listed at $2.40/hr. Real numbers depend on demand, your price, and your power costs.
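A quick way to sanity-check those numbers yourself — the utilization figures and the 1.25% host fee share are assumptions for illustration, not guarantees:

```python
# Rough monthly-earnings sketch for an H200 listed at $2.40/hr.
# Utilization and the fee share are assumptions, not platform guarantees.
RATE_USD_HR = 2.40
HOURS_PER_MONTH = 730          # average month
HOST_FEE = 0.0125              # host's 1.25% share when paid in CLORE

def monthly_net(utilization: float) -> float:
    gross = RATE_USD_HR * HOURS_PER_MONTH * utilization
    return round(gross * (1 - HOST_FEE), 2)

print(monthly_net(1.0))   # fully booked
print(monthly_net(0.7))   # 70% occupancy
```

Swap in your own listed price and expected occupancy to get a realistic floor before MFP rewards.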
List your card, accept rentals. No CLORE required, no setup beyond the host software. Get paid per minute.
Hold CLORE in your wallet — no lock, no contract. Your half of the marketplace fee drops by up to 50%.
Stake CLORE behind your server to unlock a daily network bonus on top of your rental — adds about half your rental income again, paid in CLORE.
// Numbers are a rough monthly estimate, not a guarantee — bonus depends on competition. Full host guide →
A Linux box, the Clore hosting software, a stable internet connection. One H200 or a tier-3 facility — same flow.
Boot from the Clore Linux image (USB or PXE). Pair the host with your account using your initialization token.
Configure SSH, Docker, and per-card settings. Flip the server to public when it passes the auto-attestation.
List both, or just one. Adjust live — the floor for H200 spot is $2.95/hr right now.
Lock 52,000 CLORE for Tier 1 (or up to 260,000 for Tier 2). After a 24 h warm-up, earn up to +200% of your rental price as daily rewards.
Whenever memory bandwidth or VRAM is the bottleneck — 141 GB HBM3e at 4.8 TB/s eliminates KV-cache offload, fits 405B INT4 across 4 cards instead of 8, and runs 1M-token contexts native. Same compute as H100, but the memory upgrade is significant for serving.
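The "4 cards instead of 8" figure checks out on a napkin — a weights-only sketch (`weight_gb` is an illustrative helper; KV cache and activations consume the remaining headroom in practice):

```python
# Back-of-envelope VRAM check for the 405B INT4 claim above.
# Weights only; the headroom left over is what serving actually
# spends on KV cache and activations.
def weight_gb(params_b: float, bits: int) -> float:
    """Model-weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_b * 1e9 * bits / 8 / 1e9

w405 = weight_gb(405, 4)        # ~202.5 GB of INT4 weights
h200_pod = 4 * 141              # 564 GB across 4x H200
h100_pod = 8 * 80               # 640 GB across 8x H100

print(f"weights:          {w405:.1f} GB")
print(f"4x H200 headroom: {h200_pod - w405:.1f} GB")  # room for KV cache
print(f"8x H100 headroom: {h100_pod - w405:.1f} GB")
```

Four H200s hold the weights with roughly 360 GB to spare for batched KV cache — comparable headroom to an 8-card H100 node, at half the node count.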
Datacenter-class power (208/415V three-phase recommended for 8-GPU SXM pods at 5.6 kW for H100/H200 and 8 kW for B200), liquid cooling for B200 SXM, redundant 100 Gbps networking, and a host that can saturate it. PCIe variants run on dual-socket x86 servers with 1500-2000W PSUs. Most CLORE.AI datacenter hosts deploy these in colo cages, not home racks.
CLORE.AI is a marketplace: SLAs are negotiated host-to-renter through on-demand reserved listings, not centrally enforced. Operators running 16+ datacenter GPUs commonly offer 99.9% uptime contracts directly to enterprise renters. Talk to the platform team for fleet-onboarding tooling and bulk-listing API access if you're deploying 32+ cards.
The MFP daily emission reward is up to +200% of the server's rental price, paid in CLORE on top of fiat-equivalent rental income. The maximum bonus tracks rental revenue directly, so on an H100 earning $1,479/month the absolute bonus dwarfs consumer-tier numbers. Stake CLORE behind your fleet at the same ratio across all servers; staking also zeroes the non-CLORE hoster fee.
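In numbers, the ceiling works out like this — a hedged sketch where `max_pct` is the up-to-200% figure from the text and the real daily emission varies with stake tier and network competition (`mfp_monthly_ceiling` is an illustrative helper, not a platform API):

```python
# Upper bound on the CLORE-denominated monthly MFP bonus.
# max_pct is the "up to +200%" figure quoted above; actual emission
# depends on stake tier and network competition.
def mfp_monthly_ceiling(monthly_rental_usd: float,
                        max_pct: float = 200.0) -> float:
    return monthly_rental_usd * max_pct / 100.0

print(mfp_monthly_ceiling(1_479))  # H100 example from the text
```

An H100 earning $1,479/month caps out at $2,958-worth of CLORE in bonus; the same ratio on a higher-earning H200 scales proportionally.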
Yes. Multi-week pretraining runs on 8-GPU pods are economically catastrophic to interrupt: a checkpoint loss on a 7-day H100 run costs the renter low five figures in lost compute. Datacenter hosts with 99.9%+ verified 30-day uptime command premium pricing and clear fleet-scale demand. The reliability score on your listing is the primary signal training renters filter by.
For B200 SXM, yes: at 1,000 W TDP per GPU and 8 kW per node, liquid-cooled rear-door heat exchangers or direct-to-chip loops are effectively required. H100 and H200 air-cool in a well-designed colo (700 W TDP each), but rack density lands at 2-4 GPUs per U at most. A100 (400 W) and earlier are easier: any datacenter with 30 kW/rack handles them comfortably.
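The power figures above follow directly from per-GPU TDPs — a quick check (GPU draw only; CPUs, NICs, and fans typically add 20-30% on top):

```python
# GPU-only power budget per SXM pod, using the TDPs quoted above.
# Host-side components add a further 20-30% in practice.
GPU_TDP_W = {"A100": 400, "H100": 700, "H200": 700, "B200": 1000}

def pod_gpu_kw(gpu: str, count: int = 8) -> float:
    """GPU power for an n-GPU pod, in kW."""
    return GPU_TDP_W[gpu] * count / 1000

print(pod_gpu_kw("H200"))  # 5.6 kW per 8-GPU pod
print(pod_gpu_kw("B200"))  # 8.0 kW per 8-GPU pod
```

That 5.6 kW vs 8 kW gap is exactly why H200 pods stay air-coolable while B200 pods push hosts to liquid cooling.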
141 GB HBM3e at 4.8 TB/s — same compute as H100, but the memory upgrade transforms 70B+ serving and 1M-token contexts.
141 GB per card fits 405B INT4 in 4 cards instead of 8 H100s — cuts node count in half for the same model.
141 GB fits 70B FP16 + 64-request KV cache — no quantization, no model splitting. Read the guide →
4.8 TB/s HBM3e bandwidth and 141 GB enable native 1M-token contexts without offload — the H200's signature workload. Read the guide →
Side-by-side specs across the datacenter tier. Click any row to see that GPU. Read the guide →
Read the host onboarding guide, MFP staking mechanics, and marketplace fee schedule.
Rent one by the minute from $2.40/hr. Spin up in 90 s with full SSH + Docker.
List your card, pick spot or on-demand, and start collecting per-minute earnings in BTC, USDT, USDC, or CLORE. Lock MFP whenever you're ready.