Rent an RTX 4090 — the most-rented consumer card on the network since 2023. 24 GB GDDR6X at 1,008 GB/s, 16,384 Ada cores. Production Flux batch-4, 8B FP16 vLLM serving at 2,200 tok/s, 34B QLoRA, single-card Hunyuan video, dual-card 70B INT4. Spun up in under 90 seconds, billed per minute, paid in BTC, USDT/USDC, or CLORE. The reference consumer GPU for ComfyUI commercial work.
The RTX 4090 has been Clore's most-rented card since launch. It handles 13B–34B LLM fine-tunes, SDXL pipelines, and 4K renders without breaking a sweat.
Battle-tested inventory across 34 regions, 312+ live listings on a typical day, spot floor near $0.31/hr. The de facto reference for ComfyUI commercial pipelines, 34B QLoRA, and dual-card 70B INT4. Whatever you want to run, somebody has already published the recipe for it on a 4090.
SDXL, Flux.1 dev, ComfyUI workflows at production scale. Blender Cycles with OptiX turning out 4K frames at ~58 s/frame.
vLLM and TGI containers run Llama-3 8B FP16, or 70B INT4 across 2× 4090. Real throughput, real cost-per-token.
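For a back-of-envelope sense of cost-per-token, combine the two figures quoted on this page — the ~$0.31/hr spot floor and 2,200 tok/s for Llama-3 8B FP16. Actual rates and throughput vary by host, batch size, and context length; this is a sketch, not a quote:

```python
# Back-of-envelope cost per million tokens for Llama-3 8B FP16 on one
# RTX 4090, using the spot floor and throughput quoted on this page.
SPOT_USD_PER_HR = 0.31      # marketplace spot floor (varies by host)
TOKENS_PER_SEC = 2200       # quoted vLLM serving throughput

tokens_per_hour = TOKENS_PER_SEC * 3600
usd_per_million_tokens = SPOT_USD_PER_HR / (tokens_per_hour / 1_000_000)

print(f"${usd_per_million_tokens:.4f} per 1M tokens")  # roughly $0.039
```

At those numbers, a million generated tokens costs around four cents of GPU time.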
The 4090 is the sweet spot — datacenter-class throughput at consumer pricing. Specs from Nvidia's reference sheet; pricing is the lowest live rate right now.
// prices are spot-market lows · refreshed every 60 s
Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.
No sales call. No quota request. No three-week procurement. Four steps are all you need.
Filter the marketplace by RTX 4090, country, GPU count, reliability score, network speed.
Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.
You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.
Per-minute billing rounds to the second. Stop the instance and the meter stops with it.
Yes — Llama-3 70B INT4 fits across two 4090s with tensor parallelism via vLLM or ExLlamaV2. For single-card 70B you'll want an H100 or H200. 13B and 34B fit comfortably on one 4090.
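Why two cards: a rough VRAM estimate makes it clear. Assuming ~0.5 bytes per weight for INT4 and a ballpark 20% overhead for KV cache and activations (actual usage depends on context length and inference engine — these are illustrative assumptions, not official figures):

```python
# Rough VRAM sketch for a 70B model at INT4. Assumptions (illustrative
# only): 0.5 bytes/weight, ~20% extra for KV cache and activations.
PARAMS_BILLIONS = 70
BYTES_PER_WEIGHT = 0.5          # INT4 quantization
GB_PER_4090 = 24

weights_gb = PARAMS_BILLIONS * BYTES_PER_WEIGHT   # 35.0 GB of weights
total_gb = weights_gb * 1.2                       # ~42 GB with overhead

print(total_gb <= GB_PER_4090)        # False -- too big for one card
print(total_gb <= 2 * GB_PER_4090)    # True  -- fits across two cards
```

The same arithmetic shows why 13B and 34B quantized models sit comfortably on a single 24 GB card.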
Consumer cards on CLORE.AI cover most hobby and indie workflows: Stable Diffusion 1.5 and SDXL, ComfyUI/Automatic1111, Flux.1, LoRA and QLoRA fine-tuning of 7B–13B LLMs, Whisper transcription, video transcoding, Blender Cycles, and game-server hosting. Anything that fits in 8–32 GB VRAM and runs in Docker runs here. You get full root SSH plus a Jupyter template if you want one.
Cold-start lands in roughly 60–90 seconds for a typical Docker image: server allocation, container pull, GPU passthrough, SSH up. Pre-cached templates (PyTorch, ComfyUI, vLLM, Ollama) are faster because the image is already on the host. Once running you pay per minute, so a 10-minute experiment costs ten minutes of rental, not an hour.
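The per-minute claim is easy to sanity-check. A minimal sketch at the $0.31/hr spot floor quoted above (rates vary by listing):

```python
# Per-second metering sketch: cost = hourly rate * seconds / 3600.
# $0.31/hr is the spot floor quoted on this page; listings vary.
def rental_cost_usd(seconds: int, usd_per_hr: float = 0.31) -> float:
    """Bill only the time used -- no rounding up to the hour."""
    return usd_per_hr * seconds / 3600

print(f"${rental_cost_usd(10 * 60):.4f}")   # 10-minute experiment
print(f"${rental_cost_usd(60 * 60):.4f}")   # a full hour, for comparison
```

A ten-minute experiment comes out to about five cents instead of a full hourly charge.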
On-demand is a fixed per-hour price the host sets; the rental cannot be revoked while you have funds. Spot is auction-style: you bid, the highest bidder runs, and a higher bidder can preempt you. Spot is typically 30–50% cheaper. CLORE.AI charges 2.5% on spot and 10% on on-demand, split 50/50 with the host.
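To make the fee structure concrete, a small sketch using the percentages above (2.5% spot, 10% on-demand, fee split 50/50 between platform and host; the $100 rental is an illustrative figure):

```python
# Marketplace fee sketch: spot 2.5%, on-demand 10%, fee split 50/50
# between platform and host (percentages from this page).
SPOT_FEE_PCT = 2.5
ON_DEMAND_FEE_PCT = 10.0

def fee_breakdown(rental_usd: float, fee_pct: float):
    """Return (total fee, platform share, host share) for a rental."""
    fee = rental_usd * fee_pct / 100
    return fee, fee / 2, fee / 2

print(fee_breakdown(100.0, SPOT_FEE_PCT))       # (2.5, 1.25, 1.25)
print(fee_breakdown(100.0, ON_DEMAND_FEE_PCT))  # (10.0, 5.0, 5.0)
```

On a $100 spot rental the total marketplace fee is $2.50; the same rental on-demand carries a $10 fee.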
Spot prices on CLORE.AI usually beat RunPod community pricing because there is no centralized markup; you rent directly from the host with a 2.5% spot fee. Vast.ai is the closest comparison, and on consumer cards CLORE.AI is generally within a few cents per hour. Hold CLORE in your wallet for Proof of Holding and you get up to 50% off the marketplace fee.
Yes. Point at any registry (Docker Hub, GHCR, Quay, or your private registry), then set env vars, port forwards, and your SSH public key in the rent dialog. Templates on the platform are just preset configs; nothing is locked down. You get full root inside the container with GPU passthrough.
24 GB GDDR6X and 1 TB/s bandwidth — the canonical consumer card for Flux production, 34B QLoRA, and 70B INT4.
Production-ready Flux pipeline — 4090 is the de facto reference card for ComfyUI commercial work in 2026.
Serves a small product or feature with one card; scales horizontally by adding cards behind a load balancer. Read the guide →
24 GB is the floor for Hunyuan; 1× 4090 generates short clips, and multi-card setups scale linearly. Read the guide →
Side-by-side specs across the consumer tier. Click any row to see that GPU. Read the guide →
Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the docker image, ship in minutes.
Per-minute payouts in BTC, USDT, USDC or CLORE. No listing fee, no contracts, withdraw any time.
Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.