RTX 3090 · live marketplace — from $0.18 / hr

Rent an RTX 3090
by the minute.
Not the month.

Rent an RTX 3090 for the cheapest 24 GB of VRAM on the consumer tier. 24 GB of GDDR6X at 936 GB/s runs Flux.1 image generation at 1024², 13B QLoRA fine-tuning with bitsandbytes, and dual-card 70B INT4 serving via ExLlamaV2 with tensor parallelism. Spun up in under 90 seconds, billed per minute, paid in BTC, USDT/USDC, or CLORE. The legacy value pick of 2026.

Per-minute billing · SSH + Docker + Jupyter · Spot & on-demand · 42 regions
SHELL ~/projects/finetune
# install the official Python SDK + CLI
$ pip install clore-ai
$ clore search --gpu "RTX 3090" --max-price 2.0
486 servers · cheapest spot $0.18/hr · cheapest on-demand $0.27/hr
$ clore deploy 38201 --image cloreai/ubuntu22.04-cuda12 --type on-demand --currency bitcoin
order #82144 created · waiting for boot…
running · ssh ready
$ clore ssh 82144
GPU
RTX 3090 ×1
VRAM
24 GB
Rate
$0.27/hr
Status
Running
$0.18/hr
Starting spot price
24 GB
GDDR6X VRAM per card
42
Regions with 3090 hosts
<90s
Cold-start to ready
workloads

The budget workhorse,
still going strong.

Six years in, the 3090 is still one of the most rented cards on the network. NVLink-capable, 24 GB, and cheap — ideal for hobbyists, students, and side projects.

Cheapest 24 GB VRAM you can rent

Same VRAM ceiling as a 4090 at roughly 60% of the price. No FP8 support and lower memory bandwidth, but for budget-conscious 13B–34B QLoRA, Flux.1 image generation, and dual-card 70B INT4 serving, the 3090 is the value pick — and there is plenty of supply on the spot market.

Llama-3 8B QLoRA ~7.8k tok/s

Diffusion & rendering

SD 1.5, SDXL, ComfyUI workflows. Blender Cycles with OptiX delivers solid 1080p–4K renders at hobbyist-friendly cost.

SDXL 1024² batch-2 1.8 it/s
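
Prefer a script to a GUI? Below is a minimal diffusers sketch for SDXL at 1024² on a rented 3090. The diffusers library is our choice for the example (the number above is from a ComfyUI run), and the prompt is purely illustrative:

# Minimal SDXL text-to-image on a 24 GB card; assumes pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # fp16 keeps the full pipeline well under 24 GB
    variant="fp16",
)
pipe.to("cuda")

image = pipe(
    "a watercolor fox in a misty forest",
    height=1024,
    width=1024,
    num_inference_steps=30,
).images[0]
image.save("fox.png")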

Inference at scale

vLLM and TGI containers run 7B–13B FP16 models with comfortable batch sizes. The cheapest path to production-grade open-source inference.

Mistral-7B FP16 64 tok/s/user
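
Once a container is running, serving a 7B with vLLM's offline Python API takes a few lines. The model id is an example; any 7B–13B FP16 checkpoint that fits in 24 GB works:

# Single-GPU vLLM inference; assumes pip install vllm inside the container
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # example checkpoint
    dtype="float16",
    gpu_memory_utilization=0.90,  # leave a little headroom on the 24 GB card
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize QLoRA in two sentences."], params)
print(outputs[0].outputs[0].text)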
why 3090

Same VRAM as the 4090,
at half the price.

Older silicon, but 24 GB is 24 GB. For workloads that fit, the 3090 is the cheapest path to a real GPU. Specs from Nvidia's reference sheet.

                      RTX 3090       RTX 4090       A100 80GB     H100 80GB
Architecture          Ampere         Ada Lovelace   Ampere        Hopper
CUDA cores            10,496         16,384         6,912         14,592
VRAM                  24 GB GDDR6X   24 GB GDDR6X   80 GB HBM2e   80 GB HBM3
Memory bandwidth      936 GB/s       1,008 GB/s     1,935 GB/s    2,000 GB/s
FP16 / BF16 (dense)   ~71 TFLOPS     ~165 TFLOPS    312 TFLOPS    756 TFLOPS
From / hr (spot)      $0.18          $0.31          $1.20         $2.40

// prices are spot-market lows · refreshed every 60 s
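
One way to read that table is price per gigabyte of VRAM per hour. A quick back-of-the-envelope in Python, using the spot floors above:

# Spot price per GB of VRAM per hour, from the table above
cards = {
    "RTX 3090": (0.18, 24),
    "RTX 4090": (0.31, 24),
    "A100 80GB": (1.20, 80),
    "H100 80GB": (2.40, 80),
}
for name, (usd_per_hr, vram_gb) in cards.items():
    print(f"{name}: ${usd_per_hr / vram_gb:.4f} per GB·hr")
# RTX 3090 lands at $0.0075 per GB·hr, the cheapest VRAM in the lineup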

pricing

Two ways to rent.
Pay only for the minutes you use.

Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.

Spot

$0.18 / hr
≈ 0.0000016 BTC · 120 CLORE
  • Lowest possible rate
  • Per-minute billing
  • Can be preempted by a higher bid or an on-demand rental
  • Best for batch training, rendering
Browse spot 3090s
MOST RENTED

On-demand

$0.27 / hr
≈ 0.0000027 BTC · 200 CLORE
  • Guaranteed availability
  • No preemption, ever
  • Per-minute billing
  • Best for inference, dev work, demos
Rent on-demand
Pay with
Bitcoin on-chain
CLORE native token
USDT / USDC ERC-20 · BEP-20
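
What does a real job cost? A sketch using the floors above and the fee rates quoted in the FAQ below (2.5% spot, 10% on-demand). Whether the fee is added on top of the listed price or taken from the host's side is a platform detail, so treat this as a rough upper bound:

# Estimated cost of a 6-hour fine-tuning run on one 3090
hours = 6
spot = hours * 0.18 * 1.025       # marketplace floor + assumed 2.5% spot fee, ~$1.11
on_demand = hours * 0.27 * 1.10   # floor + assumed 10% on-demand fee, ~$1.78
print(f"spot ≈ ${spot:.2f}, on-demand ≈ ${on_demand:.2f}")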
workflow

Four steps to a running 3090.

No sales call. No quota request. No three-week procurement. The first four commands are all you need.

01 / FILTER

Pick your card

Filter the marketplace by RTX 3090, country, GPU count, reliability score, network speed.

02 / RENT

Click rent

Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.

$ clore rent --gpu "RTX 3090"
03 / CONNECT

SSH or Jupyter

You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.
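
If you script your launches, you can sanity-check the GPU the moment SSH comes up. A paramiko sketch; the host, port, and key path are placeholders you copy from your order page:

import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "203.0.113.7",                # placeholder: your server's public endpoint
    port=2222,                    # placeholder: the forwarded SSH port
    username="root",
    key_filename=os.path.expanduser("~/.ssh/clore_ed25519"),  # placeholder key
)
_, stdout, _ = client.exec_command(
    "nvidia-smi --query-gpu=name,memory.total --format=csv,noheader"
)
print(stdout.read().decode())     # expect something like: NVIDIA GeForce RTX 3090, 24576 MiB
client.close()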

04 / STOP

Stop anytime

Billing is metered by the minute: stop the instance and the meter stops with it.

faq

Questions hosts and renters ask.

Why pick a 3090 over a 4090?

The 3090 has the same 24 GB VRAM as the 4090 at roughly 60% of the rental price. Slower memory bandwidth (936 vs 1,008 GB/s) and no FP8, but for budget-sensitive 24 GB workloads it's the value pick.

What can I actually run on a consumer GPU on CLORE?

Consumer cards on CLORE.AI cover most hobby and indie workflows: Stable Diffusion 1.5 and SDXL, ComfyUI/Automatic1111, Flux.1, LoRA and QLoRA fine-tuning of 7B–13B LLMs, Whisper transcription, video transcoding, Blender Cycles, and game-server hosting. Anything that fits in 8–32 GB VRAM and runs in Docker runs here. You get full root SSH plus a Jupyter template if you want one.

How fast does a rented server actually boot?

Cold-start lands in roughly 60–90 seconds for a typical Docker image: server allocation, container pull, GPU passthrough, SSH up. Pre-cached templates (PyTorch, ComfyUI, vLLM, Ollama) are faster because the image is already on the host. Once running you pay per minute, so a 10-minute experiment costs ten minutes of rental, not an hour.

Spot vs on-demand: what's the difference?

On-demand is a fixed per-hour price the host sets; the rental cannot be revoked while you have funds. Spot is auction-style: you bid, the highest bidder runs, and a higher bidder can preempt you. Spot is typically 30–50% cheaper. CLORE.AI charges 2.5% on spot and 10% on on-demand, split 50/50 with the host.

Is CLORE.AI cheaper than RunPod or Vast.ai?

Spot prices on CLORE.AI usually beat RunPod community pricing because there is no centralized markup; you rent directly from the host with a 2.5% spot fee. Vast.ai is the closest comparison, and on consumer cards CLORE.AI is generally within a few cents per hour. Hold CLORE in your wallet for Proof of Holding and you stack up to 50% off the marketplace fee.

Can I bring my own Docker image and SSH key?

Yes. Point at any registry (Docker Hub, GHCR, Quay, or your private registry), then set env vars, port forwards, and your SSH public key in the rent dialog. Templates on the platform are just preset configs; nothing is locked down. You get full root inside the container with GPU passthrough.

workload spotlight

Real numbers on the RTX 3090.

24 GB on a consumer card unlocks 13B–34B QLoRA, Flux.1 image generation, and dual-GPU 70B INT4 — the legacy value pick of 2026.

Flux.1 dev 1024²
ComfyUI + fp8 dev checkpoint
~1.6 s/it @ 1024² batch 1

24 GB lets you keep the T5-XXL text encoder resident, cutting cold-start latency versus swapping it in and out on a 16 GB card.

Read the guide →
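
For a scripted alternative to the ComfyUI setup benchmarked above, here is a diffusers sketch. It is not the fp8 configuration that produced the numbers: FLUX.1-dev in bf16 does not fit in 24 GB outright, so the sketch leans on CPU offload, and the repo is gated on Hugging Face, so accept the license first:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # gated repo: accept the license on Hugging Face
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()       # keeps peak VRAM inside the 3090's 24 GB

image = pipe(
    "product photo of a ceramic mug, studio lighting",
    height=1024,
    width=1024,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mug.png")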
13B QLoRA fine-tune
PEFT + bitsandbytes 4-bit + Flash Attn 2
~1,900 tokens/s, ~14 GB peak VRAM

Standard 13B QLoRA fits with 4K context and gradient checkpointing — a complete fine-tune in a few hours of spot.

Read the guide →
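
The core of that recipe in code. A minimal PEFT + bitsandbytes sketch; the base model id and LoRA hyperparameters are illustrative, not the exact benchmark config:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# NF4 quantization: a 13B base loads in roughly 7-8 GB, leaving room for activations
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",              # illustrative 13B base (gated repo)
    quantization_config=bnb,
    attn_implementation="flash_attention_2",  # matches the Flash Attn 2 setup above
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters train; the base stays frozen in 4-bit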
70B INT4 across 2× 3090
ExLlamaV2 + tensor parallel
~22 tok/s aggregated

Two spot 3090s rent for about the price of a single 4090, and their combined 48 GB lets a solo developer serve 70B INT4 with 32K context, which no single 24 GB card can do.

Read the guide →
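
Why two cards at all: the INT4 weights alone overflow any single 24 GB GPU. A rough estimate in Python (quantization overhead and KV-cache sizing are simplified assumptions):

# 70B parameters at ~4.25 bits/weight (INT4 plus quantization overhead)
weights_gb = 70e9 * 4.25 / 8 / 1e9   # ~37 GB: already past one 24 GB card
kv_cache_gb = 4.0                    # rough allowance for 32K context with GQA
total_gb = weights_gb + kv_cache_gb
print(f"~{total_gb:.0f} GB needed vs 48 GB across 2x 3090")  # ~41 GB: fits with headroom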
consumer comparison

Consumer-tier comparison.

Side-by-side specs across the consumer tier. Click any row to see that GPU.

GPU                    VRAM           CUDA cores   FP16 TFLOPS (tensor, sparse)   Mem BW (GB/s)   Spot $/hr   SDXL 1024² it/s   Llama-3 8B tok/s
RTX 3070               8 GB GDDR6     5,888        ~80                            448             $0.10       ~1.4              ~50
RTX 3080               10 GB GDDR6X   8,704        ~119                           760             $0.14       ~2.0              ~85
RTX 3090 (this page)   24 GB GDDR6X   10,496       ~142                           936             $0.18       ~3.0              ~110
RTX 4070               12 GB GDDR6X   5,888        ~117                           504             $0.16       ~2.5              ~60
RTX 4080               16 GB GDDR6X   9,728        ~195                           716             $0.27       ~4.5              ~95
RTX 4090               24 GB GDDR6X   16,384       ~330                           1,008           $0.31       ~7.5              ~125
RTX 5080               16 GB GDDR7    10,752       ~225                           960             $0.28       ~5.5              ~115
RTX 5090               32 GB GDDR7    21,760       ~419                           1,792           $0.39       ~10.0             ~180
RTX 4070 Ti            12 GB GDDR6X   7,680        ~160                           504             $0.20       ~3.2              ~75
workload guides

Run these on your rented RTX 3090.

Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the Docker image, ship in minutes.

Image Generation
Flux.1 on CLORE.AI
Run Black Forest Labs' Flux for state-of-the-art image gen.
Image Generation
ComfyUI on CLORE.AI
Node-based pipeline for SDXL, Flux, and SD3.
Language Models
text-gen WebUI
The oobabooga WebUI for chat, RAG, and agents.
Training
DreamBooth training
Fine-tune SDXL on your subject with DreamBooth.
Language Models
ExLlamaV2 fast inference
Fastest GPTQ/EXL2 inference for consumer GPUs.
Training
Kohya SS LoRA training
The standard SDXL LoRA training pipeline.
Comparisons
LLM serving comparison
See all guides →
other gpus

Compare with similar cards.

RTX 4080
16 GB · from $0.27/hr
Rent →
RTX 4090
24 GB · from $0.31/hr
Rent →

Your training run
is 90 seconds away.

Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.