RTX 6000 Ada · live marketplace — from $0.39 / hr

Rent an RTX 6000 Ada
by the minute.
Not the month.

Rent an RTX 6000 Ada: workstation-tier 48 GB ECC with Ada FP8 tensor paths. 48 GB ECC + Ada FP8 + 3rd-gen RT cores at 300 W, delivering roughly 2× the transformer throughput of prior-gen 48 GB workstation silicon, single-card 70B serving at ~620 tok/s (4-bit weights, FP8 KV cache), and production text-to-3D rendering with denoised path tracing. Billed per minute, paid in BTC, USDT/USDC, or CLORE. The studio upgrade path when prior workstation throughput becomes the bottleneck.

Per-minute billing · SSH + Docker + Jupyter · Spot & on-demand · 26 regions
SPECIFICATION SHEET
RTX 6000 Ada · PCIe 4.0
NV-RTX-6000 Ada-48G REV B · 04.2026
Architecture Ada Lovelace · AD102
Process TSMC 4N (5 nm-class)
Memory 48 GB GDDR6 · ECC
Bandwidth 960 GB/s
CUDA cores 18,176
Tensor cores · gen 4 568
RT cores · gen 3 142
FP32 · TF32 (tensor) 91.1 / 182.5 TFLOPS
NVLink none · PCIe peer-to-peer
TDP · form factor 300 W · 2-slot FH/FL
Display out 4× DisplayPort 1.4a
ISV certifications 42 (CAD / DCC / med)
SPOT $0.49/hr
ON-DEMAND $0.79/hr
RESERVED · 30D $0.61/hr
ANNUAL $0.39/hr
$0.39/hr
Lowest marketplace rate
48GB
GDDR6 ECC VRAM per card
26
Regions with 6000 Ada hosts
<90s
Cold-start to ready
workloads

48 GB of pro VRAM,
without HBM pricing.

6000 Ada is the workstation-grade 48 GB card — twice the memory of a 3090, all with ECC, all on certified drivers. Mid-tier in price, no-nonsense in production.

Ada FP8 on a workstation 48 GB card

Roughly 2× A6000 transformer throughput at the same 300 W envelope and 48 GB ECC, plus 3rd-gen RT cores for VFX. The studio bridge between A6000-class workloads and full H100 pricing: FP8 activation paths plus 4-bit weight quantization fit 70B serving on a single card without hyperscaler-tier rates.

Yi-34B QLoRA ~5.8k tok/s
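For the curious: a minimal pure-Python sketch of what FP8 E4M3 weight quantization does: absmax scaling into the E4M3 range, then rounding to a 3-bit mantissa. This is an illustrative simulation (subnormals ignored, not bit-exact), not the tensor-core implementation.

```python
import math

E4M3_MAX = 448.0  # largest normal value representable in FP8 E4M3

def quantize_e4m3(w):
    """Simplified per-tensor FP8 simulation: scale into the E4M3
    range, then keep a 3-bit mantissa. Illustrative, not bit-exact."""
    scale = max(abs(v) for v in w) / E4M3_MAX
    out = []
    for v in w:
        m, e = math.frexp(v / scale)   # m in [0.5, 1), v/scale = m * 2**e
        m = round(m * 16) / 16         # round mantissa to 3 explicit bits
        out.append(math.ldexp(m, e) * scale)
    return out, scale

w = [0.1, -0.5, 3.14159, 448.0]
q, s = quantize_e4m3(w)
```

The worst-case relative error of a 3-bit mantissa is a few percent per weight, which transformer inference tolerates well; that is why FP8 halves memory and roughly doubles tensor throughput versus FP16 without retraining in many serving setups.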

VFX & rendering

48 GB is enough for production-scale Houdini scenes, Blender Cycles with full geometry, and 8K video pipelines. OptiX accelerated.

V-Ray 5 GPU ~1,750 vray

Multi-tenant serving

48 GB hosts 13B FP16 + 7B FP16 on the same card with batched serving. ECC catches the bit flips that crash long-running endpoints.

Llama-3 8B FP16 52 tok/s/user
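The co-residency claim is simple arithmetic. A hedged back-of-envelope check (weights only; KV cache and activations consume the remainder):

```python
def fp16_weights_gib(params_billion):
    """FP16 stores 2 bytes per parameter; 1 GiB = 2**30 bytes."""
    return params_billion * 1e9 * 2 / 2**30

# Two models co-resident on one 48 GB card
weights = fp16_weights_gib(13) + fp16_weights_gib(7)
headroom = 48 - weights  # left over for KV cache and activations
```

Roughly 37 GiB of weights, leaving double-digit gigabytes for batched KV cache, which is what makes multi-tenant serving on a single card workable.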
why 6000 Ada

48 GB ECC
at the right price.

When 24 GB is a constraint and you don't need HBM bandwidth, 6000 Ada is the answer. The cheapest path to a single-card 48 GB workstation on the marketplace.

GPU RTX 6000 Ada · RTX A5000 · A100 40GB
Architecture Ada Lovelace · Ampere · Ampere
CUDA cores 18,176 · 8,192 · 6,912
VRAM 48 GB GDDR6 ECC · 24 GB GDDR6 ECC · 40 GB HBM2
Memory bandwidth 960 GB/s · 768 GB/s · 1,555 GB/s
FP16 / BF16 tensor (dense) ~182 TFLOPS · ~111 TFLOPS · 312 TFLOPS
From / hr (spot) $0.39 · $0.22 · $0.68

// prices are spot-market lows · refreshed every 60 s

pricing

Two ways to rent.
Pay only for the minutes you use.

Every server is priced by its host. These are the live floors across the marketplace — you'll see hundreds of variants once you're in.

Spot

$0.39 / hr
≈ 0.000005 BTC · 367 CLORE
  • Lowest possible rate
  • Per-minute billing
  • Can be interrupted by on-demand renter
  • Best for batch training, rendering
Browse spot 6000 Adas
MOST RENTED

On-demand

$0.58 / hr
≈ 0.00001 BTC · 733 CLORE
  • Guaranteed availability
  • No preemption, ever
  • Per-minute billing
  • Best for inference, dev work, demos
Rent on-demand
Pay with
Bitcoin on-chain
CLORE native token
USDT / USDC ERC-20 · BEP-20
workflow

Four steps to a running 6000 Ada.

No sales call. No quota request. No three-week procurement. Four steps and you're running.

01 / FILTER

Pick your card

Filter the marketplace by RTX 6000 Ada, country, GPU count, reliability score, network speed.

02 / RENT

Click rent

Choose a Docker image — PyTorch, vLLM, ComfyUI, Blender — or paste your own.

$ clore rent --gpu "RTX 6000 Ada"
03 / CONNECT

SSH or Jupyter

You get a public endpoint, an SSH key, and Jupyter on port 8888 in under 90 s.

04 / STOP

Stop anytime

Per-minute billing rounds to the second. Stop the instance and the meter stops with it.
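The metering model above reduces to one line of arithmetic. A sketch (the $0.58/hr figure is the on-demand floor quoted on this page, not a fixed rate):

```python
def session_cost_usd(seconds, usd_per_hour):
    """Per-second proration: stop the instance and the meter stops."""
    return round(usd_per_hour * seconds / 3600, 6)

# 17 minutes of on-demand time at the $0.58/hr floor
cost = session_cost_usd(17 * 60, 0.58)
```

Seventeen minutes costs about 16 cents; there is no hourly or monthly minimum to amortize.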

faq

Questions hosts and renters ask.

How does the RTX 6000 Ada compare to the A6000?

Roughly 2× transformer throughput at the same 300 W envelope and 48 GB ECC. Ada's FP8 tensor cores and 3rd-gen RT cores make it the upgrade for studios already running A6000-class workloads who need more headroom without moving to H100 pricing.

Why pick an ECC pro card over a consumer 4090 or 5090?

ECC memory catches and corrects single-bit errors in flight: mandatory for production CAD pipelines, V-Ray and Octane farms, regulated medical or financial ML, and any research where bit-flip integrity affects results. Pro cards (A4000/A5000/A6000/RTX 6000 Ada/A40) also carry ISV certifications consumer cards do not. If your client SLA references ECC or ISV validation, a consumer 4090 is disqualified.
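You can verify ECC health on a rented host with `nvidia-smi --query-gpu=name,ecc.errors.corrected.volatile.total --format=csv`. A small sketch that parses that CSV shape (the sample string below is a hypothetical capture, not live output):

```python
import csv
import io

# Hypothetical sample of the nvidia-smi CSV query output;
# on a real host you would capture this from the command itself.
SAMPLE = """name, ecc.errors.corrected.volatile.total
NVIDIA RTX 6000 Ada Generation, 0
"""

def corrected_ecc_errors(csv_text):
    """Map each GPU name to its corrected-ECC-error count."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return {r[0].strip(): int(r[1]) for r in rows[1:] if r}

counts = corrected_ecc_errors(SAMPLE)
```

A nonzero corrected count means ECC is doing its job; uncorrected errors are the ones that would have crashed a consumer card's long-running endpoint.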

Are these cards ISV-certified for V-Ray, Octane, and SolidWorks?

The NVIDIA RTX A-series and RTX 6000 Ada carry full ISV certifications: V-Ray, Octane, SolidWorks, Rhino, DaVinci Resolve, ANSYS, COMSOL, and the Adobe Creative Cloud chain. Consumer Ada cards (4090/5090) are not on those lists. If your renderer's support matrix excludes GeForce, you need a pro card: exactly what CLORE.AI lists in this tier.

Can I run multi-GPU NVLink workloads on pro cards?

Yes: the A5000, A6000, and A40 expose NVLink in pairs (no NVSwitch fabric), giving 112 GB/s peer bandwidth and a unified pool across two cards (48 GB on an A5000 pair, 96 GB on an A6000 pair). Filter by 'NVLink' in the marketplace to find listings. The RTX 6000 Ada has no NVLink connector; it pairs over PCIe with FSDP or tensor parallelism.

How do these compare to A100 and H100 for studios?

Pro cards (A6000 / RTX 6000 Ada / A40) give you 48 GB ECC at one-quarter to one-third the rental price of an A100 80GB and one-fifth of an H100. You give up HBM bandwidth (and, on the Ampere cards, FP8 tensor paths), but for production rendering, virtual workstations, and 13B-34B inference under ECC the pro tier hits the price-performance sweet spot.

Are pro GPUs quieter than 4090s in shared studio environments?

The cards themselves run cooler and quieter at lower TDP: the A4000 is single-slot 140 W, the A5000 dual-slot 230 W, and the A6000 300 W with a blower-style cooler designed for rack airflow. CLORE.AI is a remote rental platform, so the noise question only applies if you're hosting in your own studio; pro cards are the quieter pick there.

workload spotlight

Real numbers on the RTX 6000 Ada.

48 GB ECC + Ada FP8 + 3rd-gen RT — a workstation H100-lite for studios, VFX, and 34B fine-tuning.

Production text-to-3D rendering
OptiX + 3rd-gen RT cores
~2.0× A6000 path-tracing throughput

3rd-gen RT cores accelerate VFX final-frame renders — the upgrade studios pick when A6000 thermals or throughput become the bottleneck.

Read the guide →
34B FP8 fine-tune
TransformerEngine + FSDP
~1,400 tokens/s, 48 GB peak

Ada FP8 path roughly doubles A6000 throughput on transformer training at the same 300 W envelope.

Read the guide →
Llama-3 70B single-card serving
vLLM + TensorRT-LLM FP8
~620 tok/s aggregated, 8 concurrent

4-bit weight quantization with an FP8 KV cache fits 70B in 48 GB: a cheaper-than-H100 path for serving 70B at moderate scale.

Read the guide →
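Back-of-envelope VRAM math for 70B at different weight precisions makes the single-card story concrete (weights only; KV cache and activations come on top):

```python
GIB = 2**30  # bytes per GiB

def weights_gib(params_billion, bits_per_weight):
    """Size of the model weights alone at a given precision."""
    return params_billion * 1e9 * bits_per_weight / 8 / GIB

fp8_size = weights_gib(70, 8)    # 8-bit weights: ~65 GiB, over 48 GB
int4_size = weights_gib(70, 4)   # 4-bit weights: ~33 GiB, fits with room for KV cache
```

This is why single-card 70B recipes pair 4-bit weights with FP8 activations and KV cache: the FP8 tensor paths give the throughput, while the 4-bit weights make the model fit.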
pro comparison

Pro-tier comparison.

Side-by-side specs across the pro tier. Click any row to see that GPU.

GPU · VRAM (ECC) · TDP · NVLink · ISV cert. · V-Ray CUDA score · Spot $/hr
RTX A4000 · 16 GB GDDR6 · 140 W · none · yes · ~1,180 · $0.13
RTX A5000 · 24 GB GDDR6 · 230 W · 112 GB/s · yes · ~1,700 · $0.22
RTX A6000 (tier focus) · 48 GB GDDR6 · 300 W · 112 GB/s · yes · ~2,300 · $0.42
RTX 6000 Ada (this page) · 48 GB GDDR6 · 300 W · none · yes · ~4,000 · $0.55
A40 · 48 GB GDDR6 · 300 W · 112 GB/s · yes · ~2,150 · $0.32
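One way to read the tier: benchmark score per spot dollar. A quick sketch using the approximate V-Ray scores and spot floors from the table above (both fluctuate on a live marketplace):

```python
# (approx. V-Ray CUDA score, spot $/hr) from the pro-tier table
table = {
    "RTX A4000": (1180, 0.13),
    "RTX A5000": (1700, 0.22),
    "RTX A6000": (2300, 0.42),
    "RTX 6000 Ada": (4000, 0.55),
    "A40": (2150, 0.32),
}
score_per_usd = {gpu: round(score / price) for gpu, (score, price) in table.items()}
```

Small cards often win raw score-per-dollar, but only the 48 GB tier holds production-scale scenes in VRAM at all, which is the real constraint the comparison hides.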
workload guides

Run these on your rented RTX 6000 Ada.

Step-by-step guides verified on CLORE.AI hardware. Pick a workload, copy the docker image, ship in minutes.

Other Workloads
Blender + Cycles GPU
Production rendering with Cycles on CUDA/OptiX.
Training
LLM fine-tuning
LoRA / QLoRA fine-tuning workflow.
Language Models
vLLM serving
High-throughput LLM serving with PagedAttention.
Video Generation
Wan Video
Alibaba's Wan-2.1 text/image-to-video.
Language Models
Llama 3.3 on CLORE.AI
Run Meta's flagship Llama-3.3 on your rented card.
Image Generation
Flux.1 on CLORE.AI
Run Black Forest Labs' Flux for state-of-the-art image gen.
Advanced
Multi-GPU setup
Configure NVLink, NCCL, and distributed training.
See all guides →
other gpus

Compare with similar cards.

RTX A6000
48 GB · from $0.42/hr
Rent →
RTX 5090
32 GB · from $0.39/hr
Rent →
A100 80GB
80 GB · from $0.92/hr
Rent →

Your training run
is 90 seconds away.

Hosts around the world are accepting workloads right now. Sign up, top up your wallet, and the next hour is yours.