Updated April 2026

GPU Cloud Price Comparison

H100, A100, L4 and T4 on-demand, reserved and spot pricing across AWS, Azure, GCP, Lambda Labs and CoreWeave. Updated monthly.

Table columns: Provider · Instance · GPU · GPUs · GPU Memory · vCPU · RAM · On-Demand/hr · 1yr Reserved/hr · 3yr Reserved/hr · Spot/hr · Best For

🧮 Training Run Cost Calculator

Estimate the cost of a training run based on GPU hours needed.

Outputs: Cost per Training Run · Monthly Training Cost · Annual Training Cost
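The calculator multiplies GPU count, per-GPU hourly rate, and run duration. A minimal sketch of the same arithmetic — the rates, run length, and runs-per-month below are illustrative assumptions, not quotes from the table:

```python
def training_run_cost(num_gpus: int, price_per_gpu_hr: float, hours: float) -> float:
    """Cost of one training run: GPUs x $/GPU-hr x hours."""
    return num_gpus * price_per_gpu_hr * hours

# Illustrative example: 8 GPUs at $2.49/GPU-hr for a 72-hour run.
run = training_run_cost(8, 2.49, 72)   # $1,434.24 per run
monthly = run * 4                      # assumed 4 runs per month
annual = monthly * 12
print(f"Per run: ${run:,.2f}  Monthly: ${monthly:,.2f}  Annual: ${annual:,.2f}")
```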

GPU Cloud — Common Questions

H100 vs A100 — Which should I choose?

H100 is 3-6x faster than A100 for transformer model training. For models over 13B parameters, H100 is almost always the better choice on total cost per training run. For inference, L4 or T4 are more cost-effective than H100 due to better utilisation characteristics.

Why is GCP A4 so much cheaper than AWS p5?

Both use identical NVIDIA H100 SXM 80GB GPUs. The price difference ($32.77 vs $98.32/hr for 8 GPUs) is purely Google's strategic pricing decision — GCP is pricing aggressively to capture AI workloads from AWS. The hardware performance is identical.
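The per-GPU gap follows directly from the 8-GPU instance prices quoted above; a quick sketch of the comparison:

```python
# 8-GPU H100 SXM instance prices quoted above.
aws_p5_hr = 98.32   # AWS p5, per instance-hour
gcp_a4_hr = 32.77   # GCP A4, per instance-hour

aws_per_gpu = aws_p5_hr / 8                       # ~$12.29/GPU-hr
gcp_per_gpu = gcp_a4_hr / 8                       # ~$4.10/GPU-hr
savings_pct = (1 - gcp_a4_hr / aws_p5_hr) * 100   # ~67% cheaper on GCP
print(f"AWS: ${aws_per_gpu:.2f}/GPU-hr, GCP: ${gcp_per_gpu:.2f}/GPU-hr, "
      f"GCP is {savings_pct:.0f}% cheaper")
```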

When should I use Lambda Labs or CoreWeave?

Specialist GPU clouds like Lambda Labs offer H100 at ~$2.49/hr/GPU with long-term reservations — significantly cheaper than hyperscalers. Best for: dedicated training pipelines with predictable GPU hours. Downside: no managed services, storage, or networking ecosystem.

What is the cheapest way to run GPU workloads?

For training: GCP Spot A4 (H100) at ~$9.83/hr with checkpointing. For inference: GCP g2 L4 instances — $0.72/hr/GPU with strong inference performance. For occasional bursts: AWS Spot p3 (V100) — older but very cheap. Always use auto-scaling and prefer providers with per-second billing.
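Spot savings only pay off if the recompute lost to interruptions stays modest, which is why checkpointing matters. A rough sketch of the trade-off — the 15% overhead figure is an illustrative assumption, not measured data:

```python
def effective_spot_cost(spot_hr: float, on_demand_hr: float,
                        overhead_frac: float) -> dict:
    """Compare spot vs on-demand pricing, inflating the spot rate by the
    fraction of compute redone after interruptions (overhead_frac)."""
    effective_spot = spot_hr * (1 + overhead_frac)
    return {
        "effective_spot_hr": effective_spot,
        "savings_pct": (1 - effective_spot / on_demand_hr) * 100,
    }

# Illustrative: GCP A4 spot at $9.83/hr vs $32.77/hr on-demand,
# assuming checkpoint/restart wastes 15% of compute.
result = effective_spot_cost(9.83, 32.77, 0.15)
print(result)   # spot still saves well over half even with overhead
```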

Need a Full AI Infrastructure Cost Analysis?

Use TCOIQ's AI Project Cost Estimator to model your full AI infrastructure cost — training, fine-tuning, inference and storage.

AI Project Cost Estimator →