Real-Time GPU Cloud Prices
A100 · H100 · RTX 4090 · L40S · A6000 · RTX 3090
Compare GPU prices across RunPod, Vast.ai, TensorDock, and more. Updated every 60 seconds.
| GPU | Description | VRAM | FP16 TFLOPS | Memory Bandwidth | TDP | Architecture | Use Cases |
|---|---|---|---|---|---|---|---|
| 🚀 NVIDIA H100 | Flagship GPU for large-scale AI training and inference | 80 GB | 1979.0 | 3350 GB/s | 700 W | Hopper (2022) | 🧠 🤖 🔬 |
| ⚡ NVIDIA A100 80GB | Versatile datacenter GPU for AI workloads | 80 GB | 312.0 | 2039 GB/s | 400 W | Ampere (2020) | 🧠 💬 🔬 |
| 🔧 NVIDIA L40S | Professional GPU for AI and graphics workloads | 48 GB | 183.2 | 864 GB/s | 350 W | Ada Lovelace (2023) | 💬 ⚙️ 🏢 |
| 🎮 NVIDIA RTX 4090 | High-performance consumer GPU for AI inference and generation | 24 GB | 165.2 | 1008 GB/s | 450 W | Ada Lovelace (2022) | 💬 🎨 ⚙️ |
| 🏢 NVIDIA RTX A6000 | Professional workstation GPU with large VRAM | 48 GB | 77.4 | 768 GB/s | 300 W | Ampere (2020) | 💬 ⚙️ 🏢 |
| 💎 NVIDIA RTX 3090 | Budget-friendly option for AI development and experimentation | 24 GB | 71.0 | 936 GB/s | 350 W | Ampere (2020) | 💬 🎨 💰 |
Performance Metrics:
- FP16 TFLOPS: half-precision floating-point performance (higher is better for AI workloads).
- Memory Bandwidth: data transfer speed between the GPU and its VRAM (higher is better for large models).
- TDP: Thermal Design Power, power consumption in watts (lower is more efficient).
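One rough way to compare the cards is performance per watt, computed directly from the FP16 TFLOPS and TDP figures in the spec table above. This is an illustrative calculation only, not vendor-verified benchmarking, and it ignores real-world factors like sustained clocks and cooling:

```python
# Performance-per-watt comparison using the spec-table figures above.
# TFLOPS and TDP values are copied from the table; the ratio is a
# rough efficiency proxy, not a measured benchmark.
specs = {
    "H100":      {"tflops": 1979.0, "tdp_w": 700},
    "A100 80GB": {"tflops": 312.0,  "tdp_w": 400},
    "L40S":      {"tflops": 183.2,  "tdp_w": 350},
    "RTX 4090":  {"tflops": 165.2,  "tdp_w": 450},
    "RTX A6000": {"tflops": 77.4,   "tdp_w": 300},
    "RTX 3090":  {"tflops": 71.0,   "tdp_w": 350},
}

# Rank cards by TFLOPS per watt, best first.
ranked = sorted(specs, key=lambda n: specs[n]["tflops"] / specs[n]["tdp_w"],
                reverse=True)
for name in ranked:
    s = specs[name]
    print(f"{name:10s} {s['tflops'] / s['tdp_w']:.2f} TFLOPS/W")
```

By this crude metric the H100 leads by a wide margin, which is one reason its hourly rates command a premium.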
What is the cheapest GPU cloud today?
GPU spot and on-demand rates shift hour to hour as providers adjust to demand, so the cheapest option today may not be the cheapest tomorrow; that is why this dashboard refreshes every 60 seconds. Whether you're training large language models, running AI inference, or rendering graphics, comparing live prices across providers before launching a job is the most reliable way to stay on budget.
Why are GPU prices different across providers?
GPU cloud pricing varies with several factors, including datacenter location, availability, demand, and provider overhead. RunPod, Vast.ai, and TensorDock each have different pricing strategies and infrastructure costs. Our platform aggregates these prices in real time so you can make informed decisions.
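The core of that aggregation step can be sketched in a few lines. The provider names below are real, but the hourly prices are hypothetical placeholders, and an actual aggregator would poll each provider's API rather than use a hard-coded dictionary:

```python
# Minimal sketch of how a price aggregator picks the best quote.
# Prices here are illustrative placeholders, NOT live rates; a real
# aggregator would fetch each provider's current price first.
quotes = {  # hypothetical $/hr for an A100 80GB
    "RunPod": 1.89,
    "Vast.ai": 1.65,
    "TensorDock": 1.80,
}

# The cheapest provider is simply the key with the minimum quote.
best_provider = min(quotes, key=quotes.get)
print(f"Best A100 price: ${quotes[best_provider]:.2f}/hr on {best_provider}")
```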
How do I choose the best GPU for AI training?
Consider these factors when selecting a GPU:
- Model size and memory requirements - Larger models need GPUs with more VRAM (A100 80GB, H100)
- Training duration and budget - Balance performance with cost per hour
- Provider reliability - Check uptime and support quality
- Availability - Popular GPUs may have limited availability during peak times
Use our comparison dashboard to find the perfect balance between performance and cost for your specific workload.
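The first factor, memory requirements, can be estimated with back-of-envelope arithmetic: model weights take roughly parameters times bytes per parameter, plus headroom for activations and KV cache. The 20% overhead factor below is an assumption for illustration, not a measured figure:

```python
import math

def min_vram_gb(params_billions: float, bytes_per_param: int = 2,
                overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights (params * bytes per param)
    plus an assumed 20% margin for activations and KV cache."""
    return params_billions * bytes_per_param * overhead

# A 70B-parameter model in FP16 (2 bytes per parameter):
needed = min_vram_gb(70)
gpus = math.ceil(needed / 80)  # how many 80 GB cards (A100/H100 class)
print(f"~{needed:.0f} GB VRAM -> about {gpus}x 80 GB GPUs")
```

By this estimate a 70B FP16 model needs several 80 GB cards, while a 7B model fits comfortably on a single 24 GB consumer GPU, which is why VRAM is usually the first filter when shortlisting hardware.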