ComputeHub

New Deployment

Configure and launch your GPU instance

Start quickly with a pre-configured environment, or skip ahead to configure manually

Image Generation

Stable Diffusion WebUI

Generate images instantly in your browser
LLM Inference

OpenAI-compatible API

Deploy an inference endpoint in minutes

Pre-configured with an OpenAI-compatible API
No setup required
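Once the endpoint is live, any OpenAI-compatible client can talk to it. A minimal sketch using only the Python standard library; the base URL, API key, and model name are placeholders for whatever your deployment actually exposes:

```python
import json
from urllib import request

def build_chat_payload(model: str, prompt: str) -> dict:
    # Standard OpenAI chat-completions request body.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat_completion(base_url: str, api_key: str, model: str, prompt: str) -> str:
    # POST to the endpoint's /v1/chat/completions route and
    # return the first choice's message text.
    data = json.dumps(build_chat_payload(model, prompt)).encode()
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage would look like `chat_completion("https://your-deployment.example", "sk-...", "your-model", "Hello")`, where the URL and key come from the deployment details page.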

ComfyUI

Visual workflow for generative models

Build complex pipelines without code
Custom Docker

Full control over your environment

For advanced users and experiments


Choose GPU type and provider based on your requirements

NVIDIA RTX 4090

24GB VRAM • Best value for inference and fine-tuning small models.


NVIDIA RTX 3090

24GB VRAM • Cost-effective option for medium workloads.

NVIDIA A100

80GB VRAM • Industry standard for large-scale training and inference.

NVIDIA H100

80GB VRAM • Maximum performance for massive workloads.

NVIDIA RTX A6000

48GB VRAM • Professional GPU for creative and AI workloads.
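When choosing among the GPUs above, a common rule of thumb is to size VRAM from the model's parameter count. A rough sketch; the 2 bytes/parameter (fp16 weights) and 20% overhead figures are assumptions for inference, and real requirements vary with batch size, sequence length, and quantization:

```python
def min_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                overhead: float = 1.2) -> float:
    # Weights in fp16 (2 bytes/param) plus ~20% headroom for
    # activations and KV cache. Rule of thumb only.
    return params_billion * bytes_per_param * overhead

def fits(gpu_vram_gb: float, params_billion: float) -> bool:
    # True if the model plausibly fits on a single GPU of this size.
    return min_vram_gb(params_billion) <= gpu_vram_gb
```

By this estimate a 7B model needs roughly 7 × 2 × 1.2 ≈ 16.8 GB, so it fits a 24 GB RTX 4090 or RTX 3090, while a 70B model would need an 80 GB A100 or H100 (with quantization) or multiple GPUs.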

Specify a deployment name and Docker image

Pre-configured image for this template

Estimated Cost
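The estimate is straightforward hourly arithmetic; a sketch with a hypothetical $0.40/hr rate (actual rates come from the provider prices shown above):

```python
def estimated_cost(hourly_rate_usd: float, hours: float) -> float:
    # On-demand pricing: cost accrues per hour while the instance runs.
    return round(hourly_rate_usd * hours, 2)

# A hypothetical $0.40/hr instance left running for a 30-day month:
# estimated_cost(0.40, 24 * 30) -> 288.0
```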