Compute

Get AI-optimized clusters for training and inference based on the latest NVIDIA GPUs.

Flexible capacity planning

Get access to the latest NVIDIA GPU platforms or CPU-only servers, and balance reserved and on-demand pricing models to match your needs.

AI performance without penalty

Get bare-metal-level performance from dedicated hosts: we do not virtualize or share GPUs and network cards.

InfiniBand-powered AI clusters

Create multi-host clusters for AI workloads with a non-blocking NVIDIA Quantum InfiniBand fabric. It delivers 3.2 Tbit/s of throughput per 8-GPU host and enables direct GPU-to-GPU communication.
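As a back-of-the-envelope check of the figures above, the aggregate host bandwidth works out to 400 Gbit/s per GPU:

```python
# Per-GPU share of the non-blocking InfiniBand fabric on an 8-GPU host.
aggregate_tbits_per_host = 3.2   # Tbit/s per 8-GPU host (from the spec above)
gpus_per_host = 8

per_gpu_gbits = aggregate_tbits_per_host * 1000 / gpus_per_host
print(f"{per_gpu_gbits:.0f} Gbit/s per GPU")  # prints: 400 Gbit/s per GPU
```

That is consistent with one 400 Gbit/s NDR InfiniBand link per GPU.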

AI-ready operating system

Save time when creating instances or configuring a cluster for AI workloads: use an AI/ML-ready image with pre-installed GPU and network drivers to launch a GPU-accelerated environment quickly.

Network storage volumes

Reduce cluster recovery time by leveraging network disks mounted to every virtual instance. This provides cloud-native elasticity and a quick VM restart when a failure occurs.

Integrated monitoring

Receive detailed information about your cluster and virtual machine performance by using our integrated Monitoring service. Our dashboards display AI-specific metrics alongside general system performance data.

GPU host configurations

NVIDIA GB300 NVL72

  • Blackwell Ultra GPU
  • 279 GB of GPU Memory
  • NVIDIA Quantum-X800 InfiniBand
Contact us

NVIDIA HGX B300

  • Blackwell Ultra GPU
  • 270 GB of GPU Memory
  • NVIDIA Quantum-X800 InfiniBand
On-demand

NVIDIA GB200 NVL72

  • Blackwell GPU
  • 186 GB of GPU Memory
  • NVIDIA Quantum-2 InfiniBand
Contact us

NVIDIA HGX B200

  • Blackwell GPU
  • 180 GB of GPU Memory
  • NVIDIA Quantum-2 InfiniBand
On-demand

NVIDIA HGX H200

  • Hopper GPU
  • 141 GB of GPU Memory
  • NVIDIA Quantum-2 InfiniBand
On-demand

NVIDIA HGX H100

  • Hopper GPU
  • 80 GB of GPU Memory
  • NVIDIA Quantum-2 InfiniBand
On-demand

NVIDIA RTX PRO 6000

  • Blackwell GPU
  • 96 GB of GPU Memory
  • PCI Express Gen5
On-demand

NVIDIA L40S

  • Ada Lovelace GPU
  • 48 GB of GPU Memory
  • PCI Express Gen4
On-demand

CPU host configurations

Intel

  • 2 or 48 vCPUs, Intel Xeon Gold 6338
  • 8 or 192 GB DDR4
  • Ubuntu 22.04 LTS
Available

AMD

  • 4 or 128 vCPUs, AMD EPYC 9654
  • 16 or 512 GB DDR5
  • Ubuntu 22.04 LTS
Available

Try self-service console

Up to 32 NVIDIA GPUs are available immediately via the web console

Block network storage

Choose one of three network disk options, which differ in performance, reliability, and pricing:

  • SSDs with no data replication
  • SSDs with erasure coding
  • SSDs with data mirroring
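The main trade-off among these options is capacity overhead versus fault tolerance. The sketch below illustrates it with an assumed 4-data/2-parity erasure-coding layout (an illustrative choice, not the platform's actual configuration):

```python
# Illustrative raw-capacity overhead for the three redundancy schemes above.
# The 4 data + 2 parity erasure-coding layout is an assumed example,
# not the platform's actual replication settings.

def raw_bytes_needed(logical_bytes: int, scheme: str) -> int:
    if scheme == "none":      # no replication: raw capacity equals logical
        return logical_bytes
    if scheme == "mirror":    # data mirroring: two full copies
        return 2 * logical_bytes
    if scheme == "ec_4_2":    # erasure coding: 4 data + 2 parity stripes
        return logical_bytes * (4 + 2) // 4
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("none", "ec_4_2", "mirror"):
    print(scheme, raw_bytes_needed(1_000_000_000, scheme))
```

Erasure coding sits between the other two: it survives disk failures like mirroring but at 1.5x raw capacity instead of 2x, at the cost of parity computation.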

Observability and monitoring

Monitor cluster state and detect performance issues early using our monitoring capabilities. We display a wide range of performance metrics, from GPU utilization to InfiniBand network parameters, on web UI dashboards or as pre-assembled Grafana dashboards.
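If you also export these metrics to your own Grafana, a GPU-utilization panel can be driven by a query like the following (a sketch assuming an NVIDIA DCGM exporter; the pre-assembled dashboards may use different metric names):

```promql
# Average GPU utilization per GPU over the last 5 minutes (DCGM exporter metric)
avg by (gpu) (avg_over_time(DCGM_FI_DEV_GPU_UTIL[5m]))
```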

Getting started

Create and manage GPU clusters on the cloud platform on your own, or contact us to learn more and work with one of our experts.