Run Cloud Virtual Machines Securely and Efficiently

Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) implemented in Rust that focuses on running modern cloud workloads with minimal hardware emulation.

Get Started

Get the source on GitHub

Secure 🔒

A minimal set of emulated devices, implemented in Rust to avoid many common security issues

Fast ⚡️

Boot to userspace in less than 100ms with direct kernel boot
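As a sketch of what direct kernel boot looks like (the image paths below are placeholders for your own kernel and root filesystem):

```shell
# Boot a guest kernel directly, with no firmware stage.
cloud-hypervisor \
    --kernel ./vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1 rw" \
    --disk path=./rootfs.raw \
    --cpus boot=1 \
    --memory size=512M
```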

🪟 & 🐧

Supports running modern Linux and Windows guests

Kata Containers

Supported by Kata Containers for running secure containerised workloads

Powerful REST API

Programmatically control the lifecycle of the VM using an HTTP API
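For example, the API can be served over a Unix socket and driven with plain curl (the socket path below is an example):

```shell
# Start the VMM with its HTTP API exposed on a Unix socket.
cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock &

# Inspect the VM over the REST API:
curl --unix-socket /tmp/cloud-hypervisor.sock \
     -X GET http://localhost/api/v1/vm.info

# Pause and resume the guest:
curl --unix-socket /tmp/cloud-hypervisor.sock -X PUT http://localhost/api/v1/vm.pause
curl --unix-socket /tmp/cloud-hypervisor.sock -X PUT http://localhost/api/v1/vm.resume
```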


Low overhead

Minimal memory overhead for dense deployments

Cross platform

Runs on both x86-64 and aarch64

Broad device support

Support for a wide range of paravirtualised devices and physical device passthrough

Live migration

Migrate VMs from one host to another without interruption
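A minimal sketch of a migration between two hosts using ch-remote (socket paths are examples):

```shell
# On the destination host: start an empty VMM and tell it to wait
# for incoming migration state.
cloud-hypervisor --api-socket /tmp/ch-dst.sock &
ch-remote --api-socket /tmp/ch-dst.sock receive-migration unix:/tmp/migration.sock

# On the source host: stream the running VM to the destination.
ch-remote --api-socket /tmp/ch-src.sock send-migration unix:/tmp/migration.sock
```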

Get Involved

Cloud Hypervisor is governed openly as part of the Linux Foundation and supported by multiple organisations:

  • Alibaba
  • AMD
  • Ampere
  • ARM
  • ByteDance
  • Intel
  • Microsoft
  • Tencent Cloud

Join our Slack community: Invite

Participate in our community activities: Slack channel

Check out and participate in our roadmap on GitHub

For full details of our governance model please see our community repository on GitHub and our founding charter.

For bug reports please use GitHub issues; for broader community discussions please use our mailing list.

Latest news from Cloud Hypervisor project:

Cloud Hypervisor v38.0 Released!

Posted February 16, 2024 by Cloud Hypervisor Team ‐ 2 min read

This release has been tracked in our roadmap project as iteration v38.0. The following user-visible changes have been made:

Group Rate Limiter on Block Devices

Users can now throttle a group of block devices with the new --rate-limiter-group option. Details can be found in the I/O Throttling documentation.
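As a sketch, a group can cap the aggregate bandwidth of several disks. The parameter names below ('id', 'bw_size', 'bw_refill_time', 'rate_limit_group') should be checked against the I/O Throttling documentation; the values are examples:

```shell
# Create a rate limiter group, then attach two disks to it so they
# share a single bandwidth budget.
cloud-hypervisor \
    --kernel ./vmlinux --cmdline "root=/dev/vda1" \
    --memory size=1G --cpus boot=1 \
    --rate-limiter-group id=group0,bw_size=1048576,bw_refill_time=100 \
    --disk path=./rootfs.raw,rate_limit_group=group0 \
           path=./data.raw,rate_limit_group=group0
```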

CPU Pinning Support for Block Device Worker Thread

Users now have the option to pin virt-queue threads for block devices to specific host CPUs.
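A sketch of what this looks like on the command line. The 'queue_affinity' parameter name and the queue@[cpus] syntax are assumptions based on the release notes; verify against the --disk documentation:

```shell
# Pin the worker threads for queues 0 and 1 of a disk to host CPUs 1 and 3.
cloud-hypervisor \
    --kernel ./vmlinux --cmdline "root=/dev/vda1" \
    --memory size=1G --cpus boot=2 \
    --disk path=./rootfs.raw,num_queues=2,queue_affinity=[0@[1],1@[3]]
```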

Optimized Boot Time with Parallel Memory Prefault

Boot time with the prefault option enabled is now reduced by performing the memory prefault in parallel.

New ‘debug-console’ Device

A ‘debug-console’ device is added to provide a user-configurable debug port for logging guest information. Details can be found in the Debug IO Ports documentation.
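As a sketch (the 'file=' form is an assumption based on the release notes; the exact parameter syntax is best confirmed against the Debug IO Ports documentation):

```shell
# Route guest writes on the debug port to a file on the host.
cloud-hypervisor \
    --kernel ./vmlinux \
    --cmdline "console=hvc0 root=/dev/vda1" \
    --disk path=./rootfs.raw \
    --debug-console file=/tmp/debug-console.log
```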

Improved VFIO Device Support

All non-emulated MMIO regions of VFIO devices are now mapped to the VFIO container, allowing PCIe P2P between all VFIO devices on the same VM. This is required for a wide variety of multi-GPU workloads involving GPUDirect P2P (DMA between two GPUs) and GPUDirect RDMA (DMA between a GPU and an IB device).

Extended CPU Affinity Support

Users can now set the vCPU affinity to a host CPU with an index larger than 255.
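For example, using the affinity parameter of --cpus (the host CPU index below is an example for a large host):

```shell
# Pin vCPU 0 to host CPU 300, an index that was previously out of range.
cloud-hypervisor \
    --cpus boot=1,affinity=[0@[300]] \
    --kernel ./vmlinux --cmdline "root=/dev/vda1" \
    --disk path=./rootfs.raw --memory size=1G
```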

Notable Bug Fixes

  • Enable HTT flag to avoid crashing cpu topology enumeration software such as hwloc in the guest (#6146)
  • Fix several security advisories from dependencies (#6134, #6141)
  • Handle non-power-of-two CPU topology properly (#6062)
  • Various bug fixes around virtio-vsock (#6080, #6091, #6095)
  • Enable nested virtualization on AMD if supported (#6106)
  • Align VFIO devices PCI BARs naturally (#6196)


Many thanks to everyone who has contributed to our release:


See the GitHub Release for the release assets.