At the heart of Microsoft’s more recent operating system updates, in the cloud and on-premises, is virtualization. By isolating operating system instances from one another on the same hardware, virtualization has powered Azure’s high-density infrastructure- and platform-as-a-service offerings, as well as improved desktop security in Windows 11.
Virtualization is also the technology at the root of Microsoft’s confidential computing services, offering a way to work with encrypted data securely, keeping it protected at rest, in transit, and in use. Nesting encrypted virtual environments on top of traditional hypervisors works well enough, though it limits the operating system functions accessible within a trusted execution environment.
Extending the hypervisor
This is where an alternate approach to virtualization comes in, what Microsoft is calling a “paravisor.” It builds on the concept of paravirtualization, which provides more links between the host and virtualized environments. This approach requires the client OS to be virtualization-aware, with a defined set of APIs and drivers that can use those APIs when necessary. It lets the client OS handle isolated compute while the host OS shares I/O and other common services between host and virtualized processes.
If you’re using the virtualization-based security features in Windows, you’re using a VM that supports paravirtualization. This ensures that secured operations have the same priority and hardware access as their unsecured counterparts, avoiding performance bottlenecks and giving users the same experience whether they’re inside or outside a secured process’s trust boundaries.
Tools like Azure’s confidential computing platform depend on paravisors. With a paravisor in place, guest operating systems don’t need updates every time there’s an update to the underlying virtualization service; your code will keep running as new hardware is deployed and new features are enabled. Microsoft’s definition of a paravisor is a useful way of thinking about them: an execution environment that runs inside a guest VM but with higher privileges than the VM, providing services to that VM.
Using a paravisor for confidential computing reduces overall risk, as it doesn’t require a special version of a guest OS designed to run in a trusted execution environment. Without a paravisor, your guest OS needs to be “enlightened” with additional code to support running in a confidential computing environment.
That enlightened code would need to be updated every time there’s a new OS build, not only limiting the OSes used to those trusted by the platform vendor, but ensuring that guest OS builds lag other, non-enlightened versions, with all the security risks that come from that delay, however short it might be.
Instead, with a paravisor, there’s no need for special OS releases, and you can use whatever supported OS you like; you don’t have to wait for Microsoft, Canonical, Red Hat, or whoever to build, test, and package a confidential computing-ready release. If there’s a zero-day exploit with a security update for your chosen guest OS, you can simply roll it out as part of your standard OS and image update process.
Introducing OpenHCL
Azure’s paravisor used to be closed source, built on proprietary code. That’s all changed with the announcement of a new open source version, OpenHCL. This is being developed on GitHub, where you can add your own contributions (if you sign Microsoft’s standard contributor license agreement). It’s designed to run on most common platforms, including Linux and macOS, and works with Microsoft’s own hypervisors, Apple’s Hypervisor framework, and KVM. This includes both x64 and Arm64 environments.
Microsoft’s new paravisor architecture is relatively simple. It works with your existing hypervisor to provide an abstraction layer from the underlying hardware, with a host OS that provides support for management tools and storage. Inside an OpenHCL-enabled VM is a small Linux kernel that supports device drivers. On top of that is an OpenVMM environment that supports the guest OS.
OpenVMM is where the OpenHCL user-mode processes run, managing device access and handling translation between the underlying host OS and the guest. OpenVMM is written in Rust, which lessens the risks associated with system applications that run at high privilege levels. Using a memory-safe language reduces the risk of memory-safety bugs such as buffer overflows and use-after-free errors, something that’s critical when hosting trusted execution environments.
There’s another useful feature that comes from using OpenVMM. As it supports UEFI boot, you can use it to support what Microsoft is calling “trusted launch VMs.”
Building a test environment with OpenHCL and OpenVMM
Running OpenHCL in a test environment is relatively uncomplicated. You can build your own binaries on a Linux development system (you can use WSL2). Alternatively, you can download a prebuilt binary in Independent Guest Virtual Machine (IGVM) format. For now, OpenHCL is unsupported and only intended for use in development and test environments.
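If you’d rather build than download, the sources live in the OpenVMM repository on GitHub. Here’s a minimal sketch of the process, assuming the cargo-based build tasks described in the project’s guide; the exact task names may change between releases, so check the current documentation before relying on them.

    # Clone the OpenVMM repository, which also hosts the OpenHCL sources
    git clone https://github.com/microsoft/openvmm
    cd openvmm
    # Build an x64 OpenHCL image in IGVM format; the xflowey build task shown
    # here is taken from the project guide at the time of writing and should be
    # treated as illustrative rather than definitive
    cargo xflowey build-igvm x64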
Once you have a copy, you can use Hyper-V or OpenVMM to try it out. Running under Windows with Hyper-V is the closest you’ll get to how Microsoft uses it on Azure, where it runs on top of Azure’s own Windows-derived host OS. Support for OpenHCL in Windows is very new, so you can only use the latest Windows 11 version, 24H2.
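If Hyper-V isn’t already turned on in your 24H2 installation, you can enable it from an elevated PowerShell session (a reboot is required):

    # Enable the Hyper-V role and its management tools on Windows 11
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All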
Once you have enabled Hyper-V support, you will need PowerShell to enable support for unsigned images. Next you need to put the downloaded OpenHCL .bin file in an accessible directory; the documentation suggests using a directory under Windows\System32. You then need to create a pair of PowerShell variables for the path to OpenHCL and the name you want to give your VM.
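The setting that allows Hyper-V to load unsigned firmware images is covered in the OpenHCL documentation, so follow that step as written. The two variables then look something like this; the file name and VM name are placeholders of my own, not values from the docs.

    # Placeholder values: adjust the file name to match the OpenHCL build you
    # downloaded and pick whatever VM name you prefer
    $OpenHCLPath = "C:\Windows\System32\openhcl-x64.bin"
    $VmName = "OpenHCL-Test"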
These form the basis of a simple script that sets up trusted launch for the OpenHCL environment. This creates an OpenHCL VM without any virtual disks, as what you’re doing is adding support for a paravisor inside the Hyper-V environment. All you need to do now is attach a virtual hard drive with a ready-to-run image, either one you’ve built yourself or one downloaded from a trusted source.
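That script boils down to something like the following sketch. The -GuestStateIsolationType parameter comes from the newer Hyper-V cmdlets in Windows 11 24H2 and is an assumption here, as is the omitted step that points the VM at the IGVM file, which I’ve left to the project’s Hyper-V guide; the disk path is a placeholder.

    # Create a Generation 2 VM with no virtual disk; the TrustedLaunch isolation
    # type is an assumption based on the newer Hyper-V cmdlets, so check
    # Get-Help New-VM on your build before relying on it
    New-VM -Name $VmName -Generation 2 -NoVHD -GuestStateIsolationType TrustedLaunch

    # (Follow the OpenHCL documentation for the step that loads $OpenHCLPath as
    # the VM's firmware image; it's omitted here.)

    # Attach a ready-to-run guest disk image, built yourself or downloaded from
    # a trusted source, then start the VM
    Add-VMHardDiskDrive -VMName $VmName -Path "C:\VHDs\guest.vhdx"
    Start-VM -Name $VmName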
Once you have a VM running inside OpenHCL, you can start to use the OpenVMM tool to add features to your VM. At the heart of the management tools is a CLI that gives you ways to manage disks and ports, as well as tune the available vCPUs and memory. Other options include a serial console for working with the guest OS and VNC access to a graphical console.
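Because the CLI is still evolving, I won’t guess at flag names here; the built-in help is the reliable reference for the disk, port, vCPU, memory, serial, and VNC options.

    # --help lists the current options; run it from a release binary or via
    # cargo from a source checkout of the repository
    openvmm --help
    # or, from a source checkout:
    cargo run -- --help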
Don’t be surprised if you can’t find documentation for all the OpenHCL features. This first public release is still very new, and a lot of the documentation is scaffolding awaiting content. Still, it’s clear that something very ambitious is happening here that provides insights into the future of Windows (both client and server) as well as Azure’s infrastructure-as-a-service platform.
Making all computing confidential
An interesting phrase in the blog post that announced OpenHCL talks about “moving towards closing the feature gaps of confidential VMs.” When you consider that OpenHCL supports both standard and confidential VMs, it’s clear that Microsoft’s end game for VMs in Azure is that all hosted VMs will be confidential VMs, keeping your computation as secure as your data and having all compute happen in trusted spaces.
That future won’t happen overnight; there’s still a lot to be done to ensure that secure VMs have the same access to devices and OS-level services as their unsecured counterparts, as well as the same performance. We can see the start of that move in the arrival of this new open paravisor, in VBS enclaves in Windows, and in Windows 11’s requirement for hardware-based trusted storage for encryption keys.
Running everything on your PC, in the cloud, and on-premises in secure VMs may seem far-fetched today, but tools like OpenHCL are the key to a safer and more secure world, where trusted execution is everywhere. We’re at the beginning of a long road, but it’s possible to use this first public release and its GitHub repository to get a glimpse of where we’re going.