Monday, August 1, 2011

KVM Nested Virtualization
KVM is a unique hypervisor. Instead of creating major portions of an operating system kernel themselves, as other hypervisor projects have done, the KVM developers devised a method that turns the Linux kernel itself into a hypervisor. This was achieved in a minimally intrusive way by developing KVM as a loadable kernel module. Integrating hypervisor capabilities into the host Linux kernel as a module simplifies management and improves performance in virtualized environments, which is likely the main reason KVM was merged into the Linux kernel.


This approach has numerous advantages. By adding virtualization capabilities to a standard Linux kernel, the virtualized environment benefits from all the ongoing work on the Linux kernel itself. Under this model, every virtual machine is a regular Linux process, scheduled by the standard Linux scheduler. Traditionally, a normal Linux process has two modes of execution: kernel and user. User mode is the default mode for applications, and an application enters kernel mode when it requires some service from the kernel, such as writing to the hard disk. KVM adds a third mode: guest mode. Guest-mode processes are processes that run from within the virtual machine. Guest mode, just like normal (non-virtualized) execution, has its own kernel and user-space variations. Normal kill and ps commands work on guest-mode processes: from the host, a KVM virtual machine appears as a normal process and can be killed just like any other process. KVM uses hardware virtualization to virtualize processor state, and memory management for the virtual machine is handled from within the kernel. I/O in the current version is handled in user space, primarily through QEMU.
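The claim that a KVM guest is just a regular process can be illustrated with ordinary process tools. The sketch below uses a background sleep as a stand-in for a guest (on a real host, the guest would show up under ps as a qemu or kvm process, findable with something like pgrep -fa qemu):

```shell
# Stand-in for a KVM guest process; "sleep" is used here purely for
# illustration since no real guest is running.
sleep 300 &
GUEST_PID=$!

# The "guest" appears in the normal process list like any other process...
ps -p "$GUEST_PID" -o pid=,comm=

# ...and can be killed like any other process.
kill "$GUEST_PID"
wait "$GUEST_PID" 2>/dev/null || true
echo "guest process $GUEST_PID terminated"
```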

A typical KVM installation consists of the following components:

- A device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device /dev/kvm.
- A user-space component for emulating PC hardware; currently, this is handled in the user space and is a lightly modified QEMU process.
- The I/O model is directly derived from QEMU's, with support for copy-on-write disk images and other QEMU features.
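As a quick sanity check of the first component, the character device can be probed from the shell; this sketch only reports whether /dev/kvm exists on the host:

```shell
# /dev/kvm is the character device exposed by the KVM kernel module.
if [ -c /dev/kvm ]; then
    KVM_STATUS="present: KVM module is loaded"
else
    KVM_STATUS="missing: load kvm_amd or kvm_intel first"
fi
echo "/dev/kvm: $KVM_STATUS"
```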

Virtualizing the Processor
When KVM virtualizes a processor, the guest sees a CPU that is similar to the host processor but lacks the virtualization extensions. This means you cannot run a hypervisor that needs those extensions within a guest (you can still run hypervisors that do not rely on them, such as VMware, but with lower performance). With the nested-virtualization patches, the virtualized CPU does include the virtualization extensions; this means the guest can run a hypervisor, including KVM, and have its own guests.

There are two uses that immediately spring to mind: debugging hypervisors and embedded hypervisors. Obviously, having SVM enabled in a guest means that one can debug a hypervisor in a guest, which is a lot easier than debugging on bare metal. The other use is a hypervisor that runs in the firmware at all times; until now, that meant you couldn't run another hypervisor on such a machine. With nested virtualization, you can.
What surprised me was the relative simplicity with which nested virtualization was implemented: less than a thousand lines of code. This is due to the clever design of the SVM instruction set and the ingenuity of the implementers.

I don’t think anyone will run a guest within a guest for practical purposes, but there are certainly uses on the development side: we can test other hypervisors, such as VMware or Xen, inside KVM.

How to Enable Nested Virtualization for KVM

Follow these steps to enable nested virtualization with the KVM hypervisor:
#/etc/init.d/libvirt-bin stop
#modprobe -r kvm_amd
#modprobe kvm_amd nested=1
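After reloading the module, you can confirm that nesting took effect by reading the parameter back from sysfs (the path below is for the AMD module; kvm_intel exposes an equivalent parameter):

```shell
# kvm_amd exports its "nested" parameter via sysfs; it reads 1 when
# nesting is enabled. (On Intel hosts the path uses kvm_intel instead.)
NESTED_PARAM=/sys/module/kvm_amd/parameters/nested
if [ -r "$NESTED_PARAM" ]; then
    NESTED_STATUS=$(cat "$NESTED_PARAM")
else
    NESTED_STATUS="kvm_amd not loaded (or not an AMD host)"
fi
echo "nested: $NESTED_STATUS"
```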

Next, we will either add a new script to AppArmor and the bin directory or change the current symlinked /usr/bin/kvm file.
Let's start with the safe way, by creating a separate script. I've named this one kvm-nested.

#!/bin/bash
exec /usr/bin/kvm -enable-nesting "$@"

Once that is done, you'll need to edit the /etc/apparmor.d/abstractions/libvirt-qemu file and add the line below in the section for "the various binaries":
/usr/bin/kvm-nested rmix,

The one drawback to this method is that you will need to manually edit each VM's XML file so that its emulator line points to the new script:
<emulator>/usr/bin/kvm-nested</emulator>
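The per-VM edit can be scripted. The sketch below operates on a local copy of a domain definition; the domain name guest1 and the XML content are assumptions for illustration (on a real host you would dump the definition with virsh dumpxml and re-import it with virsh define):

```shell
# Stand-in for "virsh dumpxml guest1 > guest1.xml" (guest1 is hypothetical).
cat > guest1.xml <<'EOF'
<domain type='kvm'>
  <name>guest1</name>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
  </devices>
</domain>
EOF

# Repoint the emulator element at the wrapper script.
sed -i 's|<emulator>/usr/bin/kvm</emulator>|<emulator>/usr/bin/kvm-nested</emulator>|' guest1.xml

# On a real host, re-import with: virsh define guest1.xml
grep '<emulator>' guest1.xml
```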

The other way, which I found simpler, was to delete the current kvm symlink to qemu-system-x86_64 and point it at the wrapper script instead:

#cat /usr/bin/kvm
#!/bin/bash
/usr/bin/qemu-system-x86_64 -enable-nesting "$@"

#ls -l /usr/bin/kvm
lrwxrwxrwx 1 root root 10 2011-01-21 19:13 /usr/bin/kvm -> /usr/bin/kvm-nested
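Once a guest boots through the wrapper, its virtualized CPU should advertise the virtualization extension in its feature flags. Inside the guest, you can check for it as sketched below (this assumes an AMD host, where the flag is svm; on Intel it would be vmx):

```shell
# Inside the guest: check whether the virtual CPU advertises AMD's SVM
# virtualization extension among its feature flags.
if grep -qw svm /proc/cpuinfo; then
    SVM_STATUS="svm flag present: nested virtualization available"
else
    SVM_STATUS="svm flag absent"
fi
echo "$SVM_STATUS"
```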

2 comments:

virtualization said...

Currently I work for Dell and thought your article on virtualization was quite impressive. Virtualization is a software technology that takes a physical resource, such as a server, and divides it up into virtual resources called virtual machines (VMs). Virtualization allows users to consolidate physical resources, simplify deployment and administration, and reduce power and cooling requirements.

Anonymous said...

kvm removed support for -enable-nesting as it is not supported =\