Monday, January 30, 2012

Cloud Computing Security Issues


What is virtual machine escape?

Virtual machine escape is an exploit in which the attacker runs code on a VM that allows an operating system running within it to break out and interact directly with the hypervisor. Such an exploit could give the attacker access to the host operating system and all other virtual machines (VMs) running on that host. Although there have been no incidents reported, VM escape is considered to be the most serious threat to virtual machine security.

Virtual machines are designed to run in self-contained, isolated environments in the host. Each VM should be, in effect, a separate system, isolated from the host operating system and any other VMs running on the same machine. The hypervisor is an intermediary between the host operating system and virtual machines. It controls the host processor and allocates resources as required to each guest operating system.

Here's Ed Skoudis' explanation of the risk:

"If the attacker can compromise the virtual machines, they will likely have control of all of the guests, since the guests are merely subsets of the program itself. Also, most virtual machines run with very high privileges on the host because a virtual machine needs comprehensive access to the host's hardware so it can then map the real hardware into virtualized hardware for the guests. Thus, compromising the virtual machine means not only that the guests are goners, but the host is also likely lost."
To minimize vulnerability to VM escape, Skoudis recommends that you:

Keep virtual machine software patched.
Install only the resource-sharing features that you really need.
Keep software installations to a minimum because each program brings its own vulnerabilities.

References 
http://www.cse.wustl.edu/~jain/cse571-09/ftp/vmsec/index.html
http://www.jot.fm/issues/issue_2009_01/column4/
http://www.cloudsecurityalliance.org/topthreats





Monday, January 9, 2012

Kernel-Based Virtual Machine

The KVM Module

The KVM (Kernel-based Virtual Machine) module turns a Linux host into a VMM (Virtual Machine Monitor), and it has been included in the mainline Linux kernel since version 2.6.20. A VMM allows multiple operating systems to run concurrently on a computer. These guest operating systems execute on the real (physical) processor, but the VMM (or hypervisor) retains selective control over certain real system resources, such as the physical memory and the I/O capabilities.

When a guest tries to perform an action on a controlled resource, the VMM takes control from the guest and executes the action in a fashion that keeps it from interfering with other guest operating systems. As far as the guest knows, it thinks it is running on a platform with no VMM—that is, it has the illusion of running on a real machine. For example, the guest can do memory paging and segmentation and interrupt manipulation without interfering with the same mechanisms within other guest operating systems or within the VMM itself.

The examples presented here require a recent Linux kernel with the KVM module installed and the LibKVM library to interact with the module from userspace. You can install the corresponding package(s) from your favorite distribution or compile the KVM source package (from SourceForge) to create both the module and LibKVM library. Note that the KVM module works only on platforms with hardware support for virtualization; most newer Intel and AMD 64-bit-capable processors have this support.
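Before running any of the examples, it is worth checking that the prerequisites are in place. A minimal sketch (exact output varies by distribution):

egrep -c '(vmx|svm)' /proc/cpuinfo   # a non-zero count means hardware virtualization support is present
ls -l /dev/kvm                       # the character device created once the KVM module is loaded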

Introducing KVM, its internals and how to configure and install it.

Virtualization has made a lot of progress during the last decade, primarily due to the development of myriad open-source virtual machine hypervisors. This progress has almost eliminated the barriers between operating systems and dramatically increased utilization of powerful servers, bringing immediate benefit to companies. Up until recently, the focus always has been on software-emulated virtualization. Two of the most common approaches to software-emulated virtualization are full virtualization and paravirtualization. In full virtualization, a layer, commonly called the hypervisor or the virtual machine monitor, exists between the virtualized operating systems and the hardware. This layer multiplexes the system resources between competing operating system instances. Paravirtualization is different in that the hypervisor operates in a more cooperative fashion, because each guest operating system is aware that it is running in a virtualized environment, so each cooperates with the hypervisor to virtualize the underlying hardware.
Both approaches have advantages and disadvantages. The primary advantage of the paravirtualization approach is that it allows the fastest possible software-based virtualization, at the cost of not supporting proprietary operating systems. Full virtualization approaches, of course, do not have this limitation; however, full virtualization hypervisors are very complex pieces of software. VMware, the commercial virtualization solution, is an example of full virtualization. Paravirtualization is provided by Xen, User-Mode Linux (UML) and others.
With the introduction of hardware-based virtualization, these lines have blurred. With the advent of Intel's VT and AMD's SVM, writing a hypervisor has become significantly easier, and it now is possible to enjoy the benefits of full virtualization while keeping the hypervisor's complexity at a minimum.
Xen, the classic paravirtualization engine, now supports fully virtualized MS Windows, with the help of hardware-based virtualization. KVM is a relatively new and simple, yet powerful, virtualization engine, which has found its way into the Linux kernel, giving the Linux kernel native virtualization capabilities. Because KVM uses hardware-based virtualization, it does not require modified guest operating systems, and thus, it can support any platform from within Linux, given that it is deployed on a supported processor.

Kernel-based virtual machine
KVM is a unique hypervisor. The KVM developers, instead of creating major portions of an operating system kernel themselves, as other hypervisors have done, devised a method that turned the Linux kernel itself into a hypervisor. This was achieved through a minimally intrusive approach: developing KVM as a kernel module. Integrating the hypervisor capabilities into a host Linux kernel as a loadable module can simplify management and improve performance in virtualized environments, which probably was the main reason the developers added KVM to the Linux kernel.
This approach has numerous advantages. By adding virtualization capabilities to a standard Linux kernel, the virtualized environment can benefit from all the ongoing work on the Linux kernel itself. Under this model, every virtual machine is a regular Linux process, scheduled by the standard Linux scheduler. Traditionally, a normal Linux process has two modes of execution: kernel and user. The user mode is the default mode for applications, and an application goes into kernel mode when it requires some service from the kernel, such as writing to the hard disk. KVM adds a third mode, the guest mode. Guest mode processes are processes that are run from within the virtual machine. The guest mode, just like the normal mode (non-virtualized instance), has its own kernel and user-space variations. Normal kill and ps commands work on guest modes. From the non-virtualized instance, a KVM virtual machine is shown as a normal process, and it can be killed just like any other process. KVM makes use of hardware virtualization to virtualize processor states, and memory management for the virtual machine is handled from within the kernel. I/O in the current version is handled in user space, primarily through QEMU.
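Because every guest is an ordinary process on the host, the standard process tools apply. A small sketch (the PID shown by ps is, of course, specific to your system):

ps aux | grep qemu-kvm   # each running guest shows up as a qemu-kvm process
kill PID_OF_GUEST        # terminates that guest like any other Linux process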

A typical KVM installation consists of the following components:
A device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device /dev/kvm.
A user-space component for emulating PC hardware; currently, this is handled in the user space and is a lightly modified QEMU process.
The I/O model is directly derived from QEMU's, with support for copy-on-write disk images and other QEMU features.

How KVM Compares to Existing Hypervisors

In many ways, VMware is a ground-breaking technology. VMware manages to fully virtualize the notoriously complex x86 architecture using software techniques only, and to achieve very good performance and stability. As a result, VMware is a very large and complex piece of software. KVM, on the other hand, relies on the new hardware virtualization technologies that have appeared recently. As such, it is very small (about 10,000 lines) and relatively simple. Another big difference is that VMware is proprietary, while KVM is open source.
Xen is a fairly large project, providing both paravirtualization and full virtualization. It is designed as a standalone kernel, which only requires Linux to perform I/O. This makes it rather large, as it has its own scheduler, memory manager, timer handling and machine initialization.
KVM, in contrast, uses the standard Linux scheduler, memory management and other services. This allows the KVM developers to concentrate on virtualization, building on the core kernel instead of replacing it.
QEMU is a user-space emulator. It is a fairly amazing project, emulating a variety of guest processors on several host processors, with fairly decent performance. However, the user-space architecture does not allow it to approach native speeds without a kernel accelerator. KVM recognizes the utility of QEMU by using it for I/O hardware emulation. Although KVM is not tied to any particular user space, the QEMU code was too good not to use—so we used it.

Processor Support for Virtualization
Modern processors support a hypervisor directly in hardware, which simplifies the task of writing hypervisors, as is the case with KVM. With this support, the processor manages the processor states for the host and guest operating systems, and it also manages the I/O and interrupts on behalf of the virtualized operating system.

Commands
QCOW2 - QEMU's copy-on-write format
# qemu-img create -f qcow2 image.img 6G
# qemu-kvm -m 384 -cdrom guestos.iso -hda image.img -boot d
-m: memory in terms of megabytes.
-cdrom: the file, ideally an ISO image, acts as a CD-ROM drive for the VM. If no -cdrom switch is specified, the ide1 master acts as the CD-ROM.
-hda: points to a QEMU copy-on-write image file. For more hard disks we could specify:
#qemu-kvm -m 384 -hda vmdisk1.img -hdb vmdisk2.img -hdc vmdisk3.img
-boot: allows us to customize the boot options; the d argument boots from the CD-ROM.

The default command starts the guest OS in a subwindow, but you can start in full-screen mode by passing the following switch:
-full-screen
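For example, the installation command from above could be started full-screen like this:

# qemu-kvm -m 384 -cdrom guestos.iso -hda image.img -boot d -full-screen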
"""vm-install
vm-install --os-type sles11 --name "sles11_test" \
--vcpus 2 --memory 512 --max-memory 768 \
--disk /var/lib/kvm/images/sles11/hda,0,disk,w,8000,sparse=1 \
--disk /iso/SLES-11-SP1-DVD-x86_64-GM-DVD1.iso,1,cdrom \
--nic mac=52:54:00:05:11:11,model=virtio \
--graphics cirrus --config-dir "/etc/libvirt/qemu"

--config-dir: the directory in which the XML configuration file for the virtual machine will be stored. It is strongly recommended to use the default directory /etc/libvirt/qemu.

virsh -c qemu:///system define PATH_TO_XMLFILE

KVM is a full virtualization solution for x86 processors supporting hardware virtualization (Intel VT or AMD-V). It consists of two main components: a set of kernel modules (kvm.ko, kvm-intel.ko, and kvm-amd.ko) that provide the core virtualization infrastructure and processor-specific drivers, and a userspace program (qemu-kvm) that provides emulation for virtual devices and control mechanisms to manage VM Guests (virtual machines). The term KVM more properly refers to the kernel-level virtualization functionality, but in practice it is more commonly used to refer to the userspace component.

VM Guests (virtual machines), virtual storage and networks can be managed with libvirt-based and QEMU tools. libvirt is a library that
provides an API to manage VM Guests based on different virtualization solutions, among them KVM and Xen. It offers a graphical user
interface as well as a command line program. The QEMU tools are KVM/QEMU specific and are only available for the command line.

"" Hardware requirement """
egrep '(vmx|svm)' /proc/cpuinfo

libvirt: A toolkit that provides management of VM Guests, virtual networks, and storage. libvirt provides an API, a daemon, and a shell
(virsh).
virt-manager (Virtual Machine Manager): A graphical management tool for VM Guests.
vm-install: Define a VM Guest and install its operating system.
virt-viewer: An X viewer client for VM Guests that supports TLS/SSL encryption, x509 certificate authentication, and SASL authentication.

modprobe kvm-intel # on Intel machines only
modprobe kvm-amd # on AMD machines only
#kvm-ok
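To verify that the module was actually loaded, a quick check:

lsmod | grep kvm   # expect kvm plus kvm_intel or kvm_amd in the output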

"" The following general restrictions apply when using KVM:
Overcommits
KVM allows for both memory and disk space overcommit. It is up to the user to understand the implications of doing so. However, hard
errors resulting from exceeding available resources will result in guest failures. CPU overcommit is also supported but carries
performance implications.


Time Synchronization
Most guests require some additional support for accurate timekeeping. Where available, kvm-clock is to be used. NTP or similar network-based timekeeping protocols are also highly recommended (for VM Host Server and VM Guest) to help maintain a stable time. Running NTP inside the guest is not recommended when using kvm-clock. Refer to Section 9.5, “Clock Settings” for details.
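Inside a guest, a quick way to check that kvm-clock is actually being used (standard sysfs path on recent kernels):

cat /sys/devices/system/clocksource/clocksource0/current_clocksource   # should print kvm-clock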
MAC Addresses
If no MAC address is specified for a NIC, a default MAC address will be assigned. This may result in network problems when more than one NIC receives the same MAC address. It is recommended to always ensure that a unique MAC address has been assigned for each NIC.
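If a MAC address has to be generated by hand, one simple bash sketch is to pick a random address in the 52:54:00 prefix used by the other examples in this post:

printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))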

Live Migration
Live Migration is only possible between VM Host Servers with the same CPU features and no physical devices passed from host to guest.
Guest storage has to be accessible from both VM Host Servers, and the guest definitions need to be compatible. VM Guests need to have proper timekeeping configured.

User Permissions
The management tools (Virtual Machine Manager, virsh, vm-install) need to authenticate with libvirt—see Chapter 7, Connecting and
Authorizing for details. In order to invoke qemu-kvm from the command line, a user has to be a member of the group kvm.
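For example, to add the user tux (the user name from the connection examples below) to the kvm group, something like this should work:

usermod -a -G kvm tux   # run as root; tux must log in again for the new group to take effect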
Suspending/Hibernating the VM Host Server
Suspending or hibernating the VM Host Server system while guests are running is not supported.

"""Hardware limitations """
Max. Guest RAM Size
512 GB
Max. Virtual CPUs per Guest
64
Max. Virtual Network Devices per Guest 8

Max. Block Devices per Guest
4 emulated (IDE), 20 para-virtual (using virtio-blk)
Max. Number of VM Guests per VM Host Server
64

vm-install
Define and install VM Guests via vm-install including specifying RAM, disk type and location, video type, keyboard mapping, NIC type,
binding, MAC address, and boot method.

Virtual Machine Manager
Manage guests via Virtual Machine Manager using the following functions: autostart, start, stop, restart, pause, unpause, save, restore,
clone, migrate, special key sequence insertion, guest console viewers, performance monitoring, and CPU pinning.

virsh
Manage guests via the command line.
Restrictions: Requires XML descriptions as created by vm-install or virt-manager

virsh define domainName
virsh start domainName
virsh edit domainName
virsh undefine domainName
virsh define /tmp/foo_new.xml
virsh list
virsh destroy domainName
virsh dominfo domainName
virsh reboot domainName
virsh dumpxml domainName

kvm-ok [ This command will test the CPU flags for virtualization ]
egrep '(vmx|svm)' /proc/cpuinfo [ To check the CPU flags ]

start - virsh -c qemu:///system start sles11
pause - virsh -c qemu:///system suspend sles11
reboot - virsh -c qemu:///system reboot sles11
Graceful shutdown - virsh -c qemu:///system shutdown sles11
force down - virsh -c qemu:///system destroy sles11
turn on autostart - virsh -c qemu:///system autostart sles11

""saving and restoring ""
virsh save 37 /virtual/saves/opensuse11.vmsave
virsh restore /virtual/saves/opensuse11.vmsave

""" VNC PASSWORD for GUESTS """
Change the configuration in /etc/libvirt/qemu.conf as follows
vnc_listen = "0.0.0.0"
vnc_password = "PASSWORD"

""" Hypervisor Connections """
qemu:///system
Connect to the QEMU hypervisor on the local host having full access (type system). This usually requires that the command is issued by the
user root.

qemu+ssh://tux@mercury.example.com/system
Connect to the QEMU hypervisor on the remote host mercury.example.com. The connection is established via an SSH tunnel.

qemu+tls://saturn.example.com/system
Connect to the QEMU hypervisor on the remote host saturn.example.com. The connection is established via TLS/SSL.
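Any of these URIs can be passed to virsh with the -c switch, for example:

virsh -c qemu+ssh://tux@mercury.example.com/system list --all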

""" Migrating VM Guests """
One of the major advantages of virtualization is the fact that VM Guests are portable. When a VM Host Server needs to go down for maintenance, or when the host gets overloaded, the guests can easily be moved to another VM Host Server. KVM and Xen even support "live" migrations, during which the VM Guest remains constantly available.
To migrate a VM Guest with virsh migrate, you need to have direct or remote shell access to the VM Host Server, because the command needs to be run on the host. Basically, the migration command looks like this:

Command: virsh migrate [OPTIONS] VM_ID_or_NAME CONNECTION_URI [--migrateuri tcp://REMOTE_HOST:PORT]

Examples:
virsh migrate --live --persistent --undefinesource 37 \
qemu+tls://root@jupiter.example.com/system
virsh migrate --live --persistent 37 \
qemu+tls://root@jupiter.example.com/system

virsh migrate --live opensuse11 qemu+ssh://root@jupiter.example.com/system (Transient live migration with default parameters)
virsh migrate 37 qemu+ssh://root@jupiter.example.com/system (Offline migration with default parameters)

""""Transient vs. Persistant Migrations""""
By default virsh migrate creates a temporary (transient) copy of the VM Guest on the target host. A shut down version of the original
guest description remains on the source host. A transient copy will be deleted from the server once it is shut down.

In order to create a permanent copy of a guest on the target host, use the switch --persistent. A shut down version of the original guest description remains on the source host, too. Use the option --undefinesource together with --persistent for a “real” move, where a permanent copy is created on the target host and the version on the source host is deleted. It is not recommended to use --undefinesource without the --persistent option, since this will result in the loss of both VM Guest definitions when the guest is shut down on the target host.

"" To Create an ISO image from an existing CD or DVD , use dd:
dd if=/dev/cd_dvd_device of=my_distro.iso bs=2048

""" QEMU-KVM """
qemu-img create -f raw /images/sles11/hda 8G

qemu-kvm -name "sles11" -M pc-0.12 -m 768 \
-smp 2 -boot d \
-drive file=/images/sles11/hda,if=virtio,index=0,media=disk,format=raw \
-drive file=/isos/SLES-11-SP1-DVD-x86_64-GM-DVD1.iso,index=1,media=cdrom \
-net nic,model=virtio,macaddr=52:54:00:05:11:11 \
-vga cirrus -balloon virtio

After the installation:
qemu-kvm -name "sles11" -M pc-0.12 -m 768 \
-smp 2 -boot c \
-drive file=/images/sles11/hda,if=virtio,index=0,media=disk,format=raw \
-net nic,model=virtio,macaddr=52:54:00:05:11:11 \
-vga cirrus -balloon virtio

""" qemu-img create """
qemu-img create -f fmt -o options fname size
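For example, to create an 8 GB qcow2 image with metadata preallocation (the path here is just for illustration; the available -o options depend on the chosen format):

qemu-img create -f qcow2 -o preallocation=metadata /images/sles11/hda2 8G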

""" qemu-img convert """
Use qemu-img convert to convert disk images to another format. To get a complete list of image formats supported by QEMU, run qemu-img -h
and look at the last line of the output

qemu-img convert -c -f fmt -O out_fmt -o options fname out_fname
qemu-img convert -O vmdk /images/sles11sp1.raw \
/images/sles11sp1.vmdk

""" qemu-img check """
Use qemu-img check to check the existing disk image for errors. Not all disk image formats support this feature
#qemu-img check -f fmt fname

qemu-img check -f qcow2 /images/sles11sp1.qcow2

""" Increase the size of an existing disk image """
When creating a new image, you must specify its maximum size before the image is created.
To increase the size of an existing disk image by 2 gigabytes:

qemu-img resize /images/sles11sp1.raw +2GB

""" Snapshorts """
Use qemu-img snapshot -l disk_image to view a list of all existing snapshots saved in the disk_image image
Creating snapshots
qemu-img snapshot -c backup_snapshot /images/sles11sp1.qcow2
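The same snapshot can later be applied (rolled back to) or deleted with the -a and -d switches:

qemu-img snapshot -a backup_snapshot /images/sles11sp1.qcow2   # revert the image to the snapshot
qemu-img snapshot -d backup_snapshot /images/sles11sp1.qcow2   # delete the snapshot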
""" image info """
qemu-img info /images/sles11sp1_base.raw

Example:

virt-install --connect qemu:///system --name RHEL5 --ram 1000 --disk path=/var/lib/libvirt/images/RHEL5.img,size=8 --network network:default --accelerate --vnc --cdrom /iso-images/RHEL5.2-x86_64.iso --os-type=linux

virt-install --connect qemu:///system --name RHELassignment_test --ram 4000 --disk path=/var/lib/libvirt/images/rhel_assignment.qcow2,size=8 --network network:default --accelerate --vnc --cdrom /home/gyani/Desktop/rhel-server-5.4-i386-dvd.iso --os-type=linux

virt-install --connect qemu:///system --name Demo_Guest --ram 4000 --disk path=/var/lib/libvirt/images/Demo_Guest.qcow2,size=8 --network network:default --accelerate --vnc --cdrom /home/gyani/Desktop/rhel-server-5.4-i386-dvd.iso --os-type=linux


qemu-img create -f qcow2 newdisk.qcow2 10G
qemu-img info filename

PXE with Virt-install (KVM)

virt-install --accelerate --hvm --virt-type=kvm --pxe --name=vm5 --ram=1024  --vcpus=1 --arch=x86_64 --uuid=9d50ff43-746a-48a0-8d02-1ee8a6c3bcf9 --os-type=linux --os-variant=rhel5 --disk  path=/var/lib/libvirt/images/vm5.qcow2,size=7,bus=virtio,cache=none --network=bridge=br0,model=virtio,mac=52:54:ac:14:24:f6 --vnc --vnclisten=0.0.0.0 --noautoconsole --wait=-1


virt-install --accelerate --hvm --pxe --name=vm5 --ram=1024  --vcpus=1 --arch=x86_64  --os-type=linux --os-variant=rhel5 --disk  path=/var/lib/libvirt/images/vm5.qcow2,size=6,bus=virtio,cache=none --network=bridge=br0,model=virtio