Tuesday, March 20, 2012

Elevate cloud security with privilege delegation

"Virtual machines make it possible to separate hardware acquisition and deployment from software deployment, and can make delivery within an enterprise 10, 20, or even 30 times faster." (Thomas J. Bittman, VP and Distinguished Analyst, Gartner)


In today's economic environment, organizations are focused on reducing costs and doing more with less while still trying to remain competitive. This means that IT departments are facing greater scrutiny to ensure that they match key business needs and deliver intended results in the most efficient and cost-effective manner. To meet these challenges, IT organizations are increasingly moving away from a device-centric view of IT toward one focused on the defining characteristics of cloud computing: applications, information, and people.


As an emerging trend that provides rapid access to dynamically scalable and virtualized IT resources, cloud computing promises new and exciting opportunities for organizations to create lean, robust, cost-effective IT infrastructures that better align with business goals. However, certain tradeoffs concerning control, compliance, and security must be addressed before fully realizing those benefits.

This article describes the elements driving data center migration to the cloud, including the role of virtualization in public cloud infrastructures, and outlines the security and compliance implications of cloud computing to provide insight into protecting sensitive data in the cloud through two key methods: administrative access control and privilege delegation.

Why journey into the cloud?
Why would organizations want to move their data center to the cloud? It's simple: The flexibility provided by virtualized servers and the economies of scale of larger private or public clouds create a better economic model for today's computing needs.

Virtualization provides the starting point for the better model: higher utilization of server and storage hardware when workloads vary.
Add the economies of scale and even higher utilization when resources are shared across business units in a private cloud or across companies in a public cloud, and you have a lower-cost model.
Add the flexibility to pay for resources only as they are used, rather than incurring large fixed costs and large chunks of capital expenditure, and IT can better match business requirements in many industries.
However, beyond the simple economics, the cloud model provides significant operational benefits. Virtualization again provides the starting point for a better operational model by reducing the time needed to provision applications and workloads. The cloud model builds on these capabilities by abstracting the end user from the complexity of both the physical infrastructure and the details of the provisioning and management processes, making computing as easy to buy and manage as any other business service, and providing metering for measured service and service-level agreements. Add to that increased reliability and greater accessibility for mobile or remote users, and the cloud becomes a very compelling value proposition.

Virtualization as an enabler

While the cloud is not in and of itself virtualization, virtualization is a critical component and major enabler of cloud computing. Virtualized servers and storage allow higher utilization of physical hardware when workload varies.
The ability to automatically move workloads whenever required increases reliability without the need to provide redundant (and often underutilized) hardware for every application. Cloud providers build on the economic advantages of virtualization; combining that with economies of scale and advanced automation of routine systems administration is what creates the cost savings that allow cloud-based data centers to be an economically viable alternative or supplement. Still, organizations moving data onto the cloud must consider the risks they face if the virtual environment is not administered properly.
Additionally, virtualization is enabling the IT department itself to be, in effect, a service provider for the business. Virtualization again provides the starting point for a better operation model by reducing the time to provision needed applications and workloads. By abstracting the end user from the complexity of both the physical infrastructure and the details of the provisioning and management processes, server virtualization "helps IT behave more like a cloud provider, and prepares the business to be a better consumer of cloud computing." (From GartnerGroup, "Server Virtualization: One Path That Leads to Cloud Computing", RAS Core Research Note G00171730, Thomas J. Bittman, 29 October 2009.)
So what does this mean for the data center and IT operations? The first characteristic of a heavily virtualized data center is a dramatic increase in the number of servers to be managed. This increasing scale, from hundreds to thousands and from thousands to tens of thousands of servers, adds a high degree of complexity to data center operations. Change and configuration management become far more important and challenging, and automation moves from a nice way to save money to a fundamental requirement.

Because of this additional complexity in virtual and cloud environments, client data is now exposed to security vectors not found in purely physical environments. The addition of a virtualization layer to the IT stack introduces a new point of failure in the established security model and a new attack surface for intruders or malicious insiders. Any breach of security at the hypervisor level undermines all of the security in the stack above it, from the operating system through the data and application layers.

The dangers of a cloud data center


According to an IDC Enterprise Panel survey, the number one concern of companies moving into cloud computing environments is security.

[Figure: Security is the number one concern when moving into the cloud (IDC Enterprise Panel survey)]

Virtualization with KVM/XEN


## Brief Introduction about Virtualization Concepts ##

virtual machine as file
virtual machine on physical host

In layman's terms, virtualization is a method to create a virtual machine inside an already existing machine,
like running multiple PCs inside one physical set of hardware.

More precisely: the mechanism to run multiple instances/copies of various operating systems inside a base operating system, mainly to utilize under-used resources on the physical host where the base operating system is running.

Virtualization/emulation software also creates a virtual network.
#Benefits:
- You don't require anything other than your PC

#History
- The IBM System/360 Model 67 (S/360-67) was a mainframe that first shipped in July 1966. It included features to facilitate time-sharing applications, notably virtual memory hardware and 32-bit addressing.
Note: Virtualization was not the whole idea behind the creation of the IBM System/360

#Need
- power was expensive
- reduce the heat
- more efficient computing
- multiple instances

# VMware
- VMware was founded in 1998 and delivered its first product, VMware Workstation, in 1999
- It was emulation software
- It emulated all the hardware virtually to fool the virtual PC into believing it had everything it needed

# Xen ( paravirtualization )
- In 2003 Xen came out as a research project
- Xen originated as a research project (XenoServer) at the University of Cambridge
- We don't have to emulate anything
- A lot of overhead is removed from the physical host, so the machines run pretty fast
- The guest machine (the OS running in the virtual machine) knows it is running on a virtualized machine
Note: Xen supports para-virtualization and also supports full virtualization
# Xen supported Architecture
- 32-bit x86 with PAE support ( Physical Address Extension )
- Xen's full virtualization additionally requires availability of Intel VT-x or AMD-V technology within the processor
- Full virtualization uses the processor's extensions
- Note 1: Xen does not support committing more RAM to VMs (in total) than the total physical RAM you have on the physical host. This means you cannot over-commit RAM
- Note 2: Xen allows/supports committing more CPUs to VMs (in total) than the total physical CPUs you have on the physical host. That will, however, have a negative effect on performance

- Full virtualization support came with Intel VT-x and AMD-V (2005-2006)

# QEMU ( Quick Emulator )
QEMU was presented at the USENIX 2005 Annual Technical Conference. QEMU was written by Fabrice Bellard and is free software.
Specifically, the QEMU virtual CPU core library is released under the GNU Lesser General Public License (GNU LGPL)

Difference between VMware and QEMU
- QEMU is free software (GNU GPL/LGPL)
- VMware is proprietary

QEMU is a machine emulator: it can run an unmodified target operating system (such as Windows or Linux) and all its applications in a virtual machine. QEMU itself runs on several host operating systems

QEMU has the ability to emulate multiple processors and multiple architectures
Note: QEMU code is used by KVM and Xen, and also by VirtualBox
QEMU uses code from Bochs ("think inside the bochs")


# KVM ( Kernel based Virtual Machine )
- The normal Linux kernel has virtualization facilities built in via KVM
- KVM is open source software
- KVM (Kernel-based Virtual Machine) was developed by Qumranet, Inc.
- On September 4, 2008 Qumranet was acquired by Red Hat, Inc.
- KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). Full virtualization uses the processor-based virtualization extensions
- KVM will not work with old processors; it works with recent Intel VT or AMD-V processors
- Using KVM, one can run multiple VMs running unmodified Linux or Windows images
- Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
- The kernel component of KVM is included in mainline Linux as of 2.6.20
- KVM management tools: oVirt, Virtual Machine Manager (virt-manager, for KVM/Xen), etc.
- Recent kernels have this virtualization facility built in (a quick check follows below)
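A quick way to confirm that the KVM modules are actually loaded on a physical host (a minimal sketch; the module name assumes an Intel CPU, use kvm_amd on AMD hardware):

# lsmod | grep kvm        # should list kvm plus kvm_intel (or kvm_amd)
# modprobe kvm_intel      # load the Intel module by hand if it is missing
# ls -l /dev/kvm          # the character device that qemu-kvm talks to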


# Parallels
- Parallels uses Intel Core's virtualization technology to allow the virtual machine direct access to the host computer's processor
- VM inside a chroot: OS-level virtualization
- Parallels Virtuozzo Containers is an operating system-level virtualization product designed for large-scale homogeneous server environments and data centers

# OpenVZ
- OpenVZ is an operating system-level virtualization technology based on the Linux kernel and operating system.
  OpenVZ allows a physical server to run multiple isolated operating system instances known as containers.
  It is similar to FreeBSD jails and Solaris Zones
- Note: Only Linux-based virtual machines can run on it; they claim that they are very fast

# VirtualBox
- Oracle VM VirtualBox is an x86 virtualization software package, originally created by the German software company Innotek, now developed by Oracle Corporation as part of its family of virtualization products.
- Very nice; runs on almost all OSes

# Other examples of x86 virtualization software
- Microsoft's Virtual PC, Hyper-V and Microsoft Virtual Server
- ESX runs directly on the hardware, whereas VMware Workstation runs on a host OS


# Virtualization Terminology
- Virtualization is a term that refers to the abstraction of computer resources; in simple words, the mechanism to run multiple instances/copies of various operating systems inside a base operating system, mainly to utilize under-used resources on the physical host
- Hypervisor or virtual machine monitor (VMM): the so-called virtualization software layer which provides virtualization capabilities. It can interact either with the base OS or with the hardware

It is the software which manages and supports the virtualization environment. It runs the virtual machines and isolates them from the real hardware
- There are three types of hypervisors
- Type 1 hypervisor: a hypervisor running on bare-metal hardware, e.g. Linux KVM, IBM z/VM, VMware ESX, etc.
(bare metal) KVM is an integral part of the Linux kernel; virtual machines interact with it directly
- Type 2 hypervisor: virtualization software that runs on a host OS, e.g. VMware Workstation, VMware Server (formerly known as GSX Server), Parallels Desktop, Microsoft Virtual Server, etc.
Here an instruction from the VM passes to the virtualization software, which hands it over to the base OS, which then passes it to the hardware layer

Note: Xen falls into both Type 1 and Type 2; it is also called a hybrid hypervisor
- Type 3 / hybrid hypervisor: runs directly on bare metal like Type 1, but depends heavily on drivers and support from one of its (privileged) virtual machines to function properly, e.g. Xen. Dom-0 is the special VM which kernel-xen needs.

Emulator: an emulator is software which emulates all pieces of hardware for its VMs, e.g. VMware, QEMU, etc.

Shared kernel: like OpenVZ and Parallels.
Used in chrooted/jailed virtual environments. All machines share the same kernel and most of the libraries; only some parts of the OS are virtualized or made available to the VM through separate directories
Domain: any virtual machine running on the hypervisor

There are two types of Domain
- Domain-0
- Domain-U

In Xen terms, Domain-0 is the privileged domain and Domain-U is the unprivileged domain.
Domain-0 is the complete OS which is booted from the Xen kernel (the privileged domain).

Domain-0 / Privileged Domain: a virtual machine having privileged access to the hypervisor. It manages the hypervisor and the other VMs. This domain is always started first by the hypervisor, on system boot. Also referred to as the Management Domain or Management Console. Dom-0 can be used in a "thick" or "thin" model. Thick model means that a lot of software is present to assist virtual machine management, such as on laptops and desktops used for development and testing. Thin model means that Dom-0 is kept as thin as possible by providing just the bare minimum software components for the hypervisor to run the virtual machines properly. This results in lower resource utilization by Dom-0, leaving more resources for the guest domains. Used in production environments, on production servers, etc.

Domain-U / Guest Domains / User Domains: VMs created by Dom-0. Sometimes simply known as Guest or Dom-U

PAE: Physical Address Extension is a feature first implemented in the Intel Pentium Pro to allow x86 processors to access more than 4 gigabytes of random access memory if the operating system supports it. AMD extended it by adding a level to the page table hierarchy, allowing up to 52-bit physical addresses, adding NX bit functionality, and making it the mandatory memory paging model in long mode

Intel VT-x and AMD-V
Processor hardware-assist extensions which provide various capabilities to hypervisors

Processor capability identification tips:
egrep '(vmx|svm)' /proc/cpuinfo
cat /proc/cpuinfo | grep flags   ( look for pae, vmx/svm, lm; lm (long mode) in turn says it is a 64-bit capable processor )

uname -a   ( to check whether the OS is 64-bit or 32-bit )

PVM: Para-Virtual Machine. A virtual machine created using Xen's para-virtualization technology.
There is no hardware emulation (nothing fake); whatever you want to execute is handed to the base OS.
In paravirtualization the virtual machine runs with the same processor spec as the base OS, whereas in emulation we get a fake processor, network card, VGA, etc. (which slows things down)

HVM: Hardware-assisted Virtual Machine. A virtual machine created using Xen's or KVM's hardware-assisted full virtualization technology, on a physical host which supports Intel VT-x/AMD-V extensions in the processor.
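Whether the running Xen hypervisor can actually host HVM guests can be checked from Dom-0 (a sketch; entries containing "hvm" in the capability string indicate hardware-assisted guests are supported):

# xm info | grep xen_caps    # hvm-... capabilities listed alongside the plain xen ones mean HVM is available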

Domain-0 / Privileged Domain:
Xen scenario:
/etc/grub.conf

title CentOS (2.6.18-164.15.1.el5xen)
root (hd0,0)
kernel /boot/xen.gz-2.6.18....
module /boot/vmlinuz...
module /boot/initrd...

In the above example, when the OS is booted with the Xen kernel, the normal kernel is loaded as a module.

# XEN
We have to boot with the Xen kernel to work with virtual machines.
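A couple of quick checks after rebooting into the Xen kernel confirm that the hypervisor is really in control (a minimal sketch; it assumes the xend service is running):

# uname -r     # should end in "xen", e.g. 2.6.18-164.15.1.el5xen
# xm info      # prints hypervisor details; it fails if you booted the plain kernel
# xm list      # on a fresh host, Domain-0 should be the only domain listed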

Why use virtualization?
- Consolidation (reduce 10 servers in the rack to fewer physical hosts)
Technically, the reduction shows up in power, rack/desk/floor space, hardware, HVAC, wiring/cabling, etc.
- Efficient utilization of under-utilized resources
CPU/memory, disks, bandwidth, etc. (a dedicated web server on a physical host may use only 7% CPU)
- Support for applications only supporting older versions of some OS
- Service/domain/role based compartmentalization
e.g. mail server and web server on separate VMs
- Fail-over and load balancing features (no load at night, load during the day: shut down some physical hosts or put them in sleep mode; in the business peak season we can scale up)
Fail-over: if one virtual machine fails on one physical host, we can instantaneously invoke a virtual machine sitting on another host
- Developers can test code on test servers
- Easy roll-backs
- A replica of the production server can be created as a VM, so patches etc. can be tested
- Programs/applications targeted to run on different OS/platforms can be tested, e.g. a web application needs to be tested on Firefox running on Linux or on Windows
- Virus testing, spam testing, password cracking, sniffing, DoS etc. can all be tested safely

- Training
- Virtual labs can be set up with fewer resources
- Security training can be delivered without concerns of breaking out into the production network
- Each student can have his own (virtual) lab on his own PC/laptop, in addition to the lab provided by the instructor

- Virtual appliances
- Appliances, such as a hardened mail server, can be created, which simply need to be started as a VM on your host OS, etc. The same can be done to create fully functional web hosting servers. (I have deployed a few web hosting servers using this method)
- Ease of machine migration in case of hardware failure (e.g. no need to re-install/reconfigure your favourite mail server from scratch!)

- Legacy application support
- Applications benefit from the newer hardware, such as higher speed, and thus run faster
- And, as someone said, fewer screwdrivers

## Why "not" virtualization ?  // disadvantages
- Administration of VMs, when more than a few, is more complicated, and sensitive than more than a few physical servers. The different VM interfaces, such as VMware's Virtual infrastructure Center, and KVM's ovirt, try to address this

- For live migrations involving movement of a VM from one physical host to another, involves extra IPs etc. Plus the shared storage, sometimes cluster file systems etc
- Various networking problems arise, such as firewalls, routing, switching, bridging etc
- Some service "bridged connections" from the rented server. This adds extra complexity in managing the physical host's firewall, routing tables etc
- Hardware needs to be more fault-tolerant, and relatively powerful, compared to single server/service requirements
- console access, block device access, recovery, system trouble-shooting, etc are complex areas to handle

## Emulation-based Full Virtualization ( the virtual machine requests the hypervisor, and the hypervisor requests the OS )
- Slower than hardware-based full virtualization
- The hypervisor simulates the virtual machine in software, by analyzing all instructions and converting each one appropriately before it gets to the CPU
- Dynamic translation is a technique used to improve performance: the hypervisor analyzes the binary instructions just before they are run, allowing safe instructions to run unmodified but converting sensitive instructions just before they execute. The converted code is also cached in memory, to speed up future (sensitive) instructions coming in for execution
- Dynamic recompilation optimizes frequently reused sequences on the fly
- Full virtualization with dynamic recompilation is the basic technique used by VMware for its initial/basic products, VMware Workstation, VMware Server, etc.
- Full emulation can also be used to simulate non-standard processor architectures, needed by different OS/applications, by converting all instructions
- This method of simulating/emulating results in very slow VMs
- QEMU and Bochs are examples of non-native/non-standard processor emulators for/on Linux

## Native/Hardware-based/Hardware-assisted Full Virtualization
(The virtual machine directly talks to the hardware CPU)
Note: ** We can run an unmodified guest operating system with this type of virtualization
- Requires CPU-based hardware acceleration (Intel VT-x, AMD-V)
- Bare-metal look and feel; access to hardware is controlled through the hypervisor
- Almost all code coming in from the VM is run directly by the CPU of the physical host, without any changes, for efficiency
- The hypervisor only steps in when the code coming in from the VM uses sensitive instructions that would interfere with the state of the hypervisor itself, or the environment it is supported by.
- Such sensitive instructions must be intercepted by the hypervisor and translated/replaced with safe equivalents before they are actually executed on the CPU of the physical host.
- To do this, all sensitive instructions in the CPU's Instruction Set Architecture (ISA) must be defined as privileged
- The traditional x86 instruction set has about 17 instructions which are sensitive but not defined as privileged, so the hypervisor is unable to trap such instructions coming from a VM. Even the latest Intel Itanium 2 has three instructions which are sensitive but still not defined as privileged
- Intel VT-x and AMD-V technologies were developed to overcome this problem on modern 32-bit and 64-bit x86 processors
- In Linux, the Xen hypervisor "can use" these new CPU features, whereas KVM "needs/requires" these features in the CPU for it (the KVM hypervisor) to work
- Examples: KVM, VMware ESX
- An unmodified guest OS can be used as a VM, e.g. Windows

# Para-virtualization / Cooperative Virtualization
- Works without the newly available CPU-based hardware acceleration technologies, such as Intel VT-x and AMD-V
- e.g. Xen
- The "hypervisor aware" code is integrated into the kernel of the operating systems running on the virtual machines. This results in a "modified kernel", commonly known as "kernel-xen" instead of simply "kernel". That is why you will see "kernel-xen-x.y" booting up when you power up your virtual machine's OS.
NOTE: Windows cannot be run as a paravirtualized guest OS. Not for production.
The base OS / Domain-0 already runs under kernel-xen. Generally, no other changes are required to the rest of the software on the virtual machines. Xen is the actual hypervisor, which runs directly on the CPU of the physical host, at "full speed". In other words, the (modified) kernel of each virtual machine's OS actually runs on the hypervisor, assuming the hypervisor to be the CPU itself. This eliminates the need to have a separate trapping/translation mechanism present in the hypervisor
- The above description implies that only a modified guest OS, which understands the hypervisor, can be used as a VM. That means Windows and family products cannot be run in a para-virtualization environment (one of the excellent books on Xen is "The Book of Xen")
- This also means that all versions/derivatives of Linux which have "kernel-xen" included in their package list can be used as Dom-U/guests
- Only the hypervisor has privileged access to the CPU, and it is designed to be as small and limited as possible
- The Xen hypervisor interacts with the OS running under its control using very few well-defined interfaces, called hypercalls. Xen has about 50 hypercalls, compared to about 300 system calls for Linux
Note: the Xen kernel has very few hypercalls
- Hypercalls are "asynchronous", so that the hypercalls themselves don't block other work
- The base OS, which actually installs the Xen hypervisor on the physical host, is also referred to as the "Privileged Domain" or "Domain-0" or "Dom-0". This privileged domain is in turn used to manage the hypervisor, and it manages all other virtual machines created under the Xen hypervisor. These other virtual machines are referred to as "Guest Domains" or "User Domains" or "Dom-U". That means the OS of the privileged domain also runs as a VM under the Xen hypervisor, just like the other virtual machines on the same physical host, "but" "with more privileges": Dom-0 has direct access to the hardware.

Note: Xen Good Book
HowDoesXenWork.pdf and WhatisXen.pdf

#Application Virtualization
- The application creates a sandbox environment, in the browser etc., e.g. the JRE
- An applet runs inside a small Java instance, a Java container

# API-Level Virtualization
- Virtualization provided to support a single application
- e.g. WINE is used to run Windows programs in a Linux environment

- cat /proc/cpuinfo | grep flag
- uname -a

# Xen Architecture
- As mentioned earlier, the Xen hypervisor runs directly on the machine's hardware, in place of the operating system. The OS is in fact loaded as a module by GRUB
- When GRUB boots, it loads the hypervisor; the "kernel-xen" Linux kernel is loaded as a module
- The hypervisor then creates the initial Xen domain, Domain-0 (Dom-0 for short)
- Dom-0 has full/privileged access to the hardware and to the hypervisor, through its control interfaces
Note:
When Xen is loaded as the kernel, Linux is loaded as a module
- xend, the user-space service, is started to support utilities which in turn can install and control other domains and manage the Xen hypervisor itself
- It is critical, and thus included in Xen's design, to provide security for Dom-0: if it is compromised, the hypervisor and the other virtual machines/domains on the same machine can also be compromised
- One Dom-U cannot directly access another Dom-U. All user domains are accessed only through Domain-0

# The Privilege Rings Architecture
- Security rings, also known as privilege rings or privilege levels, are a design feature of all modern-day processors. The lowest numbered ring has the highest privilege and the highest numbered ring has the lowest privilege. Normally four rings are used, numbered 0-3
Note:
1. Security architecture, compartmentalization
2. Kernel-xen runs in ring 0; VMs run in lower-privileged rings
- In a non-virtualized environment, the normal Linux operating system's kernel runs in ring-0, where it has full access to all of the hardware. The user-space programs run in ring-3, which has only the limited hardware access they need; user-space programs make requests to the system (kernel) code in ring-0
- Para-virtualization works using ring compression. In this case, the hypervisor itself runs in ring-0. The kernels of Dom-0 and the Dom-Us of the PVMs run in lower-privileged rings, in the following manner:
- On 32-bit x86 machines, Dom-0 and Dom-U kernels run in ring-1, and segmentation is used to protect memory
- On 64-bit architectures, segmentation is not supported. In that case, kernel-space and user-space for virtual domains must both run in ring-3. Paging and context switches are used to protect the hypervisor, and also to protect the kernel address space and user spaces of virtual domains from each other

# Hardware-assisted full virtualization works differently. The new processor instructions (Intel VT-x and AMD-V) place the CPU in new execution modes, depending on the situation
- When executing instructions for a hardware-assisted virtual machine (HVM), the CPU switches to "non-privileged" or "non-root" or "guest" mode, in which the VM kernel can run in ring-0 and the userspace can run in ring-3
- When an instruction arrives which must be trapped by the hypervisor, the CPU leaves this mode and returns to the normal "privileged" or "root" or "host" mode, in which the privileged hypervisor is running in ring-0.
- Each processor of each virtual machine has a virtual machine control block/structure, stored in a page of memory. This block/structure stores the information about the state of the processor in that particular virtual machine's "guest" mode
Note: this can be used in parallel with para-virtualization. This means that some virtual machines may be set up to run as para-virtualized while, at the same time on the same physical host, other virtual machines use the virtualization extensions (Intel VT-x/AMD-V). This is of course only possible on a physical host which has these extensions available and enabled in the CPU.


#Xen Networking Concepts/Architecture
This is the most important topic after the basic Xen architecture concepts. Disks/virtual block devices will follow it.
Xen provides two types of network connectivity to the guest OS/Dom-Us:
1. Shared Physical Device (xenbr0)
2. Virtual Network (virbr0)

- Shared Physical Device (xenbr0)
When a VM needs to have an IP on the same network to which the Xen physical host is connected, it needs to be "bridged" to the physical network. "xenbr0" is the standard bridge, or virtual switch, which connects VMs to the same network where the physical host itself is connected.
- xenbr0 never has an IP assigned to it, because it is just a forwarding switch/bridge
- This kind of connectivity is used when the VMs have publicly accessible services running on them, such as an email server, a web server, etc.
- This is a much easier mode of networking/network connections for the VMs.
- This was the default method of networking VMs in RHEL 5.0


#Virtual Network (virbr0)
When a VM does not have to be on the same network as the physical host itself, it can be connected to another type of bridge/virtual switch which is private to the physical host only. This is named "virbr0" in Xen, and in KVM installations too.

- The Xen physical host is assigned a default IP of 192.168.122.1 and is connected to this private switch. All VMs created/configured to connect to this switch get an IP from the same private subnet, 192.168.122.0/24. The physical host's 192.168.122.1 interface works as a gateway for these VMs' traffic to go out of the physical host, allowing them to communicate with the outside world
- This communication is done through NAT. The physical host/Dom-0 acts as a NAT router, and also as a DHCP and DNS server for the virtual machines connected to virbr0. A special service running on the physical host/Dom-0, named "dnsmasq", does this.
- Another advantage is that even if your physical host is not connected to any network, virbr0 still has an IP (192.168.122.1), so all virtual machines and the physical host are always connected to each other. This is not possible in shared physical device mode (xenbr0), because if the network cable is unplugged from the physical host and it does not have an IP of its own, the virtual machines also don't have an IP of their own (unless they are configured with static IPs)

- Since the machines do not obtain IPs from the public network of the physical host, public IPs are not wasted
- Each Xen guest domain/Dom-U can have up to three (3) virtual NICs assigned to it. The physical interface on the physical host/Dom-0 is renamed to "peth0" (physical eth0). This becomes the "uplink" from this Xen physical host to the physical LAN switch. In fact a virtual network cable is connected from this peth0 to the virtual bridge created by Xen. (The virbr0 setup can be inspected with the commands sketched below.)
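On a stock RHEL/CentOS host, virbr0 is managed by libvirt as the "default" network, so it can be inspected with virsh and brctl (a sketch; the exact XML output varies by libvirt version):

# virsh net-list --all        # shows the "default" network that backs virbr0
# virsh net-dumpxml default   # reveals the 192.168.122.0/24 range and DHCP settings
# brctl show                  # lists xenbr0/virbr0 and the interfaces attached to them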


# Check-list for performing an actual Xen installation on a physical host
Things to check before you start

- Make sure that, at a minimum, PAE is supported by your processor(s); it is needed by Xen if para-virtualization is required
- Make sure you have enough processors/processing power for both Dom-0 and Dom-U to function properly
- If you want to use hardware-assisted full virtualization, make sure that Intel VT-x/AMD-V extensions are available in your processor(s)
- At least 512 MB RAM for each domain, including Dom-0 and Dom-U. It can be brought down to 384 MB, or even 256 MB in some cases, depending on the software configuration you select.
- Enough space in the active partition of the OS for each VM, if you want to use one large file as the virtual disk for each virtual machine. Xen creates virtual disks in the location /var/lib/xen/images
- You can also create virtual disks on logical volumes and snapshots, as well as on a SAN, normally an iSCSI-based IP-SAN
- Enough free disk space to create raw partitions which can be used by virtual machines as their virtual disks. In this case, free space in the active Linux partitions is irrelevant
- Install Linux as you would normally. ( We are only focusing on RHEL, CentOS and Fedora in this text, though there are other distributions out there too )
- You may want to select the package group named "Virtualization" during the install process.
- If you did not select the package group "Virtualization" during the install process, install it now.
- Make sure that kernel-xen, xen, libvirt and virt-manager are installed
- Make sure that your default boot kernel in GRUB is the one with "kernel-xen" in it. You can also set "DEFAULTKERNEL=kernel-xen" in the /etc/sysconfig/kernel file.
- You may or may not want to use SELinux. If you are not comfortable with it, disable it. Xen and KVM are SELinux aware, and work properly ( with more security ) when used (properly) on top of an SELinux-enabled Linux OS
- Lastly, make sure that the xend and libvirtd services are set to start on boot:
chkconfig --level 35 xend on
chkconfig --level 35 libvirtd on

e.g. chkconfig --list | grep 3:on | egrep "xend|libvirtd"

- It is normally quite helpful to disable unnecessary services, depending on your requirements. I normally disable sendmail, cups, bluetooth, etc., on my servers.
- It is important to know that, while creating para-virtual machines, you cannot use the ISO image of your Linux distribution stored on the physical host as the install media for the VMs being created. When you need to create PV machines, you will need an exploded version of the install tree of RHEL/CentOS 4.5 or higher, accessible to this physical host. Normally this is done by storing the exploded tree of the installation CD/DVD on the hard disk of the physical host and making it available through NFS, HTTP or FTP. Therefore you must cater for this additional disk space requirement when you are installing the base OS on the physical host.

Para-virtualization
- There is no concept of a BIOS
- No hardware emulation


Kickstart file
less /var/www/html/kvm-ks.cfg
%packages
@core              # install the core package group
-krb5-workstation  # remove this package from the core group
-up2date           # remove
and so on and so forth (keep it as minimal as possible!)


# rpm -qa | egrep "kernel-xen|xen|libvirt|virt-manager"
# yum groupinstall "Virtualization"

To see the Default Kernel
cat /etc/sysconfig/kernel

chkconfig --list | grep 3:on | egrep "xend|libvirtd"
chkconfig --list | grep 3:off | egrep "xend|libvirtd"
xendomains service --- to start some virtual machines automatically at physical machine reboot
service xend status
service libvirtd status

Check the listening services on GNU/Linux:
#nmap localhost

Stop all the unnecessary services, ideally on any Linux machine:
bluetooth, sendmail, cups

#Paravirtualization does not do any emulation; it does not know about any CD-ROM.
The ISO has to be extracted and kept in a folder (then exported over NFS/HTTP/FTP)


cd /var/lib/xen/images -- hard disk images
cd /etc/xen -- config file
UUID -- Automatically generated by the hypervisor
vif -- virtual interface

chkconfig --level 35 xend on
chkconfig --level 35 libvirtd on

Paravirtualization doesn't support PXE booting!

Installation media URL : http://192.168.122.1/CentOS-5.4-x86_64
#vncserver -kill :1   # kill the VNC session

cat /proc/cpuinfo  # the same processor is available to the virtual machine and the physical server --- paravirtualization

#dmesg | grep -i eth

virbr0 --- virtual bridge... private to physical host
xen renames the physical host "eth0" to "peth0"

Since Dom-0 itself runs as a domain, its network card also maps to a virtual interface:
the network card maps as device 0 of domain 0,
i.e. eth0 - vif0.0 (virtual interface)

vif4.0 --- virtual interface of domain 4, connected to the first network interface of that domain
Domain 4 - eth0
There is a virtual patch cable running from vif4.0 to that eth0, which establishes the network connection

Each interface on a virtual machine has a connection to the physical box and shows up as a vif interface (see the sketch below)
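The mapping between a guest's NICs and the vifX.Y interfaces on Dom-0 can be listed per domain (a sketch; domain ID 4 is just the example used above):

# xm network-list 4   # lists the virtual NICs of domain 4, with their backend vif details
# brctl show          # shows which vifX.Y ports are plugged into which bridge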

Bridge mode
Guest domains are ('transparently') on the same network as dom0
Routing mode
Guest domains 'sit behind' dom0. Packets are relayed to the network by dom0
NAT mode
Guest domains 'hide behind' dom0 using dom0's IP for external traffic

http://wiki.xensource.com/xenwiki/XenNetworking

#Backend pieces consideration
Understanding the Xen networking backend pieces will aid in troubleshooting the problems that crop up. The following outlines what happens when the default Xen networking script runs on a single-NIC system:
1. the script creates a new bridge named xenbr0
2. the "real" ethernet interface eth0 is brought down
3. the IP and MAC address of eth0 are copied to the virtual network interface veth0
4. the real interface eth0 is renamed peth0
5. the virtual interface veth0 is renamed eth0
6. 'peth0' and 'vif0.0' are attached to bridge 'xenbr0' as bridge ports
7. the bridge, peth0, eth0 and vif0.0 are brought up

peth0 is the cable connection establishing the link between the guest machines and the switch;
'peth0' gets connected to 'xenbr0'.
This process works wonderfully if there is only one network device present on the system. When multiple NICs are present, the process can get confused or limitations can be encountered

In this process, there are a couple of things to remember (the state can be verified with the sketch after this list):
1. pethX is the physical device, but it has no MAC or IP address
2. xenbrX is the bridge between the internal Xen virtual network and the outside network; it
   does not have a MAC or IP address
3. vethX is a usable end-point for either Dom0 or DomU and may or may not have an IP or MAC address
4. vifX.X is a floating end-point for vethXs that is connected to the bridge
5. ethX is a renamed vethX that is connected to xenbrX via vifX.X and has an IP and MAC address
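The end result of the networking script can be checked from Dom-0 (a minimal sketch; interface names follow the defaults described above):

# brctl show             # xenbr0 should list peth0 and vif0.0 as its ports
# ip addr show peth0     # carries no IP address of its own, as noted above
# ip addr show eth0      # the renamed veth0, carrying the host's IP and MAC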

Para-virtualization virtual machine
#kickstart file automatically generated by anaconda
install
text
url --url http://192.168.122.1/CentOS-5.4-x86_64
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp --hostname test-vm-2
rootpw --iscrypted $$$$$$$$$$
firewall --enabled --port=22:tcp
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone Asia/Kolkata
bootloader --location=mbr --driveorder=xvda
clearpart --linux
part / --fstype ext3 --size=100 --grow
part swap --size=384

%packages
@core

Trick**
Put the CD in and boot, then add an entry at the boot prompt:

linux text ks=http://192.168.122.1/test.ks

Good Book
High availability data center on the Laptop/Desktop with Xen 3.0
Martin Bracher and Cyrill Muller --- LinuxWorld 2006 trivadis

Dom-0 is the management domain.
In paravirtualization we won't see GRUB (the boot loader); the guest just boots through pygrub, the
Python boot loader
- The lspci command shows no information in a paravirtualized machine (null output),
  because there is no emulated PCI hardware
-- The device model for Xen is also QEMU; it is used for emulation purposes!
 
#LVM Logical Volume Manager
The benefits of LVM: you can always add disks, extend or reduce the storage pool, and take
snapshots as well (a short sketch follows below)
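For example, building the volume group that the later examples draw from, and growing a VM disk afterwards (a minimal sketch; /dev/sdb1 is a hypothetical spare partition, and the DiskStore/LinuxVM1Disk names follow the examples used below):

# pvcreate /dev/sdb1                            # prepare a physical volume
# vgcreate DiskStore /dev/sdb1                  # create the DiskStore volume group
# lvcreate -L 10G -n LinuxVM1Disk DiskStore     # carve out a 10 GB disk for a VM
# lvextend -L +2G /dev/DiskStore/LinuxVM1Disk   # later: grow that disk by 2 GB (with the VM shut down)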

#xm create dsl ( Xen creates/starts the domain defined in /etc/xen/dsl )
#rpm -q xen
#rpm -q libvirt qemu

lvm snapshot
#lvcreate -v --size 2.9G --snapshot --name LinuxVM1DiskSnapShot1 /dev/DiskStore/LinuxVM1Disk
#lvscan

Keep the master copy of the VM hard disk backed up! Work with the snapshot!
The role of LVM is very significant!
cd /etc/xen  (where all the virtual machine configuration files are held)

The next few minutes are going to be a bit advanced for the viewers!
I will try to make it as simple as possible...

#kpartx
kpartx can be used to get at the partitions inside a virtual machine hard disk. Please follow the steps below:

#kpartx -l /dev/DiskStore/LinuxVM1DiskSnapShot1
lists the partitions in that particular snapshot

#kpartx -v -l /dev/DiskStore/LinuxVM1DiskSnapShot1
#kpartx -a /dev/DiskStore/LinuxVM1DiskSnapShot1
# ls /dev/mapper ( will hold mapper entries for all the mapped partitions )
All the partitions will be mapped...

# mkdir /mnt/Testdir ( create a test dir to mount the partitions which were mapped in /dev/mapper )
# mount /dev/mapper/LinuxVM1DiskSnapShot1p1 /mnt/Testdir

Add some files in the mounted partition of the VM disk.
NOTE: NEVER EVER MOUNT ANY LOGICAL PARTITIONS WHILE THE VM IS RUNNING!

LVM advantages!
If we want to do some experiments with a VM, take an LVM snapshot of the VM hard disk
and work with the snapshot disk

**Interesting Documents
Building a Virtualization Cluster based on Xen and iSCSI-SAN

Note: DO NOT RUN 'fsck' on mounted partitions; it will corrupt them

#e2fsck /dev/mapper/LinuxVM1DiskSnapShot1p1
(unmount first / shut down the machine before checking)

#kpartx -d /dev/DiskStore/LinuxVM1DiskSnapShot1
--- deletes the mappings for that particular LVM device ---
It won't delete the LVM device itself; it just detaches the entries from /dev/mapper

# lvremove -v /dev/DiskStore/LinuxVM1DiskSnapShot1
removes the LVM snapshot
# lvscan --- you won't find the snapshot any more

LVM is cool... Now for file-based storage:

# dd if=vserver.img of=/tmp/vserver-tmp.img
We can also play around with .img hard disks using
kpartx
e.g.
#kpartx -l /tmp/vserver-tmp.img
#kpartx -a /tmp/vserver-tmp.img
# ls /dev/mapper
# mount the mapped partitions onto a directory, work with them
umount the directory
# make sure nothing is mounted before checking:
e2fsck /dev/mapper/loop0p1
#kpartx -d /tmp/vserver-tmp.img   (delete the mappings)

ACPI ( Advanced configuration and Power interface )
ACPI aims to consolidate and improve upon existing power and configuration standards for hardware devices. It provides a transition from existing standards to entirely ACPI-compliant hardware, with some ACPI operating systems already removing support for legacy hardware.

APIC (Advanced Programmable Interrupt Controller)
In computing, an advanced programmable interrupt controller (APIC) is a more complex programmable interrupt controller (PIC) than Intel's original types such as the 8259A. APIC devices permit more complex priority schemes and advanced IRQ (Interrupt Request) management

With noapic, the guest sucks up the CPU: it takes lots of CPU cycles, hitting the roof,
while in fact it is not doing anything,
just consuming a lot of CPU

Without ACPI: My Computer --> Device Manager shows
- MPS Multiprocessor PC
- QEMU Harddisk

With ACPI: My Computer --> Device Manager shows
- ACPI Multiprocessor PC
- the processors listed individually
- system devices --- a long list of system devices

#lvscan
#vgdisplay --- will tell you how much disk space we have!

# Steps to create a VM with LVM
--- first create an LVM logical volume of 8G, e.g. /dev/DiskStore/ubuntu-test
--- from the graphical wizard, when the type of hard disk appears, choose the block device entry
    and specify the path (a command-line sketch follows below)
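The same can be done from the command line with virt-install instead of the graphical wizard (a sketch, not the exact command used in the session; the VM name is made up, the flags follow the RHEL 5-era virt-install, and the install URL reuses the tree published earlier in these notes):

# virt-install -n pv-test-vm -r 512 --vcpus=1 \
    -f /dev/DiskStore/ubuntu-test \
    -l http://192.168.122.1/CentOS-5.4-x86_64 \
    -p --vnc
# -n name, -r RAM in MB, -f disk (the block device), -l install tree, -p paravirtualized guest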

# How to set up ACPI/APIC for better resource utilization and to get good speed
As soon as the VM starts up for the first time (while installing the OS), force it off,
then change the parameters from the back end.

Edit the configuration file
- /etc/xen/<vm-name> file (Xen guests)
- /etc/libvirt/qemu/<vm-name>.xml file (KVM guests)

By default the parameter exists as below
acpi = 0
apic = 0

to acpi = 1
   apic = 1
   boot = "d" # boot from cdrom

#lvremove -v /dev/DiskStore/winXP-test
#lvscan

We have to carefully change the bootable device to CD-ROM,
else: "no bootable device found!"

hvm -- hardware-assisted virtual machine;
uses extra features available in modern processors

Whenever emulation is involved, things slow down.
When we install paravirtual drivers, they remove the emulated hardware and give the guest direct access to talk to Dom-0

Paravirtual drivers:
http://meadowcourt.org/downloads/

##
Openfiler
openfiler.com

- Running pre-made Xen images [ Xen OpenFiler Dom-U appliance ]
1) Create a 3GB disk (LVM /disk) for the openfiler root / filesystem
1) a) create two partitions: (root) and (swap)
1) b) extract the openfiler tarball into (root)
1) c) create a PVM by hand, and start it.
2) Attach additional storage, an LVM /disk (10GB), to openfiler

Generally openfiler is installed on physical hard disks; here we use the tarball:
openfiler-2.3-x86_64.tar.gz


steps
- lvcreate -v --size 3G --name openfiler-disk DiskStore
- fdisk /dev/DiskStore/openfiler-disk
  create new partition   +3000M  (create 2 primary partitions )
   root and swap (83,82)
- partprobe
#kpartx -l /dev/DiskStore/openfiler-disk
#kpartx -a /dev/DiskStore/openfiler-disk
NOTE: It's not mounted
# format the disk
#mke2fs -j /dev/mapper/openfiler-disk1
#mount -t ext3 /dev/mapper/openfiler-disk1 /mnt/openfiler-disk1
# cd /mnt/openfiler-disk1
#tar xzf /data/cdimages/openfiler-2.3-x86_64.tar.gz
#umount  /mnt/openfiler-disk1
# kpartx -d /dev/DiskStore/openfiler-disk

xen vm file -- openfiler
name = "openfiler-VM"
maxmem = 512
memory = 512
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
disk = [ "phy:/dev/DiskStore/openfiler-disk,xvda,w" ]
root="/dev/xvda1 ro"
vif = [ "bridge=virbr0,script=vif-bridge" ]
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
------------------

xm create openfiler-VM

ERROR:
ctrl+D error at boot...

Generally the root partition is labeled "/", but the openfiler guys label it "root";
we need to change that...

Again: #kpartx -a /dev/DiskStore/openfiler-disk
# e2label /dev/mapper/openfiler-disk1
# shows the current label; now set the label:
# e2label /dev/mapper/openfiler-disk1 root

# unmap the device
#kpartx -d /dev/DiskStore/openfiler-disk

boot the VM ...

Fire up the GUI address in the browser

username/password --- openfiler/password

Steps to add a volume to openfiler
1. #lvcreate -v -L 10G -n openfiler-storage DiskStore
2. #lvscan

#xm list
#virsh list

#xm block-attach 28 'phy:/dev/DiskStore/openfiler-storage' xvdb w
and so on... as you add more disks: xvda, xvdb, xvdc, etc.
Note: it is important to understand the naming convention (see the sketch below)
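The attached disks can be listed and hot-removed again with the matching xm sub-commands (a sketch; 28 is just the domain ID from the example above):

# xm block-list 28          # shows the virtual block devices attached to domain 28
# xm block-detach 28 xvdb   # detach the disk attached as xvdb (the numeric DevId from block-list also works)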


- Good naming scheme conventions for VMs ( and their LVMs/files in general)
- [HVM/PVM/KVM]-[CENTOS/RHEL/FEDORA/DEBIAN/WINDOWS]-[VERSION(54/12/XP/2K/2K8/07)]-bits(64/32)-FQDN

Recommended: Don't use dots in the Virtual Machine name
e.g.
PC5464-server02-wbitt-com
KD3064-server03-example-com

Current setup
2 physical hosts:
xen1 and xen2
plus a storage physical host (https://192.168.1.100:446)

vim /etc/grub.conf
kernel /boot/xen.gz-2.6.18-164.15.1.el5 dom0_mem=512MB
--- This means Dom-0 will have access to only 512 MB RAM.
Dom-0 will use only 512 MB RAM; we need to reboot the system for the new setting to take effect (see the check below)
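After the reboot, the new limit can be verified from Dom-0 (a minimal sketch):

# xm list                    # the Mem(MiB) column for Domain-0 should now read 512
# xm info | grep -i memory   # compare total_memory and free_memory on the host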

vi /etc/xen/xend-config.sxp

#vncserver -kill :1
#/usr/bin/vncserver -name xenhost2.example.com -geometry 900x700

#grep dom0 /etc/grub.conf


--- openfiler ---
https://ip:446
openfiler/password

--- openfiler installed on a dedicated host machine with a 160 GB hard disk
- root, swap, plus another LVM partition of 40 GB, used as the storage partition
once you create an LVM partition, it will show up in the storage web interface

Goto --- volumes---block devices

# create a partition in /dev/sda
primary --- physical volume -- starting cylinder --- end cylinder --- Create some amount of space
check block device

# Create a new volume group --- storage
Make sure the partition type is Linux LVM --- id (8e)

add volume, create volume
volume name
volume description
required space
filesystem/volume type - iscsi

two volumes are created !

iscsi target !
enable iscsi target
lun mapping (logical unit number )
volume group tab... make sure partition id should be 8e

Two partitions of type Linux LVM (8e): /dev/sda3, /dev/sda4... then partprobe..
Create a new volume group... VGStorage... select /dev/sda3
add volume group..

--- creating a logical volume...
vm1.disk -- name
Disk for VM1

iSCSI Storage:
1) Create / provide Physical disk / partition
2) Create a Volume Group, comprising/ on top of physical disk/partitions
3) Create Logical Volumes on top of this Volume Group
4) create Target name and make the LVs part of target
5) Create network definition (HomeLAN 192.168.1.0/255.255.255.0 share )
6) allow access to the members of this network
7) Our IQN for one target is : iqn.2010-05.com.example:storage.target1

Start the iSCSI target service

iscsi target
press add

LUN mapping (Logical Unit Number )
example iSCSI target
- iqn.2010-05.com.example:storage.target1

There can be n number of logical volumes included in the target

NOTE: Network ACL... you need to allow the client network access to the target before you can log in to it

Network Access Configuration
New HomeLAN 192.168.1.0 255.255.255.0 Share
Access - Allow ( update)

#Virtual machine live migration...
- Two KVM/Xen host machines
- Shared storage (iSCSI or NFS)
- They have to be in the same layer 2 subnet, e.g. 192.168.1.0/24
- Both have to be of the same processor architecture...
- Arch 32-bit or 64-bit...
- The kernel versions also have to be the same...

#yum -y update kernel-xen kernel-headers kernel-xen-devel libvirt virt-manager xen
#yum -y install iscsi-initiator-utils

https://storage.example.com:446

Centos
#yum -y update kernel-xen kernel-headers kernel-xen-devel libvirt virt-manager xen
#yum -y install kernel-headers libvirt-cim
#yum -y update iscsi-initiator-utils

I encountered an error...It is serving by purpose

#rpm -q kernel-xen libvirt virt-manager xen iscsi-initiator-utils

Xen live migration ( the memory state is preserved across the move )
#xm migrate --live mydomain destination.ournetwork.com

Xen kickstart file
clearpart --all --initlabel
zerombr
part / --fstype ext3 --size=100 --grow
part swap --size=384

%packages
@core
-krb5-workstation
-up2date
-isdn4k-utils
-bluez-utils
-yum-security
-yum-metadata-parser
-yum-updatesd
-yum
-yum-rhn-plugin
-ksh

%post
for service in autofs avahi-daemon bluetooth cups firstboot hidd iptables ip6tables netfs nfslock pcscd portmap rawdevices restorecond rpcgssd rpcidmapd sendmail smartd microcode-ctl cpuspeed iscsi iscsid libvirtd lvm2_monitor mdmonitor xend xendomains xfs ; do
chkconfig --level 35 $service off; done

How to mount the iSCSI target from the initiator side
copy the iscsi target ...

Types of /dev/disk naming for mounting...
#ls /dev/disk/
by-id  by-label  by-path  by-uuid

The best way to address the iSCSI disk is /dev/disk/by-path
The path is going to be retained; it is more consistent, since the portal IP stays the same in the virtual datacenter

#checks for iSCSI  (IQN -- iSCSI Qualified Name)
service iscsi status
time has to be in sync between the hosts... it is trivial to set up (a discovery/login sketch follows below)
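A typical client-side sequence with iscsi-initiator-utils looks like this (a sketch; the portal IP and IQN reuse the openfiler example above, and 3260 is the default iSCSI port):

# service iscsi start
# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260      # ask the openfiler box for its targets
# iscsiadm -m node -T iqn.2010-05.com.example:storage.target1 \
    -p 192.168.1.100:3260 --login                                 # log in to the target
# ls /dev/disk/by-path/ | grep iscsi                              # the new LUN shows up here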

cp /etc/xen/xend-config.sxp /etc/xen/xend-config.sxp.orig
vi /etc/xen/xend-config.sxp (changes below)
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
#(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')

Do the above on all the hosts, then:
#service xend restart
#service libvirtd restart
(a quick check of the relocation port follows below)
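Whether the relocation server is actually listening can be checked on each host before trying a migration (a minimal sketch):

# netstat -tlnp | grep 8002                           # xend should be listening on the relocation port
# xm migrate --live mydomain xenhost2.example.com     # then live-migrate a test domain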

NOTE: a Xen connection problem, and how to fix it:
#virsh --connect xen+ssh://root@xenhost2.example.com
The problem is with virt-manager and the SSH fingerprint/keys.
Solved by creating SSH keys on the hypervisors and copying them across:
#ssh-keygen -t rsa

#ssh-copy-id -i /root/.ssh/id_rsa.pub root@xenhost2.example.com
#ssh-copy-id -i /root/.ssh/id_rsa.pub root@xenhost1.example.com

How to (deliberately) corrupt a hard disk for testing
#dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000 (10MB)

##live migration
#virsh migrate --live 1 xen+ssh://xenhost1

Virtualization business benefits
- Less data center space
- Fewer dollars for energy
- Lower capital requirements
Lots of performance tuning has been done in the virtualization layer
tool for performance checks - vmarch
Email server, database server, Java server, file server
that's terrific... lower cost to the customer... technology innovation

#Energy Efficiency
We will reduce the number of physical servers:
the commitment to being green, in this global environment.
It is very important to have a balance between:
- Memory speed
- Network speed
- processor speed
- storage I/O

# Innovative Technologies
VMotion - allows the customer to dynamically move a virtual machine from one physical host to another physical host on the fly, without any
downtime. It gives a platform on which the virtual machine keeps running

DRS - Distributed Resource Scheduler - allows load balancing of virtual machines. Example: an email server; with DRS, move the email server on the fly to a
physical host where there is capacity.
It provides a complete logging facility

#Migrate Virtual Machine from ESX 3.5 -- Move the virtual machine from one physical host to another physical host

# Proven Reliability
Intel Xeon processors, IBM mainframes

KVM builds on this, above and beyond.
Virtualization makes all our servers work efficiently

#Infrastructure Simplification
IBM BladeCenter

Integrating server, networking and storage resources
- hardware usability
- packaging density
- Unified management
- Power/cooling savings

Virtualization Gives you Server Mobility
Combining a few applications on a single server for greater utilization
- Industry-standard design
- Price/performance
- Compatibility

The key thing from the virtualization standpoint is that it allows many applications on a single platform, with higher utilization

# Server Consolidation
Consolidating large numbers of underutilized servers for greatest TCO
- Performance
- Scalability
- Strong reliability features

#IBM BladeCenter
- Compatibility across chassis
- Comprehensive ecosystem
- Power management
- Two Redundant high-speed fabrics

# Two processor
- More memory DIMMS per processor than some competitors
- More I/O slots per processor than some competitors

#Processor Rack
Reliability
Scalability
Manageability

For any additional questions/concerns, please drop me a mail:
gyani.pillala@gmail.com