
Thursday, June 17, 2010

The Open Source Toolkit for Cloud Computing

What is OpenNebula?
OpenNebula is an open source toolkit for building clouds.
The latest release has built-in support for the KVM and Xen hypervisors and for VMware Server.

How does OpenNebula work?
OpenNebula is a distributed application consisting of two components.
The first component, referred to as the OpenNebula front-end, runs on the VX64 server:
sudo apt-get install opennebula
The second component, referred to as the OpenNebula node, should be installed on every host that is part of the compute cluster. This package prepares the machine to act as a node in an OpenNebula cloud. It in turn installs and configures the following dependencies:
1) KVM
2) libvirt
3) oneadmin user creation
4) ruby
sudo apt-get install opennebula-node
What are the components of the OpenNebula front-end?
The OpenNebula front-end spawns the following processes when it starts. ( Note that the front-end should always be started as the oneadmin user, using the command sudo -u oneadmin one start )
1) OpenNebula Daemon ( oned ) - Responsible for handling all incoming requests ( either from CLI or from API ). Talks to other processes whenever required.
2) Scheduler ( mm_sched ) - Performs matchmaking to find a suitable host ( amongst the hosts in the compute cluster ) for bringing up virtual machines.
3) Information Manager ( one_im_ssh.rb ) - Collects resource availability/utilization information for hosts/VMs respectively. ( Resources include CPU and Memory)
4) Transfer Manager ( one_tm.rb ) - Responsible for image management ( cloning images, deleting images, etc. )
5) Virtual Machine Manager ( one_vmm_kvm.rb )- Acts as interface to the underlying Hypervisor. ( All operations to be performed on Virtual Machines go through this interface )
6) Hook Manager ( one_hm.rb ) - Responsible for executing Virtual Machine Hooks. ( Hooks are programs which are automatically triggered on VM state changes. They must be configured prior to starting front-end )
OpenNebula front-end can be configured to a great extent by modifying the contents of file /etc/one/oned.conf. Consider a sample configuration below:

##############################
HOST_MONITORING_INTERVAL=10 # Used by Information manager to decide the frequency at which resource availability details have to be collected for hosts
VM_POLLING_INTERVAL=10 # Used by Information manager to decide the frequency at which resource utilization details have to be collected for VMs
VM_DIR=/mnt/onenfs/ # Should be shared across all hosts in compute cluster. Contains Disk Images required for booting VMs
PORT=2633 # All supported API calls are converted to XML-RPC calls. front-end runs an XML-RPC Server on this port to handle these calls
DEBUG_LEVEL=3 # DEBUG_LEVEL: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
NETWORK_SIZE = 254 # Default size for Virtual Networks ( applicable while using onevnet )
MAC_PREFIX = "00:03" # Default MAC prefix to use while generating MAC Address from IP Address ( applicable while using onevnet )
# The following configuration supports KVM hypervisor. Note that the executables one_im_ssh one_vmm_kvm one_tm and one_hm can be found in /usr/lib/one/mads/
IM_MAD = [
name = "im_kvm",
executable = "one_im_ssh",
arguments = "im_kvm/im_kvm.conf" ]
VM_MAD = [
name = "vmm_kvm",
executable = "one_vmm_kvm",
default = "vmm_kvm/vmm_kvm.conf",
type = "kvm" ]
TM_MAD = [
name = "tm_nfs",
executable = "one_tm",
arguments = "tm_nfs/tm_nfs.conf" ]
HM_MAD = [
executable = "one_hm" ]
#################################
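The NETWORK_SIZE and MAC_PREFIX defaults above apply to virtual networks created with onevnet. As a sketch, a minimal ranged network template ( the name, bridge and address range here are examples, not values taken from this setup ) could look like:

```
NAME            = "Small network"   # label used to refer to this network
TYPE            = RANGED            # a contiguous range of IPs
BRIDGE          = br0               # bridge the VMs' NICs attach to ( example )
NETWORK_ADDRESS = 192.168.0.0       # example range
NETWORK_SIZE    = 254               # matches the default set above
```

Submit it with sudo -u oneadmin onevnet create <template_file> and list the defined networks with onevnet list.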
Working with the OpenNebula CLI ( the CLI is available only on the front-end )
1) Adding a new host to the compute cluster
onehost create <hostname> <im_mad> <vmm_mad> <tm_mad>
Note that im_mad, vmm_mad and tm_mad in our case should be im_kvm, vmm_kvm and tm_nfs respectively, as we have configured them in /etc/one/oned.conf.
Also note that Information manager needs to collect resource availability ( CPU and Memory ) information for the host we have added. This requires:

The oneadmin user on the front-end should be able to ssh to the host without entering a password ( test this using sudo -u oneadmin ssh oneadmin@<host> on the front-end ).
In order for this to work, copy the contents of /var/lib/one/.ssh/id_rsa.pub on the front-end into /var/lib/one/.ssh/authorized_keys on the host.
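The key exchange can be done with ssh-copy-id, sketched below ( run on the front-end; the host IP is the one from the example below, adjust for your node ):

```shell
# Append oneadmin's public key to the node's authorized_keys
sudo -u oneadmin ssh-copy-id -i /var/lib/one/.ssh/id_rsa.pub oneadmin@192.168.155.127

# Verify: this must succeed without prompting for a password
sudo -u oneadmin ssh oneadmin@192.168.155.127 hostname
```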
Type onehost list to check the status:
Notice the value of the STAT attribute. If it has the value 'on', the host has been successfully added to the compute cluster:
  ID NAME            RVM TCPU FCPU ACPU    TMEM    FMEM STAT
   0 192.168.155.127   0  200  198  198 1800340 1341620   on
Look into /var/log/one/oned.log on the front-end for debugging.
Once a host has been successfully added, use onehost disable <host_id> and onehost enable <host_id> to toggle its status ( e.g. onehost disable 0 for the host above ).

2) Submitting a new virtual machine job
In order to provision a virtual machine in the compute cluster, we need to construct a template and submit it using:
onevm create <template_file>
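A minimal KVM template, as a sketch ( the image filename under VM_DIR and the network name are placeholders, not values taken from this setup ):

```
NAME   = sample-vm                  # name shown in onevm list ( placeholder )
CPU    = 0.5                        # fraction of a physical CPU
MEMORY = 256                        # in MB

DISK   = [ source   = "/mnt/onenfs/images/sample.img",   # placeholder image under VM_DIR
           target   = "hda",
           readonly = "no" ]

NIC    = [ NETWORK = "Small network" ]                    # a network defined via onevnet
```

Save this to a file, e.g. sample.one, then run sudo -u oneadmin onevm create sample.one and track the VM's state with onevm list.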