############# Solaris 10 ##############
### GRUB Concepts ###
1. BIOS (ensures hardware health) - CPU(s), memory, hard disk(s)
2. GRUB (presents a menu to the user and boots the default selection when the timer expires)
GRand Unified Bootloader - the BIOS reads the first sector of the boot disk, which loads GRUB.
If we have multiple hard disks, the BIOS setting specifies which hard disk to boot from.
3. OS kernel (Solaris/Linux/Windows/etc.)
4. sched - PID 0 (parent process), init - PID 1
5. INIT
6. SMF (Service Management Facility - loads all the dependencies of the services)
7. Operational system
Control is passed from: GRUB ==> KERNEL ==> sched ==> INIT
Solaris single-user mode
Passing '-s' to the kernel boots the system into single-user mode.
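A minimal sketch of how that looks in a Solaris 10 x86 GRUB menu.lst entry (paths are the usual defaults, but treat them as assumptions to verify against your own /boot/grub/menu.lst):
title Solaris 10 (single user)
    kernel /platform/i86pc/multiboot -s
    module /platform/i86pc/boot_archive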
### INIT in Detail ###
/usr/sbin/init is represented in the process table as PID 1
INIT's config file is : /etc/inittab
INIT supports entering and exiting various runlevels:
Supported runlevels: 0 through 6, plus S
0 - shutdown/halt
1 - single-user mode - no networking support is provided
2 - multi-user support without NFS
3 - multi-user support with NFS - the default runlevel
4 - unused by Solaris itself (we can define how the system boots here); used by ISVs and application vendors (payroll, databases)
5 - shutdown and power off (if the hardware supports it)
6 - reboot
S - single-user mode - no networking support is provided
who -r ( find out the current runlevel )
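Illustrative 'who -r' output (date and state fields will differ on your system):
# who -r
   .       run-level 3  Jul  3 09:15     3      0  S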
####SMF - SERVICE MANAGEMENT FACILITY ####
Features include:
- Provides service management via a service configuration database ( a list of services and their
various supported methods: stop/start/enable/disable/restart )
- Provides legacy RC script support ( old init scripts continue to work )
- Facilitates service dependencies ( if A depends on B, A cannot start without B )
- Permits automatic restarts of failed and/or stopped services
- Provides service status information (online/offline, dependencies )
- Causes each defined service to log individually to: /var/svc/log
- Defines a Fault Management Resource Identifier (FMRI)
- Can start independent (non-dependent) services in parallel
- SMF supports multiple instances of services
Service States:
1. online/offline (running/stopped)
2. legacy_run/maintenance
3. uninitialized/degraded/disabled (uninitialized typically appears only momentarily during startup)
3 Primary SMF Utilities:
svcs - lists services and provides additional info ( Report Services status )
svcadm - permits interaction with services, including state transitions (Manipulate service instances )
svccfg - permits interaction with service configuration database
svcs ( lists all enabled services ) or svcs -a ( lists ALL services, including disabled ones and legacy RC scripts )
svcadm enable SERVICE
svcadm disable SERVICE
svcs | grep ssh
svcadm restart svc:/network/ssh:default
svcadm enable svc:/network/ssh:default
svcadm disable svc:/network/ssh:default
svc.startd - is the default service restarter/manager
inetadm - is the default, delegated service restarter for INETD daemons
FMRIs provide categories for services:
1. network
2. milestone
Syntax for 'svcs'
svcs -d FMRI - returns required services for FMRI
svcs -D svc:/network/smtp:sendmail - returns services dependent upon Sendmail
svcs -l FMRI - returns verbose dependencies
svcs -l smtp - FMRIs can usually be referenced by their unique parts
i.e. svcs -l sendmail || svcs -l smtp
SMF's default Log location for services is : /var/svc/log/Close_to_FMRI.log
i.e. Sendmail: /var/svc/log/network-smtp:sendmail.log
svcs -x [daemon_name] ( explains services that are in the maintenance state or have other problems )
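For reference, 'svcs -l ssh' prints a block like the following (illustrative and trimmed; exact fields vary by release):
fmri         svc:/network/ssh:default
name         SSH server
enabled      true
state        online
logfile      /var/svc/log/network-ssh:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/none svc:/network/loopback (online)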
### Service management with 'svcadm' ###
Disable service:
svcadm disable FMRI
svcadm disable -s FMRI - disables and waits until the instance has stopped (synchronous)
svcadm disable -s sendmail - stops the default instance, waiting for completion
svcadm disable -t FMRI - temporarily disables (does not persist across reboot)
Note: svcadm really delegates service management to default restarter. i.e. svc.startd
svcadm enable FMRI - enables FMRI persistently, across ALL reboots
svcadm enable -t FMRI - enables FMRI temporarily (until the next reboot)
svcadm enable -r FMRI - enables FMRI and ALL of its dependencies
svcadm enable -s FMRI - enables and waits until the instance is online (synchronous)
svcadm -v refresh smtp
svcs -l smtp
svcadm -v disable -t sendmail
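A hedged sketch of a common workflow - diagnosing and clearing a service stuck in maintenance (the apache2 FMRI and log name are examples; substitute your own service):
svcs -x apache2                              # explain why the service is down
tail /var/svc/log/network-http:apache2.log   # inspect the service's own log
svcadm clear apache2                         # clear the maintenance state
svcs -l apache2                              # confirm the state is back to online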
### Service management with 'inetadm' ###
inetadm controls INETD services
Note: INETD is a super-server which proxies connections to servers
client -> INETD -> telnet
Note: INETD services are traditionally defined in /etc/inetd.conf
Note: inetadm permits control of key/value or name/value pairs of services
inetadm -d FMRI - disables service
inetadm -e FMRI - enables service
client1 -> INETD -> TFTPD
client2 -> INETD -> TFTPD
inetadm -M key=value - effects changes globally, for ALL INETD-managed services (e.g. a default bind_addr)
inetadm -m FMRI key=value - effects changes at service scope, e.g. bind_addr for tftpd only (telnet unaffected)
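A hedged sketch of the key/value interface (the TFTP FMRI and values are examples; list your real FMRIs with plain 'inetadm' first):
inetadm                                     # list all inetd-managed services
inetadm -l svc:/network/tftp/udp6:default   # list one service's name/value pairs
inetadm -m svc:/network/tftp/udp6:default bind_addr="192.168.1.10"   # service scope
inetadm -M tcp_wrappers=TRUE                # global scope, all inetd services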
### Package Management ###
1. installer - shell script that runs with text/GUI interfaces
installer is located on Solaris CDs/DVDs
Note: ./installer -nodisplay - runs installer without GUI
Task: access installer script for Software Companion DVD via NFS
Note: the installer script facilitates installation of programs that were not selected during
the initial installation of the OS
Set up NFS on the installation server to share the DVD contents:
share -F nfs -o ro,anon=0 /export/home/SolarisCompanion
Mount remote server's NFS share-point for Companion DVD tree
mount -F nfs linuxcbtsun2:/export/home/SolarisCompanion /export/home/SolarisCompanion
Execute 'installer' shell script using the following: './installer'
Note: You may only install packages from the same installation category chosen during initial
installation of the OS.
### prodreg - application to manage (add/remove) programs on Solaris box ##
Note: prodreg also permits launching 'installer' scripts from ancillary Solaris CDs/DVDs/NFS/etc. locations
## Shell-based package management tools ##
pkginfo | pkgadd | pkgrm | pkgchk
## pkginfo ##
1. pkginfo - dumps to STDOUT (screen/terminal) all installed packages
pkginfo returns - category,package name, description
2. pkginfo -l [package_name] - detailed package listing
pkginfo -l SFWblue - returns full information about bluefish
3. pkginfo -x - returns an extracted list of packages:
abbreviation, name, architecture, version
4. pkginfo -q - queries for a package and returns exit status
5. pkginfo -i | -p - returns fully/partially installed packages, respectively
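Because 'pkginfo -q' prints nothing, it is meant for scripting; a hedged one-liner (SMCnano is an example package name):
pkginfo -q SMCnano && echo "installed" || echo "not installed"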
## pkgchk ##
1. pkgchk -v SFWblue - checks/lists files included with SFWblue package
pkgchk -v SFWblue SFWjoe - multiple packages are space-separated
2. pkgchk -lp /opt/sfw/bin/bluefish - returns the package that 'bluefish' belongs to
3. pkgchk -ap /opt/sfw/bin/bluefish - restores permissions to values in database
4. pkgchk -cp /opt/sfw/bin/bluefish - audits content - based on sum information
5. pkgchk -q - returns a usable exit status
### pkgadd ###
Common Solaris package sources:
1. www.sunfreeware.com
2. www.blastwave.org
3. www.sun.com
1. pkgadd -d package_name - installs from the current directory
pkgadd -d nano...
pkginfo -x | grep nano
pkgchk -v SMCnano
Note: decompress downloaded packages prior to installation
2. pkgadd (no -d) - installs packages already placed in the spool directory (/var/spool/pkg)
3. packages may also be fetched (e.g. with curl ... -s) into /var/spool/pkg and installed from there
4. pkgadd -d URL - i.e. pkgadd -d http://location/package_name.pkg
Note: when installing via HTTP, packages MUST be in package stream format. Use 'pkgtrans' to
transform packages to package stream format.
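A hedged example of that transformation (the SMCnano name and paths are illustrative):
pkgtrans /var/spool/pkg /tmp/SMCnano.pkg SMCnano   # filesystem format -> stream file
pkgadd -d http://server/SMCnano.pkg                # the stream file is now installable over HTTP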
## pkgrm ##
1. pkgrm package_name
### Zone configuration ###
- A zone provides a complete runtime environment for an application
- Zones let us isolate environments, so each application can be secured
separately from the others
- Zones are containers: they provide virtualization and program isolation
Features:
1. Virtualization - i.e. VMware
2. Solaris Zones can host only instances of Solaris, not other OSs.
3. Limit of 8192 zones per Solaris host
4. The primary (global) zone has access to ALL zones
5. Non-global zones do NOT have access to other non-global zones
6. By default, non-global zones derive their packages from the global zone
7. Program isolation - zone1(Apache), zone2(MySQL)
8. Provides 'Z' commands to manage zones: zlogin, zonename, zoneadm, zonecfg
9. The global (management) zone is a container for all non-global zones
### Features of GLOBAL zone ###
1. Solaris ALWAYS boots (cold/warm) to the global zone
2. Knows about ALL hardware devices attached to the system
3. Knows about ALL non-global zones
### Features of NON-GLOBAL zones ###
1. Installed at a location on the filesystem of the GLOBAL zone, the 'zone root path': /export/home/zones/{zone1,zone2,...}
2. Share packages with GLOBAL zone
3. Manage their own distinct hostname and table files
4. Cannot communicate with other non-global zones by default; the NIC must be used, which means using the standard network API (TCP)
5. The GLOBAL zone admin can delegate non-global zone administration
i.e. 'which zonename' (locate the zonename utility)
Key Utilities
zlogin, zoneadm, zonecfg, zonename, zsched ( the per-zone kernel scheduling process )
## Zone Configuration ###
Use: zonecfg - to configure zones
Note: zonecfg can be run interactively, non-interactively, or in command-file mode
Requirements for non-global zones:
1. hostname
2. zone root path. i.e. /export/home/zones/testzone1
3. IP address - bound to logical or physical interface
Zone types:
1. Sparse root zones - share key files with the global zone
2. Whole root zones - require more storage
Steps for configuring non-global zone:
1. mkdir /export/home/zones/testzone1 && chmod 700 /export/home/zones/testzone1
2. zonecfg -z testzone1
3. create
4. set zonepath=/export/home/zones/testzone1 - sets root of zone
5. add net; set address=192.168.1.60
6. set physical=e1000g0; end
7. (optional) set autoboot=true - testzone1 will be started when system boots
zonecfg:testzone1> info
8. (optional) add attr; set name=comment; set type=string; set value="TestZone1"
9. verify - checks the zone configuration for errors
10. commit - commits the changes
11. Zone installation - 'zoneadm -z testzone1 install' places 'testzone1' into the 'installed'
state; NOT yet ready for production
12. zoneadm -z testzone1 boot -- boots the zone, changing its state
Quit the zone console with: ~.
zoneadm list -iv ( lists installed zones, verbose; use -cv to also see configured-but-not-installed zones )
zonecfg -z testzone1 info
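Putting the steps above together, a hedged sketch of one complete session (zone name, path, IP address, and NIC are example values):
mkdir -p /export/home/zones/testzone1 && chmod 700 /export/home/zones/testzone1
zonecfg -z testzone1
zonecfg:testzone1> create
zonecfg:testzone1> set zonepath=/export/home/zones/testzone1
zonecfg:testzone1> add net
zonecfg:testzone1:net> set address=192.168.1.60
zonecfg:testzone1:net> set physical=e1000g0
zonecfg:testzone1:net> end
zonecfg:testzone1> verify
zonecfg:testzone1> commit
zonecfg:testzone1> exit
zoneadm -z testzone1 install     # places the zone in the 'installed' state
zoneadm -z testzone1 boot        # boots the zone, changing its state to 'running'
zlogin -C testzone1              # attach to the console to answer first-boot questions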
### zlogin - is used to log in to zones ###
Note: each non-global zone maintains a console. Use 'zlogin -C zonename' after
installing zone to complete zone configuration
Note: zlogin permits login to a non-global zone via the following:
1. Interactive - i.e. zlogin -l username zonename
2. Non-interactive - zlogin options command
3. Console mode - zlogin -C zonename
4. Safe mode - zlogin -S zonename
zoneadm -z testzone1 reboot - reboots the zone
zlogin testzone1 shutdown
############################ ZFS ######################################
Zettabyte File System (ZFS)
CLI,GUI,Mirroring,Raid-z,snapshots,clones
Features:
1. Supports very large storage space - it can address 256 quadrillion zettabytes
( terabytes -> petabytes -> exabytes -> zettabytes; 1024 exabytes = 1 zettabyte )
2. A file system designed for the future
3. RAID-0/1 (mirroring, striping) & RAID-Z ( RAID-5 with enhancements; requires at least 2 virtual devices )
4. Snapshots - a great feature - read-only copies of file systems or volumes; we can take a snapshot of the current file system and later revert to it
5. Uses storage pools to manage storage - a pool aggregates virtual devices; since file systems are attached to pools, they can grow dynamically
6. File systems attached to a pool grow dynamically as storage is added
7. We can attach a file system without interrupting any transactions
8. File systems may span multiple physical disks
9. ZFS is transactional (less likely to corrupt data)
E.g. with a traditional file system, a 100 MB write that is 80% complete when a failure hits leaves corrupt data.
In ZFS, either the whole 100 MB is written and committed or nothing is written at all, so there is far less
chance of data corruption - an important property for mission-critical data.
10. Pools & file systems are auto-mounted - no need to maintain /etc/vfstab (the virtual file system table). Pool names must be unique on a host; file system names must be unique within a pool.
11. Supports file system hierarchies: /pool1/{home (5GB) ,var (10 GB) ,etc (15 gb)}
12. Supports reservation of storage, e.g. for /pool1/{home,var}: we put a reservation on home,
ensuring home always gets its 10 GB - in short, a file system is always guaranteed its reserved
size (see the sketch after this list)
13. Provides a secure web-based management tool - https://localhost:6789/zfs
Note: the points above are the compelling reasons to adopt ZFS.
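A hedged sketch of the hierarchy and reservation ideas above (pool, device, and file system names are examples):
zpool create pool1 c0t1d0            # pool backed by one disk
zfs create pool1/home                # file systems form a hierarchy under the pool
zfs create pool1/var
zfs set reservation=10G pool1/home   # pool1/home is now always guaranteed 10 GB
zfs get -r reservation pool1         # confirm the reservations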
############# ZFS CLI ###########
Command Line Interface
which zpool
zpool list - lists known pools
zpool create pool_name ( names may contain alphanumerics and the characters _ - : . )
Pool Name Constraints: Reserved name (Do Not Use These Names For your Pool Names):
1. mirror
2. raidz
zpool create pool_name device_name1 device_name2 device_name3 ... ( devices are space-separated )
Eg;
zpool create pool1 c0t1d0 ( or by full path: /dev/dsk/c0t1d0 )
Note: 'format' searches for disks - it will scan for the connected disks.
Eg:
- zpool create pool1 c0t1d0
- echo $?
- mount
- ls -l /pool1/
- zpool list
ZFS Pool Statuses:
1. ONLINE - available
2. DEGRADED - a device has failed, but the pool (e.g. a mirror) is still usable
3. FAULTED - inaccessible
4. OFFLINE - administratively offline (before removing a hard disk, take it offline first)
5. UNAVAILABLE
zfs list - returns ZFS dataset info
zfs mount - returns pools and mount points
zpool status - returns the virtual devices; probably the most important command to run after creating a pool
zpool status -v pool_name - verbose information about the pool
Note: ZFS requires a minimum of 128 MB virtual device to create a pool
zpool destroy pool1 - Destroys pool and associated file systems
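Given the 128 MB minimum noted above, file-backed virtual devices are a handy way to experiment without spare disks; a hedged sketch (paths are examples):
mkfile 130m /export/vdev1        # create a file just over the minimum size
zpool create testpool /export/vdev1
zpool status -v testpool         # confirm the virtual device is ONLINE
zpool destroy testpool           # clean up the experiment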
############## Create file systems within pool1 ##############
zfs create pool1/home - creates file system named 'home' in pool1
home is subset of pool1
Note: the default action of 'zfs create pool1/home' assigns all storage available to 'pool1' to 'pool1/home'
### Set Quota on existing file System ####
- zfs set quota=10G pool1/home
- zfs list
# Create user-based file system beneath pool1/home ##
- zfs create pool1/home/vxadmin ( We can specify the size as well)
- zfs list
Note: ZFS inherits properties from immediate ancestor
- zfs set quota=2G pool1/home/gyani
- zfs get -r quota pool1
- zfs get -r compression pool1 - returns compression property for file systems associated with 'pool1'
Note: by default, compression is off for ZFS file systems
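A hedged example of enabling it for one file system (names continue the examples above):
zfs set compression=on pool1/home   # newly written data is compressed from now on
zfs get -r compression pool1        # verify which file systems inherit the setting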
### Rename File System ########
zfs rename pool1/home/unixcbt pool1/home/unixcbt2
## Extending dynamically, Pool #####
- format - Search out the available disks
- zpool add pool1 c0t2d0 (device_name) [ Able to address dynamically added storage ]
- zfs list
- zpool status
### ZFS WEB GUI #######
ls -ltr /usr/sbin/smcwebserver
netstat -an -P tcp | grep 6789
Note: by default, ZFS file systems are NOT shared over NFS, for security reasons
'legacy' file system - means the mount is managed via /etc/vfstab
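The property that controls NFS sharing is 'sharenfs' (off by default); a hedged example:
zfs set sharenfs=on pool1/home   # share this file system over NFS
zfs get sharenfs pool1/home      # confirm; the default is off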
### ZFS Redundancy/Replication #####
1. Mirroring - RAID-1
2. RAID-5 - RAID-Z
Virtual Devices:
#Mirroring
- zpool create poolmirror1 mirror c0t1d0 c0t2d0
- zfs create poolmirror1/home
- zfs set quota=10G poolmirror1/home
# RaidZ
2 - minimum number of disks required
format -> 1 -> partition -> print
#Create the pool
/usr/sbin/zpool create -f poolraidz1 raidz c0t1d0 c0t2d0
- zfs list
# Change a mount point
/usr/sbin/zfs set mountpoint=/poolraidz2 poolraidz1
zfs set quota=10G poolraidz1/home
# Change a mount point back to inherited
/usr/sbin/zfs inherit mountpoint poolraidz1
### ZFS Snapshots/Clones ######
Snapshots allow us to create a read-only copy of a file system or volume.
Commercial products such as NetApp and EMC offer similar SAN/filer capabilities.
Features:
1. Read-only copies of volumes or file systems
2. Use no additional space, initially
- zfs list -t snapshot - returns available snapshots
#snap shot syntax
- zfs snapshot poolraidz1/home@homesnap1
- zfs list -t snapshot
- snapshots are stored inside a hidden directory:
/poolraidz1/home/.zfs/snapshot/homesnap1
#Destroy the snapshot
- zfs destroy poolraidz1/home@homesnap1
# Rename Snapshot
- zfs rename poolraidz1/home@homesnap3 poolraidz1/home@homesnap20060703
- zfs list -t snapshot
# Snapshot rollback (the file system is unmounted and remounted)
- zfs rollback -f poolraidz1/home@homesnap20060703
### Clones
Clones are writeable copies.
Features:
1. Writable file systems or volumes
2. Linked to a snapshot... We cannot create a clone without snapshot
3. Clone can be stored anywhere in ZFS hierarchy
###ZFS Clone
- zfs clone poolraidz1/home@homesnap20060703 poolraidz1/homeclone1
Note: clones inherit attributes from their ancestor, whereas snapshots do not inherit anything.
Note: a clone is writable, whereas a snapshot is not.
Note: if we delete the snapshot, the clone is deleted as well - a clone depends directly on its snapshot.
#### SSH Port Forwarding ###
Facilitates Local & Remote Port forwarding
1. Local - forwards a port from the local system to a remote system
2. Remote - forwards a remote port back to our local host
LOCAL:
Flow: client -> local port (2323) -> SSH tunnel -> remote host (port 2323)
Syntax:
ssh -L 2323:DestinationHost:2323 SSHD_Router_Server
Note: Port Forwarding in Solaris 10 supports ONLY TCP traffic
ssh -L 2323:linuxcbtmedial:80
Note: Ensure that local port is free, and destination port is listening
Note: Default port forwarding provides connectivity ONLY to localhost
Cross-check: telnet localhost 2323 ( send a request to print the web page; ^] is the telnet escape )
netstat -an -P tcp | grep 2323
rcapache2 start
rcsshd restart
svcs -l apache2 (service listing) - maintenance mode
svcadm clear apache2 (service adm)
svcs -l apache2 - online
#### Remote Desktop
rdesktop -f -a 16 IP ( -f = fullscreen, -a 16 = 16-bit color depth )
#### Remote Port Forwarding ####
Note: Remote port forwarding instructs remote server's SSHD to bind to a port that becomes available to the remote system's users
ssh -R 2424:127.0.0.1:80 user@IP
ssh -R 2424:localhost:80 user@IP
### Share locally and remotely forwarded ports ###
ssh -g -L 2323:linuxgoogle:80 linuxgoogle ( makes the forwarded port available to the entire subnet )
ssh -g -R 2424:localhost:80 linuxgoogle
##Remote forwarded port
ssh gyani@google.com -R2245:127.0.0.1:22
## Local forwarded port
ssh gyani@google.com -L2245:127.0.0.1:2245
## Jump into the real machine through the forwarded port
ssh gyani@127.0.0.1 -p 2245
## Load Balancing ###
Load balancing can be done with a multilayer switch or through DNS.
Load balancing divides the work a computer must do between two or more computers, so that more work
gets done in the same amount of time and, in general, all users get served faster. It can be
implemented with hardware, software, or a combination of both, and is typically the main reason for
computer server clustering.
One DNS approach: route each request in turn to a different server host address in a domain name
system (DNS) table, round-robin fashion.
- Since load balancing requires multiple servers, it is usually combined with failover and backup services.
- In some approaches, the servers are distributed over different geographic locations.
The distribution of load among the servers is known as load balancing; it applies to all types of servers (application servers, database servers).