Friday, June 4, 2010

Bash Shell Scripting with Examples

### Basic Bash ###

BASH - the Bourne Again Shell
The default shell for Linux/Unix
e.g. other shells: C shell (csh), Korn shell (ksh)
The syntax is straightforward, and it is essential and ideal for automation...
Shell scripting gets you more information, much quicker
The interactive environment is governed by the shell, which incorporates a lot of functionality
## username @ nameofthesystem currentworkingdirectory
# = root prompt, $ = normal user prompt
The shell environment interprets the commands typed by the user
The shell will attempt to execute them
interactive - user commands are executed as they are typed
non-interactive - a shell script is run by bash
cd - change dir
The less you use root-level privileges, the better, in terms of security
## /etc/profile  (global configuration file) for bash
Sets the system-wide environment for login shells
## Using backticks we can execute a command inside a shell script
# HISTSIZE=1000 - keeps the history file up to 1000 commands
## bash reads /etc/profile which contains system wide environmental variables
# /etc/bashrc  ( local config file counterpart ),
which contains aliases and functions
functions, logic, and aliases should go into the bashrc file
# PS1 ( the prompt variable ) is available in the bashrc file
bashrc and profile are the two controlling configuration files for bash
# /etc/skel - all are hidden skeleton files, copied into a new user's home directory
pretty self explanatory !!!
## Typically, when a user logs in,
system-level files and user-scope files are read...
Bash will read all of these files:
/etc/profile /etc/bashrc ~/.bash_profile ~/.bashrc
rpm -qa | grep bash (redhat system)
dpkg -l | grep bash  (debian system)
So far we have had a basic introduction to bash scripting...
# Variables can have global or user scope
echo $variable ( $ prefixes a variable )
# export  (is a must) for the variable to be visible in child shells ... 
Variable names ARE case-sensitive; using uppercase names is just a convention...
# ssh -l test localhost ( log in as the user 'test' )
It's good practice to export a variable when it is created...
# export variablename 
# set ( type set ) - lists the variables set in the current shell 
hunting for variables, eg:- set | grep variable
# printenv also shows the various environment variables available in the system
echo $variable  ( prints the contents of the variable )
changing a variable affects only the local scope unless it is exported...
It is that simple... 
# /etc/skel ( files/variables placed here are copied into a new user's profile when the user is added )
#grep is a parser
eg:- grep string filename
# exit status - in Unix terminology...
0 means good; 1 or some other non-zero number means "bad"
" the opposite of boolean " is the standard..
the exit value is stored in the variable $?
#echo $?
test 1 -eq 1 
echo $?
test 1 -le 3
test 2 -ge 34
test 3 -ne 3
These are the typical ways you can test numbers - 6 different ways:
-eq 'equal to' -le 'less than or equal to' -ge 'greater than or equal to' -ne 'not equal to' -lt 'less than' -gt 'greater than'
There are ways to...
string comparison...
test string1 = string2 ( comparison is done with a single equals sign )
not equal (!=)
since in Linux/Unix everything is a file
file newer or older
#test file1 -nt file2
#test file1 -ot file2
ls /dev/cua1 (character device file ... )
#test -b filename (block device)
#test -c filename (character device )
#test -e filename/dir ( file or dir exist) ( In linux/unix directory is also treated like a special kind of file )
#test -f filename (file only regular file)
[ -f filename ] (condition in the bash scripting )
s - socket 
eg:- ls -l /var/lib/mysql/mysql.sock
The first column of the listing contains the type of the file 
man test
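
A minimal sketch tying the file tests above together ( the path /tmp/demo is hypothetical ):
FILE="/tmp/demo"
if [ -e "$FILE" ]
then
    echo "$FILE exists"
    if [ -f "$FILE" ]; then echo "regular file"; fi
    if [ -d "$FILE" ]; then echo "directory"; fi
else
    echo "$FILE does not exist"
fi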

##  Shell Expansions ##

Directory is a special type of file
Brace Expansions
eg: touch test{1,2,3}file ( prefix & suffix)
     mkdir test{1,2,3}
     mkdir test{1,2,3}{2,3}  - each item on the left combines with each on the right
      touch ~/file{1,2}
In a nutshell, brace expansion lets you manipulate many files much quicker

## Alias ##

An alias lets a command be reinvoked under any name 
It can be applied at a global level or a local level
Global - /etc/profile
local - .bashrc
How would we go about doing it?
Eg: alias ls='ls -l'
unalias ls
unalias deletes an alias
Aliases come in handy with long commands...
Aliases can be used in shell scripts as well.
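
A small sketch of making an alias persistent across sessions ( the alias name ll is just an example ):
echo "alias ll='ls -l'" >> ~/.bashrc    # append the alias to the local config file
source ~/.bashrc                        # re-read the config so the alias takes effect
ll                                      # now runs 'ls -l'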


## Exit Status ##

echo $?
Every command returns an exit status in the Unix/Linux environment
Error levels can be 0 to 255
0 - Success: the command ran successfully, !0 - Fail
127 - reserved: command does not exist
1 - general error while executing commands; the command exists but the arguments are wrong
126 - permission problem executing a shell script
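
A quick sketch of observing these codes ( the failing command name is made up ):
ls /etc > /dev/null
echo $?               # 0 - success
nosuchcommand 2> /dev/null
echo $?               # 127 - command not found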

## Command Chaining ##

commands delimited with semicolons run in sequence
Eg: command1;command2;
&& - the left command must succeed in order for the right to run
|| - OR-ing; the left need not be successful - the right runs only if the left fails
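
A short sketch of both operators ( the directory name is arbitrary ):
mkdir /tmp/work && cd /tmp/work     ( cd runs only if mkdir succeeded )
grep nosuchuser /etc/passwd > /dev/null || echo "no such user"   ( echo runs only if grep failed )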

## Grep ##
Features
1. The ability to parse lines based on text and/or RegExes
2. Post-processor Note: Grep is a line processor
3. Searches case-sensitively, by default
4. Grep doesn't return the distinct field, whereas it returns entire line
5. Searches for the text anywhere on the line
1. grep 'linux' grep1.txt
2. grep -i 'linux' grep1.txt - case-insensitively search
3. grep '^linux' grep1.txt - uses '^' anchor to anchor searches at the beginning of lines
4. grep -i '^linux' grep1.txt
5. grep -i 'linux$' grep1.txt - uses '$' anchor to anchor searches at the end of lines
Note: Anchors are RegEx characters (meta-characters). They're used to match at the beginning and end of lines
6. grep '[0-9]' grep1.txt  - returns lines containing at least 1 number
7. grep '[a-z]' grep1.txt
8. rpm -qa | grep grep - searches the package database for programs named 'grep'
9. rpm -qa | grep -i xorg | wc -l - returns the number of packages with 'xorg' in their names
10. grep sshd messages
11. grep -v sshd messages - performs an inverted search ( all but 'sshd' entries will be returned)
12. grep -v sshd messages | grep -v gconfd 
13. grep -v sshd messages | grep -c -v gconfd - Count the matching lines
14. grep -C 2 sshd messages - returns 2 lines of context above and below each matching line

Note: Most, if not all, linux programs log linearly, which means one line after another, from the earliest entry to the most recent

Note: Use single or double quotes to specify RegExes 
Also, execute 'grep' using 'egrep' when RegExes are being used


## AWK ##
Awk Introduction and Printing Operations
Awk is a programming language which allows easy manipulation of structured data and the generation of formatted reports. 
Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions
Some of the key features of Awk are:
- Awk views a text file as records and fields
- Like common programming language, Awk has variables, conditionals and loops
- Awk has arithmetic and string operations
- Awk can generate formatted reports

Syntax: awk '/search pattern1/ {Actions}
             /search pattern2/ {Actions}' file
- Search pattern is a regular expression
- Actions - statements to be performed
- several patterns and actions are possible in Awk
- file - Input file
- Single quotes around the program prevent the shell from interpreting any of its special characters

##Awk Working Methodology ##
- Awk reads the input files one line at a time
- For each line, it tries the given patterns in the given order; if one matches, it performs the corresponding action.
- If no pattern matches, no action will be performed
- Either the search pattern or the action is optional, but not both.
- If the action is not given, the default action is to print all lines that match the given patterns
- Empty braces without any action do nothing; they won't perform the default printing operation
- Each statement in Actions should be delimited by a semicolon
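
The examples below assume a whitespace-delimited employee.txt roughly like this ( sample data made up for illustration ):
100  Thomas  Manager    Sales       $5,000
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000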
 
Example 1. Default behaviour of Awk
$ awk '{print;}' employee.txt
So the action applies to all the lines. It just prints each line

Example 2. Print the lines which matches with the pattern
$ awk '/Thomas/
       /Nisha/' employee.txt
Prints all lines that match 'Thomas' or 'Nisha'. It has two patterns; Awk accepts any number of patterns.

Example 3. Print only specific field
Awk has a number of built-in variables. For each record, i.e. line, it splits the record (delimited by whitespace characters by default) and stores the fields in the $n variables. If the line has 4 words, they will be stored in $1, $2, $3 and $4. $0 represents the whole line. NF is a built-in variable which represents the total number of fields in a record.
$ awk '{print $2,$5;}' employee.txt
$ awk '{ print $2,$NF;}' employee.txt

In the above example, $2 and $5 are printed, where $NF represents the last field. 

Example 4. Initialization and Final Action
Awk has two important patterns which are specified by the keyword called BEGIN and END
Syntax:
BEGIN { Actions }
{ ACTIONS } # Action for every line in the file
END { ACTIONS }

Actions specified in the BEGIN section will be executed before Awk starts reading lines from the input.
END actions will be performed after it completes reading and processing the lines from the input.
$ awk 'BEGIN { print "Name\tDesignation\tDepartment\tSalary";} { print $2,"\t",$3,"\t",$4,"\t",$NF;}
    END {print "Report Generated \n----------";}' employee.txt

Example 5 : Find the employees who have an employee id greater than 200
$awk '$1 > 200' employee.txt

Example 6 : Print the list of employees in Technology department
The department name is available as the fourth field, so we need to check if $4 matches the string "Technology"; if yes, print the line
$awk '$4 ~/Technology/' employee.txt
The ~ operator compares against regular expressions. If it matches, the default action, i.e. printing the whole line, is performed

Example 7. Print Number of employees in Technology department
Check if the department is Technology; if yes, in the Action, just increment the count variable, which was initialized to zero in the BEGIN section
$awk 'BEGIN { count=0;} $4 ~ /Technology/ { count++; } END { print "Number of employees in Technology Dept =", count; }' employee.txt
At the end of processing, just print the value of count, which gives you the number of employees in the Technology department

Features:
1. Field/Column processor
2. Supports egrep-compatible (POSIX) RegExes
3. Can return full lines like grep 
4. Awk runs 3 steps:
    a. BEGIN - optional
    b. Body, where the main actions(s) take place
     c. End - optional 
5. Multiple Body actions can be executed by separating them using semicolons. e.g. '{ print $1; print $2 }'
6. Awk auto-loops through the input stream, regardless of the source of the stream, e.g. STDIN, pipe, file

Usage:
1. awk '/optional_match/ { action }' file_name | pipe
2. awk '{ print $1 }' grep1.txt ( print column number 1 or field number 1)

Note: we can print a distinct column by indicating the column number
Note: Use single quotes with awk, to avoid shell interpolation of awk's variables

3. awk '{ print $1,$2 }' grep1.txt
Note: The default input and output field separator is whitespace
4. awk '/linux/ { print }' grep1.txt - this will print ALL lines containing 'linux'
5. awk '/LINUX[cC][bB][tT]/ { print }' grep1.txt o/p: LINUXCBT2,LINUXcbt3
6. awk '{ if ($2 ~/Linux/) print}' grep1.txt
7. awk '{ if ($2 ~/Linux/) print $2,$1}' grep1.txt
8. awk '{ if ($2 ~ /Linux/) print }' grep1.txt
9. awk '{ if ($2 ~ /8/) print }' /var/log/messages - this will print the entire line for log items from the 8th (day of the month)
10. awk '{ print $3 }' /var/log/messages | awk -F: '{ print $1 }'

## SED - Stream Editor ##
Features:
1. Facilitates automated text editing
2. Support RegExes ( POSIX )
3. Like Awk, supports script files, via the '-f' option
4. Supports input via: STDIN, pipe, file

Usage:
1. sed [options] 'instruction[s]' file[s]
2. sed -n '1p' grep1.txt - prints the first line of the file
3. sed -n '1,5p' grep1.txt - prints the first five lines of the file
4. sed -n '$p' grep1.txt - prints the last line of the file
5. sed -n '1,3!p' grep1.txt - prints ALL but lines 1-3
6. sed -n '/linux/p' grep1.txt - prints lines with 'linux'
7. sed -e '/^$/d' grep1.txt - deletes blank lines from the document 
8. sed -e '/^$/d' grep1.txt > sed1.txt - deletes blank lines from the document 'grep1.txt' and created 'sed1.txt'
9. sed -e '/linux/d' grep1.txt - deletes the lines which contain 'linux'
10. sed -ne 's/search/replace/p' sed1.txt 
11. sed -ne 's/linux/unix/p' sed1.txt - search and replace ( if you drop the 'n' switch it will return all the lines)
12. sed -i.bak -e 's/3/4/' sed1.txt - this backs up the original file (as sed1.txt.bak) and creates a new 'sed1.txt' with the modifications indicated in the command

Note: Generally, to create new files, use output redirection, instead of allowing sed to write to STDOUT

Note: Sed applies each instruction to each line

## I/O Redirection ##

standard in and standard out
cat < helloworld.txt ( Input Redirection )
grep hello < helloworld.txt ( find the keyword hello )
Now let's turn our attention to output redirection...
cat helloworld.txt > helloworld2.txt ( copying)
cat helloworld.txt >> hello ( Append) 
grep hello < helloworld.txt >> helloworld.txt

##  PIPING ##

The output of one command can be redirected to the input of another
Eg: The o/p of command A can be streamed to i/p of command B
Eg:ls -A | wc -l
Eg:ls -l | sort -r | wc -l ( word count /line count for a file)

Good one 
Eg:cat data 
o/p: firstname lastname

cat data | cut -f 2 -d  ' ' ( space )
cat data | cut -f 2 -d ','
cat data | cut -f 2 -d ':' | wc -l
ps -ax | grep httpd

## Command Subsitution ##
cat lspath.txt 
o/p /etc
ls -l `cat lspath.txt` ( This is called substitution )
ls -l `cat lspath.txt` | wc -l

etcdir1=$(ls -l /etc)
etcdir=`ls -l /etc`
There are two formats for command substitution:
1. backtick  xyz=`ls -l`
2. dollar    xyz=$(ls -l)
See how simple shell scripting can be if you use command substitution...
tempcount=`ls -A | wc -l `
echo $tempcount - it can be used in a condition as well... if the count is -gt 10, do something...
netstat -ant | grep 443
sslstatus=`netstat -ant | grep 443 | wc -l `
echo $sslstatus
Another way...
cat list.txt
dean
davis

filelist=$(<list.txt)  ( reads the file contents into the variable )
echo $filelist
Count No. of files & Dir in a Directory
temp=`ls -A | wc -l` ( ls -A includes hidden files but excludes . and .. )
The count can be dynamic as well... It will be very useful...
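
A small sketch of using such a count in a condition, per the note above ( the threshold of 10 is arbitrary ):
tempcount=`ls -A | wc -l`
if [ "$tempcount" -gt 10 ]
then
    echo More than 10 entries in this directory
fi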

## Quoting Nuances ##

Some characters, like $, \ and ?, are reserved by the shell ( e.g. $ is reserved for expanding variables )
You need to know when to use each quoting mechanism; there are various quoting mechanisms
Two type of Quoting 
1. Strong Quoting
2. Relaxed Quoting 
Bash cross Reference the variables - $val
Official Escape Character "\"
Eg: echo I love money \$50
Eg: mkdir testing\ directory\ with\ a\ space ( Escape the next character "Space")

Double Quotes " " - Relaxed  ( also called conventional quotes; variables are still expanded inside )
Eg: echo "This a test"
Eg: echo "This a test $?" o/p This a test 0

Single Quotes ' ' ( Every thing is preserved... As it is) 
Note: It doesn't perform any parsing at all... Remember, a space is recognized as a character
Note: Everything is escaped in Single Quotes

Eg: echo 'this is a test $8' o/p this is a test $8

Eg: echo "this is a test $ \\"   o/p: this is a test $ \
Eg: echo "this is a test $ \\\\"  (1st Escapes the 2nd, 3rd escapes the 4th... Lets say Even Numbers) o/p: this is a tesy $ \\ ... Here we End up with two

## Hello World | ##
When a new file is opened in an editor like Vim, the data stays in a buffer until you save it... We can call it a new buffer...
Good Practice
#!/bin/bash  ( This Script runs in a Subshell )
# Definition of the interpreter, including the date 
#Date: 06.07.10
#Author: Dean Davis
#Purpose: Hello World
Capitalize the variables
#Comments are Appreciated...
#Variables are set here
#Begin Code 
#END

Note:
source helloworld.sh
./helloworld.sh (Conventional Way... ./(This Directory))

## Hello World || ##

Note: Scripts simply run in subshells...
You can Define Global Variable and Local variables
Eg:
MESSAGE="hello world"
clear
echo $MESSAGE
echo "$MESSAGE"
date +%F\ %r ( O/P: 2004-08-27 07:07:24 PM)
#END
o/p: hello world, hello world
#wc
o/p: lines words characters

## Hello World ||| ##

echo what is your name?
read name
echo Hello $name
echo How are you doing?
read feeling
echo you said you were $feeling
echo Script Name: `basename $0` ( $0 is the current script; 0 is reserved for the name of the script )
basename strips the leading path from the script name
o/p - Script Name: helloworld.sh
echo Script Name: $0
O/p - Script Name: ./Helloworld.sh
Note: Each echo command starts a new line
echo time is `date +%r`
Note: Call to another file in a Shell Script
. helloworld2.sh or source helloworld2.sh
#mail
# ./helloworld.sh | mail -s "Testing hello world using pipes" root or gyani.kgp@gmail.com 
Mutt is a shell-based mail client

## Functions ##

Functions allow you to organize logical pieces of code; it is suggested that you develop shell scripts in components
so you can troubleshoot or debug very easily... It brings modularity
#!/bin/bash (shebang header)
Date:
Author:
Purpose:
Created:
Modified:
#End
Eg:

##
function showdate() {
    date +%F
}
function showtime() {
    date +%r
}
function getuserinfo () {
    echo Please enter your firstname and lastname
    read firstname lastname
    echo Hello $firstname $lastname
}
function mailadmin() {
    echo success | mail -s " Successful Execution of Script " root
}
#call the function
showdate
showtime
getuserinfo
mailadmin

# a hypothetical function returning a status code
function checkstatus()
{
    return 0    # return 1 (or another non-zero code) on failure, 0 on success
}

checkstatus
retval=$?

Note : We can define a function without the keyword 'function'
Eg: 
mailadmin() {
  echo "hello"
}
I hope you enjoyed...


## For Loops | ##

Bash by default supports 3 types of loops
1. for loop
2. while loop
3. until loop

## For Loop ##

for arg in [list]; do action item done
Note: The semicolon is needed if you continue on the same line; otherwise it is not needed...
Eg:
for countries in USA Australia France
do 
    echo $countries
done

for file in `ls -A`
do 
    echo $file
done
o/p: lists all the files, including all the directories

DIR="/etc"
for file in `ls -A $DIR`
do
    echo $file
done

Command Subsitution 
1. backticks ` ` 
2. dollar & parentheses $()

Note: Always use echo statements; they are very helpful for debugging programs
Eg: 
for num in `seq 1 100`
do 
    echo $num 
done

Eg:
PASSFILE="/etc/passwd"
COUNT=0 (Initialize )

for user in `cat $PASSFILE | cut -f 1 -d ':'`
do 
    echo $user
    let "COUNT += 1"
done
echo $COUNT users registered in the system
exit 0

## While Loops ##

We turn our attention to while loops

Let's show you an example
Eg: 
NUM=0
MAX=20

while [ "$NUM" -lt "$MAX" ] ( Condition Testing ) less than -lt, greater than -gt, 
do 
    echo $NUM
    let "NUM += 1"
done 
#End

## Until Loops ##
We can use this as the opposite of the while loop
syntax : 
until [ condition-is-true ]; do command done   

NUM=100
MIN=20

until [ "$NUM" -eq "$MIN" ]
do
    echo $NUM
    let "NUM -= 1"
done
#End

until [ "$STATUS" -eq "0" ]
do
    ping -c 1 192.168.1.35
    echo The host id down
    STATUS=`echo $?`
done

The loop will run indefinitely, until the host is up...

## Control Structure | ##
comparison tests
test 1 -eq 1 , echo $? - 0
test 1 -eq 2 , echo $? - 1
test linuxcbt = linuxcbt , echo $? - 0 ( we can compare two strings )

##
if [ 1 -eq 1 ]  ( less than or equal: -le ), if [ "linuxcbt" = "linuxcbt" ]
then 
    echo Both values are equal 
else
    echo Both values are unequal
fi

Eg:
for countries in USA Australia France Latvia Argentina Jamaica
do 
    if [ "$countries" = "USA" ]
    then
        echo welcome to USA
    elif [ "$countries" = "Jamaica" ]
    then
        echo One Love
    fi
done

## Control Structure || ##
check whether a file exists...
FILE="helloworld.sh"
if [ -e $FILE ]  ( Files... ), if [ -d $FILE ] ( Directories... )
then 
    echo The file exists
else
    echo The file doesn't exist
fi

## Comparison of files' time stamps...
FILE1="test1"
FILE2="test2"

if [ $FILE1 -nt $FILE2 ]
then 
    echo $FILE1 is newer
else
    echo $FILE2 is newer
fi

# Checking the Services ##
netstat -ant | grep :80 > /dev/null ( redirecting the o/p to /dev/null still preserves the exit status )
netstat -antp | grep httpd > /dev/null 
APACHESTATUS="$?"
if [ "$APACHESTATUS" -eq 0 ]
then 
    echo Apache is UP and Running...
    #Testing mysql 
    netstat -ant | grep 3306 > /dev/null ( we don't want to see the output, but we want the exit status )
    MYSQLSTATUS="$?"
    if [ "$MYSQLSTATUS" != 0 ]   ( nested if-then ) 
    then 
        echo Mysql is Not Running !
    else
        echo Mysql is Running!
    fi
else
    echo Apache is NOT Running!
fi

# Control Structure ||| #
Case is a concise way of writing if-then-else statements... In C/C++ it's the switch statement
Eg:
for countries in USA Australia France Latvia Argentina Jamaica cuba   
do 
    case $countries in USA )
    echo "welcome to North amrica" ;;
    Australia )
    echo "Good Day Mate" ;;
    France )    
    echo " Merci" ;;
    Latvia )
    echo "welcome to the former ussr" ;;
    Argentina )
    echo "Buenos Dias" ;;
    Jamaica )
    echo "one Love" ;;
    * )  # the wildcard handles anything that doesn't match a case above
    echo " " 
    esac
done

# Positional Parameter #
Parameters given on the command line are passed to the script for execution
Note: positional parameters are separated by spaces...
eg: echo $#  o/p: for 'script 3 4 5' the result is 3
eg: 
BADPARAM=165
echo $# ( it holds the no. of positional parameters )

if [ $# != 2 ]
then 
    echo This script requires 2 arguments
    echo You've entered $# parameters
    exit $BADPARAM
fi 
#Begin Sequence Command
seq $1 $2
#End

$* = Returns a single string ("$1 $2 ... $n") comprising all of the positional parameters, separated by the internal field separator character 

$@ = Returns a sequence of strings ("$1", "$2", ... "$n") wherein each positional parameter remains separate from the others. Use "$@" to let each command-line parameter expand to a
# separate word: a list of positional parameters separated by unquoted spaces

$0 = Refers to the name of the script itself. 
$# = Refers to the number of arguments specified on the command line... gives you the number of positional parameters...
$$ = Special parameter that gives the process id of the shell
$! = Gives you the process id of the most recently executed background process 
$? = Gives the exit status of the most recently executed command
$- = Options set using the set built-in command
$_ = Gives the last argument to the previous command. At shell startup, it gives the absolute filename of the shell script being executed.

Eg: mytest foo bar quux
   echo There are $# arguments to $0: $*  // There are 3 arguments to mytest: foo bar quux
   echo first argument: $1 // first argument: foo
   echo second argument: $2 // second argument: bar
   echo third argument: $3 // third argument: quux
   echo here they are again: $@ // here they are again: foo bar quux

#Select Menus #
We can create menus in bash using 
1. case
2. select
set | grep PS - the PS3 variable is not assigned anything by default; it can be edited.
Syntax:
select var in Choice1 Choice2
    do
    command
    break (This is optional...)
    done
#End

Eg:
PS3='Please select a choice: '

select var in "Choice1"
    do 
    echo Hello World
    break
    done
Eg:
PS3='Please select a Choice: '
LIST="MySQL System Quit"
select i in $LIST
    do 
    if [ $i = "MySQL" ]
    then
        watch tail /var/log/mysqld.log
    elif [ $i = "System" ]
    then
        watch tail /var/log/messages
    elif [ $i = "Quit" ]
    then
    exit
    fi
    done

# MOVE MANY FILES #
touch test{1,2}
Eg:
BADARG=165
if [ $# != 1 ]
then 
    echo Exactly 1 positional parameter is required!
    exit $BADARG
fi 
for file in `ls -A $1*`
do 
    mv $file $file.old
done
#End

# Network Check Connectivity #
Tip:
1. Whenever you write a script, open an additional terminal to test the few commands which you are going to include
2. You can speed up the process this way

Eg:
if [ $# -eq 0 ]
then
    SITE="www.google.com"
else
    SITE="$1"
fi
ping -c 2 $SITE > /dev/null

if [ $? != 0 ]
then
    echo `date +%F`
    echo There seems to be Internet connectivity issues!
fi

# File Differences #
MONITORDIR="/root/temp2"
ls -A $MONITORDIR > filelist2
FILEDIFF=`diff filelist1 filelist2 | cut -f 2 -d ' '`
echo $FILEDIFF
#exit  ( debugging stop; commented out so the loop below runs )

for file in $FILEDIFF
do 
    if [ -e $MONITORDIR/$file ]
    then
    echo $file
    fi
done

# MONITOR SERVICES #
netstat -ant | grep 3306 > /dev/null
MYSQLSTATUS=`echo $?`
SERVICENAME="mysqld"
COUNT=0
THRESHOLD=2

if [ $MYSQLSTATUS != 0 ]
then
    while [ $COUNT -le $THRESHOLD ]
    do 
        service $SERVICENAME start
        if [ $? != 0 ]
        then
            let "COUNT += 1"
        else
            exit 0
        fi
    done
    echo Problems starting $SERVICENAME | mail -s "$SERVICENAME Failure" root
else
    echo $SERVICENAME is running...
fi

# FTP Synchronization #
Using lftp's mirror command we can synchronize the local dir to a remote dir... The lftp script file ( ftpsynchronize.lftp ) contains:
open -ulinuxcbtdebian,abc123 192.168.1.20
cd temp2
lcd /root/temp2
mirror -Rn
SCRIPTHOME="/root/temp2"
LFTPSCRIPT=$SCRIPTHOME/ftpsynchronize.lftp
lftp -f $LFTPSCRIPT

# Parse Logs #
checking logs 
awk is a superior parser to cut
awk has features that let it perform grep-like matching
awk '/anonymous/ { print $8,$12 }' 
if [ $# != 1 ]
then 
    echo Exactly 1 parameter is required: the threshold value 
    exit 165
fi
LOGFILE="/var/log/vsftpd/vsftpd.log"
BADNAME="anonymous"
THRESHOLD=$1   
OFFENSES=`awk '/anonymous/ { print $8,$12 }' $LOGFILE | wc -l`  (Note: counts matching lines)
grep $BADNAME $LOGFILE | awk '{ print $8,$12 }'
if [ $OFFENSES -gt $THRESHOLD ]
then
    echo $OFFENSES Attempted breaches were detected | mail -s "Breach attempt" root
fi
echo $OFFENSES

# Backup Files #
BACKUPDIR=~/backup
SCRIPTDIR=~/temp2
BACKUPFILE=scripts.backup.`date +%F`.tgz
BACKUPHOST=192.168.1.20
COUNT=`ls $BACKUPDIR/scripts.* | wc -l`
THRESHOLD=7
function checkbackup () {
if [ ! -e $BACKUPDIR ]
then
    echo Creating directory because it doesn\'t exist!
    mkdir ~/backup
    COUNT=0
#    exit 0
else
    COUNT=`ls $BACKUPDIR/scripts.* | wc -l`
fi

}

function backup() {
if [ $COUNT -le $THRESHOLD ]
then
    tar -czvf $BACKUPDIR/$BACKUPFILE $SCRIPTDIR > /dev/null
    if [ $? != 0 ]; then echo Problems Creating Backup file; fi
    scp $BACKUPDIR/$BACKUPFILE $BACKUPHOST:   
    if [ $? != 0 ]; then echo Problems Copying File to Backup Host; fi
fi
}
checkbackup
backup

Note: the public key is stored in .ssh/id_rsa.pub ( copy the public key to the remote host into .ssh/authorized_keys - create the file if needed )
      the private key is stored in .ssh/id_rsa and must never be shared

# Logging Techniques #
Eg: 
date +%b\ %d o/p: Oct 17
grep -E 'Oct 15' messages.1 | wc -l
grep -E `date +%b\ %d`  messages
mydate=`date +%b\ %d`
myscript=`basename $0`
myscripterrors=$myscript.errors
log1=/var/log/messages.1
log2=/var/log/maillog
log3=/var/log/mysqld.log
log4=/var/log/secure
log5=/var/log/cron.1
for log in $log{1,2,3,4,5}
do    
    if [ -e $log ] && [ -s $log ] ( if the file exists and is not blank (not zero bytes), process it... )
    then
    echo $log BEGIN
    grep -E "$mydate" $log 2> $myscripterrors   
    echo $log END
    echo
    fi
done
#Cleanup STDERR generated file
if [ -e $myscripterrors ] && [ ! -s $myscripterrors ]
then
    rm -rf $myscripterrors
fi

# Additional Stuff #
The expr Command
Originally, the Bourne shell provided a special command that was used for processing
mathematical equations. The expr command allowed for processing equations

Using Brackets
The bash shell includes the expr command to stay compatible with the Bourne shell; however, it also provides a much easier way of performing mathematical equations. In bash, when assigning a mathematical value to a variable, you can enclose the mathematical equation using a dollar sign and square brackets ($[ operation ]):
e.g. var1=$[1 * 5]
The z shell (zsh) provides full floating-point arithmetic operations. If you require
floating point calculations in your shell scripts, you might consider checking out
the z shell, which you can easily install using the Synaptic installer in Ubuntu.

A Floating-Point Solution
There have been several solutions for overcoming the bash integer limitation. The most
popular solution uses the built-in bash calculator (called bc).
e.g.
#!/bin/bash
var1=`echo "scale=4; 3.44 / 5" | bc`
echo The answer is $var1

# Linux Exit Status Codes #
Code         Description
0         Successful completion of the command
1         General unknown error
2         Misuse of shell command
126         The command can't execute
127         Command not found
128         Invalid exit argument
128+x         Fatal error with Linux signal x
130         Command terminated with Ctrl-C
255         Exit status out of range

# The test Numeric Comparisons #
Comparison             Description
n1 -eq n2             Check if n1 is equal to n2
n1 -ge n2             Check if n1 is greater than or equal to n2
n1 -gt n2             Check if n1 is greater than n2
n1 -le n2             Check if n1 is less than or equal to n2
n1 -lt n2             Check if n1 is less than n2
n1 -ne n2             Check if n1 is not equal to n2

# The test Command String Comparisons #
Comparison         Description
str1 = str2         Check if str1 is the same as string str2
str1 != str2         Check if str1 is not the same as str2
str1 < str2         Check if str1 is less than str2
str1 > str2         Check if str1 is greater than str2
-n str1         Check if str1 has a length greater than zero
-z str1         Check if str1 has a length of zero

# The test Command File comparisons #
Comparison          Description
-d file             Checks if file exists and is a directory
-e file             Checks if file exists
-f file             Checks if file exists and is a file
-r file             Checks if file exists and is readable
-s file             Checks if file exists and is not empty
-w file             Checks if file exists and is writeable
-x file             Checks if file exists and is executable
-O file             Checks if file exists and is owned by the current user
-G file             Checks if file exists and its group is the same as the current user's
file1 -nt file2     Checks if file1 is newer than file2
file1 -ot file2     Checks if file1 is older than file2

# Shell Scripting Debugging #
Advanced Scripting
Debugging: fixing unexpected errors
- Sometimes your script will not work properly, and it is not immediately obvious why
- In situations like this it is necessary to use the Bourne shell's built-in simple debugging facility

Note: When the -x option is used to run a script ( for example, sh -x myscript ), the script echoes each command line it is about to run just before it runs it
All special characters (such as wildcards, variable substitutions, etc.) will have already been processed by the shell before the command line is displayed

The "shift" command re-assigns the postional parameters in effect shifting them to the left one match
$1 <- $2, $2 <- $3, $3 <- $4
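
A tiny sketch of shift in action ( hypothetical demo script ):
#!/bin/bash
# print and discard positional parameters one at a time
while [ $# -gt 0 ]
do
    echo "\$1 is now: $1 ($# remaining)"
    shift
done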
Eg: bash -x test.sh 3 4 5

Note: -x - xtrace; -v - verbose; -e - stop immediately if any command returns a non-zero status; -n - check for syntax errors without executing
bash -x scriptname 
bash -v scriptname

#!/bin/bash -x 
Between the program 
set -x    set -v 
set +x     set +v

$$$$ SECRET $$$$ ### Bash Tips and Tricks ###
Don't think of piping as running two commands back-to-back though. The Linux system actually runs both commands at the same time, linking them together internally in the system. As the first command produces output, it's sent immediately to the second command. No intermediate files or buffer areas are used to transfer the data.

#Exec
The exec() family of functions will initiate a program from within a program. They are also various front-end functions to execve().
The functions return an integer error code. (0=Ok/-1=Fail). 
An exec command with no program name applies its redirections to the current shell, e.g. redirecting stdin to a file

#fd
File descriptors 0, 1 and 2 are reserved for stdin, stdout and stderr respectively. However, bash shell allows you to assign a file descriptor to an input file or output file. This is done to improve file reading and writing performance. This is known as user defined file descriptors. 
exec fd> output.txt
    * where, fd >= 3 
Eg:
exec 3> /tmp/output.txt
echo "This is a test" >&3
date >&3
exec 3<&-
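
A similar sketch for a user-defined input descriptor ( the input file is arbitrary ):
exec 3< /etc/hostname    # open fd 3 for reading
read -u 3 firstline      # read one line through fd 3
echo "$firstline"
exec 3<&-                # close fd 3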

Brace expansion becomes useful when you need to make a backup of a file. This is why it's my favorite shell trick. I use it almost every day when I need to make a backup of a config file before changing it. For example, if I'm making a change to my Apache configuration, I can do the following and save some typing:

$ cp /etc/httpd/conf/httpd.conf{,.bak}

Notice that there is no character between the opening brace and the first comma. It's perfectly acceptable to do this and is useful when adding characters to an existing filename or when one argument is a substring of the other. Then, if I need to see what changes I made later in the day, I use the diff command and reverse the order of the strings inside the braces:
$ diff /etc/httpd/conf/httpd.conf{.bak,}
1050a1051
> # I added this comment earlier

today2=`date +%d-%b-%Y`

To speed things up, you can search interactively through your command history by pressing Ctrl-R. After doing this, your prompt changes to:
(reverse-i-search)`':

if [ ! -x $parentdir -o ! -w $parentdir ] 
then
  echo "Uh oh, can't create requested directory $1"
  exit 1
fi
This is a sophisticated use of the test command, but read "!" as "not" and "-o" as "or", and you can see the test is "if not executable $parentdir or not writeable $parentdir then...", and that should make sense! 
Scripting for X Productivity
zenity
dialog

# SED Ninja Tips #

An address is either a line number ($ for the last line) or a regular expression enclosed in slashes. "." (any character), "*" (any number of the immediately preceding regular expression), "[class]" (any character in class), "[^class]" (any character not in class), "^" (beginning of line), "$" (end of line) and "\" (to escape characters where needed). 
The most commonly used sed commands are “d” (delete) and “s” (substitute).
s/pattern/replacement/[g]

Filter out empty lines from a file:
Example: sed -e '/^$/d' your_file.txt

Add the name mycomputer to the end of every line in /etc/exports:
Example: sed -e 's/$/ mycomputer/' /etc/exports > /etc/exports.new
( don't redirect the output onto the input file itself - it would be truncated before it is read )

Add the computer named comp2 only to the directories beginning with /data/ in /etc/exports:
Example: sed -e '/^\/data\//s/$/ comp2/' /etc/exports > /etc/exports.new

Remove the first word on each line (including any leading spaces and the trailing space):
E.g: cat test3.txt | sed -e 's/^ *[^ ]* //'

The initial '^ *' is used to match any number of spaces at the beginning of the line. The '[^ ]*' then matches any number of characters that are not spaces (the ^ inside the brackets reverses the match on the space), so it matches a single word. 
Remove the last word on each line:
E.g: cat test3.txt | sed -e 's/^\(.*\) .*/\1/'

Assume we have a small table of two columns of numbers which we wish to swap:
s/\([0-9]*\) \([0-9]*\)/\2 \1/
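
A quick demonstration of that substitution ( sample numbers made up ):
echo "12 34" | sed 's/\([0-9]*\) \([0-9]*\)/\2 \1/'
o/p: 34 12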

# Arrays #
array=(red green blue yellow magenta)
len=${#array[*]}
echo "The array has $len members. They are:"
i=0
while [ $i -lt $len ]; do
    echo "$i: ${array[$i]}"
    let i++
done                          

## Arithmetic Operations ##
Eg:
$((yy % 100))
N_M=`expr $T_M % 12` // expr avoids some string/integer issues...
let i++ // same as let i=i+1 // increment operator

#!/bin/bash
x=5   # initialize x to 5
y=3   # initialize y to 3
add=$(($x + $y))   # add the values of x and y and assign it to variable add
sub=$(($x - $y))   # subtract the values of x and y and assign it to variable sub
mul=$(($x * $y))   # multiply the values of x and y and assign it to variable mul
div=$(($x / $y))   # divide the value of x by y (integer division) and assign it to variable div
mod=$(($x % $y))   # get the remainder of x / y and assign it to variable mod
These assignment operators are also available with $(( )) provided they occur inside the double parentheses. The outermost assignment is still just plain old shell variable assignment.

#Shopt
The shopt built-in command is used to set and unset shell options. Using this command, you can make use of the shell's intelligence.
#Let
The let command is used to perform arithmetic operations on shell variables.
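
A brief sketch of both built-ins ( the shell option shown is just one example ):
shopt -s nocasematch    # set an option: case-insensitive pattern matching
shopt -u nocasematch    # unset it again
let "x = 5 * 4"         # arithmetic on a shell variable
echo $x                 # 20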

# Getopts Good Example ##
The getopts command is used to parse the given command-line arguments. We can define rules for options, i.e. which option accepts an argument and which does not. In the getopts command, if an option is followed by a colon, then it expects an argument for that option.
getopts provides two variables, $OPTIND and $OPTARG, which hold the index of the next parameter and the option's argument, respectively.

$ cat options.sh
#! /bin/bash
while getopts :h:r:l: OPTION
do
         case $OPTION in
          h) echo "help of $OPTARG"
             help "$OPTARG"
             ;;
          r) echo "Going to remove a file $OPTARG"
             rm -f "$OPTARG"
            ;;
         esac
done
# Output:
$ ./options.sh -h jobs
help of jobs
jobs: jobs [-lnprs] [jobspec ...] or jobs -x command [args]
    Lists the active jobs.  The -l option lists process id's in addition
    to the normal information; the -p option lists process id's only.

Example:
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

## Wonderful Example of Getopts ##
Getopts Multi Option Parsing ...
#!/bin/bash    // Run-time parameters

function help {
    echo "Usage: test -p"
}

if test $# -eq 0; then
    help
    exit 0
fi

exit 0

If you don't give any parameters, it prints the help message. But... what about checking the parameters themselves? That's where getopts comes into play! First of all, look at the following script, which adds getopts to the above script...

#!/bin/bash

function help {
    echo "Usage: test -p"
}

if test $# -eq 0; then
    help
    exit 0
fi

while getopts "p" option; do
    case $option in
        p) echo "this is a test";;
        *) help;;
    esac
done

exit 0

#### How it works
Each time the while loop is executed, getopts puts the next parameter into the variable 'option'. The desired parameters must be defined as a string containing them one by one.
If you want to accept the parameters a, b, and c, the loop should be:

while getopts "abc" var; do
    case $var in
        a) echo "parameter a given";;
        b) echo "parameter b given";;
        c) echo "parameter c given";;
        *) echo "Usage: script -abc";;
    esac
done

### Accepting Arguments   
Getopts gives you a way to accept arguments for a parameter too! Just put a : after the parameter's name, like this:

while getopts "a:bc" var; do
    case $var in
        a) echo "parameter a given, its argument is $OPTARG";;
        b) echo "parameter b given";;
        c) echo "parameter c given";;
        *) echo "Usage: script -a message -bc";;
    esac
done
As understood from the above script, the corresponding argument for a parameter is inside the variable OPTARG. So you can easily manage it

## Bash Arrays ##

Bash arrays have numbered indexes only, but they are sparse, i.e. you don't have to define all the indexes. An entire array
can be assigned by enclosing the array items in parentheses:
  arr=(Hello World)
Individual items can be assigned with the familiar array syntax (unless you're used to Basic or Fortran):
  arr[0]=Hello
  arr[1]=World
But it gets a bit ugly when you want to refer to an array item:
  echo ${arr[0]} ${arr[1]}
To quote from the man page:
    The braces are required to avoid conflicts with pathname expansion. 
In addition the following funky constructs are available:
  ${arr[*]}         # All of the items in the array
  ${!arr[*]}        # All of the indexes in the array
  ${#arr[*]}        # Number of items in the array
  ${#arr[0]}        # Length of item zero
The ${!arr[*]} is a relatively new addition to bash, it was not part of the original array implementation.
The following example shows some simple array usage (note the "[index]=value" assignment to assign a specific index):
#!/bin/bash
array=(one two three four [5]=five)
echo "Array size: ${#array[*]}"
echo "Array items:"
for item in ${array[*]}
do
    printf "   %s\n" $item
done
echo "Array indexes:"
for index in ${!array[*]}
do
    printf "   %d\n" $index
done
echo "Array items and indexes:"
for index in ${!array[*]}
do
    printf "%4d: %s\n" $index ${array[$index]}
done

Running it produces the following output:
Array size: 5
Array items:
   one
   two
   three
   four
   five
Array indexes:
   0
   1
   2
   3
   5
Array items and indexes:
   0: one
   1: two
   2: three
   3: four
   5: five

Note that the "@" sign can be used instead of the "*" in constructs such as ${arr[*]}, the result is the same except when expanding to the items of the array within a quoted string. In this case the behavior is the same as when expanding "$*" and "$@" within quoted strings: "${arr[*]}" returns all the items as a single word, whereas "${arr[@]}" returns each item as a separate word.

The following example shows how unquoted, quoted "*", and quoted "@" affect the expansion (particularly important when the array items themselves contain spaces):

#!/bin/bash
array=("first item" "second item" "third" "item")
echo "Number of items in original array: ${#array[*]}"
for ix in ${!array[*]}
do
    printf "   %s\n" "${array[$ix]}"
done
echo

arr=(${array[*]})
echo "After unquoted expansion: ${#arr[*]}"
for ix in ${!arr[*]}
do
    printf "   %s\n" "${arr[$ix]}"
done
echo
arr=("${array[*]}")
echo "After * quoted expansion: ${#arr[*]}"
for ix in ${!arr[*]}
do
    printf "   %s\n" "${arr[$ix]}"
done
echo
arr=("${array[@]}")
echo "After @ quoted expansion: ${#arr[*]}"
for ix in ${!arr[*]}
do
    printf "   %s\n" "${arr[$ix]}"
done

When run it outputs:
Number of items in original array: 4
   first item
   second item
   third
   item
After unquoted expansion: 6
   first
   item
   second
   item
   third
   item
After * quoted expansion: 1
   first item second item third item
After @ quoted expansion: 4
   first item
   second item
   third
   item

# Bash Function Returns #

Returning Values from Bash Function
* HOW-TOs
Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, or use command substitution, or you can pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.

Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.
The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:

function myfunc()
{
    myresult='some value'
}
myfunc
echo $myresult

The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult to find bugs.
A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:

function myfunc()
{
    local  myresult='some value'
    echo "$myresult"
}

result=$(myfunc)   # or result=`myfunc`
echo $result
Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.

The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    eval $__resultvar="'$myresult'"
}

myfunc result
echo $result
Since we have the name of the variable to set stored in a variable, we can't set the variable directly, we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice, the first interpretation above results in the string result='some value' which is then interpreted once more and ends up setting the caller's variable.

When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that won't be (unlikely to be) used by the caller (which is why I used __resultvar rather than just resultvar). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:

function myfunc()
{
    local  result=$1
    local  myresult='some value'
    eval $result="'$myresult'"
}
myfunc result
echo $result

The reason it doesn't work is because when eval does the second interpretation and evaluates result='some value', result is now a local variable in the function, and so it gets set rather than setting the caller's result variable.
For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    if [[ "$__resultvar" ]]; then
        eval $__resultvar="'$myresult'"
    else
        echo "$myresult"
    fi
}

myfunc result
echo $result
result2=$(myfunc)
echo $result2
Here, if no variable name is passed to the function, the value is output to the standard output.

###How To Redirect stderr To stdout ( redirect stderr to a File )###
[a] stdin - Used to get input (keyboard), i.e. data going into a program.
[b] stdout - Used to write information (screen)
[c] stderr - Used to write error messages (screen)

Understanding I/O streams numbers
The Unix / Linux standard I/O streams with numbers:
Handle     Name     Description
0     stdin     Standard input
1     stdout     Standard output
2     stderr     Standard error

Redirecting the standard error stream to a file
program-name 2> error.log

Redirecting the standard error (stderr) and stdout to file
command-name &>file
  or
command > file-name 2>&1
Redirect stderr to stdout
command-name 2>&1
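
A concrete sketch of the above ( the path is intentionally nonexistent, the log name arbitrary ):
ls /nonexistent > out.log 2>&1    # both output and error messages land in out.log
ls /nonexistent 2> /dev/null      # discard only the error messages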

##Improve Bash Shell Scripts Using Dialog##

The dialog command enables the use of window boxes in shell scripts to make their use more interactive.
Shell scripts are text files containing commands for the shell and are frequently used to handle repetitive tasks. In order to avoid typing the same commands over and over again, we put them in a file with a few modifications, give it execute permission and run it.
To control the program at run-time, an interactive shell script is needed. For this case, the dialog command offers an easy way to draw text-mode colored windows. These windows can contain text boxes, message boxes or different kinds of menus. There are even ways of using input from the user to modify the script behaviour.

E.g:
#!/bin/bash
if
    test -r weekly.report
then
    lpr weekly.report
    Mail boss < weekly.report
    cp weekly.report /floppy/report.txt
    rm weekly.report
else
    echo I see no weekly.report file here.
fi

The test command's -r operator means, “Does this file exist, and can I read it?” test is quiet regardless of whether it succeeds or fails, so there's no need for anything to get sent to /dev/null.
Test also has an alternative syntax: you can use a [ character instead of the word test, so long as you have a ] at the end of the line. Be sure to put a space between any other characters and the [ and the ] characters! We can make our if look like this now:

if [ -r weekly.report ]
#!/bin/bash
if [ ! -r weekly.report ]
then
    echo I see no weekly.report file here.
    exit 1
fi
if [ ! -w . ]
then
    echo I will not be able to delete
    echo weekly.report for you, so I give up.
    exit 2
fi
# Real work omitted...

Each test now has a ! character in it, which means “not”. So the first test succeeds if the weekly.report is not readable, and the second succeeds if the current directory (“.”) is not writable. In each case, the script prints an error message and exits. Notice that there's a different number fed to exit each time. This is how Unix commands (including if itself!) tell whether other commands succeed: if they exit with any exit code other than 0, they didn't. What each non-zero number (up to 255) means, other than “Something bad happened,” is up to you. But 0 always means success.

Fortunately, test can help. Let's put this as the very first test in our program, right after the #!/bin/bash:
if [ -z "$1" ]
then
    echo $0: usage: $0 filename
    exit 3
fi
Now if the user puts nothing on the command line, we print a usage message and quit. The -z operator means “is this an empty string?”. Notice the double quotes around the $1: they are mandatory here. If you leave them out, test will give an error message in just the situation we are trying to detect. The quotes protect the nothing-at-all stored in $1 from causing a syntax error

Here's how we read that script: "While there's something in $1, we mess with it. Immediately after we finish messing with it, we do the shift command, which moves the contents of $2 into $1, the contents of $3 into $2, and so forth, regardless of how many of these command-line arguments there are. Then we go back and do it all again. We know we've finished when there's nothing at all in $1."

This technique allows us to write a script that can handle any number of arguments, while only dealing with $1 at a time. So now our script looks like this:
#!/bin/bash
while [ ! -z "$1" ]
do
    # do stuff to $1
    if [ ! -r $1 ]
    then
        echo $0: I see no $1 file here.
        exit 1
    fi
        # omitted test...
    lpr $1
    Mail boss < $1
    # and so forth...
    shift
done
exit 0

Notice that we nested if inside while. We can do that all we like. Also notice that this program quits the instant it finds something wrong. If you would like it to continue on to the next argument instead of bombing out, just replace an exit with:

shift
continue
The continue command just means “Go back up to the top of the loop right now, and try the control command again.” 

##Floating Point Math in Bash##
The obvious candidate for adding floating point capabilities to bash is bc. bc is, to quote the man page:
    An arbitrary precision calculator language 
As an aside, take note of the last word in that quote: "language". That's right, bc is actually a programming language; it contains if statements and while loops, among others. I say "as an aside" because it's largely irrelevant to what we want to do today; not completely irrelevant, but largely.

To use bc in our bash scripts we'll package it up into a couple of functions:

    float_eval EXPRESSION
and
    float_cond CONDITIONAL-EXPRESSION

Both functions expect a single floating point expression, float_eval writes the result of the expression evaluation to standard out, float_cond assumes the expression is a conditional expression and sets the return/status code to zero if the expression is true and one if it's false.

Usage is quite simple:
  float_eval '12.0 / 3.0'
  if float_cond '10.0 > 9.0'; then
    echo 'As expected, 10.0 is greater than 9.0'
  fi
  a=12.0
  b=3.0
  c=$(float_eval "$a / $b")

The code for the functions follows: 
float_scale=2    # decimal places for bc results; referenced by float_eval below, so define it first
function float_eval()
{
    local stat=0
    local result=0.0
    if [[ $# -gt 0 ]]; then
        result=$(echo "scale=$float_scale; $*" | bc -q 2>/dev/null)
        stat=$?
        if [[ $stat -eq 0  &&  -z "$result" ]]; then stat=1; fi
    fi
    echo $result
    return $stat
}

function float_cond()
{
    local cond=0
    if [[ $# -gt 0 ]]; then
        cond=$(echo "$*" | bc -q 2>/dev/null)
        if [[ -z "$cond" ]]; then cond=0; fi
        if [[ "$cond" != 0  &&  "$cond" != 1 ]]; then cond=0; fi
    fi
    local stat=$((cond == 0))
    return $stat
}

### Use the Bash trap Statement to Clean Up Temporary Files ###

The trap statement in bash causes your script to execute one or more commands when a signal is received. One of the useful things you can use this for is to clean up temporary files when your script exits.
To execute code when your script receives a signal, use the following syntax:
trap arg sigspec...

The "arg" is the command to execute. If the command contains spaces, quote it. You can include multiple commands by separating them with semicolons. For more complex things, put your exit code in a function and just invoke the function. The "sigspec" list is a list of signals to trap and then execute "arg" (if/when they occur). For example, to remove a file on EXIT, do the following:
trap "rm -f afile" EXIT
Note that EXIT is not a real signal (do kill -l to see all signals); it is synthesized by bash.

If you create temporary files at various places in your code and you don't use a naming convention that would allow you to use a wild card in your trap statement and you don't want to worry about changing your trap statement as your code evolves, you could write something like this to allow you to add new trap commands that get executed on exit:

#!/bin/bash

declare -a on_exit_items
function on_exit()
{
    for i in "${on_exit_items[@]}"
    do
        echo "on_exit: $i"
        eval $i
    done
}
function add_on_exit()
{
    local n=${#on_exit_items[*]}
    on_exit_items[$n]="$*"
    if [[ $n -eq 0 ]]; then
        echo "Setting trap"
        trap on_exit EXIT
    fi
}
touch $$-1.tmp
add_on_exit rm -f $$-1.tmp

touch $$-2.tmp
add_on_exit rm -f $$-2.tmp

ls -la
Here the function add_on_exit() adds commands to an array, and the on_exit() function loops through the commands in the array and executes them on exit. The on_exit function gets set as the trap command the first time add_on_exit is called.

### Pass on Passwords with scp ###

Learn how to propagate files quickly and do backups easily when you set up scp to work without needing passwords. 
You now are asked for bozo's root password, so we're not quite there yet. The system still is asking for a password, so it's not easily scriptable. To fix that, follow this one-time procedure, after which you can make endless password-less scp copies:
   1.Decide which user on the local machine will be using scp later on. Of course, root gives you the most power, and that's how I personally have done it. I'm not going to give you a lecture here on the dangers of root, so if you don't understand them, choose a different user. Whatever you choose, log in as that user now and stay there for the rest of the procedure. Log in as this same user when you use scp later on.
   2.Generate a public/private key pair on the local machine. Say what? If you're not familiar with public key cryptography, here's the 15-second explanation. In public key cryptography, you generate a pair of mathematically related keys, one public and one private. You then give your public key to anyone and everyone in the world, but you never ever give out your private key. The magic is in the mathematical makeup of the keys; anyone with your public key can use it to encrypt a message, but only you can decrypt it with your private key. Anyway, the syntax to create the key pair is:
   
    ssh-keygen -t rsa
   3. In response, you should see:
    Generating public/private rsa key pair
      Enter file in which to save the key ... 
      Press Enter to accept this.
   4. In response, you should see:
    Enter passphrase (empty for no passphrase):
    You don't need a passphrase, so press Enter twice.
   5. In response, you should see:
     Your identification has been saved in ... 
      Your public key has been saved in ... 
      Note the name and location of the public key just generated. It always ends in .pub.
   6. Copy the public key just generated to all of your remote Linux boxes. You can use scp or FTP or whatever to make the copy. Assuming you're using root--again, see my warning in step 1--the key must be contained in the file /root/.ssh/authorized_keys. Or, if you are logging in as a user, for example, clyde, it would be in /home/clyde/.ssh/authorized_keys. Notice that the authorized_keys file can contain keys from other PCs. So, if the file already exists and contains text, you need to append the contents of your public key file to what already is there.

Now, with a little luck, you should be able to scp a file to the remote box without needing to use a password. So let's test it by trying our first example again. Copy a file named xyz.tgz from your local PC to the /tmp dir of a remote PC called bozo: 
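
Presumably the test would look like this ( host and filename from the example above ):
scp xyz.tgz root@bozo:/tmp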

## Bash Regular Expressions ##
When working with regular expressions in a shell script the norm is to use grep or sed or some other external command/program. Since version 3 of bash (released in 2004) there is another option: bash's built-in regular expression comparison operator "=~".
Bash's regular expression comparison operator takes a string on the left and an extended regular expression on the right. It returns 0 (success) if the regular expression matches the string, otherwise it returns 1 (failure).

In addition to doing simple matching, bash regular expressions support sub-patterns surrounded by parenthesis for capturing parts of the match. The matches are assigned to an array variable BASH_REMATCH. The entire match is assigned to BASH_REMATCH[0], the first sub-pattern is assigned to BASH_REMATCH[1], etc..

The following example script takes a regular expression as its first argument and one or more strings to match against. It then cycles through the strings and outputs the results of the match process: 
#!/bin/bash
if [[ $# -lt 2 ]]; then
    echo "Usage: $0 PATTERN STRINGS..."
    exit 1
fi
regex=$1
shift
echo "regex: $regex"
echo
while [[ $1 ]]
do
    if [[ $1 =~ $regex ]]; then
        echo "$1 matches"
        i=1
        n=${#BASH_REMATCH[*]}
        while [[ $i -lt $n ]]
        do
            echo "  capture[$i]: ${BASH_REMATCH[$i]}"
            let i++
        done
    else
        echo "$1 does not match"
    fi
    shift
done

Assuming the script is saved in "bashre.sh", the following sample shows its output:
  # sh bashre.sh 'aa(b{2,3}[xyz])cc' aabbxcc aabbcc
  regex: aa(b{2,3}[xyz])cc

  aabbxcc matches
  capture[1]: bbx
  aabbcc does not match

## Validating an IP Address in a Bash Script ##

I've recently written about using bash arrays and bash regular expressions, so here's a more useful example of using them to test IP addresses for validity.
To belabor the obvious: IP addresses are 32-bit values written as four numbers (the individual bytes of the IP address) separated by dots (periods). Each of the four numbers has a valid range of 0 to 255.
The following bash script contains a bash function which returns true if it is passed a valid IP address and false otherwise. In bash-speak, true means it exits with a zero status; anything else is false. The status of a command/function is stored in the bash variable "$?".
#!/bin/bash
# Test an IP address for validity:
# Usage:
#      valid_ip IP_ADDRESS
#      if [[ $? -eq 0 ]]; then echo good; else echo bad; fi
#   OR
#      if valid_ip IP_ADDRESS; then echo good; else echo bad; fi
#
function valid_ip()
{
    local  ip=$1
    local  stat=1

    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

# If run directly, execute some tests.
if [[ "$(basename $0 .sh)" == 'valid_ip' ]]; then
    ips='
        4.2.2.2
        a.b.c.d
        192.168.1.1
        0.0.0.0
        255.255.255.255
        255.255.255.256
        192.168.0.1
        192.168.0
        1234.123.123.123
        '
    for ip in $ips
    do
        if valid_ip $ip; then stat='good'; else stat='bad'; fi
        printf "%-20s: %s\n" "$ip" "$stat"
    done
fi

If you save this script as "valid_ip.sh" and then run it directly, it will run some tests and print the results:
  # sh valid_ip.sh
  4.2.2.2             : good
  a.b.c.d             : bad
  192.168.1.1         : good
  0.0.0.0             : good
  255.255.255.255     : good
  255.255.255.256     : bad
  192.168.0.1         : good
  192.168.0           : bad
  1234.123.123.123    : bad

In the function valid_ip, the if statement uses a regular expression to make sure the subject IP address consists of four dot separated numbers:
  if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
If that test passes then the code inside the if statement separates the subject IP address into four parts at the dots and places the parts in an array:

  OIFS=$IFS
  IFS='.'
  ip=($ip)
  IFS=$OIFS

It does this by momentarily changing bash's Internal Field Separator variable so that rather than parsing words as whitespace-separated items, bash parses them as dot-separated. Putting the value of the subject IP address inside parentheses and assigning it to itself thereby turns it into an array, where each dot-separated number is assigned to an array slot. Now the individual pieces are tested to make sure they're all less than or equal to 255, and the status of the test is saved so that it can be returned to the caller:
  [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
          && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
  stat=$?

Note that there's no need to test that the numbers are greater than or equal to zero because the regular expression test has already eliminated anything that doesn't consist of only dots and digits. 

### Tech Tip: Dereference Variable Names Inside Bash Functions  ###
We often read (including in the book Advanced Bash-Scripting Guide by Mendel Cooper) that if we pass variable names as parameters to functions, they will be treated as string literals and cannot be dereferenced (i.e., the value is not available). But this is not so: variable names can be passed as parameters to functions and they can be dereferenced to obtain the value of the variable with the given name.

The following script demonstrates this:
DereferenceVariablePassedToFunction() {
    if [ -n "$1" ] ; then
        echo "value of [${1}] is: [${!1}]"
    else
        echo "Null parameter passed to this function"
    fi
}
Variable="LinuxJournal"
DereferenceVariablePassedToFunction Variable

If we put the above code in a file and run it, we get the following output:

$ /bin/bash DereferenceVarInFunction.sh
value of [Variable] is: [LinuxJournal]
The secret here is the "!" used in the variable expansion "${!1}". The bash manual states:
    If the first character of parameter is an exclamation point, a level of variable indirection is introduced. Bash uses the value of the variable formed from the rest of parameter as the name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the value of parameter itself. This is known as indirect expansion. 
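For example (a tiny sketch with made-up variable names):
  animal=cow
  name=animal
  echo "${!name}"    # prints "cow": bash expands $name to "animal", then expands $animal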

##  Using Named Pipes (FIFOs) with Bash ##

Like un-named/anonymous pipes, named pipes provide a form of IPC (Inter-Process Communication). With anonymous pipes, there's one reader and one writer, but that's not required with named pipes—any number of readers and writers may use the pipe.
Named pipes are visible in the filesystem and can be read and written just as other files are:
$ ls -la /tmp/testpipe
prw-r--r-- 1 mitch users 0 2009-03-25 12:06 /tmp/testpipe|
One situation might be when you've got a backup script that runs via cron, and after it's finished, you want to shut down your system. If you do the shutdown from the backup script, cron never sees the backup script finish, so it never sends out the e-mail containing the output from the backup job. You could do the shutdown via another cron job after the backup is "supposed" to finish, but then you run the risk of shutting down too early every now and then, or you have to make the delay much larger than it needs to be most of the time.
Using a named pipe, you can start the backup and the shutdown cron jobs at the same time and have the shutdown just wait till the backup writes to the named pipe. When the shutdown job reads something from the pipe, it then pauses for a few minutes so the cron e-mail can go out, and then it shuts down the system.
Named pipes are created via mkfifo or mknod:

$ mkfifo /tmp/testpipe
$ mknod /tmp/testpipe p
The following shell script reads from a pipe. It first creates the pipe if it doesn't exist, then it reads in a loop till it sees "quit":

#!/bin/bash
pipe=/tmp/testpipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
while true
do
    if read line <$pipe; then
        if [[ "$line" == 'quit' ]]; then
            break
        fi
        echo $line
    fi
done
echo "Reader exiting"
The following shell script writes to the pipe created by the read script. First, it checks to make sure the pipe exists, then it writes to the pipe. If an argument is given to the script, it writes it to the pipe; otherwise, it writes "Hello from PID".

#!/bin/bash
pipe=/tmp/testpipe
if [[ ! -p $pipe ]]; then
    echo "Reader not running"
    exit 1
fi
if [[ "$1" ]]; then
    echo "$1" >$pipe
else
    echo "Hello from $$" >$pipe
fi

Running the scripts produces:
$ sh rpipe.sh &
[3] 23842
$ sh wpipe.sh
Hello from 23846
$ sh wpipe.sh
Hello from 23847
$ sh wpipe.sh
Hello from 23848
$ sh wpipe.sh quit
Reader exiting
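Coming back to the backup/shutdown scenario described earlier, the shutdown side might look something like this (a minimal sketch; the pipe name and delay are made up for illustration):

#!/bin/bash
pipe=/tmp/backup-done             # hypothetical pipe name shared with the backup job
[[ -p $pipe ]] || mkfifo $pipe
read line <$pipe                  # blocks here until the backup job writes to the pipe
sleep 300                         # pause so the cron e-mail can go out
/sbin/shutdown -h now

The backup script would then simply end with something like echo done >$pipe.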


### Bash Associative Arrays ###
There's nothing too surprising about associative arrays in bash; they are as you probably expect:
declare -A aa
aa[hello]=world
aa[ab]=cd

The -A option declares aa to be an associative array. Assignments are then made by putting the "key" inside the square brackets rather than an array index. You can also assign multiple items at once:
declare -A aa
aa=([hello]=world [ab]=cd)
Retrieving values is also as expected:

if [[ ${aa[hello]} == world ]]; then
    echo equal
fi
bb=${aa[hello]}

You can also use keys that contain spaces or other "strange" characters:
aa["hello world"]="from bash"

Note however that there appears to be a bug when assigning more than one item to an array with a parenthesis enclosed list if any of the keys have spaces in them. For example, consider the following script:

declare -A b
b=([hello]=world ["a b"]="c d")
for i in 1 2
do
    if [[ ${b["a b"]} == "c d" ]]; then
        echo $i: equals c d
    else
        echo $i: does not equal c d
    fi
    b["a b"]="c d"
done

Before ending I want to point out another feature that I just recently discovered about bash arrays: the ability to extend them with the += operator. This is actually the thing that led me to the man page, which then allowed me to discover the associative array feature. This is not a new feature, just new to me:
aa=(hello world)
aa+=(b c d)

After the += assignment the array will now contain 5 items, the values after the += having been appended to the end of the array. This also works with associative arrays.
aa=([hello]=world)
aa+=([b]=c)           # aa now contains 2 items

Note that the += operator also works with regular variables, appending to the end of the current value.
aa="hello"
aa+=" world"          # aa is now "hello world"

Another expansion that exists is to extract substrings from the expanded value using the form ${VAR:offset:length}. This works in the expected form: offsets start at zero, if you don't specify a length it goes to the end of the string. For example:
  str=abcdefgh
  echo ${str:0:1}
  echo ${str:1}
outputs "a" and "bcdefgh". 

This form also accepts negative offsets which count backwards from the end of the string. So this:
  str=abcdefgh
  echo ${str:-3:2}
produces "abcdefgh"... oops, what happened there? What happened was that bash misinterpretted what we wanted because the expansion looks like a default value expansion: ${VAR:-DFLT}. First time I tried this I stared at it for quite a while before a light came on as to how to do it (without using a variable [see below]):
  str=abcdefgh
  echo ${str:$((-3)):2}

which outputs the desired value "fg". The "$((...))" causes bash to treat the value as an arithmetic expansion (i.e., a number). Another slightly longer way of doing this is:
  str=abcdefgh
  i=-3
  echo ${str:$i:2}
The final form of parameter expansion I want to mention is one which simply expands to the length of the variable's value, its form is ${#VAR}. So for example:
  str=abcdef
  echo ${#str}
outputs "6".

Using these forms of parameter expansion in your shell scripts can simplify and shorten your scripts. These are not the only forms of parameter expansion that bash supports but they're the ones that I've found most useful over time. For more information see the "Parameter Expansion" section of the bash man page.
p.s. Note that all of the above forms of parameter expansion also work with bash's Special parameters: "$$", "$0", "$1", etc. 

command1 && command2 
command2 is executed if, and only if, command1 returns 
an exit status of zero. 

command1 || command2 
command2 is executed if and only if command1 returns 
a non-zero exit status.  

The return status of AND and OR lists is the exit 
status of the last command executed in the list. 
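A quick sketch of both lists in action (the paths are made up):

mkdir /tmp/workdir && cd /tmp/workdir      # cd runs only if mkdir succeeded
grep root /etc/passwd || echo "no match"   # echo runs only if grep failed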

Later in the script, there is undoubtedly a statement like rm -f /var/lock/subsys/$sname, and in fact, a cleaner way to write it would be to trap exit conditions and make sure that the lock file isn't left around, even if the script errors out. This is done with the trap shell command. Error condition 0 is a standard termination, so one clean way to write this is as follows:

trap "/bin/rm -f /var/lock/subsys/$sname" 0 
This provides a lot of flexibility, because you can capture any of the dozens of possible signals like SIGINT (interrupt) or SIGHUP (hangup).
Anyway, you're not the first to be baffled by system scripts, but as you can see, a bit of persistence reveals all. 

### Useful Linux One Liners ###
1. Display Username and UID sorted by UID Using cut, sort and tr
The cut command is used to extract a specific part of a file. The following example cuts the username and UID from the /etc/passwd file and sorts the output numerically on the UID field, using ":" as the delimiter.
As part of formatting the output, you can use any other character to display the username and UID. Using the tr command you can convert ":" to any other character (a tab in this example).
$ cut -d ':' -f 1,3 /etc/passwd | sort -t ':' -k2n - | tr ':' '\t'

2. Find List of Unique Words in a file Using tr, sed, uniq
The following example lists the words which contain only alphabetic characters. The 'tr' command converts every character other than a letter into a newline, so each word ends up on its own line. The sed command removes the empty lines, and finally sort and uniq produce the list of unique words with their counts.
$ tr -c a-zA-Z '\n' < Readme1.txt | sed '/^$/d' | sort | uniq -i -c
Note: uniq with -i ignores case
The Linux 'sed' command plays a vital role in text manipulation operations

3. Join Two Files (Where One File Is Not Sorted) Using sort and join
The join command joins two files based on a common field between them. For join to work properly, both files should be sorted. 
In the example below, the file m1.txt has the employee name and employee ID, and it's not sorted. The second file m2.txt has the employee name and department name. To join these two files, sort the first file and give the sorted output as one of the input streams of join.
$ sort m1.txt | join - m2.txt

4. Find out which process is using up your memory using ps, awk and sort
The following command lists all processes, sorted by used memory size.
$ ps aux | awk ' { if ($5 != 0) print $2,$5,$6,$11 } ' | sort -k2n
The above command lists the PID, used virtual memory size, used resident set size, and the process command.
Awk is an extremely useful language to manipulate structured data very quickly.

5. Find Out Top 10 Largest File or Directory Using du,sort and head
The 'du' command shows the summarized disk usage for each file and directory of a given location (/var/log/* here). The output of du is then sorted in reverse numeric order, largest first.
# du -sk /var/log/* | sort -r -n | head -10

6. Find out Top 10 most Used commands.
Bash maintains all the commands you execute in a hidden file called .bash_history under your home directory.
Use the following one-liner to identify which commands you execute most from your command line.
$ cat ~/.bash_history | tr "\|\;" "\n" | sed -e "s/^ //g" | cut -d " " -f 1 | sort | uniq -c | sort -n | tail -n 15

7. Display timestamps using HISTTIMEFORMAT
  Typically when you type history from the command line, it displays the command number and the command. For auditing purposes, it may be beneficial to display the timestamp along with the command.
# export HISTTIMEFORMAT='%F %T'

8. Search the history using Control+R 

9. Repeat the previous command quickly using 4 different methods
Sometimes you may end up repeating previous commands for various reasons. Following are the 4 different ways to repeat the last executed command.
- Use the up arrow
- Type !!
- Type !-1
- Press Control+P

10. Eliminate continuous repeated entries from history using HISTCONTROL
export HISTCONTROL=ignoredups

11. Erase duplicates across the whole history using HISTCONTROL
export HISTCONTROL=erasedups

12. Disable the usage of history using HISTSIZE
export HISTSIZE=0

13. Ignore specific commands from the history using HISTIGNORE
Sometimes you may not want to clutter your history with basic commands such as pwd and ls.
$ export HISTIGNORE="pwd:ls:ls -ltr:"

Another way to enter data into a file is a here document:
# cat > demo.args << EOF
> /usr/bin/qemu -cpu host
> EOF

### Screen ####
The Linux screen command offers the ability to detach a session that is running some process (or program, or shell script), and then attach it again at a later time.
When the session is detached, the process that was originally started from screen is still running and is managed by screen. When you re-attach the session later, your terminals are still there, exactly the way you left them.

You run commands under screen like this:
# screen ssh root@vx64

Screen Detach Method 1: Detach the screen using CTRL+A d
When the command is executing, press CTRL+A followed by d to detach the screen.
Screen Detach Method 2: Detach the screen using -d option
When the command is running in another terminal, type the following command:
$ screen -d SCREENID

List all the running screen sessions with "screen -ls" (or "screen -list")

$ screen -ls
There is a screen on:
    4491.pts-2.FC547    (Attached)
1 Socket in /var/run/screen/S-sathiya.

$ screen -d 4491.pts-2.FC547  ## detach process
[4491.pts-2.FC547 detached.]

$ screen -r 4491.pts-2.FC547 ## Attach process

### Most Frequently  used Unix/Linux Commands ###
1. Tar Command
- Create a new archive : tar cvf archive_name.tar dirname/
- Extract archive : tar xvf archive_name.tar
- View an existing archive : tar tvf archive_name.tar

2. Grep Command
- Search for a given string in a file (case-insensitive) : grep -i "the" demo_file
- print the matched lines along with 3 lines after it : grep -A 3 -i "example" demo_file
- Search for a given string in all files : grep -r "ramesh" *

3. Find Command Example
- Find files using file-name : find -iname "MyCProgram.c"
- Execute commands on files found by the find command : find -iname "MyCProgram.c" -exec md5sum {} \;
- Find all empty files in home directory : find ~ -empty

4. Ssh command Examples
- login to remote host: ssh -l jsmith remotehost.example.com
- Debug ssh client : ssh -v -l jsmith remotehost.example.com
- Display ssh client version : ssh -V

5. Sed Commands
- print the content in reverse order : sed -n '1!G;h;$p' thegeekstuff.txt
- Add line number for all non-empty-lines in a file : sed '/./=' thegeekstuff.txt | sed 'N; s/\n/ /'

6. Awk Commands
- Remove duplicate lines using awk: awk '!($0 in array) { array[$0]; print }' temp
- Print all lines from /etc/passwd that has the same Uid and gid : awk -F ':' '$3==$4' passwd.txt
- Print only specific field from a file : awk '{ print $2,$5;}' employee.txt

7. Vim command examples
- Go to the 143rd line of a file: vim +143 filename.txt
- Go to the first match of the specified pattern: vim +/search-term filename.txt
- Open the file in read only mode: vim -R /etc/passwd

8. Diff Command examples
- Ignore white space while comparing : diff -w name_list.txt name_list_new.txt

9. Sort command examples
- sort a file in ascending order : sort names.txt
- sort a file in descending order : sort -r names.txt
- sort passwd file by 3rd field: sort -t: -k 3n /etc/passwd | more

10. Export command examples
- To view oracle related env variable: export | grep ORACLE

11. Xargs command examples
- Copy all images to external hard-drive: ls *.jpg | xargs -n1 -i cp {} /external-hard-drive/directory
- Search all jpg images in the system and archive them: find / -name "*.jpg" -type f -print | xargs tar -cvzf images.tar.gz
- Download all the URLs mentioned in the url-list.txt file: cat url-list.txt | xargs wget -c

12. ls Command examples
- Display filesize in human readable format: ls -lh
- Order Files Based on Last Modified Time(In reverse Order) : Using ls -ltr
- Visual Classification of files with Special characters using : ls -F

13. pwd command
- Pwd is Print working directory

14. gzip command examples
- To Create a *.gz compressed file: gzip test.txt
- To uncompress a *.gz file: gzip -d test.txt.gz
- Display compression ratio of the compressed file using : gzip -l *.gz

15. Bzip2 command example
-  To create a *.bz2 compressed file: bzip2 test.txt
-  To uncompress a *.bz2 file: bzip2 -d test.txt.bz2

16. unzip command examples
- To extract a *.zip compressed file: unzip test.zip
- view the contents of a *.zip file (without unzipping it) : unzip -l jasper.zip

17. Ftp command examples
- To connect to a remote server and download multiple files : ftp IP/hostname, then mget *.html
- To view the file names located on the remote server before downloading: mls *.html

18. Crontab command examples
- View crontab entry for a specific user: crontab -u john -l
- schedule a cron job every 10 minutes: */10 * * * * /home/ramesh/check-disk-space

19. Service command examples
The service command is used to run the System V init scripts, i.e. instead of calling the scripts located in /etc/init.d/ directly.
- Check the status of a service: service ssh status
- Check the status of all services: service --status-all
- Restart a service: service ssh restart

20. PS command examples
The ps command is used to display information about the processes that are running in the system.
- To view current running processes: ps -ef | more
- To view current running processes in a tree structure. H option stands for process hierarchy: ps -efH | more

21. free command examples
Used to display the free, used and swap memory available in the system.
- If you want to quickly check how many GB of RAM your system has, use the -g option: free -g
- If you want to see the total memory (including the swap), use the -t switch: free -t

22. Top command examples
The top command displays the top processes in the system (by default sorted by CPU usage). To sort top output by any column, press O (upper-case O).
To display only the processes that belong to a particular user, use the -u option. Top processes that belong to the oracle user: top -u oracle

23. df command examples
- Display the file system disk space usage. By default, df -k displays output in kilobytes. df -h displays output in human readable form.
- Use the -T option to display the file system type.

24. Kill command examples
- Use the kill command to terminate a process. First get the process ID using ps -ef, then use kill -9 to kill the running Linux process as shown below. You can also use killall, pkill, or xkill to terminate a Unix process.

25. Chmod command examples
The chmod command is used to change the permissions of a file or directory
- Give full access to user and group (i.e. read, write and execute) on a specific file : chmod ug+rwx file.txt
- Revoke all access for the group (i.e. read, write and execute) on a specific file: chmod g-rwx file.txt

26. ifconfig command examples
View or configure a network interface on the Linux system
- ifconfig -a, ifconfig eth0 up, ifconfig eth0 down

27. Whereis ls, whatis ls

28. locate command - using locate command you can quickly search for the location of a specific file

29. tail command examples
- Print the last 10 lines of a file by default: tail filename.txt
- Print N number of lines from the file named filename.txt : tail -n N filename.txt
- View the content of the file in real time using 'tail -f'

30. Download and store it with a different name
- wget -O taglist.zip http://www.vim.org/scripts/download_script.php?src_id=7701

### Linux modprobe Command Examples to View, Install, Remove Modules ###
The modprobe utility is used to add loadable modules to the Linux kernel. You can also view and remove modules using the modprobe command.

1. List Available Kernel Modules
modprobe -l will display all available modules as shown below.
 
2. List Currently Loaded Modules
While the above modprobe command shows all available modules, lsmod command will display all modules that are currently loaded in the Linux kernel.
lsmod | less

3. Install New modules into Linux Kernel
In order to insert a new module into the kernel, execute the modprobe command with the module name.
Example: sudo modprobe vmhgfs ; lsmod | grep vmhgfs

4.  Load New Modules with the Different Name to Avoid Conflicts
Consider that in some cases you need to load a new module, but another module is already loaded under the same name for a different purpose.
To load a module under a different name, use the modprobe option -o as shown below.
- sudo modprobe vmhgfs -o vm_hgfs

5. Remove the Currently Loaded Module
If you’ve loaded a module into the Linux kernel for some testing purpose, you might want to unload (remove) it from the kernel.
- modprobe -r vmhgfs

The ethtool utility is used to view and change the ethernet device parameters (a usage sketch follows this list).
    * Full duplex : Enables sending and receiving of packets at the same time. This mode is used when the ethernet device is connected to a switch.
    * Half duplex : Enables either sending or receiving of packets at a single point of time. This mode is used when the ethernet device is connected to a hub.
    * Auto-negotiation : If enabled, the ethernet device itself decides whether to use full duplex or half duplex based on the network the ethernet device is attached to.
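A usage sketch (assuming an interface named eth0):
    * ethtool eth0 - view the current speed, duplex and auto-negotiation settings
    * ethtool -s eth0 speed 100 duplex full autoneg off - force 100Mb/s full duplex with auto-negotiation off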

### Troubleshooting Using dmesg Command in Unix and Linux ###

During the system bootup process, the kernel gets loaded into memory and controls the entire system. 
When the system boots up, it prints a number of messages on the screen, displaying information about the hardware devices that the kernel detects during the boot process.
These messages are available in the kernel ring buffer, and whenever a new message comes in, the oldest message gets overwritten.
1. View the Boot Messages
By executing the dmesg command, you can view the hardware that was detected during the boot process and its configuration details. There is a lot of useful information displayed in dmesg; just browse through it line by line and try to understand what it means. Once you have an idea of the kind of messages it displays, you might find it helpful for troubleshooting when you encounter an issue.
# dmesg | more

2. View Available System Memory
# dmesg | grep Memory

3. View Ethernet Link Status (UP/DOWN)
# dmesg  | grep eth

4. Change the dmesg Buffer Size in /boot/config- file
Linux allows you to change the default size of the dmesg buffer. The CONFIG_LOG_BUF_SHIFT parameter in the /boot/config-2.6.18-194.el5 file (or similar file on your system) can be changed to modify the dmesg buffer size.
#  grep CONFIG_LOG_BUF_SHIFT  /boot/config-`uname -r`

5. Clear Messages in dmesg Buffer
# dmesg -c

6. dmesg timestamp: Date and Time of Each Boot Message in dmesg
By default the dmesg messages don’t have a timestamp associated with them. However, Linux provides a way to see the date and time for each boot message in dmesg via the /var/log/kern.log file, as shown below.
The klogd service should be enabled and configured properly to log the messages to the /var/log/kern.log file.
# dmesg | grep "L2 cache"
# grep "L2 cache" kern.log.1

### Explore Linux /proc File System (/proc directories, /proc files) ###
Inside the /proc directory, you’ll see two types of content — numbered directories, and system information files.

/proc is not a real file system, it is a virtual file system. For example, if you do ls -l /proc/stat, you’ll notice that it has a size of 0 bytes, but if you do “cat /proc/stat”, you’ll see some content inside the file.

1. /proc Directories with names as numbers
Do an ls -l /proc, and you'll see a lot of directories with just numbers. These numbers represent the process IDs; the files inside each numbered directory correspond to the process with that particular PID.
Following are the important files located under each numbered directory (for each process):
    * cmdline – command line of the command.
    * environ – environment variables.
    * fd – Contains the file descriptors, which are linked to the appropriate files.
    * limits – Contains information about the specific limits for the process.
    * mounts – mount related information
Following are the important links under each numbered directory (for each process):
    * cwd – Link to current working directory of the process.
    * exe – Link to executable of the process.
    * root – Link to the root directory of the process.

2. /proc Files about the system information
Following are some files which are available under /proc, that contains system information such as cpuinfo, meminfo, loadavg.
    * /proc/cpuinfo – information about CPU,
    * /proc/meminfo – information about memory,
    * /proc/loadavg – load average,
    * /proc/partitions – partition related information,
    * /proc/version – linux version
Some Linux commands read the information from these /proc files and display it. For example, the free command reads the memory information from the /proc/meminfo file, formats it, and displays it.
# /proc/cmdline – Kernel command line
# /proc/cpuinfo – Information about the processors.
# /proc/devices – List of device drivers configured into the currently running kernel.
# /proc/dma – Shows which DMA channels are being used at the moment.
# /proc/fb – Frame Buffer devices.
# /proc/filesystems – File systems supported by the kernel.
# /proc/interrupts – Number of interrupts per IRQ on architecture.
# /proc/iomem – This file shows the current map of the system’s memory for its various devices
# /proc/ioports – provides a list of currently registered port regions used for input or output communication with a device
# /proc/loadavg – Contains load average of the system
The first three columns measure CPU utilization for the last 1, 5, and 15 minute periods.
The fourth column shows the number of currently running processes and the total number of processes.
The last column displays the last process ID used.
# /proc/locks – Displays the files currently locked by the kernel
Sample line:
1: POSIX ADVISORY WRITE 14375 08:03:114727 0 EOF
# /proc/meminfo – Current utilization of primary memory on the system
# /proc/misc – This file lists miscellaneous drivers registered on the miscellaneous major device, which is number 10
# /proc/modules – Displays a list of all modules that have been loaded by the system
# /proc/mounts – This file provides a quick list of all mounts in use by the system
# /proc/partitions – Very detailed information on the various partitions currently available to the system
# /proc/pci – Full listing of every PCI device on your system
# /proc/stat – Keeps track of a variety of different statistics about the system since it was last restarted
# /proc/swaps – Measures swap space and its utilization
# /proc/uptime – Contains information about uptime of the system
# /proc/version – Version of the Linux kernel, gcc, name of the Linux flavor installed.


### About ssh ###
X11 Forwarding

You can encrypt X sessions over SSH. Not only is the traffic encrypted, but the DISPLAY environment variable on the remote system is set properly. So, if you are running X on your local computer, your remote X applications magically appear on your local screen.
Turn on X11 forwarding with ssh -X host. You should use X11 forwarding only for remote computers where you trust the administrators. Otherwise, you open yourself up to X11-based attacks.
A nifty trick using X11 forwarding displays images within an xterm window. Run the web browser w3m with the in-line image extension on the remote machine; see the Debian package w3m-img or the RPM w3m-imgdisplay. It uses X11 forwarding to open a borderless window on top of your xterm. If you read your e-mail remotely using SSH and a text-based client, it then is possible to bring up in-line images over the same xterm window. 

Config File
SSH looks for the user config file in ~/.ssh/config. A sample might look like:
ForwardX11 yes
Protocol 2,1

Speeding Things Up: Compression and Ciphers
Use ssh -C, or put "Compression yes" in your config file.

Port Forwarding
Ports are the numbers representing different services on a server, such as port 80 for HTTP and port 110 for POP3. You can find the list of standard port numbers and their services in /etc/services. SSH can transparently translate all traffic from an arbitrary port on your computer to a remote server running SSH. The traffic then can be forwarded by SSH to an arbitrary port on another server. 

Encryption
Many applications use protocols where passwords and data are sent as clear text. These protocols include POP3, IMAP, SMTP and NNTP. SSH can encrypt these connections transparently. Say your e-mail program normally connects to the POP3 port (110) on mail.example.net. Also, say you can't SSH directly to mail.example.net, but you have a shell login at shell.example.net. You can instruct SSH to encrypt traffic from port 9110 (chosen arbitrarily) on your local computer and send it to port 110 on mail.example.net, using the SSH server at shell.example.net:

ssh -L 9110:mail.example.net:110 shell.example.net
That is, send local port 9110 to mail.example.net port 110, over an SSH connection to shell.example.net. 

Tunneled Connections
SSH can act as a bridge through a firewall whether the firewall is protecting your computer, a remote server or both. All you need is an SSH server exposed to the other side of the firewall. For example, many DSL and cable-modem companies forbid sending e-mail from your own machine over port 25 (SMTP).
Our next example is sending mail to your company's SMTP server through your cable-modem connection. In this example, we use a shell account on the SMTP server, which is named mail.example.net. The SSH command is:

ssh -L 9025:mail.example.net:25 mail.example.net

Piping Binary Data to a Remote Shell

Piping works transparently through SSH to remote shells. Consider:
cat myfile | ssh user@desktop lpr
tar -cf - source_dir | \
ssh user@desktop 'cat > dest.tar'
The first example pipes myfile to lpr running on the machine named desktop. The second example creates a tar file and writes it to the terminal (because the tar file name is specified as dash), which is then piped to the machine named desktop and redirected to a file. 

Running Remote Shell Commands
With SSH, you don't need to open an interactive shell if you simply want some output from a remote command, such as:
ssh user@host w
This command runs the command w on host as user and displays the result. It can be used to automate commands, such as:
perl -e 'foreach $i (1 .. 12) \
{print `ssh server$i "w"`}'
Notice the back-ticks around the SSH command. This uses Perl to call SSH 12 times, each time running the command w on a different remote host, server1 through server12. In addition, you need to enter your password each time SSH makes a connection. However, read on for a way to eliminate the password requirement without sacrificing security. 

### Awk and Sed Complete Tour ###

## Bash Scripting ##
sed and awk

sed - stream editor; takes input from standard input and performs search, replace and delete operations
awk - field processor; lets us focus on specific columns

sed and awk features 
mget is used to download files from an FTP server to the local box
bluefish is a simple editor for php, perl, bash, python, html and sql scripts

### FEATURES COMMON TO BOTH AWK & SED ###
1. Both are scripting languages.... Really helpful for automation
2. Both work primarily with text files
3. Both are programmable editors
4. Both accept command-line options and can be scripted (-f script_name)
5. Both GNU versions support POSIX (GREP) and EGREP RegExes
6. Lineage = ed (editor) -> sed -> awk

### SED's FEATURES ### key - search/replace
1. Non-interactive editor
2. stream editor
a. Manipulates input - performing edits as instructed
b. sed accepts input on/from: STDIN (KeyBoard), File, Pipe{|}
3. sed Loops through ALL input lines stream or files
4. Does NOT operate on the source file by default (will NOT clobber the original file, unless instructed to do so)
5. Supports addresses to indicate which lines to operate on: /^$/d - deletes blank lines   (^$ - blank line)
6. Stores the active (current) line in the 'pattern space' and maintains a 'hold space' for later usage
7. sed loops through the text lines

### AWK's FEATURES ### key - reporting, Chopping up the datastreams into multiple columns
1. Field processor based on whitespace, by default
2. Used for reporting (extracting specific columns) from data feed
3. supports programming constructs
a. loops (for,while,do)
b. conditions(if,then,else)
c. arrays(lists)
d. functions(string,numeric, user-defined)
4. Automatically tokenizes words in a line for later usage. $1, $2, $3, etc
5. Automatically loops through input like sed, making lines available for processing
6. Ability to execute shell commands using 'system()' functions

### REGULAR EXPRESSIONS (RegEx) REVIEW ###
Regular Expressions (RegExes) are key to mastering Awk & Sed

## METACHARACTERS#### 
^ - matches the character(s) at the beginning of a line
a. sed -ne '/^dog/p' animal.txt
$ - matches the character(s) at the end of a line
a. sed -ne '/dog$/p' animals.txt - the -n option suppresses automatic printing; in short, only print the lines which match

Match lines which contain only 'dog':
a. sed -ne '/^dog$/p' animals.txt
b. sed -ne '/^dog$/p' - reads from STDIN; press Enter after each line, terminate with CTRL-D
c. cat animals.txt | sed -ne '/^dog$/p'
d. cat animals.txt | sed -ne '/^dog$/Ip' - prints matches case-insensitively

. - matches any character {typically except new line}
a. sed -ne '/^d...$/Ip' animals.txt
b. sed -ne '/^d.../Ip' animals.txt // ... 3 characters

### REGEX QUANTIFIERS ###
* - 0 or more matches of the previous character
+ - 1 or more matches of the previous character
? = 0 or 1 of the previous character

sed -ne '/^d.\+/Ip' animals.txt
Note: Escape Quantifiers in RegExes using the escape character '\'

### CHARACTERS CLASSES ###
Allow you to search for a range of characters
a. [0-9]
b. [a-z][A-Z]

sed -ne '/^d.\+[0-9]/Ip' animals.txt // 'd' at the beginning of the line, 1 or more characters, then a numeric 
Note: Character classes match 1, and only 1, character

### Introduction to SED ###
Usage:
1. sed [options] 'instruction' file | PIPE | STDIN
2. sed -e 'instruction1' -e 'instruction2' ...
3. sed -f script_file_name file
Note: Execute sed by indicating the instruction in one of the following:
1. Command-line
2. Script File

Note: Sed accepts instructions of the form '/pattern_to_match/action'
### Print Specific Lines of a file ###
Note: '-e' is optional if there is only 1 instruction to execute
sed -ne '1p' animals.txt - prints first line of file
sed -ne '2p' animals.txt - prints second line of file 
sed -ne '$p' animals.txt - prints the last line of the file
sed -ne '2,4p' animals.txt - prints lines 2-4 from file
sed -ne '1!p' animals.txt - prints ALL lines EXCEPT line #1
sed -ne '1,4!p' animals.txt - prints ALL lines EXCEPT lines 1 - 4
sed -ne '/dog/p' animals.txt - prints ALL lines containing 'dog' - case-sensitive
sed -ne '/dog/Ip' animals.txt - prints ALL lines containing 'dog' - case-insensitive
sed -ne '/[0-9]/p' animals.txt - prints ALL lines with AT LEAST 1 numeric
sed -ne '/cat/,/deer/p' animals.txt - prints ALL lines beginning with 'cat', ending with 'deer'
sed -ne '/deer/,+2p' animals.txt - prints the lines with 'deer' plus 2 extra lines

## Delete Lines Using Sed Address ###
sed -e '/^$/d' animals.txt - deletes blank lines from file
Note: Drop '-n' to see the new output when deleting
sed -e '1d' animals.txt - deletes the first line from animals.txt
sed -e '1,4d' animals.txt - deletes lines 1-4 from animals.txt
sed -e '1~2d' animals.txt - deletes every 2nd line beginning with line 1 - 1,3,5... (GNU sed's first~step syntax)

### Save Sed's changes using output redirection ###
sed -e '/^$/d' animals.txt > animals2.txt - deletes blank lines from file and creates new output file 'animals2.txt'

### SEARCH & REPLACE USING SED ###
General Usage:
sed -e 's/find/replace/g' animals.txt - replace 'find' with 'replace'
Note: Left Hand Side (LHS) supports literals and RegExes
Note: Right Hand Side (RHS) supports literals and back references

Examples:
sed -e 's/LinuxCBT/UnixCBT/' - Replace 'LinuxCBT' with 'UnixCBT' on STDIN to STDOUT
sed -e 's/LinuxCBT/UnixCBT/I' - replace 'LinuxCBT' with 'UnixCBT' on STDIN to STDOUT (Case-Insensitive)

Note: Replacements occur on the FIRST match, unless 'g' is appended to the s/find/replace/g sequence
sed -e 's/LinuxCBT/UnixCBT/Ig' - replace 'LinuxCBT' with 'UnixCBT' on STDIN to STDOUT (Case-Insensitive & global)

Tasks:
1. Remove ALL blank lines
2. Substitute 'cat', regardless of case, with 'Tiger'
sed -e '/^$/d' -e 's/cat/Tiger/Ig' animals.txt - removes blank lines & substitutes 'cat' with 'Tiger'

Note: Whenever using the '-n' option, you MUST specify the print modifier 'p'
OR: sed -ne '/^$/d; s/cat/tiger/Igp' animals.txt - does the same as above
Note: Simply separate multiple commands with semicolons

## Update source file - Backup source file ###
sed -i.bak -e '/^$/d; s/cat/tiger/Ig' animals.txt - performs as above, but ALSO updates the source file (saving the original as animals.txt.bak)

### Search & Replace (Text Substitution) Continued ###
sed -e '/address/s/find/replace/g' file ( the -e switch will print everything, including the affected lines )
sed -e '/Tiger/s/dog/mutt/g' animals.txt
sed -ne '/Tiger/s/dog/mutt/gp' animals.txt - substitutes 'dog' with 'mutt' where the line contains 'Tiger'
sed -e '/Tiger/s/dog/mutt/gI' animals.txt
sed -e '/^Tiger/s/dog/mutt/gI' animals.txt - Updates lines that begin with 'Tiger'
sed -e '/^Tiger/Is/dog/mutt/gI' animals.txt - updates lines that begin with 'Tiger' (case-insensitive)

## Focus on the Right Hand Side (RHS) of search & replace functions in SED ##
Note: SED reserves a few characters to help with substitutions based on the matched pattern from the LHS
& = The full value of the LHS (pattern matched) OR the values in the pattern space

Task:
Intersperse each line with the word 'Animal'
sed -ne 's/.*/&/p' animals.txt - replaces the matched pattern with itself
sed -ne 's/.*/Animals &/p' animals.txt - Intersperses 'Animals' on each line
sed -ne 's/.*/Animals: &/p' animals.txt - Intersperses 'Animals' on each line

sed -ne 's/.*[0-9]/&/p' animals.txt - animals with at least 1 numeric at the end of the name
sed -ne 's/.*[0-9]\{1\}/&/p' animals.txt - returns animals with only 1 numeric at the end of the name
sed -ne 's/[a-z][0-9]\{4\}$/&/pI' animals.txt - returns animals with 4 numeric values at the end of the name
sed -ne 's/[a-z][0-9]\{1,4\}$/&/pI' animals.txt - returns animals with at least 1, up to 4 numeric values at the end of the name

## Grouping & Backreferences ###
Note: Segment matches into backreferences using escaped parenthesis: \(RegEx\)
sed -ne 's/\(.*\)\([0-9]\)/&/p' animals.txt - This creates 2 variables: \1 and \2
sed -ne 's/\(.*\)\([0-9]\)$/\1/p' animals.txt - This creates 2 variables : \1 & \2 but references \1 
sed -ne 's/\(.*\)\([0-9]\)$/\2/p' animals.txt - This creates 2 variables : \1 & \2 but references \2
sed -ne 's/\(.*\)\([0-9]\)$/\1 \2/p' animals.txt - This creates 2 variables : \1 & \2 but references \1 & \2

### Apply Changes to Multiple Files ###
Sed supports Globbing via wildcards: *, ?
sed -ne 's/\(.*\)\([0-9]\)$/\1 \2/p' animals*txt - This creates 2 variables : \1 & \2 but references \1 & \2 

### Sed Scripts ###
Note: Sed supports scripting, which means, the ability to dump 1 or more instructions into 1 file 
sed -f script_file_name text_file
sed -f animals.sed animals.txt 

Task:
Perform multiple transformations on the animals.txt file
1. /^$/d - Remove blank lines
2. s/dog/frog/Ig - substitutes globally, 'dog' with 'frog' - (case-insensitive)
3. s/tiger/lion/Ig - substitutes globally, 'tiger' with 'lion' - (case-insensitive)
4. s/.*/Animals: &/ - Interspersed 'Animals:'
5. s/animals/mammals/Ig - Replaced 'Animals' with 'mammals'
6. s/\([a-z]*\)\([0-9]*\)/\1/Ip - Strips trailing numeric values from alphas
sed -ne '/^$/d; s/\([a-z]*\)\([0-9]*\)/\1/pI' animals.txt
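Putting the instructions above into a script file, animals.sed might look like this (a sketch assembled from the task list; the 'I' flag requires GNU sed):
# animals.sed - multiple transformations on animals.txt
/^$/d
s/dog/frog/Ig
s/tiger/lion/Ig
s/.*/Animals: &/
s/animals/mammals/Ig
s/\([a-z]*\)\([0-9]*\)/\1/Ig
Run it with: sed -f animals.sed animals.txt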

### Awk - Intro ###
Features:
1. Reporter
2. Field Processor
3. Supports Scripting
4. Programming Constructs
5. Default delimiter is whitespace
6. Supports: Pipes, Files, and STDIN as sources of input
7. Automatically tokenizes processed columns/fields into the variables : $1, $2, $3 ... $n

Usage:
awk '{instructions}' file(s)
awk '/pattern/ { procedure }' file
awk -f script_file file(s)

Tasks:
Note: $0 represents the current record or row
1. Print entire row, one at a time, from an input file (animals.txt)
 a. awk '{ print $0 }' animals.txt
2. Print specific columns from (animals.txt)
 a. awk '{ print $1 }' animals.txt - this prints the 1st column from the file
3. Print Multiple columns from (animals.txt)
 a. awk '{ print $1; print $2; }' animals.txt
 b. awk '{ print $1,$2; }' animals.txt
4. Print columns from lines containing 'deer' using RegEx support
 a. awk '/deer/ { print $0 }' animals.txt
 b. awk '/deer/ { print $1 }' animals.txt

awk '/^[0-9]$/ { print $0 }' animals.txt - only single-character lines: starting with a digit, ending with a digit 
awk '/^[0-9]*$/ { print $0 }' animals.txt - Multiple characters...

Print Columns from lines containing digits
awk '/[0-9]/ { print $0 }' animals.txt

Remove Blank lines with Sed and Pipe output to awk for processing 
sed -e '/^$/d' animals.txt | awk '/^[0-9]*$/ { print $0 }'

Print blank lines 
awk '/^$/ { print }' animals.txt OR
awk '/^$/ { print $0 }' animals.txt

Print ALL lines beginning with the animal 'dog' case-insensitive
awk 'tolower($0) ~ /^dog/ { print }' animals.txt (or, with GNU awk: awk 'BEGIN { IGNORECASE=1 } /^dog/ { print }' animals.txt)

## Delimiters ###
Default delimiter: whitespace (space, tabs )
Use: '-F' to influence the default delimiter

Tasks:
1. Parse /etc/passwd using awk
 a. awk -F: ' { print } ' /etc/passwd
 b. awk -F: ' { print $1, $5 } ' /etc/passwd
2. Support for character classes in setting the default delimiter
 a. awk -F"[:;,\t]"

### Awk Scripts ###
Features:
 1. Ability to organize patterns and procedure into a script file
 2. The patterns/procedures are much neater and easier to read
 3. Less information is placed on the command-line
 4. By default, loops through lines of input from various sources: STDIN, pipe, files
 5. # is the default comment character
 6. Able to perform matches based on specific fields

Awk scripts consists of 3 parts:
 1. Before (denoted using: BEGIN) - Executed prior to FIRST line of input being read
 2. During ( Main Awk Loop) - Focuses on looping through lines of input
 3. After (denoted using: END) - Executed after the LAST line of input has been processed
Note: BEGIN and END components of AWK scripts are OPTIONAL

Tasks:
1. Print to the screen some useful information without reading input (STDIN, Pipe or File)
 a. awk 'BEGIN { print "TEsting awk without input file" }'
2. Set system variable: FS to colon in BEGIN block
 a. awk 'BEGIN { FS = ":" ; print " Testing Awk without input file" }'
 b. awk 'BEGIN { FS = ":"; print FS }'
3. Write a script to extract rows which contain 'deer' from animals.txt using RegEx
 a. awk -f animals.awk animals.txt
more animals.awk
# This script parses document for items containing deer
# Component 1 - BEGIN
BEGIN { print "Begin processing of various records" }
# Component 2 - Main Loop
/deer/ { print }
# Component 3 - End
END { print " process complete " }

4. Parse /etc/passwd
 a. print entire lines - { print }
 b. print specific columns - { print $1, $5}
 c. print specific columns for a specific user - /linuxcbt/ { print $1, $5 }
 d. print specific columns for a specific user matching a given column $1 - $1 ~ /linuxcbt/ { print $1, $5 }
 e. test column #7 for the string 'bash' - $7 ~ /bash/ { print }

### Awk variables ###
Features 3 types of variables:
 1. System - i.e. FS, NR, FILENAME
 2. Scalars - i.e. a = 3
 3. Arrays - i.e. variable_name[n]

Note: Variables do not need to be declared. Awk, automatically registers them in memory
Note: Variables names ARE case-sensitive

System Variables:
 1. FILENAME - name of current input file
 2. FNR - record number within the current input file - used when multiple input files are given
 3. FS - Field separator - defaults to whitespace - can be a single character, including via a RegEx
 4. OFS - output field separator - defaults to whitespace
 5. NF - number of fields in the current record
 6. NR - current record number (in the END section it holds the total number of records read)
 7. RS - record separator - defaults to a newline
 8. ORS - output record separator - defaults to a newline
 9. ARGV - array of command-line arguments - indexed at 0, beginning with $1
 10. ARGC - total  # of command-line arguments
 11. ENVIRON - array of environment variables for the current user

Tasks:
1. Print key system variables
 a. print FILENAME { print anywhere after the BEGIN block }
 b. print NF - number of fields per record
 c. print NR - current record number
 d. print ARGC - returns total number of command-line arguments
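 e. ENVIRON and ARGV can be read directly; a quick check from the command line (a sketch, not from the course notes):
    awk 'BEGIN { print "HOME is " ENVIRON["HOME"]; print "invoked as " ARGV[0] }'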

Scalar Variables:
variable_name = value
age = 50
Note: Set scalars in the BEGIN section; however, they can, if required, be set in the main loop
{ ++age } - increments variables 'age' by 1, for each iteration of the main loop (component 2 of 3)
Set a variable to a string using double quotes:
fullname = "Dean Davis"
Concatenate values by separating them with a space:
fullname = "Dean" " " "Davis"

Array Variables:
 Feature:
 1. List of information

Task:
1. Define an array variable to store various ages
 a. age[0] = 50
2. Use split function to auto-build an array
 a. arrinum = split(string, array, separator)
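 b. A small sketch of split() in action (made-up data):
    echo "dog:cat:deer" | awk '{ n = split($0, animals, ":"); for (i = 1; i <= n; i++) print i, animals[i] }'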

### Operators ###
 Features:
 1. Provides comparison tools for expressions 
 2. Generally 2 types:   
    a. Relational - ==, !=, <, >, <=, >=, ~ (RegEx match), !~ (RegEx does NOT match)
    b. Boolean- || (OR), && (AND), !(NOT) - Combines comparisons
 3. Print something if the current record number is at least 10
Examples:
    NR >= 10 { print "Current Record number is greater than 10 " NR}
Extract records with ONLY 2 fields
    NF == 2 { print }
Test if current record has at LEAST 2 fields and is at least record 5
NF >= 2 && NR >= 5 { print }

### Loops ###
Features:
1. Support for: while, do , and for
    While:
        { i = 1; while (i <= NF) { print $i; ++i } } - prints each field of the current record
    For:   
         for(i=1; i<=10; ++i) print i
    Do - performs the action carried out by while at least once;
     do action while (condition)
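A one-liner sketch of each construct (not from the course notes):
     awk 'BEGIN { for (i = 1; i <= 3; ++i) print "for:", i }'
     awk 'BEGIN { i = 1; while (i <= 3) { print "while:", i; ++i } }'
     awk 'BEGIN { i = 1; do { print "do:", i; ++i } while (i <= 3) }'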
 
### Processing Records with Awk ###
Task:
1. Process multiple delimiters in the same file (across records )
 a. awk -F "{:;}" '{ print }' animals2.txt
 b. awk 'BEGIN { FS="{ ;: }" }; { print $2 }' animals2.txt
 c. awk -f script_name animals2.txt
2. Process multiple delimiters on the same line
 a. Note: Script does NOT change, however, input file DOES
3. Normalize the Output field Separator (OFS)
   BEGIN { OFS=":" }
4. Build animalclasses array from the list of classes in animals.txt
 a. { animalclass[NR] = $2 } - place in main loop - builds the animalclass array
5. Extract Daemon entries from /var/log/messages
 a. extract kernel messages
    a1. awk -f test.awk /var/log/messages
    b1. awk -f ~linuxcbt/test.awk messages | awk '$8 ~ /error/ { print $5,$6,$7,$8,$9 }' (if $8 matches the 'error' keyword)
    Note: It will be helpful in the case of a syslog server which gets messages from multiple servers
    c1. awk -f ~linuxcbt/test.awk messages | awk 'BEGIN { print "Here are the Error messages" }; $8 ~ /error/ { print $5,$6,$7,$8,$9 }; END { print "Process Complete" }'

## Script 
# This script parses document for items containing deer

# component 1 - BEGIN
BEGIN { FS = ":"; age = 50; fullname = "Dean" "Davis"; print "Begin Processing of various records" }

# Component 2 - Main Loop
#$7 ~ /bash/ { print NR $1, $2 NF }
{ print NR, $1, $2, NF }
{ print "Age is : " age}
{ ++age }

#Component 3 -END
END { print " Process complete"
print "Filename:" FILENAME
print "Total Command-line Arguments:" ARGC
}

### Printf Formatting ### New Chapter
Features:
1. Ability to control the width of fields in the Output

Usage:
printf("format", arguments)
Supported Printf Formats include:
1. %c - ASCII Characters
2. %d - Decimals, NOT floating point values or values to the right of the decimal point
3. %f - Floating point
4. %s - Strings
Note: printf does NOT print newline character(s)
This means you'll need to indicate a newline sequence: \n in the "format" section of the printf 
Note: Default output is right-justified. Use '-' to indicate left-justification
General format section:
%[-]width[.precision][cdfs]
width - influences the actual width of the column to be output
precision - influences the number of places to the right of the decimal point
precision - Also influences the number of strings to be printed from a string

EXamples | Tasks:
1. print "Testing printf" from the command-line
 a. awk 'BEGIN { printf("Testing printf\n") }'

2. read 'animals.txt' and format the output with printf
 a. awk 'BEGIN { printf("Here is the output\n") } { printf("%s\t %s\n", $1, $2) }' animals.txt ( print column from the string )

3. Apply width and precision to task #2
 a. awk 'BEGIN { printf ("Here is the output\n")} { printf("%.3s\t%s\n", $1,$2) }' animals.txt ( We can control the precision, e.g. tiger o/p: tig )
 b. awk 'BEGIN { printf ("Here is the output\n")} { printf("%20s\t%20s\n", $1, $2) }' animals.txt ( Columns with 20 exact spaces )
4. Left-justify task #3
 a. awk 'BEGIN { printf("Here is the output\n") } { printf("%-20s\t%-20s\n",$1,$2) }' animals.txt ( left justification )
5. Parse animals_with_price
 a. awk 'BEGIN { printf("Here is the output\n\n")} { printf("%-5s\t%f\n", $1, $2) }' animals_with_prices.txt (%f - decimal point Output looks pretty neat )
   Note: %.2f - only 2 decimals
6. Format using printf animals2.txt
 a. for (i=1; i <= NR; i++)
    printf ("%-12s %1d %-2s %-10s\n", "Animal Class", i, ": ", animalclass[i])
7. Apply upper and lower case formatting to printf values
 a. printf("%-12s %1d %-2s %-10s\n", "ANIMAL CLASS", i, ": ", toupper(animalclass[i]))
 b. printf("%-12s %1d $-2s %-10s\n", "ANIMAL CLASS", i, ": ", tolower(animalclass[i]))
8. Format output from /var/log/messages
 a. Extract date, time, server and daemon columns, include a header
   BEGIN { printf ("%-15s %-15s %-10s\n", "DATE", "Server", "Daemon") }
/kernel/ { printf ("%3s %2s %8s %15s %10s\n", $1,$2,$3,$4,$5) }
END { print "process Complete"}

### Additional Sed and Awk Examples ###
Task:
1. Update PHP web pages to remove 'Free Shipping' wherever it exists
Note: grep 'keyword' file_name ( grep will search for the keyword in file_name )
 a. Code to remove: Shipping:&nbsp;Free
 sed -i.bak -e 's/Shipping:&nbsp;Free//' products_linuxcbt_security_edition.php

 b. Effect the change to ALL product files and create .new output files without clobbering the source files
 for i in `ls -A products_*php`; do sed -e 's/Shipping:&nbsp;Free//' $i > $i.new; done
2. Strip '.new' suffix from newly generated files
 a. echo "products_linuxcbt.php.new" | sed -e 's/\.new//'
 b. for i in `ls -A products_*new | sed -e 's/\.new//'`; do echo $i; done
 c. for i in `ls -A products_*new | sed -e 's/\.new//'`; do mv $i.new $i; done

3. Remove 'Free Shipping' from the faq.php file
 a. code to remove: the '• Free Shipping' list item
 b. sed -e 's/• Free Shipping//' faq.php > faq.php.new

    ### Use Awk & Sed Together to update specific rows in /var/log/messages:
    Task:
     a. Update Month information for kernel messages for September 3
     awk '$1 ~ /Sep/ && $2 ~ /3/ && $5 ~ /kernel/ { print }' /var/log/messages
     b. awk '$1 ~ /Sep/ && $2 ~ /3/ && $5 ~ /kernel/ { print }' /var/log/messages | sed -ne 's/Sep/September/p' 
     c. awk '$1 ~ /Sep/ && $2 ~ /3/ && $5 ~ /kernel/ { ++total; print } END { print "Total Records Updated: " total }' /var/log/messages | sed -ne 's/Sep/September/p' ( In awk we don't have to initialize the variable to zero; awk initializes it to zero by default ) 


    Back up Scripts for Cloud.

    ## Find

    Display the pathnames of all files in the current directory and all subdirectories. The following commands are equivalent:
    #find . -print
    #find -print
    #find .

    This will find any file whose name begins with foo and ends with bar:
    #find . -name foo\*bar

    Example using two search criteria: 
    #find / -type f -mtime -7 | xargs tar -rf weekly_incremental.tar

    Note:
    This will find any regular files (i.e., not directories or other special files) matching the criteria "-type f", and only those modified seven or fewer days ago ("-mtime -7").  Note the use of xargs, a handy utility that converts a stream of input (in this case the output of find) into command-line arguments for the supplied command (in this case tar, used to create a backup archive).

    Another use of xargs is illustrated below.  This command will efficiently remove all files named core from your system (provided you run the command as root of course): 

    #find / -name core | xargs /bin/rm -f
    #find / -name core -exec /bin/rm -f '{}' \; # same thing
    #find / -name core -delete                  # same if using Gnu find

    (The -exec form runs the rm command once per file, so it is not as efficient as the first form; the Gnu -delete action removes the files directly without spawning rm at all.  The first form is safer if rewritten to use "-print0".)

    The following criteria locate files modified less than 10 minutes ago:
    #find / -mmin -10

    To locate a recently downloaded file:
    #find / -cmin -10
    -cmin n = File's status was last changed n minutes ago.
    -mmin n = File's data was last modified n minutes ago.
    -mtime n = File's data was last modified n*24 hours ago.

    Find files with various permissions set ("-perm /permissions").  The following
    will locate files that are writable by "others":
    #find . -perm -o=w

    #find . -mtime 0   # find files modified between now and 1 day ago
                      # (i.e., within the past 24 hours)
    #find . -mtime -1  # find files modified less than 1 day ago
                      # (i.e., within the past 24 hours, as before)
    #find . -mtime 1   # find files modified between 24 and 48 hours ago
    #find . -mtime +1  # find files modified more than 48 hours ago

    #find . -mmin +5 -mmin -10 # find files modified between
                              # 6 and 9 minutes ago

    This says to search the whole system, skipping the directories /proc, /sys, /dev, and /windows-C-Drive (presumably a Windows partition on a dual-booted computer).  The Gnu -noleaf option tells find not to assume all remaining mounted filesystems are Unix file systems (you might have a mounted CD for instance).  The "-o" is the Boolean OR operator, and "!" is the Boolean NOT operator (applies to the following criteria).

    So these criteria say to locate files that are world writable ("-perm -2", same as "-o=w") and NOT symlinks ("! -type l") and NOT sockets ("! -type s") and NOT directories with the sticky (or text) bit set ("! \( -type d -perm -1000 \)").  (Symlinks, sockets and directories with the sticky bit set are often world-writable and generally not suspicious.) 

    #find / -noleaf -wholename '/proc' -prune \
         -o -wholename '/sys' -prune \
         -o -wholename '/dev' -prune \
         -o -wholename '/windows-C-Drive' -prune \
         -o -perm -2 ! -type l  ! -type s \
         ! \( -type d -perm -1000 \) -print

    Using -exec Efficiently: 
    # find whatever... | xargs command
    Two limitations:
    - Firstly, not all commands accept the list of files at the end of the command line.  A good example is cp:
    #find . -name \*.txt | xargs cp /tmp  # This won't work! ( GNU cp's -t option handles this; see the sketch below )
    - Secondly, filenames may contain spaces or newlines, which would confuse the command used with xargs.  (Again, Gnu tools have options for that: "find ... -print0 | xargs -0 ...".)
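
    Both limitations can be worked around with the Gnu options just mentioned; a quick sketch (assuming GNU find, xargs and cp):
    #find . -name '*.txt' -print0 | xargs -0 cp -t /tmp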

    An alternate form of -exec ends with a plus-sign, not a semi-colon.  This form collects the filenames into groups or sets, and runs the command once per set.  (This is exactly what xargs does, to prevent argument lists from becoming too long for the system to handle.)  In this form the {} argument expands to the set of filenames.  For example: 
    find / -name core -exec /bin/rm -f '{}' +

    #find /opt -name '*.txt' -type f -exec sh -c 'exec cp -f "$@" /tmp' find-copy {} \;
    #find /path/to/files* -mtime +5 -exec rm {} \;

    -exec allows you to pass in a command such as rm.  The {} \; at the end is required to terminate the command.
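
    Tying these pieces together, here is a minimal incremental-backup sketch in the spirit of the weekly_incremental.tar example above (the backup directory and the 7-day window are assumptions):

    #!/bin/bash
    # Archive regular files under /home modified in the last 7 days.
    BACKUP_DIR=/tmp/backups
    mkdir -p "$BACKUP_DIR"
    ARCHIVE="$BACKUP_DIR/incremental-$(date +%Y%m%d).tar"
    # -print0/-0 keeps filenames with spaces or newlines intact;
    # tar -rf appends, so multiple xargs batches end up in one archive.
    find /home -type f -mtime -7 -print0 | xargs -0 tar -rf "$ARCHIVE"
    echo "Wrote $ARCHIVE"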



    ## Bash Redirections Using Exec ##

    #!/bin/bash

    echo hello

    # Redirect stdout into a log file if it is currently a terminal.
    if test -t 1; then
        # Stdout is a terminal.
        exec >log
    else
        # Stdout is not a terminal, no logging.
        false
    fi

    echo goodbye
    echo error >&2

    The if statement uses test to see if file descriptor number one is connected to a terminal (1 being stdout). If it is, then the exec command re-opens it for writing on the file named log. The exec command without a command but with redirections executes in the context of the current shell; it's the means by which you can open and close files and duplicate file descriptors. If file descriptor number one is not on a terminal then we don't change anything.

    If you run this script you'll see that the first echo and the last echo are output to the terminal. The first one happens before the redirection and the second one is specifically redirected to stderr (2 being stderr). So, how do you get stderr into the log file also? Just one simple change is required to the exec statement:

    #!/bin/bash

    echo hello

    if test -t 1; then
        # Stdout is a terminal.
        exec >log 2>&1
    else
        # Stdout is not a terminal, no logging.
        false
    fi

    echo goodbye
    echo error >&2

    Here the exec statement re-opens stdout on the log file and then re-opens stderr on the same thing that stdout is opened on (this is how you duplicate file descriptors, aka dup them). Note that order is important here: if you change the order and re-open stderr first (i.e. exec 2>&1 >log), then it will still be on the terminal since you're opening it on the same thing stdout is on and at this point it's still the terminal.
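
    exec can also save and restore descriptors by duplicating them onto a spare fd number, which is handy when the redirection should only be temporary (a minimal sketch):

    #!/bin/bash
    exec 3>&1        # save the current stdout on fd 3
    exec >log        # re-open stdout on the log file
    echo "this line goes into log"
    exec 1>&3 3>&-   # restore stdout and close the spare descriptor
    echo "this line is back on the terminal"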

    ## Regular Expressions in Grep Command with 10 Examples – Part I ##

     Example 1. Beginning of line ( ^ )

    In the grep command, the caret symbol ^ matches the expression at the start of a line. In the following example, it displays all the lines that start with "Nov 10", i.e. all the messages logged on November 10.
    $ grep "^Nov 10" messages.1

    The ^ matches the expression at the beginning of a line only if it is the first character in the regular expression. ^N matches lines beginning with N.

    Example 2. End of the line ( $)
    The character $ matches the expression at the end of a line. The following command will help you get all the lines that end with the word "terminating".

    $ grep "terminating.$" messages
    From the above output you can see when the kernel log was terminated. Just like ^ matches the beginning of the line only if it is the first character, $ matches the end of the line only if it is the last character in the regular expression.

    Example 3. Count of empty lines ( ^$ )
    Using the ^ and $ characters you can count the empty lines in a file. "^$" specifies an empty line.

    $ grep -c  "^$" messages anaconda.log

    Example 4. Single Character (.)
    The special meta-character "." (dot) matches any single character except the end-of-line character.

    For example, if you want to search for a word that has exactly 4 characters, you can use grep -w "....", where each dot represents any single character.

    Example 5. Zero or more occurrence (*)

    The special character "*" matches zero or more occurrences of the previous character. For example, the pattern '1*' matches zero or more '1' characters.

    Example 6. One or more occurrence (\+)

    The special character "\+" matches one or more occurrences of the previous character. " \+" (a space followed by \+) matches one or more space characters.

    If there is no space then it will not match. The character "+" belongs to the extended regular expression syntax, so you have to escape it when you use it with the basic grep command.

    Example 7. Zero or one occurrence (\?)
    The special character "\?" matches zero or one occurrence of the previous character. "0\?" matches a single zero or nothing.
    $ grep "hi \?hello" input

    "hi \?hello" matches "hihello" and "hi hello" (with a single space).
    A line with more than one space between hi and hello does not get matched by the above command.

    Example 8. Escaping the special character (\)

    If you want to search for special characters (for example * or dot) in the content, you have to escape the special character in the regular expression.
    $ grep "127\.0\.0\.1"  /var/log/messages.4

    Example 10. Exception in the character class

    If you want to match all the characters except those in the square brackets, use the ^ (caret) symbol as the first character after the open square bracket. The following example searches for lines that do not start with a vowel in the dictionary words file on Linux.
    $ grep -i  "^[^aeiou]" /usr/share/dict/linux.words


    ## Dmesg ##
    During the system boot process, the kernel is loaded into memory and controls the entire system.
    When the system boots up, it prints a number of messages on the screen with information about the hardware devices the kernel detects during the boot process.

    These messages are kept in the kernel ring buffer, and whenever a new message arrives the oldest one gets overwritten. You can view all those messages after the system boots up using the dmesg command.

    1. View the Boot Messages
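
    A few common invocations (a sketch; the grep keyword is just an illustration):
    #dmesg | less          # page through the full ring buffer
    #dmesg | grep -i usb   # filter the boot messages for one subsystem
    #dmesg -c              # print the buffer, then clear it (root only)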

    ## Ip Validation
    $ cat input
    15.12.141.121
    255.255.255
    255.255.255.255
    256.125.124.124

    $ egrep '\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b' input
    15.12.141.121
    255.255.255.255

    The regular expression above combines several alternatives for one octet. The octet-plus-dot group must occur three times, and the final octet is matched separately.

       1. If it starts with 25, the next digit should be 0 to 5 (250 to 255)
       2. If it starts with 2, the next digit can be 0-4 followed by any digit (200 to 249)
       3. An optional 0 or 1, a digit, then an optional digit (0 to 199)
       4. Then a dot character
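
    Wrapped in a small function, the same pattern can validate a single address inside a script (a sketch; the anchors ^ and $ replace \b so the whole argument must be an IP):

    is_valid_ip() {
        echo "$1" | egrep -q '^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$'
    }
    is_valid_ip "192.168.1.10" && echo valid || echo invalid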

    ## Backup and Restore MySQL Database Using mysqldump ##
    Using mysqldump, you can backup a local database and restore it on a remote database at the same time, using a single command.

    For the impatient, here is a quick snippet showing how to back up and restore a MySQL database using mysqldump:

    backup: # mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

    restore:# mysql -u root -p[root_password] [database_name] < dumpfilename.sql

    How To Backup MySQL database

    1. Backup a single database:

    This example takes a backup of the sugarcrm database and dumps the output to sugarcrm.sql:

    # mysqldump -u root -ptmppassword sugarcrm > sugarcrm.sql

    # mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

    The sugarcrm.sql file will contain drop table, create table and insert commands for all the tables in the sugarcrm database, including the accounts_contacts table.

    2. Backup multiple databases:

    If you want to back up multiple databases, first identify the databases that you want to back up using the show databases command, as shown below:

    # mysql -u root -ptmppassword

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | bugs               |
    | mysql              |
    | sugarcrm           |
    +--------------------+
    4 rows in set (0.00 sec)

    # mysqldump -u root -ptmppassword --databases bugs sugarcrm > bugs_sugarcrm.sql

    3. Backup all the databases:

    The following example takes a backup of all the databases of the MySQL instance.
    # mysqldump -u root -ptmppassword --all-databases > /tmp/all-database.sql

    4. Backup a specific table:
    In this example, we backup only the accounts_contacts table from sugarcrm database.
    # mysqldump -u root -ptmppassword sugarcrm accounts_contacts \
          > /tmp/sugarcrm_accounts_contacts.sql
    5. Different mysqldump group options:

        * --opt is a group option, which is the same as --add-drop-table, --add-locks, --create-options, --quick, --extended-insert, --lock-tables, --set-charset, and --disable-keys. --opt is enabled by default; disable it with --skip-opt.
        * --compact is a group option which gives less verbose output (useful for debugging). It disables structure comments and header/footer constructs, and enables the options --skip-add-drop-table --no-set-names --skip-disable-keys --skip-add-locks

    How To Restore MySQL database

    1. Restore a database

    In this example, to restore the sugarcrm database, execute mysql with < as shown below. When you are restoring dumpfilename.sql on a remote database, make sure to create the sugarcrm database before performing the restore.
    # mysql -u root -ptmppassword

    mysql> create database sugarcrm;
    Query OK, 1 row affected (0.02 sec)

    # mysql -u root -ptmppassword sugarcrm < /tmp/sugarcrm.sql

    # mysql -u root -p[root_password] [database_name] < dumpfilename.sql

    2. Backup a local database and restore to remote server using single command:
    This is a sleek option if you want to keep a read-only database on the remote-server that is a copy of the master database on the local-server. The example below will back up the sugarcrm database on the local-server and restore it as the sugarcrm1 database on the remote-server. Please note that you should first create the sugarcrm1 database on the remote-server before executing the following command.

    [local-server]# mysqldump -u root -ptmppassword sugarcrm | mysql -u root -ptmppassword --host=remote-server -C sugarcrm1
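
    For unattended use, the dump is usually written to a date-stamped file so older backups are not overwritten; a minimal sketch (database name, credentials and backup directory are assumptions):

    #!/bin/bash
    DB=sugarcrm
    BACKUP_DIR=/var/backups/mysql
    mkdir -p "$BACKUP_DIR"
    # One dump per day, named e.g. sugarcrm-20100604.sql
    mysqldump -u root -ptmppassword "$DB" > "$BACKUP_DIR/$DB-$(date +%Y%m%d).sql"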



    ### How to Read a File Line by Line in a Shell script ###
    Method 1: Piped "while-read" loop
    FILENAME=$1
    count=0
    cat "$FILENAME" | while read LINE
    do
           let count++
           echo "$count $LINE"
    done

    echo -e "\nTotal $count Lines read"

    Note: each command in a pipeline runs in a subshell, so the count incremented inside the loop is lost when the loop ends; the final echo here prints 0. Method 2 avoids this problem.

    Method 2: Redirected "while-read" loop
    FILENAME=$1
    count=0
    while read LINE
    do
          let count++
          echo "$count $LINE"

    done < "$FILENAME"
    echo -e "\nTotal $count Lines read"

    ### Process file line by line using awk ###
    awk is a pattern scanning and text processing language. It is useful for manipulating data files and for text retrieval and processing. It is good for manipulating and/or extracting fields (columns) in structured text files.


    FILENAME=$1
    awk '{ kount++; print kount, $0 }
         END { print "\nTotal " kount " lines read" }' "$FILENAME"


    ### Load Content of a File into an Array ###
    You can load the content of a file into an array as follows (but see the note after the output):
    #!/bin/bash
    filecontent=(`cat "logfile"`)

    for t in "${filecontent[@]}"
    do
    echo $t
    done
    echo "Read file content!"

    $ ./loadcontent.sh
    Welcome
    to
    thegeekstuff
    Linux
    Unix
    Read file content!
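
    Note that the command substitution above splits on every whitespace character, not only on newlines. On bash 4 or later, mapfile (also known as readarray) reads exactly one line per element (a sketch, assuming bash >= 4):

    #!/bin/bash
    mapfile -t filecontent < logfile   # -t strips each line's trailing newline
    for t in "${filecontent[@]}"
    do
    echo "$t"
    done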

    ## Initializing an array during declaration ##
    Instead of initializing each element of an array separately, you can declare and initialize an array by specifying the list of elements (separated by white space) within parentheses.
    Syntax:
    declare -a arrayname=(element1 element2 element3)

    ## Print the whole Bash Array ##
    There are different ways to print all the elements of the array. If the index is @ or *, all members of the array are referenced. You can also traverse the array elements and print them using looping statements in bash.

    syntax:
    echo ${Unix[@]}

    ### Length of the Bash Array ###
    We can get the length of an array using the # expansion operator.

    ${#arrayname[@]} gives you the length of the array.
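
    For example (a quick sketch):
    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse')
    echo ${#Unix[@]}    # prints 4, the number of elements
    echo ${#Unix[0]}    # prints 6, the length of the first element, 'Debian'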

    ### Extraction by offset and length for an array ###
    The following example extracts 2 elements starting at position 3 from an array called Unix:

    Unix=('a' 'b' 'c' 'd' 'e' 'f' 'g')
    echo ${Unix[@]:3:2}
    output: d e

    ### Search and Replace in an array elements ###
    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');
    echo ${Unix[@]/Ubuntu/SCO Unix}

    ### Add an element to an existing Bash Array ###
    $cat arraymanip.sh
    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');
    Unix=("${Unix[@]}" "AIX" "HP-UX")
    echo ${Unix[7]}

    $./arraymanip.sh
    AIX

    ### Remove an Element from an Array ###
    unset is used to remove an element from an array. Referencing the element after unset gives an empty string, as if null had been assigned to it; the remaining elements are not renumbered.
    #!/bin/bash
    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');

    unset Unix[3]
    echo ${Unix[3]}

    ### Remove Bash Array Elements using patterns ###
    You can give a pattern in the search condition and store the remaining elements in another array, as shown below.

    $ cat arraymanip.sh
    #!/bin/bash
    declare -a Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora');
    declare -a pattern=( ${Unix[@]/Red*/} )
    echo ${pattern[@]}

    output : Debian Ubuntu Suse Fedora

    ### Copying an Array ###
    Expand the array elements and store them in a new array as shown below:
    #!/bin/bash
    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');
    Linux=("${Unix[@]}")
    echo ${Linux[@]}

    ### Concatenation of two Bash Arrays ###
    Expand the elements of the two arrays and assign them to a new array:

    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');
    Shell=('bash' 'csh' 'jsh' 'rsh' 'ksh' 'rc' 'tcsh');

    UnixShell=("${Unix[@]}" "${Shell[@]}")
    echo ${UnixShell[@]}
    echo ${#UnixShell[@]}

    ### Deleting an Entire Array ###
    unset is used to delete an entire array

    Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');
    unset Unix
