### GNU/Linux materials ###
Operating system network installation procedure
1. NFS - Anaconda communicates with the network installation files over NFS.
process status
ps -ax | grep nfs
service nfs status
kickstart
- start the Kickstart configurator (system-config-kickstart) and open the existing Anaconda file with it
- The default one is anaconda-ks.cfg, which Anaconda creates based on the settings chosen during installation.
%post section
echo "hackers will be punished !" > /etc/motd
-> Kickstart from cdrom and usb
linux ks=floppy
linux ks=hd:sda1:/ks.cfg ksdevice=link
linux ks=hd:sda1:/ks.cfg
linux ks=cdrom:/ks.cfg
=> syslinux.cfg - the file that is read before the kernel loads. It is similar in functionality to lilo.conf. We have modified syslinux.cfg so that it runs "linux ks=floppy" by default if you press Enter at the boot prompt. If you want to do a regular interactive install using the floppy, just type "linux" at the boot prompt.
=> boot.msg contains the text that is displayed at the boot prompt.
=> ks.cfg is the kickstart configuration file.
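A minimal ks.cfg sketch (the NFS server address, password hash and package group below are illustrative, not the course values):
# ks.cfg - minimal example (illustrative values)
install
# network installation source
nfs --server=192.168.1.10 --dir=/install/rhel
lang en_US.UTF-8
keyboard us
# hashed root password
rootpw --iscrypted $1$examplehash
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@base
%post
# the %post section shown above
echo "hackers will be punished !" > /etc/motd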
==> " Server Managements "
LInux boot process
Redhat package manager "you need to know"
packages tend to refresh ... s/w application everything belongs to a package. like cp, mv etc
In addition to package management we will teach you
HOw the system boot what happens at every step of booting
bios will check the various pheripheral in addition to ram is checked. The default pheripherals are functional it will make sure.
The bios handover the responsibility to mbr only 512 bytes very first sector of the hard disk to store the binary of any os to boot the system
small space contains loader (exists a small binary which is a loader ) 'This all happens in no seconds'
Search for mbr the primary harddrive ide or scsi 'we will talk about the features shortly'
1st stage call 2nd stage grub kernel launched reads from /boot partitions in addition to the kernel loads the initrd into ram to prepare the kernal for loading itself into memory as well as to get the access of the depended modules so that we can handle the system over to the init process.
MBR - 512 bytes
1st stage boot loader
2nd stage boot loader is GRUB
GRUB can read ext2/ext3 partitions directly without referencing the MBR - that is the main difference between GRUB and LILO.
GRUB is superior to LILO simply because it can read the partition directly.
GRUB is much more flexible; LILO stores everything in the MBR.
GRUB creates a ramdisk - it takes a section of memory for initrd, which contains the modules required by the kernel, including device drivers. initrd is much like the /proc directory, but it is only used at boot.
If initrd has been deleted, go to rescue mode and recreate it with mkinitrd.
/boot/grub/grub.conf - the splash image can be modified; we can simply download one or make our own.
The kernel is monolithic by default.
A monolithic kernel is a kernel architecture where the entire operating system works in kernel space, alone, in supervisor mode - like UNIX, Linux, BSD, Solaris.
Kernel mode - in this mode, the software has access to all the instructions and every piece of hardware.
User mode - in this mode, the software is restricted, cannot execute some instructions, and is denied access to some hardware (like some areas of main memory, or direct access to the IDE bus).
Kernel space - code running in kernel mode is said to be inside kernel space.
User space - every other program, running in user mode, is said to be in user space.
All the service scripts in /etc/rc.d/init.d are controlled by init.
The different runlevels process kill scripts and start scripts.
When the system starts up, init runs the S-scripts; when the system shuts down, it runs the K-scripts.
"relatively simple and relatively straightforward"
Linux boot process steps in brief.
Boot process
1. BIOS loads, checks peripherals and checks for a boot device
2. MBR - exists on the very 1st sector of the primary hard drive
3. MBR contains the stage 1 loader within its 512 bytes
4. Stage 2 loader is called from the stage 1 loader and loaded into RAM
5. Default stage 2 loader is GRUB (optionally LILO)
6. GRUB is loaded into memory
7. GRUB locates the kernel (vmlinuz-version) in the /boot partition
8. GRUB creates a RAMDISK for initrd (don't remove initrd)
9. GRUB hands off to the kernel
10. The kernel hands off the boot process to /sbin/init
11. init loads daemons and mounts the partitions listed in /etc/fstab
12. The user receives a login screen
GRUB features
1. Provides a pre-OS command environment where additional kernel parameters can be passed, e.g. decreasing usable RAM with mem=512M
2. Can boot OSes located above the 1024th cylinder (528 MB)
3. GRUB can read directly ext2 & ext3 partitions
/boot - vmlinuz-version
/boot/grub/grub.conf
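A sketch of a typical /boot/grub/grub.conf (the kernel version strings and partition numbers are illustrative):
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Enterprise Linux
        # /boot is the first partition of the first disk
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
        # the ramdisk discussed above - don't remove it
        initrd /initrd-2.6.18-8.el5.img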
Various runlevels "in addition"
/etc/inittab
- init is the grandfather of processes.
- init relies on various run levels
- runlevel 0 - 6 ( 7 runlevels)
init is a user-level process, the grandfather of all processes on the system.
mingetty is the process responsible for the virtual consoles.
Red Hat recognises 7 runlevels:
runlevel 0 = halt - proper way to shutdown
runlevel 1 = single user mode = linux single
runlevel 2 = n/a or user - user defined
runlevel 3 = text mode with 6 virtual consoles
runlevel 4 = n/a or user - user defined
runlevel 5 = graphics mode or x
runlevel 6 = reboot
init first runs the rc.sysinit script - housekeeping tasks that have to be accomplished:
- swap process
- filesystem check
- product version
- system clocks
- maps the keyboard
- setting the paths etc
Next, the inittab file - init needs to know the default runlevel.
functions file - sets up the environment and path.
/etc/rc.d/init.d/ directory contains the daemons
There is a logic in the madness.
- The lower the number, the earlier it executes; kudzu is the hardware checker.
/etc/X11/prefdm - preferred display manager
kernel passes control upon boot to /sbin/init
1. /etc/rc.d/rc.sysinit - setup of script environment
2. /etc/inittab - for default runlevel
3. /etc/rc.d/init.d/functions - sets the environment path
4. executes scripts in appropriate runlevel - /etc/rc.d/rc5.d
a. runs K Scripts - Kill scripts
b. runs S scripts - Start scripts
5. execute /etc/rc.local script
6. Starts the mingetty processes defined in the inittab file for the virtual consoles.
7. /etc/X11/prefdm - the preferred display manager starts up the graphical login
Daemon management
file httpd (this tells us what kind of file httpd is)
- redhat-config-services - edit all the services in the different runlevels.
- ntsysv - we can only edit the current runlevel.
- chkconfig - lists the daemons in all the different runlevels.
- In Ubuntu, sysv-rc-conf is the tool to manage the daemons.
eg:- chkconfig --list httpd ; chkconfig --level 5 httpd on/off
/etc/rc.d/init.d all of the daemons reside.
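For example (the daemon and runlevels here are just illustrative):
chkconfig --list httpd          # show httpd in all runlevels
chkconfig --level 35 httpd on   # start httpd in runlevels 3 and 5
service httpd status            # current status of the daemon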
user profile schema
- What exactly happens when a new user is created? Let's examine.
1. /etc/skel influence - it's a global directory
ls -ali in the skel dir. By default the contents of the /etc/skel directory are copied to the user's home directory.
2. .bash_profile inherits from the /etc/profile file, and .bashrc handles the aliases and functions for the specific user.
/etc/profile contains the global path entries like /sbin and /usr/bin, eg:- printenv | grep PATH
alias df='df -h' in .bashrc
User Profile schema
1. /etc/skel - contains initial initialization files
2. /etc/profile - contains global settings that are inherited by everyone - PATH
3. useradd - copies /etc/skel info. to the user's HOME directory
4. userdel -r username removes user and /home/username directory
Quota Management Concepts - production environment
package listing rpm -q package-name
- Quota management essentially looks at inode usage and block usage, based on the uid and gid.
- We can set separate quotas on uid and gid.
- In the fstab file we have to add defaults,usrquota,grpquota
- Soft and hard limits - in other words, bottom and top limits.
- You need to remount the filesystem on which you want quota support (umount and mount the filesystem).
- Processes rely on the / filesystem, so you can't unmount / while they are running.
- quotacheck -cugm / (on whichever filesystem; m = ignore whether the filesystem can be unmounted/mounted, just create the data files)
- quotacheck -vugm / (verify the filesystem)
- The edquota utility works per user, eg:- edquota username (edits the user's quota); quota username (displays the quota)
- quotacheck -vugm /
- repquota / (report quotas)
- quotaoff -vugm / and quotaon -vugm /
1. rpm -q quota
2. modify /etc/fstab - usrquota and/or grpquota
3. umount/mount or reboot system to enable quota support
4. create quota files - quotacheck -cugm /
5. verify quotas - quotacheck -vugm /
6. Assign quotas to users/groups - edquota linux
7. verify user quota - quota linux
8. test quotas - cp files that exceed the quota
9. copy quota settings from one user to another, eg:- edquota -p studentquota newuser
Note : block = 1k
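A sketch of the fstab change plus the command sequence (the filesystem and username are illustrative):
# /etc/fstab - add usrquota,grpquota to the mount options
/dev/sda3   /home   ext3   defaults,usrquota,grpquota   1 2

umount /home && mount /home   # remount so the new options take effect
quotacheck -cugm /home        # create the aquota.user / aquota.group data files
quotaon -vug /home            # switch quotas on
edquota student               # set soft/hard limits for blocks and inodes
repquota /home                # report usage for all users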
In the /proc directory:
cat /proc/partitions
cat /proc/filesystems
*** Howto: Setup a DNS server with bind in Ubuntu ***
Step 1: Install bind9 and dnsutils (sudo apt-get install bind9 dnsutils)
Step 2: Configure the main Bind files. Usually, if you install Bind from the source code, you will have to edit the file named.conf. However, Ubuntu provides you with a pre-configured Bind, so we will edit the local configuration instead:
sudo vi /etc/bind/named.conf.local
This is where we will insert our zones. By the way, a zone is a domain name that is referenced in the DNS server
Insert this in the named.conf.local file:
# This is the zone definition. replace example.com with your domain name
zone "example.com" {
type master;
file "/etc/bind/zones/example.com.db";
};
# This is the zone definition for reverse DNS. replace 0.168.192 with your network address in reverse notation - e.g my network address is 192.168.0
zone "0.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
};
sudo vi /etc/bind/named.conf.options (This is optional)
We need to modify the forwarder. This is the DNS server to which your own DNS will forward the requests he cannot process.
Code:
forwarders {
# Replace the address below with the address of your provider's DNS server
123.123.123.123;
};
Now, let's add the zone definition files (replace example.com with your domain name:
Code:
sudo mkdir /etc/bind/zones
sudo vi /etc/bind/zones/example.com.db
The zone definition file is where we will put all the addresses / machine names that our DNS server will know. You can take the following example:
Code:
; replace example.com with your domain name. Do not forget the . after the domain name!
; Also, replace ns1 with the name of your DNS server
example.com. IN SOA ns1.example.com. admin.example.com. (
; Do not modify the following lines!
2006081401
28800
3600
604800
38400
)
; Replace the following lines as necessary:
; ns1 = DNS server name
; mta = mail server name
; example.com = domain name
example.com. IN NS ns1.example.com.
example.com. IN MX 10 mta.example.com.
; Replace the IP addresses with the right IP addresses.
www IN A 192.168.0.2
mta IN A 192.168.0.3
ns1 IN A 192.168.0.1
Now, let's create the reverse DNS zone file:
Code:
sudo vi /etc/bind/zones/rev.0.168.192.in-addr.arpa
Copy and paste the following text, modify as needed:
Code:
; replace example.com with your domain name, ns1 with your DNS server name.
; The number before IN PTR example.com is the machine address of the DNS server. In my case it's 1, as my IP address is 192.168.0.1.
@ IN SOA ns1.example.com. admin.example.com. (
2006081401;
28800;
604800;
604800;
86400
)
IN NS ns1.example.com.
1 IN PTR example.com.
Ok, now you just need to restart bind:
sudo /etc/init.d/bind9 restart
We can now test the new DNS server...
sudo vi /etc/resolv.conf
# replace example.com with your domain name, and 192.168.0.1 with the address of your new DNS server.
search example.com
nameserver 192.168.0.1
dig example.com
***REDHAT PACKAGE MANAGEMENT***
The RPM management tool is a great tool which keeps track of everything on the system.
RPM facts
1. RPM is free - GPL
2. stores info. about packages in a database. /var/lib/rpm
3. RPM usage requires root access
4. RPM operates in modes - install, upgrade, remove, verify, query.
note: If you want to find which package the cp, mv or rm command belongs to:
rpm -qf /bin/cp (q = query, f = file)
You can query literally any file in Red Hat.
rpm -qa (the entire list of packages will be echoed)
rpm -qa | wc -l (shows the number of packages)
key
cd /usr/share/rhn/
RPM-GPG-KEY (rpm gnu privacy guard)
rhn # rpm --import /usr/share/rhn/RPM-GPG-KEY
rpm -qa gpg-pubkey*
which mozilla , rpm -qf /usr/bin/mozilla
Remote desktop to a Windows 2003 server:
rdesktop -g 550x450 -u gyani <ip>
*** Network Admin ***
PING - Packet InterNet Groper.
It is a tool to diagnose connectivity using the ICMP protocol on an IP-based network.
ICMP is a protocol within the TCP/IP suite (Internet Control Message Protocol).
TCP/IP inherently, by default, supports ICMP packets.
which ping ; rpm -qf /bin/ping
rpm -qf shows which package bundles it.
ping is a true client/server protocol:
we send an echo request and the host sends back an echo reply.
ping is also used for testing DNS resolution - a simple check; see the nameserver setting in /etc/resolv.conf.
eg : ping www.google.com
ttl=49 time=50 ms - the time is important because it equates to the total round trip; it took 50 ms for the server to respond to us.
ping by default sends packets at 1-second intervals.
ICMP echo request (ICMP type 8). ICMP echo reply (ICMP type 0).
ping runs indefinitely on Linux, whereas Windows sends 4 packets by default;
an unbounded ping can flood the host network.
ping -c 3 google.com (c = count, sending 3 packets) "you may be misled"
Diagnosing remote server connectivity
telnet localhost port number eg:- telnet localhost 80
In ping, time=77 ms (total round trip, from our host to the destination host) - pretty fast; the Internet is pretty fast.
64 bytes - the payload is 56 bytes by default; ICMP pads 8 bytes of header information onto it.
There are times when you need to communicate with routers firewall etc.
The way you adjust the ping size:
ping -s 24 google.com (-s = size; 24 + 8 = 32, since the ICMP header pads 8 bytes)
ping -s 24 -c 5 google.com
A large ping size can eventually overrun the buffers of many routers and firewalls, effectively causing an ICMP overflow.
The time between the sequences is 1 sec.
Many hackers, many script kiddies. attempt to flood routers and firewall.
Linux is customizable for every thing we can set the parameter.
We can change the interval from 1 second to less than 1 second:
ping -i .5 -c 4 google.com
It is a good idea to block ICMP traffic to the web server at the router and firewall;
otherwise an attacker could overrun the ICMP buffer.
Isn't this amazing.
ping -i .02 -q localhost (-q = quiet mode; we can send loads of packets in no time)
As a network administrator ,we have to scan our network.
** TRACE ROUTE **
Determines the route from the calling host to the destination.
Traceroute is often used for connectivity problems.
Traceroute calculates the total number of hops between our host and the destination host.
traceroute relies on the icmp protocol.
traceroute google.com (DNS resolution)
traceroute -n google.com ( NO DNS resolution)
Traceroute relies upon ICMP
types 11 & 3:
11 = time exceeded
3 = unreachable
** MTR Utility **
This tool consolidates the functionality of traceroute and ping.
It has its place in any network toolkit.
which mtr ; rpm -qf /usr/sbin/mtr
mtr host (default localhost)
mtr performs path discovery and gathers host-to-router statistics.
The mtr interval can be adjusted: mtr -i .5 www.linux.net
Data moves across the Internet in milliseconds.
ping relies on echo request and echo reply.
traceroute relies on time exceeded.
*** ARP protocol ***
Address resolution protocol.
The ARP protocol allows TCP/IP hosts to resolve layer 3 IP addresses to layer 2 MAC addresses.
We should understand the flow of data through the OSI model.
Whenever 2 hosts communicate, the Address Resolution Protocol facilitates the layer 2 (MAC address) communication.
For example, when I say ping www.redhat.com, it initiates layer 3 communication.
Computers on an Ethernet network generally talk at layer 2, which is the MAC address layer.
Names are translated to IP addresses, and IP addresses are translated to MAC addresses; that happens in the kernel.
arp, arp -e, arp -a (BSD style).
When two computers communicate over TCP/IP they essentially use MAC addresses.
Various network protocols: IPX, AppleTalk.
Whenever machines have to communicate, the MAC address has to be resolved.
The ARP table holds the IP and the MAC addresses; the ARP protocol relies on broadcast.
What happens when we say ping google.com? We send all our packets to the default gateway; the default gw/router handles the communication for us.
We can remove entries from the ARP table:
arp -d <ip>
ARP is handled by the Linux kernel by default; IP addresses are software-based whereas MAC addresses are hardware-based.
It's a protocol that works pretty much by itself.
*** Netstat ****
The netstat utility displays the sockets on our machine, including listening sockets.
In general interested in tcp/udp connections
netstat -ltu
It tells the various tcp/udp connections that are listening to.
netstat -r
It tells the various routes established to this particular machine.
In SSH the key stored in ~/.ssh/known_hosts is the host's public key; anyone can get the public key.
Ports below 1024 are well-known ports.
https - 443, smtp - 25. ssh - 22
*** ifconfig***
Creating a sub-interface:
ifconfig eth0:0 192.168.0.1
Bring the sub-interface down -- ifconfig eth0:0 down
*** ROUTE***
Routes that direct network traffic to and from subnets and various destinations are maintained in a simple table.
cat /etc/sysconfig/network
Static routes (a route can be for a specific host or for an entire subnet):
route add -host <destination-ip> gw <gateway-ip>
route del -host <destination-ip>
Route for an entire subnet:
route add -net <network> netmask 255.0.0.0 gw <gateway-ip>
route add -host <destination-ip> dev eth1 (eth1 will talk to the default gateway)
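A concrete sketch of the same commands with illustrative addresses:
route add -host 192.168.2.50 gw 192.168.1.1                  # route to a single host via a gateway
route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.1.1     # route to an entire network
route del -host 192.168.2.50                                 # remove the host route again
route -n                                                     # show the routing table numerically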
** Name resolution - in this section we talk about name resolution.
Generally the tools are.
nslookup www.redhat.com
nslookup shows the IP of the DNS server that resolved the query, along with the IP address of www.redhat.com.
DNS uses port 53.
Non-authoritative answer means cached (the result did not come from an authoritative public DNS server; it was served from the local server's cache).
www.linux.com
12.110.139.108
This was actually cached by our local DNS server.
dig www.redhat.com
dig lists the time-to-live per record or per domain, in seconds.
dig returns the authoritative nameserver IPs from the public servers.
dig gives a lot more information, including whether the record is authoritative or non-authoritative.
If you run dig without any arguments, it returns the various root-level nameservers of the Internet.
dig @ns1.gowebing.com www.linux.net
By default the server that dig queries is our local DNS server.
By default, when you run dig it reads the contents of /etc/resolv.conf.
By default, if you run dig with no arguments it returns all the root DNS servers.
dig supports multiple queries on the command line, eg:- dig example.com yahoo.com
dig also reads from file
dig google.com MX (please give me the MX record)
dig is much more flexible.
Find an MX record: dig <domain> MX (or dig -t MX <domain>)
- Miscellaneous utility
w - by default, w displays who is logged onto the system.
pts connections are pseudo-terminal (shell) instances.
last
/etc/issue file - pre-login banner; escapes such as \r (kernel version) and \m (platform) are expanded. This is what telnet users see before login.
/etc/motd file - displayed after a successful login (including over SSH).
DHCPD SERVER (Dynamic Host Configuration Protocol)
- It allows clients to automatically configure their TCP/IP settings:
ip address, subnet mask, dns server, wins server, time server.
You need to install the dhcp server package.
cd /usr/share/doc/dhcp* -- contains a sample dhcpd.conf file.
cp dhcpd.conf.sample /etc/dhcpd.conf
The dhcpd daemon makes use of dhcpd.conf as well as /var/lib/dhcp/dhcpd.leases.
dhcpd.leases is the database for the dhcpd daemon; whenever it leases an IP to a client it appends the information to this file.
we can have multiple subnets and multiple scopes.
we can setup reservation
You need to pay attention to the options in the dhcpd.conf file; a sample subnet declaration is sketched below.
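A sketch of such a subnet declaration (all addresses and options here are illustrative):
subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;          # pool of addresses to lease
        option routers 192.168.1.1;                 # default gateway
        option domain-name-servers 192.168.1.10;    # DNS server handed to clients
        default-lease-time 21600;
        max-lease-time 43200;
        # a reservation, tied to a client's MAC address
        host printer1 {
                hardware ethernet 00:16:3e:aa:bb:cc;
                fixed-address 192.168.1.50;
        }
}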
/etc/sysconfig/dhcpd --- we can set daemon arguments here.
/etc/sysconfig/dhcrelay -- various interfaces and dhcp servers.
Routers by default don't forward broadcasts, but DHCP works on broadcast. In order to serve machines across subnets,
the DHCP relay function is necessary.
The relay is simply a way to move the broadcast traffic across routers.
/etc/init.d/dhcrelay restart
dhcrelay exists purely to carry DHCP requests across router boundaries.
*********** Xinetd (super server) **********
It is the latest incarnation of the inetd super server.
Super server - any daemon that controls the functionality of other daemons. In particular, xinetd is responsible for network-related services such as imap and telnet.
Tcp-wrappers
This is the means by which we can restrict machines or domains and prevent connectivity to various services.
/etc/hosts.allow,/etc/hosts.deny.
/etc/xinetd.d/* - small services such as rsync, imap, etc.
Telnet passes data in plain text.
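For example (the service and addresses are illustrative):
# /etc/hosts.allow - permit telnet only from the local subnet
in.telnetd : 192.168.1.
# /etc/hosts.deny - deny everything not explicitly allowed
ALL : ALL
# /etc/xinetd.d/telnet - enable the service under xinetd
service telnet
{
        disable      = no
        socket_type  = stream
        wait         = no
        user         = root
        server       = /usr/sbin/in.telnetd
}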
##### logical volume management #######
LVM creates volume sets: regardless of the storage type (IDE, SCSI, SATA and so on), the space is combined into one contiguous space that is made available.
Features:
1. Ability to create Volume sets and stripe sets
2. LVM masks the underlying physical technology (ATA,ATAPI,ide, scsi, sata, pata etc )
e.g. partitions from 3 hard disks can be combined and presented as one single accessible device.
3. LVM represents storage using a hierarchy:
a. Volume groups
a1. Physical volumes (/dev/sda2, /dev/sdb2)
b. Logical Volumes
b1. File systems (laid on top of the logical volumes)
4. LVM physical volumes can be of various sizes
5. Ability to resize volumes on the fly
Note: volume groups join: Physical volumes (PVs) and logical Volumes (LVs)
Steps to set up LVM:
1. Create LVM partitions via fdisk or parted
a. fdisk /dev/sda
b. n
c. p
d. +10G
e. t - change to type '8e' (LVM)
f. w
g. partprobe /dev/sda
2. Create Physical Volumes using 'pvcreate', eg: pvcreate /dev/sda1 /dev/sda2, after that "pvdisplay"
3. Create Volume Groups using 'vgcreate', eg: vgcreate volgroup001 /dev/sda3 /dev/sda2, after that "vgdisplay"
note: Volume groups can be segmented into multiple logical volumes
4. Create one or more Logical Volumes
lvcreate -L 10GB -n logvolvar1 volgroup001
lvdisplay
5. Create File system on logical volumes(s)
mke2fs -j /dev/volgroup001/logvolvar1
6. Mount logical volumes
mkdir /var1
mount /dev/volgroup001/logvolvar1 /var1
Note: Be sure to add to the fstab file, So that the volumes are mounted when the system reboots
3-tiers of LVM display commands include:
a. pvdisplay - physical volumes
b. vgdisplay - volume groups
c. lvdisplay - logical volumes - file systems - mount here
Note : ls -ltr /dev/mapper ( Different Logical volume mapped )
After renaming a logical volume the change is reflected immediately in the device-mapper tree, but it only takes effect for the mount point after remounting.
vim /etc/fstab - entry format:
<device-name>   <mount-point>   <filesystem>   defaults   0 0
Rename of Logical Volume:
1. lvrename volume_group_name old_name new_name - used to rename volumes
Task: Rename 'logvolvar1' to 'logvolopt1'
a. lvrename volgroup001 logvolvar1 logvolopt1
Note: LVM is updated immediately, even while volume is mounted
However, you must remount the logical volume to see the changes
b. umount /var1 && mount /dev/mapper/volgroup001-logvolopt1 /opt1
c. Update /etc/fstab
mount -a (remounts the file systems listed in the fstab entries)
Remove Logical Volume:
Task: Remove 'logvolusr1' from the logical volume pool
a. umount /usr1
b. lvremove /dev/mapper/volgroup001-logvolusr1
c. use 'lvdisplay' to confirm removal
Resize Logical Volume:
Task: Grow (Resize) 'logvolopt1' to 20GB
a. lvresize -L 20GB /dev/volgroup001/logvolopt1
b. lvdisplay - to confirm new size of logical volume
c. df -h - will still reveal the current size
d. Resize the file system to update the INODE table on the logical volume to account for the new storage in 'logvolopt1'
'resize2fs -f -p /dev/volgroup001/logvolopt1'
Note: you may resize file systems online if the following are met:
1. 2.6.x kernel series
2. Must be formatted with ext3
Task: Shrink (resize) 'logvolopt1' to 15GB
a. lvresize -L 15GB /dev/volgroup001/logvolopt1
b. lvdisplay
c. df -h
d. resize2fs -f -p /dev/volgroup001/logvolopt1
Note: online shrinking is not supported
e. df -h
Note: Check disk utilization prior to shrinking to reduce the risk of losing data
*************** VSFTPD SERVICE ***********
Files: vsftpd.conf, vsftpd.ftpusers, vsftpd.user_list
vsftpd.ftpusers - users that are not allowed to log in via FTP, eg: root, because FTP transmits clear-text data across the network; intruders can sniff it and easily compromise the system.
Allows anonymous access as well as the ftp account.
Anonymous users are redirected to /var/ftp/pub; ftp is a non-privileged user.
- All sets of features , we will do the following.
netstat -ant | grep 21 (to check whether it is listening)
- Connectivity
ftp localhost
anonymous
lcd (shows the local working directory)
lcd ~ (change the local directory)
!ls (list the local contents)
Let's upload a file as follows:
mput anac*
By default anonymous user can download but can't upload.
We can also log in as typical system users, eg: linux
pwd
Normal users are sent to the home directories by default whereas the anonymous users are sent to the special directory for security purposes.
- Configuration
The vsftpd.conf file drives the entire process (options):
-- allow anonymous
-- local users
-- umask 022 - applied to files created on the FTP server
-- anonymous user upload
-- log file default location: /var/log/vsftpd.log
-- banner string
-- TCP wrappers - before login, the system checks hosts.allow and hosts.deny
We can control the ftpd daemon via xinetd; any daemon controlled by xinetd gains added security.
Put a soft link of vsftpd.conf at /etc/vsftpd.conf, because by default the configuration file is looked for in the /etc directory.
It takes a little while, then all of a sudden...
If we want to run ftp under the xinetd daemon (super server):
copy the vsftpd xinetd sample file to /etc/xinetd.d/vsftpd
1. edit the file and set disable = no
2. create a soft link of vsftpd.conf in /etc
comment out listen=YES
3. restart xinetd.
When vsftpd runs under xinetd, we have to comment out the "listen=YES" line.
It is an option to give xinetd control over the FTP server.
anon_max_rate=10000 (bytes per second; restricts the anonymous bandwidth)
ftp://localhost (By default anonymous)
ftp://username@localhost
We can also put a constraint on the download speed for local users:
local_max_rate=15000 (bytes/sec)
deny_email_enabled=yes
touch vsftpd.banned_emails (default file to search)
Specify the emails in this file eg: 1@2.com
change the port
listen_port=550
max_clients=2 (often used in a production environment) == demo
Only two users can connect simultaneously at any given time.
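A sketch of the vsftpd.conf options discussed above (the values are illustrative):
anonymous_enable=YES
local_enable=YES
write_enable=YES
# permissions mask for files created on the server
local_umask=022
# anonymous users may download but not upload
anon_upload_enable=NO
xferlog_file=/var/log/vsftpd.log
ftpd_banner=Welcome to this FTP server.
# consult hosts.allow / hosts.deny before letting a client in
tcp_wrappers=YES
anon_max_rate=10000
local_max_rate=15000
max_clients=2
listen_port=550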
### SAMBA ###
File sharing and printer sharing with Windows domains.
Samba makes use of the SMB protocol (Server Message Block protocol).
Used in a heterogeneous environment - in general Windows, UNIX, Linux.
Samba bridges the heterogeneity gap.
SWAT is the Samba Web Administration Tool. Use SWAT to configure your Samba server.
Edit the /etc/xinetd.d/swat file == change disable to "no"
http://localhost:901
Log in with the root username and password.
This is the main administration section
Main file /etc/samba/smb.conf -- this is the main file that drives Samba.
WINS servers provide Windows name resolution - resolving names across subnets.
nmbd - handles names, NetBIOS
smbd - the master daemon for file and printer sharing.
connection to windows machines
rdesktop -g 540x450 ip
run - explorer, entire network, microsoft w n.w, samba server
It's not specific to Red Hat.
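A minimal smb.conf share sketch (the workgroup, path and user are illustrative):
[global]
        workgroup = MYGROUP
        server string = Samba Server
        # require a valid samba username/password
        security = user
        # act as a WINS server for name resolution across subnets
        wins support = yes
[shared]
        comment = Shared documents
        path = /srv/samba/shared
        browseable = yes
        writable = yes
        valid users = gyani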
*** NFS ***
Developed by Sun Microsystems in the 1980s.
Seamless and transparent file sharing between UNIX boxes.
An NFS server can export directories and a client can simply mount them, similar to connecting to a Windows share.
Export file entry: ip or *, plus options such as (sync) - "that would suffice".
eg: /root/temp 192.168.1.0/24(rw,sync) 192.168.1.100(ro)
Approximately 5 daemons are associated with NFS.
eg: ps aux | grep nfs
connect to the nfs share
mount <ip>:/path /destination
df -h (human readable format)
(rw,sync,no_root_squash)
By default (root_squash), a connecting root user is treated as an unprivileged user; no_root_squash disables that.
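A sketch of the export plus mounting it from a client (the server address 192.168.1.2 is illustrative; the export line follows the example above):
# /etc/exports on the server
/root/temp   192.168.1.0/24(rw,sync)   192.168.1.100(ro)
exportfs -ra                   # re-export after editing /etc/exports
showmount -e 192.168.1.2       # from a client: list what the server exports
mount 192.168.1.2:/root/temp /mnt/temp
df -h                          # the NFS mount shows up like any other filesystem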
*** Automated jobs and tasks ***
The cron daemon checks for jobs every minute.
crontab entry
crontab -e
30 * * * * /root/tesh.sh
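The five fields are minute, hour, day of month, month and day of week - for example (the script paths are illustrative):
# min  hour  dom  mon  dow   command
30     *     *    *    *     /root/test.sh      # minute 30 of every hour
0      2     *    *    0     /root/backup.sh    # 02:00 every Sunday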
We can also create cron.allow and cron.deny (access control in the spirit of TCP wrappers).
at now
at> script
at.allow and at.deny (tcp-wrappers)
*** BIND ***
The predominant naming system (Berkeley Internet Name Domain).
The daemon was developed in the 1980s.
BIND utilities such as nslookup and dig.
redhat-config-bind (Graphical tool to configure)
Bind daemon functions in 3 Primary modes
1. Caching server or recursive server (By default)
By default, once we add a BIND server without adding any zone files, it runs as a caching-only server.
The BIND server resolves queries on behalf of clients; the clients find it via their resolv.conf file.
2. You can set up the BIND server as a primary or secondary DNS server.
3. In addition, a BIND server can run in all three modes at once, or dedicated servers can each run an individual function.
A BIND server will fulfil queries in caching-only mode.
The main configuration file that drives the BIND server is named.conf.
named listens on port 53.
A non-authoritative answer means the result came from cache (caching-only mode).
simply examine the localhost.zone
PTR (reverse record); A and NS (forward records).
@ indicates the zone origin (here localhost). TTL (time to live) is in seconds.
Every time you make changes, change the serial number.
If the serial number is not changed, the assumption is that the zone file has not changed.
IN - internet
@ IN A 192.168.0.2 , www IN A IP
A slave zone (like a backup domain controller) provides redundancy.
allow-transfer in the named.conf file, eg: allow-transfer { ip; };
netstat -ant | grep 53
Zone file differences between the primary and the secondary:
IN NS ns1. - primary
IN NS ns2. - secondary
ns1 IN A 192.168.1.2
ns2 IN A 10.0.0.1
On the master server, add the line below to the zone definition in "named.rfc1912.zones":
allow-transfer { 172.24.24.1 ;};
A and CNAME (alias) records are only meant for the forward zone file.
slave zone file
zone "linux.internal " {
type slave;
file "linux.internal";
masters { 192.168.1.2; }; //primary Dns server IP // At the same time forward zone file of the primary dns server will be updated with ns1 and ns2.
It is possible to have a master zone file on the primary DNS server and slave zone files on secondaries for redundancy.
The linux.internal file on the slave will be replicated from the primary's zone file.
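A sketch of the matching master-side zone declaration (the slave's IP follows the allow-transfer example above):
// on the master, in named.conf / named.rfc1912.zones
zone "linux.internal" {
        type master;
        file "linux.internal";
        // IP of the secondary server that may pull zone transfers
        allow-transfer { 172.24.24.1; };
};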
dig @ip or @localhost www.linux.internal
note:
The zone file should be owned by named (group or user )
***Reverse zone***
dig -x ip
modify /etc/named.conf
Ignore the host octet (for 192.168.1.x the reverse zone is 1.168.192.in-addr.arpa).
zone "1.168.192.in-addr.arpa" {
type master;
file "1.168.192.in-addr.arpa";
coping the sample reverse zone file
@ IN SOA ns1.linux.internal. root.linux.internal (
Secondary reverse zone.
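A sketch of a complete reverse zone file for 192.168.1.0/24 (the serial and hostnames follow the forward-zone examples; values are illustrative):
$TTL 86400
@   IN SOA ns1.linux.internal. root.linux.internal. (
        2006081401 ; serial - increment on every change
        28800      ; refresh
        3600       ; retry
        604800     ; expire
        86400 )    ; minimum TTL
    IN NS  ns1.linux.internal.
2   IN PTR ns1.linux.internal.   ; 192.168.1.2 -> ns1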
DNS is very important for the mail servers.
### apache web server ####
Features:
1. www web server
2. Modular (allowing flexibility and variety in use)
tasks
rpm -ql httpd - list all the files
/etc/httpd - top level configuration container on RH5
/etc/httpd/conf - primary configuration directory
/etc/httpd/conf/httpd.conf - primary apache configuration file
/etc/httpd/conf.d - drop-in configuration directory, read by apache upon startup
conf.d - in a Linux environment, a ".d" directory means multiple drop-in files can be kept there and read by Apache,
similar to xinetd.d
The magic file defines the MIME types - the type of file the server has to serve to the client.
modules
/usr/lib/httpd/modules
ls -ltr | wc -l
Explore the /etc/httpd/conf/httpd.conf file
httpd.conf - modules to load, virtual host to support
ServerTokens OS - controls what information is published to clients (in particular when a resource is not available); the server responds
with the Apache version and OS details.
ServerRoot "/etc/httpd" - since Apache runs multiple instances...
MaxKeepAliveRequests 100 (max 100 requests per persistent connection)
apache starts in two modes
- prefork mode, also known as classic mode - starts 8 servers
- multithreaded mode - starts 2 servers, but each server can run multiple threads
Listen - governs the port number
DSO - Dynamic Shared Objects
LoadModule <name-of-the-module> <path-to-the-module>
Not all modules are loaded, but the common ones are.
When Apache starts it reads the httpd.conf file.
Any file in the conf.d directory with the *.conf extension will be included by Apache.
a. httpd runs as apache:apache, whereas on other OSes it may run as www:www
b. Apache maintains, always a 'main' server, which is independent of virtual hosts. This server is a catch-all for traffic that doesn't match any of the defined virtual hosts.
ServerAdmin root@localhost (any email address within the domain)
DocumentRoot -- where in the filesystem the default web pages for Apache live
The <Directory> directive allows us to apply rules on a per-directory basis
c. The <Directory> directive governs file system access.
Note: The primary apache process runs as 'root', and has access to the full file system. However, the <Directory> directive restricts the web-user's view of the file system.
AllowOverride None - if an .htaccess file is present it is simply ignored.
mod_userdir directive:
UserDir disable - users will be unable to publish content from their home directories (just uncomment that line)
The mime.types file defines the MIME types for many file extensions.
d. Test access to '.ht*' files from web root
e. ErrorLog logs/error_log - default error log file for ALL hosts
f. logs/access_log - default log file for default server
Note: Every directory outside of the 'DocumentRoot' should have at least one <Directory> directive defined.
start apache and continue to explore
ps -ef | grep httpd
note: parent apache runs as 'root' and can see the entire file system
note: However, children processes run as 'apache' and can only see files/directories that 'apache:apache' can see
4. Create an Alias for content outside of the web root (/var/www/html)
a. Alias /testalias /var/www/testalias (this snippet has to be added at the end of the httpd.conf file)
<Directory /var/www/testalias>
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
The default Apache test page is served from /var/www/error/noindex.html when there is no index.html.
Documents must be readable by the apache user or the apache group.
If the index.html file does not exist, the noindex.html page is fetched.
4. Create an Alias for content outside of the web root (/var/www/html )
a. /var/www/testalias1/
exit status of that particular process "echo $?"
Log files are very important:
/var/log/httpd/error_log ( error for the apache )
/var/log/httpd/access_log (attempts to access the server )
5. Ensure that Apache will start when the system boots
a. chkconfig --level 35 httpd on && chkconfig --list httpd
###VIRTUAL HOSTS CONFiGURATION:###
Features:
1. Ability to share/serve content based on 1 or more IP addresses
2. Supports 2 modes of Virtual Hosts
a. IP Based - one site per IP address
b. Host header names - multiple sites per IP address
Note: every virtual host has a document root
Tasks:
1. Create IP-based Virtual hosts
a. ifconfig eth0:1 192.168.75.210 ( logical address )
b. Configure the Virtual Host (the container tags are shown explicitly here):
<VirtualHost 192.168.75.210:80>
ServerAdmin webmaster@linuxserv4.linux.internal
ServerName site1.linux.internal (this has to be updated in the DNS to resolve the IP)
DocumentRoot /var/www/site1
<Directory /var/www/site1>
# governs filesystem access rules for the users
Order allow,deny
Allow from all
</Directory>
CustomLog logs/site1.linux.internal.access.log combined
ErrorLog logs/site1.linux.internal.error.log
</VirtualHost>
c. Create /var/www/site1 and content (index.html)
d. Update: /etc/httpd/conf/httpd.conf with vhost information
e. restart the apache web server
list of modules available -> "httpd -M"
list of modules compiled -> "httpd -l"
2. Create Name-based Virtual Hosts using the primary IP address
The name will have to resolve, either via the DNS server or the local hosts file.
a. NameVirtualHost 192.168.75.199:80
<VirtualHost 192.168.75.199:80>
ServerAdmin webmaster@linuxserv4.linux.internal
ServerName site3.linux.internal (this has to be updated in the DNS to resolve the IP)
DocumentRoot /var/www/site3
<Directory /var/www/site3>
# governs filesystem access rules for the users
Order allow,deny
Allow from all
</Directory>
CustomLog logs/site3.linux.internal.access.log combined
ErrorLog logs/site3.linux.internal.error.log
</VirtualHost>
httpd -S (to see the default server and the virtual host configuration)
Include a named virtual host for site3.example.com or site4.example.com.
In the DNS zone file:
site3 IN A 192.168.75.199 ; DNS entry - it should resolve with the dig utility
site4 IN A 192.168.75.199
## Example Virtual Host ##
###Apache with ssl support ###
Features:
1. Secure/Encrypted communications
Requirements:
1. httpd
2. openssl (secure socket layer library )
3. mod_ssl (which provides the hook with the openssl tool kit)
4. crypto-utils (genkey - used to generate certificates/private keys/CSRs (Certificate Signing Requests))
a. also used to create a self-signed certificate
rpm -ql mod_ssl ( It provides the hook the apache needs )
mod_ssl is the plugin module for the apache, which provides the ssl support, gives /etc/httpd/conf.d/ssl.conf - includes key ssl directives
crypto-utils - provides /usr/bin/genkey
ssl.conf resides in the conf.d directory
LoadModule and Listen 443, the certificate/key paths, SSLEngine on
2. Generate ssl usage keys using: genkey
a. genkey site1.linux.internal (FQDN is important)
CA - certificate Authority
We are not encrypting the key (an encrypted key is for high-security environments).
- The key and certificate will be generated under /etc/pki/tls (private/ and certs/).
change in the ssl.conf file
SSLCertificateKeyFile /etc/pki/tls/private/site1.linux.internal.key (private key file)
SSLCertificateFile /etc/pki/tls/certs/site1.linux.internal.cert (public certificate)
3. Update /etc/httpd/conf.d/ssl.conf to reference the new keys (public/private)
4. Restart the HTTPD server
a. service httpd restart
b. httpd -S
5.Test HTTPS connectivity
a. https://<IP> (update the DNS) or use the hosts file; syntax: IP FQDN
Note: For multiple SSL sites, copy the /etc/httpd/conf.d/ssl.conf file to distinct files that match your distinct IP-based VHosts.
## Apache authentication for the virtual hosts ##
***** Apache Access Control *******
Access to the server can prompt for user authentication -
a prompt for the password.
We need to create a password file; it stores passwords as either crypt or MD5 hashes.
In order to facilitate Apache security/authentication, the requirements are:
1. password file - htpasswd
2. make reference to password file via:
.htaccess
Directory directive
Navigate to /etc/httpd/conf and store the password file within this directory.
The password file should not be accessible to just anybody.
httpd supports two types of authentication:
1. basic [the credentials are passed in clear text, something like telnet]
2. digest [only a hash of the credentials is passed across the network - more secure]
"AuthType Basic/Digest" - syntax
STEP1:
AuthType Basic
AuthName "securityrealm1"
AuthUserFile conf/securityrealm1
Require user gyani
STEP2:
htpasswd -c securityrealm1 gyani
-c - create the file (omit it to append to an existing file)
syntax : filename,username
htpasswd -m securityrealm1 xyz
change the permissions of the file with the password
chmod 644 filename
"Keep in mind that the password file will be placed in the conf directory(redhat)"
-- reason for the .htaccess approach
* Every hit on that part of the site makes Apache re-read the .htaccess file for that directory, whereas httpd.conf stays in RAM once loaded.
* It is simply about security permissions...
Another way of doing the apache authentication...
httpd.conf
AllowOverride AuthConfig
Create a file in /var/www/linux.external:
touch .htaccess
Add the contents as below:
AuthType Basic
AuthName "secuirtyrealm1"
AuthUserFile /etc/httpd/conf/securityrealm1
Require user gyani
reload the apache...
*** DIGEST AUTHENTICATION **
Step 1 : Change the following in the .htaccess file
AuthType Digest
AuthName "securityrealm2"
AuthDigestFile /etc/httpd/conf/securityrealm2
Require user gyani
Step 2:
Run htdigest command to create the digest file
htdigest -c /etc/httpd/conf/securityrealm2 securityrealm2 gyani
The digest text is much longer than basic; in addition, the credentials are not passed in clear text.
digest transmits the hash value...
Note that the .htaccess file is read on every request; keeping all the attributes in httpd.conf instead avoids that extra disk I/O and can keep
the website faster, at the cost of needing a reload whenever the configuration changes.
** Group of users in authentication
cd /etc/httpd/conf
touch group1
add the following contents...
Group1: gyani tricia
conf#htpasswd securityrealm1 tricia
step 3
cd /var/www/linux.internal
vim .htaccess
Add the following
AuthType Basic
AuthName "securityrealm1"
AuthUserFile /etc/httpd/conf/securityrealm1
AuthGroupFile /etc/httpd/conf/group1
Require group Group1
The Apache web server provides a built-in means to protect any directory in your web server with access restrictions. The passwords are stored in an encrypted file. Keep in mind, however, that unless you set up your site to use Secure Socket Layer (SSL) encryption, usernames and passwords will be passed from the client to the server in clear text. It is therefore highly recommended that if you are restricting access to certain areas of your website that you also use SSL encryption for authenticating users.
Learn how to use Apache's .htaccess files to protect pages on your site with a username and password
To add password protection to your pages, you need to do the following two things:
Create a text file on your server that will store your username and password.
Create a special file called .htaccess in the folder you want to protect.
Creating the password file using htpasswd
htpasswd -c .htpasswd fred
Protecting a folder - append the following to the .htaccess file:
# protect the whole folder
AuthUserFile /full/path/to/.htpasswd
AuthType Basic
AuthName "My Secret Folder"
Require valid-user
# protect a single file (wrap these directives in a <Files> block naming that file)
AuthUserFile /full/path/to/.htpasswd
AuthType Basic
AuthName "My Secret Page"
Require valid-user
If you want to password protect other folders (that aren't under the currently protected folder), simply copy your .htaccess file to the new folder to be protected.
To password protect more than one file in the same folder, just create more <Files> blocks within the same .htaccess file - for example (the filename below is illustrative):
<Files "mypage.html">
AuthUserFile /full/path/to/.htpasswd
AuthType Basic
AuthName "My Secret Page"
Require valid-user
</Files>
#### From redhat knowledge base ####
For this exercise we will assume that your document root is /var/www/html and that the directory you want to protect is called /var/www/html/private.
First, open the /etc/httpd/conf/httpd.conf file for editing. Find the AllowOverride directive in the <Directory> section for your document root. By default it looks like this:
AllowOverride None
Change it to read:
AllowOverride AuthConfig
Restart your webserver:
service httpd restart
Next, we need to create an .htaccess file that tells Apache to require authorization for the /var/www/html/private directory. The .htaccess file goes inside the directory you want to protect and should look like the following example:
# /var/www/html/private/.htaccess
AuthName "Private Directory"
AuthType Basic
AuthUserFile /var/www/.htpasswd
require valid-user
The next step is to create the password file. The file is created using the htpasswd command. The location of the file is indicated in the .htaccess file. Note it is a good idea to keep this file outside of the document root.
htpasswd -c /var/www/.htpasswd username
Where "username" is the name of a user who will have access to the directory. Note that this does not have to be a system user; the htpasswd users only exist for the purpose of authenticating to protected web directories. Note that the -c option is only used when you are first creating the file. Do not use this option when creating subsequent users or it will replace the existing file with a new one.
*** WEB SERVICES ***
Apache mod_alias is a powerful module for Apache.
The apache httpd daemon:
it was originally developed by Rob McCool at the National Center for Supercomputing Applications (NCSA).
It is the most widely used web server on the Internet -
around 80% of the web servers on the Internet (the most widely used web server in production).
The httpd.conf file is what drives the Apache engine.
There is a graphical administration tool for Apache,
but the graphical interface doesn't support all the options.
mod_ssl - This provides the ssl support for apache, gateway to the openssl libraries
If you are interested in additional modules, search for them at
apache.org, eg: perl, php plugins.
Apache modules are extensible; we can develop our own modules.
packages -> rpm -q pkg-name
netcraft and e-soft (web server market-share surveys)
Configuration file: /etc/httpd/conf/httpd.conf
Once it is changed, Apache needs to be restarted or reloaded.
Document root /var/www
It's in /var because the website pages are very likely to change.
The default web page is located in /var/www/html/.
By default this is the structure for launching the web platform.
Modules provide specific functionality; they perform specific functions very well.
specific funtions can be handled by the modules
To list the compiled-in modules, type "httpd -l".
The way you support additional functionality is through modules;
if we add modules, we just make references to them in the httpd.conf file.
httpd.conf file is heavily documented for the apache.
MaxKeepAliveRequests 100 - The maximum number of requests to allow
Binds to ports 80 and 443 (SSL).
user apache
group apache
But it starts with root privileges because it binds to ports below 1024 (80, 443); before it responds to client requests it hands over to
the apache user and group, because that limits access to the files.
We have to make sure that the apache user/group has the privileges to the files and directories.
The default page, if served, indicates that Apache is indeed running.
*Mod Alias
mod_alias provides directory aliasing and redirection.
netstat -ant | grep -E ':80|:443'
0.0.0.0:80 means it listens on all IP addresses.
Alias allows us to map between URLs and filesystem paths.
It gives access to directories outside the document root.
"Alias /linuxtest /var/linuxtest" - syntax. Note: you can access the page from the browser at http://localhost/linuxtest
Ensure that the linuxtest directory has the right permissions.
chown on linuxtest directory to apache, Create a index.html file ( add contents to that file)
"simple fabulous and very simple to implement"
This module is precompiled; everything is already available.
ScriptAlias is for CGI-BIN directories outside of the document root.
It tells Apache that the files there are executable.
"ScriptAlias /linuxtest /var/linuxtest" - syntax
Remember: scripts need special permissions - execute permissions.
* Mod-Alias 2 (in a production environment we use aliasing, script aliasing and redirection, all of that...)
A highly customizable web server.
Redirecting within the filesystem and outside of it.
"Alias /roottest /root" - syntax
The web server returns an access forbidden error, code 403.
Redirect allows us to point an old URL at a new URL - even from the local box out to the Internet.
redirect /redirecttest http://www.linux.net
It may ask for the username and password.
In addition you can create multiple redirection
"redirect /redhat http://WWW.redhat.com" syntax
"redirect /gnu http://www.gnu.org" - syntax
When ever you consider out of the document root security b'cos of the filesystem security
order of the security
deny,allow.
Very basic way to restrict access to the alias in the filesystem
syntax
Order deny,allow
Deny fron all
Allow from 192.168.1.3 //subnet 192.168.1.0/24 , //*.amritavidya.edu
When we are using localhost we are connecting to the loopback address.
aliases are independent of each other.
telnet ip 80 ( web server )
GET /linuxtest HTTP/1.0 (the HTML will be returned from the server)
global log files are
access_log and error_log
403 error means access denied, 404 means page not found
*** Virtual host ***
They allow us to run multiple websites on one machine.
We can use a single IP address to serve multiple websites.
Virtual hosts can be done in two ways:
1. IP-based virtual hosts - we need to have virtual interfaces
2. Name-based virtual hosts - very popular; with 1 IP address (or very few) we can host many websites.
many to one relationship
***IP Based virtual hosting (Virtualhost containers )
step 1:
<VirtualHost 192.168.0.3>
# the DNS name or IP - "we can use the IP as well"
ServerAdmin root@linux.internal
DocumentRoot /var/www/linux.internal
ServerName www.linux.internal
ErrorLog logs/linux.internal-error_log
CustomLog logs/linux.internal-access_log common
</VirtualHost>
step 2:
Create a directory called linux.internal with default webpage.
Step 3:
Create virtual IPs and update DNS:
ifconfig eth0:1 192.168.0.3 ; ifconfig eth0:2 192.168.0.4
The last piece of the puzzle... let's continue...
Add the DNS server to /etc/resolv.conf.
The DNS entry is very important to resolve the names.
www.linux.internal and www.linux.external are the two entries that should be in the DNS.
eg: www points to the IP, and linux.internal should be the zone file.
DNS points us to the appropriate IP addresses... DNS is very important.
**NAME based virtual host **
NameVirtualHost * (it will match all IP addresses)
If you specify * in NameVirtualHost, you need to specify * in the <VirtualHost> containers as well.
eg:
NameVirtualHost 192.168.0.2
So the site can also be reached without the www. prefix, add:
ServerAlias linux.internal *.linux.internal www2.linux.internal - it's very important.
A www2 record has to be added to the DNS to resolve the IP address.
# VirtualHost example (name based):
# Almost any Apache directive may go into a VirtualHost container.
<VirtualHost 192.168.0.2>
ServerAdmin webmaster@host.some_domain.com
DocumentRoot /www/docs/host.some_domain.com
ServerName host.some_domain.com
ErrorLog logs/host.some_domain.com-error_log
CustomLog logs/host.some_domain.com-access_log common
</VirtualHost>
ServerName and DocumentRoot are required; the rest are optional.
The client is going to need DNS to reach the web page.
To tie multiple domain names/websites to one IP address,
DNS should be configured up front.
Virtual hosts are either name-based or IP-based.
logs...
Change the permissions of the log files...
something like 700 it will be good.
Whenever a service is reloaded, SIGHUP is sent to that particular service;
it re-reads its configuration without a full restart, which is a quick way to apply changes.
### Examples of Virtual Hosting
<VirtualHost *:80>
ServerName site1.example.com
DocumentRoot /var/www/site1
<Directory /var/www/site1>
AllowOverride all
AuthType Basic
AuthName "Authentication Required !!"
AuthUserFile /var/www/site1/passwordfile
Require user amma   (or: Require group amma)
</Directory>
ErrorLog logs/site1.example.com.error-log
CustomLog logs/site1.example.com.access-log common
</VirtualHost>
syntax:-
htpasswd -mc /path username
DocumentRoot /var/www/html
ServerName 172.24.0.33
Basic - like Telnet
Digest - secure
NameVirtualHost IP:Port
*** SSL Certificate ***
Both self signed and commercial...
We need packages called openssl and mod_ssl.
mod_ssl functions almost like a driver for httpd to access openssl.
mod_ssl talks to openssl; it is the module that provides the SSL connectivity.
How to generate a private key as well as a public certificate:
PKI infrastructure uses asymmetric encryption techniques.
Here the public key differs from the private key.
Whatever is encrypted with one key can only be decrypted with the other.
Eg:
If a client uses the public key to encrypt messages, only our private key can decrypt them; if we encrypt with our private key, only the public key can decrypt it. That's how the whole PKI infrastructure works, in a nutshell.
Go to /etc/httpd/conf (cd /etc/httpd/conf).
Two directories you need to pay attention to are
-> ssl.crt (which contains the ssl certificates )
-> ssl.key (which contains the private key)
By default Red Hat installs 2 bogus keys, private and public.
remove the default ssl certificate and ssl server key
rm ssl.crt/server.crt; rm ssl.key/server.key
Step 1: (private key )
First we need to generate the private key.
openssl genrsa 1024 > /etc/httpd/conf/ssl.key/server.key
The above is the private key.
Step 2: (Public key )
cd /usr/share/ssl/certs
make testcert
.... fill in the requested information
It's a self-signed certificate.
cd /etc/httpd/conf/ssl.crt ... you will find the certificate...
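On systems without that Makefile, an equivalent manual step (an assumption, not one of the course steps) is to generate the self-signed certificate directly with openssl, using the key created in step 1:
openssl req -new -x509 -days 365 \
    -key /etc/httpd/conf/ssl.key/server.key \
    -out /etc/httpd/conf/ssl.crt/server.crt
# answer the certificate questions (country, common name, etc.) when prompted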
Reload Apache (the Secure Socket Layer support will take effect).
SSL certificates do not work per-site for name-based virtual hosts...
on the browser https protocol
Webalizer and AWStats (advanced web server statistics) --- log monitoring tools.
*** TOMCAT ****
Apache s/w foundation
It allows us to run a servlet engine and JSP (Java Server Pages) web pages - Java support.
Packages: jakarta-tomcat, java and j2sdk.
export JAVA_HOME=/usr/java/...
Run the startup.sh script, then browse to http://localhost:8080
The server.xml file is the main configuration file.
*** Weblogic j2ee engine ***
Allows you to deploy enterprise java beans, jsp and java servlets
It has the full gamut of support for J2EE applications and Java beans, and it provides clustering across multiple servers. The multiple
servers work together for load-balancing requests.
we can also integrate with apache
www.bea.com
./serverweblogic.bin
Run the script from the /opt directory.
localhost:7001/console
*** JBOSS ***
java web application server
It implements EJBs, Java servlets and JSP web pages.
It fully implements the J2EE specifications and is used in many corporate environments.
jboss.org
The JBoss scripts are located in the bin directory.
export JAVA_HOME=/usr/java/j2sdk
http://127.0.0.1:8080 "however you refer "
jvm - java virtual machine
Stateful and stateless (not changing dynamically) session beans.
JBoss is meant for managing Java applications.
### MYSQL SERVER ###
Features:
1. DBMS Engine
2. Compatible with various front-ends:
a. perl
b. php
c. odbc
d. GUI Management
Tasks:
1. Install Mysql Client & Server
a. yum -y install mysql
/etc/my.cnf - primary config file
/usr/bin/mysql - primary client used to interact with the server
/usr/bin/mysqladmin - primary admin utility
b. yum -y install mysql-server
/usr/libexec/mysqld - DBMS Engine
2. Start Mysql server and modify perms for 'root'
a. service mysqld start
b. chkconfig --level 35 mysqld on
mysql -u root
c. password change -- mysqladmin -u root password abc123
d. Default database location: /var/lib/mysql
e. mysql -u root -p -h servername (from the client side)
Note: mysql command line options ALWAYS override global (/etc/my.cnf) and/or local (~/.my.cnf ) configuration directives
Sql commands:
mysql>
show databases;
use databasename
show tables;
describe contacts;
INSERT INTO contacts (first_name, last_name, bus_phone1,email) VALUES ('key', 'mohoom', '232.232', 'key@example.com');
Delete record from 'contacts' table
DELETE FROM contacts WHERE email = 'key1@Linux.com';
flush privileges; ( Ensures that the changes will take effect immediately )
show processlist;
drop database database-name;
Create Database 'addressbook'
Create database AddressBook;
use AddressBook;
create table contacts (first_name char(20), last_name char(20), bus_phone1 char(20), email char(20), PRIMARY KEY (email));
save the above two lines in a script file name it as "create_addressbook_db.mysql"
Inputing a file to create a database
mysql -u root -pabc123 < create_addressbook_db.mysql    ( -pabc123 means -p followed immediately by the password, no space )
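A sketch of what that script file might contain, reusing the contacts schema from above (column names follow the earlier INSERT example; adjust to taste):
-- create_addressbook_db.mysql
CREATE DATABASE IF NOT EXISTS AddressBook;
USE AddressBook;
CREATE TABLE contacts (
    first_name  CHAR(20),
    last_name   CHAR(20),
    bus_phone1  CHAR(20),
    email       CHAR(20),
    PRIMARY KEY (email)
);
INSERT INTO contacts (first_name, last_name, bus_phone1, email)
    VALUES ('key', 'mohoom', '232.232', 'key@example.com');
-- load it with: mysql -u root -pabc123 < create_addressbook_db.mysql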
**IPtables**
Built-in functionality of the linux kernel; it has capabilities like
- firewall
- Network address translation
- Port address translation
- packet filtering
- IP masquerading ( In other words natting )
Other security mechanisms are available by default too, like xinetd and tcp_wrappers
iptables allows you to protect across the local subnet and other subnets.
local firewall and personal firewall
by default it is not configured in our system
Netfilter is the part of the kernel (from kernel 2.4 onwards) that does the filtering
iptables is the front-end tool for configuring the various tables...
iptables -L
This lists the chains and rules of the filter table (the default table; use -t to list the others)
By default there are 3 tables defined in iptables framework
default one is filter table
second one is NAT table
NAT example:
Suppose you set up your Linux machine with 2 network cards,
1 facing the lan and the other facing the wan
third table is the mangle table
It allows our system to mangle various packets entering and leaving our system
3 table are
- filter table
- nat table
- mangle table
Since the filter table is where we restrict access,
incoming access (the INPUT chain) in particular:
the filter table has 3 chains
INPUT - packets entering from a source (lan or wan) to the network interface card; incoming packets, INbound traffic
FORWARD - a mixture of input and output; it acts like routing functionality, eg: Internet sharing
eg: taking a packet from network A and forwarding it to network B
OUTPUT - the packet is created @ the redhat server and leaves from an interface -- Outbound traffic
Every chain has a policy, and the policy controls the default behaviour of that chain
Policy ACCEPT : the filter table's chains have a global policy of ACCEPT ( Global policy ) -- all communications are permitted. This is the default iptables policy
eg:
- Any traffic Inbound to our machine is to be accepted
- Any traffic that need to be routed and forwarded will be accepted (Mixture of in and out )
- Any traffic which we sent out sourced from our machine is to be accepted
An ACCEPT policy means you start with the least restrictive setup and then filter with specific rules later.
For example on cisco routers and the like,
the default policy is to block everything.
Eg:
Whenever you write a rule with -A it will append the rule to the end of the particular chain.
INPUT
eg: iptables -A(append) INPUT(chain) -s(source) 192.168.1.80 -p(protocol) icmp --icmp-type echo-request -j(jump) REJECT/DROP/ACCEPT
REJECT means the server responds with a courtesy error message; DROP discards the packet silently.
-i (which interface you want to filter to be specific)
All of the iptables rules are dynamic, similar to a cisco router; everything is stored in ram until it is saved
cisco takes lot from UNIX
Flush the iptables rules
iptables -F ( It will flush all the chains)
iptables -F INPUT ( It will flush only the INPUT chain)
Deny the icmp echo reply.
Rule to deny all protocols from a given host:
iptables -A INPUT -s 192.168.0.20 -j DROP (when we don't specify a protocol, the rule matches all protocols)
Any traffic that comes from that source ip will be dropped.
we are able to send output traffic to them but input traffic is blocked.
Remember communication works both ways
protocol help
iptables -p icmp -h
Information about a particular chain
The chains are processed by the kernel in sequential order - top-down processing order,
so be very careful (absolutely careful) about the order in which you create rules in the INPUT chain.
iptables -D INPUT 1 (delete)
iptables -R INPUT 1 -s 127.0.0.1 -p icmp -j DROP ( replace/modify rule 1 )
iptables -F INPUT (Remember: it flushes only the INPUT chain)
Keep the most frequently used rules at the top; it increases performance
iptables -A INPUT -s ip -p icmp --icmp-type echo-request -j REJECT
cat /etc/services ( Which will map all the services to the port number )
iptables resolves port names using the /etc/services file.
Blocking the apache web service for the particular system
iptables -A INPUT -s 192.168.1.80 -p tcp --dport 80 -j REJECT ( --dport = destination port, --sport = source port )
netstat -ant | less ( It will show list of ports running...)
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j REJECT
Delete a specific chain
iptables -D(delete) INPUT(chain) 5(rule number)
iptables -R INPUT <rule number> ... ( replaces the rule at that position in the chain )
OUTPUT CHAIN (outbound)
iptables -A OUTPUT -d 192.168.1.80 -p icmp --icmp-type echo-request -j REJECT
iptables -A OUTPUT -d 10.1.2.159 -p tcp --dport 22 -j REJECT
iptables -D OUTPUT 3 ( delete the 3rd rule in the OUTPUT chain)
**Chain management
iptables -L INPUT/OUTPUT/FORWARD
Inserting a rule into a particular table/chain at any position
Insert an entry at any position in our chain list
iptables -I INPUT 1 -s ip -p tcp --dport 80 -j REJECT ( we have inserted it at the very top of the INPUT chain )
Manipulating the INPUT chain
iptables -R INPUT 1 -s IP -p tcp --dport 80 -j DROP
Drop entry
iptables -D INPUT 1
flush the output list
iptables -F OUTPUT
iptables -L OUTPUT " ON the fly we can flush the chain "
policy changing
iptables -P INPUT/OUTPUT/FORWARD ACCEPT/DROP ( sets the chain's default policy; note there is no -j here, and REJECT is not a valid policy target )
keep in mind that the changes are all happening in memory only
save your settings
service iptables save
/etc/sysconfig/iptables
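Pulling the chain commands together, a minimal sketch of a restrictive INPUT setup that keeps ssh reachable (the open ports are just examples):
iptables -F INPUT                                                  # start from an empty INPUT chain
iptables -A INPUT -i lo -j ACCEPT                                  # always allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to connections we started
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # keep ssh open
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT       # allow ping
iptables -P INPUT DROP                                             # default policy: drop everything else
service iptables save                                              # persist to /etc/sysconfig/iptables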
Ip Masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/etc/init.d/iptables save
echo 1 > /proc/sys/net/ipv4/ip_forward
What is Masquerading?
All computers appear to have the same IP
This is done with Network Address Translation
It’s easy to fake the “outgoing packet”
“Incoming packets” must be translated too
Port translation - a must
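Putting the masquerading pieces together, a sketch for a two-NIC gateway where eth0 faces the wan and eth1 the lan (the interface names and the 192.168.1.0/24 lan range are assumptions):
echo 1 > /proc/sys/net/ipv4/ip_forward                                               # let the kernel route between the NICs
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE                                 # rewrite lan source addresses to eth0's address
iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT                      # lan -> wan
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT   # only replies come back in
service iptables save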
*** TCP WRAPPERS ***
In this section you will learn how to implement tcp-wrappers support
By default it will install
rpm -q tcp_wrappers
It filters connections to services whose daemons are linked against libwrap (it works at the application level rather than filtering raw tcp/udp/icmp packets).
tcp_wrappers sits in front of the daemons and intercepts requests to any tcp_wrappers-capable daemon; it functions like a firewall for those services.
There are 2 files in order to support tcp_wrappers
hosts.allow and hosts.deny
Changes to these files take effect immediately; hosts.allow and hosts.deny are consulted on every new connection attempt, so nothing needs to be restarted.
hosts.allow - connections that match will be allowed
hosts.deny - connections that match will be denied
If there is no matching entry in either hosts.allow or hosts.deny, all the traffic is allowed.
The processing order is hosts.allow first and then hosts.deny.
It is very much transparent; it doesn't do anything funky to the protocol, no nat/address translation, no packet mangling, etc.
particular IP range, host , domain
tcp_wrappers can logs to syslogs
iptables sits at the edge, then tcp_wrappers , daemons can be filtered with xinetd etc
finding out whether or not a particular daemon supports tcp_wrappers
eg: which sshd
strings /usr/sbin/sshd | grep hosts_access
If the string is present then we can simply say it supports tcp_wrappers
alternative way
eg: which sshd
ldd /usr/sbin/sshd
you will find a library called "libwrap.so" this is the shared object needed for the tcp_wrappers to function
Unlike Sun Solaris, where tcp_wrappers is not installed by default, in Red Hat it is installed by default.
start with the hosts.deny file
daemon: ip/subnet
sshd: 127.0.0.1 (log file tail -f /var/log/secure )
rdesktop -g 550x440 -u gyani ip " relatively simple and elegant "
We can give multiple hosts
vsftpd: ip1,ip2 ( block multiple hosts comma separated list )
vsftpd: ALL ( This will block any one getting into the daemon )
subnet
vsftpd: 10.1.2.0/255.255.255.0
vsftpd: 10.1.2. ( Its' another cool feature it will block the entire subnet )
man hosts.allow ( we get some more information )
we want to block the subnet In addition we want to send a message to the machine " stay away "
vsftpd: 10.1.2.: twist /bin/echo " keep out %a" %a - client side ip address
ifconfig | grep inet
hosts.allow file processed first
Its' just easy to maintain all the tcp_wrappers in hosts.allow " lets analyse "
That means we can permit and access every thing in one file
hosts.allow (entries here allow by default; to be specific, adding the 'deny' option makes an entry deny)
vsftpd: 192.168.1.100,10.1.2.159: deny (that's marvelous, good to know )
It is truly dynamic... it works on the fly... configuring tcp_wrappers is relatively simple
The lists are processed in top down fashion in hosts.allow and deny
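A small hosts.allow-only sketch pulling the examples together (the addresses and the banner text are placeholders; the allow/deny/twist options assume the extended option language that Red Hat's tcp_wrappers is built with):
# /etc/hosts.allow -- read first, top down; first match wins
sshd:   192.168.1.            : allow                          # trust the local subnet for ssh
vsftpd: 10.1.2.               : twist /bin/echo "keep out %a"  # bounce that whole subnet with a message
ALL:    ALL                   : deny                           # everything else is refused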
*** Secure shell ***
known_hosts file consists of all the public keys that we communicated with
Data encrypted with the public key can only be decrypted with the matching private key.
cd /etc/ssh
cat ssh_host_rsa_key.pub (This is the public key)
netstat -ant | grep 22
file known_hosts-- (ASCII text)
port scanning --> nmap -v(verbose) -p(port) 22 192.168.1.0/24
ssh authentication works with a public key and a private key; here the public key and the private key work together, it's asymmetric
ssh_host_rsa_key and ssh_host_dsa_key are the private keys; the corresponding .pub files are the public keys
A platform for transport layer security
default key length is 1024 bits
ssh-keyscan IP1 IP2
ssh configuration file /etc/ssh/ssh_config
port forwarding is a pretty good feature, Tunneling, Reverse-Tunneling
ssh ,scp,sftp all these works on port 22, this opens up multiple channels
port forwarding allows us to set up a local port and redirect it to a remote port; the whole channel is encrypted
if we want to set up port forwarding on a port lower than 1024, root access is required.
port forwarding can be disabled/enabled in the ssh configuration (AllowTcpForwarding in sshd_config); it is enabled by default
-L [Bind address:] port:host:hostport
ssh -L 8080(local port):www.linux.net:80 www.linux.net
proof netstat -ant | grep 80 ( on the local computer )
open the browser http://localhost:8080 ( You will get the webpage of the remote machine) port forwarding
the ssh session has to stay active, otherwise the forwarded webpage will stop working
ssh -g -L 8080(local port):www.linux.net:80 www.linux.net ( -g lets other hosts, e.g. all users on the subnet, use the forwarded port )
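Reverse tunneling (mentioned above) works the other way around: the remote host gets a listening port that is forwarded back to us. A sketch, with host names and ports as placeholders:
ssh -L 8080:www.linux.net:80 www.linux.net          # forward: local 8080 -> remote port 80, as above
ssh -R 2222:localhost:22 user@gateway.example.com   # reverse: port 2222 on the gateway comes back to our local sshd
# on gateway.example.com, 'ssh -p 2222 localhost' then lands on our machine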
## AUTOFS ##
Features:
1. Automatically mounts file systems (NFS, local, smbfs etc) upon I/O request
Requirements:
1. autofs - must be installed
/etc/auto.master - primary configuration file
- also contains mount points and their mappings
/etc/sysconfig/autofs - default startup directives
Note: AutoFS must be running in order to auto-mount directories
Task:
1. Create an automount for /shares, which will mount /nfs1 & /nfs2
a. update /etc/auto.master - '/shares /etc/auto.shares'
vim /etc/auto.master , add the below contents
#### autofs for the nfs
/shares /etc/auto.shares
mkdir /shares
b. cp /etc/auto.misc /etc/auto.shares
c. update the rules in /etc/auto.shares
d. update the auto.shares - 'nfs1 -fstype=nfs 192.168.75.199:/nfs1'
service autofs restart
e. Create autoFS tree: /shares/
f. Test access to Autofs controlled directory
g. Unmount: /nfs1 & /nfs2 if necessary
Note: Do Not auto-mount directories that are already mounted.
g1. 'ls -l /shares/nfs1'
g2. update in auto.shares - ' nfs2 -fstype=nfs 192.168.75.199:/nfs2'
service autofs reload ( Things are done with out restarting the service )
g3. modify /etc/sysconfig/autofs 'DEFAULT_TIMEOUT=30' - if the file system is idle for 30 seconds it will be unmounted automatically.
service autofs restart
Note: syntax for auto-mount files is as follows:
key [-options] location
nfs1 -fstype=nfs 192.168.75.199:/nfs
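Putting the task together, a sketch of the two files for the /shares automount (the server IP and export names follow the task above):
# /etc/auto.master
/shares   /etc/auto.shares
# /etc/auto.shares   (format: key [-options] location)
nfs1   -fstype=nfs   192.168.75.199:/nfs1
nfs2   -fstype=nfs   192.168.75.199:/nfs2
# then: service autofs reload ; ls -l /shares/nfs1   (the ls triggers the mount)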
## samba-client ##
Features:
1. Provides Windows features (file & print ) on linux | unix
findsmb - finds the clients that respond to the smb protocol...
nmblookup - looks up netbios hosts
smbget - get files from remote hosts
2. smbtree - equivalent to network neighborhood/my network places ( prints workgroups hosts and shares )
3. smbget - similar to 'wget', in that, it will download files from the remote share
4. smbclient - interactive ( ftp-like utility to connect to shares permits uploads/downloads from shares)
a. smbclient -U gyani //linuxwin1/mtemp ( windows box with share name )
download: mget filename ( downloads into the present working directory on the local file system )
b. mget file* - downloads file(s)
c. mput file* - uploads file(s)
5. smbtar - backs-up smb shares to a TAR archive
a. smbtar -s linuxwin1 -x mtemp -u gyani -t backup1.tar
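A short interactive smbclient session against the share used above (host, share, user and file names are placeholders):
smbclient -U gyani //linuxwin1/mtemp     # prompts for gyani's password
smb: \> ls                               # list the share
smb: \> get report.doc                   # download a single file
smb: \> mput *.txt                       # upload matching files (prompts per file by default)
smb: \> quit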
## linux rescue ##
df -h
cd /mnt/sysimage
eg: trying to change the password at this point won't work.
chroot /mnt/sysimage
now the appropriate directory structure is in place
( you can fix the password issue )
hda,sda = hd0 ( first hard drive )
hda1,sda1 = hd0,0 ( first hard drive, first partition ) - so our system boots from the first hard drive and its first partition
## SQUID and PROXY SERVER ##
Allows you to filter out-bound requests to the internet.
Features:
1. Caching server ( caches http,https,ftp contents )
2. Filters Access to the Net ( we can block websites )
3. Efficient bandwidth usage
users first check the cache; if the object is already in the cache the user doesn't have to go out to the internet
4. supports a wide criteria of ACLS based on ( dstdomain, src_IP, Time of day, etc) block/allow based on mac address
used in many production environments.
rpm -ql pkg | less
/etc/httpd/conf.d/squid.conf - caching
/etc/logrotate.d/squid -- log files
/etc/pam.d/squid - authentication
/etc/rc.d/init.d/squid - service
/etc/squid - primary configuration container
/etc/squid/squid.conf -- primary configuration file
/etc/squid/cachemgr.conf - cache manager
/etc/squid/mime.conf - maps file types to mime types for content fetched over ftp/gopher and served via http
access through the proxy is disabled by default -- /etc/squid/squid.conf
/etc/sysconfig/squid - squid startup file (startup directives)
SQUID_SHUTDOWN_TIMEOUT=100 - gives active connections time to close before shutdown, which makes sense because we may have active connections
The startup options also disable the initial dns checks... name lookups that can impede startup performance
It got various modules for various purposes
ip_user_check
getpwnam
Important components
/usr/sbin/squidclient - used to test squid Proxy server
/var/log/squid -- primary log directory
/etc/squid/squid.conf - - primary configuration file
/var/spool/squid/ -- cache directory container
2. start squid, and ensure that it starts when the system reboots
a. service squid start
b. chkconfig --level 35 squid on
starting the squid proxy server starts the caching server caching path
ls -ltr /var/spool/squid/
you should have ample/fast disk storage in this /var/spool directory (Enough disk storage in this particular directory)
stores the contents visited by the users
This is the important location where the cached webpages are stored.
Note: Ensure that ample/fast disk storage is available for: /var/spool/squid
fast can be lvm , raid level 0 , zfs file system , xfs file system
netstat -ntlp | grep squid
Note: squid defaults to TCP:3128
3. Configure firefox browser to use the Squid proxy server
4. configure Squid to allow LAN access through, to resources
a. vim /etc/squid/squid.conf
search acl, access control section
localhost is allowed by default. The acls here are very important
eg:
## to permit access through the proxy server by members of: 192.168.75.0/24
b.acl lan_users src 192.168.75.0/24
c.http_access allow lan_users
restart the squid
squid access logs -- /var/log/squid/access.log
a TCP_HIT in the log means the object came from the cache; a TCP_MISS means it had to be fetched from the origin server
5. Deny 192.168.75.10, but allow ALL other users from the local subnet
a. acl lan_bad_users src 192.168.75.10
http_access deny lan_bad_users
but this has to come before the allow rule
for testing, log in to a remote machine with ssh; wget supports going through squid,
we just need to export the proxy variable
note: export http_proxy=http://192.168.75.199:3128
wget http://www.linux.com/index.php
squidclient is used to test the proxy server
eg: squidclient -g 2 http://www.google.com
you can change the port if the default doesn't suit you
http_port 3128
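A quick test sequence from a lan client, assuming the proxy built in the task above (192.168.75.199:3128) and any reachable URL:
export http_proxy=http://192.168.75.199:3128        # wget (and many other tools) honour this variable
wget -O /dev/null http://www.linux.com/index.php    # should succeed if http_access allows your subnet
squidclient -h 192.168.75.199 -p 3128 http://www.google.com/    # query squid directly
tail -f /var/log/squid/access.log                   # on the proxy: first requests log TCP_MISS, repeats log TCP_HIT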
Linux Journal
The proxy also provided a convenient point to log outbound Web requests, to maintain whitelists of allowed sites or blacklists of forbidden sites and to enforce an extra layer of authentication in cases where some, but not all, of your users had Internet privileges.
Nowadays, of course, Internet access is ubiquitous. The eclipsing of proprietary LAN protocols by TCP/IP, combined with the technique of Network Address Translation (NAT), has made it easy to grant direct access from “internal” corporate and organizational networks to Internet sites. So the whole idea of a Web proxy is sort of obsolete, right?
I describe, in depth, the security benefits of proxying your outbound Web traffic, and some architectural and design considerations involved with doing so. In subsequent columns, I'll show you how to build a secure Web proxy using Squid, the most popular open-source Web proxy package, plus a couple of adjunct programs that add key security functionality to Squid.
The concept of a Web proxy is simple. Rather than allowing client systems to interact directly with Web servers, a Web proxy impersonates the server to the client, while simultaneously opening a second connection to the Web server on the client's behalf and impersonating the client to that server
Web proxies have been so common for so long, all major Web browsers can be configured to communicate directly through Web proxies in a “proxy-aware” fashion. Alternatively, many Web proxies support “transparent” operation, in which Web clients are unaware of the proxy's presence, but their traffic is diverted to the proxy via firewall rules or router policies.
Obviously, this technique works only if you've got other types of gateways for the non-Web traffic you need to route outward, or if the only outbound Internet traffic you need to deal with is Web traffic. My point is, a Web proxy can be a very useful tool in controlling outbound Internet traffic.
A Web proxy, therefore, provides a better place to capture and record logs of Web activity than on firewalls or network devices.
Another important security function of Web proxies is blacklisting. This is an unpleasant topic—if I didn't believe in personal choice and freedom, I wouldn't have been writing about open-source software since 2000—but the fact is that many organizations have legitimate, often critical, reasons for restricting their users' Web access.
A blacklist is a list of forbidden URLs and name domains. A good blacklist allows you to choose from different categories of URLs to block, such as social networking, sports, pornography, known spyware-propagators and so on. Note that not all blacklist categories necessarily involve restricting personal freedom per se; some blacklists provide categories of “known evil” sites that, regardless of whatever content they're actually advertising, are known to try to infect users with spyware or adware, or otherwise attack unsuspecting visitors
Nevertheless, at this very early stage in our awareness of and ability to mitigate this type of risk, blacklists add some measure of protection where presently there's very little else. So, regardless of whether you need to restrict user activity per se (blocking access to porn and so forth), a blacklist with a well-maintained spyware category may be all the justification you need to add blacklisting capabilities to your Web proxy. SquidGuard can be used to add blacklists to the Squid Web proxy.
Just How Intelligent Is a Web Proxy?
Blacklists can somewhat reduce the chance of your users visiting evil sites in the first place, and content filters can check for inappropriate content and perhaps for viruses. But, hostile-Web-content attacks, such as invisible iframes that tell an attacker's evil Web application which sites you've visited, typically will not be detected or blocked by Squid or other mainstream Web proxies.
Second, encrypted HTTPS (SSL or TLS) sessions aren't truly proxied. They're tunneled through the Web proxy. The contents of HTTPS sessions are, in practical terms, completely opaque to the Web proxy.
If you're serious about blocking access to sites that are inappropriate for your users, blacklisting is an admittedly primitive approach. Therefore, in addition to blacklists, it makes sense to do some sort of content filtering as well—that is, automated inspection of actual Web content (in practice, mainly text) to determine its nature and manage it accordingly. DansGuardian is an open-source Web content filter that even has antivirus capabilities.
Squid supports authentication via a number of methods, including LDAP, SMB and PAM. However, I'm probably not going to cover Web proxy authentication here any time soon—802.1x is a better way to authenticate users and devices at the network level.
The main reason many organizations deploy Web proxies, even though it isn't directly security-related—performance. By caching commonly accessed files and Web sites, a Web proxy can reduce an organization's Internet bandwidth usage significantly, while simultaneously speeding up end-users' sessions.
Fast and effective caching is, in fact, the primary design goal for Squid, which is why some of the features I've discussed here require add-on utilities for Squid (for example, blacklisting requires SquidGuard).
it is a good idea to place it in a DMZ network. If you have no default route, you can force all Web traffic to exit via the proxy by a combination of firewall rules, router ACLs and end-user
Because the proxy is connected to a switch or router in the DMZ, if some emergency occurs in which the proxy malfunctions but outbound Web traffic must still be passed, a simple firewall rule change can accommodate this. The proxy is only a logical control point, not a physical one.
If the Web proxy is in a DMZ, the attacker will be able to attack systems on your LAN only through additional reverse-channel attacks that somehow exploit user-initiated outbound connections, because Firewall 1 allows no DMZ-originated, inbound transactions. It allows only LAN-originated, outbound transactions.
In contrast, if the Web proxy resides on your LAN, the attacker needs to get lucky with a reverse-channel attack only once and can scan for and execute more conventional attacks against your internal systems. For this reason, I think Web proxies are ideally situated in DMZ networks, although I acknowledge that the probability of a well-configured, well-patched Squid server being compromised via firewall-restricted Web transactions is probably low.
ACLs in More Depth
Besides clients' (source) IP addresses, Squid also can match a great deal of other proxy transaction characteristics. Note that some of these deal with arcane HTTP headers and parameters, many of which are minimally useful for most Squid users anyhow.
Table 1. Complete List of ACL Types Supported in Squid 2.6
src - Client (transaction source) IP address or network address.
dst - Server (transaction destination) IP address or network address.
myip - Local IP address on which Squid is listening for connections.
arp - Client's Ethernet (MAC) address (matches local LAN clients only).
srcdomain - Client's domain name as determined by reverse DNS lookup.
dstdomain - Domain portion of URL requested by client.
srcdom_regex - Regular expression matching client's domain name.
dstdom_regex - Regular expression matching domain in requested URL.
time - Period of time in which transaction falls.
url_regex - Regular expression matching entire requested URL (not just domain).
urlpath_regex - Regular expression matching path portion of requested URL.
urllogin - Regular expression matching requested URL's “login” field.
port - Requested site's (destination) TCP port.
myport - Local TCP port on which Squid is listening for connections.
proto - Application-layer protocol of request (HTTP, HTTPS, FTP, WHOIS or GOPHER).
method - Request's HTTP method (GET, POST or CONNECT).
browser - Matches the client's browser, per HTTP “User-Agent” header.
referer_regex - Regular expression matching the unreliable HTTP “Referer” header (that is, the supposed URL of some page on which the user clicked a link to the requested site).
ident - Matches specified user name(s) of user(s) running client browser, per an “ident” lookup. Note that ident replies, which often can be spoofed, should not be used in lieu of proper authentication.
ident_regex - Regular expression defining which client user names to match per ident lookup.
src_as - Matches client IP addresses associated with the specified Autonomous System (AS) number, usually an ISP or other large IP registrant.
dst_as - Matches destination-server IP addresses associated with the specified AS number.
proxy_auth - Matches the specified user name, list of user names or the wild card REQUIRED (which signifies any valid user name).
proxy_auth_regex - Regular expression defining which user names to match.
snmp_community - For SNMP-enabled Squid proxies, matches client-provided SNMP community string.
maxconn - Matches when client's IP address has established more than the specified number of HTTP connections.
max_user_ip - Matches the number of IP addresses from which a single user attempts to log in.
req_mime_type - Matches a regular expression describing the MIME type of the client's request (not the server's response).
req_header - Matches a regular expression applied to all known request headers (browser, referer and mime-type) in the client's request.
rep_mime_type - Matches a regular expression describing the MIME type of the server's response.
rep_header - Matches a regular expression applied to all known reply headers (browser, referer and mime-type) in the server's response.
external - Performs an external ACL lookup by querying the specified helper class defined in the external_acl_type tag.
urlgroup - Matches a urlgroup name, as defined in redirector setups.
user_cert - Matches specified attribute (DN, C, O, CN, L or ST) and values against client's SSL certificate.
ca_cert - Matches specified attribute (DN, C, O, CN, L or ST) and values against client certificate's issuing Certificate Authority certificate.
ext_user - Matches specified user name(s) against that returned by an external ACL/authentication helper (configured elsewhere in squid.conf).
ext_user_regex - Matches a regular expression describing user names to be matched against that returned by an external ACL/authentication helper.
Web Proxy Architecture
As you also may recall, unlike a firewall, a Web proxy doesn't need to be a physical choke point through which all traffic must pass for a physical path to the outside.
Instead, you can use firewall rules or router ACLs that allow only Web traffic, as a means of ensuring your users will use the proxy. Accordingly, your Web proxy can be
set up like any other server, with a single network interface.
On Ubuntu and other Debian variants (not to mention Debian itself), you need the packages squid and squid-common. On Red Hat and its variants, you need the package
squid. And, on SUSE and OpenSUSE systems, you need squid.
By the way, you do not need to install Apache or any other Web server package on your Squid server, unless, of course, you're also going to use it as a Web server or
want to use some Web-based administration tool or another. Squid itself does not need any external Web server software or libraries in order to proxy and cache Web
connections.
back up the default squid.conf file
Believe it or not, all you need to do to get Squid running is add two lines to the ACL (Access Control List) section of this file: an object definition that describes
your local network and an ACL allowing members of this object to use your proxy. For my network, these lines look like this:
acl mick_network src 10.0.2.0/24
http_access allow mick_network
If more than one network address comprises your local network, you can specify them as a space-delimited list at the end of the acl statement, for example:
acl mick_network src 10.0.2.0/24 192.168.100.0/24
Because ACLs are parsed in the order in which they appear (going from top to bottom) in squid.conf, do not simply add these acl and http_access lines to the very end of
squid.conf, which will put them after the default “http_access deny all” statement that ends the ACL portion of the default squid.conf file. On my Ubuntu system, this
statement is on line 641, so I inserted my custom acl and http_access lines right above that.
In case you haven't guessed, all is a wild-card ACL object that means “all sources, all ports, all destinations” and so forth. Any transaction that is evaluated against
any http_access statement containing any will match it, and in this case, will be dropped, unless, of course, it matches a preceding http_access line.
tail -f /var/log/squid/access.log
Squid's main purpose in life is to cache commonly accessed Web and FTP content locally, thereby both reducing Internet bandwidth usage and speeding up end users'
download times.
The negative side of this is that Squid doesn't have as rich of a security feature set built in to it as commercial security-oriented Web proxies, such as BlueCoat and
Sidewinder. In fact, Squid (years ago) used to ship with a default configuration that allowed completely open access.
You can correctly infer from this that, by default, Squid denies proxy connections from all clients.
###Howto: Squid proxy authentication using ncsa_auth helper
by Vivek Gite
For fine control you may need to use Squid proxy server authentication. This will only allow authorized users to use proxy server.
You need to use proxy_auth ACLs to configure ncsa_auth module. Browsers send the user's authentication in the Authorization request header. If Squid gets a request and
the http_access rule list gets to a proxy_auth ACL, Squid looks for the Authorization header. If the header is present, Squid decodes it and extracts a username and
password.
However, squid is not equipped with password authentication on its own. You need to take the help of authentication helpers. The following are included by default in most squid builds and most
Linux distros:
=> NCSA: Uses an NCSA-style username and password file.
=> LDAP: Uses the Lightweight Directory Access Protocol
=> MSNT: Uses a Windows NT authentication domain.
=> PAM: Uses the Linux Pluggable Authentication Modules scheme.
=> SMB: Uses a SMB server like Windows NT or Samba.
=> getpwnam: Uses the old-fashioned Unix password file.
=> SASL: Uses SASL libraries.
=> NTLM, Negotiate and Digest authentication
Configure an NCSA-style username and password authentication
I am going to assume that squid is installed and working fine.
Tip: Before going further, test basic Squid functionality. Make sure squid is functioning without requiring authorization :)
Step # 1: Create a username/password
First create a NCSA password file using htpasswd command. htpasswd is used to create and update the flat-files used to store usernames and password for basic
authentication of squid users.
# htpasswd /etc/squid/passwd user1
Output:
New password:
Re-type new password:
Adding password for user user1
Make sure squid can read passwd file:
# chmod o+r /etc/squid/passwd
Step # 2: Locate ncsa_auth authentication helper
Usually ncsa_auth is located at /usr/lib/squid/ncsa_auth. You can find out the location using the rpm (Redhat,CentOS,Fedora) or dpkg (Debian and Ubuntu) command:
# dpkg -L squid | grep ncsa_auth
Output:
/usr/lib/squid/ncsa_auth
If you are using RHEL/CentOS/Fedora Core or RPM based distro try:
# rpm -ql squid | grep ncsa_auth
Output:
/usr/lib/squid/ncsa_auth
Step # 3: Configure ncsa_auth for squid proxy authentication
Now open /etc/squid/squid.conf file
# vi /etc/squid/squid.conf
Append (or modify) the following configuration directives:
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
Also find out your ACL section and append/modify
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
Save and close the file.
Where,
* auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd : Specify squid password file and helper program location
* auth_param basic children 5 : The number of authenticator processes to spawn.
* auth_param basic realm Squid proxy-caching web server : Part of the text the user will see when prompted for their username and password
* auth_param basic credentialsttl 2 hours : Specifies how long squid assumes an externally validated username:password pair is valid for - in other words how often
the helper program is called for that user with password prompt. It is set to 2 hours.
* auth_param basic casesensitive off : Specifies if usernames are case sensitive. It can be on or off only
* acl ncsa_users proxy_auth REQUIRED : The REQUIRED term means that any authenticated user will match the ACL named ncsa_users
* http_access allow ncsa_users : Allow proxy access only if user is successfully authenticated.
Restart squid:
# /etc/init.d/squid restart
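To check the authentication from a client, something like the following could be used (the proxy address is the example server used earlier in these notes; the user is the one created with htpasswd above):
export http_proxy=http://192.168.75.199:3128
wget --proxy-user=user1 --proxy-password='secret' http://www.linux.com/index.php
# the authenticated user name should then show up in /var/log/squid/access.log on the proxy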
Squid Authentication
As I mentioned previously, one of Squid's most handy capabilities is its ability to authenticate proxy users by means of a variety of external helper mechanisms. One of
the simplest and probably most commonly used helper applications is ncsa_auth, a simple user name/password scheme that uses a flat file consisting of rows of user
name/password hash pairs. The HOWTO by Vivek Gite and, to a lesser extent, the Squid User's Guide, explain how to set this up (see Resources).
Briefly, you'll add something like this to /etc/squid/squid.conf:
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squidpasswd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server at Wiremonkeys.org
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
And, in the ACL section:
acl ncsa_auth_users proxy_auth REQUIRED
http_access allow ncsa_auth_users
The block of auth_param tags specifies settings for a “basic” authentication mechanism:
* program is the helper executable ncsa_auth, using the file /etc/squid/squidpasswd as the user name/password hash list (created previously).
* children, the number of concurrent authentication processes, is five.
* realm, part of the string that greets users, is “Squid proxy-caching Web server at Wiremonkeys.org”.
* credentialsttl, the time after authentication that a successfully authenticated client may go before being re-authenticated, is two hours.
* casesensitive, which determines whether user names are case-sensitive, is off.
In the ACL section, we defined an ACL called ncsa_auth_users that says the proxy_auth mechanism (as defined in the auth_param section) should be used to authenticate
specified users. Actually in this case, instead of a list of user names to authenticate, we've got the wild card REQUIRED, which expands to “all valid users”. The net
effect of this ACL and its subsequent http_access statement is that only successfully authenticated users may use the proxy.
The main advantages of the NCSA mechanism are its simplicity and its reasonable amount of security (only password hashes are transmitted, not passwords proper). Its
disadvantage is scalability, because it requires you to maintain a dedicated user name/password list. Besides the administrative overhead in this, it adds yet another
user name/password pair your users are expected to remember and protect, which is always an exercise with diminishing returns (the greater the number of credentials
users have, the less likely they'll avoid risky behaviors like writing them down, choosing easy-to-guess passwords and so forth).
Therefore, you're much better off using existing user credentials on an external LDAP server (via the ldap_auth helper) on an NT Domain or Active Directory server (via
the msnt_auth helper) or the local Pluggable Authentication Modules (PAM) facility (via the pam_auth helper). See Resources for tutorials on how to set up Squid with
these three helpers.
Note that Squid's helper programs are located conventionally under /usr/lib/squid. Checking this directory is a quick way to see which helpers are installed on your
system, although some Linux distributions may use a different location.
Other Squid Defenses
Access Control Lists really are Squid's first line of defense—that is, Squid's primary mechanism for protecting your network, your users and the Squid server itself.
There are a couple other things worth mentioning, however.
First, there's the matter of system privileges. Squid must run as root, at least while starting up, so that, among other things, it can bind to privileged TCP ports
such as 80 or 443 (although by default it uses the nonprivileged port 3128). Like other mainstream server applications, however, Squid's child processes—the ones with
which the outside world actually interacts—are run with lower privileges. This helps minimize the damage a compromised or hijacked Squid process can do.
By default, Squid uses the user proxy and group proxy for nonprivileged operations. If you want to change these values for effective UID and GID, they're controlled by
squid.conf's cache_effective_user and cache_effective_group tags, respectively.
Squid usually keeps its parent process running as root, in case it needs to perform some privileged action after startup. Also, by default, Squid does not run in a
chroot jail. To make Squid run chrooted, which also will cause it to kill the privileged parent process after startup (that is, also will cause it to run completely
unprivileged after startup), you can set squid.conf's chroot tag to the path of a previously created Squid chroot jail.
If you're new to this concept, chrooting something (changing its root) confines it to a subset of your filesystem, with the effect that if the service is somehow hacked
(for example, via some sort of buffer overflow), the attacker's processes and activities will be confined to an unprivileged “padded cell” environment. It's a useful
hedge against losing the patch rat race.
Chrooting and running with nonroot privileges go hand in hand. If a process runs as root, it can trivially break out of the chroot jail. Conversely, if a nonprivileged
process nonetheless has access to other (even nonprivileged) parts of your filesystem, it still may be abused in unintended and unwanted ways.
Somewhat to my surprise, there doesn't seem to be any how-to for creating a Squid chroot jail on the Internet. The world could really use one—maybe I'll tackle this
myself at some point. In the meantime, see Resources for some mailing-list posts that may help. Suffice it to say for now that as with any other chroot jail, Squid's
must contain not only its own working directories, but also copies of system files like /etc/nsswitch.conf and shared libraries it uses.
Common Squid practice is to forego the chroot experience and to settle for running Squid partially unprivileged per its default settings. If, however, you want to run a
truly hardened Squid server, it's probably worth the effort to figure out how to build and use a Squid chroot jail.
Conclusion
Setting ACLs, running Squid with nonroot privileges most or all of the time and running Squid in a chroot jail constitute the bulk of Squid's built-in security
features. But, these are not the only things you can do to use Squid to enhance your network and end-user systems' security.
Next time, I'll show you how to use add-on tools such as SquidGuard to increase Squid's intelligence in how it evaluates clients' requests and servers' replies. I'll
also address (if not next time then in a subsequent column) some of the finer points of proxying TLS/SSL-encrypted sessions. Until then, be safe!
SQUIDGUARD
squidGuard lets you selectively enforce “blacklists” of Internet domains and URLs you don't want end users to be able to reach. Typically, people use squidGuard with
third-party blacklists from various free and commercial sites, so that's the usage scenario I describe in this article.
Put simply, squidGuard is a domain and URL filter. It filters domains and URLs mostly by comparing them against lists (flat files), but also, optionally, by comparing
them against regular expressions.
Getting and Installing Blacklists
Once you've obtained and installed squidGuard, you need a set of blacklists. There's a decent list of links to these at squidguard.org/blacklists.html, and of these, I
think you could do far worse than Shalla's Blacklists (see Resources), a free-for-noncommercial-use set that includes more than 1.6 million entries organized into 65
categories. It's also free for commercial use; you just have to register and promise to provide feedback and list updates. Shalla's Blacklists are the set I use for the
configuration examples through the rest of this article.
Once you've got a blacklist archive, unpack it. It doesn't necessarily matter where, so long as the entire directory hierarchy is owned by the same user and group under
which Squid runs (proxy:proxy on Ubuntu systems). A common default location for blacklists is /var/lib/squidguard/db.
To extract Shalla's Blacklists to that directory, I move the archive file there:
bash-$ mv shallalist.tar.gz /var/lib/squidguard/db
Then, I unpack it like this:
bash-$ sudo -s
bash-# cd /var/lib/squidguard/db
bash-# tar --strip 1 -xvzf shallalist.tar.gz
bash-# rm shallalist.tar.gz
Note also that at this point you're still in a root shell; you need to stay there for just a few more commands. To set appropriate ownership and permissions for your
blacklists, use these commands:
bash-# chown -R proxy:proxy /var/lib/squidguard/db/
bash-# find /var/lib/squidguard/db -type f | xargs chmod 644
bash-# find /var/lib/squidguard/db -type d | xargs chmod 755
bash-# exit
Configuring squidGuard
On Ubuntu and OpenSUSE systems (and probably others), squidGuard's configuration file squidGuard.conf is kept in /etc/squid/, and squidGuard automatically looks there
when it starts. As root, use the text editor of your choice to open /etc/squid/squidGuard.conf. If using a command-line editor like vi on Ubuntu systems, don't forget
to use sudo, as with practically everything else under /etc/, you need to have superuser privileges to change squidGuard.conf.
squidGuard.conf's basic structure is:
1. Options (mostly paths)
2. Time Rules
3. Rewrite Rules
4. Source Addresses
5. Destination Classes
6. Access Control Lists
dbhome /var/lib/squidguard/db
logdir /var/log/squid
acl {
default {
pass !remotecontrol !spyware all
redirect http://www.google.com
}
}
In this example, default is the name of the ACL. Your default squidGuard.conf file probably already has an ACL definition named default, so be sure either to edit that
one or delete it before entering the above definition; you can't have two different ACLs both named default.
The pass statement says that things matching remotecontrol (as defined in the prior Destination Class of that name) do not get passed, nor does spyware, but all (a wild
card that matches anything that makes it that far in the pass statement) does. In other words, if a given destination matches anything in the remotecontrol or spyware
blacklists (either by domain or URL), it won't be passed, but rather will be redirected per the subsequent redirect statement, which points to the Google home page.
Just to make sure you understand how this works, let me point out that if the wild card all occurred before !remotecontrol, as in “pass all !remotecontrol !spyware”,
squidGuard would not block anything, because matched transactions aren't compared against any elements that follow the element they matched. When constructing ACLs,
remember that order matters!
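The ACL above refers to destination classes named remotecontrol and spyware; a sketch of how they could be defined (the category directory names follow Shalla's layout, relative to the dbhome set earlier), plus the squid.conf line that hands requests to squidGuard (the directive name and binary path can vary by Squid version and distribution):
# /etc/squid/squidGuard.conf, above the acl block
dest remotecontrol {
    domainlist remotecontrol/domains
    urllist    remotecontrol/urls
}
dest spyware {
    domainlist spyware/domains
    urllist    spyware/urls
}
# /etc/squid/squid.conf (Squid 2.6-era directive)
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf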
Standard Proxy Cache
A standard proxy cache is used to cache static web pages (html and images) to a machine on the local network. When the page is requested a second time, the browser returns the data from the local proxy instead of the origin web server.
Transparent Cache
A transparent cache achieves the same goal as a standard proxy cache, but operates transparently to the browser. The browser does not need to be explicitly configured to access the cache. Instead, the transparent cache intercepts network traffic, filters HTTP traffic (on port 80), and handles the request if the item is in the cache. If the item is not in the cache, the packets are forwarded to the origin web server. For Linux, the transparent cache uses iptables or ipchains to intercept and filter the network traffic.
Reverse Proxy Cache
A reverse proxy cache differs from standard and transparent caches, in that it reduces load on the origin web server, rather than reducing upstream network bandwidth on the client side. Reverse Proxy Caches offload client requests for static content from the web server, preventing unforeseen traffic surges from overloading the origin server. The proxy server sits between the Internet and the Web site and handles all traffic before it can reach the Web server.
A reverse proxy is positioned between the internet and the web server
When a client browser makes an HTTP request, the DNS will route the request to the reverse proxy machine, not the actual web server. The reverse proxy will check its cache to see if it contains the requested item. If not, it connects to the real web server and downloads the requested item to its disk cache. The reverse proxy can only serve cacheable URLs (such as html pages and images).
Dynamic content such as cgi scripts and Active Server Pages cannot be cached. The proxy caches static pages based on HTTP header tags that are returned from the web page.
In order for this to work you will need Squid and iptables installed.
How do I enable a transparent proxy with Squid?
First find the following items in /etc/squid/squid.conf:
httpd_accel_host
httpd_accel_port
httpd_accel_with_proxy
httpd_accel_uses_host_header
Replace with the following :
# HTTPD-ACCELERATOR OPTIONS
# -----------------------------------------------------------------------------
# TAG: httpd_accel_host
# TAG: httpd_accel_port
# If you want to run Squid as an httpd accelerator, define the
# host name and port number where the real HTTP server is.
# If you want virtual host support then specify the hostname
# as "virtual".
# If you want virtual port support then specify the port as "0".
# NOTE: enabling httpd_accel_host disables proxy-caching and
# ICP. If you want these features enabled also, then set
# the 'httpd_accel_with_proxy' option.
#Default:
httpd_accel_host virtual
httpd_accel_port 80
# TAG: httpd_accel_with_proxy on|off
# If you want to use Squid as both a local httpd accelerator
# and as a proxy, change this to 'on'. Note however that your
# proxy users may have trouble to reach the accelerated domains
# unless their browsers are configured not to use this proxy for
# those domains (for example via the no_proxy browser configuration
# setting)
#Default:
httpd_accel_with_proxy on
# TAG: httpd_accel_uses_host_header on|off
# HTTP/1.1 requests include a Host: header which is basically the
# hostname from the URL. Squid can be an accelerator for
# different HTTP servers by looking at this header. However,
# Squid does NOT check the value of the Host header, so it opens
# a big security hole. We recommend that this option remain
# disabled unless you are sure of what you are doing.
# However, you will need to enable this option if you run Squid
# as a transparent proxy. Otherwise, virtual servers which
# require the Host: header will not be properly cached.
#Default:
httpd_accel_uses_host_header on
Next configure iptables to forward all http requests to the Squid server.
Change "Squid-Server-IP","Local-Network-IP", and "Machine-Running-Iptables" to your appropriate network settings.
Transparent Proxy configuration
In squid.conf
http_port 192.168.0.1:3128 transparent
iptables -t nat -A PREROUTING -i eth1 -s ! 192.168.233.129 -p tcp --dport 80 -j DNAT --to 192.168.233.129:3128
iptables -t nat -A POSTROUTING -o eth1 -s 192.168.233.130 -d 192.168.233.129 -j SNAT --to 192.168.233.130
iptables -A FORWARD -s 192.168.233.130 -d 192.168.233.129 -i eth1 -o eth1 -p tcp --dport 3128 -j ACCEPT
service iptables save
#Transparent proxy
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
#Two nics
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
Also, if you didn't already enable forwarding, add this to your /etc/sysctl.conf
net.ipv4.conf.default.forwarding=1
The first one sends the packets to squid-box from iptables-box. The second makes sure that the reply gets sent back through iptables-box, instead of directly to the client (this is very important!). The last one makes sure the iptables-box will forward the appropriate packets to squid-box. It may not be needed. YMMV. Note that we specified '-i eth0' and then '-o eth0', which stands for input interface eth0 and output interface eth0. If your packets are entering and leaving on different interfaces, you will need to adjust the commands accordingly.
It is only meant for the single interface…
That's all
# iptables -t nat -A PREROUTING -i eth0 -s ! SQUID-SERVER-IP -p tcp --dport 80 -j DNAT --to SQUID-SERVER-IP:3128
# iptables -t nat -A POSTROUTING -o eth0 -s LOCAL-NETWORK-IP -d SQUID-SERVER-IP -j SNAT --to MACHINE-RUNNING-IPTABLES
# iptables -A FORWARD -s LOCAL-NETWORK-IP -d SQUID-SERVER-IP -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT
# service iptables save
The DNS client utilities come from the bind-utils package.
nslookup utility - resolves hostnames; we get the resolution. It is a member of bind-utils.
dig is the standard utility for performing detailed DNS queries.
which nslookup ; rpm -qf $(which nslookup)   ( shows which package owns the binary )
Every one in the system can run nslookup and dig utility.
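A couple of quick examples (the names being looked up are placeholders):
nslookup www.example.com          # simple forward lookup
dig www.example.com A +short      # just the A record
dig -x 192.168.1.10 +short        # reverse (PTR) lookup
rpm -qf $(which dig)              # confirms the owning package, e.g. bind-utils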
Berkeley Software Distribution (BSD)
BSD is responsible for much of the popularity of Unix.
Basic Shell Skills
1. tty - reveals the current terminal
2. whoami - reveals the currently logged-in user
3. which - reveals where in the search path a program is located
4. echo - prints to the screen
a. echo $PATH - dumps the contents of the $PATH variable
b. echo $PWD - dumps the contents of the $PWD variable
c. echo $OLDPWD - dumps the most recently visited directory
5. set - prints and optionally sets shell variables
6. clear - clears the screen or terminal
7. reset - resets the screen buffer ( commands will be cleared in the terminal but not in history )
8. history - reveals your command history
a. !690 - executes the 690th command in our history
b. command history is maintained on a per-user basis via: ~/.bash_history
~ = users's $HOME directory in the BASH shell
9. pwd - prints the working directory
10. cd - changes to the $HOME directory
a. 'cd ' with no options changes to the $HOME directory
b. 'cd ~' changes to the $HOME directory
c. 'cd /' changes to the root of the file system
d. 'cd Desktop/' changes us to the relative directory tree
e. 'cd ..' changes us one-level up in the directory tree
f. 'cd ../..' changes us two-levels up in the directory tree
11. Arrow keys (up and down ) navigates through your command history
12. BASH supports tab completion:
a. type the unique leading characters of a command and press the 'Tab' key
13. You can copy and paste in GNOME terminal windows using:
a. left button to block
b. right button to paste or ctrl-shift-v to paste
14. ls - lists files and directories
a. ls / - lists the contents of the '/' mount point
b. ls -l - lists the contents of a directory in long format
Includes: permissions, links , ownership, size , date ,name
c. ls -ld /etc - lists properties of the directory '/etc', NOT the contents of '/etc'
d. ls -ltr - sorts chronologically from older (top) to newer (bottom)
e. ls -a - reveals hidden files eg '.bash_history'
Note: files/directories prefixed with '.' are hidden
15. cat - concatenates files
a. cat 123.txt - dumps the contents of '123.txt' to STDOUT
b. cat 123.txt 456.txt dumps both files to STDOUT
c. cat 123.txt 456.txt > 123456.txt - creates a new concatenated file
16. mkdir - creates a new directory
17. cp - copies files
18. mv - moves files
19. rm - removes files/directories (rm -rf removes recursively and by force)
20. touch - creates a blank file/updates timestamp
21. stat - reveals statistics of files
stat 123.txt - reveals full attributes of the file
22. find - finds files using search patterns
a. find / -name 'fstab'
Note: 'find' can search for fields returned by the 'stat' command
23. alias - returns/sets aliases for commands
a. alias - dumps current aliases
b. alias copy='cp -v'
### Linux Redirection & Pipes ##
Features:
1. Ability to control input and Output
Input redirection '<':
1. cat < 123.txt
Note: Use input redirection when program does not default to file as input
OutPut redirection '>':
1. cat 123.txt > onetwothree.txt
Note: Default nature is to:
1. Clobber the target file
2. Populate with information from input stream
Append redirection '>>':
1. cat 12.txt >> numbers.txt - creates 'numbers.txt' if it doesn't exist, or appends if it does
## Command Chaining ##
Features:
1. permits the execution of multiple commands in sequence
2. Also permits execution based on the success or failure of a previous command
1. cat 12.txt ; ls -l - this runs first command then second command without regards for exit status of the first command
2. cat 12.txt && ls -l - this runs second command, if first command is successful
3. cat 123.txt || ls -l - this runs second command, if first command fails
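A small combined example of redirection and chaining (the file names are placeholders):
cat 12.txt > numbers.txt && wc -l numbers.txt        # copy 12.txt into numbers.txt, count lines only if that succeeded
cat numbers.txt 123.txt >> all.txt ; ls -l all.txt   # append both files to all.txt, then list it regardless of exit status
cat missing.txt || echo "missing.txt was not found"  # run the fallback only if the first command fails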
24. more|less - paginators, which display text one page @ a time
1. more /etc/fstab
25. seq - echoes a sequence of numbers
a. seq 1000 > 1thousand.txt - creates a file with numbers 1-1000
26. su - switches users
a. su - with no options attempts to log in as 'root'
27. head - displays opening lines of text files
a. head /var/log/messages
28. tail - displays the closing lines of text files
a. tail /var/log/messages
29. wc - counts words and optionally lines of text files
a. wc -l /var/log/messages
b. wc -l 123.txt
30. file - determines file type
a. file /var/log/messages
Monitoring tools
Nagios
Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.
Nagios monitors your entire IT infrastructure to ensure systems, applications, services, and business processes are functioning properly. In the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes, end-users, or customers. With Nagios you'll never be left having to explain why an unseen infrastructure outage hurt your organization's bottom line.
It is a popular open-source computer system and network monitoring software application. It watches hosts and services, alerting users when things go wrong and again when they get better.
MTR – combines the functionality of ping and traceroute
Vnstat – console based network monitoring tool
Nmap – scan server for open ports
Ethereal – network protocol analyzer
It is used by network professionals around the world for troubleshooting, analysis, software and protocol development, and education. It has all of the standard features you would expect in a protocol analyzer, and several features not seen in any other product.
Ettercap is a Unix and Windows tool for computer network protocol analysis and security auditing. It is capable of intercepting traffic on a network segment
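Quick usage examples for the tools above (the target hosts and interfaces are placeholders):
mtr www.example.com              # live combined ping/traceroute view
vnstat -i eth0                   # traffic summary for one interface (vnstat must already be collecting)
nmap -v -p 1-1024 192.168.1.10   # scan the well-known ports on a single host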
** Ubuntu Book **
The kernel is primarily responsible for four main functions
- System memory management
- Software Program management
- Hardware Management
- Filesystem management
The kernel swaps the contents of virtual memory locations back and forth from the swap space to the actual physical memory. This process allows the system to think there is more memory available than what physically exists
The memory locations are grouped into blocks called pages. The kernel locates each page of memory in either the physical memory or the swap space. It then maintains a table of the memory pages that indicates which pages are in physical memory and which pages are swapped out to disk.
The special ipcs command allows us to view the current shared memory pages on the system.
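A hedged sketch of how to inspect this from a shell:
free -m        # physical memory and swap usage, in megabytes
swapon -s      # which swap spaces are currently active
ipcs -m        # the shared memory segments mentioned above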
The Linux operating system calls a running program a process. A process can run in the foreground, displaying output on a display, or it can run in the background, behind the scenes. The kernel controls how the Linux system manages all the processes running on the system.
The kernel creates the first process, called the init process, to start all other processes on the system. When the kernel starts, it loads the init process into virtual memory. As the kernel starts each additional process, it allocates to it a unique area in virtual memory to store the data and code that the process uses.
Most Linux implementations contain a table (or tables) of processes that start automatically on boot-up. This table is often located in the special file /etc/inittab. However, the Ubuntu Linux system uses a slightly different format, storing multiple table files in the /etc/event.d folder by default.
Saving iptables
If you were to reboot your machine right now, your iptables configuration would disappear. Rather than retype the rules each time you reboot, you can save the configuration and have it restored automatically. To save and restore the configuration, you can use iptables-save and iptables-restore.
Save your firewall rules to a file
# iptables-save >/etc/iptables.rules
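A hedged sketch of the restore side; the interface stanza is the common Ubuntu approach, and eth0 is only a placeholder:
# iptables-restore < /etc/iptables.rules
To reload the rules automatically at boot, add a pre-up line to the interface in /etc/network/interfaces:
auto eth0
iface eth0 inet dhcp
  pre-up iptables-restore < /etc/iptables.rules   # reload the saved rules before the interface comes up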
Two methods are used for inserting device driver code into the Linux kernel
- Drivers compiled in the kernel.
- Driver Modules added to the kernel.
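For the module route, the standard tools (a minimal sketch; the module name e1000 is just an example):
lsmod             # list modules currently loaded into the kernel
modinfo e1000     # show information about a module
modprobe e1000    # load a module along with its dependencies
rmmod e1000       # unload the module again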
san
rss red .. storage set
min hd 6 algo/// red boosted .... 12 2set rss...
apart from raid failure...
VMWARE
Bridged Networking
Bridged networking connects a virtual machine to a network by using the host computer’s network adapter. If your host computer is on a network, this is often the easiest way to give your virtual machine access to that network. The virtual network adapter in the virtual machine connects to the physical network adapter in your host computer, allowing it to connect to the LAN the host computer uses.
Bridged networking configures your virtual machine as a unique identity on the network, separate from and unrelated to its host. It makes the virtual machine visible to other computers on the network, and they can communicate directly with the virtual machine. Bridged networking works with both wired and wireless physical host network cards.
Network Address Translation (NAT)
NAT configures a virtual machine to share the IP and MAC addresses of the host. The virtual machine and the host share a single network identity that is not visible outside the network. NAT can be useful when your network administrator lets you use only a single IP address or MAC address. If you cannot give your virtual machine an IP address on the external network, you can use NAT to give your virtual machine access to the Internet or another TCP/IP network. NAT uses the host computer’s network connection. NAT works with Ethernet, DSL, and phone modems.
Setup Requirements for IP Addresses
If you use NAT, your virtual machine does not have its own IP address on the external network. Instead, a separate private network is set up on the host computer. Your virtual machine gets an address on that network from the VMware virtual DHCP server. The VMware NAT device passes network data between one or more virtual machines and the external network. It identifies incoming data packets intended for each virtual machine and sends them to the correct destination.
Host-Only Networking
Host-only networking creates a network that is completely contained within the host computer. Host-only networking provides a network connection between the virtual machine and the host computer, using a virtual network adapter that is visible to the host operating system. This approach can be useful if you need to set up an isolated virtual network. In this configuration, the virtual machine cannot connect to the Internet. For more information on host-only networking, see Selecting IP Addresses on a Host-Only Network or NAT Configuration.
############## CLOUD #############
Ubuntu Enterprise Cloud (UEC) is a private cloud built on Eucalyptus
Eucalyptus enables users to build their own private clouds that match the popular emerging standard of Amazon's
Elastic Compute Cloud (EC2)
The latest wave is cloud computing
Infrastructure (or Hardware) as a Service providers such as Amazon and FlexiScale.
Platform (or Framework) as a Service providers like Ning, BungeeLabs and Azure.
Application (or Software) as a Service providers like Salesforce, Zoho and Google Apps.
pay-per-use through utility charging
elastic infrastructure,
efficiency of resource utilisation, reduction of capital expenditure, focus on core activities
What this means is that more and more of the applications we use today on our personal computers or servers will soon migrate to the cloud and self-service IT environments.
In the cloud, these risks are heightened and new risks, such as a lack of transparency in relationships, appear.
However, cloud computing affects all layers of the computing stack, from infrastructure to application.
A private cloud offers a company the ability to quickly develop and prototype cloud-aware applications behind the firewall. This includes the development of privacy-sensitive applications such as credit card processing, medical record databases, classified data handling, etc.
High-performance applications whose load varies over time will benefit from being run on a platform that is “elastic”. Instead of having your IT infrastructure built for the sum of all the peak loads of the different applications, you can build a cloud infrastructure for the aggregated peak load at a single point in time instead. Furthermore, opportunities exist to burst from a private cloud to a public environment in times of peak load.
Self-Service IT: Using a private cloud technology, organisations can now put together a pool of hardware inside the firewall, a set of standard base images that should be used, and provide a simple web interface for their internal users to create instances on the fly. This should maximise the speed of development and testing of new services whilst reducing the backlog on IT.
• Cloud Controller (CLC)
• Walrus Storage Controller (WS3)
• Elastic Block Storage Controller (EBS)
• Cluster Controller (CC)
• Node Controller (NC)
Elastic Computing, Utility Computing, and Cloud Computing are (possibly synonymous) terms referring to a popular SLA-based computing paradigm that allows users to "rent" Internet-accessible computing capacity on a for-fee basis. While a number of commercial enterprises currently offer Elastic/Utility/Cloud hosting services and several proprietary software systems exist for deploying and maintaining a computing Cloud, standards-based open-source systems have been few and far between.
EUCALYPTUS -- Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems -- is an open-source software infrastructure for implementing Elastic/Utility/Cloud computing using computing clusters and/or workstation farms. The current interface to EUCALYPTUS is interface-compatible with Amazon.com's EC2 (arguably the most commercially successful Cloud computing service), but the infrastructure is designed to be modified and extended so that multiple client-side interfaces can be supported. In addition, EUCALYPTUS is implemented using commonly-available Linux tools and basic web service technology making it easy to install and maintain.
Cloud computing is Internet- ("cloud-") based development and use of computer technology ("computing"). In concept, it is a paradigm shift whereby details are abstracted from the users who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption and delivery model for IT services based on Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.
The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from a web browser, while the software and data are stored on servers.
These applications are broadly divided into the following categories: Software as a Service (SaaS), Utility Computing, Web Services, Platform as a Service (PaaS), Managed Service Providers (MSP), Service Commerce, and Internet Integration. The name cloud computing was inspired by the cloud symbol that is often used to represent the Internet in flow charts and diagrams.
Cloud Controller
The Cloud Controller (CLC) is the most visible element of the Eucalyptus architecture, as it provides the interface with which users of the cloud interact. This interface is composed of a standard SOAP-based API matching the Amazon EC2 API (see Amazon EC2 API below), a simpler “Query Interface” which euca2ools and ElasticFox use, and a traditional web interface for direct user interaction. The CLC talks with the Cluster Controllers (CC) and makes the top-level choices for allocating new instances. This element holds all information linking users to running instances, the collection of available machines to be run, as well as a view of the load of the entire system.
Walrus Storage Controller
The Walrus Storage Controller (WS3) implements a REST (Representational State Transfer) and a SOAP (Simple Object Access Protocol) API which are compatible with Amazon's Simple Storage Service (S3). It is used for:
• Storing the machine images (MI) that can be instantiated on our cloud;
• Accessing and storing data (either from a running instance or from anywhere on the web).
WS3 should be considered as a file level storage system. While it does not provide the ability to lock a file or portion of a file, users are guaranteed that a consistent copy of the file will be saved if there are concurrent writes to the same file. If a write to a file is encountered while there is a previous write in progress, the previous write is invalidated. Currently, the machine on which the Cloud Controller runs also hosts the Walrus Storage Controller (WS3), but this limitation will be removed in a forthcoming version.
Elastic Block Storage Controller
The Elastic Block Storage Controller (EBS) runs on the same machine(s) as the Cluster Controller and is configured automatically when the Cluster Controller is installed. It allows you to create persistent block devices that can be mounted on running machines in order to gain access to a virtual hard drive. Storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface. You can create a file system on top of EBS volumes, or use them in any other way you would use a block device. EBS also provides the ability to create point-in-time snapshots of volumes, which are stored on WS3. These snapshots can be used as the starting point for new EBS volumes and protect data for long-term durability. The same snapshot can be used to instantiate as many volumes as you wish. At the network level, the block device is accessed using ATA over Ethernet (AoE). Since AoE packets cannot be routed, the EBS controller and the Nodes hosting the machine images that access it must be on the same Ethernet segment. It is planned to add a more flexible protocol, such as iSCSI, in a future version.
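A hedged sketch of these EBS operations using euca2ools (the zone name, instance ID, and volume ID below are only placeholders):
euca-create-volume -s 10 -z cluster1                      # create a 10 GB volume in the "cluster1" availability zone
euca-attach-volume -i i-ABCD1234 -d /dev/sdb vol-12345678 # expose it to a running instance as a block device
euca-create-snapshot vol-12345678                         # point-in-time snapshot, stored on WS3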
Cluster Controller
The Cluster Controller (CC) operates as the go-between between the Node Controller and the Cloud Controller. As such, it needs to have access to both the Node Controller and Cloud Controller networks. It receives requests to allocate machine images (MI) from the Cloud Controller and in turn decides which Node Controller will run the machine instance (MInst). This decision is based upon the status reports which the Cluster Controller receives from each of the Node Controllers. It can also answer requests from the Cloud Controller asking for its leftover capacity to run specific instance types, hence allowing the Cloud Controller to decide on which cluster to run new instances. The Cluster Controller is also in charge of managing any virtual networks that the MInst run in and routing traffic to and from them. Its precise role greatly depends on the networking model chosen to run MInst, which we will describe later in this document in the Networking and Security section.
As described above, the Cluster Controller also runs the EBS Controller. As a whole, the group formed of one Cluster Controller, one EBS Controller and a variable number of Node Controllers constitutes the equivalent of Amazon's “availability zones”.
Node Controller
The Node Controller (NC) software runs on the physical machines on which the MI will be instantiated. The NC software's role is to interact with the OS and hypervisor running on the node, as instructed by the Cluster Controller. The Node Controller's first task is to discover the environment in which it runs in terms of available resources (disk space, type and number of cores, memory), as well as any running VMs that could have been started independently of the NC, CC, and CLC. The Node Controller then waits for and performs any tasks requested by the Cluster Controller (start and stop instances) or replies to availability queries. When requested to start an MI, it will:
1. Verify the authenticity of the user request;
2. Download the image from WS3 (images are cached so that starting multiple instances of the same machine image only downloads that image once);
3. Create the requested virtual network interface;
4. Start the instance of the machine image running as a virtual machine (VM).
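From a user's point of view, this whole chain (CLC → CC → NC) is typically exercised with euca2ools; a hedged sketch, where the key pair name, instance type, and image/instance IDs are only placeholders:
euca-describe-images                                   # list the machine images registered in WS3
euca-run-instances -k mykey -t m1.small emi-12345678   # ask the CLC to start one instance of an image
euca-describe-instances                                # watch the instance move from pending to running
euca-terminate-instances i-ABCD1234                    # stop the instance again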
A Virtual Private Network, or VPN, is an encrypted network connection between two or more networks. There are several ways to create a VPN using software as well as dedicated hardware appliances. This chapter will cover installing and configuring OpenVPN to create a VPN between two servers.
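A hedged sketch of the simplest OpenVPN setup, a static-key point-to-point tunnel between two servers (the hostnames, tunnel addresses, and key path are only placeholders):
sudo apt-get install openvpn                              # on both servers
openvpn --genkey --secret /etc/openvpn/static.key         # generate the shared key on one server, copy it securely to the other
On server A:
openvpn --remote serverB.example.com --dev tun0 --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key
On server B:
openvpn --remote serverA.example.com --dev tun0 --ifconfig 10.8.0.2 10.8.0.1 --secret /etc/openvpn/static.key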
Kernel
What is the kernel ?
The kernel is the software that directly manages your hardware, allowing application libraries and software like GNOME and Firefox to run on many types of hardware without much difficulty. Because the Linux kernel is the core component of a GNU/Linux system, when it is upgraded, a full restart is required.
Types
Ubuntu packages the Linux kernel for a variety of architectures, including several variants of the x86 architecture. These include a 386 version, a 686 version, and versions for the AMD k6 and k7 processors. While most software for x86 processors in Ubuntu is compiled for 386 or better instruction sets, the kernel and a few other packages are specifically compiled for certain processors for speed reasons. Check the package documentation to determine what type of kernel will perform best for your processor.
Versions
Ubuntu packages the latest 2.6 kernel for optimal desktop speed and features. However, if you want to use 2.4, you still can.
SMP
Some motherboards have more than one processor on them, and some processors have multiple cores. If your computer is like this, then the SMP kernel is for you. Non-SMP kernels will not be able to take advantage of your multiple processors. However, if you do not have multiple processors, the additional code in an SMP kernel will only slow you down. Naturally, Ubuntu provides both SMP and non-SMP kernels for all supported architectures.
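A quick hedged check of what you are running and whether an SMP kernel would help:
uname -r                              # currently running kernel version and flavour
grep -c '^processor' /proc/cpuinfo    # number of logical processors the kernel can see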
PAE
PAE allows the 32-bit version of Ubuntu to access up to 64 GB of memory, but it isn't enabled in the generic kernel. To enable PAE, install the server kernel.
ZFS was designed and implemented by a team at Sun led by Jeff Bonwick. It was announced on September 14, 2004.[3] Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005[4] and released as part of build 27 of OpenSolaris on November 16, 2005. Sun announced that ZFS was included in the 6/06 update to Solaris 10 in June 2006, one year after the opening of the OpenSolaris community.[5]
The name originally stood for "Zettabyte File System". Those who chose the name happened to like it, and a ZFS file system has the ability to store 340 quadrillion zettabytes (256 pebi-zebibytes exactly, or 2^128 bytes). Every ZiB is 2^70 bytes.[6]
The features of ZFS include support for high storage capacities, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.
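A hedged sketch of these concepts on a Solaris/OpenSolaris host (the pool name, disk names, and dataset names are only placeholders):
zpool create tank mirror c0t0d0 c0t1d0    # pooled storage: filesystem and volume management in one step
zfs create tank/home                      # datasets are created inside the pool, no separate partitioning
zfs snapshot tank/home@before-upgrade     # copy-on-write snapshot
zfs rollback tank/home@before-upgrade     # roll the dataset back to that snapshot
zpool scrub tank                          # walk every block, verify checksums, repair from the mirror where needed
zpool status tank                         # report pool health and scrub results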
### WINDOWS 2003 Server ###
DHCP
Allows a central server to assign and manage IP addresses
Authorization of the DHCP server
Address pool - the range of IP addresses available for lease, together with any excluded addresses that are not supposed to be handed out
Address Leases - the IP addresses currently leased out to clients (see the client-side commands at the end of this section)
Reservation - a reservation ensures that a DHCP client is always assigned the same IP address (keyed on the client's MAC address, entered without dashes)
Scope options - these options are handed down to the clients in the scope
WINS/NBT node type - b-node (broadcast), p-node (point-to-point), m-node (mixed), h-node (hybrid - contacts the WINS server first; recommended). It is the communication mechanism by which the client interacts with the WINS server and resolves NetBIOS names.
New scope - a new range of IP addresses
We can create any number of scopes. The DNS and WINS server options are filled in automatically; when we create a new scope without specifying the DNS server, it is added automatically.
A superscope is an administrative grouping of scopes
Multicast scope - enables point-to-multipoint communication; the client requests a multicast IP address
Often used for streaming media, such as video conferencing, etc.
eg: the client requests the address, for instance by clicking on a URL, and is then handed the multicast address
Backup
Backup simply takes a copy of the DHCP database and stores it in "system32/dhcp/backup"
Reconcile All Scopes - recovers inconsistencies between the DHCP database and the statistics (IP addresses that have been leased out, the IP pool)
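On the Windows client side, the leases described above can be checked and refreshed with the standard commands (a hedged sketch):
ipconfig /all - shows the leased address, lease times, and the DNS/WINS options handed down by the scope
ipconfig /release - gives the lease back to the DHCP server
ipconfig /renew - requests a fresh lease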
DHCP Relay Agent
The DHCP relay agent listens for DHCP requests on the remote segment and forwards them to the DHCP server on the local segment
Routing and Remote Access provides secure remote access to private networks
- A secure connection between two private networks.
- Network address translation
- Lan routing
- VPN
Enable it and choose Custom configuration, choose LAN routing, then create a new DHCP relay agent interface
##### Introduction and Installing DNS #####
It is a method of resolving between IP addresses and friendly (English) names; WINS does this for the Windows network
DNS is an integral part of Active Directory and the foundation of the Windows network
WINS is the name-resolution service for the Windows network
NetBIOS is the software used for name resolution by all the Windows clients
DNS
computername + domain suffix
server02.testdomain.com
The DNS structure is hierarchical
DNS (Domain Name System) servers translate domain and computer DNS names to IP addresses
If you plan to set up this server as a domain controller and it is the first domain controller in the domain, select the domain controller role. In this case, the domain controller role configures the server so that DNS and Active Directory work together.
### DNS resolution from the client perspective ###
windows hosts file
c:\windows\system32\drivers\etc\hosts - the hosts file is checked first; if a match is found there, DNS resolution ends
1. Check the hosts file entries - the hosts file is checked before the cache
2. Local cache check - the client keeps a copy of each record in its cache for a limited amount of time;
when that time expires the record is deleted from the cache.
3. negative caching
solving Negative Caching
- Add a Hosts file entry
- Wait for the time to live(ttl) to expire
- Run ipconfig /flushdns at a command prompt
eg:- Interesting
Suppose the client is looking for a website called www.winstructor.com and has no idea about its IP address.
The first thing the client will do is check the local hosts file "\windows\system32\drivers\etc\hosts"; if the IP and website are found here it will directly contact the server, and that is the end of the DNS lookup. Say, for example, the IP address of the website has changed - then it won't be able to find the website.
If the entries are not found in the hosts file then it will check the "local cache".
The DNS record is kept in the local cache for some amount of time, depending on the TTL value.
Negative caching
If a client is "ping fileserver.test.com" it was not able to reach, we found out that the hostname of the server was having some mistake, we rectified it and rebooted. from the client we executed the same command then again it says "fileserver is not reachable" this means the client have the cache record, to resolve the problem we need to flush the cache. "ipconfig /flushdns " now it will ping
DNS resolution from a client perspective
- check for the local hosts file
- local cache checked
- ask the DNS server for help through UDP port 53
- the client's request to the DNS server includes the port on which it expects the reply
- The DNS server checks its local DNS cache; if it finds a match it immediately "Responds on the Requested port"
- However, if the DNS server does not have the record cached, it checks its zone ("DNS Server Checks its Zone")
A zone is simply a namespace over which the server has been given authority. If the zone file also fails to yield the IP address of the server:
- It then contacts the "Root Hints" -> the DNS server contacts a root DNS server. If the root server also fails to find
"www.winstructor.com", it sends the IP addresses of the .com domain servers back to the DNS server
from which it got the DNS request.
The root DNS server does not know the winstructor.com IP address, but it does know who knows this information.
- The .com domain servers send the address of the winstructor.com name server to the DNS server; the DNS server
requests the DNS record from the "winstructor DNS server", and then the DNS server caches the record
- After caching the record, the DNS server sends the IP address to the client (user).
- The user then requests the web page directly from the "www.winstructor.com" web server and retrieves
the web page.
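The same chain can be watched from the client with nslookup (a hedged sketch; the alternate DNS server IP is only a placeholder):
nslookup www.winstructor.com - ask the client's configured DNS server, exactly as in the steps above
nslookup www.winstructor.com 192.168.1.1 - ask a specific DNS server instead, useful for comparing answers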
### DNS Zones ###
It is simply a file on the domain name server that manages a portion of the domain namespace
Forward Lookup Zones and Reverse Lookup Zones
Forward - when we are mapping a hostname to an IP address we are performing a forward lookup. hostname -> IP
Reverse - we know the IP address but we don't know the hostname it maps to. IP -> hostname
Primary zone - creates a copy of a zone that can be updated directly on this server (read and write )
Secondary zone - creates a copy of a zone that exists on another server. This option helps balance the processing load of primary servers and provides fault tolerance. (A copy of the primary zone; secondary zones are read-only, which
means they cannot be updated directly. The primary DNS server copies the zone to the secondary DNS server, and clients that contact the secondary DNS server may get a faster response than if they had to contact the primary DNS server.)
Stub zone - creates a copy of a zone containing only Name Server (NS), Start of Authority (SOA), and possibly glue host (A) records. A server containing a stub zone is not authoritative for that zone
It provides forward lookup to the external domain - it contains the name server entries for the external domain. This record is
called a "glue record". The DNS server doesn't have to go to the .com domain to locate the external domain; because it is
preconfigured, we can forward queries to the right DNS server.
Forward lookup zone works from - general to specific
Reverse lookup zone works from - specific to general
In the secondary domain controller, we can't make changes in the dns zone files.
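A hedged sketch of what a simple standard primary forward lookup zone file might look like (Windows stores file-backed zones under %SystemRoot%\system32\dns; all names and addresses below are placeholders):
; testdomain.com.dns - forward lookup zone for testdomain.com
@   IN  SOA  server02.testdomain.com.  hostmaster.testdomain.com.  (
              1       ; serial
              900     ; refresh
              600     ; retry
              86400   ; expire
              3600 )  ; minimum TTL
@         IN  NS   server02.testdomain.com.
server02  IN  A    192.168.1.10
www       IN  A    192.168.1.20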