AIDE (Advanced Intrusion Detection Environment) setup

AIDE is a host-based file and directory integrity checking tool, similar to Tripwire. It creates a snapshot of file details during initialization and stores them in a database. The files AIDE monitors are defined by user-written rules, where the admin specifies which directories and files to keep an eye on. The snapshot is essentially a message digest of the file and directory information returned by the stat command. Once AIDE is initialized, it can detect any future changes and alert the admin. AIDE can be configured to run on a scheduled basis, using cron jobs for instance.
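
The monitoring rules live in /etc/aide.conf. As a rough, illustrative sketch (the group name and attribute set below are hypothetical – check the stock config shipped with your distribution for the real predefined groups):

# define a custom group of attributes to track: permissions, inode,
# owner, group, ACLs and SELinux context
PERMS = p+i+u+g+acl+selinux

/etc    PERMS      # watch /etc with the PERMS group
/bin    NORMAL     # NORMAL is a group predefined in the stock config
!/var/log          # exclude a noisy directory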

Installation

yum list aide
yum install aide

Initialization

Create the AIDE DB – it stores a snapshot of file and directory stats gathered by scanning the monitored resources.

$ /usr/sbin/aide --init
$ mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

To minimize false positives, set PRELINKING=no in /etc/sysconfig/prelink and run

 /usr/sbin/prelink -ua

to restore the binaries to their original, un-prelinked state.

Scheduled integrity checks
Add a cron job to check file integrity, say every morning at 8 AM. Note that entries in /etc/crontab require a user field –

echo '0 8 * * * root /usr/sbin/aide --check' >> /etc/crontab

Updating the DB after making changes, or after verifying changes reported during a check –

$ aide -c aide.conf --update
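
Like --init, --update writes the new snapshot to a .new file. Move it into place (paths as in the initialization step above) before the next check –

$ mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz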

References –

AIDE (Advanced Intrusion Detection Environment)

Linux – run a scheduled command once

When we think of running scheduled tasks in Linux, the first tool that comes to mind for most Linux users and admins is cron. Cron is very popular and useful when you want to run a task regularly – say at a given interval, hourly, weekly, or even every time the system reboots. The scheduled tasks are faithfully executed by the crond daemon based on the schedule we set; if crond misses a task because the machine was not running 24/7, anacron takes care of it. My topic today, though, is at, which executes a scheduled task only once at a later time.

1. Adding future commands interactively

Let us schedule a specific command to run 10 minutes from now; press CTRL+D once you have entered the command –

daniel@lindell:~$ at now +10 minutes
at> ps aux &> /tmp/at.log
[[PRESS CTRL+D HERE]]
job 4 at Wed Mar  1 21:24:00 2017

Now the above command ‘ps aux’ is scheduled to run 10 minutes from now, only once. We can check the pending jobs using the atq command –

daniel@lindell:~$ atq
4	Wed Mar  1 21:24:00 2017 a daniel

2. Remove scheduled jobs from the queue using atrm or at -r

daniel@lindell:~$ at now +1 minutes
at> ps aux > /tmp/atps.logs
at> <EOT>
job 8 at Wed Mar  1 21:25:00 2017
daniel@lindell:~$ atq
8	Wed Mar  1 21:25:00 2017 a daniel
daniel@lindell:~$ atrm 8
daniel@lindell:~$ atq
daniel@lindell:~$ 

3. Run jobs from a script or file.

In some cases the job you want to run is a script –

daniel@lindell:~$ at -f /tmp/myscript.sh 8:00 AM tomorrow
daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel

4. Embed shell commands inline –

at now +10 minutes <<-EOF
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi
EOF

5. View the contents of a scheduled task using ‘at -c JOBNUMBER’ –

daniel@lindell:~$ at now +10 minutes <<-EOF
> if [ -d ~/pythonscripts ]; then
>  find ~/pythonscripts/ -type f -iname '*.pyc' -delete
> fi
> EOF
job 13 at Wed Mar  1 21:51:00 2017

daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel
12	Wed Mar  1 21:45:00 2017 a daniel
13	Wed Mar  1 21:51:00 2017 a daniel


daniel@lindell:~$ at -c 13
 [[ TRUNCATED ENVIRONMENTAL STUFF ]]
cd /home/daniel || {
	 echo 'Execution directory inaccessible' >&2
	 exit 1
}
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi

In this small tutorial about the at utility, we saw some of its use cases – especially those where we had to execute a scheduled task only once. The time specification it uses is human friendly; for example, it supports time specs such as midnight, noon, teatime or today. Feel free to read the man pages for details.
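
For example, to run the script from step 3 at teatime (which at interprets as 4 PM) –

at -f /tmp/myscript.sh teatime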

References –

https://linux.die.net/man/1/at

How to run playbooks against a host running ssh on a port other than port 22.

Ansible is a simple automation and configuration management tool which allows you to execute commands or scripts on remote hosts, either ad hoc or using playbooks. It is push based, and uses ssh to run the playbooks against remote hosts. The steps below show how to run ansible playbooks against a host running ssh on port 2222.

One of the hosts managed by ansible is running ssh on a non-default port. It is a docker container listening on port 2222 – ssh inside the container actually listens on port 22, but the host redirects port 2222 to port 22 in the container.

1. Pass the port as an extra variable –


 ansible-playbook tasks/app-deployment.yml --check -e ansible_ssh_port=2222

2. Specify the port in the inventory or hosts file –

In the hosts file, set the hostname in the format ‘server:port’ –

[docker-hosts]
docker1:2222
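
Alternatively, you can set the port as a host variable in the inventory (ansible_ssh_port shown here; Ansible 2.0 renamed it to ansible_port) –

[docker-hosts]
docker1 ansible_ssh_port=2222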

Let us run the playbook now –

root@linubuvma:/tmp/ansible# cat tasks/app-deployment.yml
- hosts: docker-hosts
  vars:
    app_version: 1.1.0
  tasks:
  - name: install git
    apt: name=git state=latest
  - name: Checkout the application from git
    git: repo=https://github.com/docker/docker-py.git dest=/srv/www/myapp version={{ app_version }}
    register: app_checkout_result


root@linubuvma:/tmp/ansible# ansible-playbook tasks/app-deployment.yml

PLAY [docker-hosts] ************************************************************

TASK: [install git] ***********************************************************
changed: [docker1]

TASK: [Checkout the application from git] *************************************
changed: [docker1]

PLAY RECAP ********************************************************************
docker1                    : ok=2    changed=2    unreachable=0    failed=0

References –

http://docs.ansible.com/
http://docs.ansible.com/ansible/intro_inventory.html

How to add a disk to a running VM

This tutorial will show you how to get a running Linux VM to detect a newly added VMware virtual disk without rebooting it.

1. Start by adding the SCSI virtual disk

In my case I am using VMware Workstation and following the VMware documentation (see the references below) to add the disk. Use the steps relevant to your system; it shouldn’t be that difficult.

2. Status of the disk on the VM

By running fdisk on my Linux VM, I can see that it has two disks attached – /dev/sda and /dev/sdb – both 107.4 GB in size. The latter contains an LVM partition, which I can resize on the fly.

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1 

[root@lincenvma ~]# ls /sys/class/scsi_host/
host0  host1  host2

After adding the disk, the VM didn’t automatically detect it; that takes us to the next step of rescanning the SCSI bus.

3. Rescan the SCSI bus

This is where we run the trigger command, using wildcards for the channel number, SCSI target ID, and LUN values so the whole SCSI bus is scanned. Check the /var/log/dmesg log file or run the dmesg command in another window to see the action live –

[root@lincenvma ~]# echo "- - -" > /sys/class/scsi_host/host2/scan

[root@lincenvma ~]# dmesg
sd 2:0:2:0: [sdc] Write Protect is off
sd 2:0:2:0: [sdc] Mode Sense: 61 00 00 00
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
 sdc: unknown partition table
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Attached SCSI disk


[root@lincenvma ~]# tail -f /var/log/messages
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Write Protect is off
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sdc: unknown partition table
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Attached SCSI disk

We can see that the system detected the new disk and identified it as /dev/sdc.

In RHEL/CentOS 5.4 or above, the script /usr/bin/rescan-scsi-bus.sh will have the same effect.
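
If you are not sure which hostN entry corresponds to the adapter the disk was added to, it is harmless to rescan all of them –

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done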

4. Validate

At the bottom, fdisk shows the new disk as /dev/sdc with size 1073 MB –

[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1  /dev/sdc

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

From here, you can partition the disk and mount it directly, or create a physical volume and add it to an existing volume group to grow a logical volume.
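
As a rough sketch of the LVM route (assuming the vg_target00 volume group from the fdisk output above and an ext4 file system) –

pvcreate /dev/sdc
vgextend vg_target00 /dev/sdc
lvextend -L +1G /dev/vg_target00/lv_target00
resize2fs /dev/vg_target00/lv_target00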

References –

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/adding_storage-device-or-path.html

http://serverfault.com/questions/490397/what-does-in-echo-sys-class-scsi-host-host0-scan-mean

https://www.vmware.com/support/ws5/doc/ws_disk_add_virtual.html

How to get the original file from an RPM.

You might have accidentally deleted a configuration or binary file that was installed as part of a package, or maybe you modified the original file and want to restore it but didn’t take a backup – this post will help you resolve similar issues.

The steps below are for Redhat/CentOS based Linux systems, where the package was installed using rpm or yum. They basically outline how to grab the rpm package, unpack it, and gain access to the files inside. I will demo the steps I used to recover ntp.conf –

1. Identify the package owning/containing the file –

[root@tester ~]# rpm -qf /etc/ntp.conf
ntp-4.2.6p5-1.el6.centos.x86_64

2. Download the original package –
We will download the rpm package to /tmp in order to unpack it later –

[root@tester ~]# cd /tmp/
[root@tester tmp]# yumdownloader ntp-4.2.6p5-1.el6.centos.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.aol.com
 * epel: reflector.westga.edu
 * extras: centos-distro.cavecreek.net
 * updates: lug.mtu.edu
ntp-4.2.6p5-1.el6.centos.x86_64.rpm    | 592 kB     00:00
[root@tester tmp]# ls -lh ntp-4.2.6p5-1.el6.centos.x86_64.rpm
-rw-r--r--. 1 root root 592K Mar  9 03:19 ntp-4.2.6p5-1.el6.centos.x86_64.rpm

Note – you can follow the steps in this link to install yumdownloader, or use alternative means to download a package. The short answer: just run ‘yum install yum-utils’ to install yumdownloader.

3. Extract the RPM package –

We will use rpm2cpio to extract the RPM package and then pipe to cpio to copy the files from the archive –

[root@tester tmp]# rpm2cpio ntp-4.2.6p5-1.el6.centos.x86_64.rpm | cpio -i --make-directories
3344 blocks
[root@tester tmp]# ls
etc  ntp-4.2.6p5-1.el6.centos.x86_64.rpm  usr  var  yum_save_tx-2014-03-09-01-00h9I83Y.yumtx
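
If you only need a single file, you can pass a pattern to cpio instead of unpacking the whole archive –

rpm2cpio ntp-4.2.6p5-1.el6.centos.x86_64.rpm | cpio -ivd './etc/ntp.conf'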

4. Access the file you are looking for –

Once we have extracted the rpm package, the directory structure is easy to navigate – for instance, if we are looking for ntp.conf, it is under etc/ntp.conf – the directory structure mirrors that of the OS –

[root@tester tmp]# ls -al etc/
total 28
drwxr-xr-x. 6 root root 4096 Mar  9 03:19 .
drwxrwxrwt. 6 root root 4096 Mar  9 03:19 ..
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 dhcp
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 ntp
-rw-r--r--. 1 root root 1778 Mar  9 03:19 ntp.conf
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 rc.d
drwxr-xr-x. 2 root root 4096 Mar  9 03:19 sysconfig
[root@tester tmp]# ls -al etc/ntp
ntp/      ntp.conf
[root@tester tmp]# ls -al etc/ntp.conf
-rw-r--r--. 1 root root 1778 Mar  9 03:19 etc/ntp.conf

At this point, you can view the files from the original rpm and copy the ones you need. You might also find the links below useful; for quickly re-installing the original files, you can simply run ‘yum reinstall ntp’.

References –

https://access.redhat.com/solutions/10154
https://www.g-loaded.eu/2012/03/26/restore-original-configuration-files-from-rpm-packages/

tcpdump – how to grep or save output in real time

Tcpdump is a handy tool for capturing network packets. It will keep capturing packets until it receives a SIGINT or SIGTERM signal, or until the specified number of packets has been processed. If you have tried to pipe the output of tcpdump to a file or to grep it, you will have noticed a significant delay before any output appears. The reason is that tcpdump buffers its output in 4 KB chunks and doesn’t flush until a full 4 KB of data has been captured.

To get around the buffering, you can use the ‘-l’ option to see the captured packets in real time, so you can ‘grep’ the output or ‘tee’ it to a file. From the man page –


-l Make stdout line buffered. Useful if you want to see the data while capturing it. E.g.,
``tcpdump -l | tee dat'' or ``tcpdump -l > dat & tail -f dat''.

Send output to a file while watching the captured packets in real time –

root@linubuvma:~# tcpdump -l -i any -qn port 53 | tee -a /tmp/dnslogs
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
09:02:48.772892 IP 192.168.10.206.29185 > 192.168.10.109.53: UDP, length 33
09:02:48.773196 IP 192.168.10.206.35333 > 192.168.10.109.53: UDP, length 33
09:02:48.775062 IP 192.168.10.109.53 > 192.168.10.206.29185: UDP, length 78
09:02:48.775085 IP 192.168.10.109.53 > 192.168.10.206.35333: UDP, length 117
09:02:50.274318 IP 192.168.10.206.46983 > 192.168.10.109.53: UDP, length 33
09:02:50.274695 IP 192.168.10.206.55061 > 192.168.10.109.53: UDP, length 33
09:02:50.275531 IP 192.168.10.109.53 > 192.168.10.206.46983: UDP, length 78
09:02:50.276384 IP 192.168.10.109.53 > 192.168.10.206.55061: UDP, length 117

Grep text pattern in real time –

root@linubuvma:~# tcpdump -l -i any -vv |grep --color -i google
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
    linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1 -> 0x4797!] 34365+ A? google.com. (28)
    linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1 -> 0x9bf1!] 12744+ AAAA? google.com. (28)
    ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 12744 q: AAAA? google.com. 1/0/0 google.com. AAAA 2607:f8b0:4002:c07::66 (56)
    ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 34365 q: A? google.com. 6/0/0 google.com. A 74.125.196.139, google.com. A 74.125.196.100, google.com. A 74.125.196.101, google.com. A 74.125.196.102, google.com. A 74.125.196.113, google.com. A 74.125.196.138 (124)
173 packets captured
240 packets received by filter
0 packets dropped by kernel
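
Note that grep itself buffers its output when writing to a pipe rather than a terminal; if you want to grep and save to a file in one pipeline, GNU grep’s --line-buffered flag keeps the output flowing –

tcpdump -l -i any -qn port 53 | grep --line-buffered '192.168.10.109' | tee -a /tmp/dnslogs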

References –
http://www.tcpdump.org/tcpdump_man.html
http://unix.stackexchange.com/questions/15989/how-to-process-pipe-tcpdumps-output-in-realtime

Redhat Satellite or Spacewalk – real-time push to clients.

By default, a client waits for a set interval (in minutes), configured in /etc/sysconfig/rhn/rhnsd, before pulling scheduled tasks from the satellite server. For instance, if a remote command is set to be executed on a client or a patch is waiting to be applied, rhn_check may have to wait up to 60 minutes to pick up the task.

For real-time command execution or patch and configuration deployment, the following steps have to be performed on the server and clients –

1. Server : Install osa-dispatcher

root:homevm:~:# rpm -q osa-dispatcher
osa-dispatcher-5.11.43-1.el6.noarch

root:homevm:~:# service osa-dispatcher status

root:homevm:~:# chkconfig osa-dispatcher on

root:homevm:~:# chkconfig osa-dispatcher --list
osa-dispatcher  0:off   1:off   2:on    3:on    4:on    5:on    6:off

2. Client : Install and enable osad (OSA daemon).

# yum install osad -y
# chkconfig osad on
# /etc/init.d/osad restart

3. Client : Make sure the deploy and run options are enabled –

# rhn-actions-control --enable-run
# rhn-actions-control --enable-deploy

# rhn-actions-control --report
deploy is enabled
diff is enabled
upload is enabled
mtime_upload is enabled
run is enabled
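
To verify the real-time push end to end, schedule an action (say, a remote command) from the satellite web UI – osad should pick it up within seconds. You can also force the client to poll immediately –

# rhn_check -vv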

Extra steps in case you encounter SSL certificate issues –
OSA is picky about SSL certificate verification; make sure the right CA cert is deployed on the client, and that the serverURL in up2date matches the CN on the server certificate.

1. Copy the RHN certificate from the satellite server to the client; make sure the cert has not expired and that the CN matches the server name.

root:homevm:~:# openssl x509 -in /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT -noout -subject
subject= /C=US/ST=CA/L=SanFrancisco/O=home.net/OU=spacewalk.home.net/CN=homevm.home.net

root:homevm:~:# openssl x509 -in /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT -noout -dates
notBefore=Aug  2 06:04:05 2014 GMT
notAfter=Jul 27 06:04:05 2036 GMT

root:homevm:~:# scp /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT root@client:/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT

[root@blackhat rpm-gpg]# grep -i serverurl /etc/sysconfig/rhn/up2date 
serverURL=http://homevm.home.net/XMLRPC

2. If you get a certificate error during package deployment, copy the RPM GPG public keys from the satellite to the clients.
On the server side –

root:homevm:/etc/pki/rpm-gpg:# ls -al RPM-GPG-KEY-*
-rw-r--r-- 1 root root 1706 Nov 30  2013 RPM-GPG-KEY-CentOS-6
-rw-r--r-- 1 root root 1730 Nov 30  2013 RPM-GPG-KEY-CentOS-Debug-6
-rw-r--r-- 1 root root 1730 Nov 30  2013 RPM-GPG-KEY-CentOS-Security-6
-rw-r--r-- 1 root root 1734 Nov 30  2013 RPM-GPG-KEY-CentOS-Testing-6
-rw-r--r-- 1 root root 1649 Nov  4  2012 RPM-GPG-KEY-EPEL-6
-rw-r--r-- 1 root root 1011 Feb  5  2011 RPM-GPG-KEY-oracle

root:homevm:/etc/pki/rpm-gpg:# scp RPM-GPG-KEY-* root@client:/etc/pki/rpm-gpg

On the client side –

# rpm --import RPM-GPG-KEY-CentOS-*

References –
https://access.redhat.com/documentation/en-US/Red_Hat_Network_Satellite/5.3/html/Installation_Guide/s1-maintenance-push-clients.html

Reduce or shrink the size of a non-root LVM mount.

On a system with limited disk space, you might run out of room in one LVM mount while having plenty in another. If both logical volumes are in the same volume group (VG), you can easily take some of the free space away from one and add it to the one running low. Both the lvreduce and lvresize commands can be used to shrink a logical volume. In this example, we will use lvresize.

Note – the steps below have to be done with care; there is a potential for losing data. If the data in the existing partition is critical, make sure you take a backup.

Shrink an LVM volume by example – we will reduce the logical volume backing the /usr/local file system from 2.0G to approximately 1.5G.

1. Unmount the partition after confirming that no files on it are in use.

root:homevm:~:# df -Pvh /usr/local
/dev/mapper/vg00-lvol04  2.0G   68M  1.9G   4% /usr/local

root:homevm:~:# lsof /dev/mapper/vg00-lvol04 

root:homevm:~:# umount /usr/local/

2. Do a file system consistency check –

root:homevm:~:# e2fsck -f /dev/mapper/vg00-lvol04 
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg00-lvol04: 46/131072 files (0.0% non-contiguous), 25423/524288 blocks

3. Reduce the file system first, so that the logical volume is always at least as large as the file system expects it to be.

root:homevm:~:# resize2fs /dev/mapper/vg00-lvol04 1400M
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg00-lvol04 to 358400 (4k) blocks.
The filesystem on /dev/mapper/vg00-lvol04 is now 358400 blocks long.

root:homevm:~:# mount /usr/local/

root:homevm:~:# lvresize -L 1500M /dev/mapper/vg00-lvol04 
  Rounding size to boundary between physical extents: 1.47 GiB
  WARNING: Reducing active and open logical volume to 1.47 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol04? [y/n]: y
  Reducing logical volume lvol04 to 1.47 GiB
  Logical volume lvol04 successfully resized

root:homevm:~:# resize2fs /dev/mapper/vg00-lvol04 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg00-lvol04 is mounted on /usr/local; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/vg00-lvol04 to 385024 (4k) blocks.
The filesystem on /dev/mapper/vg00-lvol04 is now 385024 blocks long.

root:homevm:~:# df -Pvh /usr/local
/dev/mapper/vg00-lvol04  1.5G   68M  1.4G   5% /usr/local
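
Side note – newer lvm2 releases can drive the file system resize for you via fsadm, collapsing the resize2fs/lvresize sequence into a single step (verify your version supports the -r/--resizefs flag; shrinking an ext file system this way still requires it to be unmounted) –

lvresize -r -L 1500M /dev/mapper/vg00-lvol04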

References –
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Logical_Volume_Manager_Administration/

yum – dump all yum repos configuration directives

Per the man page, yum-config-manager is “a program that can manage main yum configuration options, toggle which repositories are enabled or disabled, and add new repositories.” The details on how to use the command are in the official Redhat documentation.

One feature the man page does not list is that yum-config-manager can display the yum repo configuration sections, directives, and options. Not only does it show the configuration active on your system, it also displays every option supported by yum configuration. It might be useful for scripting as well.

Installation – identify the package name:

yum whatprovides */yum-config-manager

Install package –

yum install yum-utils

Once the package is installed, the command yum-config-manager should be available –

[root@kauai /tmp]# which yum-config-manager
/usr/bin/yum-config-manager

Running yum-config-manager will dump a list of all repositories on the server, and for each repository it will list all directives, including the hidden ones.

Below is a truncated version of the output; the full output is much longer, depending on the number of yum repositories on your system –

[root@kauai /tmp]# yum-config-manager
===================================== main =====================================
[main]
alwaysprompt = True
assumeno = False
assumeyes = False
bandwidth = 0
bugtracker_url = http://bugs.centos.org/set_project.php?project_id=19&ref=http://bugs.centos.org/bug_report_page.php?category=yum
cache = 0
cachedir = /var/cache/yum/x86_64/6
clean_requirements_on_remove = False
color = auto
color_list_available_downgrade = dim,cyan
color_list_available_install = normal
color_list_available_reinstall = bold,underline,green
color_list_available_upgrade = bold,blue
color_list_installed_extra = bold,red
color_list_installed_newer = bold,yellow
color_list_installed_older = bold
color_list_installed_reinstall = normal
color_search_match = bold
color_update_installed = normal
color_update_local = bold
color_update_remote = normal
commands = 
debuglevel = 2
depsolve_loop_limit = 100
diskspacecheck = True
distroverpkg = centos-release
downloaddir = 
downloadonly = 
enable_group_conditionals = True
enabled = True
enablegroups = True
errorlevel = 2
exactarch = True
exactarchlist = 
exclude = 
exit_on_lock = False
failovermethod = priority
ftp_disable_epsv = False
gaftonmode = False
gpgcheck = True
group_package_types = mandatory,
   default
groupremove_leaf_only = False
history_list_view = users
history_record = True
history_record_packages = yum,
   rpm
http_caching = all
installonly_limit = 5
installonlypkgs = kernel,
   kernel-bigmem,
   installonlypkg(kernel-module),
   installonlypkg(vm),
   kernel-enterprise,
   kernel-smp,
   kernel-debug,
   kernel-unsupported,
   kernel-source,
   kernel-devel,
   kernel-PAE,
   kernel-PAE-debug
installroot = /
keepalive = True
keepcache = False
kernelpkgnames = kernel,
   kernel-smp,
   kernel-enterprise,
   kernel-bigmem,
   kernel-BOOT,
   kernel-PAE,
   kernel-PAE-debug
loadts_ignoremissing = False
loadts_ignorerpm = False
localpkg_gpgcheck = False
logfile = /var/log/yum.log
mdpolicy = group:primary
metadata_expire = 21600
mirrorlist_expire = 86400
multilib_policy = best
obsoletes = True
overwrite_groups = False
password = 
persistdir = /var/lib/yum
pluginconfpath = /etc/yum/pluginconf.d
pluginpath = /usr/share/yum-plugins,
   /usr/lib/yum-plugins
plugins = True
progess_obj = 
protected_multilib = True
protected_packages = yum
proxy = False
proxy_password = 
proxy_username = 
query_install_excludes = False
recent = 7
recheck_installed_requires = True
repo_gpgcheck = False
reposdir = /etc/yum/repos.d,
   /etc/yum.repos.d
reset_nice = True
retries = 10
rpm_check_debug = True
rpmverbosity = info
showdupesfromrepos = False
skip_broken = False
ssl_check_cert_permissions = True
sslcacert = 
sslclientcert = 
sslclientkey = 
sslverify = True
syslog_device = /dev/log
syslog_facility = LOG_USER
syslog_ident = 
throttle = 0
timeout = 30.0
tolerant = True
tsflags = 
username = 
================================== repo: base ==================================
[base]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/6
baseurl = 
cache = 0
cachedir = /var/cache/yum/x86_64/6/base
cost = 1000
enabled = True
enablegroups = True
exclude = 
failovermethod = priority
ftp_disable_epsv = False
gpgcadir = /var/lib/yum/repos/x86_64/6/base/gpgcadir
gpgcakey = 
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/6/base/gpgdir
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
hdrdir = /var/cache/yum/x86_64/6/base/headers
http_caching = all
includepkgs = 
keepalive = True
mdpolicy = group:primary
mediaid = 
metadata_expire = 21600
metalink = 
mirrorlist = http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os&infra=stock
mirrorlist_expire = 86400
name = CentOS-6 - Base
old_base_cache_dir = 
password = 
persistdir = /var/lib/yum/repos/x86_64/6/base
pkgdir = /var/cache/yum/x86_64/6/base/packages
proxy = False
proxy_dict = 
proxy_password = 
proxy_username = 
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert = 
sslclientcert = 
sslclientkey = 
sslverify = True
throttle = 0
timeout = 30.0
username = 
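
Beyond dumping the configuration, the same tool can display a single section or toggle repositories persistently –

yum-config-manager base                # print only the [base] section
yum-config-manager --disable epel      # persistently disable a repository
yum-config-manager --enable epel       # re-enable it
yum-config-manager --add-repo http://example.com/example.repo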

References –

Redhat official documentation

In part 1 of this series, we saw how to pull Docker images from Docker Hub, launch Docker containers, and interact with a running container by running some bash commands. In this tutorial we will see how to use a Dockerfile to automate image building for quicker deployment of applications in a container.

A Dockerfile is a text file containing a set of instructions or commands used to build a Docker image.

Prerequisites

1. Complete the tutorial in part 1 before proceeding. You will need a Docker engine running and the latest official Ubuntu Docker image locally hosted.

2. Create directories

$ mkdir ~/docker-flask
$ cd ~/docker-flask

3. Add a Dockerfile : ~/docker-flask/Dockerfile
The instructions below will be used to create the Docker image. The build pulls the latest official Ubuntu Docker image as the base layer, then resynchronizes the apt package index files from their sources.

A /flask directory will be created in the image, followed by installing Flask and running our Flask app, which we will write in the next step.

FROM ubuntu:latest
RUN apt-get update && apt-get install -y python-pip python-dev
COPY . /flask
WORKDIR /flask
RUN pip install -r requirements.txt
EXPOSE 80
ENTRYPOINT ["python"]
CMD ["app.py"]

4. Write the flask app : ~/docker-flask/app.py
Let us write a practical app rather than just printing hello world. The flask app will return the visitor’s user agent information when the index page is visited.

We will also have a URL under /status/ followed by a valid HTTP status code. Given this status code by the visitor, the flask web server will respond with the same status code in its header. For instance, if the user visits http://localhost/status/502, the flask server will respond with a ‘502 BAD GATEWAY’ HTTP header.

Let us write it under ~/docker-flask/app.py

from flask import Flask
from flask import request, jsonify

app = Flask(__name__)

@app.route('/')
def user_agent():
    user_agent = request.headers.get('User-Agent')
    return 'Your browser is %s.' % user_agent

@app.route('/status/<int:httpcode>')
def get_status(httpcode):
    httpcode = int(httpcode)
    if httpcode < 100 or httpcode >= 600:
        return jsonify({'Status': 'Invalid HTTP status code'})
    elif httpcode >= 100 and httpcode < 500:
        return jsonify({'Status': 'UP'}) , httpcode
    else:
        return jsonify({'Status': 'DOWN'}) , httpcode

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)

5. requirements.txt : ~/docker-flask/requirements.txt

cat ~/docker-flask/requirements.txt
Flask==0.12

By now, your directory structure should look similar to this –

daniel@lindell:~/docker-flask$ pwd
/home/daniel/docker-flask

daniel@lindell:~/docker-flask$ ls
app.py  Dockerfile  requirements.txt

Time to build the Docker image –

sudo docker build -t flaskweb:latest .

This will execute the series of instructions in the Dockerfile. If successful, you will end up with a Docker image named flaskweb and tagged latest –

root@lindell:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flaskweb            latest              6b45443b6380        22 minutes ago      440.4 MB
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

If you encounter any errors, validate that you don’t have any syntax errors in the Dockerfile.

It is time to run the container –

daniel@lindell:~/docker-flask$ sudo docker run -d -p 80:80 flaskweb
d9af9a1c92bff45b56fc97d13935972b65e3554bfe22ec2f3c102fd26bd20e4c

daniel@lindell:~/docker-flask$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS                NAMES
d9af9a1c92bf        flaskweb            "python app.py"     About a minute ago   Up About a minute   0.0.0.0:80->80/tcp   drunk_mcnulty

In this case, both the host and the container will be listening on port 80; feel free to modify this according to your setup.
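
For instance, to publish the container’s port 80 on host port 8080 instead –

sudo docker run -d -p 8080:80 flaskweb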

Time to test it. We will use httpie to query the web server; if you don’t have httpie installed, you can use ‘curl -I’ to get the full header –

daniel@lindell:~/blog/docker-flask$ http http://localhost/
HTTP/1.0 200 OK
Content-Length: 29
Content-Type: text/html; charset=utf-8
Date: Fri, 30 Dec 2016 14:35:53 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

Your browser is HTTPie/0.9.2.

daniel@lindell:~/blog/docker-flask$ http http://localhost/status/404
HTTP/1.0 404 NOT FOUND
Content-Length: 21
Content-Type: application/json
Date: Fri, 30 Dec 2016 14:35:56 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

{
    "Status": "UP"
}

daniel@lindell:~/blog/docker-flask$ http http://localhost/status/502
HTTP/1.0 502 BAD GATEWAY
Content-Length: 23
Content-Type: application/json
Date: Fri, 30 Dec 2016 14:35:58 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

{
    "Status": "DOWN"
}
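
The equivalent check with ‘curl -I’, which prints only the response headers –

curl -I http://localhost/status/502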

Full clean up – if you want to start all over again or want to delete the container and images we have created, I have outlined the steps below. The first step is to stop the running container using the ‘docker stop’ command, passing it the first few digits of the container ID.

Once the container is stopped, use ‘docker rm’ to delete it. At this point we can proceed with deleting the image, as it is no longer attached to any running container. Use ‘docker rmi’ to delete the image. We will keep the base Ubuntu image for future use.

daniel@lindell:/tmp$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                NAMES
d9af9a1c92bf        flaskweb            "python app.py"     12 minutes ago      Up 12 minutes       0.0.0.0:80->80/tcp   drunk_mcnulty

daniel@lindell:/tmp$ sudo docker stop d9a
d9a

daniel@lindell:/tmp$ sudo docker rm d9a
d9a

daniel@lindell:/tmp$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

daniel@lindell:/tmp$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flaskweb            latest              6b45443b6380        39 minutes ago      440.4 MB
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

daniel@lindell:/tmp$ sudo docker rmi 6b45
Untagged: flaskweb:latest
Deleted: sha256:6b45443b63805583f41fbf60aaf5cf746b871fdcfa8fe1c6d5adfb52870e7c89
Deleted: sha256:02062a8ea251d993f54e15f9e5654e40894449430acd045476000cd9ebbdf459
Deleted: sha256:fa2439cd5bc8a53152877c1dc3b12a60ab808bcfe5078549ea5e945f462330da
Deleted: sha256:3bac38b223d80a4db6c4283fd56275fe05ceeab6a1dfd81871aa14c6cda387df
Deleted: sha256:d97357dc5d7454e3b7757f2c348323c84d1902dd806792c53d1fd0ca7813b091
Deleted: sha256:b55dd5bd3326ec4657dc389f4aae69c34a7ba222872f7b868eb8de69d7f69dab
Deleted: sha256:eab59ae84eb136339d08fbacd2905a1ee80a0c875e8e14a4d5184fac30445714
Deleted: sha256:588253a9066c49786fcd0121353e7f0f2cea05cebbc6b9cef67f0c823d23dce8
Deleted: sha256:fe9f27a1cb9165531a1f5149c16ebcd522422e4ac2610035bbbcada7fd0b7551
Deleted: sha256:18ca1bc40895f6f97cae28fa5707bde537ac27023762303f98912c11549431ae

daniel@lindell:/tmp$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

References –
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
http://flask.pocoo.org/docs/0.12/quickstart/