Getting started with Google Cloud Platform (GCP)

Google provides cloud services similar to those offered by other providers such as Amazon Web Services (AWS) and Microsoft Azure, and refers to its offering as Google Cloud Platform (GCP). You can easily get started by signing up for free – https://cloud.google.com/free/

List of all products provided in GCP – https://cloud.google.com/products/

Google provides several ways to interact with its services –

1. GCP console (web UI)
The GCP console is a web user interface that lets you interact with GCP resources. You can view, create, update, and delete cloud resources from this page.

How to create a Linux VM (instance) using the console – https://cloud.google.com/compute/docs/quickstart-linux

2. Command Line Interface (gcloud CLI toolset)
Install gcloud: https://cloud.google.com/sdk/gcloud/

The gcloud toolkit is a command line tool for interacting with GCP resources. It is very useful for automating cloud tasks, and with its command completion and built-in help pages it is almost a necessity to familiarize yourself with this tool.

How to create an instance using gcloud cli – https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
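
If you want to drive gcloud from a script – for example as part of a larger Python automation – a minimal sketch could look like the following; the instance name, zone and machine type below are placeholders you would replace with your own values –

import subprocess

# Create a small instance by shelling out to the gcloud CLI.
# 'gcloud-test-instance', the zone and the machine type are placeholders.
subprocess.check_call([
    'gcloud', 'compute', 'instances', 'create', 'gcloud-test-instance',
    '--zone', 'us-central1-a',
    '--machine-type', 'f1-micro',
])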

3. Cloud Deployment Manager
GCP Deployment Manager allows you to create, delete, and update GCP resources in parallel by declaring them in a set of templates written in Jinja2 or Python. Templates can be shared with other teams and re-used with little modification.

What deployment manager is and how it works – https://cloud.google.com/deployment-manager/

How to deploy a GCP instance using Deployment Manager – https://cloud.google.com/deployment-manager/docs/quickstart
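
To make the idea concrete, here is a minimal sketch of a Deployment Manager Python template that declares a single Compute Engine instance. It follows the GenerateConfig(context) convention described in the documentation linked above; the zone, machine type, image and instance name are placeholders –

# vm_template.py - a minimal Deployment Manager Python template (sketch).
COMPUTE_URL_BASE = 'https://www.googleapis.com/compute/v1/'

def GenerateConfig(context):
    """Return one Compute Engine instance resource for Deployment Manager."""
    project = context.env['project']
    zone = 'us-central1-a'            # placeholder zone
    resources = [{
        'name': 'dm-test-instance',   # placeholder instance name
        'type': 'compute.v1.instance',
        'properties': {
            'zone': zone,
            'machineType': COMPUTE_URL_BASE + 'projects/%s/zones/%s/machineTypes/f1-micro' % (project, zone),
            'disks': [{
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': COMPUTE_URL_BASE + 'projects/debian-cloud/global/images/family/debian-9',
                },
            }],
            'networkInterfaces': [{
                'network': COMPUTE_URL_BASE + 'projects/%s/global/networks/default' % project,
            }],
        },
    }]
    return {'resources': resources}

A template like this would be imported from a YAML configuration and deployed with gcloud deployment-manager deployments create.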

4. APIs
Google provides application programming interfaces (APIs) to interact with its GCP services. Google recommends using the client libraries over calling the RESTful APIs directly.

a. Client libraries

List of client libraries for different programming languages – https://cloud.google.com/apis/docs/cloud-client-libraries

How to interact with Google Compute Engine (GCE) using the Python client library – https://cloud.google.com/compute/docs/tutorials/python-guide#addinganinstance
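
As a quick illustration, here is a small sketch that uses the Python client library (google-api-python-client) to list instances in a zone. It assumes Application Default Credentials are configured, and 'my-project' / 'us-central1-a' are placeholders for your own project and zone –

from googleapiclient import discovery

# Build a Compute Engine API client (uses Application Default Credentials).
compute = discovery.build('compute', 'v1')

# List the instances in one zone of a project (both values are placeholders).
result = compute.instances().list(project='my-project',
                                  zone='us-central1-a').execute()
for instance in result.get('items', []):
    print(instance['name'])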

b. RESTful or raw APIs

API Reference – https://cloud.google.com/compute/docs/reference/beta/

Method for creating an instance – https://cloud.google.com/compute/docs/reference/beta/instances/insert
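
For comparison, here is a rough sketch of calling the beta instances.insert method directly with the requests library. The project, zone, instance name and image are placeholders, and the access token is borrowed from the gcloud CLI for simplicity –

import json
import subprocess
import requests

project = 'my-project'        # placeholder project ID
zone = 'us-central1-a'        # placeholder zone

# Re-use gcloud's credentials to get a short-lived OAuth2 access token.
token = subprocess.check_output(
    ['gcloud', 'auth', 'print-access-token']).decode().strip()

url = ('https://www.googleapis.com/compute/beta/projects/'
       '%s/zones/%s/instances' % (project, zone))
body = {
    'name': 'rest-test-instance',
    'machineType': 'zones/%s/machineTypes/f1-micro' % zone,
    'disks': [{
        'boot': True,
        'autoDelete': True,
        'initializeParams': {
            'sourceImage': 'projects/debian-cloud/global/images/family/debian-9',
        },
    }],
    'networkInterfaces': [{'network': 'global/networks/default'}],
}

response = requests.post(url,
                         data=json.dumps(body),
                         headers={'Authorization': 'Bearer ' + token,
                                  'Content-Type': 'application/json'})
print(response.status_code, response.json())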

Ansible – How to run a portion of a playbook using tags.

If you have a large playbook, it can be useful to run a specific part of it or only a single task without running the whole playbook. Both plays and tasks support a “tags:” attribute for this reason.

In this specific scenario, I have a playbook that configures all production servers from the moment they boot until they start taking traffic. While testing the plays in a dev environment, I was debugging an issue in the part that does DNS configuration. This is where the “tags” attribute comes in handy –

1. Tag the task –

...
- name: Configure resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf
  when: ansible_hostname != "ns1"
  tags:
    - dnsconfig
...

2. Run only the tasks tagged with a specific name –

root@linubuvma:/etc/ansible# ansible-playbook -i dc1/hosts dc1/site.yml --tags "dnsconfig" --check

PLAY [Setup data center 1 servers] *****************************************************

TASK: [common | Configure resolv.conf] ****************************************
skipping: [ns1]
changed: [docker]
ok: [ns2]
ok: [whitehat]
ok: [mail]
ok: [www]
ok: [ftp]

PLAY RECAP ********************************************************************
whitehat                   : ok=1    changed=0    unreachable=0    failed=0
docker                     : ok=1    changed=1    unreachable=0    failed=0
ns1                        : ok=0    changed=0    unreachable=0    failed=0
ns2                        : ok=1    changed=0    unreachable=0    failed=0
mail                       : ok=1    changed=0    unreachable=0    failed=0
www                        : ok=1    changed=0    unreachable=0    failed=0
ftp                        : ok=1    changed=0    unreachable=0    failed=0

Ansible will run only the tasks with the specified tag and skip the rest of the tasks in the playbook. Use the ‘--list-tags’ flag to view all the tags in a playbook.

References –

http://docs.ansible.com/playbooks_tags.html

https://www.percona.com/live/mysql-conference-2015/sites/default/files/slides/Ansible.pdf

Ansible – Enable logging

By default, Ansible sends the output of playbook runs to standard output only. To enable logging to a file for later review or auditing, set log_path to a location where Ansible has write access.

In my case, I have added the “log_path” setting to the Ansible configuration file “/etc/ansible/ansible.cfg” –

# grep log_path /etc/ansible/ansible.cfg
log_path = /var/log/ansible.log

Now I can check the log file for all the details of Ansible runs –

root@linubuvma:/etc/ansible# ansible-playbook tasks/groupby.yml --check
PLAY [all:!swarm:!docker1] ****************************************************

TASK: [group_by key=os_{{ ansible_os_family }}] *******************************
changed: [ns2]
.....

root@linubuvma:/etc/ansible# ls -al /var/log/ansible.log
-rw-r--r-- 1 root root 4255 May 16 21:21 /var/log/ansible.log
root@linubuvma:/etc/ansible# head  /var/log/ansible.log
2015-05-16 21:21:43,732 p=22946 u=root |
2015-05-16 21:21:43,732 p=22946 u=root |  /usr/local/bin/ansible-playbook tasks/groupby.yml --check
2015-05-16 21:21:43,732 p=22946 u=root |
2015-05-16 21:21:43,734 p=22946 u=root |  ERROR: the playbook: tasks/groupby.yml could not be found
2015-05-16 21:21:48,575 p=22954 u=root |
2015-05-16 21:21:48,576 p=22954 u=root |  /usr/local/bin/ansible-playbook tasks/groupby.yml --check
2015-05-16 21:21:48,576 p=22954 u=root |
2015-05-16 21:21:48,594 p=22954 u=root |  PLAY [all:!swarm:!docker1] ****************************************************
2015-05-16 21:21:48,609 p=22954 u=root |  TASK: [group_by key=os_{{ ansible_os_family }}] *******************************
2015-05-16 21:21:48,641 p=22954 u=root |  changed: [ns2]

It logs dry runs (--check) as well, and it is smart enough not to log password arguments.

References –

http://docs.ansible.com/ansible/latest/intro_configuration.html#log-path

ipython tutorial and how to delete sensitive data from history

ipython is a program that allows you to run Python code in an interactive shell. Although Python itself opens an interactive shell when run from the CLI, ipython is much more powerful and greatly improves your productivity. Some of the things you can do with ipython but not with the default Python shell are: command, code, and file name completion; viewing history; copying/pasting single or multi-line code; nicely colored help within the shell; running Linux commands such as ls or cat; scrolling up/down to previous commands; automatic indentation after you press Enter; and more.

Installation

pip install ipython

Quick demo
Start ipython by typing the

ipython

command in your CLI –

daniel@lindell:/tmp$ ipython
Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
Type "copyright", "credits" or "license" for more information.

IPython 5.4.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: print('ipython')
ipython

In [2]: 

Within the ipython interactive shell you can run any Python code. Let us walk through some examples –


In [1]: x=2

In [2]: x
Out[2]: 2

In [3]: mylist=[1,2,3,4,5]

In [4]: [i**3 for i in mylist]
Out[4]: [1, 8, 27, 64, 125]

In [5]: with open('/etc/hosts') as fp:
   ...:     for line in fp:
   ...:         if 'localhost' in line:
   ...:             print line
   ...:             
127.0.0.1	localhost

::1     ip6-localhost ip6-loopback


In [6]: ls /opt/
ansible/  google/  vagrant/

In [7]: 

Go back to previously typed commands / History
With ipython, you can either press the UP arrow key or type the history command to view history. ipython keeps session history, as well as all input and output lines, in a SQLite file located at ~/.ipython/profile_default/history.sqlite. You can view and modify this file using the sqlite3 tool –

daniel@lindell:/tmp$ sqlite3 ~/.ipython/profile_default/history.sqlite
SQLite version 3.11.0 2016-02-15 17:29:24
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE sessions (session integer
                        primary key autoincrement, start timestamp,
                        end timestamp, num_cmds integer, remark text);
CREATE TABLE history
                (session integer, line integer, source text, source_raw text,
                PRIMARY KEY (session, line));
CREATE TABLE output_history
                        (session integer, line integer, output text,
                        PRIMARY KEY (session, line));
sqlite> 

Deleting sensitive data from history
You can delete any line from history using SQL. First use a SELECT statement to find the line number, then use a DELETE statement to delete it. In this example, we are deleting line number 10 from the history table –

sqlite> select * from history;
sqlite> .schema history
CREATE TABLE history
                (session integer, line integer, source text, source_raw text,
                PRIMARY KEY (session, line));
sqlite> delete from history where line=10;
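
If you prefer to do this from Python itself, here is a small sketch using the standard sqlite3 module that removes every history row containing a sensitive string instead of a specific line number; 'MY_SECRET' is just a placeholder pattern –

import os
import sqlite3

# Path to ipython's history database.
db_path = os.path.expanduser('~/.ipython/profile_default/history.sqlite')

conn = sqlite3.connect(db_path)
with conn:  # commits automatically on success
    conn.execute("DELETE FROM history WHERE source_raw LIKE ?",
                 ('%MY_SECRET%',))
conn.close()

It is safest to close any running ipython sessions before modifying the file.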

References –
https://ipython.org/
http://www.sqlitetutorial.net/sqlite-delete/

Linux – Mount partition from a raw disk image : dd and mount

In this post, I will share how you can mount a raw disk image, such as an image generated with dd. A raw disk image (RAW Image Format) is a bit-for-bit copy of disk data, without any metadata about the files on it. In Linux, dd is a popular tool for this kind of data transfer, for instance for duplicating an entire disk. Let us create an image of a disk that holds an EXT3 file system –

[root@kauai src]# dd if=/dev/sdb of=disk.img 

7233761+0 records in
7233760+0 records out
3703685120 bytes (3.7 GB) copied, 236.166 s, 15.7 MB/s

[root@kauai src]# ls -alh disk.img 
-rw-r--r--. 1 root root 3.5G Jan 15 18:44 disk.img

We have copied a disk containing multiple files into a single disk.img file, which we can transfer to another system. Now let us examine the raw disk layout so that we can mount it as a file system –

[root@kauai src]# fdisk -lu disk.img 
You must set cylinders.
You can do this from the extra functions menu.

Disk disk.img: 0 MB, 0 bytes
124 heads, 62 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdebbbd93

   Device Boot      Start         End      Blocks   Id  System
disk.img          630416      945623      157604   83  Linux

As we can see, the raw disk has 512-byte sectors and the partition starts at sector 630416. Given this information, we can pass the byte offset (start sector * sector size) to the mount command to mount the disk image –

[root@kauai src]# mount -o loop,offset=$((630416*512)) disk.img /mnt/hdisk/
[root@kauai src]# ls -al /mnt/hdisk/
total 37
drwxr-xr-x. 3 root root  1024 Jan 15 18:39 .
drwxr-xr-x. 4 root root  4096 Nov 17 20:04 ..
-rw-r--r--. 1 root root    15 Jan 15 18:39 file21
-rw-r--r--. 1 root root    15 Jan 15 18:39 file22
-rw-r--r--. 1 root root    15 Jan 15 18:39 file23
-rw-r--r--. 1 root root    15 Jan 15 18:39 file24
-rw-r--r--. 1 root root    15 Jan 15 18:39 file25
-rw-r--r--. 1 root root    15 Jan 15 18:39 file26
-rw-r--r--. 1 root root    15 Jan 15 18:39 file27
-rw-r--r--. 1 root root    15 Jan 15 18:39 file28
-rw-r--r--. 1 root root    15 Jan 15 18:39 file29
-rw-r--r--. 1 root root    15 Jan 15 18:39 file30
drwx------. 2 root root 12288 Jan 15 18:37 lost+found

[root@kauai src]# cat /mnt/hdisk/file26 
File number 26

Here we were able to mount the disk image and read the contents of one of the text files.

References –

https://en.wikipedia.org/wiki/Dd_(Unix)

https://linux.die.net/man/8/mount

How to copy to a clipboard in Linux

Problem statement – You have a file with hundreds or thousands of lines, and you want to copy the contents of this file and paste them into an external application, for instance a browser.

Solution – The first attempt is usually to cat the file and scroll with your mouse to select each line. This is time consuming, and in some cases it might not work if there are too many lines, as some of them will ‘scroll out of the terminal’. One way of getting around this is to use “xclip” – a command line interface to X selections (the clipboard).

In my case I wanted to copy the contents of the ‘/tmp/ipaddresses.txt’ file to a browser for blogging. The file had 10,000 lines. I used the following commands, first to install xclip and then to copy the file contents to the clipboard –

apt-get -y install xclip
xclip -sel cli < /tmp/ipaddresses.txt

The xclip command reads the file and puts its contents into the X clipboard selection (-sel cli is short for -selection clipboard), from where you can paste into any other external application.

References
https://linux.die.net/man/1/xclip

https://stackoverflow.com/questions/5130968/how-can-i-copy-the-output-of-a-command-directly-into-my-clipboard

How to fake or spoof x-forwarded-for header

The x-forwarded-for header is a way of identifying the IP address of the original client when a web server sits behind a proxy or load balancer. The load balancer still sees the actual client IP, since the client sets up the TCP session directly with the load balancer, but the x-forwarded-for header may contain a list of comma-separated IP addresses in addition to the immediate client IP. It is these extra IPs that we can spoof, and the procedure is similar to modifying any other HTTP header such as the user agent –

import requests
headers={'X-Forwarded-For':'1.1.1.1'}
r = requests.get('http://web.home.net/index.html', headers=headers)
if r.ok:
    print('Success.')

This is what the requests look like in an nginx access log –

1.1.1.1, 192.168.10.206 - - [19/Mar/2017:16:43:55 -0700] "GET /index.html HTTP/1.0" 200 1311 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-121-generic"
1.1.1.1, 192.168.10.206 - - [19/Mar/2017:16:53:55 -0700] "GET /index.html HTTP/1.0" 200 1311 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-121-generic"
1.1.1.1, 192.168.10.206 - - [19/Mar/2017:16:58:55 -0700] "GET /index.html HTTP/1.0" 200 1311 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-121-generic"

The takeaway is not to trust any IPs in the x-forwarded-for list apart from the load balancer IP and the immediate client IP that made the direct connection to the load balancer. If we trust our load balancer, we can also reliably identify the immediate client IP; the rest of the IPs in the x-forwarded-for list can be ignored.
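
A minimal sketch of that logic in Python, assuming you know your load balancer addresses (TRUSTED_PROXIES and the sample IPs below are placeholders) –

# Only the entry appended by a proxy we control is trustworthy.
TRUSTED_PROXIES = {'10.0.0.1'}   # placeholder load balancer IP(s)

def real_client_ip(remote_addr, xff_header):
    # remote_addr is the peer that opened the TCP connection to us.
    if remote_addr not in TRUSTED_PROXIES or not xff_header:
        return remote_addr
    # The right-most entry was appended by our trusted load balancer;
    # everything to the left of it is client-supplied and can be spoofed.
    return xff_header.split(',')[-1].strip()

print(real_client_ip('10.0.0.1', '1.1.1.1, 198.51.100.23'))  # -> 198.51.100.23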

References –

https://en.wikipedia.org/wiki/X-Forwarded-For

git – add local files to a git repository in local file system (bare git repo).

In this blog, I will show you how you can turn your local files into a GitHub-style repository. In my case I had files in `/etc/puppet` that I wanted to version control, and I wanted to push to a bare repository on the same machine (localhost). Here are the steps I followed –

Files to version control: /etc/puppet
Bare git repository that the changes in /etc/puppet will be pushed to: /var/lib/puppet/gitrepo/

1. Create a GitHub-style bare git repository in /var/lib/puppet/gitrepo

root@linubuvmb:/# mkdir -p /var/lib/puppet/gitrepo && cd /var/lib/puppet/gitrepo
root@linubuvmb:/var/lib/puppet/gitrepo# git --bare init
Initialized empty Git repository in /var/lib/puppet/gitrepo/

2. Initialize files as git repository

root@linubuvmb:/# cd /etc/puppet
root@linubuvmb:/etc/puppet# git init
Initialized empty Git repository in /etc/puppet/.git/
root@linubuvmb:/etc/puppet# git add .
root@linubuvmb:/etc/puppet# git commit -m 'First commit'
[master (root-commit) b71ef42] First commit
 50 files changed, 3913 insertions(+)
 create mode 100644 auth.conf
 create mode 100644 environments/example_env/README.environment
 create mode 100755 etckeeper-commit-post
 create mode 100755 etckeeper-commit-pre
 create mode 100644 fileserver.conf
 create mode 100644 manifests/base.pp
 create mode 100644 manifests/nodes.pp
 create mode 100644 manifests/site.pp
 create mode 100644 modules/apache/manifests/init.pp
...

3. Add bare repo as remote

root@linubuvmb:/etc/puppet# git remote add origin file:///var/lib/puppet/gitrepo/

4. Push to local git repository

root@linubuvmb:/etc/puppet# git push -u origin master
Counting objects: 84, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (70/70), done.
Writing objects: 100% (84/84), 129.33 KiB | 0 bytes/s, done.
Total 84 (delta 6), reused 0 (delta 0)
To file:///var/lib/puppet/gitrepo/
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
root@linubuvmb:/etc/puppet# git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   puppet.conf

no changes added to commit (use "git add" and/or "git commit -a")
root@linubuvmb:/etc/puppet# git commit -a
[master f57997d] test
 1 file changed, 1 deletion(-)
root@linubuvmb:/etc/puppet# git status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

nothing to commit, working directory clean

Reference –

https://git-scm.com/documentation

List shared or dynamic libraries required by a program

In Linux, the ldd command is used to find out the shared libraries or dependencies required by a program, provided it is a dynamic executable. ldd requires the full path to the executable as input.

For instance, the Linux ps command depends on the following shared or dynamic libraries –

[root@kauai rtc0]# ldd $(which ps)
	linux-vdso.so.1 =>  (0x00007ffeb6277000)
	libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003ef6200000)
	libproc-3.2.8.so => /lib64/libproc-3.2.8.so (0x0000003ef4e00000)
	libc.so.6 => /lib64/libc.so.6 (0x0000003ef4a00000)
	libdl.so.2 => /lib64/libdl.so.2 (0x0000003ef5600000)
	/lib64/ld-linux-x86-64.so.2 (0x0000003ef4600000)

You can also use the ldd command to check whether an executable has an expected dependency. In this case, we expect the htpasswd, login, and sshd commands to depend on the crypt library, as they prompt a user for a password for authentication purposes –


[root@kauai rtc0]# ldd $(which htpasswd) |grep crypt
	libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f010c8ab000)

[root@kauai rtc0]# ldd $(which login) | grep crypt
	libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003efd200000)

[root@kauai rtc0]# ldd $(which sshd) | grep crypt
	libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007ffb0b1f2000)
	libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ffb0a988000)
	libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007ffb0a015000)

References –

http://man7.org/linux/man-pages/man1/ldd.1.html

Getting the date from the Real Time Clock (RTC) without using the date command or any other Linux time-related commands.

In Linux, the “Real Time Clock” (RTC) tracks wall clock time and is battery backed, so it keeps running even when system power is off. The RTC has no concept of time zones or daylight saving time, and it is typically set to UTC. One of the user interfaces the Linux kernel exposes is /sys/class/rtc/rtc{N}, and we will use the files in that directory to read time-related data directly from the RTC.

* Files –

[root@ns3 rtc0]# ls /sys/class/rtc/rtc0
date  dev  device  hctosys  max_user_freq  name  power  since_epoch  subsystem  time  uevent  wakealarm

* Date and time in UTC

[root@ns3 rtc0]# cat date
2015-01-19
[root@ns3 rtc0]# cat time
23:05:05

* The maximum interrupt rate an unprivileged user may request from this RTC.

# cat max_user_freq
64

* The name of the RTC corresponding to this sysfs directory

[root@ns3 rtc0]# cat name
rtc_cmos

* The number of seconds since the epoch according to the RTC

[root@ns3 rtc0]# cat since_epoch
1421708627
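
Since the point of this post is to avoid the date command, here is a tiny Python sketch that converts the raw since_epoch value into a human-readable UTC timestamp –

import datetime

# Read the RTC's seconds-since-epoch counter straight from sysfs.
with open('/sys/class/rtc/rtc0/since_epoch') as f:
    rtc_seconds = int(f.read().strip())

print(datetime.datetime.utcfromtimestamp(rtc_seconds))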

* Status information is reported through the pseudo-file /proc/driver/rtc

[root@ns3 rtc0]# cat /proc/driver/rtc
rtc_time        : 23:06:58
rtc_date        : 2015-01-19
alrm_time       : 01:00:02
alrm_date       : ****-**-**
alarm_IRQ       : no
alrm_pending    : no
24hr            : yes
periodic_IRQ    : no
update_IRQ      : no
HPET_emulated   : no
DST_enable      : no
periodic_freq   : 1024
batt_status     : okay

References –

Real Time Clock (RTC) Drivers for Linux