Nginx / Apache – log the real client IP or X-Forwarded-For address.

Web servers such as Nginx or Apache, when configured as reverse proxies behind a load balancer, log the IP address of the load balancer in the access logs as the source IP. For practical use cases, you will usually want to log the actual client IP addresses instead.

In this setup, Nginx is set up to mimic a load balancer (reverse proxy) with multiple Apache web servers as backends.

1. Nginx configuration snippet to set the X-Forwarded-For proxy header –

server {
  listen 80;
  listen 443 default ssl;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  # location blocks with proxy_pass to the Apache backends follow
}

2. Apache configuration snippet to capture the X-Forwarded-For header in the access logs –

<VirtualHost *:443>
    DocumentRoot /var/www/homenet
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
    SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
    CustomLog "logs/" combined env=!forwarded
    CustomLog "logs/" proxy env=forwarded
</VirtualHost>

Before making the above custom changes, the logs showed only the load balancer IP –

- - [19/Mar/2015:16:21:10 -0700] "GET /signup.php HTTP/1.0" 200 1237
- - [19/Mar/2015:16:21:11 -0700] "GET /login.php HTTP/1.0" 200 1715

After the change, the client IP was logged –

- - [19/Mar/2015:16:26:43 -0700] "GET / HTTP/1.0" 200 1311 "" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:35.0) Gecko/20100101 Firefox/35.0"
- - [19/Mar/2015:16:26:44 -0700] "GET /signup.php HTTP/1.0" 200 1237 "" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:35.0) Gecko/20100101 Firefox/35.0"
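As a side note, when several proxies are chained, the X-Forwarded-For value becomes a comma-separated list where the left-most entry is the original client. A minimal Python sketch (the function name is my own) for pulling the client IP out of such a header –

```python
def client_ip_from_xff(xff_header):
    # X-Forwarded-For: client, proxy1, proxy2 ... the left-most
    # entry is the address the first proxy saw, i.e. the client
    return xff_header.split(",")[0].strip()

print(client_ip_from_xff("203.0.113.7, 10.0.0.2"))  # 203.0.113.7
```

Keep in mind the header is client-supplied and only trustworthy when set by your own proxy layer.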


Free SSL certificates with Let’s Encrypt certbot – tested in Ubuntu 14.04 with Apache 2.

It is nice to have a site with valid SSL certificates; your visitors will be happy when they see that green padlock. Unfortunately, it generally costs time and money to set up SSL certificates. Most big businesses will buy SSL certificates from well-known Certificate Authorities (CAs) such as VeriSign, Symantec or GlobalSign. If you run a personal blog though, you can still get free SSL certificates.

Benefits of certificates –

a. Search engines such as Google give preference to secure sites
b. Security reasons – encryption and extended validation.

Disadvantages –

a. Introduces latency or delay
b. Operational cost to setup/renew certificates

One of the most popular SSL certificate providers was StartCom (StartSSL), until Google recently stopped trusting the certificates issued by this CA in Google Chrome. In the blog post, Google says – ‘Google has determined that two CAs, WoSign and StartCom, have not maintained the high standards expected of CAs and will no longer be trusted by Google Chrome, in accordance with our Root Certificate Policy.’

So what is the alternative? Once my site was blocked by Chrome with a cert warning – ERR_CERT_AUTHORITY_INVALID – I did some research on new options and came across Let’s Encrypt. It was way better than StartSSL, as it was easy to generate and renew certificates. Everything was automated. No more certificate creation and renewal hassle.

Here are the steps I followed to get new certificates for my site –

1. Install certbot

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-apache

2. Get SSL certificates and modify the Apache configuration automagically with certbot!

root@localhost:~# certbot --apache

Interactive session –

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices) (Enter 'c' to cancel):

Please read the Terms of Service at You must agree
in order to register with the ACME server at
(A)gree/(C)ancel: A

Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
(Y)es/(N)o: Y

Here, certbot automatically detects my domains –

Which names would you like to activate HTTPS for?
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):1,2,3,4

Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for
tls-sni-01 challenge for
tls-sni-01 challenge for
tls-sni-01 challenge for
Waiting for verification...
Cleaning up challenges
Deploying Certificate for to VirtualHost /etc/apache2/sites-available/danasmera-ssl
Deploying Certificate for to VirtualHost /etc/apache2/sites-available/danasmera-ssl
Deploying Certificate for to VirtualHost /etc/apache2/sites-available/linuxfreelancer-ssl
Deploying Certificate for to VirtualHost /etc/apache2/sites-available/linuxfreelancer-ssl

Please choose whether HTTPS access is required or optional.
1: Easy - Allow both HTTP and HTTPS access to these sites
2: Secure - Make all requests redirect to secure HTTPS access
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1

Congratulations! You have successfully enabled,,, and

You should test your configuration at:

- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/ Your cert will
expire on 2017-09-08. To obtain a new or tweaked version of this
certificate in the future, simply run certbot again with the
"certonly" option. To non-interactively renew *all* of your
certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt:
Donating to EFF:

Just making sure my apache configuration is valid after certbot modified it –

root@localhost:~# apache2ctl -t
Syntax OK

certbot will create a ‘/etc/letsencrypt/live/’ directory and dump the SSL certificate, private key and cert chain in that directory –

SSLCertificateFile /etc/letsencrypt/live/
SSLCertificateKeyFile /etc/letsencrypt/live/
SSLCertificateChainFile /etc/letsencrypt/live/

Certbot created a multi-domain SSL certificate valid for 90 days, and a renewal cron job was added to my server so that I don’t have to do manual renewals –

root@localhost:~# cat /etc/cron.d/certbot
# /etc/cron.d/certbot: crontab entries for the certbot package
# Upstream recommends attempting renewal twice a day
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew
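The 30-day renewal window mentioned in the cron comment is simple date arithmetic; a small Python sketch (the dates are examples, not from my certs) of the decision certbot makes on each run –

```python
from datetime import date, timedelta

def should_renew(expiry, today, window_days=30):
    # 'certbot renew' only acts when the cert expires within the window
    return (expiry - today) <= timedelta(days=window_days)

print(should_renew(date(2017, 9, 8), date(2017, 8, 20)))  # True, 19 days left
print(should_renew(date(2017, 9, 8), date(2017, 6, 15)))  # False, ~85 days left
```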


Linux kernel – check the kernel options enabled during kernel compilation.

You might want to know whether a certain kernel option was enabled when your kernel was built – say, whether Symmetric multiprocessing (SMP) was enabled, or whether KVM was compiled directly into the kernel or just as a loadable module. To answer this, you can look at the /boot/config-$(uname -r) file.

To find out if SMP is enabled in your system for instance, search for all SMP keywords in the kernel configuration –

daniel@linubuvma:~$ grep SMP /boot/config-$(uname -r)
CONFIG_SMP=y
# CONFIG_X86_VSMP is not set
# CONFIG_MAXSMP is not set

The ‘CONFIG_SMP=y’ setting indicates that SMP support was compiled directly into the kernel; it is part of the monolithic kernel.

If your kernel was built with ‘CONFIG_IKCONFIG_PROC’, then /proc/config.gz will contain the .config file the Linux kernel was compiled with.

daniel@linubuvma:~$ grep CONFIG_IKCONFIG_PROC  /boot/config-$(uname -r)
daniel@linubuvma:~$ ls /proc/config.gz
ls: cannot access /proc/config.gz: No such file or directory

In my case, the kernel was not built with ‘CONFIG_IKCONFIG_PROC’.
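Programmatically, checking an option boils down to scanning the config file for ‘OPTION=y’ (built in), ‘OPTION=m’ (module) or a ‘# ... is not set’ comment. A quick Python sketch of that (function name and sample text are mine) –

```python
def kernel_option(config_text, option):
    # Returns 'y' (built in), 'm' (module), another literal value,
    # or None when the option is absent or commented out as unset.
    for line in config_text.splitlines():
        if line.startswith(option + "="):
            return line.split("=", 1)[1]
    return None

sample = "CONFIG_SMP=y\nCONFIG_KVM=m\n# CONFIG_MAXSMP is not set\n"
print(kernel_option(sample, "CONFIG_SMP"))     # y
print(kernel_option(sample, "CONFIG_KVM"))     # m
print(kernel_option(sample, "CONFIG_MAXSMP"))  # None
```

To run it against your own kernel, read /boot/config-$(uname -r) and pass its contents in.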

curl – get only numeric HTTP response code

Most browsers have developer plugins where you can see the HTTP status code and other request/response headers. For automation purposes though, you are most likely to use tools such as curl, httpie or the Python requests module. In this post, we will see how to use curl to parse the HTTP response and get only the response code.

1. First attempt – use ‘-I’ option to fetch HTTP-header only.

The first line will show the response code.

daniel@linubuvma:~$ curl -I
HTTP/1.1 200 OK
Date: Sun, 09 Apr 2017 06:45:00 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding

But does this work all the time? No, some web services have problems with the HEAD HTTP request. Let us try another site, for instance –

daniel@linubuvma:~$ curl -I
HTTP/1.1 503 Service Unavailable
Content-Type: text/html
Content-Length: 6450
Connection: keep-alive
Server: Server
Date: Sun, 09 Apr 2017 06:50:02 GMT
Set-Cookie: skin=noskin; path=/;
Vary: Content-Type,Host,Cookie,Accept-Encoding,User-Agent
X-Cache: Error from cloudfront
Via: 1.1 (CloudFront)

daniel@linubuvma:~$ curl -I -A "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20101026 Firefox/3.6.12"
HTTP/1.1 405 MethodNotAllowed
Content-Type: text/html; charset=ISO-8859-1
Connection: keep-alive
Server: Server
Date: Sun, 09 Apr 2017 06:49:47 GMT
Set-Cookie: skin=noskin; path=/;
Strict-Transport-Security: max-age=47474747; includeSubDomains; preload
x-amz-id-1: N2RDV79SBB791BTYG2K8
allow: POST, GET
Vary: Accept-Encoding,User-Agent
X-Frame-Options: SAMEORIGIN
X-Cache: Error from cloudfront
Via: 1.1 (CloudFront)

In the first attempt, the site was actually blocking automated checks by looking at the user-agent header, so I had to trick it by changing the user-agent. The response code was 503. Once I changed the user-agent, I got 405 – the web server does not like our HEAD HTTP (‘-I’) option.

2. Second attempt – use ‘-w’ option to write-out specific parameter.

curl has a ‘-w’ option for defining a specific parameter to write out to the screen or stdout. Some of the variables are content_type, size_header and http_code. In our case, we are interested in http_code, which will dump the numeric response code from the last HTTP transfer. Let us try it –

daniel@linubuvma:~$ curl -I -s -w "%{http_code}\n" -o /dev/null

We use ‘-I’ to get only the header, redirect it to /dev/null and print only http_code to stdout. This is by far the most efficient way of doing it, as we are not transferring the whole page. If the ‘-I’ option does not work for some sites though, we can drop it as follows –

daniel@linubuvma:~$ curl -s -w "%{http_code}\n" -o /dev/null -A "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20101026 Firefox/3.6.12"

This is very useful when you are writing scripts that need only the HTTP status code.
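For completeness, if a script captures the full header instead (say via ‘-i’), extracting the code from the status line is trivial; a Python sketch (function name is mine) –

```python
def status_code(status_line):
    # 'HTTP/1.1 503 Service Unavailable' -> 503; the numeric code is
    # always the second whitespace-separated token of the status line
    return int(status_line.split()[1])

print(status_code("HTTP/1.1 503 Service Unavailable"))  # 503
print(status_code("HTTP/1.1 200 OK"))                   # 200
```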


How to share your terminal session with another user in real time.

Linux has a script command which is mainly used for making a typescript of everything printed on a terminal. Commands typed on a terminal and the resulting output can be written to a file for later retrieval.

One little-known use of the script command is sharing your terminal session with another user in real time. This is particularly useful for remote collaboration, say between an instructor and a student: the instructor leads the session by executing commands in the shell while the student observes. Here is one way of doing this –

1. Instructor creates a named pipe using mkfifo

instructor@linubuvma:/$ mkfifo /tmp/shared-screen

instructor@linubuvma:/$ ls -al /tmp/shared-screen 
prw-rw-r-- 1 instructor instructor 0 Mar 31 00:08 /tmp/shared-screen

instructor@linubuvma:/$ script -f /tmp/shared-screen 

2. Student views the session in real time by reading the shared-screen file –

student@linubuvma:/tmp$ cat shared-screen
Script started on Fri 31 Mar 2017 12:09:42 AM EDT

As soon as the student runs the ‘cat shared-screen’ command, the script session also gets started on the instructor’s side.

Whatever is typed on the instructor’s terminal will show up on the student’s screen and the student’s terminal will be restored as soon as the instructor exits or terminates the script session –

instructor@linubuvma:/$ free -m
             total       used       free     shared    buffers     cached
Mem:          3946       3572        374         40        288        996
-/+ buffers/cache:       2288       1658
Swap:         4092        195       3897
instructor@linubuvma:/$ exit

Script done on Fri 31 Mar 2017 12:12:02 AM EDT

Note – the student’s screen will show the user id of the instructor at the bash prompt, as it is a replica of the instructors session. Once the instructor terminates the session, the student will get back to their original bash prompt.
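The mechanism underneath is just a named pipe: the writer blocks until a reader opens the other end, which is why script only starts once the student runs cat. A self-contained Python sketch of the same handoff (using a temporary path, not the /tmp/shared-screen above) –

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "screen-fifo")
os.mkfifo(fifo)

def instructor():
    # opening the FIFO for writing blocks until a reader appears
    with open(fifo, "w") as f:
        f.write("instructor output\n")

t = threading.Thread(target=instructor)
t.start()
with open(fifo) as f:       # the 'cat shared-screen' side
    data = f.read()
t.join()
print(data.strip())
```

Requires a POSIX system (os.mkfifo is not available on Windows).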


User administration: restricting access

1. With the chage command, an account expiration date can be set. Once that date is reached, the user cannot log into the system interactively.
Let us run ‘chage’ interactively to set a user’s account expiry –

[root@kauai /]# chage sshtest
Changing the aging information for sshtest
Enter the new value, or press ENTER for the default

	Minimum Password Age [0]: 
	Maximum Password Age [99999]: 
	Last Password Change (YYYY-MM-DD) [2015-11-04]: 
	Password Expiration Warning [7]: 
	Password Inactive [-1]: 
	Account Expiration Date (YYYY-MM-DD) [-1]: 2017-03-30

[root@kauai /]# chage -l sshtest
Last password change					: Nov 04, 2015
Password expires					: never
Password inactive					: never
Account expires						: Mar 30, 2017
Minimum number of days between password change		: 0
Maximum number of days between password change		: 99999
Number of days of warning before password expires	: 7

2. In addition to this, the usermod command can “lock” an account with the -L option. Say a user is no longer with the company; the administrator may lock and expire the account with a single usermod command. The expiration date must be given as the number of days since January 1, 1970. Setting the expire date to 1 will immediately expire the locked account –

[student@serverX ~]$ sudo usermod -L -e 1 elvis

[student@serverX ~]$ sudo usermod -L elvis
[student@serverX ~]$ su - elvis
Password: elvis
su: Authentication failure
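The “days since January 1, 1970” format that the shadow password suite stores internally is easy to compute; a Python sketch –

```python
from datetime import date

def days_since_epoch(d):
    # shadow(5) stores account/password expiry dates as
    # days since the Unix epoch (1970-01-01)
    return (d - date(1970, 1, 1)).days

print(days_since_epoch(date(1970, 1, 2)))   # 1
print(days_since_epoch(date(2017, 3, 30)))  # the expiry date set above
```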

Locking the account prevents the user from authenticating with a password to the system. It is the recommended method of preventing access to an account by an employee who has left the company. If the employee returns, the account can later be unlocked with ‘usermod -U USERNAME’. If the account was also expired, be sure to also change the expiration date.

3. The nologin shell. Sometimes a user needs an account with a password to authenticate to a system, but does not need an interactive shell on the system.
For example, a mail server may require an account to store mail and a password for the user to authenticate with a mail client used to retrieve mail.
That user does not need to log directly into the system.

A common solution to this situation is to set the user’s login shell to /sbin/nologin. If the user attempts to log into the system directly,
the nologin “shell” will simply close the connection.

[root@serverX ~]# usermod -s /sbin/nologin student
[root@serverX ~]# su - student
Last login: Tue Feb  4 18:40:30 EST 2014 on pts/0
This account is currently not available.


Linux – Sort IPv4 addresses numerically

A novice user’s first attempt to sort a list of IP addresses would be to use ‘sort -n’, the numeric-sort option of the sort command. Unfortunately, this will sort only on the first octet of the IP address, the one preceding the initial dot (‘.’). The GNU sort command does support sorting IPv4 addresses in numeric order; we just have to specify the right options.

Questions to answer –

1. What is our delimiter for IPv4? dot.
2. What type of sorting? numeric.
3. How many fields? four.

Reading the man page for sort provides an option for each – 1) -t. 2) -n 3) -k.
The third part might need clarification – since we have the dot as a separator, the IP address has four fields. We need to give sort a key specification (-k) with start and stop positions, i.e. sort by the first octet (-k1,1), followed by the second (-k2,2), the third (-k3,3) and finally the fourth (-k4,4).

The full command looks like this –

sort -t. -n -k1,1 -k2,2 -k3,3 -k4,4 /tmp/ipv4_file.txt

Let us use ForgeryPy to generate random IPv4 addresses; we will write a simple Python script to write these random IPs to a file.

First install ForgeryPy –

pip install ForgeryPy

Script to generate IPv4 addresses –


#!/usr/bin/env python

import forgery_py

# Generate 50 random IPv4 addresses and de-duplicate them.
uniq_ipv4 = set()
for i in range(50):
    uniq_ipv4.add(forgery_py.internet.ip_v4())

with open('/tmp/ipv4_addresses.txt', 'w') as fp:
    for line in uniq_ipv4:
        fp.write(line + '\n')

Output –

daniel@linubuvma:/tmp$ cat /tmp/ipv4_addresses.txt
cat: /tmp/ipv4_addresses.txt: No such file or directory
daniel@linubuvma:/tmp$ python
daniel@linubuvma:/tmp$ cat /tmp/ipv4_addresses.txt

Let us sort it –

daniel@linubuvma:/tmp$ sort -n -t. -k1,1 -k2,2 -k3,3 -k4,4 /tmp/ipv4_addresses.txt
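The same ordering can be reproduced in Python by sorting on a tuple of the four octets as integers, equivalent to the -k1,1 through -k4,4 keys above –

```python
def ip_key(ip):
    # '10.2.3.4' -> (10, 2, 3, 4); tuples compare octet by octet
    return tuple(int(octet) for octet in ip.split("."))

ips = ["10.10.1.1", "9.255.255.255", "10.2.3.4"]
print(sorted(ips, key=ip_key))  # ['9.255.255.255', '10.2.3.4', '10.10.1.1']
```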

Hope this helps.

How to be your own Certificate Authority(CA) with self signed certificates

This is a hands-on tutorial on how you can set up your own Certificate Authority (CA) for internal network use. Once the CA certs are set up, you will generate certificate signing requests (CSRs) for your clients and sign them with your CA certs to create SSL certs for internal use. If you import your CA cert into your browser, you will be able to visit all internal sites over https without any browser warning, as long as the certs your internal services are using are signed by your internal CA.

Demo – Own CA for the internal domain

1. Prepare the certificate environment and the default parameters to use when creating CSRs –

# mkdir /etc/ssl/CA
# mkdir /etc/ssl/newcerts
# sh -c "echo '100000' > /etc/ssl/CA/serial"
# touch /etc/ssl/CA/index.txt

# cat /etc/ssl/openssl.cnf
 dir		= /etc/ssl		# Where everything is kept
 database	= $dir/CA/index.txt	# database index file.
 certificate	= $dir/certs/home_cacert.pem 	# The CA certificate
 serial		= $dir/CA/serial 		# The current serial number
 private_key	= $dir/private/home_cakey.pem  # The private key
 default_days	= 1825			# how long to certify for
 default_bits		= 2048
 countryName_default		= US
 stateOrProvinceName_default	= California
 0.organizationName_default	= Home Ltd

2. Create a self-signed root certificate and install the root certificate and key –

# openssl req -new -x509 -extensions v3_ca -keyout home_cakey.pem -out home_cacert.pem -days 3650
# mv home_cakey.pem /etc/ssl/private/
# mv home_cacert.pem /etc/ssl/certs/

3. Generate a private key for the domain you want to issue a certificate for, and strip its passphrase –

# openssl genrsa -des3 -out home_server.key 2048
# openssl rsa -in home_server.key -out server.key.insecure
# mv server.key
# mv server.key.insecure server.key

4. Create the CSR and generate a CA-signed certificate –

# openssl req -new -key server.key -out server.csr
# openssl ca -in server.csr -config /etc/ssl/openssl.cnf

Directory structure after signing and issuing certificates –

# ls -l /etc/ssl/CA/
total 24
-rw-r--r-- 1 root root 444 Aug 29 18:20 index.txt
-rw-r--r-- 1 root root  21 Aug 29 18:20 index.txt.attr
-rw-r--r-- 1 root root  21 Aug 29 18:16 index.txt.attr.old
-rw-r--r-- 1 root root 328 Aug 29 18:18 index.txt.old
-rw-r--r-- 1 root root   7 Aug 29 18:20 serial
-rw-r--r-- 1 root root   7 Aug 29 18:19 serial.old

# ls -l /etc/ssl/newcerts/
total 32
-rw-r--r-- 1 root root 4612 Aug 29 16:24 100000.pem
-rw-r--r-- 1 root root 4613 Aug 29 16:51 100001.pem
-rw-r--r-- 1 root root 4574 Aug 29 17:50 100002.pem
-rw-r--r-- 1 root root 4619 Aug 29 18:20 100003.pem

# cat /etc/ssl/CA/index.txt
V	190828202443Z		100000	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/
V	190828205127Z		100001	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/
V	190828215006Z		100002	unknown	/C=US/ST=California/O=Home Ltd/
V	190828222038Z		100003	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/
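The index.txt database shown above is tab-separated: status (V/R/E), expiry timestamp, revocation timestamp (empty unless revoked), serial, certificate filename and subject DN. A Python sketch (function name is mine) to unpack a line –

```python
def parse_index_line(line):
    # openssl ca's text database columns:
    # status \t expiry \t revoked \t serial \t filename \t subject DN
    status, expiry, revoked, serial, filename, subject = line.rstrip("\n").split("\t")
    return {"status": status, "expiry": expiry, "serial": serial,
            "subject": subject}

row = parse_index_line("V\t190828202443Z\t\t100000\tunknown\t/C=US/ST=California/O=Home Ltd/OU=Home/")
print(row["status"], row["serial"])  # V 100000
```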

# cat /etc/ssl/CA/serial

Now that you have your CA certificate, in this example /etc/ssl/certs/home_cacert.pem, you can import it into your web clients such as a web browser, LDAP client, etc.


Server refused to allocate pty : pseudoterminals in use reached the maximum allowed limit.

You are unlikely to encounter this error in most cases, as the default maximum number of pseudoterminals (ptys) in a Linux environment is large enough for typical use. The error might occur though if an admin has lowered the pty limit, or under an unusually high number of connections to the system via ssh or a GUI terminal. Under those circumstances, you will see the below error during an ssh session –

$ssh daniel@
daniel@'s password:
Server refused to allocate pty

GUI terminal error –

There was an error creating the child process for this terminal
getpt failed: No such file or directory

Per the man page –

“The Linux kernel imposes a limit on the number of available UNIX 98
pseudoterminals. In kernels up to and including 2.6.3, this limit is
configured at kernel compilation time (CONFIG_UNIX98_PTYS), and the
permitted number of pseudoterminals can be up to 2048, with a default
setting of 256. Since kernel 2.6.4, the limit is dynamically
adjustable via /proc/sys/kernel/pty/max, and a corresponding file,
/proc/sys/kernel/pty/nr, indicates how many pseudoterminals are
currently in use.”

To resolve this, get a count of the ptys currently allocated using either of the below commands –

[root@kauai tmp]# sysctl kernel.pty.nr
kernel.pty.nr = 10

[root@kauai tmp]# cat /proc/sys/kernel/pty/nr 
10

You can list the allocated pts names –

# ps aux |grep -o -P '\s+pts/\d+\s+' |sort -u

If the currently allocated count is close to the limit, which you can find using ‘sysctl kernel.pty.max’, go ahead and increase the max limit as follows, say to 4096 in this example –

sysctl -w kernel.pty.max=4096
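The check itself can be scripted; a Python sketch reading the two /proc files mentioned above (the paths are parameterized so the function can be exercised off-box) –

```python
def pty_usage(nr_path="/proc/sys/kernel/pty/nr",
              max_path="/proc/sys/kernel/pty/max"):
    # returns (currently allocated ptys, configured maximum)
    with open(nr_path) as f:
        used = int(f.read())
    with open(max_path) as f:
        limit = int(f.read())
    return used, limit
```

For example, `used, limit = pty_usage()` on a Linux box; if `limit - used` is small, raise kernel.pty.max as shown above.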


AIDE (Advanced Intrusion Detection Environment) setup

AIDE is a host-based file and directory integrity checking tool, similar to Tripwire. It creates a snapshot of file details during initialization and stores them in a database. Which files AIDE monitors is controlled by user-defined rules, where the admin can specify which directories/files to keep an eye on. The snapshot is basically a message digest of the file/directory information returned by the stat command. Once AIDE is initialized, it can detect any future changes and alert the admin. AIDE can be configured to run on a schedule, using cron jobs for instance.


Install AIDE –

yum list aide
yum install aide


Create the AIDE DB – it stores a snapshot of file and directory stats gathered by scanning the monitored resources.

$ /usr/sbin/aide --init 
$ mv /var/lib/aide/ /var/lib/aide/aide.db.gz

To minimize false positives – set PRELINKING=no in /etc/sysconfig/prelink and run ‘/usr/sbin/prelink -ua’ to undo prelinking and restore the binaries to their original state.

Scheduled integrity checks
Add a cron job to check file integrity, say every morning at 8 AM –

echo '0 8 * * * /usr/sbin/aide --check' >> /etc/crontab

Updating the DB after making changes, or after verifying changes reported during a check –

$ aide -c aide.conf --update
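AIDE’s core idea – snapshot digests, then diff – fits in a few lines; a toy Python sketch using sha256 of file contents, rather than AIDE’s rule-driven set of stat attributes –

```python
import hashlib

def snapshot(paths):
    # map each path to a digest of its current contents (the "DB")
    db = {}
    for p in paths:
        with open(p, "rb") as f:
            db[p] = hashlib.sha256(f.read()).hexdigest()
    return db

def changed(baseline, paths):
    # paths whose digest no longer matches the baseline snapshot
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

Initialize once with `baseline = snapshot(paths)`, then periodically call `changed(baseline, paths)` – the same init/check cycle as aide --init and aide --check.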

References –

AIDE (Advanced Intrusion Detection Environment)