Google Chrome browser is not using my host DNS settings.

Environment –
Operating System : CentOS release 6.8
Google Chrome: Version 55.0.2883.87 (64-bit)

I have an intranet DNS server for an internal domain name. The domain name is resolved internally by my DNS server to an internal private IP address. With Firefox I could always visit my internal site without issues, but recently I installed the Chrome browser on my CentOS desktop, and when I tried to visit my internal site, it directed me to an Internet site which I don’t own. Apparently Google Chrome was ignoring my DNS settings and using its own name servers.

My first attempt – flush DNS on Chrome (failed)
I went to the DNS configuration page in Chrome and cleared the host cache. The DNS settings clearly showed my internal name servers, and yet Chrome was not using them. Even after clearing the host cache and flushing socket pools, it didn’t help.

Chrome’s page to view and manage DNS settings is chrome://net-internals/#dns.


My second attempt – block IPv6 (worked)
After running tcpdump on port 53, I found Chrome was querying an IPv6 address, 2001:4860:4860::8888 (Google Public DNS), to resolve domains.

Added the lines below to /etc/sysctl.conf in order to disable IPv6 temporarily for a test –

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

And executed the command

 sysctl -p 

to apply the new kernel settings. After this change and another DNS cache flush, I was able to visit my internal site.
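To confirm the sysctl change took effect, here is a quick check, reading the standard Linux sysctl locations (a sketch; the fallback message is my own) –

```shell
# Read the current IPv6 disable state; 1 means IPv6 is disabled,
# 0 means enabled, "n/a" if the proc entry is absent on this system.
state=$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "n/a")
echo "disable_ipv6 = ${state}"
```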

I am guessing the DNS IPs used by Chrome are somehow hard-coded internally; I haven’t been able to find where they are set.


curl – get only numeric HTTP response code

Most browsers have developer plugins where you can see the HTTP status code and other request/response headers. For automation purposes, though, you are most likely to use tools such as curl, httpie or the Python requests module. In this post, we will see how to use curl to parse the HTTP response and get only the response code.

1. First attempt – use the ‘-I’ option to fetch the HTTP header only.

The first line will show the response code.

daniel@linubuvma:~$ curl -I www.google.com
HTTP/1.1 200 OK
Date: Sun, 09 Apr 2017 06:45:00 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding

But does this work all the time? No, some web services have problems with the HEAD HTTP request. Let us try amazon.com, for instance –

daniel@linubuvma:~$ curl -I www.amazon.com
HTTP/1.1 503 Service Unavailable
Content-Type: text/html
Content-Length: 6450
Connection: keep-alive
Server: Server
Date: Sun, 09 Apr 2017 06:50:02 GMT
Set-Cookie: skin=noskin; path=/;
Vary: Content-Type,Host,Cookie,Accept-Encoding,User-Agent
X-Cache: Error from cloudfront
Via: 1.1 (CloudFront)

daniel@linubuvma:~$ curl -I -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20101026 Firefox/3.6.12" www.amazon.com
HTTP/1.1 405 MethodNotAllowed
Content-Type: text/html; charset=ISO-8859-1
Connection: keep-alive
Server: Server
Date: Sun, 09 Apr 2017 06:49:47 GMT
Set-Cookie: skin=noskin; path=/;
Strict-Transport-Security: max-age=47474747; includeSubDomains; preload
x-amz-id-1: N2RDV79SBB791BTYG2K8
allow: POST, GET
Vary: Accept-Encoding,User-Agent
X-Frame-Options: SAMEORIGIN
X-Cache: Error from cloudfront
Via: 1.1 (CloudFront)

In the first attempt, amazon.com was actually blocking automated checks by looking at the user-agent in the header; the response code was 503, so I had to trick it by changing the user-agent header. Once I changed the user-agent, I got a 405 – the web server does not allow our HEAD HTTP request (the ‘-I’ option).

2. Second attempt – use ‘-w’ option to write-out specific parameter.

curl has a ‘-w’ option for defining a specific parameter to write out to the screen or stdout. Some of the variables are content_type, size_header and http_code. In our case, we are interested in http_code, which will print the numeric response code from the last HTTP transfer. Let us try it –

daniel@linubuvma:~$ curl -I -s -w "%{http_code}\n" -o /dev/null www.google.com

We use ‘-I’ to get only the header, redirect the header to /dev/null, and print only http_code to stdout. This is by far the most efficient way of doing it, as we are not transferring the whole page. If the ‘-I’ option does not work, though, for sites such as amazon.com, we can drop ‘-I’ as follows –

daniel@linubuvma:~$ curl -s -w "%{http_code}\n" -o /dev/null -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20101026 Firefox/3.6.12" www.amazon.com

This is very useful when you are writing scripts that need only the HTTP status code.
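In a script, the numeric code captured from ‘%{http_code}’ can then be tested directly. A minimal sketch (the http_ok helper name is my own) –

```shell
# Classify a numeric HTTP status code captured via curl's %{http_code}.
http_ok() {
  case "$1" in
    2??) return 0 ;;   # any 2xx code counts as success
    *)   return 1 ;;
  esac
}

code=200   # in a real script: code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
if http_ok "$code"; then echo "success ($code)"; else echo "failure ($code)"; fi
```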


How to share your terminal session with another user in real time.

Linux has a script command which is mainly used for ‘typescripting’ all output printed on a terminal: commands typed on a terminal and the resulting output can be written to a file for later retrieval.

One little-known use of the script command is sharing your terminal session with another user. This is particularly useful for remote collaboration, say between an instructor and a student: the instructor leads the session by executing commands in the shell while the student observes. Here is one way of doing this –

1. Instructor creates a named pipe using mkfifo

instructor@linubuvma:/$ mkfifo /tmp/shared-screen

instructor@linubuvma:/$ ls -al /tmp/shared-screen 
prw-rw-r-- 1 instructor instructor 0 Mar 31 00:08 /tmp/shared-screen

instructor@linubuvma:/$ script -f /tmp/shared-screen 

2. Student views the session in real time by reading the shared-screen file –

student@linubuvma:/tmp$ cat shared-screen
Script started on Fri 31 Mar 2017 12:09:42 AM EDT

As soon as the student runs the

cat shared-screen

command, the script command on the instructor’s session, which blocks until the named pipe has a reader, starts streaming.
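The blocking behavior of the named pipe can be seen with a tiny sketch (the file name and message are arbitrary) –

```shell
# A writer to a FIFO blocks until a reader opens it; here the writer is
# backgrounded, and the reader picks up the line once it opens the pipe.
fifo=/tmp/shared-screen-demo
rm -f "$fifo"
mkfifo "$fifo"
( echo "hello from the writer" > "$fifo" ) &
read line < "$fifo"
wait
echo "$line"
rm -f "$fifo"
```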

Whatever is typed on the instructor’s terminal will show up on the student’s screen and the student’s terminal will be restored as soon as the instructor exits or terminates the script session –

instructor@linubuvma:/$ free -m
             total       used       free     shared    buffers     cached
Mem:          3946       3572        374         40        288        996
-/+ buffers/cache:       2288       1658
Swap:         4092        195       3897
instructor@linubuvma:/$ exit

Script done on Fri 31 Mar 2017 12:12:02 AM EDT

Note – the student’s screen will show the instructor’s user id at the bash prompt, as it is a replica of the instructor’s session. Once the instructor terminates the session, the student will get back their original bash prompt.


Server refused to allocate pty : pseudoterminals in use reached the maximum allowed limit.

You are unlikely to encounter this error in most cases, as the default maximum number of pseudoterminals (ptys) in a Linux environment is large enough for typical use. The error might occur, though, if an admin lowers the pty limit or there is an unusually high number of connections to the system via ssh or GUI terminals. Under those circumstances, you will see the error below during an ssh session –

$ ssh daniel@
daniel@'s password:
Server refused to allocate pty

GUI terminal error –

There was an error creating the child process for this terminal
getpt failed: No such file or directory

Per the man page –

“The Linux kernel imposes a limit on the number of available UNIX 98 pseudoterminals. In kernels up to and including 2.6.3, this limit is configured at kernel compilation time (CONFIG_UNIX98_PTYS), and the permitted number of pseudoterminals can be up to 2048, with a default setting of 256. Since kernel 2.6.4, the limit is dynamically adjustable via /proc/sys/kernel/pty/max, and a corresponding file, /proc/sys/kernel/pty/nr, indicates how many pseudoterminals are currently in use.”

To resolve this, get a count of the ptys currently allocated using either of the below commands –

[root@kauai tmp]# sysctl kernel.pty.nr
kernel.pty.nr = 10

[root@kauai tmp]# cat /proc/sys/kernel/pty/nr
10

You can list the allocated pts names –

# ps aux |grep -o -P '\s+pts/\d+\s+' |sort -u

If the currently allocated count is close to the limit, which you can find using

cat /proc/sys/kernel/pty/max

go ahead and increase the maximum limit as follows, say to 4096 in this example –

sysctl -w kernel.pty.max=4096
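Putting the two proc files together, here is a small sketch that reports usage against the limit (the fallback values are my own, for systems where the entries are absent) –

```shell
# Compare currently allocated ptys with the kernel limit.
nr=$(cat /proc/sys/kernel/pty/nr 2>/dev/null || echo 0)
max=$(cat /proc/sys/kernel/pty/max 2>/dev/null || echo 4096)
echo "pty usage: ${nr} of ${max}"
```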


Google cloud platform – NEXT 2017

As of the beginning of 2017, Amazon Web Services (AWS) is the leader in cloud-based infrastructure as a service (IaaS), followed by Microsoft. The cloud business is still competitive, and many enterprises have yet to migrate fully to the cloud. Cloud service providers continuously compete on quality of service, the diversity and range of services provided, price, etc.

A newer entrant to the cloud business is Google, which has recently started targeting big enterprises as well as individual developers and small businesses. The core infrastructure Google has used internally for years to serve global users of services such as Gmail, Google Maps and Google Search is now being offered to customers. The Gartner Magic Quadrant for 2016 put it in the visionaries quadrant.


To get started with Google Cloud Platform (GCP), go to the documentation page for GCP.

For a list of solutions and products offered by GCP, see GCP products.

How to run playbooks against a host running ssh on a port other than port 22.

Ansible is a simple automation and configuration management tool which allows you to execute commands or scripts on remote hosts, either ad hoc or using playbooks. It is push-based, and uses ssh to run the playbooks against remote hosts. The steps below show how to run ansible playbooks against a host running ssh on port 2222.

One of the hosts managed by ansible is running ssh on a non-default port. It is a docker container listening on port 2222; ssh inside the container actually listens on port 22, but the host redirects port 2222 to port 22 in the container.

1. Use environment variable –

 ansible-playbook tasks/app-deployment.yml --check -e ansible_ssh_port=2222

2. specify the port in the inventory or hosts file –

Under the hosts file, set the hostname to have the format ‘server:port’ – for example, for the docker1 host used in the run below –

[docker-hosts]
docker1:2222
Let us run the playbook now –

root@linubuvma:/tmp/ansible# cat tasks/app-deployment.yml
- hosts: docker-hosts
  vars:
    app_version: 1.1.0
  tasks:
  - name: install git
    apt: name=git state=latest
  - name: Checkout the application from git
    git: repo= dest=/srv/www/myapp version={{ app_version }}
    register: app_checkout_result

root@linubuvma:/tmp/ansible# ansible-playbook tasks/app-deployment.yml

PLAY [docker-hosts] ************************************************************

TASK: [install git] ***********************************************************
changed: [docker1]

TASK: [Checkout the application from git] *************************************
changed: [docker1]

PLAY RECAP ********************************************************************
docker1                    : ok=2    changed=2    unreachable=0    failed=0


How to locate broadband Internet service providers in your area.

The FCC keeps a database of national broadband providers, and it is publicly accessible on the National Broadband Map site. Just enter your full address or ZIP code, and it will list the broadband providers in your area as well as their advertised speeds. One caveat: the data was last updated in June 2014, so you might not get the latest information.

I checked the database for an area which has had Google Fiber for the last 9 or 10 months, and it didn’t show Google Fiber as available in that area. The database does list Google Fiber Inc. as a provider, though.

If you want to check whether Google Fiber is available or coming soon to your area, check the Google Fiber website.

One nice thing about the National Broadband Map is the open-standards API made available to the public. It is well documented and very easy to pull data from programmatically. The API also gives you access to census data and demographic information.

Note – most of the queries require the FIPS state and/or county codes (Federal Information Processing Standard codes). For instance, for New York State the FIPS code is 36. Any county within a state has a full code of state FIPS code + county FIPS code; Bronx county (FIPS 005), for instance, has the full code 36005.

Here is a simple Python script showing how to interact with the API; we will use Bronx county and/or NY as an example.

Let us get the overall broadband ranking within New York state –

import requests

# The API endpoint URL was omitted here; 'r' holds the list of county
# ranking records (dicts) returned by the broadband ranking query.
for item in r:
    print item.get('rank'), item.get('geographyName')

The output, ordered by rank, looks like this –
1 Franklin
2 Cattaraugus
3 Allegany
4 Schoharie
5 Otsego
6 Lewis
7 Washington
8 Hamilton
9 Yates
10 Delaware
11 Steuben
12 Wyoming
13 Cayuga
14 Jefferson
15 Herkimer
16 Schuyler
17 Essex
18 Seneca
19 St. Lawrence
20 Clinton
21 Montgomery
22 Chautauqua
23 Wayne
24 Columbia
25 Greene
26 Tioga
27 Livingston
28 Tompkins
29 Rensselaer
30 Chemung
31 Genesee
32 Cortland
33 Oswego
34 Sullivan
35 Albany
36 Oneida
37 Chenango
38 Orleans
39 Fulton
40 Madison
41 Niagara
42 Ontario
43 Warren
44 Schenectady
45 Ulster
46 Erie
47 Putnam
48 Onondaga
49 Saratoga
50 Broome
51 Suffolk
52 Monroe
53 Kings
54 Queens
55 New York
56 Bronx
57 Nassau
58 Westchester
59 Richmond
60 Orange
61 Rockland
62 Dutchess

Bronx county is ranked 56 out of 62, and the data for Bronx would be –

for item in r:
    if item.get('geographyId') == '36005':
        print item

{u'anyWireline': 1.0,
 u'anyWirelineError': 0.0,
 u'downloadSpeedGreaterThan3000k': 1.0,
 u'downloadSpeedGreaterThan3000kError': 0.0,
 u'geographyId': u'36005',
 u'geographyName': u'Bronx',
 u'myAreaIndicator': False,
 u'population': 1482311,
 u'providerGreaterThan3': 1.0,
 u'rank': 56,
 u'stateFips': u'36',
 u'wirelineProviderEquals0': 0.0}

There is lots more you can do with the data; feel free to dig further.

Splunk offers a free version with a 500 MB per day indexing limit, meaning you can add only 500 MB of new data for indexing per day. This should work for most home users. The catch is that the first time you install Splunk, you might configure it to ingest your existing log files, which most likely total more than 500 MB if you consolidate your logs on a syslog server like I do. In that case, Splunk will stop indexing any data above 500 MB per day. During first-time indexing, make sure your existing data or log files are below this limit. If for some reason you ask Splunk to ingest far more than 500 MB of data and you want to start fresh, run the following command to clean up the data –

 splunk  clean eventdata 

You can find the details on Splunk Free on this link.
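Before a first-time ingest, it can help to check how much log data you are about to index. A rough sketch (the /var/log path and 500 MB threshold are examples matching the Splunk Free limit above) –

```shell
# Total size, in MB, of the directory Splunk would ingest, compared
# against the 500 MB/day Splunk Free indexing limit.
dir=/var/log
mb=$(du -sm "$dir" 2>/dev/null | awk '{print $1}')
mb=${mb:-0}
echo "${dir}: ${mb} MB"
if [ "$mb" -gt 500 ]; then echo "over the Splunk Free daily limit"; else echo "under the limit"; fi
```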

Here is the series of commands I had to execute to clean up the event data –

[daniel@localhost]$ pwd 
[daniel@localhost]$ sudo -H -u splunk ./splunk  clean eventdata
In order to clean, Splunkd must not be running.

[daniel@localhost bin]$ sudo -H -u splunk /opt/splunk/bin/splunk stop
Stopping splunkd...
Shutting down.  Please wait, as this may take a few minutes.
..                                                         [  OK  ]
Stopping splunk helpers...
                                                           [  OK  ]

[daniel@localhost bin]$ sudo -H -u splunk ./splunk  clean eventdata
This action will permanently erase all events from ALL indexes; it cannot be undone.
Are you sure you want to continue [y/n]? y
Cleaning database _audit.
Cleaning database _blocksignature.
Cleaning database _internal.
Cleaning database _introspection.
Cleaning database _thefishbucket.
Cleaning database history.
Cleaning database main.
Cleaning database summary.
Disabled database 'splunklogger': will not clean.

[daniel@localhost bin]$ sudo -H -u splunk /opt/splunk/bin/splunk start
Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port []: open
	Checking kvstore port [8191]: open
	Checking configuration...  Done.
	Checking critical directories...	Done
	Checking indexes...
		Validated: _audit _blocksignature _internal _introspection _thefishbucket history main summary
	Checking filesystem compatibility...  Done
	Checking conf files for problems...
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
                                                           [  OK  ]

Waiting for web server at to be available.. Done

If you get stuck, we're here to help.  
Look for answers here:

The Splunk web interface is at https://localhost:8000

How do you find out the number of CPU cores available in your Linux system? Here are a number of ways; pick the one which works for you –

1. nproc command –

[daniel@kauai tmp]$ nproc
2

2. /proc/cpuinfo

[daniel@kauai tmp]$ grep proc /proc/cpuinfo 
processor	: 0
processor	: 1
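The /proc/cpuinfo method is easy to script; a small sketch –

```shell
# Count "processor" entries in /proc/cpuinfo; should match nproc's output.
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "CPU cores: ${cores}"
```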

3. top – run top command and press ‘1’ (number 1), you will see the list of cores at the top, right below tasks.

Cpu0 : 0.7%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 2.7%us, 1.0%sy, 0.0%ni, 96.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

4. lscpu – displays information about the CPU architecture. Multiply Socket(s) by Core(s) per socket; in this case 1 x 2 = 2 –

[daniel@kauai tmp]$ lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 6
Model name:            AMD Athlon(tm) II X2 250 Processor
Stepping:              3
CPU MHz:               3000.000
BogoMIPS:              6027.19
Virtualization:        AMD-V
L1d cache:             64K
L1i cache:             64K
L2 cache:              1024K
NUMA node0 CPU(s):     0,1

5. Kernel threads – pick one of the kernel housekeeping threads, such as “migration” or “watchdog”, and see how many cores it runs on –

[daniel@kauai tmp]$ ps aux |grep '[m]igration'
root         3  0.0  0.0      0     0 ?        S    Dec09   0:02 [migration/0]
root         7  0.0  0.0      0     0 ?        S    Dec09   0:02 [migration/1]

[daniel@kauai tmp]$ ps aux |grep '[w]atchdog'
root         6  0.0  0.0      0     0 ?        S    Dec09   0:00 [watchdog/0]
root        10  0.0  0.0      0     0 ?        S    Dec09   0:00 [watchdog/1]