How to interact with web services.

Curl is the de facto CLI tool for interacting with web services, as well as other non-HTTP services such as FTP or LDAP. Linux and Unix system administrators as well as developers love it for its ease of use and debugging capabilities. When you want to interact with web services from within scripts, curl is the number one choice. For downloading files from the web, wget is commonly used as well, but curl can do way more.
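For instance, a plain file download is roughly equivalent with either tool; a quick sketch (the URL is only a placeholder):

# wget saves the file under its remote name by default
wget http://example.com/archive.tar.gz
# curl needs -O for the same behavior; -L follows redirects
curl -L -O http://example.com/archive.tar.gz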

Since enough has been written about curl, this post is about a tool that makes interaction with web services a lot more human friendly, with nicely formatted and colored output – httpie. It is written in Python.

Installation

apt-get  install httpie     #(Debian/Ubuntu)
yum install httpie          #(Redhat/CentOS)

Note – although the package name is httpie, the binary file is installed as http.
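A quick way to confirm the binary name and the installed version (output will vary by distribution):

daniel@lindell:/$ which http
daniel@lindell:/$ http --version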

When troubleshooting web services, the first thing we usually check is the HTTP request and response headers –

daniel@lindell:/$ http -p hH  httpbin.org
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: httpbin.org
User-Agent: HTTPie/0.9.2

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 12150
Content-Type: text/html; charset=utf-8
Date: Thu, 22 Dec 2016 01:32:13 GMT
Server: nginx

Here -H prints the request headers and -h the response headers. Similarly, -B is for the request body and -b is for the response body.
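These letters can be combined and passed to -p in any order; for example, to print only the response body (a small sketch against the same test service, whose /ip endpoint returns the caller's address as JSON):

daniel@lindell:/$ http -p b httpbin.org/ip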

We can also pass more complex HTTP headers, in this case “If-Modified-Since”: the web server will return 304 if the static content I am requesting has not been modified. If we move the date a few years back, it responds with a 200 status code instead.

daniel@lindell:/$ http -p hH http://linuxfreelancer.com/wp-content/themes/soulvision/images/texture.jpg "If-Modified-Since: Wed, 21 Dec 2016 20:51:14 GMT"
GET /wp-content/themes/soulvision/images/texture.jpg HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: linuxfreelancer.com
If-Modified-Since:  Wed, 21 Dec 2016 20:51:14 GMT
User-Agent: HTTPie/0.9.2

HTTP/1.1 304 Not Modified
Connection: Keep-Alive
Date: Thu, 22 Dec 2016 01:39:28 GMT
ETag: "34441c-f04-4858fcd6af900"
Keep-Alive: timeout=15, max=100
Server: Apache/2.2.14 (Ubuntu)

daniel@lindell:/$ http -p hH http://linuxfreelancer.com/wp-content/themes/soulvision/images/texture.jpg "If-Modified-Since: Wed, 21 Dec 2008 20:51:14 GMT"
GET /wp-content/themes/soulvision/images/texture.jpg HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: linuxfreelancer.com
If-Modified-Since:  Wed, 21 Dec 2008 20:51:14 GMT
User-Agent: HTTPie/0.9.2

HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 3844
Content-Type: image/jpeg
Date: Thu, 22 Dec 2016 01:39:37 GMT
ETag: "34441c-f04-4858fcd6af900"
Keep-Alive: timeout=15, max=100
Last-Modified: Sat, 01 May 2010 22:23:00 GMT
Server: Apache/2.2.14 (Ubuntu)

httpie also makes JSON encoding as well as POST/PUT methods a lot easier. There is no need to format your payload as JSON; it defaults to JSON. Debugging is easier too with the -v option, which shows the raw wire data –

daniel@lindell:/$ http -v PUT httpbin.org/put name=JoeDoe email=joedoe@gatech.edu
PUT /put HTTP/1.1
Accept: application/json
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 48
Content-Type: application/json
Host: httpbin.org
User-Agent: HTTPie/0.9.2

{
    "email": "joedoe@gatech.edu", 
    "name": "JoeDoe"
}

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 487
Content-Type: application/json
Date: Thu, 22 Dec 2016 01:44:20 GMT
Server: nginx

{
    "args": {}, 
    "data": "{\"name\": \"JoeDoe\", \"email\": \"joedoe@gatech.edu\"}", 
    "files": {}, 
    "form": {}, 
    "headers": {
        "Accept": "application/json", 
        "Accept-Encoding": "gzip, deflate", 
        "Content-Length": "48", 
        "Content-Type": "application/json", 
        "Host": "httpbin.org", 
        "User-Agent": "HTTPie/0.9.2"
    }, 
    "json": {
        "email": "joedoe@gatech.edu", 
        "name": "JoeDoe"
    }, 
    "origin": "192.1.1.2", 
    "url": "http://httpbin.org/put"
}

I have only touched the surface of httpie here; please feel free to get more detailed information from the GitHub repo. It has built-in JSON support, form/file upload, HTTPS, proxies and authentication, custom headers, persistent sessions etc.
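To give a taste of two of those features, here is a hedged sketch of a form-encoded file upload and a basic-auth request (the file path and credentials are made up for illustration):

# -f switches from JSON to form encoding; field@/path attaches a file
daniel@lindell:/$ http -f POST httpbin.org/post name=JoeDoe report@/tmp/report.txt

# -a sends basic auth credentials
daniel@lindell:/$ http -a user:passwd httpbin.org/basic-auth/user/passwd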

See also the article on wget and curl from a previous post.

In this series of Docker tutorials, I will walk you through hands-on experimentation with Docker. The operating system I am working on is Ubuntu 16.04.

Docker is a containerization technology which allows deployment of applications in containers. Its main advantage is speed: a Docker container hosting an application can be up and running in a few milliseconds.

As opposed to virtual machines, containers run on top of the host OS and share the host kernel. Thus you can only run a Linux container on a Linux host.

Docker Installation – use this link for instructions on how to install Docker.

Installation Limitation – Docker runs on a 64-bit OS only and requires Linux kernel version 3.10 or above. You can verify this using the commands below –

root@lindell:~# arch
x86_64
root@lindell:~# uname -r
4.4.0-47-generic

Docker – terminology

    Images – are the building blocks of Docker. Once created or built, they can be shared, updated and used to launch containers. No image, no containers.

    Containers – are images in action. Containers give images life: a container is an image plus everything the operating system needs to run the application.

    Registry – where images are stored. Registries can be public or private; Docker Hub is a typical example of a public registry.

    Data volumes – persistent storage used by containers (see the short sketch after this list).
    Dockerfile – file containing instructions to be read by Docker for building a Docker image.

    Node – physical or virtual machine running Docker engine.
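To make the data volume term a bit more concrete, here is a hedged sketch of creating a named volume and mounting it into a container (the volume name and mount path are made up for illustration):

# create a named volume, then mount it at /data inside a container
root@lindell:~# docker volume create --name mydata
root@lindell:~# docker run -ti -v mydata:/data ubuntu /bin/bash

Anything written under /data survives after the container exits and can be mounted into other containers.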

Our first Docker container
After installing Docker and making sure that the Docker engine is running, run the commands below to check which Docker images are available (‘docker images’) and whether any Docker containers are running (‘docker ps’). Neither command should return any results on a first-time installation.

root@lindell:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
root@lindell:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

The next step is to get a Docker image from Docker Hub. For security reasons, we are going to use only official images –

root@lindell:~# docker search --filter=is-official=true ubuntu
NAME                 DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
ubuntu               Ubuntu is a Debian-based Linux operating s...   5238      [OK]
ubuntu-upstart       Upstart is an event-based replacement for ...   69        [OK]
ubuntu-debootstrap   debootstrap --variant=minbase --components...   27        [OK]


root@lindell:~# docker run -ti ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu

b3e1c725a85f: Pull complete
4daad8bdde31: Pull complete
63fe8c0068a8: Pull complete
4a70713c436f: Pull complete
bd842a2105a8: Pull complete
Digest: sha256:7a64bc9c8843b0a8c8b8a7e4715b7615e4e1b0d8ca3c7e7a76ec8250899c397a
Status: Downloaded newer image for ubuntu:latest

root@d1b13e2c3d3f:/# docker images
bash: docker: command not found

root@d1b13e2c3d3f:/# hostname -f
d1b13e2c3d3f

root@d1b13e2c3d3f:/# uname -r
4.4.0-47-generic

We just downloaded an official Ubuntu image and started an Ubuntu container by running /bin/bash inside the newly started container. The ‘-ti’ option runs bash interactively (-i) by allocating a pseudo-TTY (-t).

Note that the kernel version in the container is the same as the host’s kernel version. During the first run, Docker tries to find the Ubuntu image in local storage; if it can’t find it, it downloads it from Docker Hub. On subsequent runs, starting containers will be much faster.
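A rough way to see the speed for yourself, once the image is cached locally, is to time a throwaway container (a hedged sketch; --rm removes the container when the command exits):

root@lindell:~# time docker run --rm ubuntu /bin/true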

If we check the images and processes running now –

root@lindell:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              104bec311bcd        5 days ago          129 MB
root@lindell:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
d1b13e2c3d3f        ubuntu              "/bin/bash"         About a minute ago   Up About a minute              

At this point, if we exit from the container, ‘docker ps’ will no longer show the container as it has been terminated. We use ‘docker ps -a’ instead to view it, and then the ‘docker start’ command to start the container again –

root@d1b13e2c3d3f:/# exit
exit


root@lindell:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

root@lindell:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
d1b13e2c3d3f        ubuntu              "/bin/bash"         5 minutes ago       Exited (0) 5 seconds ago

root@lindell:~# docker exec -ti d1b13 /bin/bash
root@d1b13e2c3d3f:/# uptime
 01:39:46 up  1:18,  0 users,  load average: 0.39, 0.39, 0.37
root@d1b13e2c3d3f:/# 
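Note that ‘docker exec’ only works against a running container; if ‘docker ps -a’ still shows the container as Exited, it has to be started first. A hedged sketch of the full sequence (using the same container ID prefix as above):

root@lindell:~# docker start d1b13
root@lindell:~# docker exec -ti d1b13 /bin/bash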
               

In Part 2 of this quick introduction to Docker, we will walk through using a Dockerfile to automate image creation. We will see how quickly we can go from development to deployment.

You might also find useful some of the questions I have answered on Stack Overflow about Docker.

FCC chairman Tom Wheeler will be resigning next month, and this is not good news for the proponents of Net Neutrality. Tom Wheeler was the driving force behind the reclassification of broadband Internet access as a telecom service. The open Internet rules that the FCC approved were a great victory for the supporters of Net Neutrality. With the chairman’s departure, the fight will still continue. The opponents of Net Neutrality, primarily cable/telecom companies, did not like the reclassification of broadband Internet as a telecom service, as it imposes more government and public oversight of the Internet.

For many people, the idea of Net Neutrality is still not clear. In its simplest form, it is the principle that all packets or network traffic should be treated equally. If you pay a monthly fee of $50 to your cable company for a given bandwidth, the cable company should not interfere with your browsing. As far as the cable company is concerned, whether you visit site A or site B, or use your bandwidth to download some media, it should treat the traffic equally. Of course, illegal sites can be blocked per the legality of content in a given country. Basically the pipe has to be agnostic of the type of traffic and of its source or destination.

That is how the Internet works right now in practice, but the telecom companies want to change it. In my view, it is how the Internet should be. An open Internet encourages innovation at the application/content layer, as any new entrant, whether a start-up or a kid in a basement, can launch a successful product without negotiating with the cable companies. Without Net Neutrality, the cable companies can pick and choose the winners, as they will practically be the gatekeepers of content. There is no limit to the amount of control they would have over the Internet; a future with no Net Neutrality is a future of a multi-tiered and multi-priced Internet. What that means is that the price as well as the quality of your Internet service might vary based on any of the following –

1. What site are you visiting? Maybe Comcast made a deal to prioritize google.com.
2. What is your source IP address? This identifies your location; AT&T might have a deal with a certain municipality or the owner of an IP block.
3. From which country are you browsing? The cable company has a deal with a foreign government.
4. What type of media or content are you viewing? Text/audio/video? The cable company wants to block a competitor’s streaming video.
5. What time of the day or day of the week are you browsing? The cable company has a popular show that it streams through a recently acquired media company during a certain time of the day.
6. What browser are you using? Internet Explorer, Firefox, Safari, Chrome? The browser maker has a deal with the cable company.
etc.

Any publicly identifiable information that the cable companies can get from your browsing can be used for pricing purposes.

This is how I analogize Net Neutrality: imagine all the Interstate roads were owned by private companies, say companies X, Y and Z. Without any regulatory rules, the Interstate owners could negotiate with car manufacturers over which types of cars get the fastest or even safest lanes. If Ford pays company X more, the “road owner” would allow only Ford cars to take the HOV lanes, or reserve more lanes for Ford cars while limiting drivers of other car types to the slowest lanes. As a prospective car owner, you wouldn’t just pick a car based on mileage or driving habits; you would have to do extra research to find out what kind of deals the car manufacturer has made with the road owners. Travelling long distance would be a nightmare, as the various segments of the Interstate would be owned by different companies, and company X would charge you at a different rate than companies Y and Z. So by supporting Net Neutrality, we are agreeing to the principle that the type of car you drive should not matter; we should all abide by the same rules. This does not mean that someone who can afford a high quality car cannot drive faster than an older car, in the same manner that if you have a lower bandwidth package with your cable company, you might not be able to view high quality movies smoothly.

Per opensecrets.org, here is the list of the top spenders on lobbying for the year 2016; sure enough, the telecom and cable companies and their associations are near the top of the list.

US Chamber of Commerce $79,205,000
National Assn of Realtors $45,255,769
Blue Cross/Blue Shield $19,058,109
American Hospital Assn $15,454,734
American Medical Assn $15,290,000
Pharmaceutical Research & Manufacturers of America $14,717,500
Boeing Co $12,870,000
AT&T Inc $12,660,000
National Assn of Broadcasters $12,118,000
Alphabet Inc $11,850,000
Business Roundtable $11,530,000
Comcast Corp $10,510,000
Lockheed Martin $10,380,488
Dow Chemical $10,295,982
Southern Co $10,090,000
Northrop Grumman $9,420,000
National Cable & Telecommunications Assn $9,230,000
FedEx Corp $9,221,000
Exxon Mobil $8,840,000
Amazon.com $8,624,000

Support Net Neutrality at Freepress.net.

Splunk offers a free version with a 500 MB per day indexing limit, which means you can only add 500 MB of new data for indexing per day. This might work for most home users; the only problem is that the first time you install Splunk, you might configure it to ingest your existing log files, which most likely amount to more than 500 MB if you consolidate your logs on a syslog server like I do. In that case, Splunk will stop indexing any data above 500 MB per day. During the initial indexing, make sure your existing data or log files are below this limit. If for some reason you ask Splunk to ingest way more than 500 MB of data and you want to start fresh, run the following command to clean up the data –

 splunk  clean eventdata 

You can find the details on Splunk Free at this link.
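Before pointing Splunk at an existing syslog archive, it is worth checking how much data is already sitting there, so you know whether you are anywhere near the 500 MB limit; a quick sketch (the log directory path is just an assumption, adjust to wherever your syslog server writes):

$ du -sh /var/log/remote/*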

Here is the series of commands I had to execute to clean up the event data –

[daniel@localhost]$ pwd 
/opt/splunk/bin
[daniel@localhost]$ sudo -H -u splunk ./splunk  clean eventdata
In order to clean, Splunkd must not be running.

[daniel@localhost bin]$ sudo -H -u splunk /opt/splunk/bin/splunk stop
Stopping splunkd...
Shutting down.  Please wait, as this may take a few minutes.
..                                                         [  OK  ]
Stopping splunk helpers...
                                                           [  OK  ]
Done.

[daniel@localhost bin]$ sudo -H -u splunk ./splunk  clean eventdata
This action will permanently erase all events from ALL indexes; it cannot be undone.
Are you sure you want to continue [y/n]? y
Cleaning database _audit.
Cleaning database _blocksignature.
Cleaning database _internal.
Cleaning database _introspection.
Cleaning database _thefishbucket.
Cleaning database history.
Cleaning database main.
Cleaning database summary.
Disabled database 'splunklogger': will not clean.

[daniel@localhost bin]$ sudo -H -u splunk /opt/splunk/bin/splunk start
Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration...  Done.
	Checking critical directories...	Done
	Checking indexes...
		Validated: _audit _blocksignature _internal _introspection _thefishbucket history main summary
	Done
	Checking filesystem compatibility...  Done
	Checking conf files for problems...
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
Done
                                                           [  OK  ]

Waiting for web server at https://127.0.0.1:8000 to be available.. Done


If you get stuck, we're here to help.  
Look for answers here: http://docs.splunk.com

The Splunk web interface is at https://localhost:8000

Getting yourself familiar with basic programming skills doesn’t hurt. In fact, there are times when you will desperately need to automate a task for some seemingly simple job, and yet not find the right tool out there which caters to your needs. It is not about writing thousands of lines of code and designing some user interface; just a dozen or two lines might serve well at times. Here is a list of C programs taken from the book ‘The C Programming Language’ by K&R; some of the code might have been changed by me while practicing.

1. Introductory Tutorial – Input/output, characters, strings

2. Types, Operators and Expressions – Upper/lower case conversion, binary operators

3.  Control Flow – If/else, do/while, binary search, sorting, argument list

4.  Functions and Program structure – macros, polish calculator, pattern searching, quick sort

5. Pointers and Arrays – command line argument, find, sort, memory allocation

6.  Structures – self referential arrays, word key counter

7.  Input/Output – file copying, calculator, sscanf

8.  The UNIX System Interface – memory allocations, file & directory listing

The date command on Linux boxes is one of the most powerful open source utilities. It is not just for setting the clock on your PC or server, or showing you the current time; it can do amazingly more. It can answer virtually all of your chronological questions.

The simplest use case of date command is to view current time, possibly in different time formats –

$ date
Sat Dec 17 00:45:35 EST 2016

$ date '+%Y-%m-%d'
2016-12-17

$ date '+%c'
Sat 17 Dec 2016 12:45:51 AM EST

It is useful in converting time to/from epoch as well –

$ date '+%s'
1481953669

$ date --date='@1481953669'
Sat Dec 17 00:47:49 EST 2016
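Epoch values also make simple date arithmetic easy in the shell; for example, a small sketch that counts the days between two arbitrary dates:

$ echo $(( ( $(date -d '2017-01-01' +%s) - $(date -d '2016-12-17' +%s) ) / 86400 ))
15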

The most user friendly feature of the date command is the ‘-d’ or ‘--date’ option, which accepts a mostly free-format, human readable date string such as “yesterday”, “last week”, “next year”, “3 min ago”, “last friday + 2 hours” etc. Here is an excerpt from the man page of the GNU date command –

DATE STRING
The --date=STRING is a mostly free format human readable date string such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29 16:21:42" or even "next Thursday". A date string may contain items indicating calendar date, time of day, time zone, day of week, relative time, relative date, and numbers. An empty string indicates the beginning of the day. The date string format is more complex than is easily documented here but is fully described in the info documentation.

Let us play with it –

$ date -d '2 hours ago'
Fri Dec 16 22:51:25 EST 2016

$ date -d '2 hours ago' '+%c'
Fri 16 Dec 2016 10:51:30 PM EST

$ env TZ=America/Los_Angeles date -d '2 hours ago' '+%c'
Fri 16 Dec 2016 07:52:33 PM PST

$ date -d 'jan 2 1990'
Tue Jan  2 00:00:00 EST 1990

$ date -d 'yesterday'
Fri Dec 16 00:53:04 EST 2016

$ date -d 'next year + 2 weeks'
Sun Dec 31 00:53:27 EST 2017

To give a practical example, let us use the date command to find out on which day of the week all the birthdays of someone fall, given their date of birth. This works for past birthdays as well as future ones; for this example, we will go from the date of birth up to the present. Let us pick someone who was born on Feb 29, 1988. This is an edge case, and the date command should be smart enough to figure out the leap years.

for year in {1988..2016}; do
  # date exits non-zero when "feb 29" is not a valid date for that year
  if date -d "feb 29 $year" &>/dev/null; then
    echo -n "Year: $year   "; date -d "feb 29 $year" '+%c'
  fi
done

Year: 1988   Mon 29 Feb 1988 12:00:00 AM EST
Year: 1992   Sat 29 Feb 1992 12:00:00 AM EST
Year: 1996   Thu 29 Feb 1996 12:00:00 AM EST
Year: 2000   Tue 29 Feb 2000 12:00:00 AM EST
Year: 2004   Sun 29 Feb 2004 12:00:00 AM EST
Year: 2008   Fri 29 Feb 2008 12:00:00 AM EST
Year: 2012   Wed 29 Feb 2012 12:00:00 AM EST
Year: 2016   Mon 29 Feb 2016 12:00:00 AM EST

A typical case would be someone born on, say, Jan 8, 1990 –

age=0
for year in {1990..2016}; do 
  echo -n "Age: $age  "; date -d "Jan 8 $year" '+%A %d %B %Y'
  age=$((age+1))
done

Age: 0  Monday 08 January 1990
Age: 1  Tuesday 08 January 1991
Age: 2  Wednesday 08 January 1992
Age: 3  Friday 08 January 1993
Age: 4  Saturday 08 January 1994
Age: 5  Sunday 08 January 1995
Age: 6  Monday 08 January 1996
Age: 7  Wednesday 08 January 1997
Age: 8  Thursday 08 January 1998
Age: 9  Friday 08 January 1999
Age: 10  Saturday 08 January 2000
Age: 11  Monday 08 January 2001
Age: 12  Tuesday 08 January 2002
Age: 13  Wednesday 08 January 2003
Age: 14  Thursday 08 January 2004
Age: 15  Saturday 08 January 2005
Age: 16  Sunday 08 January 2006
Age: 17  Monday 08 January 2007
Age: 18  Tuesday 08 January 2008
Age: 19  Thursday 08 January 2009
Age: 20  Friday 08 January 2010
Age: 21  Saturday 08 January 2011
Age: 22  Sunday 08 January 2012
Age: 23  Tuesday 08 January 2013
Age: 24  Wednesday 08 January 2014
Age: 25  Thursday 08 January 2015
Age: 26  Friday 08 January 2016

How do you find out the number of CPU cores available on your Linux system? Here are a number of ways; pick the one that works for you –

1. nproc command –

[daniel@kauai tmp]$ nproc
2

2. /proc/cpuinfo

[daniel@kauai tmp]$ grep proc /proc/cpuinfo 
processor	: 0
processor	: 1
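If you only want the count rather than the list, a small variation is to count the matching lines instead:

[daniel@kauai tmp]$ grep -c ^processor /proc/cpuinfo
2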

3. top – run the top command and press ‘1’ (the number one); you will see the list of cores at the top, right below the tasks summary.

Cpu0 : 0.7%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 2.7%us, 1.0%sy, 0.0%ni, 96.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

4. lscpu – displays information about the CPU architecture. Count Socket(s) times Core(s) per socket, in this case 1 x 2 = 2 –

[daniel@kauai tmp]$ lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 6
Model name:            AMD Athlon(tm) II X2 250 Processor
Stepping:              3
CPU MHz:               3000.000
BogoMIPS:              6027.19
Virtualization:        AMD-V
L1d cache:             64K
L1i cache:             64K
L2 cache:              1024K
NUMA node0 CPU(s):     0,1

5. Kernel threads – pick one of the kernel housekeeping threads, such as “migration” or “watchdog”, and see how many cores it is running on –

[daniel@kauai tmp]$ ps aux |grep '[m]igration'
root         3  0.0  0.0      0     0 ?        S    Dec09   0:02 [migration/0]
root         7  0.0  0.0      0     0 ?        S    Dec09   0:02 [migration/1]

[daniel@kauai tmp]$ ps aux |grep '[w]atchdog'
root         6  0.0  0.0      0     0 ?        S    Dec09   0:00 [watchdog/0]
root        10  0.0  0.0      0     0 ?        S    Dec09   0:00 [watchdog/1]

The shell has environment variables which determine its behavior. Exported environment variables are also a popular way of making an application change its behavior. These environment variables can be loaded or ‘sourced’ using the source builtin command or the ‘.’ notation. In this post, I will share a particular problem I encountered while sourcing environment variables I had saved in a .envrc file.

Problem – sourcing .envrc was not loading the right environment variables, whether with ‘source .envrc’ or ‘. .envrc’. Renaming .envrc to any other file name works, though.

[daniel@kauai tmp]$ cat .envrc 
NAME='Jhon Doe'
[daniel@kauai tmp]$ source .envrc 
[daniel@kauai tmp]$ echo $NAME
Alice Bob

As you can see, the variable NAME was set to ‘Jhon Doe’ in the file, and yet after sourcing .envrc, NAME shows ‘Alice Bob’! Renaming the file seems to resolve the issue –

[daniel@kauai tmp]$ source .envrcs 
[daniel@kauai tmp]$ echo $NAME
Jhon Doe

Troubleshooting using strace – I followed the tips in ‘is-it-possible-to-strace-the-builtin-commands-to-bash’ to strace ‘source’. Stracing shell builtins is not straightforward. After looking at the output, I found that the source builtin was actually reading .envrc from a different directory, not my current working directory! The directory it was sourcing from was one of the directories in the $PATH environment variable.
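For reference, one way to get that kind of trace (a hedged sketch; the exact invocation in the linked answer may differ) is to run a fresh bash under strace and watch which path it opens:

[daniel@kauai tmp]$ strace -f -e trace=open,openat bash -c 'source .envrc' 2>&1 | grep envrc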

Read the man pages – Looking at the man page for bash, under the section for source command –

source filename [arguments]
Read and execute commands from filename in the current shell environment and return the exit status of the last command executed from filename. If filename does not contain a slash, file names in PATH are used to find the directory containing filename. The file searched for in PATH need not be executable. When bash is not in posix mode, the current directory is searched if no file is found in PATH. If the sourcepath option to the shopt builtin command is turned off, the PATH is not searched. If any arguments are supplied, they become the positional parameters when filename is executed. Otherwise the positional parameters are unchanged. The return status is the status of the last command exited within the script (0 if no commands are executed), and false if filename is not found or cannot be read.

Apparently this is expected behavior. If I hadn’t had a .envrc file in one of the $PATH directories, this would have been fine. In this case, there are several solutions –

1. Remove .envrc from $PATH directories [not the best option ]
2. Rename .envrc to a different file [ not ideal either ]
3. When sourcing the file, use absolute path [ good practice ]

[daniel@kauai tmp]$ echo $NAME
Alice Bob
[daniel@kauai tmp]$ pwd
/tmp
[daniel@kauai tmp]$ source /tmp/.envrc 
[daniel@kauai tmp]$ echo $NAME
Jhon Doe

4. When sourcing the file, add slash(‘/’), for instance – source ./.envrc [ good practice ]
5. Disable the sourcepath shell option [ not a bad idea ]

[daniel@kauai tmp]$ shopt sourcepath
sourcepath     	on
[daniel@kauai tmp]$ shopt -u sourcepath
[daniel@kauai tmp]$ source .envrc 
[daniel@kauai tmp]$ echo $NAME
Jhon Doe

Bottom line – make sure you understand how sourcing or loading environment variables works in bash, and follow good practices so that you won’t waste precious hours trying to figure out why your script is behaving strangely.