Every Once in a While

Jun 17, 2019

We get caught up in our daily lives.

Daily lives

Exercise. Bills. Friends. Commitments. The pursuit of happiness.

Pursuit of happiness

And slowly but surely, the world gets smaller.

Pursuit of happiness

And smaller.

Smaller 1

And smaller.

Smaller 2

And smaller.

Smaller 3

And it’s not as obvious as the pictures above. You’re busy living life after all. Pursuing the dream. Work hard today, and you will get everything your heart desires… somewhere down the line. Isn’t that what the books and movies tell us? The rags-to-riches story. The guy or gal who works their ass off for a decade or two, starts their own business and retires happily. The fairy tales. The princess who kisses every frog she comes across until one turns into a prince.

Frog turns into prince

We believe that we must suffer in order to deserve a better life. That happiness only lies on the other side of pain. And so we lift our burdens, toss them over our shoulders and we trudge on. Over spikes and flames. Through storms and hail. Fighting demons. Convinced that happiness lies just on the other side.

Fighting demons

Or perhaps you convince yourself that this is all life has to offer, and that life is just about compromises and adjustments. There are no happy endings, no happy-ever-after. You find someone good enough and you try to make it work. Like two pieces of a puzzle that just don’t fit together, but maybe if you tried hard enough, you could get by?

Puzzle pieces that don't fit

You stop believing in true love, or rather you believe that perhaps it’s simply not something you will experience. You tell yourself that it’s time to stop being childish, to grow up.

Time to put the toys away

One by one, your dreams die. One compromise at a time, you shape yourself to fit into the life that seems to have found you. And then you put a neat little bow on it and declare that you’re done. You’ve figured life out!

Person in a tiny gift box

Every once in a while, someone comes along. They knock on your box and ask you if you’re okay in there. If you’re unlucky or blind enough, you nonchalantly reply “I’m good, how are you?” and they move on with their lives. Sometimes they climb into your box and stay for a little while. The sex is great, but the box isn’t big enough for the both of you.

But sometimes - just sometimes - something clicks. Maybe it’s something in their voice. Maybe it’s something in their eyes. Maybe your inner child remembers what it feels like to dream. You poke your head out of the box, and suddenly the world feels a little bit bigger.

Person peeking out of box

You realize that you’ve been holding your breath all this while. You let it all out, take a fresh one in, and the scent of flowers fills your senses. The world gets a little bit bigger.

Person staring at flowers

You hear chirps, look up, and see birds in the sky. The world gets even bigger.

Person staring at birds

The sun’s shining bright and bathes you in its warm, welcoming embrace. The world gets bigger.

Person staring at the sun

And bigger.

The world is huge

And it’s not that the person you met suddenly made your world bigger. The world has always been the same size, you just forgot. And that person reminded you.

In today’s always-connected world, it’s ironic that many of us feel less connected than ever before. It’s easy to get caught up in the daily drama of right-versus-left, of political correctness, of feeling society’s pressure to fit in. It’s easy to feel like we’re under attack and to strike back at the offenders. Humans evolved in communities; the need to feel like we belong is hard-wired into our DNA. And most of us would rather have an imperfect fit than not have one at all.

I’m an introvert, and I’ve felt this urge to belong all my life. And there are a lot more of me in this world than there are loud, obnoxious “athletic douchebags,” as someone I met recently put it. They’re out there, they’re yearning to connect with people, and most of them don’t know how. It took me a year of therapy to learn how to reach out to people, to learn how to be vulnerable and risk getting hurt in order to make a genuine connection.

We humans either consciously or unconsciously search for meaning in our lives. I believe there is meaning to be found in reaching out to people and making their lives better in some way - small or great. But for any relationship to be formed, someone has to make the first move. When two introverts meet, both assume that they’re imposing on the other. Neither makes the first move and both leave convinced that they somehow let the other person down. Take the first step. I know it’s hard - I struggle with it every day. Just reach out and comfort someone in need. Let them know that you care. Become vulnerable and share a bit of your heart with them. People will surprise you.

Static Website With Hugo, AWS and Cloudflare

May 26, 2019

After writing my previous post about cleaning up my Continuous Deployment solution for deploying this site, I decided to experiment with AWS’s CodePipeline to set up my own home-grown pipeline to build and deploy the site instead of relying on CodeShip.

Infrastructure setup

For all its features, AWS doesn’t make it easy to set up and tweak things in a secure fashion. The recommended way of creating AWS resources is by using STS, IAM Roles and Policies to set up permissions in the most restrictive way possible. But that approach makes it really hard to iteratively create something when you aren’t quite sure about what you need to create and how all the parts are going to fit together. And then once you’re done setting everything up, you tend to lose track of all the little bits and pieces you created along the way that maybe didn’t end up fitting into the final product.

Enter Terraform. You create a few config files specifying all the resources you need, and terraform goes and creates them for you. It keeps track of everything that it created, which makes incremental changes easy. And if you really mess things up or the project doesn’t work out, you can run one command to destroy every single resource that was created.
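As a rough illustration of what those config files look like (the provider region and bucket name below are placeholders, not the ones this site actually uses, and the inline `website`/`acl` syntax matches the AWS provider versions current at the time):

```hcl
# Minimal sketch: an S3 bucket configured for static website hosting.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "site" {
  bucket = "example-blog-bucket"
  acl    = "public-read"

  website {
    index_document = "index.html"
    error_document = "404.html"
  }
}
```

Running `terraform apply` creates the bucket and records it in the state file; `terraform destroy` tears it down again.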

So, I used a few blog posts (several turn up when you search for “s3 static website terraform github” on your favorite search engine) and a few existing terraform modules as a reference to set up a Terraform configuration that spins up an AWS CodePipeline which:

  1. Uses Github webhooks to listen for commits pushed to my blog’s (private) repository
  2. Uses AWS CodeBuild with a golang:1.12 base image to build the site. In my buildspec.yml I put commands to go get Hugo as the INSTALL step, ran hugo -v as the build command and then.. nothing. I’ll get back to that ominous “nothing” shortly.
  3. Uses Amazon S3 as a Deployment Provider in the Deploy step. More details here.

Everything seemed fine. I ran into a couple of permissions issues that turned out to be related to a misconfiguration of the encryption_key field in Terraform. Then I hit a bigger hurdle: the last step, deploying to S3, would just fail with an extremely unhelpful error: InternalError: Error reference code: blah blah blah (the blah’s are mine). With no other information to go on, I created a post on the AWS forums, and destroyed all Terraform assets a couple of days later when I hadn’t received a response. But then I happily stumbled across a random buildspec.yml and saw that it had an artifacts section listing a bunch of files. So I took a look at the buildspec reference, and sure enough, here’s what it says:

artifacts: Optional sequence. Represents information about where CodeBuild can find the build output and how CodeBuild prepares it for uploading to the Amazon S3 output bucket.

No shit, sherlock. Couldn’t you have told me that the deployment step couldn’t find any artifacts to upload instead of the useless InternalError? Anyway, I added an artifacts section to my buildspec, and the pipeline turned green! It was pushing the generated static files to my final destination bucket.
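For reference, the working buildspec looked something along these lines — treat this as a sketch rather than the file as committed, and note that the `base-directory: public` assumes Hugo’s default output directory:

```yaml
version: 0.2

phases:
  install:
    commands:
      - go get -v github.com/spf13/hugo
  build:
    commands:
      - hugo -v

# Without this section, the Deploy step has nothing to upload.
artifacts:
  files:
    - '**/*'
  base-directory: public
```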

Gotta go fast!

Just one small issue at this point: downloading hugo for each build took its own sweet time, on the order of a couple of minutes. That isn’t a big deal when it comes to getting a blog post out, but being an engineer I prefer faster over slower (with some non-software exceptions, of course). So I did some searching and came across this EXCELLENT post about creating custom docker images, storing them in AWS ECR and using them in your CodeBuild step instead of creating a new one each time (which is essentially what the INSTALL step does: it creates a base image and then modifies it on every run). So I created a custom docker image using a simple Dockerfile that downloads and installs hugo. That took my entire CodeBuild step from a few minutes down to ~16 seconds.

Size is everything

…and don’t let anyone tell you otherwise.

Despite using golang:1.12-alpine as the base image, the resulting docker image was still pretty big because it had all the Hugo source, Go source and build artifacts. Enter Docker multi-stage builds! This feature lets you create one image that produces some build artifacts that can then be sent as input to a second image, and the resulting image is just the SECOND one, with none of the intermediate crap from the first one! Here’s what my Dockerfile looks like now:

# Stage 1: build Hugo from source using the Go toolchain
FROM golang:1.12-alpine as builder

RUN apk add --no-cache git
RUN go get -v github.com/spf13/hugo

# Stage 2: copy only the built binary into a minimal image
FROM alpine:latest
WORKDIR /go/bin
COPY --from=builder /go/bin/hugo .
ENV PATH=$PATH:/go/bin

Pretty self-explanatory, but the tl;dr version is that it copies the built Hugo binary from the first image to /go/bin/hugo and adds /go/bin to the PATH.

Free HTTPS and CDN

Cloudflare is awesome. I discovered a while ago that they offer a free CDN (content delivery network) and free DDOS protection, and configured the DNS on my hostname provider at the time to use Cloudflare’s nameservers.

When I first started this porting effort, I set up a CDN using AWS Cloudfront with HTTPS but quickly got overwhelmed. AWS’s pricing is pretty flexible, but the large number of tables covering in-region and cross-region data transfers made it hard for me to figure out how much the whole shebang would cost, especially given that my website doesn’t really get a lot of traffic as far as I know. It would probably still have been only pennies in the end, but why spend money if you don’t need to?

In addition to the features listed above, Cloudflare also has a nice analytics dashboard, so I wouldn’t need to set up third-party analytics like Google Analytics. Additionally, Terraform has a Cloudflare provider. So instead of making an AWS-only solution, I tore down the Cloudfront resources and decided to use Cloudflare for CDN, DDOS protection and HTTPS everywhere. I didn’t have to do a lot to get it up and running: I just used their API to fetch all my DNS records, imported them into Terraform resources and then manually added zone settings overrides for other miscellaneous settings. The Cloudflare configuration in Terraform isn’t quite as neat and structured as the one for AWS, but it gets the job done.
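For illustration, one imported DNS record in Terraform’s Cloudflare provider looks roughly like this — the domain and record value are placeholders, and the field names follow the provider version available at the time:

```hcl
# Sketch: one Cloudflare DNS record under Terraform management.
resource "cloudflare_record" "root" {
  domain  = "example.com"
  name    = "example.com"
  type    = "CNAME"
  value   = "example-blog-bucket.s3-website-us-east-1.amazonaws.com"
  proxied = true # serve through Cloudflare's CDN with HTTPS
}
```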

Free for all

After putting so much effort into this solution, I wanted to make it available to others in an easy-to-use fashion. ECR doesn’t support public images that can be shared with others at the moment, and I didn’t want users to have to first manually build an image and upload it to ECR before terraforming everything else. Fortunately, CodeBuild does support pulling images from Docker Hub, so I uploaded my build image there under the name hugo-alpine and configured CodeBuild to use that instead of my ECR one.

I moved all of my terraform configuration to a module with configurable variables for secrets and other user-specific values (like Github username, repo, etc). Terraform unfortunately doesn’t really work quite right if modules define their own providers, so the user needs to configure every provider themselves in order to use my module, but I’m mostly happy with the result. The final module is available here: https://github.com/ameyp/terraform-aws-cloudflare-static. I hope it works for you. Issues are welcome, as are pull requests!

A Brand New Git Repo

Apr 28, 2019

I remembered recently that this blog has been sitting around idle for a while. I figured that the continuous deployment solution I’d set up (CodeShip + Amazon S3) might’ve died by now (well, the CodeShip part anyway), but no, it’s still around. Those folks seem to be going strong, much to my surprise and delight.

However, a lot has changed in the past two years. For one, Github now supports unlimited private repositories with a free account. I had originally hosted the code (I use Hugo) on BitBucket because back then they had support for private repos and Github didn’t. However, all of my other repos are on Github, and since Github now supports private repos for free too, I figured that I might as well move this thing over too.

This was slightly complicated by the fact that CodeShip doesn’t allow you to change the backend used for one of their pipelines. You can switch from one BitBucket repo to another, or from one Github repo to another, but not from BitBucket to Github or vice versa. So, I cloned the repo locally, created a new one on Github, added that as a remote, duplicated my pipeline and pushed, and it worked! Of course, I also cleaned up some of the backend mess I had made the first time around. For instance, I had just straight up given CodeShip access to my main AWS account’s Access and Secret keys. Now, I have a custom policy that only grants write access to the one bucket it needs, and CodeShip has the keys to an IAM role with solely that policy attached. Much cleaner (and much, much safer!).
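The scoped-down policy is conceptually simple; a sketch (with a placeholder bucket name standing in for the real one) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::example-blog-bucket",
        "arn:aws:s3:::example-blog-bucket/*"
      ]
    }
  ]
}
```

The key point is that both the action list and the resource list are limited to exactly what the deployment needs, so a leaked key can’t touch anything else in the account.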

A few things changed on the Hugo side too: one function was removed and another is on the deprecation path. I fixed those nits as well.


Install OpenCV 3.1.0 with Python 3.5+ in Ubuntu 14.04

Apr 19, 2016

Getting OpenCV 3 up and running with Python 3.5+ on Ubuntu 14.04 is a pain in the ass. For the most part the instructions here are the same as in this guide, with a few changes needed specifically for Python 3.5+. I’m going to assume that you already have pyenv and pyenv-virtualenv installed for the purpose of this guide.

Install prerequisites

sudo apt-get update
sudo apt-get install -y build-essential cmake git pkg-config
sudo apt-get install -y libjpeg8-dev libtiff4-dev libjasper-dev
sudo apt-get install -y libpng12-dev libavcodec-dev
sudo apt-get install -y libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install -y libgtk2.0-dev libatlas-base-dev gfortran

Install python headers

This is the slightly tricky part. We want OpenCV to work with Python 3.5+, but Ubuntu 14.04 only comes with Python 3.4.x. So, we’ll have to turn to an external repo:

sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install -y python3.5-dev

This will download and install the headers and libraries for the latest release of Python 3.5.x. This number will vary as this post gets older, and I don’t know of a way to check exactly which version is installed. However, for OpenCV to work with virtualenv, we’ll need to know the exact version of python3 headers that are installed. I’ll explain how to do this next.

Fetch opencv and opencv_contrib

The 3.1.0 in the commands below was the latest released version of OpenCV at the time of this post. Check OpenCV News for the latest release number when you run this, and substitute below if it’s different.

cd /tmp
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.1.0

cd ..
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.1.0

Try (and fail) to build OpenCV

We first need to modify the OpenCV build configuration to use python3.5 instead of python3.4.

gedit /tmp/opencv/cmake/OpenCVDetectPython.cmake

In the file that opens, find a line that starts with find_python(3.4 and change it to find_python(3.5. Save the file and quit the editor.
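If you’d rather not make the edit interactively, the same change can be done with sed (assuming the checkout lives at /tmp/opencv as above):

```shell
# Non-interactive equivalent of the gedit edit: swap the Python
# version that cmake probes for, keeping a .bak backup of the original.
sed -i.bak 's/find_python(3\.4/find_python(3.5/' \
  /tmp/opencv/cmake/OpenCVDetectPython.cmake
```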

At this point, OpenCV configuration will fail because we haven’t set up python and numpy yet. The only reason for the following step is to find out the version of python3 headers and libs that was installed.

cd /tmp/opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=/tmp/opencv_contrib/modules ..

cmake will succeed and print a long list of configuration variables. Look for lines that look like this:

--   Python 3:
--     Interpreter:  /home/amey/.pyenv/shims/python3.5 (ver 3.5.1)

The (ver 3.5.1) is the part we’re interested in. That’s the python version we’ll have to use.

Create python environment and install numpy

pyenv virtualenv 3.5.1 cv
pyenv activate cv
pip install numpy

Build OpenCV for real

cd /tmp/opencv/build
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=/tmp/opencv_contrib/modules ..

If everything goes according to plan, the output will change and you’ll see something like:

-- Python 3:
--   Interpreter:    /home/amey/.pyenv/shims/python3.5 (ver 3.5.1)
--   Libraries:      /usr/lib/x86_64-linux-gnu/libpython3.5m.so (ver 3.5.1)
--   numpy:          /home/amey/.pyenv/versions/cv/lib/python3.5/site-packages/numpy/core/include (ver 1.11.0)
--   packages path:  lib/python3.5/site-packages

As you can see, cmake has picked up a lot more info this time around. If you still only see an Interpreter line and nothing else.. well, I’m sorry I guess.

Edit - Try this alternative command, sourced from the guide for anaconda:

cmake -DBUILD_TIFF=ON -DBUILD_opencv_java=OFF -DWITH_CUDA=OFF \
      -DENABLE_AVX=ON -DWITH_OPENGL=ON -DWITH_OPENCL=ON -DWITH_IPP=ON \
      -DWITH_TBB=ON -DWITH_EIGEN=ON -DWITH_V4L=ON -DWITH_VTK=OFF \
      -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DCMAKE_BUILD_TYPE=RELEASE \
      -DCMAKE_INSTALL_PREFIX=$(python3 -c "import sys; print(sys.prefix)") \
      -DPYTHON3_EXECUTABLE=$(which python3) \
      -DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
      -DPYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
      ..

After this, build and install OpenCV:

make -j4
sudo make install

Optional - Create a symbolic link

(Edit - This is optional because if you built and installed OpenCV using the Anaconda command above, the library has been installed in the pyenv directory already.)

All that remains is to create a symlink to the OpenCV library that was (or should’ve been at least) installed to /usr/local/lib.

cd ~/.pyenv/versions/cv/lib/python3.5/site-packages
ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so

If you get an error while running the ln command, check the /usr/local/lib/python3.5/site-packages/ directory. It’s possible that your library might have been built with a different name. If that directory is empty, well, sorry again.


If nothing went wrong, you should now be able to test the OpenCV installation. Activate the virtualenv if it isn’t already active and try the following:

>>> import cv2
>>> cv2.__version__
'3.1.0'

Of course, the '3.1.0' version number is what I installed, yours should reflect the version that you installed.

Setup Ubuntu 14.04 with Cuda and Lasagne

Apr 5, 2016

Installation of prerequisites

    # Install common stuff
    sudo apt-get install build-essential
    sudo apt-get install zsh git

    # Install SciPy dependencies
    sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran

    # Install pyenv
    curl -L https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash

    # Install python 3.5.0
    env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.5.0

    # Install cuda
    wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda

    # Create theanorc
    echo "[global]" > ~/.theanorc
    echo "floatX = float32" >> ~/.theanorc
    echo "device = gpu" >> ~/.theanorc

Create pyenv and install lasagne

Choose whatever name you see fit. I picked “lasagne”.

    pyenv virtualenv 3.5.0 lasagne
    pyenv activate lasagne
    pip3 install --upgrade https://github.com/Theano/Theano/archive/master.zip
    pip3 install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip

Wrap up

Reboot your system so that the installed nvidia drivers will load.


Prior to using python, activate the environment with

    pyenv activate lasagne