Servers as Pets to Servers as Cattle Part 2

June 28, 2018

More cattle!

As you can tell by the title of this post, there was a part one. At the risk of disappointing our devoted bloggies (that’s groupies for a blog), I am going to tell you that we did not see the AWS Elastic Beanstalk migration through. Instead, we moved our infrastructure to Kubernetes.


Part one talked about all the hard work we had to do in “Dockerizing” the Rails app. Well, we ended up having to do a lot more of that. Many of the packages and frameworks out in the world today make the underlying assumption that the machine they start a job or request on is going to be around for a while. Needless to say, we went deep into the bowels of our apps to remove those assumptions. Once that was complete, however, we had as close to a stateless app as we could get until we later decompose it further into microservices.

The jump from Docker on Elastic Beanstalk to Kubernetes was actually quite smooth (relative to the initial trouble of Dockerization). Save for the initial headache of getting Kubernetes up and running (as of this writing, running EKS is still not an option for us), the only real work was figuring out what our release pipeline would look like and how we would organize our config as code. The rest of this article gives some insight into ways we have found success running a complicated Rails app in production on Kubernetes.

Code Organization

At the code level, we made the choice to keep a kubernetes/ folder in the root of the Rails app. In this folder we keep our Kubernetes manifests, with most of the scaffolding expressed as code. Values that change per deployed environment (such as RAILS_ENV) are left as environment variables. We then created a deployment script that handles the following: web-serving pods, background-job-processing pods, one-off task pods (like DB migrations), and cron jobs that are now scheduled as separate pods.
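Our actual script isn’t public, but a minimal sketch of the idea looks something like this. The manifest filenames, the `__PLACEHOLDER__` tokens, and the resource names (`db-migrate`, `web`) are all hypothetical, and the default `IMAGE_TAG` is ours to illustrate, not something from the post:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Per-environment values come in as environment variables,
# never baked into the manifests themselves.
RAILS_ENV="${RAILS_ENV:-production}"
IMAGE_TAG="${IMAGE_TAG:-latest}"

# Render a manifest template by substituting the placeholders.
render() {
  sed -e "s|__RAILS_ENV__|${RAILS_ENV}|g" \
      -e "s|__IMAGE_TAG__|${IMAGE_TAG}|g" "$1"
}

deploy() {
  # 1. One-off task: run DB migrations as a Job and wait for completion
  #    before rolling any long-lived pods.
  render kubernetes/migrate-job.yaml | kubectl apply -f -
  kubectl wait --for=condition=complete job/db-migrate --timeout=300s

  # 2. Long-running workloads: web pods, background-job pods, cron jobs.
  for m in web-deployment.yaml worker-deployment.yaml cronjobs.yaml; do
    render "kubernetes/${m}" | kubectl apply -f -
  done
  kubectl rollout status deployment/web
}

# Only talk to the cluster when explicitly asked; this keeps the
# render step inspectable without a live cluster.
if [[ "${1:-}" == "--apply" ]]; then
  deploy
fi
```

Running the migration Job first, and gating the rollout on `kubectl wait`, is what lets the web and worker deployments assume the schema they need is already in place.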

There was more work that needed to be done, such as decoupling our job-processing workloads from our web-serving workloads, but suffice it to say that it isn’t as interesting as the deployment manifests for the purposes of this article.

Cluster Organization

Obviously the most important part of running software on Kubernetes is Kubernetes itself. We manage our clusters using Terraform and Kops. All changes are checked into Git and reviewed by another party. In doing this, our infrastructure becomes source code that any developer in our organization can manage. Our infrastructure migrations are zero-downtime, and we have been able to scale elastically with our load as a result. There are tons of really great guides to setting up clusters a Google search away, so we won’t attempt to reproduce that information here. Suffice it to say: if you can, use a managed service that a public cloud already offers. There are so many buttons to push and levers to pull that if you don’t already know what you’re doing, you can get yourself into trouble.
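To make the reviewed-change flow concrete, here is a rough sketch. The `terraform` and `kops` invocations mirror those tools’ standard CLIs; the `RUN` dry-run indirection is our own hypothetical convention, defaulting to printing each command rather than executing it:

```shell
#!/usr/bin/env bash
set -euo pipefail

# RUN="echo" (the default) prints each command instead of executing it,
# so this sketch can be read and dry-run without touching a real cluster.
RUN="${RUN:-echo}"

cluster_change() {
  # 1. Propose: the plan output gets attached to the pull request,
  #    so the reviewer sees exactly what will change.
  $RUN terraform plan -out=tfplan

  # 2. Apply the saved plan, only after a second party has approved it.
  $RUN terraform apply tfplan

  # 3. Push the cluster spec out and replace nodes gradually;
  #    the rolling update is what keeps migrations zero-downtime.
  $RUN kops update cluster --yes
  $RUN kops rolling-update cluster --yes
}

cluster_change
```

Applying a saved plan file (rather than re-planning at apply time) guarantees that what was reviewed is exactly what gets executed.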

Final Thoughts

We had great triumph and anguish in getting our clusters going on AWS, and we recommend getting involved with the Kubernetes community, which can help with issues that arise. We are now members of the Kubernetes org and help contribute back to the open-source community. It was one of the best decisions we made.


Servers as Pets to Servers as Cattle Part 2 was originally published in Ygrene Tech on Medium.