GitHub deployment pipelines and zero-downtime deployment
This week I will show you every step that happens after a pull request is merged into our master branch. We use an automated deployment pipeline for releasing our code into production.
Deployment Pipelines
A deployment pipeline lays out the whole process your code needs to go through from your repository to production. It breaks the release into several stages (e.g. build, test and deploy) and the steps associated with each one. With a defined pipeline it is always clear which step has to happen next. Martin Fowler describes it really well in his blog post.
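To make the idea concrete, here is a minimal sketch of such a pipeline in Python; the stage names and commands are placeholders rather than an actual configuration:

    # Minimal sketch of a deployment pipeline: stages run in order and the
    # pipeline stops at the first failure. The commands are placeholders.
    import subprocess
    import sys

    PIPELINE = [
        ("build", ["bundle", "install"]),
        ("test", ["bundle", "exec", "rake"]),
        ("deploy", ["./deploy.sh", "production"]),
    ]

    def run_pipeline():
        for stage, command in PIPELINE:
            print("Running stage: %s" % stage)
            if subprocess.call(command) != 0:
                sys.exit("Stage '%s' failed, stopping the pipeline" % stage)

    if __name__ == "__main__":
        run_pipeline()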
If you want to dig deeper into Deployment Pipelines I highly recommend Jez Humble and David Farley’s book: Continuous Delivery.
Configure deployments per branch
To automate deployment to different environments we have found that it works best to define actions per git branch. If you always push the latest commit of your production branch to your production environment, it’s very easy to determine what is currently deployed just by looking at that branch. Git and other source code management systems only permit one commit at the tip of a branch, so there can be no confusion.
At Codeship we deploy our master branch automatically to production. Many of our customers deploy the master branch to a staging environment and a production branch to their production environment. A simple git merge and git push, or a GitHub pull request, is their way of releasing their changes.
One problem with this approach is that branch names have to be meaningful. Having a development branch that is deployed to staging and a master branch that is deployed to production can confuse new team members. Naming the deployed branches “staging” and “production” is more intuitive. “Master” is a git convention and worth keeping, but dedicated branch names are easier to understand in a deployment pipeline.
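As a rough illustration, a branch-to-environment mapping can be as simple as the following sketch; the branch names and the deploy script are made up for this example:

    # Sketch of deploying based on the current git branch; the branch names
    # and the deploy script are illustrative, not a real Codeship setup.
    import subprocess

    BRANCH_TO_ENVIRONMENT = {
        "staging": "staging",
        "production": "production",
    }

    def current_branch():
        output = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"])
        return output.decode("utf-8").strip()

    def deploy_current_branch():
        branch = current_branch()
        environment = BRANCH_TO_ENVIRONMENT.get(branch)
        if environment is None:
            print("Branch %s is not deployed automatically" % branch)
            return
        # Placeholder for the real deployment, e.g. a git push to Heroku.
        subprocess.check_call(["./deploy.sh", environment])

    if __name__ == "__main__":
        deploy_current_branch()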
Deployment Strategy
As soon as the feature branch is merged into our master, a new build is started on the Codeship. We run the same test commands again as we did on the feature branch to make sure there are no problems in the merged version.
When all tests pass for the master branch the deployment starts. Before pushing to production we want to make sure that all database migrations work and that the app starts successfully.
First we deploy to staging and then run our current set of migrations. Because we copy our production database to staging once a day, the staging database is very close to production when those migrations run. This lets us verify that the migrations work before we deploy to production.
The last step in our staging deployment is calling the URL of our staging site to make sure it started successfully. This has saved us twice over the last few years, when we would otherwise have pushed a change to our Unicorn configuration that broke the server. Wget and its retry capabilities make sure the website is up and running.
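In Python the check looks roughly like this sketch, which does what wget's retry options give us; the staging URL, retry count and delay are placeholders:

    # Sketch of the post-deploy check: poll the staging URL until it responds
    # with 200 or we give up, similar to what wget's retry options provide.
    # The URL, retry count and delay are placeholders.
    import sys
    import time
    import urllib.error
    import urllib.request

    STAGING_URL = "https://staging.example.com/"

    def wait_for_site(url, retries=10, delay=15):
        for attempt in range(1, retries + 1):
            try:
                response = urllib.request.urlopen(url, timeout=10)
                if response.getcode() == 200:
                    print("Site is up after %d attempt(s)" % attempt)
                    return True
            except (urllib.error.URLError, OSError):
                pass
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        if not wait_for_site(STAGING_URL):
            sys.exit("Staging site did not come up, aborting the deployment")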
An enhancement would be to have tests that run against the deployed version, but so far we haven’t had any problems without these tests. Our extensive Cucumber/Capybara test suite has caught all problems so far.
There is one slight difference between our staging and production deployment though:
Zero Downtime Deployment
As we want to deploy several times a day without any downtime we use Heroku’s preboot feature. We started using it at the beginning of this year. Whenever we push a new release, it starts this release on a second server and switches the routing to it after about 3 minutes.
The downside is that zero-downtime deployments require more care with database changes. As two versions of your codebase need to be able to work at the same time, you can’t simply remove or rename fields.
Renaming or deleting a column or table needs to be spread out over several deployments. This way we make sure that the application still works with every incremental change. We will go into more detail on database migrations for zero downtime deployments in a later blog post.
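As a rough sketch of that pattern (the table and column names are made up and the exact steps depend on your application), a column rename could be phased like this:

    # Sketch of renaming a column without downtime, spread over several
    # deployments. Table and column names are made up for illustration.
    RENAME_PHASES = [
        ("deploy 1: add the new column, application writes to both",
         "ALTER TABLE users ADD COLUMN full_name VARCHAR(255)"),
        ("deploy 2: backfill old rows, application reads from the new column",
         "UPDATE users SET full_name = name WHERE full_name IS NULL"),
        ("deploy 3: once nothing references the old column, drop it",
         "ALTER TABLE users DROP COLUMN name"),
    ]

    if __name__ == "__main__":
        for description, statement in RENAME_PHASES:
            # In practice each statement ships as its own migration in its
            # own release; here we only print the plan.
            print("%s\n    %s\n" % (description, statement))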
In the meantime, you can take a look at the blog posts in our “Further Info” section that explain Zero Downtime Deployments by Etsy, Braintree and BalancedPayments.
Conclusions
It is important to automate every step of the deployment, whether you deploy your code on every merge to master or trigger the release manually by merging master into another branch.
Now that we’ve followed a change from feature work through code review to production in our web application, we will take a closer look at our test infrastructure next time.
In the next blog posts I will delve into immutable infrastructure and how we rebuild our test server infrastructure several times a week.
Let us know what your strategies and lessons learned for deployment are in the comments.
Ship long and prosper
About the Author
Florian Motlik: At Codeship I am responsible for the general tech vision and making sure that all of our users are happy and keep their build green. I’ve always been interested in helping people build great software, great products and just in general make something happen.
About the Codeship
The Codeship provides Continuous Integration and Deployment with a simple hosted system. We test every change you make in your application and, if everything works, we deploy to your staging or production environment. We integrate closely with all the major Platform as a Service providers like Heroku and Engine Yard. We support GitHub, Bitbucket, Amazon Web Services, Digital Ocean and many more. Go check us out!