Infrastructure from scratch. Part 8. Continuous Integration.


The more we grow, the more routine work we have. For example, the server park gains more and more identical instances. The same applies to the build/release/deploy lifecycle. The more we deliver, the more the business wants: a typical development team now needs to ship changed software several times a day. That is toil.

Why is it important

At first you go through many steps of a similar workflow by hand. Later you become curious about what Continuous Integration, Continuous Delivery, and Continuous Deployment are. Some companies even have a dedicated release engineer, who applies CI/CD practices to shorten the commit-to-deploy cycle.

Continuous Integration is not a technology

DevOps is not a technology (and it's not a job title either!). Microservices are not a technology. Continuous Integration is not a technology, and neither are Delivery and Deployment. Having Jenkins doesn't mean you have Continuous Integration.

CI (to keep it short) says that each code change should be integrated into the product immediately. To put it more clearly, the core of the CI concept is treating the build process as a conveyor belt. With that in place, we expect software development to become much more productive.

Check what you have

To get started with Continuous Integration, investigate your current build workflow. Your team should already have some engine that produces executable software. A build is, generally, the process of generating executable files from source code. The sources go through a few steps, which means you already have a pipeline. The next goal is to improve that pipeline: write down what you do manually, think about automation, and automate all of it. Finally, move the pipeline execution into the build console!

For example, we've got a Node.js project. Every time, we fetch dependencies with npm, build the app with gulp, and test it with Mocha. An Ops engineer may not be familiar with Node.js pitfalls, so it's better to let the developers decide what to do here. In the end, we sketched the pipeline workflow on paper and then turned the picture into reality.
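As a sketch, those three manual steps map naturally onto Jenkins pipeline stages. The gulp task name and the test directory below are assumptions, not our real project layout:

```groovy
// Minimal sketch: npm / gulp / Mocha as pipeline stages.
// 'build' and 'test/' are placeholder names -- adjust to your project.
pipeline {
    agent any
    stages {
        stage('Dependencies') {
            steps {
                sh 'npm install'                         // fetch dependencies with npm
            }
        }
        stage('Build') {
            steps {
                sh './node_modules/.bin/gulp build'      // build the app with gulp
            }
        }
        stage('Test') {
            steps {
                sh './node_modules/.bin/mocha test/'     // run the test suite with Mocha
            }
        }
    }
}
```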

Pick the system

To visualize CI and CD and get a "big picture" of the development process, we should pick a build console. You may also call it a "build and release system".

If you're curious what kinds of systems exist, there are great comparisons to investigate. In practice, teams usually pick either Jenkins or Travis CI, depending on whether the project is open source or closed. Generally, there is a huge choice of systems. Most of them have a container concept, so go ahead if you use Docker. Most of them are paid solutions as well.

So we picked Jenkins, because we're familiar with it. If you're curious about something fresher, try Concourse CI.

Make pipelines

We've got Jenkins 2.7, and instead of classic jobs I started to build pipelines. A pipeline is a concept where you can clearly track each build step and find the sore point. Regular freestyle jobs hurt Jenkins' reputation: a lot of people now think Jenkins is a black box where you can't tell what's going on. I recently found a great read on why we need pipelines; have a look at it too.

Pipelines are designed in the "Infrastructure as Code" style. They are written in a special DSL based on the Groovy programming language. That way you're growing the infrastructure code base and improving its agility at the same time. There is a big difference between maintaining a classic job and a pipeline: with a classic job you click through a long GUI path to modify the desired step; with a pipeline you just change a line of code and move on. What else is there to say, when even the getting-started guide on the Jenkins website is based on pipelines?
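For illustration, here is a minimal declarative Jenkinsfile in that Groovy-based DSL. It lives in the repository next to the sources; the stage names and the build command are placeholders, not our real pipeline:

```groovy
// Pipeline as code: versioned together with the application sources.
// './build.sh' is a placeholder for your real build command.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm          // fetch the branch that triggered the build
            }
        }
        stage('Build') {
            steps {
                sh './build.sh'       // changing this step is a one-line code edit
            }
        }
    }
    post {
        failure {
            echo 'Build failed -- check the stage view to find the sore point.'
        }
    }
}
```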

While making the pipeline, I regularly checked each step, verifying that it works as part of the whole system. It's an easy and reliable way to test a pipeline. To automate the manual steps I wrote Python scripts; each of them serves a specific goal in the pipeline. This is a sort of release engineering as well.

What’s done

Eventually we’ve got 3 pipelines:

CI for the main app (Node.js)


  1. The named Git branch is fetched from the self-hosted repository on GitLab.
  2. Starting from the client side (Angular.js), npm and bower install the dependencies.
  3. The client side is built by gulp.
  4. Custom tests run against the client side.
  5. Switch to the server side (Node.js); npm installs all required dependencies.
  6. The build runs with gulp.
  7. The application goes through multiple tests with Mocha.
  8. The build splits into 2 parallel workflows (a Jenkins pipelines feature):
  • package workflow: the application is zipped by gulp and uploaded to a specific S3 bucket that plays the role of a repository;
  • infrastructure workflow: first Terraform applies the missing resources (instances/DNS/load balancers and so on), then we clean up the SSH configuration. All this prepares the primary Ansible provisioning (setting the hostname, the Zabbix agent, and ELK monitoring).
  9. The final ZIP package is deployed to the servers through AWS CodeDeploy.
  10. The service starts. The new application should now run in the QA environment without any outages.
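The steps above can be sketched as a Jenkinsfile. This is a hedged illustration, not our production pipeline: the gulp task names, bucket, playbook, and CodeDeploy identifiers are made up, and declarative `parallel` stages need a newer pipeline plugin than plain Jenkins 2.7 shipped with:

```groovy
// Sketch of the main-app pipeline. All names below are hypothetical.
pipeline {
    agent any
    stages {
        stage('Client side') {                    // steps 2-4
            steps {
                sh 'npm install && bower install'
                sh './node_modules/.bin/gulp build-client'
            }
        }
        stage('Server side') {                    // steps 5-7
            steps {
                sh 'npm install'
                sh './node_modules/.bin/gulp build'
                sh './node_modules/.bin/mocha test/'
            }
        }
        stage('Package and infrastructure') {     // step 8: two parallel workflows
            parallel {
                stage('Package') {
                    steps {
                        sh './node_modules/.bin/gulp zip'
                        sh 'aws s3 cp app.zip s3://example-release-bucket/'
                    }
                }
                stage('Infrastructure') {
                    steps {
                        sh 'terraform apply'                  // create missing resources
                        sh 'ansible-playbook provision.yml'   // hostname, Zabbix, ELK
                    }
                }
            }
        }
        stage('Deploy') {                         // steps 9-10 via AWS CodeDeploy
            steps {
                sh 'aws deploy create-deployment --application-name main-app ' +
                   '--deployment-group-name qa ' +
                   '--s3-location bucket=example-release-bucket,key=app.zip,bundleType=zip'
            }
        }
    }
}
```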

CI for the subapp (Java)


  1. The named Git branch is fetched from the self-hosted repository on GitLab.
  2. One-command build and testing with Maven.
  3. If everything is good, the build splits into 2 parallel workflows, package and infrastructure, with the same steps as in the main app.
  4. The final ZIP package is deployed to the servers through AWS CodeDeploy.
  5. The service starts. The new application should now work consistently without any outages.
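The Java pipeline is simpler up front because Maven covers compilation, testing, and packaging in one command. A minimal sketch of that first half (the parallel and deploy stages mirror the main-app pipeline):

```groovy
// The subapp build: one Maven command compiles, runs the tests, and
// produces the artifact. A sketch, not the exact production Jenkinsfile.
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                sh 'mvn clean package'   // fails the build if any test fails
            }
        }
    }
}
```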


Lessons learned

  1. Follow the rule "new feature, new release". The more often you release, the less time you spend troubleshooting: you can easily track which change broke the product and fix it quickly. If a release contains 15 changes, it's not easy to guess what exactly broke. You look through plenty of git diffs, think about what changed, and need some luck to pick the right spot. That's not a noble job; let's save the luck for other purposes.
  2. Don't change a release! The artifact should be complete and inseparable. If you notice a bug in your code, start a new lifecycle iteration and don't touch the current one. Don't trust me, trust the twelve-factor app concept.
  3. Don't go on after a failed pipeline step. If something failed, it failed. Look at the error, figure out what's wrong, and start from the first step. With a properly working pipeline, each step runs again quickly. That's more reasonable than skipping the error to save some time, only to get stuck and face the mistake anyway.

I've seen situations where a failure was skipped, the build was marked as successful, and production went down. All because of this one stupid mistake: skipping the failure.
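This is exactly where pipelines help: a shell step that exits non-zero aborts the whole build, so a broken artifact never reaches the deploy stage. A small sketch, with the anti-pattern shown as a comment:

```groovy
// A failing step stops the pipeline; nothing after it runs.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'        // non-zero exit status fails the whole build
                // Anti-pattern: 'mvn test || true' swallows the failure and lets
                // a broken build be marked as succeeded -- the mistake described above.
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'     // reached only when every previous step passed
            }
        }
    }
}
```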

Tags: Continuous Integration, Jenkins, gulp, npm, maven, Terraform, Ansible
