A Comparison of Continuous Integration Configuration Files
Modern CI/CD solutions usually work with a file in your code repository that defines the steps to be executed. The CI/CD solution reads this file and then executes the appropriate scripts to build, test or deploy your application. Since each tool defines its own configuration format, let’s compare them to see the similarities and differences.
Travis CI
I will start with Travis CI, because it’s the first one I have used and one that many small open source projects on GitHub use. For Travis CI you have to put a file named .travis.yml into the root directory of your GitHub repository. When you connect Travis to your repository, it will monitor the repository for changes and execute the build or test.
Travis’ configuration file is a YAML file. The first entry you have to set is the language of your program. This selection influences the behaviour of some of the other options in the file; for example, Travis will assume default commands based on the selected language.
The Travis configuration file is built around the Travis Job Lifecycle. The job lifecycle defines an install phase, a script phase and optionally a deploy phase (plus additional phases before and after these steps). Depending on the language and tools you use, you might not have to define anything apart from the language of the program.
For example, if you choose language: rust, Travis will define default values for both the install phase (cargo build --verbose) and the script phase (cargo build --verbose; cargo test --verbose). If you build your Rust project with these commands, you can use a minimal Travis configuration file of:
language: rust
Multiple different versions or variants in Travis are tested with so-called test matrices. Test matrices can either be defined explicitly in a YAML map named matrix or implicitly when you set multiple values for options like python (the Python version) in a Python build.
Example
language: python
matrix:
  include:
    - python: 3.6
      env:
        - TOXENV=py36
install: pip install tox
script: tox
Gitlab CI
The self-hosted Git solution Gitlab also comes with a CI/CD tool. It is also configured with a YAML configuration file, called .gitlab-ci.yml. The Gitlab CI configuration has a lot of options; we will only look at the most important ones here to understand the general workflow with Gitlab CI.
Gitlab’s continuous integration is based on jobs. A job is basically one thing you want to do, for example build your program, run your unit tests on a specific Linux distribution, and so on. When you create a .gitlab-ci.yml you will add a collection of jobs to it.
Each job then has a key called script which defines the command(s) the job should execute. In all cases I have seen, these commands are plain bash commands, but I can imagine that it is theoretically possible to have them interpreted by something other than a shell.
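For example, a minimal .gitlab-ci.yml with a single job could look like this (a sketch; the job name and commands are made up):

build:
  script:
    - make
    - make test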
The most common use case of Gitlab CI is to have the jobs execute within Docker containers. With the image option in your job configuration you can define which Docker image should be used to start the container. You can also start additional containers with the services option, e.g. for databases.
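A sketch of a job that runs inside a Python image and gets a PostgreSQL container as an additional service (the job name, service version and test script are made up):

test-job:
  image: python:3.6-stretch
  services:
    # extra container, reachable from the job under the hostname postgres
    - postgres:11
  script:
    - ./run-tests.sh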
It is possible to define conditions for the job execution. E.g. oftentimes you want to execute a job only for the master branch or only for tags (like building and releasing versions). It’s also possible to define a job that runs only at scheduled times, but the schedules have to be defined in the Gitlab GUI.
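A sketch of a job limited to the master branch and to tags with the only option (the job name and script are made up):

release:
  script:
    - ./release.sh
  only:
    - master
    - tags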
The execution order of multiple jobs is defined with stages. You can assign each job to a stage, and jobs within the same stage are executed in parallel. The order of the stages defines the order of execution: first, all jobs from the first stage are executed in parallel, then all jobs from the second stage, and so on. Common stages are test, build and deploy.
artifacts and dependencies are two job options that are used to provide artifacts from one pipeline job to the next. E.g. your build-linux job could store the binary in the artifact storage and the release job can then upload it to a public FTP server.
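Putting stages, artifacts and dependencies together, a pipeline like the one just described could look roughly like this (the binary name and upload script are made up):

stages:
  - build
  - release

build-linux:
  stage: build
  script:
    - make
  artifacts:
    # store the binary so later jobs can pick it up
    paths:
      - mybinary

release:
  stage: release
  # fetch the artifacts of build-linux before running the script
  dependencies:
    - build-linux
  script:
    - ./upload-to-ftp.sh mybinary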
Example
stages:
  - test

test36:
  stage: test
  image: python:3.6-stretch
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
    - pip install tox
  script:
    - TOXENV=py36 tox
Jenkins pipeline
Jenkins started as a build platform on which you define your workflows in a graphical user interface, independent from your code repository, but more recently it also received a workflow to perform builds with a Jenkinsfile in your repository.
Jenkins is built around a plugin system. You usually have a plugin to execute shell commands on Linux or batch commands on Windows, but also more advanced plugins for specific compilers or for communication with cloud services.
A Jenkinsfile can be written in two different flavours, either in declarative or in scripted style. Both styles build upon a list of stages that get executed (examples for stages are build or deploy), and each stage consists of a list of steps that will be executed to complete this stage. Each step is a call to one of the plugins; this can be a plain shell command, but also something more specialized. With the git plugin you’d write git url: 'git://example.com/project.git', branch: 'master' instead of git clone git://example.com/project.git && git checkout master.
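To give an impression of the difference between the flavours, a minimal pipeline in the scripted style could look like this (a sketch, roughly equivalent to the declarative example at the end of this section):

node {
    stage('Test') {
        sh 'tox'
    }
}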
It’s possible to execute each stage with a different agent. Agents are the executors of the commands and could for example be Linux or Windows servers (you would probably want to test a Windows release on a Windows machine).
As with Gitlab, there is a when directive to limit the execution of a stage to certain situations, and there is a post directive for scripts that should execute after build success, after failure, or always; a small sketch of both follows the example below.
Example
Since I do not use Jenkins myself, this example does not make use of any specific plugins:
pipeline {
    /* agent can be defined either globally for all stages
     * or for each stage individually
     */
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'tox'
            }
        }
    }
}
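To also illustrate the when and post directives from above, here is a sketch in the declarative style; the branch condition and the deploy script are made up:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            // only execute this stage on the master branch
            when {
                branch 'master'
            }
            steps {
                sh './deploy.sh' // hypothetical deploy script
            }
        }
    }
    post {
        failure {
            echo 'Build failed'
        }
        always {
            echo 'Pipeline finished'
        }
    }
}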
CircleCI
CircleCI’s configuration file is a mixture between Gitlab’s and Jenkins’. It’s a YAML file, and the central components are workflows and jobs. Workflows are optional if you have a job named build; otherwise, you have to define a workflow.
Similar to Gitlab, a job in CircleCI consists of multiple steps that get executed in order. Most of the time people will probably use the run step to execute a command, but there are also other step types available like checkout or store_artifacts. The when conditional in CircleCI is also implemented as a step that takes a condition as its argument and a list of steps that should be executed if the condition is met.
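As a sketch, a when step whose condition is a pipeline parameter could look like this; note that this requires configuration version 2.1 (unlike the version 2 example below), and the parameter name is made up:

version: 2.1
parameters:
  run-tests:
    type: boolean
    default: true
jobs:
  build:
    docker:
      - image: python:3.6-stretch
    steps:
      - checkout
      - when:
          condition: << pipeline.parameters.run-tests >>
          steps:
            - run: tox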
Similar to Jenkins, CircleCI allows you to create reusable components that can be used as steps, called Orbs. There are official Orbs and third-party Orbs. They are used in the configuration file just like any other step, only with the name of the orb as the step name.
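For example, using a command from the official Python orb could look roughly like this; the orb version is illustrative, and the available orbs and their commands are listed in CircleCI’s Orb Registry:

version: 2.1
orbs:
  python: circleci/python@2.1
jobs:
  test:
    executor: python/default
    steps:
      - checkout
      # install-packages is a step provided by the orb
      - python/install-packages:
          pkg-manager: pip
      - run: tox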
Each job is run inside a specific executor like docker or machine (a VM).
The order of job execution is defined by workflows. A workflow contains a list of jobs that should be executed, with optional requirement definitions between jobs. Jobs that require another job to run first will be delayed until that job has finished execution. Workflows can be triggered by a push to the repository or based on a cron schedule. The cron schedule is written directly into the CircleCI configuration file (unlike with Gitlab).
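A sketch of a workflow with a requirement between two jobs, plus a second, cron-triggered workflow (the job names and the schedule are made up):

workflows:
  version: 2
  build-and-release:
    jobs:
      - build
      # release only starts after build has finished successfully
      - release:
          requires:
            - build
  nightly:
    triggers:
      - schedule:
          cron: "0 2 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - build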
Example
Since I do not use CircleCI myself, this is a simplified and adjusted example from the documentation:
version: 2
jobs:
  test:
    docker:
      - image: python:3.6-stretch
    steps:
      - checkout
      - run: |
          pip install tox
          tox
workflows:
  version: 2
  test:
    jobs:
      - test