Continuous Delivery of an e-book with Softcover and GitLab
In a previous article I explained how to set up GitLab in Docker behind Nginx. In this article I want to show how you can use GitLab for continuous delivery of different kinds of projects.
I am currently working on a book about search engine programming (when and if it will be finished is still an open question). For writing the book I use Softcover, because it allows me to generate HTML, PDF and e-book reader formats from one input source. Instead of creating these exports manually each time and making sure that the computer I use for writing has all dependencies installed, it would be much nicer if the outputs were generated automatically after each change.
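For reference, Softcover exposes a separate build target per output format. Roughly, the calls look like this (the exact targets may vary with the Softcover version, and the MOBI build needs additional Kindle tooling installed):

softcover build:html   # static HTML version
softcover build:pdf    # PDF via LaTeX
softcover build:epub   # EPUB for e-book readers
softcover build:mobi   # MOBI for Kindle readers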
This can be achieved with GitLab’s continuous integration and continuous delivery functionality. Whenever possible I use the Docker executor, because it ensures that each new run starts on a fresh system and does not succeed only because of old packages or packages left over from other jobs.
Since I use Softcover to write my book, I start with the ruby Docker image and then install LaTeX on it. This downloads several hundred megabytes of packages on each run, but since I do not push to my book repository very often at the moment, it is not a problem. If I start to push more often, I might create my own Docker image for Softcover.
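Such an image is not part of this setup yet, but a minimal sketch of what it could look like, based on the packages installed in the before_script below, would be:

FROM ruby:latest

# LaTeX packages Softcover needs for the PDF build
RUN apt-get update -y \
    && apt-get install -y texlive-xetex texlive-lang-german \
    && rm -rf /var/lib/apt/lists/*

# Softcover itself plus the JavaScript runtime it relies on
RUN gem install softcover therubyracer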
The .gitlab-ci.yml basically starts with the installation of the required packages:
image: ruby:latest
before_script:
  - gem install softcover
  - gem install therubyracer
  # In my tests softcover was only able to find xetex, not latex
  - apt-get update -y && apt-get install texlive-xetex texlive-lang-german -y
  - softcover check # Run check to show the installed dependencies
I added all the package installations to before_script, because to me they do not logically belong to the deployment jobs.
After these commands have been executed, we could already build the PDF with softcover build:pdf. However, only building the PDF is of no use in a Docker container that gets deleted after the job has finished. You would generate a PDF and then delete it again together with the whole container. Thus, we have to make sure that the PDF is uploaded somewhere. At first I used Amazon S3 for this, but only a few days later I switched to a self-hosted SFTP server, because I have some bigger deployments and download transfer rates from S3 are very expensive.
To allow our build server to upload files to the SFTP server, we have to specify the private key as a GitLab secret variable. These variables should be used for sensitive data that must not go into the repository. With our SSH private key stored in a secret variable called BUILD_DELIVERY_KEY, we can then re-create the SSH key file during a build.
before_script:
  # [...]
  # Create the credentials file to upload the PDF to the delivery storage
  - mkdir -p ~/.ssh
  - echo -e "$BUILD_DELIVERY_KEY" > ~/.ssh/id_rsa_build_delivery
  - chmod 600 ~/.ssh/id_rsa_build_delivery
I also set up a host entry so that in the following script I can refer to my standard hostnames, but this is only for convenience.
before_script:
  # [...]
  # Allow us to access SFTP server by hostname
  - echo "$SERVER_IP kafka" >> /etc/hosts
Next, we can define the deploy stage that builds the book and uploads it to the SFTP server. I disabled host key checking, because the risk of uploading our e-book to a foreign server due to a man-in-the-middle attack seemed low enough for me. Feel free to add the host key in the before_script phase and then use host key checking during the upload.
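If you prefer to keep host key checking enabled, one way to do it could be an ssh-keyscan call in the before_script phase (a sketch, assuming the kafka host entry from above is already in place):

before_script:
  # [...]
  # Pin the server's host key so StrictHostKeyChecking can stay enabled
  - ssh-keyscan -H kafka >> ~/.ssh/known_hosts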
Building the book is simple if everything was set up correctly during the before_script phase. We just call softcover build:pdf as usual. This generates a PDF which we can then upload via SFTP. I wanted to upload the document to a specific folder. Since there is no SFTP command that creates a folder only if it does not yet exist, I split the upload into two commands. First, we try to create the folder and always return true (because creating a folder returns an error code if it already exists). In the second step we upload the file.
deploy:
  script:
    - softcover build:pdf
    - echo "mkdir books" | sftp -oStrictHostKeyChecking=no -i ~/.ssh/id_rsa_build_delivery -b- build_delivery@kafka || true
    - echo "put ebooks/searchengine.pdf" | sftp -oStrictHostKeyChecking=no -i ~/.ssh/id_rsa_build_delivery -b- build_delivery@kafka:books
  environment:
    name: staging
  only:
    - master
There is no risk of missing a real error during folder creation (e.g. a permission problem) if the folder did not exist already, because put will also fail if the folder does not exist. That is, possible errors from the mkdir command will show up in the put command anyway, so the pipeline will still fail if something goes wrong.
I called the environment staging, because creating a PDF only for myself feels like a staging deployment to me. If the e-book were uploaded to a public server, I would call it production.
The deployment only gets executed for push operations on the master branch. This is a pretty common default for deployment operations.
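For completeness, putting the snippets above together, the whole .gitlab-ci.yml looks roughly like this (with the file names and hostnames used throughout this article):

image: ruby:latest

before_script:
  - gem install softcover
  - gem install therubyracer
  # In my tests softcover was only able to find xetex, not latex
  - apt-get update -y && apt-get install texlive-xetex texlive-lang-german -y
  - softcover check # Run check to show the installed dependencies
  # Create the credentials file to upload the PDF to the delivery storage
  - mkdir -p ~/.ssh
  - echo -e "$BUILD_DELIVERY_KEY" > ~/.ssh/id_rsa_build_delivery
  - chmod 600 ~/.ssh/id_rsa_build_delivery
  # Allow us to access SFTP server by hostname
  - echo "$SERVER_IP kafka" >> /etc/hosts

deploy:
  script:
    - softcover build:pdf
    - echo "mkdir books" | sftp -oStrictHostKeyChecking=no -i ~/.ssh/id_rsa_build_delivery -b- build_delivery@kafka || true
    - echo "put ebooks/searchengine.pdf" | sftp -oStrictHostKeyChecking=no -i ~/.ssh/id_rsa_build_delivery -b- build_delivery@kafka:books
  environment:
    name: staging
  only:
    - master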