Using Drone CI/CD to build and test your images, and deploy to Kubernetes with Helm


A few days ago I got access to the Github Actions beta, and since I had to set up CI/CD for the app I'm working on I decided to try it out. I really like the service; it's nice to have both source code and CI/CD in the same place, and I think it has potential. Plus, it was very easy to set up a pipeline, because the runner is a virtual machine (2 cores, 8 GB of RAM) with Docker and many useful tools preinstalled, including kubectl and helm for Kubernetes. Unfortunately, while testing Github Actions I found a couple of annoying issues. First, there's no caching at all when you build a Docker image, meaning that each build rebuilds the image from scratch; so each build can take a long time (depending on the image) even when you only push small changes to your code. Second, while the runners themselves aren't too bad, it very often takes ages for Github Actions to start a runner and actually initiate the build. Like, a very long time. A few times I've even seen timeouts :( It's a new service, still in beta, and clearly not ready for production use in my opinion.
So I was looking for an alternative, and since I'd prefer having dedicated resources for CI/CD I thought I'd better use something I can self host. One obvious option was Jenkins, which I used a lot until a few years ago. But I've never liked it, and even though I've read it has improved and can be used with Kubernetes, I wasn't very keen to try it now. There's also GitLab, which I know is awesome, but it's also very heavy. I've often heard good things about Drone, so I decided to try that next. I must say I really like it! It's simple and very lightweight, but still very powerful, and it's perfect for pipelines based on containers. It did take me a couple of days to figure out everything I needed, though, because a lot of the documentation and other resources out there are unfortunately quite outdated and confusing. Hence this post. By the end of this post, you'll know how to:
- install Drone in your Kubernetes cluster with Helm and link it to your Github account (Drone supports several source code hosting services)
- write a full pipeline to build, publish, test and deploy an app/image to your Kubernetes cluster (staging environment) also using Helm
- leverage caching of layers to speed up building images dramatically
- use a mini K3s cluster to test Kubernetes-related features in your app
- promote a build to production with a single, handy command
Installing Drone with Helm
Before installing Drone, head to your Github account settings and, under Developer Settings > OAuth Apps, create a new app with https://<drone hostname>/login as the authorization callback URL. Once the app is created, you'll see the Client ID and Client Secret that you'll need to link Drone with Github. This integration is required so that Github can notify Drone when new code is pushed to the repositories you have "activated" in Drone, as we'll see in a minute.
Next, let's install Drone. Of course I'm assuming you already have a Kubernetes cluster. If you don't have Tiller (Helm's server-side component) installed yet, I recommend you install it in its own namespace - this makes it easier, for example, to back up and restore Helm apps with Velero, among other things. To install Tiller (assuming you already have the Helm CLI installed on your machine):
kubectl create ns tiller
kubectl -n tiller create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=tiller:tiller
helm init --tiller-namespace tiller --history-max 200 --service-account tiller --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
kubectl -n tiller rollout status deploy/tiller-deploy
There are some security concerns regarding Tiller that you may want to look into. The above commands will install Tiller in the tiller namespace.
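If you want to make sure the Helm client can actually talk to this Tiller instance before going further, a quick sanity check (assuming Helm 2.x, since we're using Tiller) is:
# Optional: verify that the Helm client can reach Tiller in its custom namespace
helm version --tiller-namespace tiller
helm ls --tiller-namespace tiller
Both commands should complete without connection errors.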
You can then install Drone using the official chart as follows:
helm install \
  --tiller-namespace tiller \
  --name drone \
  --namespace drone \
  --set 'service.type=ClusterIP' \
  --set 'server.host=<drone hostname>' \
  --set ingress.enabled=true \
  --set ingress.annotations."certmanager\.k8s\.io/cluster-issuer"=letsencrypt-prod \
  --set ingress.annotations."certmanager\.k8s\.io/acme-challenge-type"=dns01 \
  --set ingress.annotations."certmanager\.k8s\.io/acme-dns01-provider"=cloudflare \
  --set "ingress.hosts[0]=<drone hostname>" \
  --set "ingress.tls[0].secretName=drone-tls" \
  --set "ingress.tls[0].hosts[0]=<drone hostname>" \
  --set sourceControl.provider=github \
  --set sourceControl.github.clientID=<github client ID> \
  --set sourceControl.github.clientSecretKey=clientSecret \
  --set sourceControl.github.clientSecretValue=<github client secret> \
  --set server.kubernetes.enabled=true \
  --set server.adminUser=<your github admin user> \
  --set persistence.enabled=true \
  stable/drone
The command above assumes you are using Github (you can use any of the supported services, check the docs) and that you want to expose Drone with an ingress, using a certificate managed by cert-manager - I use DNS01 verification with Cloudflare, so you may have to adapt these settings to your case. You need to expose Drone to the Internet somehow, because Github must be able to notify it via webhooks whenever new code is pushed to the repository. You are free to use a LoadBalancer service instead of an ingress if that's your preference. You can see all the configuration options for the Drone chart here.
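If you go the LoadBalancer route instead, a sketch of the change (using the same chart values as in the command above) would be to drop the ingress-related flags and set:
# Expose Drone directly via a LoadBalancer service instead of an ingress
--set 'service.type=LoadBalancer' \
--set ingress.enabled=false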
Once Drone is installed, open the URL in your browser, login with Github and activate the repository for which you want Drone to handle CI/CD. In the repository settings in Drone you can see that you can add secrets; we’ll need these secrets to access the Docker registry and to deploy the app to Kubernetes.
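You can add these secrets from the repository settings page in the Drone web UI, or with the Drone CLI (which we'll install later in this post). As a sketch - assuming your repository is <user>/<repository> and using the same secret names referenced later in the pipeline:
# Point the CLI at your Drone instance; the token is shown in your user settings in the Drone UI
export DRONE_SERVER=https://<drone hostname>
export DRONE_TOKEN=<your personal token>

# Credentials for the Docker registry, used by the build step later on
drone secret add --repository <user>/<repository> --name github_registry_username --data <registry username>
drone secret add --repository <user>/<repository> --name github_registry_password --data <registry password or token>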
Configuring the main pipeline
Pipelines in Drone are very simple if you just use containers for each step. By default Drone expects these pipelines to be defined in a .drone.yml file in the root of the repository. Let’s see how we can define a pipeline to build, test and deploy our app to a staging environment first. I’ll paste snippets of my .drone.yml file and explain what each bit does.
First, we want to instruct Drone that this is going to be a Docker based pipeline (I haven't explored other options since I'm unlikely to need them anyway):
---
kind: pipeline
type: docker
name: staging
trigger:
  branch:
  - master
  event:
  - push
I am naming this pipeline “staging” since that’s the environment we’ll be deploying to if all tests pass; we also tell Drone to start this pipeline whenever code is pushed to the master branch, with a trigger.
Next, we need to define the steps the pipeline is made of. The first two steps I am using are for caching: by default Drone doesn’t do caching of the Docker layers so the image would have to be rebuilt from scratch each time like with Github Actions. To enable caching I’m using the volume cache plugin - yes, Drone also supports plugins albeit not as many as Jenkins (which is a good thing IMO).
steps:
- name: restore-cache
  image: drillster/drone-volume-cache
  settings:
    restore: true
    mount:
    - target
  volumes:
  - name: cache
    path: /cache
- name: prepare-cache
  image: busybox
  commands:
  - mkdir -p /cache/${DRONE_REPO}/target
  - mkdir -p /cache/${DRONE_REPO}/docker
  volumes:
  - name: cache
    path: /cache
In these first two steps we tell the plugin to restore the cache from a Docker volume called cache; in case the cache volume is empty, we create a couple of directories using the busybox image. To define the cache volume, add the following at the bottom of the file:
volumes:
- name: cache
  host:
    path: /var/cache
- name: target
  host:
    path: /var/cache/${DRONE_REPO}/target
- name: docker
  host:
    path: /var/cache/${DRONE_REPO}/docker
As you can see, we actually define another couple of volumes using subdirectories of the cache volume, which we’ll be using in the following steps.
Next, we define the build step using the official Docker plugin:
- name: build
  image: plugins/docker
  settings:
    registry: docker.pkg.github.com
    repo: docker.pkg.github.com/user/repository/image
    tags:
    - ${DRONE_COMMIT_SHA:0:7}
    - ${DRONE_COMMIT_BRANCH}
    force_tag: true
    use_cache: true
    username:
      from_secret: github_registry_username
    password:
      from_secret: github_registry_password
  volumes:
  - name: docker
    path: /var/lib/docker
With the above, we tell Drone to build the image (using "Docker in Docker") and push it to a registry using the provided secrets, which you can create from the repository's settings page in the Drone web app. Here I'm using Github Package Registry because it's free and allows multiple images within the same repository. If you don't have access to it yet or prefer something else, you can use whichever registry you wish; just make sure you add secrets to Drone if it's a private registry. As you can see, I am tagging each build with the short SHA of the commit (first 7 characters) as well as the name of the branch, which is master. force_tag ensures that any existing tags are overwritten/updated, and use_cache, well, should be self-explanatory. Of course we mount the cache volume defined earlier at /var/lib/docker, because that's where Docker caches layers.
NOTE: if you are concerned about security because of Docker in Docker, you may want to try Kaniko, since there's a plugin for it in Drone. However, I tried it and switched back to Docker in Docker because I had several issues with builds failing for weird reasons, as well as with caching not working properly. So I think it's just easier to use the default Docker plugin to build and publish images.
Once the image has been built and pushed successfully, we need to rebuild/update the cache to ensure the new layers can be fetched from cache when we reference the image in the next steps:
- name: rebuild-cache
  image: drillster/drone-volume-cache
  settings:
    rebuild: true
    mount:
    - target
  volumes:
  - name: cache
    path: /cache
The next step is optional. My app needs to create ingresses in Kubernetes to support custom domains, so I need to test this feature. For this I use the awesome k3s by Rancher Labs, which lets you create super lightweight clusters very quickly. You can run k3s as a container, so it's a perfect fit for automated testing (you can also use k3s on your desktop with k3d, which makes creating and managing k3s clusters a little easier). To test our app we'll need to define some services, like with Docker Compose. k3s is one such service, and before we can actually use it to test our app we need to fix the host in the generated kubeconfig, so that the k3s service can be reached by our app at the hostname k3s.
- name: prepare-k3s-kubeconfig
  image: sinlead/drone-kubectl
  volumes:
  - name: k3s-kubeconfig
    path: /k3s-kubeconfig
  detach: true
  commands:
  - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml
As you can see, we use a volume to store the kubeconfig so we can attach it to our test container. So let’s add this volume definition to the volumes defined earlier at the bottom of the file:
- name: k3s-kubeconfig
  temp:
    medium: memory
This can be an in-memory volume; it doesn't need to be persisted to disk since we only need it for a single run.
Before we move to the test step, let’s define the services required:
services:
- name: mysql
  image: percona
  environment:
    MYSQL_ROOT_PASSWORD: "root"
  ports:
  - 3306
- name: redis
  image: bitnami/redis
  environment:
    ALLOW_EMPTY_PASSWORD: yes
  ports:
  - 6379
- name: redis-sentinel
  image: bitnami/redis-sentinel
  environment:
    REDIS_MASTER_HOST: redis
  ports:
  - 26379
- name: k3s
  image: rancher/k3s:v0.9.1
  privileged: true
  command:
  - server
  environment:
    K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
    K3S_KUBECONFIG_MODE: 777
  volumes:
  - name: k3s-kubeconfig
    path: /k3s-kubeconfig
  ports:
  - 6443
My app needs MySQL, Redis and k3s; you may need to define different services depending on what your app requires. This works in a similar way to Docker Compose. The services you define here are started automatically after the clone step and before any other step in the pipeline, so they will be available when our test step runs.
The next step is to test the image we've just built:
- name: test
  image: docker.pkg.github.com/user/repo/image:${DRONE_COMMIT_SHA:0:7}
  environment:
    ...
  volumes:
  - name: k3s-kubeconfig
    path: /k3s-kubeconfig
  - name: docker
    path: /var/lib/docker
  commands:
  - cd /home/rails/app
  - rm -rf fixtures
  - bundle exec rails db:setup
  - bundle exec rails test:system test
In my case it's a Rails app, so the commands above are specific to Rails; you will need to replace them with whatever your app requires. As you can see, we mount the cache volume so the test step can quickly fetch the image from cache instead of downloading it from the registry.
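How the tests find the k3s cluster depends on your app; a common approach - shown here purely as a hypothetical sketch - is to point the standard KUBECONFIG environment variable at the kubeconfig we mounted from the k3s-kubeconfig volume:
  environment:
    # Hypothetical: kubectl and most Kubernetes clients honour KUBECONFIG
    KUBECONFIG: /k3s-kubeconfig/kubeconfig.yaml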
If, like me, you are using a private registry, you need to give Drone the credentials to pull images from it. First, create an auth string with:
echo -n "<username>:<password>" | base64
Then create a Drone secret named dockerconfigjson with the following content:
{
  "auths": {
    "https://docker.pkg.github.com/": {
      "auth": "<auth string>",
      "email": "<your email>"
    }
  }
}
Then add the following to the bottom of the file:
image_pull_secrets:
- dockerconfigjson
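If you prefer the command line over the web UI for this, the Drone CLI can read the secret's value from a file (a sketch, assuming you saved the JSON above as dockerconfig.json):
# Create the pull secret from the JSON document shown above
drone secret add --repository <user>/<repository> --name dockerconfigjson --data @dockerconfig.json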
The next step is for deployment. I am using Helm for this, so I am assuming that you already have a chart for your app:
- name: deploy-staging
  image: quay.io/ipedrazas/drone-helm
  settings:
    chart: ./helm/chart
    release: myapp-staging
    namespace: myapp-staging
    tiller_ns: tiller
    debug: true
    wait: true
    skip_tls_verify: true
    values_files:
    - ./helm/values/myapp/staging/values.yaml
    values: <a list of secrets>
  environment:
    API_SERVER: https://kubernetes.default.svc.cluster.local:443
    KUBERNETES_TOKEN:
      from_secret: kubernetes_token
    ...
  volumes:
  - name: docker
    path: /var/lib/docker
We use the cache when deploying as well. Here I am deploying to a dedicated namespace for staging. I'm using a standard values.yaml file for the normal configuration, and inline values for the secrets. Of course you can use Drone secrets so you don't have to put unencrypted secrets in the pipeline file.
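For example, since Drone lets plugin settings be populated from secrets with from_secret, a sketch of keeping those inline values out of the file could look like this (helm_secret_values is just a placeholder secret name holding a comma-separated key=value list):
    values:
      # Hypothetical secret containing e.g. "apiKey=xyz,smtpPassword=abc"
      from_secret: helm_secret_values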
Note that you need to specify both the API server URL (which is a fixed DNS name within Kubernetes) and the token, so that Helm can connect to the cluster. To get the token:
kubectl -n tiller get secret $(kubectl -n tiller get sa tiller -o jsonpath='{.secrets[].name}{"\n"}') -o jsonpath="{.data.token}" | base64 -D
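(Note that -D is the macOS flag for base64; on Linux use -d instead.) You can then store the token as the kubernetes_token Drone secret referenced in the step above; a possible one-liner, assuming the Drone CLI is configured as earlier:
# Extract the Tiller service account token and save it as a Drone secret
TOKEN=$(kubectl -n tiller get secret $(kubectl -n tiller get sa tiller -o jsonpath='{.secrets[].name}') -o jsonpath="{.data.token}" | base64 -D)
drone secret add --repository <user>/<repository> --name kubernetes_token --data "$TOKEN"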
The final step of the staging pipeline is to receive notifications on Slack with the result:
- name: notify-result
  image: plugins/slack
  settings:
    webhook:
      from_secret: slack_webhook
    channel: <slack channel>
    link_names: true
    template: >
      {{#success build.status}}
      Build {{build.number}} succeeded and deployed to Staging! :)
      Event: {{build.event}}
      Branch: {{build.branch}}
      Tag: {{build.tag}}
      Git SHA: {{build.commit}}
      Link: {{build.link}}
      {{else}}
      Build {{build.number}} failed and not deployed to Staging :(
      Event: {{build.event}}
      Branch: {{build.branch}}
      Tag: {{build.tag}}
      Git SHA: {{build.commit}}
      Link: {{build.link}}
      {{/success}}
  when:
    status: [ success, failure ]
You just need to create a Drone secret with the Slack webhook URL.
That’s it! Now whenever you push code to master, a new image will be built (leveraging caching), tested and if all tests pass, deployed to the staging environment.
Promoting to production
With Drone it’s super easy to promote a successful deployment to production. First let’s append another pipeline just for production to .drone.yml:
---
kind: pipeline
type: docker
name: production
trigger:
  branch:
  - master
  event:
  - promote
steps:
- name: restore-cache
  image: drillster/drone-volume-cache
  settings:
    restore: true
    mount:
    - target
  volumes:
  - name: cache
    path: /cache
- name: deploy-production
  image: quay.io/ipedrazas/drone-helm
  settings:
    chart: ./helm/chart
    release: myapp-prod
    namespace: myapp-prod
    tiller_ns: tiller
    debug: true
    wait: true
    skip_tls_verify: true
    values_files:
    - ./helm/values/myapp/production/values.yaml
    values: <a list of secrets>
  environment:
    API_SERVER: https://kubernetes.default.svc.cluster.local:443
    KUBERNETES_TOKEN:
      from_secret: kubernetes_token
    ...
  volumes:
  - name: docker
    path: /var/lib/docker
- name: notify-result
  image: plugins/slack
  settings:
    webhook:
      from_secret: slack_webhook
    channel: <slack channel>
    link_names: true
    template: >
      {{#success build.status}}
      Build {{build.number}} succeeded and deployed to Production! :)
      Event: {{build.event}}
      Branch: {{build.branch}}
      Tag: {{build.tag}}
      Git SHA: {{build.commit}}
      Link: {{build.link}}
      {{else}}
      Build {{build.number}} failed and not deployed to Production :(
      Event: {{build.event}}
      Branch: {{build.branch}}
      Tag: {{build.tag}}
      Git SHA: {{build.commit}}
      Link: {{build.link}}
      {{/success}}
  when:
    status: [ success, failure ]
volumes:
- name: cache
  host:
    path: /var/cache
- name: target
  host:
    path: /var/cache/${DRONE_REPO}/target
- name: docker
  host:
    path: /var/cache/${DRONE_REPO}/docker
image_pull_secrets:
- dockerconfigjson
The production pipeline is smaller because we don't need to build, publish and test the image again; we just need to deploy it to production once staging has been verified. So we simply restore the cache, deploy and notify the result. As you can see, the event we are watching here is promote.
To trigger the promotion of a build to production, you can use the Drone CLI. On Mac with Homebrew:
brew tap drone/drone
brew install drone
Then, whenever a build has been deployed successfully to staging and you/your QA team have verified that everything works as expected, you can promote the build (you can find the build number in the Slack notifications) with:
drone build promote <user>/<repository> <build number> production
And you’re done! The same image that was deployed to staging will now be deployed to production.
Wrapping up
As you can see, not only is Drone super lightweight, it is also super easy to get started with and very powerful as it can do whatever containers can do (mostly). I really like it so far and I’m pleased with the setup. I just wish the docs were more up to date for the current version and less confusing (IMO). I would have preferred to use Kaniko to build images, but because I had issues with it and because it’s just me using the cluster for now, I’m happy with Docker in Docker. I hope you find this post useful and that it can save you some time :)