Jenkins on kubernetes

Up until now I have been running an ancient version of Jenkins on a virtual machine. That machine was so old that it could no longer download artifacts from the internet because it lacked support for newer TLS versions. Time then, as part of my ongoing project to migrate everything to kubernetes, to move jenkins there as well.

Deployment

The jenkins project provides a helm chart with which Jenkins can be installed on kubernetes. Installing jenkins is then easily done in two steps:

helm repo add jenkins https://charts.jenkins.io
helm install example jenkins/jenkins --values values.yaml

Above, values.yaml is a yaml file containing configuration for the helm chart.

Jenkins configuration is quite easy; the values.yaml file I am using is as follows:

controller:
  containerEnv:
    - name: JENKINS_JAVA_OPTS
      value: -Duser.timezone=Europe/Amsterdam
  jenkinsUriPrefix: "/jenkins"
  testEnabled: false

persistence:
  enabled: true
  existingClaim: jenkins-pvc

networkPolicy:
  enabled: true

The required settings are minimal: the name of an existing PVC to use for storage, the URI prefix since I am running jenkins under /jenkins, and enabling the network policies defined by the jenkins helm chart. The biggest problem was figuring out how to set the default timezone; I had to inspect the image entrypoint to find out that I could use the JENKINS_JAVA_OPTS environment variable.
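
After deployment, it is easy to check that this variable actually reaches the controller container. A quick check, assuming the release is named example (so the controller pod is example-jenkins-0) and the controller container is called jenkins:

# verify that the timezone option is passed to the controller container
kubectl -n jenkins exec example-jenkins-0 -c jenkins -- printenv JENKINS_JAVA_OPTS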

To get a bit more control over the setup, helm can also generate the yaml with which jenkins will be deployed instead of installing it directly. This allows inspecting the yaml before installation and is useful to see exactly what will be installed. Also, specifying an explicit chart version is useful. For example, to generate the yaml file:

helm template --namespace jenkins \
              --create-namespace \
              --version 4.2.4 \
              --release-name example \
              jenkins/jenkins --values values.yaml > jenkins.yaml

The generated jenkins.yaml can then be put under version control and applied to the cluster.
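For example, to apply it (note that helm template only renders manifests, so the namespace has to be created separately):

kubectl create namespace jenkins
kubectl apply -n jenkins -f jenkins.yaml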

The helm chart for jenkins has a lot more configuration options. Use helm show values jenkins/jenkins to show all supported values.
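
It can also be handy to save the defaults of the exact chart version being deployed for later reference (the file name is arbitrary):

helm show values jenkins/jenkins --version 4.2.4 > default-values.yaml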

Post-install setup

After installation, you can use kubectl port-forward to connect to jenkins. Configuring exposure outside of the cluster is something I usually postpone until everything is working.
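
For example, assuming the chart was installed with release name example in the jenkins namespace, jenkins is then reachable at http://localhost:8080/jenkins:

# forward local port 8080 to the jenkins controller service
kubectl -n jenkins port-forward svc/example-jenkins 8080:8080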

The jenkins admin password can be obtained using

kubectl -n jenkins get secret example-jenkins -o jsonpath='{.data.jenkins-admin-password}' | base64 -d

After this, login and perform some final configuration steps:

  • Create another user with admin rights and set its password. Using the standard admin user is problematic since its password is reset every time the container starts up.
  • Install any other plugins you need, such as blue ocean or subversion. The kubernetes plugin is already installed.
  • Configure e-mail for jenkins. As admin, go to ‘Manage Jenkins/Configure System’ and configure the mail address used by jenkins under ‘Location/System admin e-mail address’. Also configure the address of the mail server under ‘E-mail notification’. Since I am also running the mail server on kubernetes, the host name is mail.exposure in my case, where mail is the service name and exposure the namespace where my mail server is running. See my earlier post for how I set up the mail server.

With this installation, the kubernetes plugin requires no configuration at all to run jobs inside the kubernetes cluster. With the kubernetes plugin every job runs in its own pod and is thus isolated from other jobs. This is an advantage compared to my old setup, where all jobs ran on the same build slave and interference between jobs was more likely.

After this, a simple test job can be used to verify the setup. Simply create a freestyle job with a shell build step like echo “hello world”. After building this job, you should see the pod yaml logged, with the build step running in the pod. For example:

Started by user Jenkins Admin
Running as SYSTEM
Agent default-tkmcr is provisioned from template default
---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins/label-digest: "b45c4bb73250c74571bbb0910088506744e00a9b"
    jenkins/example-jenkins-agent: "true"
    jenkins/label: "example-jenkins-agent"
  name: "default-tkmcr"
  namespace: "jenkins"
spec:
  containers:
  - args:
    - "********"
    - "default-tkmcr"
    env:
...
Building remotely on default-tkmcr (example-jenkins-agent) in workspace /home/jenkins/agent/workspace/test
[test] $ /bin/sh -xe /tmp/jenkins11271252766766243125.sh
+ echo hello world
hello world
Finished: SUCCESS

After testing, jenkins can be exposed through HTTPD using the internal DNS name of the jenkins service. For instance, on apache you can add this inside a VirtualHost definition:

ProxyPass /jenkins http://example-jenkins.jenkins:8080/jenkins disablereuse=On
ProxyPassReverse /jenkins http://example-jenkins.jenkins:8080/jenkins
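
The ProxyPass directives require the apache proxy modules to be enabled. On a Debian-style installation (an assumption, adjust to your distribution) that could look like this:

# ProxyPass requires mod_proxy and mod_proxy_http
a2enmod proxy proxy_http
systemctl reload apache2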

For details on how I am exposing services running in different namespaces, see my earlier post.

Network policies

As part of my setup I am defining some additional network policies since I want to limit ingress and egress traffic by default for any service in the namespace where I am running jenkins.

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-allow-nothing
  namespace: jenkins
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-httpd-example-org
  namespace: jenkins
spec:
  podSelector:
    matchLabels:
      statefulset.kubernetes.io/pod-name: example-jenkins-0
  ingress:
    - ports:
        - port: 8080
      from:
        - podSelector:
            matchLabels:
              app: httpd-example-org
          namespaceSelector:
            matchLabels:
              purpose: exposure
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: jenkins-allow-external-traffic
  namespace: jenkins
spec:
  podSelector:
    matchLabels:
      statefulset.kubernetes.io/pod-name: example-jenkins-0
  egress:
    - ports:
        - port: 53
          protocol: TCP
        - port: 53
          protocol: UDP
        - port: 80
        - port: 443
    - to:  # api server
      - ipBlock:
          cidr: 192.168.178.89/32
      ports:
        - port: 6443
    - to:
      - namespaceSelector:
          matchLabels:
            purpose: exposure
      ports:
        - port: 25
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: jenkins-agent-allow-egress
  namespace: jenkins
spec:
  podSelector:
    matchLabels:
      jenkins/example-jenkins-agent: "true"
  egress:
    - ports:
        - port: 53
          protocol: TCP
        - port: 53
          protocol: UDP
        - port: 80
        - port: 443
        - port: 8080
        - port: 50000

These additional network policies are required since the default network policies from the jenkins helm chart only provide ingress rules, and the allow-nothing rule above disables all egress by default. The first policy is the default allow-nothing rule described in an earlier post. The second allows the jenkins master to be accessed from the apache server that exposes it. The third grants the jenkins master standard internet access for DNS and HTTP/HTTPS, and also allows access to the API server and to the mail server running on kubernetes; API server access is required because the jenkins master must be able to create agent pods on the cluster. The fourth policy allows the agents access to DNS and standard HTTP and HTTPS, which is required by jobs running inside the pod, such as maven builds that retrieve artifacts from a maven repository (such as nexus). Ports 8080 and 50000 are required for access to the jenkins master; this last rule could be made stricter by allowing that access only towards the jenkins master.

One more network policy is required to allow access from the apache webserver to jenkins:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-access-to-jenkins
  namespace: exposure
spec:
  podSelector:
    matchLabels:
      app: httpd-example-org
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: jenkins
          podSelector:
            matchLabels:
              statefulset.kubernetes.io/pod-name: example-jenkins-0
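
Assuming all of the above policies are stored in a single network-policies.yaml (the file name is just an example), they can be applied and verified as follows:

kubectl apply -f network-policies.yaml
# check that the policies exist in both namespaces
kubectl -n jenkins get networkpolicy
kubectl -n exposure get networkpolicy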

Migrating jobs

In my old setup I was using freestyle jobs; these have now been converted to pipeline jobs managed by the multi-branch pipeline plugin. The basic structure of the Jenkinsfile, using a declarative pipeline, is as follows:

String cron_string = BRANCH_NAME == "trunk" ? "10 3 * * *" : ""      // A 

pipeline {
  agent {
    kubernetes {
      label "${env.BUILD_TAG}"                                       // B 
      idleMinutes 0  
      yamlFile 'build-pod.yaml'  
      defaultContainer 'build' 
    }
  }
  options {
    disableConcurrentBuilds()                                       // C
  }
  triggers { cron(cron_string) }                                    // A 
  
  stages {
    stage('Main') {
      steps {  
        sh '''
          # the old content of your build job
        '''
      }
    }
  }
  post {
    always {
      junit '**/surefire-reports/*.xml'
      cobertura coberturaReportFile: '**/target/site/cobertura/coverage.xml'
    }
    changed {                                                      // D
      mail to: "jenkins@example.org",
      subject: "jenkins build:${currentBuild.currentResult}: ${env.JOB_NAME}",
      body: "${currentBuild.currentResult}: Job ${env.JOB_NAME}\nMore Info can be found here: ${env.BUILD_URL}"
    }
  }
}
  • # A: Define a cron schedule for running jobs daily on ‘svn trunk’ (this would be ‘master’ for a git job). With the multi-branch pipeline plugin, daily builds can only be configured in the Jenkinsfile.
  • # B: Run using a kubernetes agent with the given label. The label corresponds to a kubernetes pod template with that label. Using BUILD_TAG as the label makes it unique for every build, so every build runs in its own pod. The idleMinutes parameter determines how long the pod remains around after the job has finished. Usually 0 is fine, but for debugging one might choose a much longer time. The agent section also refers to a build-pod.yaml template that defines additional containers to be added to the pod and which may also customize the jnlp container in the pod. The jnlp container implements the jenkins agent which connects back to jenkins after it is launched. It is important to realize that some commands always run in the jnlp container. This means that in some cases volumes should be mounted in both the jnlp container and in your own additional containers. A basic example of such a build pod is as follows:
apiVersion: v1
kind: Pod
spec:
  terminationGracePeriodSeconds: 0 
  containers:
    - name: build
      image: repo.example.org/java8:latest
      command: ["tail", "-f", "/dev/null"]
      imagePullPolicy: Always

In the above definition, terminationGracePeriodSeconds means that when jenkins deletes the pod after idleMinutes has passed, kubernetes removes the pod immediately instead of waiting for graceful termination. Without it, you typically have to wait about 30 seconds before the pod disappears.

  • # C: This disables concurrent builds of the same job (with the multi-branch pipeline plugin, of the same branch).
  • # D: A mail notification when the job status has changed. Unfortunately, the multi-branch pipeline plugin does not allow mail addresses to be defined at the job level. Therefore, it is convenient to use a mailing list address. See my earlier post for how I configured mailman on kubernetes. The sender address used by jenkins must be allowed by mailman in the settings for posts by non-members to the mailing list.

Pipeline libraries

Making clever use of pipeline libraries, the definition of the kubernetes section in the Jenkinsfile can be made much simpler, for instance:

pipeline { 
  agent {
    kubernetes(agentsetup(containers: 'java8'))
  } 
  ...
}

See for example here for how to achieve something like this. The pipeline library can also provide other useful defaults for your setup.

Building containers in Jenkins

A post on jenkins running on kubernetes wouldn’t be complete without describing how to build containers as part of jenkins jobs. There are several complications when building containers on kubernetes:

  • Docker is not available. Kubernetes these days usually does not run on docker any more, and even if docker were available, using it from inside a running pod would be a hack.
  • Since there is no docker, there is also no caching of previously built images.

To solve this, kaniko can be used. Kaniko does not require a running docker daemon and it uses the (private) docker registry for caching image builds. To use kaniko, we first have to set up the build pod like so:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug
      #imagePullPolicy: Always
      command:
        - /busybox/cat
      tty: true
      resources:
        requests:
          memory: "2048M"
          cpu: "1000m"
          ephemeral-storage: "5Gi"
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker/config.json
          subPath: config.json
  volumes:
    - name: docker-config
      secret:
        secretName: docker-config

This adds a kaniko container to the pod, with a docker config.json file mounted at /kaniko/.docker/config.json. This is required in order to push images, which usually requires authentication.
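
The docker-config secret itself is not created by the chart; a minimal sketch of how it could be created, assuming a local config.json containing the credentials for repo.example.org:

# create the secret that the kaniko container uses for registry authentication
kubectl -n jenkins create secret generic docker-config --from-file=config.json=config.json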

The Jenkinsfile then becomes something like this:

pipeline {
  agent {
    kubernetes {
      label "${env.BUILD_TAG}"                                       
      idleMinutes 0  
      yamlFile 'build-pod.yaml'  
      defaultContainer 'build' 
    }
  }
  stages {
    stage('Main') {
      steps { 
        container('kaniko') { 
          sh """
            /kaniko/executor --dockerfile Dockerfile \
                             --cache=true \
                             --cache-ttl=100000h \
                             --context \$(pwd) \
                             --destination repo.example.org/rockyrocks:${env.BRANCH_NAME}
          """
        }
      }
    }
  }
}

The above job builds a container based on a Dockerfile in the root of the checkout. It uses caching with a very long time-to-live. The current directory (context) for the build is also the root of the checkout in this example. The destination is the image name in the (private) registry, using the current branch name as the tag. The multi-branch pipeline plugin can also be configured to build tags, so the same job could be used to build versioned images.

It goes without saying that in this case too a pipeline library should be used to simplify the pipeline. In fact, assuming a default private registry and default caching settings, the above kaniko sh build step could be simplified to

buildcontainer(context: ".", image = "rockyrocks")

The above setup was used with nexus repository manager, also running on kubernetes.

Final thoughts

This concludes the migration of jenkins to kubernetes. Jenkins is quite well integrated with kubernetes, but some effort is required to get an optimal configuration. Pipeline libraries are a great way to simplify builds.
