Jenkins on Kubernetes

Ian Muge
8 min read · May 18, 2020

Jenkins and Kubernetes.

Background

I think the lockdown is starting to get to me; in any case, I decided to deploy an extensible, persistent Jenkins setup on Kubernetes. It has a master node that schedules builds and creates slaves on which to run them. In our sample project we shall run a job that builds, tags and pushes a Docker image.

Setup/Prerequisites

We shall be working with AKS (Azure Kubernetes Service), so some configurations will be specific to AKS and can be substituted on other providers. We shall be working with the LTS image of Jenkins.

Service Deployment

Cluster Setup

We create a basic cluster on AKS; Azure offers $200 in free credits for 30 days to test out its cloud services, which I am determined to utilize very well during this period.

We define the Subscription (in this case “Free Trial”), the Resource Group and a cluster name, and set the node size and node count. You can keep the defaults for the other values or customize them based on your requirements. On the networking tab, we should ensure that “HTTP application routing” is set to “Yes”; we shall use this in our ingress. The alternative, advisable for production workloads, would be to create an ingress based on the Azure Application Gateway.

Namespace

We create a new namespace to be used for the master deployment and its slave pods. In our setup.yml file:

---
apiVersion: v1
kind: Namespace
metadata:
  name: devops
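Assuming the manifest is saved as setup.yml (the filename used above), it can be applied and verified as follows:

```shell
# Create the namespace and confirm it exists
kubectl apply -f setup.yml
kubectl get namespace devops
```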

Jenkins

We shall create a service, Persistent volume claim, service accounts, roles, role-binding and a Deployment. All these are held in the jenkins.yml file.

Service

We create a ‘ClusterIP’ service bound on port 80, routing requests to port 8080 on the containers (the Jenkins UI port), and port 50000, which Jenkins uses to communicate with slave nodes.

---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  namespace: devops
  labels:
    app: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
    - port: 50000
      targetPort: 50000
      protocol: TCP
      name: jnlp-port

Persistent Volume Claim

We create a Persistent Volume Claim to store Jenkins data across deployments, upgrades, updates and restarts. We use the default StorageClass, but we could just as well use the premium StorageClass, which is faster, or define our own. We can list the provider’s storage classes using kubectl get sc and configure the name in the manifest.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
  namespace: devops
spec:
  # storageClassName: managed-premium-retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
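To pick a StorageClass and confirm the claim actually binds once applied, something like this should work (output names vary by provider):

```shell
# List the storage classes the provider offers
kubectl get sc
# After applying the manifest, check that the claim is bound to a volume
kubectl get pvc pv-claim -n devops
```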

RBAC

The Cluster Role Binding, Service Account and Cluster Role provide RBAC to the Kubernetes cluster and its resources. Together they implement the principle of least privilege, granting only the access rights required to perform their function and no more.

Cluster Role

Our Jenkins deployment requires permission to perform some actions on the Kubernetes cluster: it should be able to create, modify and delete pods, execute commands within pods and fetch logs from them. This is shown in the YAML below.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins
  labels:
    app: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

Service Account

We create a service account to use in our deployment. We name it ‘jenkins’, as we shall reference it later.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: devops
  labels:
    app: jenkins

Cluster Role Binding

The Cluster Role Binding defines the mapping between the Service Account and Cluster Role.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: devops
roleRef:
  kind: ClusterRole
  name: jenkins
  apiGroup: rbac.authorization.k8s.io
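One way to sanity-check the RBAC setup, once the manifests above are applied, is kubectl’s auth can-i with service-account impersonation:

```shell
# Should print "yes": the role grants pod creation
kubectl auth can-i create pods -n devops \
  --as=system:serviceaccount:devops:jenkins
# Should print "no": no verbs were granted on deployments
kubectl auth can-i create deployments -n devops \
  --as=system:serviceaccount:devops:jenkins
```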

Deployment

We create a single-replica deployment for the Jenkins master and assign it adequate resources, depending on what is available on our K8s nodes. Most of the master’s work involves coordinating the slaves that run the actual jobs. The pod uses the previously defined ‘jenkins’ service account to access the K8s cluster. We create a volume based on the previously created Persistent Volume Claim, referencing it by its name “pv-claim”. To fix the permissions on the created volume, we use an initContainer that mounts the volume and changes ownership to the jenkins user and group, both identified by id 1000; this allows the container to properly read and write the volume. We expose ports 8080 (http) and 50000 (jnlp-port) on the container. To ensure the health and uptime of our deployment, we set up readiness and liveness probes checking port 8080 on the container.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pv-claim
      initContainers:
        - name: fix-data-permissions
          image: busybox
          command: ["/bin/sh", "-c", "chown 1000:1000 /var/jenkins_home"]
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          securityContext:
            privileged: true
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          resources:
            limits:
              cpu: "1"
              memory: 1024M
            requests:
              cpu: "0.5"
              memory: 256M
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 50000
              name: jnlp-port
              protocol: TCP
          volumeMounts:
            - name: jenkins-home
              mountPath: "/var/jenkins_home"
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 10
            failureThreshold: 10
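With everything in jenkins.yml (the filename used above), we can apply it and watch the rollout:

```shell
# Apply the combined manifest and wait for the rollout to complete
kubectl apply -f jenkins.yml
kubectl rollout status deployment/jenkins -n devops
# The pod should report Running once the init container has finished
kubectl get pods -n devops -l app=jenkins
```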

Ingress

We use the AKS HTTP application routing ingress. It provides an FQDN we can use to expose and reach our service from outside the cluster; we get the FQDN from the Azure portal. We use the previously defined Jenkins service ‘jenkins-svc’ as our backend.

It should be noted that, because we are using the AKS HTTP application routing add-on, we need to reference it in the ingress class annotation. Alternatively, we could expose the service publicly with a “type: LoadBalancer” service, an Application Gateway or an Nginx ingress.

This is contained in the ingress.yml file:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: devops
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: jenkins.{http_router_dns_name}
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
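If you prefer the CLI over the portal, the DNS zone name for the host field can be queried from the add-on profile; myResourceGroup and myCluster below are placeholders for your own names:

```shell
# Fetch the HTTP application routing DNS zone assigned to the cluster
az aks show --resource-group myResourceGroup --name myCluster \
  --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName \
  -o tsv
```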

Configuration

Once the service is accessible we need to set up a number of prerequisites.

First, we get the default admin password from the pod logs. We can use the command below:

kubectl logs -n devops $(kubectl get -n devops pods -l app=jenkins --no-headers=true | awk '{print $1}')
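If the password banner has already rotated out of the logs, the same value can usually be read straight off the persistent volume; this assumes kubectl supports the deploy/name target for exec (recent versions do):

```shell
# Read the initial admin password from the Jenkins home volume
kubectl exec -n devops deploy/jenkins -- \
  cat /var/jenkins_home/secrets/initialAdminPassword
```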

We set up the first admin user and let the initial suggested plugins install. Once that is done, we move on to install the additional plugins we need. We go to “Manage Jenkins > Manage Plugins”. In the available plugins, we search for and install:

  • ‘Kubernetes plugin’ and ‘Kubernetes Credential Plugin’ : Helps to interface with the K8s cluster.
  • ‘Strict Crumb Issuer Plugin’ : Helps with CSRF protection, more details can be found from: https://plugins.jenkins.io/strict-crumb-issuer/

We install the plugins without restarting.

We move on to configure the Kubernetes plugin. We go to “Manage Jenkins > Manage Nodes and Clouds > Configure Clouds” and add a new ‘Kubernetes’ cloud.

Kubernetes Cloud Configurations

As we are outside the default namespace, which hosts the Kubernetes API, we need to reference it using the cluster’s FQDN. This is how we shall interface with Kubernetes to provision and use our slave pods.

Kubernetes URL: https://kubernetes.default.svc.cluster.local/

We use the namespace we had defined at the beginning of the project.

Kubernetes Namespace: devops

We create new credentials of kind “Kubernetes Service Account” with global scope. This instructs Jenkins to use the service account mounted into the deployment to interface with the K8s cluster, i.e. to provision and configure its slave pods.

Credentials: Kubernetes Service Account

We add the Jenkins URL; this is the Jenkins master endpoint.

Jenkins URL: http://jenkins-svc.devops.svc.cluster.local/

We can customise the “Pod Retention Policy”: the pod can be destroyed on completion of a job, always retained (leaving it to the cluster admin to remove these pods) or retained only when an error occurs in the build.

We leave the other values at their defaults. We do not add a pod template here; we define that in the project’s Jenkinsfile. We can test access to the K8s cluster and, on success, move on to using it.

Testing

Jenkins should now be ready. We create a new pipeline project with the pipeline definition “Pipeline script from SCM”. In this example we shall use a simple sample project on Git that I had previously worked on. We provide the repository and, if required, credentials.

We start by defining the agent pod our build will run on; in this case we use a Docker-in-Docker container. The container name is used in the job stages. We can define multiple agents based on what the builds require.

agent {
    kubernetes {
        //cloud 'kubernetes'
        label 'docker-dind'
        yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: docker
      image: docker:dind
      command: ['cat']
      tty: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
"""
    }
}

The stages are: check out code, build the image and push the image. The checkout stage is performed implicitly by declarative Jenkins pipelines, so we don’t need to define it in the Jenkinsfile. In the build and push stages, note that we enclose the steps in the slave container to be used: container('docker') {}. We build the image and tag it using the current Jenkins build number. We push the image to Docker Hub, so we define credentials (username and password) in Jenkins credentials and use them within the withCredentials() {} section. Jenkins does not show secure credentials in the output logs; however, they might be stored unencrypted within the slave container, which can be remedied by using a Docker credential helper.

stages {
    stage('Build Docker image') {
        steps {
            container('docker') {
                sh "docker info"
                sh "docker build -t ikmuge/nginx-spa:${BUILD_NUMBER} ."
                sh "docker tag ikmuge/nginx-spa:${BUILD_NUMBER} ikmuge/nginx-spa:latest"
            }
        }
    }
    stage("Push Image") {
        steps {
            container("docker") {
                withCredentials([usernamePassword(credentialsId: 'docker-hub-credential', passwordVariable: 'password', usernameVariable: 'user')]) {
                    sh "docker login --username='$user' --password='$password'"
                }
                sh "docker push ikmuge/nginx-spa:${BUILD_NUMBER}"
                sh "docker push ikmuge/nginx-spa:latest"
            }
        }
    }
}
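For context, the agent and stages fragments above live inside a single declarative pipeline block; a rough skeleton of the full Jenkinsfile (the elided parts are the fragments shown earlier) looks like:

```groovy
pipeline {
    agent {
        kubernetes {
            label 'docker-dind'
            // yaml """ ...pod spec shown above... """
        }
    }
    stages {
        // 'Build Docker image' and 'Push Image' stages shown above
    }
}
```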

When the build begins, we observe that a new pod is created and used for the build. Based on the configured “Pod Retention Policy”, the pod is then retained or destroyed.

Conclusions

We set up Jenkins on Kubernetes with persistence, configured access to the Kubernetes cluster, created a sample project and used pods as our slaves to build and push it.

TL;DR

Persistent Jenkins-on-Kubernetes deployment and configuration, using dynamically created pods as build slaves. Tested by building, tagging and pushing an image to Docker Hub. All manifests available on:
