Running a Jenkins CI/CD pipeline on a Kubernetes cluster

In this tutorial I will go over two procedures:

The first: a Jenkins deployment on an RKE cluster.
I will install Jenkins as a pod; the pod will be ephemeral, and the Jenkins data will be stored in a volume on the host.
Instead of the DinD (‘Docker in Docker’) approach, I will use the host’s Docker daemon via its socket, which is much more elegant.

The second:
a CI/CD pipeline that will be triggered every time code is added or changed in a GitHub repository.

The stages are as follows:
1. Check out the code.
2. Build a Docker image.
3. Push the newly built image to DockerHub.
4. Update the application running on RKE with Helm.

NOTE: in order to use docker, Helm, and kubectl inside the Jenkins container you must mount their binaries (and config) as volumes… take a look at the volumes and volumeMounts sections of the ‘deployment.yaml’ file.
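Once the deployment (defined below) is up, a quick way to sanity-check these mounts is to run each tool from inside the pod, for example:

kubectl exec -it deploy/jenkins -n jenkins-ns -- docker version
kubectl exec -it deploy/jenkins -n jenkins-ns -- helm version
kubectl exec -it deploy/jenkins -n jenkins-ns -- kubectl version --client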

First, let’s create the Jenkins deployment. Create a ‘deployment.yaml’ file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        # fsGroup: 1000
        runAsUser: 0 # initially this was user 1000; I changed it to 0, otherwise the pod gets a permissions error when it tries to use the host's Docker socket
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home       
            - name: docker-sock-volume
              mountPath: /var/run/docker.sock
            - name: docker-binary-volume
              mountPath: /usr/bin/docker
            - name: helm-binary-mount-volume
              mountPath: /usr/local/bin/helm
            - name: kubeconfig-mount-volume
              mountPath: /root/.kube/config
            - name: kubectl-mount-volume
              mountPath: /usr/local/bin/kubectl
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
              claimName: jenkins-pv-claim
        - name: docker-sock-volume
          hostPath:
            path: /var/run/docker.sock
            type: Socket
        - name: docker-binary-volume
          hostPath:
            path: /usr/bin/docker
        - name: helm-binary-mount-volume
          hostPath:
            path: /usr/local/bin/helm
        - name: kubeconfig-mount-volume
          hostPath:
            path: /home/rke/.kube/config
        - name: kubectl-mount-volume
          hostPath:
            path: /usr/local/bin/kubectl

Next, create a ServiceAccount, ClusterRole, and ClusterRoleBinding in ‘serviceaccount.yaml’:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: jenkins-ns

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: jenkins-ns
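After applying, you can verify the binding works with ‘kubectl auth can-i’. Keep in mind that apiGroups: [""] only covers the core API group (pods, services, and so on); if the service account itself ever needs to manage Deployments and friends, you would add groups like "apps" to the rule:

kubectl auth can-i list pods --as=system:serviceaccount:jenkins-ns:jenkins-admin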

Create a StorageClass, PersistentVolume, and PersistentVolumeClaim in ‘volume.yaml’:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: jenkins-ns
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/rke/stmp/rke-persistent-volumes/jenkins-pv"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: jenkins-ns
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
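One gotcha with hostPath volumes: the directory has to exist on the node before the pod starts, so create it yourself (or set ‘type: DirectoryOrCreate’ on the hostPath so Kubernetes creates it for you):

mkdir -p /home/rke/stmp/rke-persistent-volumes/jenkins-pv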

Create ‘service.yaml’:

apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: jenkins-ns
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path:   /
      prometheus.io/port:   '8080'
spec:
  selector: 
    app: jenkins-server
  type: NodePort  
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
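With the NodePort in place you can already reach Jenkins directly on any node, even before the ingress exists (‘<node-ip>’ is a placeholder for one of your node IPs):

curl -I http://<node-ip>:32000/login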

Create ‘ingress.yaml’:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins-ns
spec:
  ingressClassName: nginx
  rules:
    - host: rke.jenkins
      http:
        paths:
          - backend:
              service:
                name: jenkins-service
                port:
                  number: 8080
            path: /
            pathType: Prefix
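Since ‘rke.jenkins’ is not a real DNS name, map it to a node running the NGINX ingress controller, for example with a line like this in /etc/hosts on your workstation (‘<node-ip>’ is a placeholder):

<node-ip>  rke.jenkins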

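One thing before applying: the ‘jenkins-ns’ namespace referenced by all of the manifests has to exist, so create it first (‘k’ is just an alias for ‘kubectl’):

k create namespace jenkins-ns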
Apply all of the created YAML files:

k apply -f .

Navigate to ‘http://rke.jenkins/’ and you will see the ‘Unlock Jenkins’ page. To get the unlock password you will have to take a peek at the Jenkins pod logs:

a.  k get pods -n jenkins-ns
b.  k logs <your pod> -n jenkins-ns
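Alternatively, the same password is written to a file under the Jenkins home, so you can read it directly:

c.  k exec <your pod> -n jenkins-ns -- cat /var/jenkins_home/secrets/initialAdminPassword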

Next you will go through the short initialization process (install the suggested plugins and create an admin user) and that’s about it… you have Jenkins running on Kubernetes.

CI/CD pipeline

After Jenkins is all set up and ready, we will create a CI/CD procedure that checks out the code from GitHub when new code is added, builds a Docker image and pushes it to DockerHub, and then updates the application running on Kubernetes.

This is the scenario:
we have our app and its Dockerfile in repo #1.
The Helm chart for deploying the app on Kubernetes is in repo #2.
When repo #1 changes, it triggers a ‘sentinel’ Jenkins job, which triggers the ‘real’ pipeline: check out repo #1 -> build the Docker image -> push the new image to DockerHub -> check out repo #2 and deploy the Helm chart.

The reason there is a ‘sentinel’ job that triggers the CI/CD pipeline is that I didn’t want to use a ‘custom webhook’ plugin on Jenkins, and a ‘Pipeline’ job has no ‘Source Code Management’ tab, so I used a freestyle job to trigger the pipeline job whenever the code in the repo changes.

First we need to create a webhook on GitHub:
log in to GitHub and use the screenshot below for reference.

NOTE: it is important to add a trailing slash ‘/’ to the URL.
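With the stock GitHub plugin the webhook should point at Jenkins’ /github-webhook/ endpoint, so the payload URL will look something like this (where ‘<your-jenkins-address>’ is whatever address GitHub can reach your Jenkins at):

http://<your-jenkins-address>/github-webhook/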

Now create the freestyle ‘sentinel’ job:
configure Source Code Management,
configure Build Triggers (this requires a few extra steps on the GitHub side),
and configure the build steps and post-build actions… use the screenshots below for reference.

Now let’s create the ‘Pipeline’ job, the job where most of the magic happens.

The ‘Pipeline’ job is created the same way as the ‘Freestyle’ one.
After creating it, scroll down to the ‘Pipeline’ section and select ‘Pipeline script’; this is where you write the Jenkinsfile.

The Jenkinsfile consists of six stages and a post stage:
stage 1: prints some basic environment info, mainly as a sanity check.
stage 2: checks out the code from GitHub repo #1.
stage 3: builds a Docker image and tags it with the build number.
stage 4: authenticates to DockerHub.
stage 5: pushes the image to DockerHub.
stage 6: checks out the charts from repo #2 and deploys the app to Kubernetes.
post stage: deletes the workspace.

pipeline {
  agent any
  stages {
    stage('Stage 1') {
      steps {
        echo "Stage One"
        sh 'pwd'
        sh 'whoami'
        sh 'hostname'
        sh 'cat /etc/os-release'
      }
    }
    stage('Stage 2') {
      steps {
        echo "Stage Two" 
        checkout scmGit(branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/EliBukin/dockerized-flask-morse-app']])
      }
    }
    stage('Stage 3') {
      steps {
        echo "Stage Three - build docker image"
        sh 'docker build -t elibukin/dockerized-flask-morse-app:${BUILD_NUMBER} -f Dockerfile_app .'
      }
    }
    stage('Stage 4') {
      steps {
        echo "Stage Four - Authenticate to DockerHub"
        sh 'docker login -u <your-user> -p <your-pass>'
      }
    }
    stage('Stage 5') {
      steps {
        echo "Stage Five - Push docker image to DockerHub"
        sh 'docker push elibukin/dockerized-flask-morse-app:${BUILD_NUMBER}'
      }
    }
    stage('Stage 6') {
      steps {
        echo "Stage Six - Checkout code from the GitHub app (HELM CHARTS) repository, and deploy it to kubernetes."
        sh """git clone https://github.com/EliBukin/helm-flask-morse-app.git && \
            cd helm-flask-morse-app && \
            helm upgrade --install morse-app -n morse-app --create-namespace -f values.yaml \
            --set ingress.hosts[0].host=rke.rancher \
            --set image.repository=elibukin/dockerized-flask-morse-app \
            --set image.tag=${BUILD_NUMBER} . """
      }
    }
  }
  post {
    always {
      cleanWs()
    }
  }
}
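A note on Stage 4: hard-coding DockerHub credentials in the Jenkinsfile works, but it is cleaner (and safer) to store them as a Jenkins username/password credential and read them with the Credentials Binding plugin. A minimal sketch, assuming you created a credential with the id ‘dockerhub-creds’ (the id is made up, use your own):

    stage('Stage 4') {
      steps {
        echo "Stage Four - Authenticate to DockerHub"
        // 'dockerhub-creds' is a hypothetical credential id, replace it with yours
        withCredentials([usernamePassword(credentialsId: 'dockerhub-creds', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS')]) {
          sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
        }
      }
    }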

Now, every time the code in repo #1 changes, the whole CI/CD procedure will be triggered. Try it!
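An easy way to test it is to push an empty commit to repo #1 and watch the jobs kick off:

git commit --allow-empty -m "trigger the pipeline" && git push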