Grafana with Helm on Kubernetes


Monitoring is the heartbeat of any modern infrastructure, providing essential insights into system performance, resource utilization, and application behavior. Grafana stands out as a popular solution for visualizing and analyzing metrics, offering a rich array of dashboards and data sources. When combined with Kubernetes, Grafana becomes an indispensable tool for monitoring the dynamic nature of containerized environments.

Deploying Grafana on Kubernetes manually can be a daunting task, involving several steps like creating deployments, services, and configuring persistent storage. However, with the help of Helm, the package manager for Kubernetes, this process can be significantly streamlined.

In this tutorial, we’ll walk through deploying Grafana on Kubernetes using Helm. By leveraging Helm charts, you’ll be able to deploy Grafana into your Kubernetes cluster, enabling you to visualize and monitor your system’s performance with ease. Whether you’re a DevOps engineer, a system administrator, or a Kubernetes enthusiast, this guide will equip you with the knowledge to deploy Grafana effortlessly and unleash its monitoring magic on your Kubernetes infrastructure. Let’s get started!

Prerequisites:

          Kubernetes cluster   –> Kubernetes cluster installation tutorial: https://eli-bukin.com/projects/singlenode-rke-cluster-installation/

          Helm installed   –> installation docs: https://helm.sh/docs/intro/install/

          kubectl installed   –> installation docs: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

          Load balancer   –> deployment instructions: https://eli-bukin.com/deploy-metallb-load-balancer-on-kubernetes/

          Grafana community Helm charts   –> https://github.com/grafana/helm-charts?tab=readme-ov-file
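Before touching the chart, it's worth confirming the tooling is in place. A quick sanity check (assuming kubectl is already configured to talk to your cluster):

```shell
# confirm the client tooling is installed
helm version --short
kubectl version --client

# confirm the cluster is reachable and the nodes are Ready
kubectl cluster-info
kubectl get nodes
```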

First things first, let's clone the repo locally and make a few changes to the chart.

git clone https://github.com/grafana/helm-charts.git 

          1. in “charts/grafana/values.yaml” change the service type to “LoadBalancer”, so the load balancer can assign it an external IP

service:
  enabled: true
  type: LoadBalancer
  loadBalancerIP: ""
  loadBalancerClass: ""
  loadBalancerSourceRanges: []
  port: 80
  targetPort: 3000
    # targetPort: 4181 To be used with a proxy extraContainer
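If you'd rather not edit the chart files at all, the same values can be overridden at install time with --set flags. A sketch (the keys mirror the values.yaml snippet above; the namespace from step 2a below must already exist, or add --create-namespace):

```shell
# override the service type (and the persistence settings used later)
# on the command line instead of editing values.yaml
helm upgrade --install grafana helm-charts/charts/grafana \
  -n grafana-ns \
  --set service.type=LoadBalancer \
  --set persistence.enabled=true \
  --set persistence.size=3Gi
```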

          2. create a namespace and a volume for the Grafana server, so your config won’t vanish like a fart in the wind every time the pod restarts.

                    a. create a namespace for grafana

kubectl create namespace grafana-ns

                    b. in “values.yaml”, under “persistence”, set “enabled” to “true” to enable the PVC (you can set a size as well).

persistence:
  type: pvc
  enabled: true
  # storageClassName: default
  accessModes:
    - ReadWriteOnce
  size: 3Gi

                    c. create a PV for the PVC.
                    run the following to create a PV with a specific claimRef, so it binds to the PVC the chart creates.

                    NOTE: the volume folder has to be prepared upfront
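Since the PV below uses a hostPath, the folder must exist on the node that will run the pod. Something along these lines (the path here is just an example, use your own; 472 is the default UID/GID the Grafana container runs as):

```shell
# run this on the node that will host the volume
sudo mkdir -p /data/grafana           # example path - substitute your own
sudo chown -R 472:472 /data/grafana   # 472 = default Grafana container user
```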

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: grafana
    namespace: grafana-ns
  hostPath:
    path: <path/to/folder>
    type: ''
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
EOF
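You can confirm the PV was created; it starts out Available, and after the chart creates the “grafana” PVC in grafana-ns it should flip to Bound:

```shell
# check the PV status, and the PVC once the chart is installed
kubectl get pv grafana-pv
kubectl get pvc -n grafana-ns
```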

next, deploy the chart.

helm upgrade --install grafana helm-charts/charts/grafana -n grafana-ns
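To wait for the deployment to actually come up (release name grafana, as installed above):

```shell
# block until the grafana deployment finishes rolling out, then list the pods
kubectl rollout status deployment/grafana -n grafana-ns --timeout=120s
kubectl get pods -n grafana-ns
```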

then execute “kubectl get all -n grafana-ns” and you will see the external IP that the Grafana service got from the load balancer.
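If you only want the external IP, a jsonpath query saves you from eyeballing the full output (assuming the load balancer populates .status.loadBalancer.ingress with an IP):

```shell
# print just the external IP assigned by the load balancer
kubectl get svc grafana -n grafana-ns \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' ; echo
```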

at this stage, if all went well, you will be able to log in to the Grafana UI, but first you have to get the admin password.
execute this to print it.

kubectl get secret --namespace grafana-ns grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
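The same secret also holds the admin username, and with both in hand you can sanity-check the instance over HTTP before opening the UI. Grafana exposes an unauthenticated /api/health endpoint; GRAFANA_IP below is a placeholder for whatever IP the load balancer assigned:

```shell
# pull both credentials from the chart's secret
ADMIN_USER=$(kubectl get secret -n grafana-ns grafana -o jsonpath="{.data.admin-user}" | base64 --decode)
ADMIN_PASS=$(kubectl get secret -n grafana-ns grafana -o jsonpath="{.data.admin-password}" | base64 --decode)

# health probe - should report the database as "ok"
curl -s http://GRAFANA_IP/api/health
```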

and then log in with the default username “admin” and the password you just printed.