Kodekloud Kubernetes Challenge 4 solution | Build a highly available Redis Cluster | redis-cluster-config | PersistentVolume


Question: Build a highly available Redis Cluster based on the given architecture diagram.

1. Create PersistentVolume - Name: redis01

    Access modes: ReadWriteOnce

   Size: 1Gi

   hostPath: /redis01, directory should be created on worker node


2. Create PersistentVolume - Name: redis02

     Access modes: ReadWriteOnce

    Size: 1Gi

    hostPath: /redis02, directory should be created on worker node

3. Create PersistentVolume - Name: redis03

     Access modes: ReadWriteOnce

    Size: 1Gi

    hostPath: /redis03, directory should be created on worker node

4. Create PersistentVolume - Name: redis04

    Access modes: ReadWriteOnce

   Size: 1Gi

   hostPath: /redis04, directory should be created on worker node

5. Create PersistentVolume - Name: redis05

    Access modes: ReadWriteOnce

   Size: 1Gi

   hostPath: /redis05, directory should be created on worker node

6. Create PersistentVolume - Name: redis06

    Access modes: ReadWriteOnce

   Size: 1Gi

   hostPath: /redis06, directory should be created on worker node

7. Create StatefulSet - Name: redis-cluster

    Replicas: 6

    Pods status: Running (All 6 replicas)

    Image: redis:5.0.1-alpine, Label = app: redis-cluster

    container name: redis, command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]

    Env: name: 'POD_IP', valueFrom: 'fieldRef', fieldPath: 'status.podIP' (apiVersion: v1)

    Ports - name: 'client', containerPort: '6379'

    Ports - name: 'gossip', containerPort: '16379'

    Volume Mount - name: 'conf', mountPath: '/conf', readOnly:'false' (ConfigMap Mount)

    Volume Mount - name: 'data', mountPath: '/data', readOnly:'false' (volumeClaim)

    Volumes - name: 'conf', Type: 'ConfigMap', ConfigMap Name: 'redis-cluster-configmap', defaultMode: '0755'

    volumeClaimTemplates - name: 'data'

    volumeClaimTemplates - accessModes: 'ReadWriteOnce'

    volumeClaimTemplates - Storage Request: '1Gi'

8. ConfigMap: redis-cluster-configmap is already created. Inspect it…

9. Create service - Name: redis-cluster-service

   Ports - name: 'client', port: '6379', targetPort: '6379'

   Ports - name: 'gossip', port: '16379', targetPort: '16379'

10. Create the redis-cluster-config using the command below:

 Command: kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')




Solution: 

1. Check the nodes & pod status

root@controlplane ~   kubectl get nodes

NAME           STATUS   ROLES                  AGE   VERSION

controlplane   Ready    control-plane,master   30m   v1.23.0

node01         Ready    <none>                 29m   v1.23.0

root@controlplane ~  

root@controlplane ~   kubectl get pods -A

NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE

kube-system   coredns-64897985d-57t4r                1/1     Running   0             32m

kube-system   coredns-64897985d-x5mm4                1/1     Running   0             32m

kube-system   etcd-controlplane                      1/1     Running   0             32m

kube-system   kube-apiserver-controlplane            1/1     Running   0             32m

kube-system   kube-controller-manager-controlplane   1/1     Running   0             32m

kube-system   kube-proxy-srj2m                       1/1     Running   0             31m

kube-system   kube-proxy-wvrq7                       1/1     Running   0             32m

kube-system   kube-scheduler-controlplane            1/1     Running   0             32m

kube-system   weave-net-7q9x4                        2/2     Running   1 (31m ago)   32m

kube-system   weave-net-gn82m                        2/2     Running   0             31m

root@controlplane ~  

2. I have created YAML manifest files with all the required parameters. Kindly clone the repo, or copy the files from GitLab:

git clone https://gitlab.com/nb-tech-support/devops.git

(Refer to the video below for more clarity.)

3. The redis-cluster-configmap is already created. Inspect it:

root@controlplane ~   kubectl get configmap

NAME                      DATA   AGE

kube-root-ca.crt          1      26m

redis-cluster-configmap   2      2m10s

root@controlplane ~
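To see what the pods will actually mount at /conf, you can also dump the ConfigMap as YAML. With DATA showing 2, it presumably holds the redis.conf and update-node.sh files referenced by the container command in the StatefulSet:

root@controlplane ~   kubectl get configmap redis-cluster-configmap -o yaml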

4. Create the PersistentVolumes (redis01 to redis06) and validate their status

root@controlplane ~   kubectl get pv

No resources found

root@controlplane ~   kubectl apply -f devops/kubernetes-challenges/challenge-4/pv-cluster.yaml

persistentvolume/redis01 created

persistentvolume/redis02 created

persistentvolume/redis03 created

persistentvolume/redis04 created

persistentvolume/redis05 created

persistentvolume/redis06 created

root@controlplane ~   kubectl get pv

NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE

redis01   1Gi        RWO            Retain           Available           redis-storage            2s

redis02   1Gi        RWO            Retain           Available           redis-storage            2s

redis03   1Gi        RWO            Retain           Available           redis-storage            2s

redis04   1Gi        RWO            Retain           Available           redis-storage            2s

redis05   1Gi        RWO            Retain           Available           redis-storage            2s

redis06   1Gi        RWO            Retain           Available           redis-storage            2s

root@controlplane ~
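For reference, each PersistentVolume in pv-cluster.yaml should look broadly like the minimal sketch below. The storageClassName is inferred from the STORAGECLASS column in the output above; everything else maps directly to the task requirements:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis01
spec:
  storageClassName: redis-storage   # inferred from the kubectl get pv output above
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /redis01                  # this directory must exist on the worker node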

5. Create the redis-cluster StatefulSet

root@controlplane ~   kubectl apply -f devops/kubernetes-challenges/challenge-4/redis-statefulset.yaml

statefulset.apps/redis-cluster created

root@controlplane ~
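The repo's redis-statefulset.yaml should look broadly like the sketch below, assembled from the requirements in the question. The serviceName value is an assumption (it points at the service created in the next step); the storageClassName in the claim template is inferred from the kubectl get pvc output later on; the rest maps one-to-one to the listed spec:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster-service   # assumption: the governing service created in step 6
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        ports:
        - name: client
          containerPort: 6379
        - name: gossip
          containerPort: 16379
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster-configmap
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: redis-storage   # matches the PVs created in step 4
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi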

6. Create the redis-cluster-service

root@controlplane ~   kubectl apply -f devops/kubernetes-challenges/challenge-4/redis-cluster-service.yaml

service/redis-cluster-service created

root@controlplane ~ 
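A minimal sketch of redis-cluster-service.yaml, built from the port requirements in the question. The selector is an assumption (it matches the pod label used by the StatefulSet); the kubectl get all output below shows the service ends up as a plain ClusterIP service:

apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-service
spec:
  selector:
    app: redis-cluster      # assumption: selects the StatefulSet pods via their label
  ports:
  - name: client
    port: 6379
    targetPort: 6379
  - name: gossip
    port: 16379
    targetPort: 16379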


7. Make sure all pods are in Running status, the service is created, and the PVCs are Bound. (The snapshot below was taken while the StatefulSet was still rolling out; wait until all 6 replicas are Running and READY reports 6/6.)

root@controlplane ~   kubectl get all

NAME                  READY   STATUS    RESTARTS   AGE

pod/redis-cluster-0   1/1     Running   0          17s

pod/redis-cluster-1   1/1     Running   0          9s

pod/redis-cluster-2   1/1     Running   0          5s

pod/redis-cluster-3   1/1     Running   0          1s

 NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE

service/kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP              34m

service/redis-cluster-service   ClusterIP   10.98.37.120   <none>        6379/TCP,16379/TCP   17s

 NAME                             READY   AGE

statefulset.apps/redis-cluster   3/6     17s

 root@controlplane ~   kubectl get pvc

NAME                   STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS    AGE

data-redis-cluster-0   Bound    redis03   1Gi        RWO            redis-storage   44s

data-redis-cluster-1   Bound    redis01   1Gi        RWO            redis-storage   36s

data-redis-cluster-2   Bound    redis04   1Gi        RWO            redis-storage   32s

data-redis-cluster-3   Bound    redis05   1Gi        RWO            redis-storage   28s

data-redis-cluster-4   Bound    redis02   1Gi        RWO            redis-storage   24s

data-redis-cluster-5   Bound    redis06   1Gi        RWO            redis-storage   19s

 root@controlplane ~

8. Create the redis-cluster-config (initialize the cluster with redis-cli). The --cluster-replicas 1 flag tells redis-cli to use three of the six nodes as masters and attach one replica to each, and the embedded kubectl get pods ... -o jsonpath expression expands to a space-separated list of pod IP:6379 endpoints:

root@controlplane ~   kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')

>>> Performing hash slots allocation on 6 nodes...

Master[0] -> Slots 0 - 5460

Master[1] -> Slots 5461 - 10922

Master[2] -> Slots 10923 - 16383

Adding replica 10.50.192.4:6379 to 10.50.192.1:6379

Adding replica 10.50.192.5:6379 to 10.50.192.2:6379

Adding replica 10.50.192.6:6379 to 10.50.192.3:6379

M: 06a762e259b242d59a8566dca1b3cfb1aeb5bf11 10.50.192.1:6379

   slots:[0-5460] (5461 slots) master

M: ce8499ddf828ba81edca03ca7848d23d0e49d62b 10.50.192.2:6379

   slots:[5461-10922] (5462 slots) master

M: 98cf05dd3ed8c5e8c78979b4a86e25b22d3e04c1 10.50.192.3:6379

   slots:[10923-16383] (5461 slots) master

S: bbd81e8cb2231cf2b9a70596eed708b79cc9c196 10.50.192.4:6379

   replicates 06a762e259b242d59a8566dca1b3cfb1aeb5bf11

S: b6741d41a655ffcbe59773f52f92b116fb9a19e8 10.50.192.5:6379

   replicates ce8499ddf828ba81edca03ca7848d23d0e49d62b

S: 4152253191eb7e4d9e8fbeab92bc5fe6dccf0af2 10.50.192.6:6379

   replicates 98cf05dd3ed8c5e8c78979b4a86e25b22d3e04c1

Can I set the above configuration? (type 'yes' to accept): yes

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

>>> Sending CLUSTER MEET messages to join the cluster

Waiting for the cluster to join

....

>>> Performing Cluster Check (using node 10.50.192.1:6379)

M: 06a762e259b242d59a8566dca1b3cfb1aeb5bf11 10.50.192.1:6379

   slots:[0-5460] (5461 slots) master

   1 additional replica(s)

S: 4152253191eb7e4d9e8fbeab92bc5fe6dccf0af2 10.50.192.6:6379

   slots: (0 slots) slave

   replicates 98cf05dd3ed8c5e8c78979b4a86e25b22d3e04c1

M: ce8499ddf828ba81edca03ca7848d23d0e49d62b 10.50.192.2:6379

   slots:[5461-10922] (5462 slots) master

   1 additional replica(s)

S: bbd81e8cb2231cf2b9a70596eed708b79cc9c196 10.50.192.4:6379

   slots: (0 slots) slave

   replicates 06a762e259b242d59a8566dca1b3cfb1aeb5bf11

S: b6741d41a655ffcbe59773f52f92b116fb9a19e8 10.50.192.5:6379

   slots: (0 slots) slave

   replicates ce8499ddf828ba81edca03ca7848d23d0e49d62b

M: 98cf05dd3ed8c5e8c78979b4a86e25b22d3e04c1 10.50.192.3:6379

   slots:[10923-16383] (5461 slots) master

   1 additional replica(s)

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

 root@controlplane ~

9.  Validate redis-cluster status

root@controlplane ~   kubectl exec -it redis-cluster-0 -- redis-cli cluster info

cluster_state:ok

cluster_slots_assigned:16384

cluster_slots_ok:16384

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:6

cluster_size:3

cluster_current_epoch:6

cluster_my_epoch:1

cluster_stats_messages_ping_sent:15

cluster_stats_messages_pong_sent:17

cluster_stats_messages_sent:32

cluster_stats_messages_ping_received:12

cluster_stats_messages_pong_received:15

cluster_stats_messages_meet_received:5

cluster_stats_messages_received:32

 root@controlplane ~
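As an optional extra check, redis-cli can also print the full node topology; it should list 3 masters and 3 slaves matching the slot allocation performed earlier:

root@controlplane ~   kubectl exec -it redis-cluster-0 -- redis-cli cluster nodes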

10. Click on Check & Confirm to complete the task successfully


Happy Learning!!!!


Apart from this, if you need more clarity, I have made a tutorial video on this topic. Please go through it and share your comments. Like and share the knowledge!






