KodeKloud Kubernetes Security CKS Lab Challenge 2 | Kubesec scan tool | Ensure that the pods are immutable | This pod should not be accessible using the 'kubectl exec' command


Question: A number of applications have been deployed in the dev, staging, and prod namespaces. There are a few security issues with these applications.

Inspect the issues in detail by clicking on the icons of the interactive architecture diagram on the right and complete the tasks to secure the applications. Once done, click on the Check button to validate your work.


Run as non-root (instead, use the correct application user)

Avoid exposing unnecessary ports

Avoid copying the 'Dockerfile' and other unnecessary files and directories into the image. Move the required files and directories (app.py, requirements.txt and the templates directory) to a subdirectory called 'app' under 'webapp' and update the COPY instruction in the 'Dockerfile' accordingly.

Once the security issues are fixed, rebuild this image locally with the tag 'kodekloud/webapp-color:stable'


Use a network policy called 'prod-netpol' that allows traffic only within the 'prod' namespace. All traffic from other namespaces should be denied.


The deployment has a secret hardcoded. Instead, create a secret called 'prod-db' for all the hardcoded values and consume the secret values as environment variables within the deployment.


Ensure that the pod 'staging-webapp' is immutable:

This pod can be accessed using the 'kubectl exec' command. We want to make sure that this does not happen. Use a startupProbe to remove all shells (sh and ash) before the container starts up. Use 'initialDelaySeconds' and 'periodSeconds' of '5'. Hint: for this to work, you would have to run the container as root!

Image used: 'kodekloud/webapp-color:stable'

Redeploy the pod as per the above recommendations and make sure that the application is up.


Ensure that the pod 'dev-webapp' is immutable:

This pod can be accessed using the 'kubectl exec' command. We want to make sure that this does not happen. Use a startupProbe to remove all shells before the container starts up. Use 'initialDelaySeconds' and 'periodSeconds' of '5'. Hint: for this to work, you would have to run the container as root!

Image used: 'kodekloud/webapp-color:stable'

Redeploy the pod as per the above recommendations and make sure that the application is up.


Before you proceed, make sure to fix all issues specified in the 'Dockerfile(webapp)' section of the architecture diagram. Using the 'kubesec' tool, which is already installed on the node, identify and fix the following issues:


Fix issues with the '/root/dev-webapp.yaml' file which was used to deploy the 'dev-webapp' pod in the 'dev' namespace.

Redeploy the 'dev-webapp' pod once issues are fixed with the image 'kodekloud/webapp-color:stable'

Fix issues with the '/root/staging-webapp.yaml' file which was used to deploy the 'staging-webapp' pod in the 'staging' namespace.

Redeploy the 'staging-webapp' pod once issues are fixed with the image 'kodekloud/webapp-color:stable'



Solution:  

1. Check the existing Docker images

root@controlplane ~   docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE

busybox                              latest              66ba00ad3de8        15 months ago       4.87MB

nginx                                latest              1403e55ab369        15 months ago       142MB

nginx                                alpine              1e415454686a        16 months ago       40.7MB

k8s.gcr.io/kube-apiserver            v1.23.0             e6bf5ddd4098        2 years ago         135MB

k8s.gcr.io/kube-scheduler            v1.23.0             56c5af1d00b5        2 years ago         53.5MB

k8s.gcr.io/kube-proxy                v1.23.0             e03484a90585        2 years ago         112MB

k8s.gcr.io/kube-controller-manager   v1.23.0             37c6aeb3663b        2 years ago         125MB

k8s.gcr.io/etcd                      3.5.1-0             25f8c7f3da61        2 years ago         293MB

k8s.gcr.io/coredns/coredns           v1.8.6              a4ca41631cc7        2 years ago         46.8MB

k8s.gcr.io/pause                     3.6                 6270bb605e12        2 years ago         683kB

weaveworks/weave-npc                 2.8.1               7f92d556d4ff        3 years ago         39.3MB

weaveworks/weave-kube                2.8.1               df29c0a4002c        3 years ago         89MB

quay.io/coreos/flannel               v0.13.1-rc1         f03a23d55e57        3 years ago         64.6MB

quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        4 years ago         52.8MB

kodekloud/fluent-ui-running          latest              bd30270a8b9a        5 years ago         969MB

kodekloud/webapp-color               latest              32a1ce4c22f2        5 years ago         84.8MB

 root@controlplane ~  


 2. Enter the webapp folder and modify the Dockerfile

root@controlplane ~   ls

dev-webapp.yaml  staging-webapp.yaml  webapp

 root@controlplane ~   cd webapp/

 root@controlplane ~/webapp via 🐍 v3.6.9   ls

app.py  Dockerfile  requirements.txt  templates

 root@controlplane ~/webapp via 🐍 v3.6.9


3. Make the changes as per the task:
  • Run as non-root (instead, use the correct application user)
  • There is no need for SSH access, so avoid exposing unnecessary ports

root@controlplane ~/webapp via 🐍 v3.6.9   vi  Dockerfile

 

root@controlplane ~/webapp via 🐍 v3.6.9   cat Dockerfile

FROM python:3.6-alpine

 

## Install Flask

RUN pip install flask

 

## Copy All files to /opt

COPY ./app/ /opt/

 

## Flask app to be exposed on port 8080

EXPOSE 8080

 

## Flask app to be run as 'worker'

RUN adduser -D worker

 

WORKDIR /opt

 

USER worker

 

ENTRYPOINT ["python", "app.py"]

 root@controlplane ~/webapp via 🐍 v3.6.9  


4. Move app.py, requirements.txt and the templates directory to the 'app' subdirectory

root@controlplane ~/webapp via 🐍 v3.6.9   mkdir app

 root@controlplane ~/webapp via 🐍 v3.6.9   mv app.py requirements.txt templates/ app/

root@controlplane ~   cd webapp/

 root@controlplane ~/webapp   ls

app  Dockerfile

 root@controlplane ~/webapp   ls app

app.py  requirements.txt  templates

 root@controlplane ~/webapp  


5. Verify the tag of the existing image, then build the new Docker image

root@controlplane ~/webapp   docker images |grep color

kodekloud/webapp-color               latest              32a1ce4c22f2        5 years ago         84.8MB

 root@controlplane ~/webapp   docker build -t kodekloud/webapp-color:stable .

Sending build context to Docker daemon  8.192kB

Step 1/8 : FROM python:3.6-alpine

3.6-alpine: Pulling from library/python

59bf1c3509f3: Pull complete

8786870f2876: Pull complete

acb0e804800e: Pull complete

52bedcb3e853: Pull complete

b064415ed3d7: Pull complete

Digest: sha256:579978dec4602646fe1262f02b96371779bfb0294e92c91392707fa999c0c989

Status: Downloaded newer image for python:3.6-alpine

 ---> 3a9e80fa4606

Step 2/8 : RUN pip install flask

 ---> Running in a0e1a61095b0

Collecting flask

  Downloading Flask-2.0.3-py3-none-any.whl (95 kB)

Collecting Werkzeug>=2.0

  Downloading Werkzeug-2.0.3-py3-none-any.whl (289 kB)

Collecting click>=7.1.2

  Downloading click-8.0.4-py3-none-any.whl (97 kB)

Collecting Jinja2>=3.0

  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)

Collecting itsdangerous>=2.0

  Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)

Collecting importlib-metadata

  Downloading importlib_metadata-4.8.3-py3-none-any.whl (17 kB)

Collecting MarkupSafe>=2.0

  Downloading MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl (29 kB)

Collecting dataclasses

  Downloading dataclasses-0.8-py3-none-any.whl (19 kB)

Collecting zipp>=0.5

  Downloading zipp-3.6.0-py3-none-any.whl (5.3 kB)

Collecting typing-extensions>=3.6.4

  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)

Installing collected packages: zipp, typing-extensions, MarkupSafe, importlib-metadata, dataclasses, Werkzeug, Jinja2, itsdangerous, click, flask

Successfully installed Jinja2-3.0.3 MarkupSafe-2.0.1 Werkzeug-2.0.3 click-8.0.4 dataclasses-0.8 flask-2.0.3 importlib-metadata-4.8.3 itsdangerous-2.0.1 typing-extensions-4.1.1 zipp-3.6.0

WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.

You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.

Removing intermediate container a0e1a61095b0

 ---> b18cd5a1868b

Step 3/8 : COPY ./app/ /opt/

 ---> e42e046a3eea

Step 4/8 : EXPOSE 8080

 ---> Running in c6d18a019d2f

Removing intermediate container c6d18a019d2f

 ---> 5429cd32108f

Step 5/8 : RUN adduser -D worker

 ---> Running in d88e549b7051

Removing intermediate container d88e549b7051

 ---> 4225f7ebbf7a

Step 6/8 : WORKDIR /opt

 ---> Running in 2af9820b1da9

Removing intermediate container 2af9820b1da9

 ---> 713adc951488

Step 7/8 : USER worker

 ---> Running in 8584a60aacfc

Removing intermediate container 8584a60aacfc

 ---> 422661794454

Step 8/8 : ENTRYPOINT ["python", "app.py"]

 ---> Running in 4e4efbb2d373

Removing intermediate container 4e4efbb2d373

 ---> bd03430468b3

Successfully built bd03430468b3

Successfully tagged kodekloud/webapp-color:stable

 root@controlplane ~/webapp   docker images |grep color

kodekloud/webapp-color               stable              bd03430468b3        25 seconds ago      51.9MB

kodekloud/webapp-color               latest              32a1ce4c22f2        5 years ago         84.8MB

 root@controlplane ~/webapp  


6. Using the 'kubesec' tool, which is already installed on the node, scan /root/dev-webapp.yaml and /root/staging-webapp.yaml

root@controlplane ~/webapp    kubesec scan /root/dev-webapp.yaml

[

  {

    "object": "Pod/dev-webapp.dev",

    "valid": true,

    "fileName": "/root/dev-webapp.yaml",

    "message": "Failed with a score of -34 points",

    "score": -34,

    "scoring": {

      "critical": [

        {

          "id": "CapSysAdmin",

          "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",

          "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided",

          "points": -30

        },

        {

          "id": "AllowPrivilegeEscalation",

          "selector": "containers[] .securityContext .allowPrivilegeEscalation == true",

          "reason": "",

          "points": -7

        }

      ],

      "passed": [

        {

          "id": "ServiceAccountName",

          "selector": ".spec .serviceAccountName",

          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",

          "points": 3

        }

      ],

      "advise": [

        {

          "id": "ApparmorAny",

          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",

          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",

          "points": 3

        },

        {

          "id": "SeccompAny",

          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",

          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats",

          "points": 1

        },

        {

          "id": "LimitsCPU",

          "selector": "containers[] .resources .limits .cpu",

          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion",

          "points": 1

        },

        {

          "id": "RequestsMemory",

          "selector": "containers[] .resources .limits .memory",

          "reason": "Enforcing memory limits prevents DOS via resource exhaustion",

          "points": 1

        },

        {

          "id": "RequestsCPU",

          "selector": "containers[] .resources .requests .cpu",

          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",

          "points": 1

        },

        {

          "id": "RequestsMemory",

          "selector": "containers[] .resources .requests .memory",

          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",

          "points": 1

        },

        {

          "id": "CapDropAny",

          "selector": "containers[] .securityContext .capabilities .drop",

          "reason": "Reducing kernel capabilities available to a container limits its attack surface",

          "points": 1

        },

        {

          "id": "CapDropAll",

          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",

          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface",

          "points": 1

        },

        {

          "id": "ReadOnlyRootFilesystem",

          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",

          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",

          "points": 1

        },

        {

          "id": "RunAsNonRoot",

          "selector": "containers[] .securityContext .runAsNonRoot == true",

          "reason": "Force the running image to run as a non-root user to ensure least privilege",

          "points": 1

        },

        {

          "id": "RunAsUser",

          "selector": "containers[] .securityContext .runAsUser -gt 10000",

          "reason": "Run as a high-UID user to avoid conflicts with the host's user table",

          "points": 1

        }

      ]

    }

  }

]

 root@controlplane ~/webapp  
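With a report this long, it helps to pull out just the critical issue IDs. Assuming 'jq' is available on the node (it is not mentioned in the lab, so install it if needed), you could pipe the scan output straight into a jq filter. Below, the filter is demonstrated against a trimmed sample of the JSON above; on the node you would run it against the real 'kubesec scan' output instead:

```shell
# Trimmed sample of the kubesec JSON, saved to a file for illustration
cat <<'EOF' > /tmp/scan.json
[{"scoring":{"critical":[{"id":"CapSysAdmin"},{"id":"AllowPrivilegeEscalation"}]}}]
EOF

# List only the critical issue IDs; on the node you would run:
#   kubesec scan /root/dev-webapp.yaml | jq -r '.[].scoring.critical[].id'
jq -r '.[].scoring.critical[].id' /tmp/scan.json
# CapSysAdmin
# AllowPrivilegeEscalation
```

These two IDs are exactly the issues we fix in the next step.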


7. Fix the critical issues in the /root/dev-webapp.yaml file by editing the YAML lines flagged by the scan. Follow similar steps for the /root/staging-webapp.yaml file.

root@controlplane ~/webapp   vi /root/dev-webapp.yaml

 root@controlplane ~/webapp   cat /root/dev-webapp.yaml

apiVersion: v1

kind: Pod

metadata:

  labels:

    name: dev-webapp

  name: dev-webapp

  namespace: dev

spec:

  containers:

  - env:

    - name: APP_COLOR

      value: darkblue

    startupProbe: 

      exec:

        command:

        - /bin/rm

        - /bin/sh

        - /bin/ash

      initialDelaySeconds: 5

      periodSeconds: 5   

    image: kodekloud/webapp-color:stable

    imagePullPolicy: Never

    name: webapp-color

    resources: {}

    securityContext:

      runAsUser: 0

      allowPrivilegeEscalation: false

      capabilities:

        add:

        - NET_ADMIN

    terminationMessagePath: /dev/termination-log

    terminationMessagePolicy: File

    volumeMounts:

    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount

      name: kube-api-access-z4lvb

      readOnly: true

  dnsPolicy: ClusterFirst

  enableServiceLinks: true

  nodeName: controlplane

  preemptionPolicy: PreemptLowerPriority

  priority: 0

  restartPolicy: Always

  schedulerName: default-scheduler

  securityContext: {}

  serviceAccount: default

  serviceAccountName: default

  terminationGracePeriodSeconds: 30

  tolerations:

  - effect: NoExecute

    key: node.kubernetes.io/not-ready

    operator: Exists

    tolerationSeconds: 300

  - effect: NoExecute

    key: node.kubernetes.io/unreachable

    operator: Exists

    tolerationSeconds: 300

  volumes:

  - name: kube-api-access-z4lvb

    projected:

      defaultMode: 420

      sources:

      - serviceAccountToken:

          expirationSeconds: 3607

          path: token

      - configMap:

          items:

          - key: ca.crt

            path: ca.crt

          name: kube-root-ca.crt

      - downwardAPI:

          items:

          - fieldRef:

              apiVersion: v1

              fieldPath: metadata.namespace

            path: namespace

 root@controlplane ~/webapp  


I have created YAML files with all the parameters. Kindly clone the repo, or you can copy from GitLab:

git clone https://gitlab.com/nb-tech-support/devops.git

(Refer to the video below for more clarity.)

8. Redeploy both the dev-webapp and staging-webapp pods by force

root@controlplane ~/webapp   kubectl replace --force -f /root/dev-webapp.yaml -n dev

pod "dev-webapp" deleted

pod/dev-webapp replaced

 root@controlplane ~/webapp   kubectl get pods -n dev

NAME         READY   STATUS    RESTARTS   AGE

dev-webapp   0/1     Running   0          9s

 root@controlplane ~/webapp

root@controlplane ~/webapp   kubectl replace --force -f /root/staging-webapp.yaml -n staging

pod "staging-webapp" deleted

pod/staging-webapp replaced

 root@controlplane ~/webapp   kubectl get pods -n staging

NAME             READY   STATUS    RESTARTS   AGE

staging-webapp   1/1     Running   0          11s

 root@controlplane ~/webapp
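To confirm that the pods are now immutable, try to exec into one of them. Because the startupProbe deleted /bin/sh and /bin/ash, the command should fail (this check is illustrative and not part of the lab transcript):

```shell
# Attempting to get a shell in the hardened pod should now fail,
# since the shells were removed by the startupProbe
kubectl exec -it dev-webapp -n dev -- sh
```

An "executable file not found" style error here means the immutability requirement is met.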

9. Only the default secret exists so far. Now, let's create a new secret with the given name 'prod-db' under the 'prod' namespace.

root@controlplane ~/webapp   kubectl get secrets -n prod

NAME                  TYPE                                  DATA   AGE

default-token-l7q2z   kubernetes.io/service-account-token   3      44m

 root@controlplane ~/webapp  

 root@controlplane ~/webapp   kubectl -n prod create secret generic prod-db --from-literal DB_Host=prod-db --from-literal DB_User=root --from-literal DB_Password=paswrd

secret/prod-db created

 root@controlplane ~/webapp   kubectl get secrets -n prod

NAME                  TYPE                                  DATA   AGE

default-token-l7q2z   kubernetes.io/service-account-token   3      49m

prod-db               Opaque                                3      44s

root@controlplane ~/webapp  


10. Edit the 'prod-web' deployment and replace the hardcoded environment values with references to the created secret.

root@controlplane ~/webapp   kubectl get all -n prod

NAME                           READY   STATUS    RESTARTS   AGE

pod/prod-db                    1/1     Running   0          44m

pod/prod-web-8747d8c84-6mgtz   1/1     Running   0          44m

 

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE

service/prod-db    ClusterIP   10.108.124.250   <none>        3306/TCP         44m

service/prod-web   NodePort    10.96.61.74      <none>        8080:30081/TCP   44m

 

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/prod-web   1/1     1            1           44m

 

NAME                                 DESIRED   CURRENT   READY   AGE

replicaset.apps/prod-web-8747d8c84   1         1         1       44m

 

root@controlplane ~/webapp  

 root@controlplane ~/webapp   kubectl edit deployment prod-web -n prod

deployment.apps/prod-web  edited

 root@controlplane ~/webapp  
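Inside the edit, the container's env section should switch from hardcoded literals to secretKeyRef lookups against 'prod-db'. A sketch of the result, assuming the deployment uses the same variable names as the secret keys created above:

```yaml
# env section of the prod-web container after the edit:
# each variable now pulls its value from the 'prod-db' secret
env:
- name: DB_Host
  valueFrom:
    secretKeyRef:
      name: prod-db
      key: DB_Host
- name: DB_User
  valueFrom:
    secretKeyRef:
      name: prod-db
      key: DB_User
- name: DB_Password
  valueFrom:
    secretKeyRef:
      name: prod-db
      key: DB_Password
```

Since the key names match the variable names, 'envFrom' with a single 'secretRef' to 'prod-db' would achieve the same thing more compactly.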


11. Create the network policy as per the question under the 'prod' namespace.

root@controlplane ~/webapp   vi netpol.yaml

 root@controlplane ~/webapp   cat netpol.yaml

apiVersion: networking.k8s.io/v1

kind: NetworkPolicy

metadata:

  name: prod-netpol

  namespace: prod

spec:

  podSelector: {}

  policyTypes:

  - Ingress

  ingress:

  - from:

      - namespaceSelector:

            matchLabels:

              kubernetes.io/metadata.name: prod

 root@controlplane ~/webapp   kubectl apply -f netpol.yaml

networkpolicy.networking.k8s.io/prod-netpol created

 root@controlplane ~/webapp   kubectl get all -n prod

NAME                           READY   STATUS    RESTARTS   AGE

pod/prod-db                    1/1     Running   0          59m

pod/prod-web-8747d8c84-6mgtz   1/1     Running   0          59m

 

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE

service/prod-db    ClusterIP   10.108.124.250   <none>        3306/TCP         59m

service/prod-web   NodePort    10.96.61.74      <none>        8080:30081/TCP   59m

 

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/prod-web   1/1     1            1           59m

 

NAME                                 DESIRED   CURRENT   READY   AGE

replicaset.apps/prod-web-8747d8c84   1         1         1       59m

 root@controlplane ~/webapp  
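If you want to sanity-check the policy, a quick connectivity test from another namespace should now be blocked, while a pod inside 'prod' can still reach the service. These commands are illustrative and not part of the lab transcript:

```shell
# From the default namespace: should time out, blocked by prod-netpol
kubectl run test-deny --rm -it --image=busybox -n default --restart=Never \
  -- wget -qO- -T 2 prod-web.prod.svc.cluster.local:8080

# From inside prod: should still succeed and print the app's HTML
kubectl run test-allow --rm -it --image=busybox -n prod --restart=Never \
  -- wget -qO- -T 2 prod-web.prod.svc.cluster.local:8080
```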



12. Click on Check & Confirm to complete the task successfully


Happy Learning!!!!


Apart from this, if you need more clarity, I have made a tutorial video on this. Please go through it and share your comments. Like and share the knowledge!












