c4i-web

Configure Pod Anti-affinity and Pod Disruption Budget in Kubernetes deployment for High availability


High availability (HA) is the ability of an application to remain continuously operational for a desirably long period of time. Configuring Pod Anti-affinity and a Pod Disruption Budget together keeps stateful or stateless application pods highly available in any of the below scenarios:

1) A node is unavailable or under maintenance
2) A cluster administrator deletes a Kubernetes node by mistake
3) A cloud provider or hypervisor failure makes a Kubernetes node disappear
4) A cluster administrator or user deletes your application pods by mistake

In this blog, we are going to configure Pod Anti-affinity and a Pod Disruption Budget for a Kubernetes deployment.

Prerequisites:

1) The kubectl CLI should be installed on your local machine
2) You should be connected to the Kubernetes cluster
3) ConfigMaps referenced by the deployment should be created before deploying
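A quick sketch of commands to verify these prerequisites (assuming a standard kubectl setup and that the ConfigMaps live in your current namespace):

```shell
# 1) Check that the kubectl CLI is installed locally
kubectl version --client

# 2) Confirm which cluster the current context points to
kubectl config current-context

# 3) List ConfigMaps in the target namespace to confirm they exist before deploying
kubectl get configmaps
```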

Pod Anti-affinity:

Pod Anti-affinity prevents pods of the same type from being scheduled on the same node: each node runs at most one pod of that type, and any other pod of the same type is placed on a different node. The scheduler will therefore never co-locate pods of the same type on the same node.

For example, if all three pods of a deployment (replicas=3) are on one node and that node crashes or becomes unavailable, the whole application is impacted. With Pod Anti-affinity configured, a single node failure will not take down the whole application.

1) Add the below snippet under template.spec in the application deployment YAML:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - myapp
      topologyKey: "kubernetes.io/hostname"

Note: The above Pod Anti-affinity snippet selects pods whose label key app has the value myapp, and uses the node label kubernetes.io/hostname as the topology key, so each node counts as its own topology domain.
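requiredDuringSchedulingIgnoredDuringExecution is a hard rule: if no eligible node is left (for example, when replicas exceed the node count), the extra pods stay Pending. A softer variant, sketched below, uses preferredDuringSchedulingIgnoredDuringExecution so the scheduler spreads pods across nodes when it can but still schedules them when it cannot:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100            # higher weight means stronger preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: "kubernetes.io/hostname"
```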

2) Sample deployment yaml with pod anti-affinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    apptype: java
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      apptype: java
  template:
    metadata:
      labels:
        apptype: java
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: [container-name]
        image: [image from any container registry]
        command: ["/bin/sh"]
        args: ["-c", "java -Xmx2g -Xss256k -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myAppgc.log -jar [application.jar] --spring.config.location=[absolute path of the config map eg. /config/application/[application.properties]]"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: appconfig
          mountPath: "/config/application"
          readOnly: true
      volumes:
      - name: appconfig
        configMap:
          name: [configmap-name]

3) Save the above deployment YAML and deploy it to the Kubernetes cluster by executing the below command:

kubectl apply -f [deployment.yaml]
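Once the rollout completes, you can check that anti-affinity spread the replicas across nodes (a sketch; the deployment name myapp matches the sample above, node names will differ in your cluster):

```shell
# Wait for all three replicas to become ready
kubectl rollout status deployment/myapp

# The NODE column should show a different node for each replica
kubectl get pods -l app=myapp -o wide
```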

Pod Disruption Budget:

A Pod Disruption Budget defines how many pods of a stateful or stateless application must remain available (or may be unavailable) even during a disruption. You can specify the minAvailable/maxUnavailable value either as an integer or as a percentage.

1) You can configure Pod disruption budget in two ways:

minAvailable – the specified number/percentage of pods should always be available

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp

maxUnavailable – the specified number/percentage of pods may be unavailable during a disruption

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb2-myapp
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp

Note: minAvailable and maxUnavailable cannot both be set in a single YAML. You need to create two Pod Disruption Budgets if you are configuring both minAvailable and maxUnavailable for your application deployment.
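As mentioned above, the budget can also be expressed as a percentage of the replicas; a hypothetical example (the name pdb-myapp-percent is illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-myapp-percent
spec:
  minAvailable: 50%   # with replicas=3, at least 2 pods must stay available (rounded up)
  selector:
    matchLabels:
      app: myapp
```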

2) We have already deployed our application with the label app: myapp. To configure the Pod Disruption Budgets, save the above two PodDisruptionBudget YAMLs and deploy them to the Kubernetes cluster by executing the below commands:

kubectl apply -f pdbmin.yaml
kubectl apply -f pdbmax.yaml

Now the Pod Disruption Budgets are applied for the label app: myapp.

3) Verify the pod disruption budgets by executing the below command:

kubectl get poddisruptionbudgets
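To see the budget in action, you can drain a node (a sketch, assuming a node named node-1 in your cluster; the drain evicts pods one by one and blocks whenever an eviction would violate the budget):

```shell
# Evictions that would drop myapp below minAvailable are refused and retried,
# so the drain proceeds only as replacement pods become Ready on other nodes
kubectl drain node-1 --ignore-daemonsets

# The ALLOWED DISRUPTIONS column shows how many pods may currently be evicted
kubectl get poddisruptionbudgets
```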