
Create TCP/UDP Internal Load Balancer (ILB) on Google Cloud Platform (GCP) using managed and unmanaged instance groups

Creating an internal load balancer (ILB) in Google Cloud Platform is straightforward, and it is used to distribute load across VM instances. In this guide we will see how to create a TCP/UDP internal load balancer using managed and unmanaged instance groups.

Prerequisites:

1) The gcloud SDK must be installed on your local machine

2) You must be logged in to your designated GCP project

3) The user or service account must have the IAM roles required to create an ILB

Create TCP/UDP ILB using Managed Instance group

A Google managed instance group (MIG) is a group of identical VM instances created from an instance template. When multi-zone is enabled while creating the group, the VM instances can be spread across different zones within the same region. As the name "managed" suggests, this group provides important features such as autoscaling, auto-healing, and rolling updates.

To create the ILB, follow the steps below in order.

1) Create a managed instance group using the command line, the GUI, Google Deployment Manager, or the REST API; a command-line sketch is shown below
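
As a minimal sketch, the group can be created from an instance template on the command line. The template settings (machine type, image family and project, group size) are placeholders and should be adjusted to your workload:

gcloud compute instance-templates create <template-name> --machine-type=<machine-type> --image-family=<image-family> --image-project=<image-project>

gcloud compute instance-groups managed create <instance-group-name> --template=<template-name> --size=<number-of-instances> --region=<instance-group-region>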

2) Create a TCP health check using the command line below

gcloud compute health-checks create tcp <health-check-name> --description="Health check: TCP <port>" --check-interval=5s --timeout=5s --healthy-threshold=2 --unhealthy-threshold=2 --port=<port> --proxy-header=NONE --region=<health-check-region>

3) Create a backend service using the command line below

gcloud compute backend-services create <backend-service-name> --load-balancing-scheme internal --health-checks <health-check-name> --protocol tcp --region <backend-service-region>

4) Add the created managed instance group as a backend of the backend service

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name> --instance-group-region=<instance-group-region> --region <backend-service-region>

5) Create a forwarding rule using the command line below

gcloud compute forwarding-rules create <forwarding-rule-name> --load-balancing-scheme internal --address <ILB ip address> --ports <port> --subnet <full path of subnet> --region <forwarding rule region> --backend-service <backend-service-name>

Note: The managed instance group, backend service, and forwarding rule must be in the same region
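
Optionally, once the instances have had time to pass their health checks, you can verify the backend status with the command below (same placeholders as above):

gcloud compute backend-services get-health <backend-service-name> --region <backend-service-region>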

Create TCP/UDP ILB using Unmanaged Instance group

An unmanaged instance group is a collection of user-created/user-managed VM instances that reside in the same zone, VPC network, and subnet. Unmanaged instances do not share an instance template, so you must manually add the VM instances to the unmanaged instance group.

To create the ILB, follow the steps below in order.

1) Assuming the user-created/user-managed instances are up and running, create the unmanaged instance groups using the command lines below

gcloud compute instance-groups unmanaged create <instance-group-name-1> --zone=<zone1>

gcloud compute instance-groups unmanaged create <instance-group-name-2> --zone=<zone2>

2) Add the user-created/user-managed instances to the created instance groups

gcloud compute instance-groups unmanaged add-instances <instance-group-name-1> --instances <instance-name-1>,<instance-name-2> --zone=<zone1>
gcloud compute instance-groups unmanaged add-instances <instance-group-name-2> --instances <instance-name-3>,<instance-name-4> --zone=<zone2>

Note: An unmanaged instance group can only contain instances from a single zone

3) Verify that the instances are grouped under the unmanaged instance groups using the command lines below

gcloud compute instance-groups unmanaged list-instances <instance-group-name-1> --zone=<zone1>

gcloud compute instance-groups unmanaged list-instances <instance-group-name-2> --zone=<zone2>

4) Create a TCP health check using the command line below

gcloud compute health-checks create tcp <health-check-name> --description="Health check: TCP <port>" --check-interval=5s --timeout=5s --healthy-threshold=2 --unhealthy-threshold=2 --port=<port> --proxy-header=NONE --region=<health-check-region>

5) Create a backend service using the command line below

gcloud compute backend-services create <backend-service-name> --load-balancing-scheme internal --health-checks <health-check-name> --protocol tcp --region <backend-service-region>

6) Add the created unmanaged instance groups as backends of the backend service

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name-1> --instance-group-zone <instance-group-zone-1> --region <backend-service-region>

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name-2> --instance-group-zone <instance-group-zone-2> --region <backend-service-region>

7) Create a forwarding rule using the command line below

gcloud compute forwarding-rules create <forwarding-rule-name> --load-balancing-scheme internal --address <ILB ip address> --ports <port> --subnet <full path of subnet> --region <forwarding rule region> --backend-service <backend-service-name>

Note: The unmanaged instance groups, backend service, and forwarding rule must be in the same region
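
Optionally, you can confirm the forwarding rule and the internal IP address assigned to the ILB with the command below (same placeholders as above):

gcloud compute forwarding-rules describe <forwarding-rule-name> --region <forwarding rule region>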


HTTP Liveness and Readiness Probe in Kubernetes Deployment

Liveness and readiness probes are needed in Kubernetes to recover your deployed application from deadlocks and to avoid sending requests to a pod that is still initializing. When probes are configured in a deployment, every pod is evaluated against the probe conditions.

Liveness and readiness probes also apply to new pods created by the Horizontal Pod Autoscaler (HPA).

In this post, we are going to learn how to configure probes in a Kubernetes deployment:

Liveness probe: restarts the container when it is deadlocked or the application is no longer running
Readiness probe: decides when the pod is ready to join the Service and serve traffic

Prerequisites:

1) The kubectl CLI should be installed on your local machine
2) You should be connected to the Kubernetes cluster
3) A /health API should be enabled in the application (a quick check is shown after this list)
4) ConfigMaps used by the deployment should be created before the deployment
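
Once a pod from the deployment is running, one way to spot-check the /health endpoint is to port-forward to the pod and call it locally from a second terminal (the pod name and port below are placeholders):

kubectl port-forward [pod-name] 8080:8080
curl http://localhost:8080/health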

Step 1:

Configure the liveness and readiness probes in the deployment YAML

Liveness probe:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

The liveness probe should be configured under the template.spec.containers section

Readiness probe:

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

The readiness probe configuration is similar to the liveness probe and should also be placed under the template.spec.containers section

Sample deployment YAML with probes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    apptype: java
spec:
  replicas: 3
  selector:
    matchLabels:
      apptype: java
      tier: web
  template:
    metadata:
      labels:
        apptype: java
        tier: web
    spec:
      containers:
      - name: [container name]
        image: [image from any container registry]
        command: ["/bin/sh", "-c"]
        args: ["java -Xmx2g -Xss256k -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myAppgc.log -jar [application.jar] --spring.config.location=[absolute path of the config maps mounted] "]
        ports: 
        - containerPort: 8080
        volumeMounts:
        - name: appconfig
          mountPath: "/config/application"
          readOnly: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
      volumes:
      - name: appconfig
        configMap:
          name: [configmap-name]

Note: initialDelaySeconds should be set higher than the application's startup time

Here are a few fields to consider while configuring probes:

  1. initialDelaySeconds: number of seconds after the container has started before the probe is initiated
  2. periodSeconds: how often (in seconds) the probe is performed
  3. timeoutSeconds: number of seconds after which the probe times out
  4. successThreshold: minimum consecutive successes for the probe to be considered successful again after a failure
  5. failureThreshold: number of consecutive failures after which the container is restarted (liveness probe) or marked unready (readiness probe)
  6. scheme: HTTP or HTTPS; defaults to HTTP
  7. path: path to access on the HTTP endpoint
  8. port: port to access on the container
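
If you want to see the full set of probe fields and their defaults, kubectl can print the API documentation for them (a general reference, not specific to this deployment):

kubectl explain deployment.spec.template.spec.containers.livenessProbe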

Step 2:

Save the above deployment YAML and deploy it to the Kubernetes cluster by executing the command below

kubectl apply -f [deployment.yaml]
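
Optionally, confirm the rollout and inspect the probe configuration and any probe failure events on a pod (the deployment name below matches the sample YAML above; the pod name is a placeholder):

kubectl rollout status deployment/myapp-deployment
kubectl describe pod [pod-name]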

Configure Pod Anti-affinity and Pod Disruption Budget in Kubernetes deployment for High availability

High availability (HA) is the ability to keep an application continuously operational for a desirably long period of time. Configuring pod anti-affinity and a pod disruption budget together keeps stateful or stateless application pods highly available during any of the scenarios below:

1) A node is unavailable or under maintenance
2) A cluster administrator deletes a Kubernetes node by mistake
3) A cloud provider or hypervisor failure makes a Kubernetes node disappear
4) A cluster administrator/user deletes your application pods by mistake

In this blog, we are going to configure pod anti-affinity and a pod disruption budget for a Kubernetes deployment.

Prerequisites:

1) The kubectl CLI should be installed on your local machine
2) You should be connected to the Kubernetes cluster
3) ConfigMaps used by the deployment should be created before the deployment

Pod Anti-affinity:

Pod anti-affinity prevents pods of the same type from being scheduled onto the same node, which means each node runs at most one such pod while the others land on different nodes. The scheduler will therefore never co-locate pods of the same type on the same node.

For example, if all three pods of a deployment (replicas=3) land on one node and that node crashes or becomes unavailable, the whole application is impacted. With pod anti-affinity configured, a single node failure will not take down the entire application.

1) Configure the snippet below under template.spec in the application deployment YAML

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - myapp
      topologyKey: "kubernetes.io/hostname"

Note: The pod anti-affinity snippet above uses a label selector to match pods whose app label has the value myapp, and uses the node hostname label (kubernetes.io/hostname) as the topology key

2) Sample deployment YAML with pod anti-affinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    apptype: java
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      apptype: java
  template:
    metadata:
      labels:
        apptype: java
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: [container-name]
        image: [image from any container registry]
        command: ["/bin/sh", "-c"]
        args: ["java -Xmx2g -Xss256k -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myAppgc.log -jar [application.jar] --spring.config.location=[absolute path of the config map eg. /config/application/[application.properties]] "]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: appconfig
          mountPath: "/config/application"
          readOnly: true
      volumes:
      - name: appconfig
        configMap:
          name: [configmap-name]

3) Save the above deployment YAML and deploy it to the Kubernetes cluster by executing the command below

kubectl apply -f [deployment.yaml]
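
To confirm that the scheduler placed each replica on a different node, you can list the pods with their node assignments (the label matches the sample deployment above):

kubectl get pods -l app=myapp -o wide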

Pod Disruption Budget:

A pod disruption budget defines how many pods of a stateful or stateless application must remain available (or how many may be unavailable) even during a disruption. You can define the minAvailable/maxUnavailable values either as an integer or as a percentage.

1) You can configure a pod disruption budget in two ways:

minAvailable – the specified number/percentage of pods must always be available

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp

maxUnavailable – the specified number/percentage of pods may be unavailable during a disruption

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb2-myapp
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp

Note: minAvailable and maxUnavailable cannot both be set in a single YAML. You need to create two pod disruption budgets if you are configuring both minAvailable and maxUnavailable for your application deployment

2) We have already deployed our application with the label app: myapp. To configure the pod disruption budgets, save the above two PodDisruptionBudget YAMLs and deploy them to the Kubernetes cluster by executing the commands below:

kubectl apply -f pdbmin.yaml
kubectl apply -f pdbmax.yaml

Now the pod disruption budgets are applied to pods with the label app: myapp

3) Verify the pod disruption budgets by executing the command below:

kubectl get poddisruptionbudgets
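
For more detail on a specific budget, such as how many disruptions are currently allowed, you can describe it (the name matches the sample above):

kubectl describe poddisruptionbudget pdb-myapp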