c4itechnologies

October, 2020

Technology

Create TCP/UDP Internal Load Balancer (ILB) on Google Cloud Platform (GCP) using managed and unmanaged instance groups

An Internal Load Balancer (ILB) in Google Cloud Platform distributes traffic across VM instances within your VPC, and it is straightforward to create. In this guide we will see how to create a TCP/UDP ILB using both managed and unmanaged instance groups.

Prerequisites:

1) The gcloud SDK must be installed on your local machine

2) You must be logged in to your designated GCP project

3) The user or service account must have the required IAM roles to create an ILB
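The login step in the prerequisites can be done with the gcloud CLI; the project ID below is a placeholder for your own:

```shell
# Authenticate your user account (opens a browser for OAuth)
gcloud auth login

# Point the SDK at your designated project
gcloud config set project <project-id>

# Confirm the active account and project
gcloud config list
```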

Create TCP/UDP ILB using Managed Instance group

Google's managed instance groups are groups of identical VM instances created from an instance template. These VM instances can be spread across different zones within the same region when the multi-zone (regional) option is enabled while creating the managed instance group (MIG). As the name suggests, a managed group provides important features such as autoscaling, auto-healing, and rolling updates.

To create the ILB, follow the steps below in order:

1) Create a managed instance group using the command line, the GUI (Cloud Console), Google Deployment Manager, or the REST API
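As a command-line sketch of step 1: the machine type and image flags below are assumptions for illustration; substitute your own values.

```shell
# Create an instance template (placeholder machine type and image)
gcloud compute instance-templates create <template-name> \
  --machine-type=e2-medium \
  --image-family=debian-11 --image-project=debian-cloud

# Create a regional (multi-zone) managed instance group from the template
gcloud compute instance-groups managed create <instance-group-name> \
  --template=<template-name> --size=2 --region=<instance-group-region>
```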

2) Create a TCP health check using the command line below:

gcloud compute health-checks create tcp <health-check-name> --description="Health check: TCP <port>" --check-interval=5s --timeout=5s --healthy-threshold=2 --unhealthy-threshold=2 --port=<port> --proxy-header=NONE --region=<health-check-region>

3) Create a backend service using the command line below:

gcloud compute backend-services create <backend-service-name> --load-balancing-scheme internal --health-checks <health-check-name> --protocol tcp --region <backend-service-region>

4) Add the managed instance group as a backend of the backend service:

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name> --instance-group-region=<instance-group-region> --region <backend-service-region>

5) Create a forwarding rule using the command line below:

gcloud compute forwarding-rules create <forwarding-rule-name> --load-balancing-scheme internal --address <ILB ip address> --ports <port> --subnet <full path of subnet> --region <forwarding rule region> --backend-service <backend-service-name>

Note: The managed instance group, backend service, and forwarding rule must be in the same region.

Create TCP/UDP ILB using Unmanaged Instance group

An unmanaged instance group is a collection of user-created/managed VM instances that reside in the same zone, VPC network, and subnet. Unmanaged instances do not share an instance template; you must manually add the VM instances to the unmanaged instance group.

To create the ILB, follow the steps below in order:

1) Assuming the user-created/managed instances are up and running, create the unmanaged instance groups using the commands below:

gcloud compute instance-groups unmanaged create <instance-group-name-1> --zone=<zone1>

gcloud compute instance-groups unmanaged create <instance-group-name-2> --zone=<zone2>

2) Add the user-created/managed instances to the instance groups:

gcloud compute instance-groups unmanaged add-instances <instance-group-name-1> --instances <instance-name-1>,<instance-name-2> --zone=<zone1>
gcloud compute instance-groups unmanaged add-instances <instance-group-name-2> --instances <instance-name-3>,<instance-name-4> --zone=<zone2>

Note: An unmanaged instance group can only contain instances from the same zone.

3) Verify that the instances are grouped under the unmanaged instance groups using the commands below:

gcloud compute instance-groups unmanaged list-instances <instance-group-name-1> --zone=<zone1>

gcloud compute instance-groups unmanaged list-instances <instance-group-name-2> --zone=<zone2>

4) Create a TCP health check using the command line below:

gcloud compute health-checks create tcp <health-check-name> --description="Health check: TCP <port>" --check-interval=5s --timeout=5s --healthy-threshold=2 --unhealthy-threshold=2 --port=<port> --proxy-header=NONE --region=<health-check-region>

5) Create a backend service using the command line below:

gcloud compute backend-services create <backend-service-name> --load-balancing-scheme internal --health-checks <health-check-name> --protocol tcp --region <backend-service-region>

6) Add the unmanaged instance groups as backends of the backend service:

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name-1> --instance-group-zone <instance-group-zone-1> --region <backend-service-region>

gcloud compute backend-services add-backend <backend-service-name> --instance-group <instance-group-name-2> --instance-group-zone <instance-group-zone-2> --region <backend-service-region>

7) Create a forwarding rule using the command line below:

gcloud compute forwarding-rules create <forwarding-rule-name> --load-balancing-scheme internal --address <ILB ip address> --ports <port> --subnet <full path of subnet> --region <forwarding rule region> --backend-service <backend-service-name>

Note: The unmanaged instance groups, backend service, and forwarding rule must be in the same region.

KARTHICK SHANMUGAMOORTHY
October 21, 2020
devops, gcp, google cloud
Technology

HTTP Liveness and Readiness Probe in Kubernetes Deployment

Liveness and readiness probes are required in Kubernetes to recover your application from deadlocks and to avoid dropped requests while a pod is initializing. When probes are configured in a deployment, every pod is evaluated against the probe conditions.

Liveness and readiness probes also apply to new pods created by the Horizontal Pod Autoscaler (HPA).

In this post we are going to learn how to configure probes in a Kubernetes deployment:

Liveness probe: restarts the container when it is deadlocked or the application is no longer running
Readiness probe: controls when the pod joins the Service to serve traffic

Prerequisites:

1) The kubectl CLI should be installed on your local machine
2) You must be connected to the Kubernetes cluster
3) A /health API should be enabled in the application
4) ConfigMaps referenced by the deployment should be deployed before the deployment itself

Step 1:

Configure the liveness and readiness probes in the deployment YAML.

Liveness probe:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

The liveness probe should be configured under the template.spec.containers section.

Readiness probe:

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

The readiness probe configuration is similar to the liveness probe and should also be configured under the template.spec.containers section.

Sample deployment YAML with probes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    apptype: java
spec:
  replicas: 3
  selector:
    matchLabels:
      apptype: java
      tier: web
  template:
    metadata:
      labels:
        apptype: java
        tier: web
    spec:
      containers:
      - name: [container name]
        image: [image from any container registry]
        command: ["/bin/sh"]
        args: ["java -Xmx2g -Xss256k -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myAppgc.log -jar [application.jar] --spring.config.location=[absolute path of the config maps mounted] "]
        ports: 
        - containerPort: 8080
        volumeMounts:
        - name: appconfig
          mountPath: "/config/application"
          readOnly: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
      volumes:
      - name: appconfig
        configMap:
          name: [configmap-name]

Note: initialDelaySeconds needs to be set higher than the application start-up time.

Here are a few fields to consider while configuring probes:

  1. initialDelaySeconds: number of seconds after the container starts before the probe is initiated
  2. periodSeconds: how often (in seconds) to perform the probe
  3. timeoutSeconds: number of seconds after which the probe times out
  4. successThreshold: minimum consecutive successes for the probe to be considered successful after a failure
  5. failureThreshold: number of consecutive failures after which the container is restarted (liveness probe) or marked unready (readiness probe)
  6. scheme: HTTP or HTTPS; defaults to HTTP
  7. path: path to access on the endpoint
  8. port: port to access on the container
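Putting these fields together, a liveness probe using all of them might look like the snippet below; the values are illustrative, not recommendations:

```yaml
livenessProbe:
  httpGet:
    scheme: HTTP           # HTTP or HTTPS; defaults to HTTP
    path: /health          # path to access on the endpoint
    port: 8080             # port to access on the container
  initialDelaySeconds: 10  # wait 10s after container start before probing
  periodSeconds: 5         # probe every 5 seconds
  timeoutSeconds: 2        # fail the probe if no response within 2s
  successThreshold: 1      # must be 1 for liveness probes
  failureThreshold: 3      # restart the container after 3 consecutive failures
```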

Step 2:

Save the deployment YAML above and deploy it to the Kubernetes cluster by executing the command below:

kubectl apply -f [deployment.yaml]
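After applying, you can watch the rollout and confirm the probes are behaving; the deployment and pod names below are placeholders:

```shell
# Wait for the rollout to complete; pods only become Ready
# once the readiness probe succeeds
kubectl rollout status deployment/<deployment-name>

# Inspect a pod's events to see probe failures, if any
kubectl describe pod <pod-name>
```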
KARTHICK SHANMUGAMOORTHY
October 21, 2020
Technology

Configure Pod Anti-affinity and Pod Disruption Budget in Kubernetes deployment for High availability

High availability (HA) is the ability to keep an application continuously operational for a desirably long period of time. Configuring pod anti-affinity and a Pod Disruption Budget together keeps stateful or stateless application pods highly available during any of the scenarios below:

1) A node is unavailable or under maintenance
2) A cluster administrator deletes a Kubernetes node by mistake
3) A cloud provider or hypervisor failure makes a Kubernetes node disappear
4) A cluster administrator/user deletes your application pods by mistake

In this blog, we are going to configure pod anti-affinity and a Pod Disruption Budget for a Kubernetes deployment.

Prerequisites:

1) The kubectl CLI should be installed on your local machine
2) You must be connected to the Kubernetes cluster
3) ConfigMaps referenced by the deployment should be deployed before the deployment itself

Pod Anti-affinity:

Pod anti-affinity prevents the scheduler from placing pods of the same type on a single node: each node runs at most one pod of the type, and the remaining pods of that type run on other nodes. The scheduler will therefore never co-locate pods of the same type on the same node.

For example, if all three pods of a deployment (replicas=3) are on one node and that node crashes or becomes unavailable, the whole application is impacted. With pod anti-affinity configured, a single node failure will not take down the whole application.

1) Add the snippet below under template.spec in the application deployment YAML:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - myapp
      topologyKey: "kubernetes.io/hostname"

Note: The pod anti-affinity snippet above uses a label selector to match pods whose app label has the value myapp, and uses the node hostname label as the topology key.

2) Sample deployment YAML with pod anti-affinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    apptype: java
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      apptype: java
  template:
    metadata:
      labels:
        apptype: java
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: [container-name]
        image: [image from any container registry]
        command: ["/bin/sh"]
        args: ["java -Xmx2g -Xss256k -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/my-heap-dump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myAppgc.log -jar [application.jar] --spring.config.location=[absolute path of the config map eg. /config/application/[application.properties]] "]
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: appconfig
          mountPath: "/config/application"
          readOnly: true
      volumes:
      - name: appconfig
        configMap:
          name: [configmap-name]

3) Save the deployment YAML above and deploy it to the Kubernetes cluster by executing the command below:

kubectl apply -f [deployment.yaml]

Pod Disruption Budget:

A Pod Disruption Budget defines how many pods of a stateful or stateless application must remain available, or may be unavailable, even during a disruption. You can define the minAvailable/maxUnavailable values either as an integer or as a percentage.

1) You can configure a Pod Disruption Budget in two ways:

minAvailable – the specified number/percentage of pods should always be available

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp

maxUnavailable – the specified number/percentage of pods may be unavailable during a disruption

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb2-myapp
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp

Note: minAvailable and maxUnavailable cannot both be set in a single YAML. You need to create two Pod Disruption Budgets if you are configuring both minAvailable and maxUnavailable for your application deployment.
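As mentioned, the values can also be percentages. A hypothetical percentage-based budget for the same label might look like this (the name pdb-percent-myapp is a placeholder):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-percent-myapp
spec:
  minAvailable: "50%"   # at least half of the matching pods must stay available
  selector:
    matchLabels:
      app: myapp
```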

2) We have already deployed our application with the label app: myapp. To configure the Pod Disruption Budgets, save the two PodDisruptionBudget YAMLs above and deploy them to the Kubernetes cluster by executing the commands below:

kubectl apply -f pdbmin.yaml
kubectl apply -f pdbmax.yaml

Now the Pod Disruption Budgets are applied to pods with the label app: myapp.

3) Verify the Pod Disruption Budgets by executing the command below:

kubectl get poddisruptionbudgets
KARTHICK SHANMUGAMOORTHY
October 21, 2020
Open Positions

SOFTWARE ENGINEER

(C4i Technologies, Inc. has an opening in Houston, TX)

Software Engineer: Design, develop, and modify software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design. Determine system performance standards. Develop and direct software system testing, validation procedures, programming, and documentation. Coordinate software system installation and monitor equipment functioning to ensure specifications are met. Analyze user requirements, procedures, and problems to automate and improve existing systems and review computer system capabilities.

Test, maintain, and monitor computer programs and systems, including coordinating the installation of computer programs and systems. Modify existing software to correct errors, allow it to adapt to new hardware, and improve its performance. Analyze user needs and software requirements to determine feasibility of design within time and cost constraints. Create technical specifications. Write systems to control the scheduling of jobs and to control the access allowed to users and remote systems.

Write operational documentation with technical authors. Consult clients and colleagues concerning the maintenance and performance of software systems with a view to writing and modifying current operating systems. Utilize Agile, Scrum Sprint, Hadoop, Teradata, Java, UI, Python, Shell script, Microsoft Azure, and AWS. Will work in unanticipated locations.

Requires Master’s in Computer Science, Engineering, or related and 1 year experience or Bachelor’s in Computer Science, Engineering, or related and 5 years progressive experience.

Send resume to C4i Technologies Inc, 12000 Westheimer Rd, Ste 108, Houston, TX 77077.

admin
October 19, 2020
Open Positions

SOFTWARE ARCHITECT

(C4i Technologies Inc has an opening in Houston, TX) Software Architect: Develop cloud architecture and implement enterprise stack on public cloud leveraging the cloud native features including Operating System (OS), multi-tenancy, virtualization, orchestration, elasticity, scalability, containerization, and serverless functions.

Act as a Subject Matter Expert for cloud end-to-end architecture, including AWS, Azure, Google Cloud Platform (GCP), networking, provisioning, and management. Develop solutions architecture and evaluate architectural alternatives for private, public, and hybrid cloud models, including IaaS, PaaS, and cloud services.

Develop a library of deployable and documented cloud design patterns based on the application portfolio as a basis for deploying services to the cloud. Develop a Point of View for demonstrating the concept in rapid development mode on AWS, Azure, and GCP. Mentor and guide the junior engineers in developing the minimum viable product (MVP) solutions. Collaborate with account teams, competency teams, and business partners to understand and analyze business requirements and processes and recommend solutions. Suggest technical solutions to meet business requirements efficiently with greater reusability and long term vision. Collaborate with other solution architects to leverage existing technologies. Translate the business requirements into technical requirements and develop the cloud architecture to meet the technical and business requirements. Drive scope definition, requirements analysis, functional and technical design, application build, product configuration, unit testing, and production deployment. Ensure delivered solutions meet technical and functional/non-functional requirements.

Provide technical expertise and ownership in the diagnosis and resolution of an issue, including the determination and provision of workaround solution or escalation to service owners. Ensure delivered solutions are realized in the time frame committed and work in conjunction with project sponsors to size and manage scope and risk. Provide support, technical governance, and expertise related to cloud architectures, deployment, and operations.

Perform project planning, risk management, issue analysis, and budget forecasting. Act as a leader to make decisions, research and articulate the options and the pros and cons for each, and provide recommendations. Provide thought leadership through whitepapers, blogs, and presentations. Maintain overall industry knowledge on latest trends and technology. Utilize Agile, Jira, Scrum, Sprint, AWS, Azure, GCP, Databases, Security, Networking, Storage, Infra, SaaS, PaaS, IaaS, Python, CFT, Terraform, Serverless Computing, and Integration. Will work in unanticipated locations.

Requires Master’s in Computer Science, Engineering, or related and 1 year experience or Bachelor’s in Computer Science, Engineering, or related and 5 years progressive experience.

Send resume to C4i Technologies Inc, 12000 Westheimer Rd Ste 108, Houston, TX 77077.

admin
October 19, 2020
c4itechnologies
We are on a mission to energize everyone to embrace technology for profitable employment.
Contact Us
Head Office
12000 Westheimer Rd STE 108, Houston, TX 77077
248-513-2965
Contact Us
Michigan Office
27620 Farmington Road, Suite 208 MI-48334
+1 248 987 1187
Copyright © 2021. All Rights Reserved.