GKE (Google Kubernetes Engine) is a Google Cloud service that provides a managed environment to deploy, manage, and scale containerized applications. GKE makes it easy to create a cluster with the required number of nodes in just a few clicks. The nodes in the cluster are VM instances created by another service called Compute Engine. In this tutorial, we will create a cluster, deploy a microservice, and play with it using the kubectl utility.
Step by Step Guide to Create a GKE Cluster in Google Cloud
Follow the steps below to create a Kubernetes cluster with the default node pool. First, create a project:
- Click on "My First Project"
- Click on "New Project"
- Name the project "first-demo" (you can give it any name)
- Click on "Create"
The project is created successfully. Now open the project and copy the project ID for the later steps. But before going any further, we first need to enable the Kubernetes Engine API and create the cluster. To do so, follow the steps below:
- Go to the project "first-demo".
- Search for the service "Kubernetes Engine".
- Click "Enable" to enable the Kubernetes Engine API.
- Once enabled, click "Create" to create the cluster.
- Create the cluster with all default settings and name it "my-first-cluster".
The cluster is created successfully with 3 default nodes, 6 vCPUs (2 per node), and 12 GB of memory.
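The same cluster can also be created from the command line instead of the console. As a rough sketch, assuming you want the defaults used in this tutorial and the zone us-central1-c shown later:

# create a 3-node GKE cluster in a single zone (assumes the Kubernetes Engine API is already enabled)
gcloud container clusters create my-first-cluster --zone us-central1-c --num-nodes 3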
Step 1: Log in to Cloud Shell
Next, we will log in to Cloud Shell and set the active project. To do so, open Cloud Shell and type the command below.
gcloud config set project first-demo-311705
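To confirm that Cloud Shell is now pointing at the right project, you can print the active project. This is an optional verification step, not part of the original walkthrough:

# print the project ID currently set in the active gcloud configuration
gcloud config get-value project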
Step 2: Connect to the Kubernetes cluster
Now that the cluster is created, we will connect to it to deploy the microservice. To do so, follow the steps below:
- Go to the project "first-demo" and click on the 3 dots next to the cluster.
- Click on "Connect" and copy the command that the pop-up shows.
- Go back to Cloud Shell and type the command below.
cyberithub@cloudshell:~ (first-demo-311705)$ gcloud container clusters get-credentials my-first-cluster --zone us-central1-c --project first-demo-311705
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-first-cluster.
Where,
- my-first-cluster is the cluster name.
- us-central1-c is the zone name.
- first-demo-311705 is the project ID.
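Once the kubeconfig entry is generated, a quick way to confirm that kubectl can actually reach the cluster is to list its nodes. This verification step is an addition to the original flow:

# list the worker nodes of the cluster we just connected to
kubectl get nodes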
When the cluster credentials are fetched, certain directories also get created which store the cluster information.
cyberithub@cloudshell:~ (first-demo-311705)$ ls -la
total 40
drwxr-xr-x 5 cyberithub cyberithub 4096 Apr 24 06:38 .
drwxr-xr-x 4 root       root       4096 Apr 23 11:49 ..
-rw------- 1 cyberithub cyberithub 2810 Apr 24 18:06 .bash_history
-rw-r--r-- 1 cyberithub cyberithub  220 Apr 18  2019 .bash_logout
-rw-r--r-- 1 cyberithub cyberithub 3564 Apr 17 07:24 .bashrc
drwxr-xr-x 3 cyberithub cyberithub 4096 Apr 17 07:04 .config
drwxr-xr-x 2 cyberithub cyberithub 4096 Apr 23 11:49 .docker
drwxr-xr-x 3 cyberithub cyberithub 4096 Apr 24 18:05 .kube
-rw-r--r-- 1 cyberithub cyberithub  807 Apr 18  2019 .profile
-rw-r--r-- 1 cyberithub cyberithub  913 Apr 24 17:45 README-cloudshell.txt
The kubeconfig file is stored at /home/ankur123sh/.kube/config. This config holds the information of all the clusters you have connected to. Open this file and check its contents to get more insight.
cyberithub@cloudshell:~/.kube (first-demo-311705)$ ls -lhtr
total 8.0K
drwxr-x--- 4 cyberithub cyberithub 4.0K Apr 24 14:40 cache
-rw------- 1 cyberithub cyberithub 2.7K Apr 24 18:05 config
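Instead of opening the file directly, you can also let kubectl print the merged configuration it is using (with sensitive data redacted). This is just an alternative way to inspect the same information:

# show the kubeconfig contents that kubectl is currently using
kubectl config view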
Step 3: Deploy microservice to Kubernetes
Let's create a deployment using the kubectl utility. I am using one of the images from Docker Hub, which can be found on the Docker Hub website.
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl create deployment first-deployment --image=openwhisk/python2action:nightly
deployment.apps/first-deployment created
Where,
- first-deployment is the deployment name.
- openwhisk/python2action:nightly is the Docker image (openwhisk -> Docker ID, python2action -> image name, nightly -> image tag we want to use).
Command to view the running deployment:
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
first-deployment   1/1     1            1           117s
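If the deployment does not show READY 1/1 right away, the optional commands below help you watch the rollout and inspect the deployment details; they are additions to the original steps:

# wait until the rollout of the deployment has completed
kubectl rollout status deployment/first-deployment
# show events, replica counts and the pod template of the deployment
kubectl describe deployment first-deployment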
Next, we will expose the deployment to the outside world using the command below. Exposing a deployment gives us a Service, and the type of Service we are creating here is LoadBalancer.
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl expose deployment first-deployment --type=LoadBalancer --port=8080
service/first-deployment exposed
Below is the command to view the running services:
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl get service
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
first-deployment   LoadBalancer   10.76.7.140   35.222.153.200   8080:31910/TCP   4m15s
kubernetes         ClusterIP      10.76.0.1     <none>           443/TCP          8h
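Once the EXTERNAL-IP column is populated (it can show <pending> for a minute or two), you can check that the service is reachable. Treat this as a simple reachability check; the exact response depends on what the openwhisk/python2action image serves:

# read the external IP assigned to the LoadBalancer service
EXTERNAL_IP=$(kubectl get service first-deployment -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# hit the exposed port; the response body depends on the container image
curl -i http://$EXTERNAL_IP:8080/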
Increase the number of instances of the microservice using the command below. We will create 4 instances of our microservice using the --replicas flag.
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl scale deployment first-deployment --replicas=4
deployment.apps/first-deployment scaled
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
first-deployment   4/4     4            4           36m
To see the status of each instance (also called a pod), use the command below:
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
first-deployment-87f979d87-72f98   1/1     Running   0          36m
first-deployment-87f979d87-gjdfj   1/1     Running   0          44s
first-deployment-87f979d87-n4w46   1/1     Running   0          44s
first-deployment-87f979d87-sng8z   1/1     Running   0          44s
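To dig into a single instance, you can describe it or read its logs. The pod name below is taken from the output above; in your cluster the generated names will differ:

# show scheduling, events and container status for one pod
kubectl describe pod first-deployment-87f979d87-72f98
# print the container logs of the same pod
kubectl logs first-deployment-87f979d87-72f98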
Step 4: Increase/decrease the number of nodes in the cluster using the gcloud utility
We can also scale the number of nodes in our cluster based on usage: as the number of pods grows, more resources are required. Let's resize the node pool to 2 nodes using the command below.
cyberithub@cloudshell:~ (first-demo-311705)$ gcloud container clusters resize my-first-cluster --node-pool default-pool --num-nodes=2 --zone=us-central1-c
Pool [default-pool] for [my-first-cluster] will be resized to 2.
Do you want to continue (Y/n)? y
Resizing my-first-cluster...done.
Updated [https://container.googleapis.com/v1/projects/first-demo-311705/zones/us-central1-c/clusters/my-first-cluster].
Where,
- my-first-cluster is the cluster name.
- default-pool is the node pool name.
- us-central1-c is the zone name.
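To confirm that the node pool really shrank to 2 nodes, you can list the nodes again from kubectl. This check is an optional addition:

# the number of Ready nodes should now match the new pool size
kubectl get nodes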
Step 5: Set up auto-scaling for our microservice
Instead of scaling the microservice or its resources manually every time, we can use auto-scaling for the same job. Let's auto-scale the microservice using the command below:
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl autoscale deployment first-deployment --max=5 --cpu-percent=60
horizontalpodautoscaler.autoscaling/first-deployment autoscaled
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl get hpa
NAME               REFERENCE                     TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
first-deployment   Deployment/first-deployment   <unknown>/60%   1         5         3          19m
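Note that TARGETS can show <unknown> until the metrics server reports CPU usage for the pods; CPU utilization is calculated against the containers' CPU requests, so the deployment generally needs CPU requests set for the autoscaler to act. To see what the HPA is currently doing:

# show current metrics, conditions and scaling events of the HorizontalPodAutoscaler
kubectl describe hpa first-deployment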
Step 6: Set up auto-scaling for our cluster
Let's enable cluster autoscaling using the command below:
cyberithub@cloudshell:~ (first-demo-311705)$ gcloud container clusters update my-first-cluster --enable-autoscaling --zone=us-central1-c --min-nodes=1 --max-nodes=6
Updating my-first-cluster...done.
Updated [https://container.googleapis.com/v1/projects/first-demo-311705/zones/us-central1-c/clusters/my-first-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-c/my-first-cluster?project=first-demo-311705
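Node autoscaling is applied per node pool, so you can verify the new minimum and maximum on the default pool. This verification step is an addition to the original walkthrough:

# the output should include an autoscaling section with minNodeCount and maxNodeCount
gcloud container node-pools describe default-pool --cluster=my-first-cluster --zone=us-central1-c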
Step 7: To display cluster info
If you want to check the cluster info, use the kubectl cluster-info command as shown below.
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl cluster-info
Kubernetes control plane is running at https://34.122.170.70
GLBCDefaultBackend is running at https://34.122.170.70/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://34.122.170.70/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.122.170.70/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
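The dump can also be written to files instead of the terminal, which is handy because the output is large. The target directory below is just an example path:

# write the cluster state dump into a directory instead of stdout
kubectl cluster-info dump --output-directory=/tmp/cluster-state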
Step 8: To display current working context
If you want to check the current working context, use the kubectl config current-context command as shown below.
cyberithub@cloudshell:~ (first-demo-311705)$ kubectl config current-context
gke_first-demo-311705_us-central1-c_my-first-cluster
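If you work with more than one cluster, the related commands below list all contexts in your kubeconfig and switch between them; the context name used here is the one shown in the output above:

# list every context known to the current kubeconfig
kubectl config get-contexts
# switch back to the GKE cluster created in this tutorial
kubectl config use-context gke_first-demo-311705_us-central1-c_my-first-cluster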