18.3.1 Deploying to Kubernetes
Kubernetes is an amazing container-orchestration platform that runs images, handles scaling containers up and down as necessary, and reconciles broken containers for increased robustness, among many other things.
Kubernetes is a powerful platform on which to deploy applications—so powerful, in fact, that there’s no way we’ll be able to cover it in detail in this chapter. Instead, we’ll focus solely on the tasks required to deploy a Spring Boot application, built into a container image, into a Kubernetes cluster. For a more detailed understanding of Kubernetes, check out Kubernetes in Action, 2nd Edition, by Marko Lukša.
Kubernetes has earned a reputation (perhaps unfairly) for being difficult to use, but deploying a Spring application that has been built as a container image into Kubernetes is really easy and is well worth the effort, given all of the benefits that Kubernetes affords.
You’ll need a Kubernetes environment into which to deploy your application. Several options are available, including Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). For experimenting locally, you can also run a Kubernetes cluster using any of a variety of Kubernetes implementations, such as MiniKube (https://minikube.sigs.k8s.io/docs/), MicroK8s (https://microk8s.io/), and my personal favorite, Kind (https://kind.sigs.k8s.io/).
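If you go the Kind route, creating a local cluster is a one-line affair. As a minimal sketch (this assumes you’ve already installed Kind and kubectl; the cluster name here is arbitrary):

$ kind create cluster --name taco-cloud

Kind also points your kubectl context at the new cluster, so the kubectl commands used throughout the rest of this section will target it.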
The first thing you’ll need to do is create a deployment manifest. The deployment manifest is a YAML file that describes how an image should be deployed. As a simple example, consider the following deployment manifest, which deploys the Taco Cloud image created earlier into a Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taco-cloud-deploy
  labels:
    app: taco-cloud
spec:
  replicas: 3
  selector:
    matchLabels:
      app: taco-cloud
  template:
    metadata:
      labels:
        app: taco-cloud
    spec:
      containers:
      - name: taco-cloud-container
        image: tacocloud/tacocloud:latest

This manifest can be named anything you like. But for the sake of discussion, let’s assume you named it deploy.yaml and placed it in a directory named k8s at the root of the project.
Without diving into the details of how a Kubernetes deployment specification works, the key things to notice here are that our deployment is named taco-cloud-deploy and (near the bottom) is set to deploy and start a container based on the image whose name is tacocloud/tacocloud:latest. By giving “latest” as the version rather than “0.0.19-SNAPSHOT,” we know that the very latest image pushed to the container registry will be used.
Another thing to notice is that the replicas property is set to 3. This tells the Kubernetes runtime that there should be three instances of the container running. If, for any reason, one of those three instances fails, then Kubernetes will automatically reconcile the problem by starting a new instance in its place. To apply the deployment, you can use the kubectl command-line tool like this:
$ kubectl apply -f deploy.yaml

After a moment or so, you should be able to use kubectl get all to see the deployment in action, including three pods, each one running a container instance. Here’s a sample of what you might see:
$ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/taco-cloud-deploy-555bd8fdb4-dln45   1/1     Running   0          20s
pod/taco-cloud-deploy-555bd8fdb4-n455b   1/1     Running   0          20s
pod/taco-cloud-deploy-555bd8fdb4-xp756   1/1     Running   0          20s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/taco-cloud-deploy   3/3     3            3           20s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/taco-cloud-deploy-555bd8fdb4   3         3         3       20s

The first section shows three pods, one for each instance we requested in the replicas property. The middle section is the deployment resource itself. And the final section is a ReplicaSet resource, a special resource that Kubernetes uses to remember how many replicas of the application should be maintained.
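If you’d like to see the reconciliation described earlier at work, you can delete one of the pods and let the ReplicaSet replace it. Here’s a quick sketch; the pod name is taken from the sample output above, so substitute one of your own pod names:

$ kubectl delete pod taco-cloud-deploy-555bd8fdb4-dln45
$ kubectl get pods

Within a few moments, kubectl get pods should once again show three pods, with a brand-new pod (and a new name) in place of the one you deleted.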
If you want to try out the application, you’ll need to expose a port from one of the pods on your machine. To do that, the kubectl port-forward command, shown next, comes in handy:
$ kubectl port-forward pod/taco-cloud-deploy-555bd8fdb4-dln45 8080:8080

In this case, I’ve chosen the first of the three pods listed by kubectl get all and asked to forward requests from port 8080 of the host machine (the machine on which the Kubernetes cluster is running) to port 8080 on the pod. With that in place, you should be able to point your browser at http://localhost:8080 to see the Taco Cloud application running on the specified pod.
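If you’d rather check from the command line, a simple curl request against the forwarded port works just as well (assuming the application serves its home page at the root path):

$ curl http://localhost:8080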
