This series will give you an overview of Kubernetes, the popular open-source container orchestration platform originally developed by Google. Kubernetes enables the development of cloud-based platforms using completely open specifications, so you’re never tied to a specific vendor. Many cloud providers, such as AWS, have proprietary methods for developing scalable web applications (like their Lambda system). The problem is that this ties your app to their system and, as we saw with Parler, Amazon gives and Amazon takes away. Therefore, it’s wise not to tie your infrastructure too tightly to a single vendor. Kubernetes lets you build scalable, large-scale applications in the cloud in a way that’s transferable to a variety of vendors.
This tutorial will walk you through setting up a real internet-facing Kubernetes cluster. We’ll use Linode to create the cluster, because Linode’s Kubernetes service is simple, inexpensive, and reliable. You will need to create an account on Linode.com to start. You will probably also want to have read the Mind Matters series on Docker before starting this walkthrough.
Setting up your first cluster
Setting up a Kubernetes cluster in Linode is incredibly simple. In the Linode dashboard, click on the “Create” button and choose Kubernetes from the drop-down menu. This will bring up a screen that will ask you some basic setup questions to get your Kubernetes cluster up and running:
Cluster label
This is the name of the cluster, but it can only contain certain characters. I called mine my-test-cluster.
Region
This is the physical data center where you want your cluster to run. You can choose any region you like, but I used “Dallas, TX” for mine.
Kubernetes version
We will use version 1.22.
The next section is about the machines that will run your cluster. These are organized into “node pools”. We are only going to create one node pool. You can find the cheapest machines under the “Shared CPU” tab. For this demo, I recommend adding 3 of the cheapest machines available. At the time of writing, this was a “Linode 2 GB”, which cost $10 per month (each) or $0.015 per hour (each). Click the “Add” button to add these machines to your cluster.
At the bottom of the page, you will see a monthly total and a button that says “Create cluster”. Click this button and Linode will start creating your cluster!
It may take a few minutes to fully create your cluster. Once done, the dashboard will have a downloadable “Kubeconfig” file. The image below shows what it looks like. Click the download link to download the Kubeconfig file to your local machine.
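This Kubeconfig file is what command-line tools such as kubectl use to locate and authenticate to your cluster. It is itself just a YAML document; a trimmed sketch of its general shape (placeholder values only, not real credentials) looks something like this:

```yaml
# Sketch of a kubeconfig file (placeholder values only)
apiVersion: v1
kind: Config
clusters:
- name: my-test-cluster
  cluster:
    server: https://<cluster-endpoint>:443        # address of the Kubernetes API server
    certificate-authority-data: <base64-ca-cert>  # certificate used to trust the server
users:
- name: my-test-cluster-admin
  user:
    token: <bearer-token>                         # credentials for the cluster
contexts:
- name: my-test-cluster-context
  context:
    cluster: my-test-cluster
    user: my-test-cluster-admin
current-context: my-test-cluster-context
```

If you later want to use kubectl, you would point it at this file, for example by setting the KUBECONFIG environment variable to the file’s path.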
Linode Kubernetes clusters come pre-equipped with a Kubernetes application called the Kubernetes Dashboard, which gives you a web interface to manage your Kubernetes cluster. To access it, simply click on the “Kubernetes Dashboard” link displayed in the image above. This will prompt you for an authentication method. Choose “Kubeconfig”, then, in the box below, upload the Kubeconfig file you previously downloaded to your computer. Then click “Connect”.
The starting dashboard is shown in the image below. On the left are the different types of resources (called “objects” in Kubernetes terms) that Kubernetes can manage. At the top is a “namespace” selector, which is currently set to default. The main part of the screen says, “There is nothing to display here” because we haven’t deployed anything yet.

To see the machines included in your cluster, scroll down to the “Cluster” section in the left menu and click on “Nodes”. This should give you a list of three entries, representing the machines we originally added to the cluster. With Kubernetes, however, we don’t care too much about the physical hardware, as long as there is enough of it for everything we want to do. The distribution of applications across this hardware is handled by Kubernetes itself. This section simply shows you Kubernetes’s view of your hardware.
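As an aside, nodes, like everything else in Kubernetes, are represented as YAML objects under the hood. A heavily trimmed, hypothetical sketch of what one of these node records contains might look like this (the name and values here are illustrative, not taken from a real cluster):

```yaml
# Trimmed, illustrative sketch of a Node object
apiVersion: v1
kind: Node
metadata:
  name: lke12345-67890-abcdef            # hypothetical Linode node name
  labels:
    kubernetes.io/hostname: lke12345-67890-abcdef
    topology.kubernetes.io/region: us-central
status:
  capacity:
    cpu: "1"          # a Linode 2 GB plan has one shared CPU
    memory: 2Gi       # approximate; a real node reports exact kibibytes
    pods: "110"       # default maximum number of pods per node
```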
Deploying your first application
Normally in Kubernetes, deployments are managed by YAML files which may themselves be under source control. However, to get started, the Kubernetes Dashboard provides a web interface to deploy a simple application.
To get started, click on the “+” icon at the top right of the screen. This will bring up a new screen that gives you three options: “Create from Input”, “Create from File”, or “Create from Form”. Choose “Create from Form”.
We will define the following.
App name
We will set this to my-test-app.
Container image
We will set this to johnnyb61820/simple-web-server. This is the container image that will be used on the cluster.
Number of pods
We will set this to 4. Note that this number can be higher than the number of machines in the cluster, since Kubernetes can schedule multiple pods onto the same node.
Service
Set this to External. This will allow access to the service from outside the cluster.
Port
Set this to 80. This is the port that will be exposed to the outside world.
Target Port
Set this to 8070. This is the port our containers listen on; the image we specified listens on port 8070.
Protocol
Set this to TCP.
The screenshot below shows what it looks like when populated.

Once all this is defined, click on the “Deploy” button. As of this writing, there is a bug in the dashboard that says you have unsaved changes and asks if you really want to leave. Just click “Yes”.
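It’s worth noting that the form is simply generating Kubernetes YAML on your behalf. A hand-written sketch of roughly the Deployment it produces is shown below; the dashboard’s actual output differs in details (for example, the exact label key it applies):

```yaml
# Rough, hand-written equivalent of the Deployment the form creates (details may differ)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-app
spec:
  replicas: 4                      # "Number of pods" from the form
  selector:
    matchLabels:
      app: my-test-app             # the dashboard may use a different label key
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
      - name: my-test-app
        image: johnnyb61820/simple-web-server   # "Container image" from the form
        ports:
        - containerPort: 8070      # the port the container listens on
```

The “External” service option becomes a separate Service object, which we’ll sketch a little later when we discuss labels.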
That’s all there is to it! You now have a (small) web application deployed on Kubernetes. To access your web application, click on the “Services” tab on the left side. One of the services listed should be your my-test-app service, and to the right there should be a column labeled “External Endpoints” containing a link to where your application was deployed. Click that link, and it should say “Hello from Docker!”
A Look Around the Kubernetes Dashboard
Now that you’ve deployed an application to Kubernetes, it’s time to take a quick tour of the Kubernetes dashboard. All of this information is accessible from the command line, but the visual dashboard helps guide you on what’s available.
Our deployment in the previous section essentially created four types of objects: Services, Deployments, ReplicaSets, and Pods. The figure below shows how they relate to each other.

At the top of the diagram we have a “Deployment”. Deployments have one basic task: controlling “ReplicaSets”. A ReplicaSet essentially says, “keep this many identical copies of this container running.” The container image we specified is what the ReplicaSet replicates; it creates multiple running containers from that image. These deployed containers are called “Pods”. We requested 4 Pods, which are created and managed by the ReplicaSet. Note that a Pod can technically contain more than one container, but for now we can think of a Pod and a container as pretty much the same thing.
You might wonder why we need both Deployments and ReplicaSets. The reason is that when we change versions of our application, the Deployment handles creating the new ReplicaSet (we need a new one since it will reference a different container image) and then shuts down the old ReplicaSet once the new one is operational. Typically, we interact directly with Deployments in Kubernetes, and Deployments then create, manage, and delete ReplicaSets as needed.
So, all of this manages the container instances (Pods) themselves. A “service” is a named endpoint (internal or external) in your cluster. Services can load balance between multiple Pods that implement the same service. Therefore, if someone requests a service endpoint, the service then routes that request to an appropriate pod for processing.
So how does the service know which pods can handle which requests? Imagine you had several different deployments running, each doing different things. How would the service know which one to point to? Most objects in Kubernetes have “labels” associated with them. Labels are basically arbitrary key/value pairs attached to objects. Although they can be arbitrary, there are also standard ones. The service defines a “selector” that finds pods based on their labels. Pod labels are initially defined in the deployment, copied to the ReplicaSet, and then copied to each Pod. The service then uses these labels to know which pods can respond to requests.
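For example, here is a sketch of roughly what the Service for our app looks like, with its selector (again, the real object generated by the form may use a different label key):

```yaml
# Sketch of the Service for my-test-app (the actual label key may differ)
apiVersion: v1
kind: Service
metadata:
  name: my-test-app
spec:
  type: LoadBalancer      # the "External" choice in the form
  selector:
    app: my-test-app      # matches the labels on the pods created by the Deployment
  ports:
  - protocol: TCP
    port: 80              # the externally exposed port
    targetPort: 8070      # the port the containers listen on
```

The selector must match the labels on the pods; that match is the entire link between the Service and the Deployment’s pods.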
Now that you know what has been deployed, you can browse the Kubernetes dashboard and find those items. You can find deployments, replica sets, and pods in the “Workloads” section, and services in the “Service” section. Feel free to click around these different areas.
As you click around, notice that almost everything has a name and a set of labels. The name only needs to be unique for the object type. Note that in our current deployment, the service and the deployment are both named my-test-app. Personally, I don’t like naming different things the same, but that’s what deployments created from the form do automatically.
Every object in Kubernetes can be modified through a YAML file. To the right of each object is a button with three dots. Clicking this button gives you options for modifying the object. If you click “Edit”, you can see what the YAML for that object looks like. In an upcoming installment, we’ll look at some of these files in more depth.
In case you missed it:
Getting Started with Kubernetes: A Brief History of Cloud Hosting. A history lesson to better understand why web infrastructure hosting is what it is. In the early days of the Internet, web applications were hosted on specific servers. Much has changed since then. (Jonathan Bartlett)