Working with Kubernetes II

So, we have 3 images. Time to think about how these images should run inside a Kubernetes cluster. As mentioned in my previous post, we have a single webapp which must communicate with 2 APIs:

To be more specific: we have a single accessible webapp which is powered by 2 APIs which are not accessible from outside. This is important for our mini Kubernetes architecture. Of course every application could be made accessible from outside, but that’s not how it should work.

Designing the Kubernetes setup

To make applications run inside Kubernetes, we must provide a template, also called a manifest file, which is a JSON or YAML file. This file tells Kubernetes what needs to be done. It’s actually a way to describe what our desired state must be. For me, this shows the real power of Kubernetes. The system is actually doing just a single job: maintaining the desired state as described by the manifest.

If the system detects an anomaly in the state, Kubernetes will try to recover itself to its desired state!

We only need to create a manifest 😉

The manifest tells Kubernetes what our desired state should be. We have 3 images, all of these images must be running and only one image needs to be accessible from outside. Time to set up a deployment. A deployment is our first description of how and what we would like to set up. A deployment is responsible for setting up one or more containers based upon an image.

These containers run inside a Pod.

Let’s first list the Kubernetes terms we are using:

  1. Namespace, this is used to group our setup. This will make it easier to filter or remove all our resources with a single command.
  2. Deployments, these describe our desired state
  3. Services, we need to define a service to make our applications accessible.

A Pod can run multiple containers. A Pod is the smallest deployable unit in Kubernetes: a working unit which houses one or more containers. Our deployment will result in a single container per Pod. If we need to scale, we can scale on Pod level.

[Example of a Pod running multiple containers]

Deployment file

I’ve created 3 deployment files. This can be done in a single file, but it’s better to define them in separate files. I’ll show why later.

Important to know are:

  1. Kind, this tells Kubernetes that this resource is a deployment
  2. Namespace, this groups our deployment
  3. Replicas, this tells Kubernetes how many Pods must be created
  4. Template, this is the contents of what must be inside the Pod
  5. Containers, this is a list of containers inside the Pod, every container needs to have a pointer to an image
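Putting those fields together, a deployment manifest for the webapp could look roughly like this sketch. The names (kubernetes-web, the namespace kubernetes-demo) and the image reference are assumptions for illustration, not the exact values from my files:

```yaml
apiVersion: apps/v1
kind: Deployment                      # the kind tells Kubernetes this is a deployment
metadata:
  name: kubernetes-web                # assumed name
  namespace: kubernetes-demo          # assumed namespace name
spec:
  replicas: 2                         # how many Pods must be created
  selector:
    matchLabels:
      app: kubernetes-web
  template:                           # what must be inside each Pod
    metadata:
      labels:
        app: kubernetes-web
    spec:
      containers:                     # list of containers inside the Pod
        - name: kubernetes-web
          image: myregistry/kubernetes-web:latest   # pointer to an image (assumed)
```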

Namespace file

We can run this file, but it will fail because the namespace it uses isn’t there yet. Let’s create a namespace then:

This is our yaml file which will create a namespace.
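A minimal sketch of such a namespace manifest; the namespace name kubernetes-demo is an assumption:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-demo   # assumed namespace name
```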

Creating a YAML file doesn’t bring us any further, so we need to tell Kubernetes to do something with it. Well, we can do this by telling Kubernetes to use this file:

This can also be done using the Kubernetes CLI:
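Assuming the namespace manifest is saved as namespace_kubernetes.yaml (a hypothetical file name), the two routes look roughly like this:

```shell
# Option 1: hand Kubernetes the manifest file
kubectl apply -f namespace_kubernetes.yaml

# Option 2: create the namespace directly from the CLI
kubectl create namespace kubernetes-demo
```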

Both options will create a new namespace in our cluster.

Now we can start our first deployment:
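Assuming kubectl is pointed at our cluster, starting the deployment is a single command:

```shell
kubectl apply -f deployment_kubernetes_web.yaml
```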

This will create the defined resources from our deployment_kubernetes_web.yaml file: a container running our MVC application inside a Pod, defined in our namespace. We can repeat this task for the other 2 APIs as well, which will result in 3 deployments:

You’ll see 6 pods because we have set the replicas to 2, remember? Right now we have:

  • 2 pods serving a webapp
  • 2 pods serving the profile api
  • 2 pods serving the dictionary api
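To verify, we can list the pods in our namespace (kubernetes-demo is an assumed name):

```shell
kubectl get pods -n kubernetes-demo
```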

These are running within seconds!!!

The next steps are to:

  • make our application accessible from the outside.
  • make our APIs accessible from our MVC application

Within Kubernetes we can set up Services to make this happen. In our example we’ll need 3 services:

  1. MVC app service which exposes the endpoint externally
  2. Profile API service which exposes the endpoint internally
  3. Dictionary API service which exposes the endpoint internally

We can setup our services yaml file like this:
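A sketch of what the two kinds of service manifests could look like. The service name kubernetes-profile matches the one used later for internal communication, and the NodePort 31760 matches the port we use to reach the webapp; everything else (namespace, labels, container ports) is an assumption. The dictionary service would look the same as the profile one:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-web
  namespace: kubernetes-demo          # assumed namespace name
spec:
  type: NodePort                      # exposes the webapp to the outside
  selector:
    app: kubernetes-web
  ports:
    - port: 80                        # assumed ports
      targetPort: 80
      nodePort: 31760
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-profile
  namespace: kubernetes-demo
spec:
  type: ClusterIP                     # only reachable from inside the cluster
  selector:
    app: kubernetes-profile
  ports:
    - port: 80
      targetPort: 80
```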

The best post I could find about the different kinds of services is:

Take note of the different types of services. I’ve used:

  1. NodePort, this is used for the webapp and will expose the application to the outside
  2. ClusterIP, this will expose both APIs, but only within the cluster

Now we need to create the services with a kubectl command:
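Assuming the service manifests are saved in a file called services_kubernetes.yaml (a hypothetical name):

```shell
kubectl apply -f services_kubernetes.yaml
```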

After executing this command, our cluster will have 3 services:

When it’s all properly setup, take a look at the details of the web service:
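One way to inspect the web service; the service and namespace names here are assumptions:

```shell
kubectl describe service kubernetes-web -n kubernetes-demo
```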

You can see the internal IPs as well as the NodePort we can use to access the webapp from outside the cluster: 31760. So we can browse to http://localhost:31760 et voilà!

What you see is a running MVC app using both services!

There is only one thing left to explain, and that’s the way we can access the APIs from within our MVC application. This was actually quite easy to set up, because all communication within the cluster is done by using the name of the resource.

In our MVC application we need to set up a proper configuration to access environment variables, like:

Now, our application is looking for an environment variable which can be set in the appsettings.json file or can be defined as an environment variable within the container running our MVC image:
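For illustration, such a variable could be set directly in the container section of the deployment manifest. The variable names here are hypothetical; only the service names follow the kubernetes-profile naming used for internal communication:

```yaml
containers:
  - name: kubernetes-web
    image: myregistry/kubernetes-web:latest   # assumed image name
    env:
      - name: ProfileApiUrl                   # hypothetical variable name
        value: "http://kubernetes-profile"    # resolved via the service name
      - name: DictionaryApiUrl                # hypothetical variable name
        value: "http://kubernetes-dictionary"
```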

Because the name of the profile service is ‘kubernetes-profile’, a request to this name is internally resolved to the service serving the profile API. This is all load balanced of course, so all traffic is spread across the available pods serving the profile API.


