Once in a while, a really impressive technology pops up: one that ends up driving a whole movement within web development. For the last few years, for me, that technology has been:
- .NET Core
And this weekend, a new technology is added to the list: Kubernetes.
I had read about Kubernetes, but I never actually had the time for a real deep dive into this ‘abstract Docker orchestration thingy’. Fortunately, in the project I’m working on at the moment, we’ve decided to move some parts from Azure Web Apps to AWS. The first plan was to move from Azure Web Apps to Amazon S3 to host some static NodeJS applications and some .NET Core APIs. However, during the architecture meetings, we decided to use Docker. And to orchestrate all these containers, we decided to use Kubernetes.
After the meeting we had:
- A great new master plan
- A new problem
Back to the drawing board
So, instead of building the applications on our build server (GitLab), packaging the files (GitLab/Artifactory) and deploying them (Octopus Deploy), we needed to create images.
I’ve been working with Docker for some time, but only in a ‘consuming’ way. I think most developers use it like this, just to ease the way they work. For example, when you need a new Redis cache service, you spin up a Docker container based on an image from Docker Hub, grab the endpoint, and you’re ready.
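That ‘consuming’ workflow can be as short as a single command. A sketch (the container name `my-redis` is just an example; `redis` is the official image on Docker Hub):

```shell
# Pull the official Redis image and run it in the background,
# exposing the default Redis port on localhost.
docker run -d --name my-redis -p 6379:6379 redis
# The endpoint to hand to your application is now localhost:6379.
```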
Our new process will be more like:
- Create new images from our applications
- Push these images to a Docker repository
- Do some magic in Kubernetes
The major problem was that I didn’t have any experience with Kubernetes, so this week I took some time to read, learn and try. The more I learned about Kubernetes, the more impressed I was by this awesome technology. I will share everything I’ve learned so far.
To use Kubernetes, I’ve installed the Docker Edge release, which makes Kubernetes available locally. You can get the installer here: https://www.docker.com/kubernetes
The Kubernetes Dashboard was helpful for me to get a better overview of what Kubernetes is actually doing. It’s not required to run Kubernetes, but it will help you understand the different parts of Kubernetes better. To install the Dashboard, have a look here: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Up and running
To verify if Kubernetes is running, check the icon in the taskbar:
When both Docker and Kubernetes are running, check if the CLI is working:
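A quick way to verify the CLI (kubectl ships with the Docker Edge release) is to ask for the version and the cluster nodes:

```shell
# Show client and server versions; seeing a server version
# confirms kubectl can reach the local cluster.
kubectl version

# List the nodes; the local Docker setup runs a single node
# (its name depends on your Docker version).
kubectl get nodes
```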
Now it’s time to check the Kubernetes Dashboard, open a shell and type:
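The command in question is `kubectl proxy`. The exact Dashboard URL depends on the Dashboard version and the namespace it was installed in, so treat the one below as an example:

```shell
# Start a local proxy to the Kubernetes API server on port 8001.
kubectl proxy

# The Dashboard is then reachable through the proxy, e.g.:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```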
This starts a proxy on port 8001, through which the Kubernetes Dashboard can be reached:
Yes!!! We are ready to rock and roll!
My first steps
Whatever process you’re using, it must be possible to automate every step of it. If you need to do some steps manually, you’re doing it wrong. Automate everything!
So, Kubernetes must also be automated. In Kubernetes, this is done with manifest files: every part of the setup can be described in a manifest, written in YAML or JSON. Just as in Azure, Kubernetes can be set up from a template! And, just as in Azure, an existing setup has a ‘source’ which can be viewed and/or copied:
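For example, you can dump the manifest of an existing resource straight from the cluster and apply an edited copy back (the deployment name `kubernetes-web` is hypothetical, substitute one of your own):

```shell
# Print the live manifest of a deployment as YAML; this is the
# 'source' that can be saved, edited and re-applied.
kubectl get deployment kubernetes-web -o yaml

# Apply a (possibly edited) manifest file back to the cluster.
kubectl apply -f deployment.yaml
```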
Kubernetes is just an orchestration tool to manage Docker containers. So, we need some containers and, hence, some Docker images. I’ve created a minimal setup which must run in a Kubernetes cluster. This setup contains:
- ASP.NET Core 2.0 MVC web app
- ASP.NET Core 2.0 API
- ASP.NET Core 2.0 API
I think this mimics a basic ‘microservice’ setup: a single consuming web app to which traffic is directed, and at least one API. The challenge for me in this setup is the communication between the services and the web app. In other words: how can the web app access the APIs in Kubernetes?
The APIs are simply called Profile and Dictionary; these are just names. The Profile service will return a profile object and the Dictionary service will serve some key/value items.
Make sure to use versioning when building images. It’s possible to use the tag ‘latest’, but this is very confusing! Have a read here about tagging:
For this article I will use semantic versioning (semver), but there are some other (better?) ways to tag images properly:
To build and tag the images, I’ve created a simple batch file which can be placed in the root of every project.
docker build -t kubernetes.web:2.0.0 .
This will start a Docker build using the Dockerfile in the same folder:
# Runtime image: contains only the ASP.NET Core runtime, no SDK.
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app

# Build image: contains the full SDK.
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY Kubernetes.Web.csproj .
RUN dotnet restore Kubernetes.Web.csproj
COPY . .
RUN dotnet build Kubernetes.Web.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish Kubernetes.Web.csproj -c Release -o /app

# Final image: the runtime image plus the published output.
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Kubernetes.Web.dll"]
It’s important to understand what this file is doing. Two images are used for this build:
- aspnetcore, tag 2.0
- aspnetcore-build, tag 2.0
This separates the image used to build the application from the image used to run it. Keep in mind that every Docker image should have a single purpose. In this case, the build image is optimized for building and is not meant to be used as a runtime image. Our final image uses the aspnetcore image for the runtime. That image cannot build the project because it’s missing the SDK, which is exactly why it’s smaller and better suited for running the application.
Also note the COPY actions. These copy files from your local filesystem into the image’s filesystem; all subsequent actions run on that filesystem, not on your local one. So keep in mind that if your solution needs more projects to build, you’ll need to copy all of those files in as well.
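A sketch of what that looks like when the web project references a second project (the project names here are hypothetical, adjust them to your solution layout):

```Dockerfile
# Copy each .csproj into its own folder first, so 'dotnet restore'
# can resolve the project reference.
COPY Kubernetes.Web/Kubernetes.Web.csproj Kubernetes.Web/
COPY Kubernetes.Shared/Kubernetes.Shared.csproj Kubernetes.Shared/
RUN dotnet restore Kubernetes.Web/Kubernetes.Web.csproj

# Then copy the rest of the sources and build as before.
COPY . .
```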
If everything is in place and all packages have been restored, dotnet build runs a first build using the downloaded packages. After the build, dotnet publish publishes the application and stores all files needed to run it. These files are written (-o = output directory) to the ‘/app’ folder.
Finally, the ‘/app’ folder is used in the final image, where the project assembly is used as the ENTRYPOINT to start the application itself.
When this is all set up correctly, we can build the image by running the cmd file.
Now the image is created, so let’s check.
Open PowerShell to list your images:
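Listing images is a single command:

```shell
# List all local images; the freshly built image should show up
# with its repository name and version tag.
docker images

# Or filter on a single repository:
docker images kubernetes.web
```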
This can be done for all the projects, so our final result will be three images:
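The per-project batch files can also be combined into one script. A sketch (the folder names Kubernetes.Web, Kubernetes.Profile and Kubernetes.Dictionary are hypothetical, adjust them to your solution layout):

```shell
# Build and tag all three images in one go, each from its own
# project folder containing a Dockerfile.
docker build -t kubernetes.web:2.0.0 ./Kubernetes.Web
docker build -t kubernetes.profile:2.0.0 ./Kubernetes.Profile
docker build -t kubernetes.dictionary:2.0.0 ./Kubernetes.Dictionary
```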
Now, the Kubernetes magic can start! Up to the next post for this is getting too long 😉