In a previous blog post, we explored a concept and mindset for building bridges between systems (and teams) in agile development environments. The proposed approach has its pros and cons, but it is a convenient way to develop PoCs and MVPs quickly and smoothly. However, just mocking something locally (on your own machine) is not going to help bridge gaps between teams; so we somehow need to make our mock service accessible.
In this part of the series, we are going to dockerize our application and host it on a Kubernetes cluster. For this we have multiple options. While all following concepts apply to each of the big cloud providers, we are going to use Microsoft Azure (in this case Azure Kubernetes Service – AKS and Azure Container Registry – ACR).
- API First with Zalando, Python and OpenAPI (Building Bridges Part I)
- Python microservices with Docker, Kubernetes & Azure (Building Bridges Part II)
- Part III (coming soon™)
Docker, oh Docker.
The first step is to dockerize our application. The following is specific to our application, but there are a lot of great tutorials on how to dockerize different kinds of applications across all languages and technologies. (If you are a German reader, you can follow this post by my colleague!) Anyways, let’s create the Dockerfile.
```dockerfile
FROM python:3.8.5

RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt

# create an unprivileged user to run the app
RUN groupadd -r appuser && useradd -r -g appuser appuser
USER appuser

EXPOSE 9090
CMD ["python", "/code/app.py"]
```
We use a simple python 3.8.5 base image (based on Debian). For this context this is absolutely okay, but depending on the use case we might want smaller images like python-alpine or python-slim (Debian-based). However, python docker images are used a lot in the machine learning world (esp. w/ libraries like sklearn and xgboost), and there we might need certain OS libraries that would not be included in a tiny alpine image (at least not out-of-the-box). Our Dockerfile is not the most sophisticated ever, but it does the job. To learn more, see the official Docker docs and the Docker blog.
Ok – let’s walk through our Dockerfile. It creates a Linux image with python 3.8.5 installed, copies our application code onto it and then installs dependencies via `pip install`. To avoid running the docker container as root, we create a new system user (`RUN groupadd -r appuser && useradd -r -g appuser appuser`) and tell docker to run the container as this user (`USER appuser`). The last step is to expose our application on port 9090 and specify the entry point to start the app (`CMD ["python", "/code/app.py"]`).
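Before pushing anything to the cloud, we can sanity-check the image locally. A minimal sketch — the image name `wellroom:local` is just our choice, and the `/health` route is an assumption based on the `tech_controller/health.py` from our project structure:

```shell
# Build the image from the directory containing the Dockerfile
docker build -t wellroom:local .

# Run it, mapping the exposed container port 9090 to localhost
docker run --rm -p 9090:9090 wellroom:local

# In a second terminal: the health endpoint should answer now
curl http://localhost:9090/health
```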
Mix in the cloud
We could build and run the docker image locally, but we wanted to make it accessible. An easy way could be to use Azure Container Instances. If we just wanted to run one single container, this would be the easiest solution. For this blog post, we take another route and create our own cloud cluster. This way we can deploy multiple containers there (e.g. multiple ideas for a proof of concept, different versions in parallel etc.). Well, let’s create the cluster!
There are endless ways of working with infrastructure. Whenever we are new to a cloud provider or to any specific resource, we usually use the provider’s web portal (e.g. Azure Portal, Google Cloud Console, AWS Console) to explore our options. If we know how to integrate our cloud resources and have multiple similar or short-lived environments, Infrastructure as Code (e.g. w/ terraform, the k8s operator pattern or provider-specific solutions – Azure: ARM templates, AWS: CloudFormation, Google: CDM) is the way to go.
For everything in between, I love to use CLIs whenever possible. To create an AKS cluster, the CLI of our desire is the azure cli (short: az). As the CLI is cross-platform, you can choose your environment. A lot of colleagues go with PowerShell (also available on Linux and macOS!) or the native macOS terminal. I prefer the new Windows Terminal + WSL2 + Ubuntu 20.04 on Windows.
If you do not have an azure subscription, start a free trial. The $200 free credit is more than we need for this. Make sure to install the Kubernetes CLI (kubectl) before you continue. Let’s spin up a console!
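If kubectl is not installed yet, the azure cli can take care of that for us. A quick sketch to verify the tooling:

```shell
# Install kubectl through the azure cli (may require sudo)
az aks install-cli

# Verify that both CLIs respond
az version
kubectl version --client
```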
First, we log in to Azure with our Azure account.
```shell
$ az login
You have logged in. Now let us find all the subscriptions to which you have access...
CloudName    IsDefault    Name                            State    TenantId
-----------  -----------  ------------------------------  -------  ------------------------------------
AzureCloud   True         Visual Studio Premium mit MSDN  Enabled  XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
We need to create a resource group to place our cluster in. (Microsoft’s way of grouping multiple Azure resources together is called resource groups. There are no cloud resources outside of resource groups.)
```shell
$ az group create \
    --name tkr-blog-aks-we \
    --location westeurope
Location    Name
----------  ---------------
westeurope  tkr-blog-aks-we
```
Then, we provision a cluster into this group. After a few minutes we have a cluster called `tkr-blog-k8s-we01` in the resource group `tkr-blog-aks-we`. While we only added two nodes to the cluster, it’s quite impressive how fast the provisioning is.
```shell
$ az aks create \
    --resource-group tkr-blog-aks-we \
    --name tkr-blog-k8s-we01 \
    --node-count 2 \
    --generate-ssh-keys
Succeeded    tkr-blog-aks-we
```
Now we fetch the credentials to connect to the cluster and store them in the local `~/.kube/config` (this requires kubectl to be installed).
```shell
$ az aks get-credentials \
    --name tkr-blog-k8s-we01 \
    --resource-group tkr-blog-aks-we
Merged "tkr-blog-k8s-we01" as current context in /home/username/.kube/config
```
We are done! We can start exploring our cluster now.
```shell
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   5m24s
```
Just a tiny bit more infrastructure
Ok, let’s recap. We now have:
- a cluster somewhere in the cloud and
- source code locally (or in a git repository).
In the container world (esp. in the docker world) we bridge this gap with container registries and CI/CD pipelines. To not expand the scope of this blog post too much, we’ll omit the CI/CD part and build and deploy from our local machine (spoiler: automating this is going to be part III of the series). That said, we still need the registry. Of course, every cloud provider has their own registry, and there are other popular alternatives like, obviously, Docker Hub or JFrog Artifactory. For the sake of this blog, let’s stick with Azure, so Azure Container Registry (ACR) it is.
```shell
$ az acr create \
    --resource-group tkr-blog-aks-we \
    --name tkrblog \
    --sku Basic
```
Our cluster needs access to this registry (to pull images), so we grant the permission via the CLI. If our two resources did not integrate this well, we would simply use ImagePullSecrets instead.
```shell
$ az aks update \
    --name tkr-blog-k8s-we01 \
    --resource-group tkr-blog-aks-we \
    --attach-acr tkrblog
```
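For completeness: if registry and cluster could not be attached natively like this, a manually created pull secret would do the job. A hedged sketch — the secret name is our choice and the service principal credentials are placeholders:

```shell
# Create a docker-registry secret in the cluster (credential values are placeholders)
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=tkrblog.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>
```

The secret would then be referenced via `imagePullSecrets` in the pod spec of the deployment.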
After all the infrastructure work, we have:
- a Connexion application (previous blog post),
- means to wrap it into a docker image (Dockerfile),
- a registry to store the image (ACR) and
- a cluster to run the container (AKS).
Building the Docker image
Usually this should be done in the CI/CD tool of your choice; likely something like Jenkins, GitHub Actions, CircleCI or TeamCity. Other great contenders are Azure DevOps’ Pipelines (seamless integration with the other Azure services like AKS and ACR) or my personal favorite GoCD, which you could easily run inside the AKS cluster via the GoCD helm chart. The hottest challenger is probably Argo CD or other GitOps tools. Anyways, let’s build locally.
```shell
$ docker build -t tkrblog.azurecr.io/wellroom:1.0.0-dev.1 .
...
$ docker login tkrblog.azurecr.io
...
$ docker push tkrblog.azurecr.io/wellroom:1.0.0-dev.1
...
```
If you happen to use Azure DevOps, the build pipeline would look like this (otherwise just ignore it):
```yaml
trigger:
- master

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX'
  imageRepository: 'wellroom'
  containerRegistry: 'tkrblog.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '1.0.0-dev.1'
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
```
Deployments and Services in Kubernetes
So here we are: docker image in the registry, cluster with registry access. Now we need to bring this image into our cluster. There are multiple ways to deploy workloads to Kubernetes, but without yaml files this wouldn’t be a post about Kubernetes. So, let’s declare what we want Kubernetes to do with our image (`deployment.yaml`):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wellroom-room-depl
  labels:
    app: wellroom
    component: wellroom-room
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wellroom-room-micro
  template:
    metadata:
      labels:
        app: wellroom-room-micro
    spec:
      containers:
      - name: wellroom-room
        image: tkrblog.azurecr.io/wellroom:1.0.0-dev.1
        ports:
        - name: http
          containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
```
To apply this configuration to Kubernetes (as we are still connected to the cluster):
```shell
$ kubectl apply -f ./deployment.yaml
```
Essentially, we told Kubernetes to deploy our application. The work Kubernetes does after this command is quite impressive, but out of scope for this post. To make it short – Kubernetes runs the image as a container in a so-called pod.
Read about Kubernetes Deployments, Replicasets and Pods, if you want to dig deeper.
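To see what Kubernetes actually did, a few commands help; the names below match our `deployment.yaml`:

```shell
# Watch the pod(s) created by our deployment come up
kubectl get pods -l app=wellroom-room-micro

# Block until the rollout has finished (or failed)
kubectl rollout status deployment/wellroom-room-depl

# If something goes wrong: events and container logs
kubectl describe deployment wellroom-room-depl
kubectl logs -l app=wellroom-room-micro
```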
Since pods are isolated workloads, they are not accessible by default. We need to expose them with a service. For us, this is easily done by creating a service definition (`service.yaml`) that exposes the deployment. Behind that, we could explore things like the impressive Kubernetes service discovery.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wellroom-room
  labels:
    app: wellroom
    component: wellroom-room
spec:
  # type: LoadBalancer
  ports:
  - port: 443
    targetPort: 3000
    protocol: TCP
  selector:
    app: wellroom-room-micro
```
```shell
$ kubectl apply -f ./service.yaml
```
Finalizing the puzzle: the benefit of the cloud.
Well, there we have it: our running mock service in a cloud-based Kubernetes cluster. Before we uncomment the line `type: LoadBalancer`, let’s explore what it does. With the line commented out like this, Kubernetes exposes our running pod (created by the `deployment.yaml`) with the service (`service.yaml`). Since we did not specify where to expose it, Kubernetes makes it available cluster-internally only (the default type, `ClusterIP`). That’s what you usually want if you have a microservice architecture: only services inside the cluster can call each other. But that does not fit our use case; we want to show our service to the world! There are a couple of ways to achieve this in Kubernetes (and esp. on a cloud instance!).
- port-forwarding: this is for the development process only and no real option here.
- NodePort: short answer – don’t use it (maybe if you have zero budget).
- LoadBalancer: exposes one service to the world.
- Ingress: actually, not a service, but the most powerful way (multiple services with same IP address/Routing, SSL, Authentication, …).
If you want to dig deeper into this, this compact article provides great visuals to explain the differences. We stick with the LoadBalancer (could also use NodePort – but as I said, don’t use it…).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wellroom-room
  labels:
    app: wellroom
    component: wellroom-room
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 3000
    protocol: TCP
  selector:
    app: wellroom-room-micro
```
```shell
$ kubectl apply -f ./service.yaml
$ kubectl get svc wellroom-room -w
NAMESPACE   NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
default     wellroom-room   LoadBalancer   10.0.39.110   <pending>        443:3000/TCP   52s
...
default     wellroom-room   LoadBalancer   10.0.39.110   184.108.40.206   443:3000/TCP   52s
```
The last shell command shows our service as `<pending>` while Azure provisions an actual load balancer for us (the parameter `-w` watches the resource for changes). Once this is done, we’ll see the external IP address where we can finally reach our mock service. So, if we browse to `https://220.127.116.11`, we can see our publicly accessible API!
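Instead of copying the IP from the console output, we can also grab it with `kubectl` and test the API right away. A sketch — the `/rooms` route is an assumption based on our `business_controller`, and `-k` skips certificate validation, since we have not configured TLS yet:

```shell
# Read the external IP assigned to the LoadBalancer service
EXTERNAL_IP=$(kubectl get svc wellroom-room \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Call the (hypothetical) rooms endpoint of our mock API
curl -k "https://${EXTERNAL_IP}/rooms"
```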
The final folder structure looks like this.
```
wellroom-room/
├── Dockerfile
├── README.md
├── api
│   ├── business_controller
│   │   └── rooms.py
│   └── tech_controller
│       └── health.py
├── app.py
├── deployment.yml
├── requirements.txt
├── service.yml
└── swagger.yaml
```
There is always more: what we did cover and what not.
While we did cover a lot of topics in this post, we also leapt over a bunch. For example, our service now has a dynamic IP address. As soon as we modify the service (i.e. `service.yml`, not the `deployment.yml`), the public IP address is going to change. Usually we want a static IP and/or a DNS name. Other mandatory topics we skipped in this run were things like X-API keys, token mechanisms, certificate authentication with ingress etc. Nevertheless, we
- learned how to create an Azure Kubernetes Service and an Azure Container Registry, and how to connect them,
- accessed the cluster from the console of our choice and
- containerized our mock application and made it Kubernetes-ready.
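To give at least a hint at the static-IP topic we skipped: on AKS, one option is to reserve a public IP in the cluster’s auto-generated node resource group and reference it from the service. A rough sketch, not a full walkthrough (resource names are our examples):

```shell
# Look up the auto-generated node resource group of the cluster (MC_...)
NODE_RG=$(az aks show \
  --resource-group tkr-blog-aks-we \
  --name tkr-blog-k8s-we01 \
  --query nodeResourceGroup -o tsv)

# Reserve a static public IP there
az network public-ip create \
  --resource-group "$NODE_RG" \
  --name wellroom-static-ip \
  --sku Standard \
  --allocation-method Static
```

The resulting IP could then be pinned in `service.yml` via `spec.loadBalancerIP` (or via Azure-specific service annotations on newer setups).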
Sneak peek at Part III
We did an initial set-up, but a lot of the steps were rather manual. The next post focuses on automating the build and release process. We are going to learn how Azure DevOps Pipelines integrate with Kubernetes and how we can leverage Kubernetes namespaces as environments for our staging, from development over testing to production.