We run hundreds of short-lived containers that have to run in multiple regions around the world.
Consider the following Kubernetes architecture, where we deploy pods and CronJobs to Kubernetes using Helm or plain YAML.
Traditional Kubernetes uses physical nodes, so it is not really serverless. However, we can have containers that are orchestrated by Kubernetes but deployed to Azure Container Instances around the globe. This becomes a serverless solution: we tell Kubernetes to schedule our containers on virtual nodes, and the virtual node provider takes care of deploying the pod.
This solution works really well because it combines serverless with the benefits of traditional container orchestration. We can still work with a Helm template or kubectl apply -f myCronJob.yaml
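To make the two deployment paths concrete, here is a sketch of both (the Helm release name and chart path are placeholders; the manifest name comes from the example below):

```shell
# Option 1: render and install/upgrade a chart with Helm
helm upgrade --install my-cronjob ./my-chart

# Option 2: apply the raw manifest directly with kubectl
kubectl apply -f myCronJob.yaml
```

Either way, Kubernetes sees an ordinary CronJob; only the node selection decides whether it lands on a physical node or a virtual node.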
Here is a sample CronJob being scheduled on a Google Kubernetes Engine cluster in Australia, where the ACTUAL job will run in East US.
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: MyApp
  ...
spec:
  ...
    spec:
      template:
        ...
        spec:
          nodeSelector:
            kubernetes.io/role: agent
            beta.kubernetes.io/os: linux
            type: virtual-kubelet
            kubernetes.io/hostname: virtual-node-east-us
          tolerations:
          - key: virtual-kubelet.io/provider
            operator: Exists
          - key: azure.com/aci
            effect: NoSchedule
          containers:
          ...
            image: scacraae.azurecr.io/myimage:0.1.0-165
            imagePullPolicy: IfNotPresent
          imagePullSecrets:
          - name: docker-registry-acr-secret
  ...
  schedule: '* * * * *'
```
The above solution works really well, but there are some improvements to consider.
Optimise Container Image Size
Ensure your Docker images are as small as possible. If you want ACI to pull the image as fast as possible, you need the smallest base image you can get — whether a minimal Linux distribution or even a non-Linux unikernel.
Alpine Linux is a great small image, but it comes with restrictions due to its C library, musl libc. For example, Node applications that use Electron are impossible to run on Alpine. What we need is a super-small Linux image built on glibc.
Possible solutions include building microkernels with just the files you need (such as https://vorteil.io/), or stripping your containers to the bare minimum and unpacking your Debian packages during container bootstrap. This is something we are looking into; it could reduce our image size substantially, from GBs to MBs.
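One glibc-based option worth sketching is Google's distroless base images: they contain glibc and little else, so musl incompatibilities do not apply while the final image stays small. Below is a minimal multi-stage Dockerfile sketch for a hypothetical Node app (the app files, `index.js`, and image tags are assumptions, not from our actual setup):

```dockerfile
# Build stage: full Debian-based Node image with npm and build tools
FROM node:18-bullseye AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: distroless runs on glibc (no musl restrictions)
# and ships without a shell or package manager, keeping it small.
FROM gcr.io/distroless/nodejs18-debian11
WORKDIR /app
COPY --from=build /app .
# The distroless nodejs image's entrypoint is node itself,
# so CMD is just the script to run.
CMD ["index.js"]
```

The multi-stage split means npm, compilers, and caches never reach the image that ACI has to pull.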
Use a Globally Replicated Container Registry
Use a container registry that exposes a single URL but is replicated across geographic locations. We use Azure Container Registry, since our serverless layer is Azure Container Instances managed from Google Kubernetes. This means we pay ZERO egress costs for downloading the image, and each region where serverless containers spawn will pull from its regional replica.
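Setting this up with the Azure CLI looks roughly like the following sketch (the registry name, resource group, and regions are placeholders; geo-replication requires the Premium SKU):

```shell
# Create a Premium-tier registry (geo-replication needs Premium)
az acr create --name myregistry --resource-group my-rg --sku Premium

# Add regional replicas; all pulls still go through the single
# myregistry.azurecr.io URL and are served by the nearest replica
az acr replication create --registry myregistry --location eastus
az acr replication create --registry myregistry --location australiaeast
```

Clients never need region-specific URLs; Azure routes each pull to the closest replica automatically.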
Even though Azure Container Instances already pulls from a local registry replica (Premium ACR with geo-replication), there is still an opportunity to support caching. Why? Each ACI runs in a dedicated VNET and resource group, so there is scope for the Microsoft ACI product team to let you whitelist the images you want cached. This would ensure images are served from within the same physical datacentre. It has not been built yet, but I suspect Microsoft will deliver this capability soon.
Throw away HTTP for Image Pulling
The protocols we currently use to pull images carry a lot of overhead. We need a dedicated protocol for pulling images, and Microsoft is working on one: a technology called Teleport.
We are now entering a new chapter of serverless, where Kubernetes orchestration can control containers all over the world with a very simple deployment model. As soon as we can tick off all the challenges above, the sky is the limit.
Check out https://inspectant.io, where we run serverless across the globe to provide our customers with state-of-the-art technology to monitor and test their systems.
I am looking forward to Microsoft shipping Teleport for ACI, and ACI image caching at the resource group/VNET level, in the near future.