Navigating Running Containers in Azure

Despite the popularity of container applications, many people are still not using them in production. The good news is that a complex Kubernetes setup is not the only option to run and host containers in a production environment. Azure offers many different options to make the process of hosting and running containers simpler.
That said, it can sometimes be hard to work out which option is best. In this post we’re going to take a brief look at the options available and how to start your investigation into the right one for your needs. Each option is feature rich and constantly being updated, so this is not a comprehensive guide, but rather a starting point for your own research.
Trade-offs
Before we look at each service's features, there are some common trade-offs between the options worth considering.
With all of the options, the hosting styles range from completely serverless through to fully self-managed IaaS solutions. The trade-off between these is the flexibility (and complexity) of IaaS against the ease of use of serverless. PaaS services sit in between, giving a balance of both.
The cost trade-offs are harder to quantify. With serverless solutions you may pay more per minute of compute, but it may actually be cheaper overall, as you don’t pay for the underlying infrastructure when you are not using it. You also have to consider the cost of the time it takes to manage the infrastructure where that is needed.
Azure Container Registry Tasks
Azure Container Registry (ACR) is a great service for storing and managing your container images. What you might not know is that you can actually run containers through it.
ACR Tasks require no initial infrastructure setup, with options to run containers via quick tasks, multi-step tasks or automatically triggered tasks. The only requirement is that the container image is located in your ACR.
The tasks run on either a Windows or Linux worker, much like a CI/CD process, and are billed per second. You can also scale up and attach VNets by using your own dedicated Linux agent pools (currently in preview).
These would typically be used for CI/CD tasks such as building container images. However, you can also use them to run a container that performs a specific job, such as a maintenance task.
To run a quick task you can use the Azure CLI or Azure PowerShell, but not the Azure Portal.
az acr run -r blueboxes --cmd 'blueboxes.azurecr.io/samples/cowsay "Hi Everyone"' /dev/null
The task runs in a container and the output is streamed to the terminal. The run is also shown in the Azure Portal under the Tasks section of the ACR.
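Auto-triggered tasks can also run on a timer. As a sketch (the task name and image are hypothetical, and assume the image already exists in the registry), this creates a scheduled task that runs a maintenance container every night at 02:00 UTC:

```shell
# Create a timer-triggered ACR task that runs a container nightly at 02:00 UTC.
# The task name and image below are illustrative examples.
az acr task create `
    --registry blueboxes `
    --name nightly-maintenance `
    --cmd 'blueboxes.azurecr.io/samples/maintenance-job:latest' `
    --schedule "0 2 * * *" `
    --context /dev/null
```

The `--context /dev/null` tells the task it has no source code context to build; it simply runs the given command on the schedule.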
Of all the options this is the quickest to get started with; however, it is not a full container hosting solution. It is a way to run tasks in containers, so it is not suitable for running long-lived applications.
Azure Container Instances
Azure Container Instances (ACI) is one of the oldest container hosting options in Azure, and supports both Windows and Linux containers. It is a fully managed service that allows you to run containers without having to manage any underlying infrastructure.
You first create the Container Group (the compute layer) and then add one or more containers to it. The amount of CPU and RAM can only be set when you create the Container Group.
Personally, I found the documentation and commands confusing, as the terms Container Group and Container Instance are often used interchangeably. The following single Azure CLI command creates a Container Group with one container in it. This can also be done with Terraform, Bicep or the Azure Portal.
az container create `
--resource-group $resourceGroupName `
--name $appName `
--image mcr.microsoft.com/azuredocs/aci-helloworld:latest `
--dns-name-label aci-helloworld `
--os-type 'Linux' `
--cpu 1 --memory 1 `
--ports 80
ACI is great for one-off tasks, short-lived applications or running Azure DevOps build agents. By default the container restarts automatically when it exits, but you can configure it to never restart or to restart only on failure. The Azure Portal support for ACI is good, giving visibility of the container group and container logs, and even the option to connect to the container. The main drawbacks are that a container group cannot be scaled once deployed, and there is no easy way to serve SSL connections.
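As a sketch of the restart behaviour (the container name is illustrative, and the image is the standard hello-world sample from the Azure docs), the policy is set at creation time with the `--restart-policy` flag:

```shell
# Create a container that runs once and only restarts if it fails.
# Valid restart policies are Always (the default), Never and OnFailure.
az container create `
    --resource-group $resourceGroupName `
    --name maintenance-job `
    --image mcr.microsoft.com/azuredocs/aci-helloworld:latest `
    --os-type 'Linux' `
    --restart-policy OnFailure
```

`OnFailure` is the usual choice for task-style workloads, while `Always` suits services that should stay up.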
Overall, ACI is a quick, cheap way to run containers in Azure.
Specialist Container Support
Many Azure PaaS services offer the option to provide a container to extend the service. This can be a great way to start using containers, or to bring customization to a PaaS or serverless service. These include Azure Machine Learning, Azure Functions and Azure Databricks.
Container on Azure App Service Plan
Azure App Service Plan (ASP) is another service that has been around for some time; however, a few years ago it was extended to support containers. In fact, if you are running a Linux App Service Plan, you will find that your app is automatically placed inside a container, with Kudu running in a sidecar.
This means you get all the benefits of the App Service Plan, such as scaling, custom domains, deployment slots and custom SSL support, but with the added flexibility of running your application in a Windows or Linux container. Much like ACI, you provision the compute layer first, which is the App Service Plan, and then deploy your containers into it.
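As a minimal sketch of that two-step flow (the variable names and SKU are illustrative, and the image is the hello-world sample from the Azure docs), you create a Linux plan and then a web app pointing at a container image:

```shell
# 1. Create the compute layer: a Linux App Service Plan.
az appservice plan create `
    --resource-group $resourceGroupName `
    --name $planName `
    --is-linux `
    --sku B1

# 2. Deploy a web app into the plan, running a container image.
az webapp create `
    --resource-group $resourceGroupName `
    --plan $planName `
    --name $appName `
    --deployment-container-image-name mcr.microsoft.com/azuredocs/aci-helloworld:latest
```

Scaling, custom domains and SSL are then configured on the plan and web app, the same as for any other App Service app.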
Because container support was added to App Service Plans later, the service lacks some container features. Containers in an App Service Plan are all public by default, and there is currently no Dapr support. They do, however, support container sidecars, which should bring many more options in the future.
The biggest drawback overall is that all apps in an App Service Plan scale together, so it is not well suited to microservices. App Service Plans also do not scale to zero, so you will always be charged for the plan even when you are not using it.
Overall, this is a good option for running containers in Azure if you are already using App Service Plans and want to run containers without having to manage the underlying infrastructure.
I have a longer post on App Service Web App for Containers vs Azure Container Apps that goes into more detail on this topic.
Azure Container Apps
Azure Container Apps (ACA) is a relatively new service for running containers, built on top of Azure Kubernetes Service (AKS).
Unlike AKS, it is a fully managed service that hides the complexity that often comes with Kubernetes. For example, where in AKS you have complex ingress and egress to configure, in ACA each container app has three simple ingress options: none, internal or external. Overall, this service gives you the benefits of App Service Plans (ease of use, custom SSL, deployment options and so on) with the power of Kubernetes.
Using ACA, you once again create the compute layer first, the Container App Environment, and then deploy your containers into it. This can run on either serverless (Consumption) or Dedicated plans.
ACA supports KEDA scaling, giving many more options for how and when to scale than an App Service Plan. You can also scale each container app independently, all the way down to zero if you wish. This makes it a great option for running microservices in Azure.
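A minimal sketch of that flow (the variable names, location and image are illustrative) is an environment plus a container app with external ingress that can scale to zero:

```shell
# 1. Create the compute layer: the Container App Environment.
az containerapp env create `
    --resource-group $resourceGroupName `
    --name $envName `
    --location westeurope

# 2. Deploy a container app with external ingress that scales between 0 and 5 replicas.
az containerapp create `
    --resource-group $resourceGroupName `
    --environment $envName `
    --name $appName `
    --image mcr.microsoft.com/azuredocs/aci-helloworld:latest `
    --ingress external `
    --target-port 80 `
    --min-replicas 0 `
    --max-replicas 5
```

With `--min-replicas 0` the app scales to zero when idle, so you only pay while requests (or other KEDA triggers) are driving replicas.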
There are a few minor drawbacks, such as ACA only supporting Linux containers and the terminology being different from both ASP and AKS. Overall, however, it is a great option for running containers in Azure when you don’t need fine-grained control.
Azure Kubernetes Services
Azure Kubernetes Service (AKS) is the most powerful and flexible option for running containers in Azure. Although it gives you the most flexibility, you are responsible for managing the underlying infrastructure, which can be complex and time consuming.
You interact with AKS using the Kubernetes API, CLI and Helm charts, the same as with any other Kubernetes cluster. Behind the scenes it creates Azure infrastructure, such as Virtual Machine Scale Sets and various load balancers, that reflects the configuration you have set in the Kubernetes cluster.
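As a sketch (the cluster name and node count are illustrative), creating a basic cluster and pointing kubectl at it looks like this:

```shell
# Create a basic two-node AKS cluster.
az aks create `
    --resource-group $resourceGroupName `
    --name $clusterName `
    --node-count 2 `
    --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig,
# then interact with it like any other Kubernetes cluster.
az aks get-credentials `
    --resource-group $resourceGroupName `
    --name $clusterName
kubectl get nodes
```

From here on, everything (deployments, ingress, Helm releases) is standard Kubernetes tooling rather than Azure-specific commands.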
To address some of these drawbacks, additional features are being added to help with maintenance and management, such as Azure Kubernetes Fleet Manager and the Automatic SKU (currently in preview).
Overall AKS is a huge topic and a whole ecosystem in itself that is beyond the scope of this post. However, it is worth noting that it can be a great option for running containers in Azure if you need the control that it offers and can manage the maintenance.
Container Workloads on Azure Batch
I also want to briefly mention Container Workloads on Azure Batch, a service that can run container jobs. It is not a full container hosting solution and is better suited to batch jobs or tasks that can be run in parallel.
Azure Batch allows you to take a task and run it in parallel across multiple nodes. It is driven not via the portal but through a set of APIs and SDKs. Batch is typically used for HPC and other compute-intensive tasks such as rendering, simulations and data processing. Custom applications split the work into tasks and pass them to Azure Batch. The underlying infrastructure is based on Azure VM Scale Sets, and it can scale up and down based on the number of tasks you have.
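As a rough sketch of where containers fit in (the pool ID, VM image reference and container image are illustrative assumptions; the exact VM image you need varies), a Batch pool declares a container configuration so that its tasks can run inside container images:

```json
{
  "id": "container-pool",
  "vmSize": "STANDARD_D2S_V3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "microsoft-dsvm",
      "offer": "ubuntu-hpc",
      "sku": "2204",
      "version": "latest"
    },
    "nodeAgentSkuId": "batch.node.ubuntu 22.04",
    "containerConfiguration": {
      "type": "dockerCompatible",
      "containerImageNames": [ "blueboxes.azurecr.io/samples/batch-job:latest" ]
    }
  },
  "targetDedicatedNodes": 2
}
```

Tasks submitted to such a pool can then specify container settings (which image to run and any container run options) rather than a bare command line on the node.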
Microsoft now recommends AKS for rendering and other GPU-intensive tasks, so Batch has fewer use cases than it once did.
Containers on VMs
As with any cloud provider, you can always run containers on virtual machines (VMs). This gives you the most flexibility and control over your environment, but also requires the most management and maintenance as it is your full responsibility to patch, secure, configure and maintain every aspect of the solution.
This can be useful if you have existing VMs that you wish to move into the cloud, or a specific requirement that is not met by any of the other services.
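If you do go down this route, a minimal sketch (the VM name is illustrative) is to create a VM and then install and manage a container runtime yourself:

```shell
# Create an Ubuntu VM; patching, security and the container runtime
# are entirely your responsibility from here on.
az vm create `
    --resource-group $resourceGroupName `
    --name docker-host `
    --image Ubuntu2204 `
    --generate-ssh-keys

# Then SSH in, install Docker and run your container, for example:
#   sudo apt-get update && sudo apt-get install -y docker.io
#   sudo docker run -d -p 80:80 nginx
```

Everything the managed services handle for you (restarts, scaling, TLS, OS updates) now has to be scripted or configured by hand.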
Conclusion
There are many options for running containers in Azure, each with its own trade-offs and benefits. The best option for you will depend on your specific requirements, such as the level of control you need, the complexity of your application, and your team’s expertise. Hopefully this post has given you a good overview of the options available and how to start your investigation into the right one for your needs.
Title Photo by Pat Whelen on Unsplash