Docker Container Alternatives for 2022
The word “Docker” has been ubiquitous in the technology industry for the better part of a decade. Docker is the company that introduced the world to a new and unique way to package applications that run on Linux systems and use kernel-level functionality to provide extra isolation (as compared to normal processes). They named this new technology “Docker containers” (after their company), and since then, everything that’s related to containers has been called “Docker,” just like tissues are often called “Kleenex” and using a search engine is called “Googling.”
Docker (the company) expanded beyond this core technology to include enterprise-friendly features, orchestration, and Docker Hub, the world’s most widely used container registry. They also began selling products that focused once more on what Docker does best – developer tooling.
As with other early innovators, Docker may have been first to the market, but their products are no longer required to actually build and maintain a containerized production environment. This article will explore alternatives to Docker in all major container technologies, particularly open source. It will also explain what, if anything, you need to change in your processes to be able to adopt the alternative.
Container Build Tools
Most people use `docker build` as their default way to build container images. The build relies on a file named “Dockerfile,” which resides in the root directory of the application that is to be packaged in a container. A Dockerfile starts from a base image, specified by the FROM instruction. From that base, RUN instructions execute commands that add layers to the image, and CMD sets the default command the container runs when it starts.
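As a concrete sketch, here is what a minimal Dockerfile might look like for a hypothetical Python web service (the base image, package, and file names are illustrative, not from any particular project):

```shell
# Write a minimal Dockerfile for a hypothetical Python service.
# FROM picks the base image, RUN adds a layer, CMD sets the startup command.
cat > Dockerfile <<'EOF'
FROM python:3.10-alpine
RUN pip install --no-cache-dir flask
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF
```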
There are several Docker alternatives that can build container images within a Kubernetes cluster, including Kaniko from Google. The easiest direct replacement is buildah, which mirrors all of the build commands that exist in the Docker CLI. In addition, buildah allows you to build an image layer by layer from the command line, so you don’t need to maintain a single Dockerfile. You can find step-by-step building instructions in this tutorial.
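As a sketch of that layer-by-layer workflow (assuming buildah is installed; the package and image names here are our own choices):

```shell
# Build an image interactively with buildah -- no Dockerfile required.
ctr=$(buildah from alpine:latest)              # start a working container from a base image
buildah run "$ctr" -- apk add --no-cache curl  # run a command, adding a layer
buildah config --cmd "ping localhost" "$ctr"   # set the default command
buildah commit "$ctr" my-pinger:latest         # commit the result as a new image
buildah rm "$ctr"                              # remove the working container
```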
Anyone who is familiar with Docker will recognize the following commands, which make a directory to host an application and the world’s simplest container (which will just ping localhost):
```
[user@host ~]$ mkdir container
[user@host ~]$ cd container/
[user@host container]$ cat >> Dockerfile << EOF
> FROM alpine:latest
> CMD ping localhost
> EOF
[user@host container]$ buildah bud .
STEP 1: FROM alpine:latest
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob 59bf1c3509f3 done
Copying config c059bfaa84 done
Writing manifest to image destination
Storing signatures
STEP 2: CMD ping localhost
STEP 3: COMMIT
Getting image source signatures
Copying blob 8d3ac3489996 skipped: already exists
Copying blob 5f70bf18a086 done
Copying config 944addf7c4 done
Writing manifest to image destination
Storing signatures
--> 944addf7c4f
944addf7c4f494d11645e5e4e2d0a8ae3c70789aa283f9c4bc03c88cb453ec09
```
Now we’ve built a container image, but it doesn’t have a name or any tags. In the image list below, the second entry is the base image that was automatically downloaded.
```
[user@host container]$ buildah images
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
<none>                     <none>   944addf7c4f4   5 seconds ago   5.87 MB
docker.io/library/alpine   latest   c059bfaa849c   4 days ago      5.87 MB
```
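Giving the image a name is one more command. The image ID comes from the build output above; the name `localhost/pinger` is our own choice:

```shell
# Tag the unnamed image so it is easier to reference later.
buildah tag 944addf7c4f4 localhost/pinger:latest
buildah images   # the image now shows a repository and tag
```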
Note: All build tools will look for Dockerfile as the main source file for a container build, but certain groups in the industry are starting to push for “Containerfile” to be the standard instead. Nonetheless, Dockerfile will continue to work in all tools for the foreseeable future.
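podman and buildah already honor the newer name: both look for a Containerfile first and fall back to Dockerfile. Assuming the container/ directory from the build above, switching over is just a rename (the image tag here is our own choice):

```shell
# podman build checks for Containerfile before Dockerfile,
# so renaming the file is all it takes.
mv Dockerfile Containerfile
podman build -t pinger .
```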
Container Runtimes
The two most popular runtimes used to spawn containers are runC and containerd. Both are open source, and neither is controlled by a single company: runC is the reference runtime maintained under the Open Container Initiative (OCI), while containerd is a Cloud Native Computing Foundation project. containerd runs as a daemon that controls the launching and shutdown of containers as required, calling runC to do the low-level work of actually creating them. Docker itself uses containerd and runC under the hood, so at this layer you don’t really need a Docker alternative – it’s already open and standardized.
You may hear other names mentioned when it comes to runtimes, like podman and CRI-O, which provide higher-level interfaces that call an OCI-compliant runtime in the background. CRI-O is extremely lightweight and built to be used within a Kubernetes cluster. Podman is built to be a replacement for Docker’s command line tooling and works hand-in-hand with buildah to provide that functionality.
This is where podman becomes a viable Docker alternative for the management of containers on a development machine or for workloads that don’t need multi-node orchestration capabilities.
Podman has all the command line capabilities that Docker has, and it’s available on Linux, Mac, and Windows. It also has a better security profile: podman is daemonless, so there is no background service running as root, and with its default runtime (crun) containers can run entirely rootless. The Windows and Mac versions are remote clients that can use a local VM (as Docker Desktop does) or connect to a remote host.
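On Mac and Windows, podman can manage the backing Linux VM itself; a minimal sketch, assuming a recent podman release with the `machine` subcommand:

```shell
podman machine init    # create and configure the Linux VM (first run only)
podman machine start   # boot the VM and point the remote client at it
podman info            # confirm the client is talking to the VM's podman service
```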
Podman is essentially a drop-in replacement on the command line. For example, this is running the container that we built previously:
```
[user@host container]$ podman run 944addf7c4f4
PING localhost (::1): 56 data bytes
64 bytes from ::1: seq=0 ttl=64 time=0.027 ms
64 bytes from ::1: seq=1 ttl=64 time=0.131 ms
64 bytes from ::1: seq=2 ttl=64 time=0.087 ms
64 bytes from ::1: seq=3 ttl=64 time=0.066 ms
64 bytes from ::1: seq=4 ttl=64 time=0.063 ms
64 bytes from ::1: seq=5 ttl=64 time=0.064 ms
^C
--- localhost ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.027/0.073/0.131 ms
```
The method to launch a container in the background and see what’s running is also very similar to what you do on the Docker CLI.
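For example, running the image from the earlier build detached looks almost identical to the Docker workflow (the container name here is our own choice):

```shell
podman run -d --name pinger 944addf7c4f4   # -d detaches, just as in Docker
podman ps                                  # list running containers
podman logs pinger                         # view the ping output
podman stop pinger && podman rm pinger     # shut down and clean up
```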
If you operate within more complex scenarios or want to simulate production environments using Kubernetes without using KinD (Kubernetes in Docker), then there are micro-distributions of Kubernetes like microk8s and minikube, Rancher Desktop, and CodeReady Containers (CRC) that will work very well. They all support Mac, Windows, and Linux. You might want to match your desktop to production as closely as possible, especially if you are testing things like operators.
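Spinning one of these micro-distributions up is typically a one-liner; for example, with minikube (the driver depends on your environment, and `--driver=podman` assumes podman is installed):

```shell
minikube start --driver=podman   # or another supported driver for your platform
kubectl get nodes                # the cluster's single node should report Ready
```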
Registries and Repositories
You will hear both of these terms in the container space. Simply put, a repository is where images are actually stored, and a registry acts as an index across one or more repositories. In most cases (as with Docker Hub), they are one and the same, meaning that the registry only accesses its own repositories.
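The distinction shows up in a fully qualified image reference; a sketch using the Docker Hub alpine image:

```shell
# A fully qualified image reference names the registry, the
# repository (including its namespace), and a tag:
#
#   docker.io / library/alpine : latest
#   registry    repository       tag
podman pull docker.io/library/alpine:latest
```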
If you want to get away from just using Docker Hub (whether it’s to avoid its quotas or because you need something that can be run on-premises for security reasons), there are multiple options available.
If you want a hosted service, every major public cloud has an offering – like the Container Registry on Google Cloud. If you want to be cloud-agnostic, then building a registry into your source code management platform might be ideal. GitHub has a container registry available through its packages feature (which you can find on GitHub.com).
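Any OCI-compliant client can push to these registries the same way; a sketch for GitHub’s registry (the user and image names are placeholders, and authentication uses a GitHub personal access token):

```shell
podman login ghcr.io                                     # username plus personal access token
podman tag localhost/pinger:latest ghcr.io/your-user/pinger:latest
podman push ghcr.io/your-user/pinger:latest
```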
For on-premises solutions, the long-time industry leader is Sonatype Nexus, which has both open source and professional versions. Artifactory by JFrog is another strong contender in this space; it is available in both self-hosted and cloud-hosted forms.
Logging in to an alternative OCI-compliant container registry can be as easy as executing one command:
```
$ podman login acme-dockerv2-virtual.jfrog.io
Username: myusername
Password:
Login Succeeded!
```
If the container registry has anonymous access, you can just pull directly from it without logging in:
```
$ podman pull acme-dockerv2-virtual.jfrog.io/hello-world
```
Container Orchestration
This is the easiest place to find an alternative to Docker’s technology, as even Docker sold off Swarm and has adopted Kubernetes. While other orchestration engines exist, the vast majority of the industry has chosen Kubernetes, which has no real competition in this space. Moving into 2022, there are 55 certified Kubernetes distributions and 48 certified hosted services.
There is a hosted service or distribution for every need – from bare-bones offerings like Azure AKS, to easy-to-use, developer-focused services like CIVO, to full-blown enterprise distributions like SUSE Rancher and Red Hat OpenShift.
Container Foundations and Open Standards
Multiple companies started building Docker-compatible technologies almost as soon as containers were introduced. This was possible because the specification Docker created leveraged existing functionality within the Linux kernel. Over time, Docker joined the industry foundations that formed around these standards, since cross-compatibility helps everyone move forward.
For example, Docker donated their container specification and their core runtime engine (runC) to the OCI. They also joined the Cloud Native Computing Foundation (CNCF). The CNCF provides structure and governance for many of the most widely used container technologies – the most famous being Kubernetes. If you want to see just how expansive the space has become, browse the hundreds of projects and products in the CNCF Landscape.
Since Docker is a core contributor to these open standards and foundations, containers that are built with Docker’s tools can be mixed and matched with projects and products from other organizations.
The key to a good container strategy is to pick the set of tools that works best for you – as long as those tools support OCI-compliant containers and ideally use Kubernetes to run in production. Then, you will have no problem meeting your organization’s requirements. Whether it’s security scanning, serverless applications, or enterprise-friendly monitoring, you will be able to find products and services in this ecosystem.