Docker Developer Tools

The Docker Developer Tools are a set of tools provided by Docker for creating, managing, and sharing images: immutable snapshots of an environment, built from instructions in a text file usually named Dockerfile.

The Docker integrated toolset gives you everything you need to manage images and containers locally, along with other goodies like security checks, composable Docker images, and extension management.

Setting up a development environment with Docker

Installing Docker on your machine

There are two main ways to use Docker:

  1. Directly through the server (Docker Engine), the core of Docker, usually accessed via the command-line interface.
  2. Through the desktop client, which wraps the engine in a graphical user interface and offers an intuitive way to access the tools. Desktop versions are available for every major operating system (Linux, macOS, and Windows).

This article will concentrate on the server version.

Docker provides binaries for all major Linux distributions, and the installation process is usually the same for each one:

  1. Install the dependencies.
  2. Add the Docker binaries repository.
  3. Install the server toolset.

On Ubuntu, for example, you start by installing the required dependencies using your shell terminal:

$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release

Once they are installed, you can import the GPG keys and Docker PPA repository:

$ sudo mkdir -m 0755 -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/nullCode language: PHP (php)

Then, update your repository index and install Docker Engine and the utilities:

$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin

Finally, test your installation with:

$ docker pull hello-world
$ docker run hello-world

The Docker daemon connects to Docker Hub and downloads a pre-built image called hello-world. Then, the run command starts a container from that image, which echoes a standard message before exiting.

Creating and Managing Docker Images

The Docker toolset builds local images from image definitions: text files conventionally named Dockerfile, without an extension. These files contain the sequence of commands that Docker executes to assemble an image.

For example, the following Dockerfile defines an image based on Ubuntu 22.04 and, after updating the local apt repositories, installs the popular NGINX HTTP server:

FROM ubuntu:22.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y --no-install-recommends nginx
# No "service nginx start" here: services launched in RUN steps do not persist
# into the image; the CMD below is what starts NGINX at container runtime.
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]Code language: Dockerfile (dockerfile)

The docker build command takes the specification (the Dockerfile) located in the current directory (hence the dot at the end) and builds a runnable local image tagged sample_nginx:

$ docker build -t sample_nginx .

In Docker, objects (images, containers, volumes, networks, etc.) are managed using specialized commands. In this case, you can list all available subcommands for image objects by using:

$ docker image --help

This gives you the basic management tools for local images. The most common tasks include listing, removing, and pruning unused images (remember that each image consumes space on your local disk). You can also pull images from and push them to the public Docker Hub registry, as shown below.
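
For example, the following commands cover those day-to-day tasks (the myuser repository name is illustrative, and pushing requires a prior docker login):

$ docker image ls                                  # list local images
$ docker image prune                               # delete dangling (untagged) images
$ docker image pull ubuntu:22.04                   # download an image from Docker Hub
$ docker tag sample_nginx myuser/sample_nginx:latest
$ docker image push myuser/sample_nginx:latest     # upload the tagged image
$ docker image rm sample_nginx                     # remove a specific local image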

Running and Connecting to Docker Containers

Once you have a local image, running it is as simple as executing a command. Docker is flexible enough to let you run specific tagged versions of the images or attach specific volumes or networks to the actual container that will be created based on the image.

In the following example, the sample_nginx image that was built earlier will be used to create a container. The exposed port 80 will be linked to local port 80 so that you can access NGINX from your local browser:

$ docker run -p 80:80 sample_nginx

You can check the running container with the docker ps command to get its ID; then you can get into the running instance by invoking the exec command with the interactive flag on that specific container:

$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                               NAMES
3970c6698cfa   sample_nginx   "nginx -g 'daemon of…"   6 seconds ago   Up 4 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   crazy_bhabha
$ docker exec -it 3970c6698cfa /bin/bash

This executes the /bin/bash command inside the running container and connects it to your local shell terminal. This way, you can run commands that will be executed by the container operating system instead of your host.

Types of Docker Developer Tools

Once you understand the potential benefits of containerization for the development cycle, you’ll find some common instances where the basic creation and running of images will not be enough.

Fortunately, the Docker ecosystem is full of integrated and open source solutions for the most common use cases.

Docker Integrated Developer Tools

  • Docker Compose – Separation of concerns and single responsibility are design concepts that are simple to follow on their own, and they become powerful when you combine them to solve complicated problems. In the context of software development, having a monolithic container with all of the components deployed inside it is not an optimal solution.

    You don’t want your relational database engine running on the same machine as your HTTP proxy, for example. In such cases, you can run separate containers with specific roles and orchestrate them so that they interact as a single solution.

    The docker compose command takes a YAML file that declares the available resources (containers, volumes, networks) and the dependencies between them, so the whole stack can be launched with a single command (see the sketch after this list).
  • Docker Scan – Security is always a foremost concern in any software development cycle. To help you assess the security of locally-created images, Docker Scan finds the CVEs in a target image with the help of the Snyk security tool.

    You can also group the results and produce a dependency tree of your images. To use this tool, you need a Docker Hub account and, ideally, a free Snyk.io account (a one-line example follows the sketch below).
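
As a minimal sketch of composition, the following stack pairs the sample_nginx image built earlier with a Postgres database; the service names, credentials, and versions are illustrative:

$ cat > docker-compose.yml <<'EOF'
services:
  web:
    image: sample_nginx          # the image built earlier
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example # illustrative only
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
EOF
$ docker compose up -d    # launch the whole stack in the background
$ docker compose down     # stop and remove it

And with the scan plugin installed earlier, checking an image for CVEs is a one-liner:

$ docker scan sample_nginx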

Open Source Developer Tools

Syft – The SBOM (Software Bill of Materials) of a container image lists the dependencies used to build the image. The Syft utility produces SBOMs from Docker images in several formats and includes version numbers along with other metadata about the dependencies.

You can install Syft using the CLI:

$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

For example, you can see which version of the NGINX package is installed on the sample image as follows (the jq utility allows us to process the JSON output of Syft):

$ syft -o json sample_nginx | jq '.artifacts[] | select(.name == "nginx")'
…
"metadata": {
   "package": "nginx",
   "source": "",
   "version": "1.18.0-6ubuntu14.3",
   "sourceVersion": "",
   "architecture": "amd64",
   "maintainer": "Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>",
   "installedSize": 49,
   "files": …
}

Modus – Dockerfile text files are not the only way to define Docker images. The Modus language replaces the basic configuration standard with a more flexible (yet non-standard) syntax. It parallelizes some image-building tasks and allows you to use parameterized decisions inside the image definition.

For example, compare the following standard Docker configuration image definition with a Modus image definition:

FROM gcc:bullseye AS app  
COPY program.c program.c
ARG PROFILE
RUN if [ "$PROFILE" = "debug" ] ; then \
      CFLAGS=-g make -e program ; \
    else \
      make program ; \
    fi

And here is the equivalent Modus definition:

app(profile) :-
    from("gcc:bullseye"),
    copy("program.c", "program.c"),
    make(profile).

make("debug") :- run("make -e program")::in_env("CFLAGS", "-g").

make("release") :- run("make program").Code language: JavaScript (javascript)

Envd – The Dockerfile text configuration for defining images is sufficient for almost any use case. However, specialized users sometimes require more flexibility or structure. The envd CLI tool can create images for ML/AI environments defined through Python-syntax build scripts.

For example, the following script lets you create an image that runs the popular Jupyter lab development environment along with the NumPy Python library:

def build():
	base(os="ubuntu22.04", language="python3")
	# Configure the pip index if needed.
	# config.pip_index(url = "https://pypi.tuna.tsinghua.edu.cn/simple")
	install.python_packages(name = [
    		"numpy",
	])
	shell("zsh")
	config.jupyter()

Envd’s approach to image building takes advantage of some nice Python utilities (like pip) that result in faster image compilation, among other benefits.
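
Assuming envd has been installed from PyPI (its documented distribution channel) and the script above is saved as build.envd, building and entering the environment looks roughly like this:

$ pip install envd    # install the CLI
$ envd bootstrap      # one-time setup of the build backend
$ envd up             # build the image from build.envd and open a shell in it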

Miniboss – Composing containers benefits development environments in several ways, but the YAML (YAML Ain’t Markup Language) format used to define docker-compose stacks is not as flexible as a full programming language.

Miniboss allows developers to create stacks and define them with Python scripts. It also provides container lifecycle hooks so that you can react to container state changes with custom logic. For example, the following script configures a Postgres database container and a dependent web application:

#! /usr/bin/env python3
import miniboss

miniboss.group_name('readme-demo')

class Database(miniboss.Service):
	name = "appdb"
	image = "postgres:10.6"
	env = {"POSTGRES_PASSWORD": "dbpwd",
       	"POSTGRES_USER": "dbuser",
       	"POSTGRES_DB": "appdb" }
	ports = {5432: 5433}

class Application(miniboss.Service):
	name = "python-todo"
	image = "latinxpower/python-todo:0.0.1"
	env = {"DB_URI": "postgresql://dbuser:dbpwd@appdb:5432/appdb"}
	dependencies = ["appdb"]
	ports = {8080: 8080}
	stop_signal = "SIGINT"

if __name__ == "__main__":
	miniboss.cli()
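
Assuming the script above is saved as an executable file (the name miniboss-main.py is illustrative), the miniboss.cli() entry point exposes start/stop subcommands for the whole group:

$ chmod +x miniboss-main.py
$ ./miniboss-main.py start    # create and start both containers
$ ./miniboss-main.py stop     # stop the group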

Ctop – Container performance can be difficult to measure in local environments. While Linux developers are well-versed in the use of top/htop to get process metrics, they can also use the ctop utility to see the basic CPU, memory, and network usage per running container, as well as individual graphic indicators for a specific container, right in the shell terminal.
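
ctop needs no required arguments; once installed (binaries are published on its GitHub releases page, and it is available through package managers such as Homebrew), typical invocations are:

$ ctop        # interactive per-container CPU/memory/network overview
$ ctop -a     # limit the view to active (running) containers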

Docker-volume-backup – Local containers lose their filesystem state every time they are recreated. In Docker jargon, volumes are an abstraction over local disks that persist container data. You can attach local folders as volumes to specific containers and keep the data across container lifecycle events.

The docker-volume-backup utility lets you back up and restore those volumes to local directories, cloud services like AWS S3, or MinIO-compatible locations.
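
docker-volume-backup automates and schedules this; the underlying idea can be sketched with a plain throwaway container (this is the generic manual pattern, not the tool’s own syntax, and the app_data volume name is illustrative):

$ docker run --rm -v app_data:/data:ro -v "$(pwd)":/backup \
    ubuntu tar czf /backup/app_data.tar.gz -C /data .   # back up the volume
$ docker run --rm -v app_data:/data -v "$(pwd)":/backup \
    ubuntu tar xzf /backup/app_data.tar.gz -C /data     # restore it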

Watchtower – During development, it is common to start a container and let it run in the background as a dependency for another, more active one. (Database containers are a good example of this.) Watchtower periodically checks whether a running container uses the latest available image and, if it does not, automatically restarts it with the latest one.

It has “pre-” and “post-” hooks, which execute scripts that react to certain kinds of events by running code inside the affected container.
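
Watchtower itself runs as a container; its documented invocation mounts the Docker socket so it can monitor and restart its sibling containers:

$ docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower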

Sidekick – Debugging code running inside a container is not a straightforward task, given that containers are isolated environments. This problem can be solved using open source Sidekick agents inside your containers to collect logs, traces, and error stacks without affecting the running environment.

The current implementation is compatible with Java, Python, and Node.js applications. There are also plugins for several IDEs that are widely used by developers (like VS Code).

DockerSlim – Once your images are built and running properly, you can check additional details like image size and known vulnerabilities. The open source DockerSlim tool checks Docker images to optimize their size (it’s a good idea to keep your images small since small images consume fewer resources and take less time to build/transfer/instantiate).

It also checks for vulnerabilities along the possible attack surface at every layer, and you can inspect the changes made to your images with its xray utility.
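
Assuming the docker-slim binary is on your PATH, minifying and inspecting the sample image looks roughly like this (.slim is the suffix the tool appends to the optimized image):

$ docker-slim build sample_nginx    # produce a minified sample_nginx.slim image
$ docker-slim xray sample_nginx     # static analysis of what each layer changes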

Registry Pruner – Local images are not the end of the road. You will usually publish your images to public or private registries like Docker Hub to make them available for execution. However, these images also usually have a business-defined lifecycle that requires you to deprecate or remove certain images according to certain policies.

You can automate this task with the registry-pruner tool, which lets you codify the rules and apply them to standard Docker image registries.

Docker Developer Best Practices and Considerations

Security and Access Management

There are several common Dockerfile security practices that developers can easily implement to secure their environment against common attacks (like supply chain attacks):

  • Keep your host and Docker installations up to date.
  • Avoid running Docker as a root user.
  • Use non-root users inside your containers.
  • Scan your images and downloads for CVEs before releasing or using them.
  • Set maximum resource usage quotas for the host.
  • Use read-only volumes.
  • Create multi-stage image-building Dockerfiles (see the sketch after this list).
  • Use metadata to facilitate image registry administration.
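
As a minimal sketch combining two of these points (multi-stage builds and non-root container users), reusing the program.c example from the Modus section; the stage names, paths, and user name are illustrative:

$ cat > Dockerfile <<'EOF'
# Build stage: compilers and build tools never reach the final image.
FROM gcc:bullseye AS build
COPY program.c .
RUN make program

# Runtime stage: copy only the compiled artifact.
FROM debian:bullseye-slim
COPY --from=build /program /usr/local/bin/program
# Run as a dedicated non-root user.
RUN useradd --system --no-create-home appuser
USER appuser
CMD ["program"]
EOF
$ docker build -t sample_multistage .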

Conclusion

Containers are now a key piece of the development cycle. The Docker toolset enables developers to integrate the concept of containers into any kind of application design. There are also several Docker developer tools that let you implement DevSecOps best practices with ease.

The Docker ecosystem is rich and provides many complementary tools that automate and/or enhance the developer experience when working with containers.