Docker: Backup and restore

Docker data is distributed across different Docker objects and files. Unfortunately, there is no universal approach for creating backups of individual Docker data. However, several different commands can be used in combination. We’ll show you how to create and restore Docker backups.

We have already covered what a backup is and which backup strategies exist, and explained how to create database backups. Next up, we’ll deal with creating Docker backups.


What should be considered when creating Docker backups?

There is no dedicated Docker backup tool comparable to mysqldump for MySQL backups or pg_dump for PostgreSQL backups. A search for a ‘docker backup volume’ or ‘docker backup container’ command will come up empty, but there are alternative approaches for including the individual Docker components in backups. Firstly, we should look at which Docker components should be backed up.

Which Docker components should be backed up?

Generally, backups should be made of all data that cannot be recovered if it is lost. In Docker’s case, there are at least three types of data:

  1. Docker objects: These are managed by the Docker Daemon and the data is stored under a special directory. There is no one-to-one correspondence between the files located there and the data in the Docker container. We’ll create backups of the following Docker objects:
    • Docker container
    • Docker volumes
    • Docker image
  2. Docker build files: These are managed by the user and can be found in arbitrary folders on the host system. The build files can be easily copied and archived. We’ll create backups of the following Docker build files:
    • Docker Compose project folder
    • Dockerfiles
  3. Databases within a container: These are exported from the container as dump files. We’ll create backups of databases for the following systems:
    • MySQL databases
    • PostgreSQL databases

Creating Docker backups involves writing the data to archive files on the host system. The archive files are then copied from the local system to a backup system.

It may be worth automating the process if a lot of Docker backups are being created. Sophisticated scripts which simplify the creation of Docker backups are available on GitHub. Developer Andreas Laub provides a number of practical Docker backup tools under an open source license.

Quote

‘With this script you are able to back up your docker environment. There is one for the compose project, for mysql or mariadb, for postgres SQL and for normal docker volumes.’ Source: https://github.com/alaub81/backup_docker_scripts
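For simple setups, a short shell script of your own may already be enough. The following is a minimal sketch that archives a Compose project folder, dumps a MySQL database, and deletes old backups. The folder name myproject, the container name mydb, the database name appdb, and the MYSQL_PW environment variable are hypothetical and need to be adapted to your environment:

#!/bin/bash
# Minimal backup sketch: adjust the hypothetical names before use
set -euo pipefail
BACKUP_DIR="$HOME/backup"
DATE="$(date +%F)"
mkdir -p "$BACKUP_DIR"
# Archive the Docker Compose project folder
cd "$HOME/myproject" && tar -cf "$BACKUP_DIR/compose-backup-$DATE.tar" .
# Dump the MySQL database "appdb" from the container "mydb"; MYSQL_PW must be set in the environment
docker exec mydb /usr/bin/mysqldump --user=root --password="$MYSQL_PW" appdb > "$BACKUP_DIR/appdb-$DATE.sql"
# Remove backup files older than 14 days
find "$BACKUP_DIR" -type f -mtime +14 -delete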

Note

It is advised to periodically back up Portainer data if you use the Portainer software to administer your Docker environment.

Where is Docker container data stored?

Data must be copied to create a backup, so firstly we need to know where the data is located. In the Docker universe, Docker volumes serve as storage locations for exchange between Docker containers and the host system, or alternatively between multiple containers.

A Docker container is created from a read-only Docker image. Changes made in a running container are at risk of being lost when the container is removed. The data can be permanently backed up by exporting it from the running container.

Besides Docker volumes, container data can also be stored directly in the running container. This data ends up in the writable storage layer of the container file system, which is not ideal in most cases. It is therefore a good idea to configure the system sensibly and use a Docker volume.

It is complex to determine where exactly container data is stored, as there is more than one type of Docker volume. There are subtle differences in the location and fate of the data when the container is removed depending on the type of volume used. Below is an overview of the main Docker storage components:

| Docker storage component | Location | When the container is removed |
| --- | --- | --- |
| Writable container layer | Union file system | Data is lost |
| Bind mount | Folder in the host file system | Data remains |
| Named volume | Inside the Docker environment | Data remains |
| Anonymous volume | Inside the Docker environment | Data is removed |
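
If you are not sure which of these storage components a container actually uses, Docker itself can tell you. The following commands are a small sketch based on the standard docker inspect output; replace the placeholders with your own container and volume names:

# List each mount of a container: type (bind or volume), source on the host, mount point in the container
docker inspect --format '{{range .Mounts}}{{.Type}}  {{.Source}}  ->  {{.Destination}}{{println}}{{end}}' <container-id>
# For a named volume, show its storage location inside the Docker environment
docker volume inspect <volume-name>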

Step-by-step guide to creating Docker backups

A tarball archive is usually created for a Docker backup using the tar command. The general command to create a tar archive file can be found below:

tar -cvf <path/to/archive-file>.tar <path/to/file-or-folder>

However, with this command the entire path of the archived folder is included in the archive, which can be problematic when restoring. An alternative approach avoids this:

cd <path/to/file-or-folder> && tar -cvf <path/to/archive-file>.tar .

Here, we first change to the folder to be archived and then apply the tar command to the current folder, which is referred to with a dot (.) on the command line. Combining the two steps with the && operator ensures that archiving is only carried out if changing to the folder was successful.
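
To check which form the paths take, the archive contents can be listed without unpacking; with the second approach the entries start with ./ instead of containing the full folder path:

# List the archive contents without unpacking
tar -tvf <path/to/archive-file>.tar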

The .tar files created when archiving the Docker data can be found on the host system. To complete the backup, the archive files must be transferred to a backup system, for example external storage or cloud storage. Tools such as rsync or an S3-compatible client can be used for the transfer.
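
As a simple sketch, the backup folder could be copied to a backup server with rsync over SSH; the user name, host name, and target path below are hypothetical and need to be adapted:

# Copy the local backup folder to a remote backup server via rsync over SSH
rsync -avz ~/backup/ user@backup-host:/srv/docker-backups/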

Tip

We explain how to create a server backup with Rsync in our detailed article.

The following table gives an overview of the steps required to create Docker backups and the corresponding commands. There is more than one approach available to create a backup of a Docker object in some cases.

| Docker object | Create backup | Example command |
| --- | --- | --- |
| Docker container | Save as Docker image | docker container commit <container-id> <backup-name> |
| Docker image | Export as tarball archive / push to registry | docker image save --output <image-backup>.tar <image-id> / docker image push <image-id> |
| Docker volume | Mount volume in a container, then create a tarball archive from within the container | docker run --rm --volumes-from <container-id> --volume <host:container> <image> bash -c "tar -cvf <volume-backup>.tar <path/to/volume/data>" |
| Docker Compose project folder | Create tarball archive / versioning with Git | tar -cvf <compose-backup>.tar <path/to/compose-dir> |
| Dockerfile | Save file / versioning with Git | tar -cvf <dockerfile-backup>.tar <path/to/dockerfile> |

We assume that, once created, the backup files are placed in the backup/ folder in the home directory. Create this folder as a preparatory step:

mkdir -p ~/backup/

Create a Docker container backup

Data in a Docker container is stored in the layers of a union file system. A Docker container is based on a read-only image and adds a writable layer on top of the image’s read-only layers.

To permanently back up the data in the writable layer, a new image must be created from the running container using the ‘docker container commit’ command. The contents of that image are then saved as a tarball archive:

  1. Write the container file system to a new image
docker container commit <container-id> <backup-name>
  2. Export the resulting image as a tarball archive file
docker image save --output ~/backup/<backup-name>.tar <backup-name>

Alternatively, push the resulting image to a registry using the ‘docker image push’ command:

docker image push <backup-name>
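
When pushing to your own registry or under your own namespace, the image usually has to be tagged with the registry and repository name first. The registry address below is a hypothetical example:

# Tag the backup image for a private registry, then push it
docker image tag <backup-name> registry.example.com/<namespace>/<backup-name>:latest
docker image push registry.example.com/<namespace>/<backup-name>:latest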
Note

The ‘docker container export’ command can also be used to export the container file system. However, this reduces all layers to a single layer. Therefore, this approach is not suitable for creating a backup.

Create Docker image backup

We use a familiar approach to back up a Docker image available on the local host. The image data must be written into a tarball archive:

docker image save --output ~/backup/<image-name>.tar <image-name>

Create a Docker volume backup

Creating a Docker volume backup is a complex process. We must distinguish between ‘bind mounts’, ‘named’, and ‘anonymous’ Docker volumes. Accessing a bind mount from the host file system is straightforward, so it is simple to create a tarball archive of this folder:

cd <path/to/docker-mount/> && tar -cvf ~/backup/<volume-backup>.tar .

The situation is different when creating a backup of a named or anonymous Docker volume, as these volumes are accessed from within a running Docker container. The trick for archiving the data contained in a Docker volume is to start a container with access to the volume. The volume’s data is then archived from within the running container. Let’s look at the individual steps of the process:

  1. Stop container with access to the volume
docker stop <container-id>
  2. Start temporary container and extract volume data

The temporary container has access to the desired volumes as well as to the backup folder on the host.

docker run --rm --volumes-from <container-id> --volume ~/backup:/backup ubuntu bash -c "cd <path/to/volume/data> && tar -cvf /backup/<volume-backup>.tar ."
  3. Restart container with access to the volume
docker start <container-id>

The command to extract the volume data is complex. Let’s look at the individual components in detail:

| Command component | Explanation |
| --- | --- |
| --rm | Instructs Docker to remove the new container after creating the volume backup. |
| --volumes-from <container-id> | Mounts the volumes of the specified container in the new container and makes the data inside accessible. |
| --volume ~/backup:/backup | Creates a bind mount between the ~/backup/ folder on the host system and the /backup/ folder inside the container. |
| ubuntu | Specifies that the new container should be started from an Ubuntu Linux image. |
| bash -c "cd …" | Creates a tarball archive of the volume data inside the /backup/ folder in the newly started container; linking this folder to the ~/backup/ folder on the host system makes the archive file accessible outside of Docker. |
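
As an alternative to ‘--volumes-from’, a named volume can also be mounted directly into the temporary container by its name. A brief sketch, assuming a hypothetical volume called app-data:

# Back up the named volume "app-data" read-only, without referencing an existing container
docker run --rm --volume app-data:/data:ro --volume ~/backup:/backup ubuntu bash -c "cd /data && tar -cvf /backup/app-data.tar ."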

Backup the Docker Compose project folder

A tarball archive of the folder must be created to back up the Docker Compose project folder:

cd <path/to/docker-compose-dir> && tar -cvf ~/backup/<compose-backup>.tar .
Note

It is a good idea to version the docker-compose.yaml file with Git. It is important to store sensitive data such as passwords in a separate .env file and exclude it from version control with .gitignore.
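
A minimal sketch of such a setup, assuming the project has not been versioned yet, could look like this:

# Put the Compose project under version control and keep the .env file out of the repository
cd <path/to/docker-compose-dir>
git init
echo ".env" >> .gitignore
git add docker-compose.yaml .gitignore
git commit -m "Initial commit of Compose project"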

Create a Dockerfile backup

To back up a Dockerfile, change to the folder that contains it and create a tarball archive of the file:

cd <path/to/dockerfile-dir> && tar -cvf ~/backup/<dockerfile-backup>.tar ./Dockerfile
Note

It is a good idea to version the file named ‘Dockerfile’ with Git. It is important to store sensitive data such as passwords in a separate .env file and exclude it from version control with .gitignore.

Create backup from database in Docker container

Databases are usually containerised nowadays. Database-specific backup tools such as mysqldump and pg_dump are commonly used to create a backup of a database running in a Docker container.

The respective backup tool runs inside the Docker container and is invoked with the ‘docker exec’ command, shown below using a MySQL database as an example. The ‘mysqldump’ command runs inside the container and sends the resulting dump to the standard output of the host shell. Output redirection writes an SQL dump file to the backup folder on the local host:

docker exec <container-id> /usr/bin/mysqldump --user=root --password=<password> <dbname> > ~/backup/<dbname>.sql
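
If the container was started from the official MySQL image, the root password is typically available inside the container as the MYSQL_ROOT_PASSWORD environment variable. Assuming that variable is set, the password does not have to be typed on the host command line:

# Read the root password from the container's environment instead of passing it on the host
docker exec <container-id> sh -c 'exec mysqldump --user=root --password="$MYSQL_ROOT_PASSWORD" <dbname>' > ~/backup/<dbname>.sql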

Creating a backup of a PostgreSQL database from a running Docker container works in a similar way. We assume that the database username and password are stored in the .pgpass file inside the container. The following command creates a backup in the custom dump format:

docker exec <container-id> pg_dump --format=custom --dbname=<dbname> > ~/backup/<dbname>.dump

A classic plain-text SQL dump can also be created. Simply add the options ‘--clean’ and ‘--if-exists’. These make the dump drop existing objects in the target database before recreating them, so it can be imported back into the source system without error messages:

docker exec <container-id> pg_dump --format=plain --clean --if-exists --dbname=<dbname> > ~/backup/<dbname>.sql
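
To keep several dumps side by side, it can help to include the current date in the file name, for example:

# Write a dated dump file so older backups are not overwritten
docker exec <container-id> pg_dump --format=plain --clean --if-exists --dbname=<dbname> > ~/backup/<dbname>_$(date +%F).sql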

Step-by-step guide to restoring Docker backups

Creating backups of Docker data has been covered. For the remainder of this article, we turn our attention to restoring data from Docker backups. We assume that all backups are available on the local host in the ~/backup/ folder. The backups might need to be copied from the backup media to this folder first.

Restore Docker containers from backup

Quick reminder: to create a backup of a Docker container, we create a new image from the running container and then save it as a tarball archive file. Use the ‘docker image load’ command to restore a Docker image from such a tarball archive:

docker image load --input ~/backup/<image-name>.tar

Start a new container from the resulting image. We use the ‘--detach’ option to start the container in the background:

docker run --detach <image-id>
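
Note that the image only stores the container’s file system, not its run options. Port mappings, volumes, and environment variables have to be specified again when starting the restored container; the values below are hypothetical placeholders:

# Start the restored container with its original run options (placeholders)
docker run --detach --publish 8080:80 --volume <volume-name>:<path/in/container> <image-id>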
Tip

Use the ‘docker image ls’ command to display a list of available Docker images along with their names and IDs.

Restore Docker image from backup

We have already described the recovery of a Docker image from a tarball archive. The ‘docker image load’ command should be used:

docker image load --input ~/backup/<image-name>.tar

Restore a Docker volume from a backup

Restoring the data of a Docker volume from a backup is a complex process, and the exact procedure depends on the specific deployment scenario. We’ll demonstrate how to overwrite the volume data from a backup. We assume that the system is the same one the backup was created on, so that the containers, volumes, and other Docker objects involved are all present. The process is more complex if the system has been freshly set up.

Restoring a Docker volume backup again requires a temporary container with access to the volume. We have already described exactly how the Docker command works. Let’s go through the individual steps of the process:

  1. Stop containers that use the volume
docker stop <container-id>
  2. Start temporary container with access to volume and copy Docker data from backup to volume
docker run --rm --volumes-from <container-id> --volume ~/backup:/backup ubuntu bash -c "cd <path/to/volume/data> && tar -xvf /backup/<volume-backup>.tar"
  3. Restart containers which use the volume
docker start <container-id>
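
Before restarting the application, the restored data can be checked from another temporary container, for example with a quick listing:

# List the restored volume data from a temporary container
docker run --rm --volumes-from <container-id> ubuntu ls -l <path/to/volume/data>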

Restore a Docker Compose project folder from a backup

Restoring a Docker Compose project folder from a backup is straightforward. Change to the Docker Compose folder and unpack the tarball archive there. The existing data will be overwritten in the process.

cd <path/to/docker-compose-dir> && tar -xvf ~/backup/<compose-backup>.tar

Restore a Dockerfile from a backup

Restoring a Dockerfile from a tarball archive is simple: unpack the tarball archive inside the Dockerfile folder. The existing Dockerfile will be overwritten.

cd <path/to/dockerfile-dir> && tar -xvf ~/backup/<dockerfile-backup>.tar

Restore database to Docker container from backup

Use the ‘docker exec’ command to restore a database located in a Docker container from a backup. The ‘--interactive’ option keeps the standard input open so that data can be piped into the container.

Output the contents of the MySQL dump using the cat command and pipe the output to the Docker command on the local host. The mysql command runs inside the container and processes the SQL statements that rebuild the database.

cat ~/backup/<dbname>.sql | docker exec --interactive <container-id> /usr/bin/mysql --user=root --password=<password> <dbname>

The process of restoring a PostgreSQL database is somewhat more complex. One of two available tools can be used depending on the format of the database dump. Run the pg_restore tool inside the container to restore a database dump in PostgreSQL ‘custom’ format. Use input redirection to feed the dump file as input:

docker exec --interactive <container-id> pg_restore --dbname=<dbname> < ~/backup/<dbname>.dump

Restoring a plain-text PostgreSQL database dump is similar to restoring a MySQL dump. We output the dump file with the cat command and pipe it to the ‘docker exec’ command with the ‘--interactive’ option. The psql command runs inside the container, processes the SQL statements, and rebuilds the database. As with the dump, we assume that the credentials are stored in the .pgpass file inside the container.

cat ~/backup/<dbname>.sql | docker exec --interactive <container-id> psql --username=<username> --dbname=<dbname>
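
To check that the restore worked, the tables of the restored database can be listed from the host, again assuming the credentials are available via the .pgpass file inside the container:

# List the tables of the restored database as a quick sanity check
docker exec <container-id> psql --username=<username> --dbname=<dbname> --command='\dt'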