# How to Share Data Between Docker Containers – CloudSavvy IT


Docker containers are intentionally isolated environments. Each container has its own filesystem which can’t be directly accessed by other containers or your host.
Sometimes containers may need to share data. Although you should aim for containers to be self-sufficient, there are scenarios where data sharing is unavoidable. This might be so a second container can access a combined cache, use a file-backed database, create a backup, or perform operations on user-generated data, such as an image optimizer container that processes profile photos uploaded via a separate web server container.
In this guide, we’ll look at a few methods for passing data between your Docker containers. We’ll assume you’ve already got Docker set up and are familiar with fundamental concepts such as containers, images, volumes, and networks.
## Using Volumes to Share a Directory
Volumes are the de facto way to set up data sharing. They’re independent filesystems that store their data outside any individual container. Mounting a volume to a filesystem path within a container provides read-write access to the volume’s data.
Volumes can be attached to multiple containers simultaneously. This facilitates seamless data sharing and persistence that’s managed by Docker.
Create a volume to begin:
```shell
docker volume create --name shared-data
```
Next create your containers, mounting the volume to the filesystem path expected by each image:
```shell
docker run -d -v shared-data:/data --name example example-image:latest
docker run -d -v shared-data:/backup-source --name backup backup-image:latest
```
In this example, the `backup` container gains effective access to the `example` container's `/data` directory. It's mounted as `/backup-source`; changes made by either container will be reflected in the other.
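If you manage your containers with Docker Compose, the same setup can be expressed declaratively. Here's a minimal sketch, assuming the image names used in the example above:

```yaml
version: "3.8"

services:
  example:
    image: example-image:latest
    volumes:
      # Read-write mount of the shared volume at /data
      - shared-data:/data

  backup:
    image: backup-image:latest
    volumes:
      # The same volume, mounted where the backup image expects it
      - shared-data:/backup-source

volumes:
  # Named volume created and managed by Docker
  shared-data:
```

Declaring the volume at the top level lets Compose create it on first run and attach it to both services.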
## Quickly Starting Containers With Matching Volumes
The example above can be simplified using the `docker run` command's `--volumes-from` flag. This provides a mechanism to automatically mount volumes that are already used by an existing container:
```shell
docker run -d --volumes-from example --name backup backup-image:latest
```
This time the `backup` container receives the `shared-data` volume mounted into its `/data` directory, because `--volumes-from` reuses the original mount paths rather than letting you remap them. The flag pulls in all the volume definitions attached to the `example` container. It's particularly useful for backup jobs and other short-lived containers that act as auxiliary components to your main service.
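Before relying on `--volumes-from`, you can check which volumes a container actually has attached with `docker inspect` and a Go template. This is a sketch against the `example` container from above:

```shell
# Print the volume name, mount destination, and mode (rw/ro)
# for each mount attached to the "example" container
docker inspect \
  --format '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }} ({{ .Mode }}){{ "\n" }}{{ end }}' \
  example
```

Every mount listed here will be recreated in any container started with `--volumes-from example`.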
## Improving Safety With Read-Only Mounts
Volumes are mounted in read-write mode by default. Every container with access to a volume is permitted to change its contents, potentially causing unintended data loss.
It's best practice to mount shared volumes in read-only mode when a container isn't expected to make modifications. In the above example, the `backup` container only needs to read the content of the `shared-data` volume. Setting the mount to read-only enforces this expectation, preventing bugs or malicious binaries in the image from deleting data used by the `example` container.
```shell
docker run -d -v shared-data:/backup-source:ro --name backup backup-image:latest
```
Adding `ro` as a third colon-separated parameter to the `-v` flag indicates the volume should be mounted in read-only mode. You can also write `readonly` instead of `ro` as a more explicit alternative.
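The same suffix works with `--volumes-from`: appending `:ro` to the source container's name mounts all of its volumes into the new container in read-only mode:

```shell
# Inherit every volume from "example", but mount them read-only
docker run -d --volumes-from example:ro --name backup backup-image:latest
```

This is convenient for backup containers, which typically only need to read the data they're archiving.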
## Sharing Data Over A Network
Network communication is an alternative to sharing data via filesystem volumes. Joining two containers to the same Docker network lets them communicate seamlessly using auto-assigned hostnames:
```shell
docker network create demo-network
docker run -d --net demo-network --name first example-image:latest
docker run -d --net demo-network --name second another-image:latest
```
Here `first` will be able to ping `second` and vice versa. Your containers could run an HTTP API service, enabling them to interact with each other's data.
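With Docker Compose, services attached to the same network are reachable by service name automatically. A minimal sketch, reusing the image names above:

```yaml
version: "3.8"

services:
  first:
    image: example-image:latest
    networks:
      - demo-network

  second:
    image: another-image:latest
    networks:
      - demo-network

networks:
  # User-defined bridge network; Docker provides DNS resolution
  # of service names between containers attached to it
  demo-network:
```

In practice, Compose also creates a default network for each project, so `first` and `second` could reach each other by name even without the explicit `networks` configuration.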
Continuing the backup example, your `backup` container could now make a network request to `http://example:8080/backup-data` to acquire the data to back up. The `example` container should respond with an archive containing all the data that needs to be stored. The `backup` container is then responsible for persisting the archive to a suitable storage location.
Enforcing that data sharing occurs over a network often aids decoupling efforts. You end up with clearly defined interfaces that don’t create hard dependencies between services. Data access can be more precisely controlled by exposing APIs for each data type, instead of giving every container total access to a volume.
It's important to consider security if you use this approach. Make sure any HTTP APIs designed for internal access by your other Docker containers don't have ports published on your Docker host. This is the default behavior when using the network options shown above; binding a port with `-p 8080:8080` would expose the backup API via your host's network interfaces, which would be a security issue.
## Summary
Docker containers are isolated environments that can't access each other's filesystems. Nonetheless, you can share data by creating a volume that's mounted into all participating containers. Using a shared Docker network is an alternative option that provides stronger separation in scenarios where direct filesystem interactions aren't necessary.
It’s good practice to limit inter-container interactions as far as possible. Cases where you need data sharing should be clearly defined to avoid tightly coupling your services together. Containers that have a rigid dependency on data from another container can be trickier to deploy and maintain over time, eroding the broader benefits of containerization and isolation.