How and Why to Use a Remote Docker Host


The docker CLI program is independent of the Docker daemon which runs your containers. Although both components usually run on your local machine, you can run docker commands against a remote Docker host.
Using a remote host can be helpful in a few scenarios. You might set up a shared Docker Engine installation for a small development team. Each developer could then connect to the remote containers with their local docker exec command.
Remote hosts are more frequently valuable when you’ve got a powerful server going unused. If your laptop’s slow or running out of storage, using a dedicated Docker host on your network can greatly increase performance. You still get all the convenience of the local docker CLI in your terminal.
Setting Up The Remote Host
Make sure you’ve got Docker installed on the system which will be your remote host. You only need the docker-cli package on your local machine, as you won’t be running Docker Engine.
A fresh Docker installation provides a Unix socket by default. Remote access requires a TCP socket. Run dockerd (the Docker daemon executable) with the -H flag to define the sockets you want to bind to.
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
This command will bind Docker to the default Unix socket and to TCP port 2375 on all of your machine’s network interfaces (0.0.0.0). You can bind to additional sockets and IP addresses by repeating the -H flag.
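If you’d rather not listen on every interface, you can bind the TCP socket to a specific address instead of 0.0.0.0. This sketch assumes the host’s LAN address is 192.168.0.1:
sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.0.1:2375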
The flags need to be passed each time you run dockerd. If you want them to persist after reboots, either create a shell alias or modify the Docker service definition. Here’s how you can achieve the latter with systemd, which most Linux distributions use for service management.
Edit /etc/systemd/system/docker.service.d/options.conf (or create it if it doesn’t exist). In the [Service] section, set the ExecStart line. Because this is a drop-in file, you need an empty ExecStart= first to clear the value inherited from the main unit:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Reload your systemd configuration to apply the changes:
sudo systemctl daemon-reload
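Alternatively, sudo systemctl edit docker creates and opens a drop-in override file for you, then reloads the unit definition automatically when you save:
sudo systemctl edit docker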
If Docker’s already running, use sudo systemctl restart docker to restart the service. The Docker daemon will now bind to TCP port 2375 each time it starts. Make sure traffic to the port is permitted by your firewall configuration. If you’re using ufw, run sudo ufw allow 2375 to open the port.
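It’s worth verifying that the daemon is reachable before going further. Docker Engine exposes a REST API over the TCP socket, so a plain HTTP request is enough to test it; this assumes the remote host’s address is 192.168.0.1:
curl http://192.168.0.1:2375/version
A JSON payload describing the daemon’s version confirms the socket is accepting connections.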
Connecting To The Remote Host
The Docker CLI uses the DOCKER_HOST environment variable to determine the host to connect to. The local daemon’s Unix socket will be used when the variable isn’t set.
You can use a remote host for a single docker command by prepending the DOCKER_HOST variable:
DOCKER_HOST=tcp://192.168.0.1:2375 docker run -d httpd:latest
This will start a new container from the httpd:latest image using the Docker Engine at 192.168.0.1:2375.
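As an alternative to the environment variable, the docker CLI’s global -H flag overrides the connection target for a single invocation:
docker -H tcp://192.168.0.1:2375 ps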
If you’re going to be running multiple commands in one session, export the DOCKER_HOST variable into your shell:
export DOCKER_HOST=tcp://192.168.0.1:2375
docker run -d --name httpd httpd:latest
docker ps
docker rm httpd --force
You can make docker always use a remote host by setting DOCKER_HOST globally in your shell’s configuration file. Here’s how you’d do that in Bash:
echo "export DOCKER_HOST=tcp://192.168.0.1:2375" >> ~/.bashrc
Now the DOCKER_HOST environment variable will be set each time your shell starts.
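If you occasionally need the local daemon again, you can run a one-off command with the variable stripped from its environment:
env -u DOCKER_HOST docker ps
To revert permanently, remove the export line from ~/.bashrc and run unset DOCKER_HOST in any open sessions.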
Enhancing Security
The basic TCP socket is unprotected. Anyone who can reach your machine over the network can use the Docker socket to control your containers.
Docker supports SSH instead of TCP. This is usually a better option if the host has an SSH server available, as it prevents unauthenticated users from gaining access. Using SSH requires no extra configuration. DOCKER_HOST lets you pass in an SSH connection string:
DOCKER_HOST=ssh://user@hostname docker run -d --name httpd httpd:latest
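Because docker delegates this transport to your regular ssh client, entries in ~/.ssh/config are honoured. Defining a host alias (dockerbox here is a hypothetical name) keeps the connection string short and pins the identity file to use:
Host dockerbox
    HostName 192.168.0.1
    User user
    IdentityFile ~/.ssh/id_ed25519
DOCKER_HOST=ssh://dockerbox docker ps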
Alternatively, you can use SSH socket forwarding to bind the remote host’s Docker Unix socket directly to your local machine:
ssh -L /var/run/docker.sock:/var/run/docker.sock user@hostname
Now you don’t need to use DOCKER_HOST at all. The remote docker.sock will be bound to its local counterpart. Docker will auto-detect this as its standard Unix socket.
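Two practical caveats: your local user needs permission to create the socket file in /var/run, and a leftover socket from a previous session will block the forward. A workaround is to forward to a user-writable path and point DOCKER_HOST at it instead; this sketch reuses user@hostname from above:
ssh -nNT -L $HOME/docker.sock:/var/run/docker.sock user@hostname &
DOCKER_HOST=unix://$HOME/docker.sock docker ps
The -nNT flags keep the tunnel open in the background without requesting a remote shell.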
Using one of the SSH-based solutions is the preferred way to approach Docker daemon security. Docker also supports TLS if you supply a certificate authority and server and client keys:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=0.0.0.0:2375
Now clients will be able to connect on port 2375 if they present a valid TLS certificate trusted by the certificate authority ca.pem.
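On the client side, the corresponding TLS material is supplied through the docker CLI’s matching flags. This assumes ca.pem plus the client’s cert.pem and key.pem have been copied to your machine:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://192.168.0.1:2375 version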
Creating Contexts
Docker lets you set up several “contexts” for connecting to different hosts. Contexts can be used instead of the DOCKER_HOST environment variable. They make it easier to switch between multiple remote hosts.
docker context create remote --docker host=tcp://192.168.0.1:2375 --description "remote"
docker context create local --docker host=unix:///var/run/docker.sock --description "local"
These commands create two different contexts – one for your local docker.sock and one for a remote connection.
You can switch between contexts using the docker context use command:
docker context use remote
# Container is started on "remote"
docker run -d httpd:latest
docker context use local
# Lists containers running on "local"
docker ps
Contexts are useful when you work with several Docker hosts. They’re less hassle than continually resetting the DOCKER_HOST variable as you move between hosts.
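You can also target a context for a single command with the CLI’s global --context flag, and review everything you’ve configured with docker context ls:
docker --context remote ps
docker context ls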
Drawbacks of Remote Hosts
We noted earlier that a remote host can improve build performance. This statement’s only true if the machine running Docker Engine is quicker than your local hardware. The biggest drawback of a remote host is the extra overhead of interacting over the network. You also become dependent on the network – if you lose connectivity, you won’t be able to manage your containers.
You should have a reliable high-speed network connection if you’re going to use a remote host as your main build server. The first docker build stage sends the contents of your image’s build context (usually your working directory) to Docker Engine. This is quick when Docker’s running locally but might take much longer to upload to a remote machine.
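You can keep that upload small by excluding files the build doesn’t need via a .dockerignore file in the root of your build context. A minimal sketch, assuming a typical Node.js project:
# .dockerignore – paths excluded from the build context upload
.git
node_modules
*.log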
Exposing a Docker daemon instance over the network is a security risk. You need to make sure access is restricted to authorised users and devices. Unintentional exposure of a Docker daemon socket could give attackers limitless access to the host. Docker usually runs as root so it’s critical that only trusted individuals can start containers.
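If you must keep a plain TCP socket, scope your firewall rule to the clients that genuinely need access rather than opening the port to everyone. With ufw, assuming your trusted machines live on the 192.168.0.0/24 subnet, that looks like:
sudo ufw allow from 192.168.0.0/24 to any port 2375 proto tcp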
Conclusion
Setting up a remote Docker host lets you separate your container instances from your local development machine. A dedicated Docker build server can offer improved performance and greater image storage space.
You should take care to audit the security of your implementation. A plain TCP socket might be safe on a private network but shouldn’t be deployed in any sensitive environment. Using SSH helps mitigate the risks if you practice good SSH security hygiene, such as mandatory key-based authentication.