How to Set a Memory Limit for Docker Containers
Docker containers default to running without any resource constraints. Processes running in containers are free to utilize limitless amounts of memory, potentially impacting neighboring containers and other workloads on your host.
This is hazardous in production environments. Each container should be configured with an appropriate memory limit to prevent runaway resource consumption, reducing contention and improving overall system stability.
How Docker Memory Limits Work
Docker lets you set hard and soft memory limits on individual containers. These have different effects on the amount of available memory and the behavior when the limit is reached.
- Hard memory limits set an absolute cap on the memory provided to the container. Exceeding this limit will normally cause the kernel out-of-memory killer to terminate the container process.
- Soft memory limits indicate the amount of memory a container is expected to use. The container is permitted to use more memory when capacity is available. It could be terminated if it exceeds its soft limit during a low-memory condition.
Docker also provides controls for setting swap memory constraints and changing what happens when a memory limit is reached. You’ll see how to use these in the following sections.
Setting Hard and Soft Memory Limits
A hard memory limit is set by the docker run command's --memory flag. It takes a value such as 512m (for megabytes) or 2g (for gigabytes):
$ docker run --memory=512m my-app:latest
Containers have a minimum memory requirement of 6MB. Trying to use --memory values less than 6m will cause an error.
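For instance, a value below the minimum is rejected by the Docker daemon (a sketch; the exact error text can vary between Docker versions):

```shell
# Attempting a 4MB limit fails because it's below Docker's 6MB minimum
$ docker run --memory=4m alpine:latest
```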
Soft memory limits are set with the --memory-reservation flag. This value needs to be lower than --memory. The limit is only enforced when resource contention occurs or the host is running low on physical memory.
$ docker run --memory=512m --memory-reservation=256m my-app:latest
This example starts a container with 256MB of reserved memory. The process could be terminated if it's using 300MB while host capacity is running out. It will always be stopped if usage exceeds the 512MB hard limit.
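You can confirm which limits were applied to a running container with docker inspect, and watch live consumption with docker stats (the container name my-container is a placeholder):

```shell
# Show the hard limit and soft reservation, in bytes
$ docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemoryReservation}}' my-container

# Monitor live memory usage against the hard limit
$ docker stats my-container
```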
Managing Swap Memory
Containers can be allocated swap memory to accommodate high usage without impacting physical memory consumption. Swap allows the contents of memory to be written to disk once the available RAM has been depleted.
The --memory-swap flag controls the amount of swap space available. It only works in conjunction with --memory. When you set --memory and --memory-swap to different values, the swap value controls the total amount of memory available to the container, including swap space. The value of --memory determines the portion of that total that's physical memory.
$ docker run --memory=512m --memory-swap=762m my-app:latest
This container has access to 762MB of memory in total, of which 512MB is physical RAM. The remaining 250MB is swap space stored on disk.
Omitting the --memory-swap flag gives the container access to the same amount of swap space as physical memory:
$ docker run --memory=512m my-app:latest
This container has a total of 1024MB of memory, comprising 512MB of RAM and 512MB of swap.
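Setting --memory-swap to -1 instead allows the container to use as much swap as the host can provide:

```shell
# 512MB of RAM plus unlimited swap, bounded only by the host
$ docker run --memory=512m --memory-swap=-1 my-app:latest
```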
Swap can be disabled for a container by setting the --memory-swap flag to the same value as --memory. As --memory-swap sets the total amount of memory, and --memory allocates the physical memory proportion, you're instructing Docker that 100% of the available memory should be RAM.
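A sketch of disabling swap in this way:

```shell
# Total memory equals physical memory, so no swap is available to the container
$ docker run --memory=512m --memory-swap=512m my-app:latest
```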
In all cases, swap only works when it's enabled on your host. Swap reporting inside containers is unreliable and shouldn't be used. Commands such as free that are executed within a container will display the total amount of swap space on your Docker host, not the swap accessible to the container.
Disabling Out-of-Memory Process Kills
Out-of-memory errors in a container normally cause the kernel to kill the process. This results in the container stopping with exit code 137.
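You can check whether a stopped container was OOM-killed by inspecting its state (my-container is a placeholder name):

```shell
# Shows the exit code (137 = 128 + SIGKILL) and whether the OOM killer fired
$ docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' my-container
```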
Including the optional --oom-kill-disable flag with your docker run command disables this behavior. Instead of killing the process, the kernel will simply block new memory allocations. The process will appear to hang until you either reduce its memory use, cancel new memory allocations, or manually restart the container.
This flag shouldn’t be used unless you’ve implemented mechanisms for resolving out-of-memory conditions yourself. It’s usually better to let the kernel kill the process, causing a container restart that restores normal memory consumption.
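If you do use this flag, pair it with a hard memory limit so a hung process can't starve the whole host:

```shell
# The hard limit caps how much memory the blocked process can hold
$ docker run --memory=512m --oom-kill-disable my-app:latest
```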
Summary
Docker containers come without pre-applied resource constraints. This leaves container processes free to consume unlimited memory, threatening the stability of your host.
In this article you’ve learned how to set hard and soft container memory limits to reduce the chance you’ll hit an out-of-memory situation. Setting these limits across all your containers will reduce resource contention and help you stay within your host’s physical memory capacity. You should consider using CPU limits alongside your memory caps – these will prevent individual containers with a high CPU demand from detrimentally impacting their neighbors.
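As a sketch, CPU limits use the same docker run flags pattern, via --cpus:

```shell
# Cap the container at the equivalent of two CPU cores alongside the memory limit
$ docker run --cpus=2 --memory=512m my-app:latest
```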