You can change Docker’s storage base directory (where containers and images are stored) using the `-g` option when starting the Docker daemon.
Ubuntu/Debian: edit your `/etc/default/docker` file and add the `-g` option to the `DOCKER_OPTS` variable: `DOCKER_OPTS='-dns 8.8.8.8 -dns 8.8.4.4 -g /mnt'`
Fedora/CentOS: edit `/etc/sysconfig/docker` and add the `-g` option to the `other_args` variable, for example: `other_args='-g /var/lib/testdir'`. If there is more than one option, make sure you enclose them in `' '`. After a restart (`service docker restart`), Docker should use the new directory.
Using a symlink is another method to change image storage.
Caution: these steps depend on your current `/var/lib/docker` being an actual directory (not a symlink to another location).
1. Stop the Docker daemon: `service docker stop`. Verify that no Docker process is running: `ps faux`
2. Take a look at the current Docker directory: `ls /var/lib/docker/`
3. Make a backup: `tar -zcC /var/lib docker > /mnt/pd0/var_lib_docker-backup-$(date +%s).tar.gz`
4. Move the `/var/lib/docker` directory to your new partition: `mv /var/lib/docker /mnt/pd0/docker`
5. Make a symlink: `ln -s /mnt/pd0/docker /var/lib/docker`
6. Take a peek at the directory contents: `ls /var/lib/docker/` (note the trailing slash to resolve the symlink)
7. Start the Docker daemon back up: `service docker start`
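Afterwards, you can confirm which directory the daemon is actually using. A quick check (a sketch; the `Docker Root Dir` field appears in `docker info` output on reasonably recent versions, and reports the new path if you used the `-g` option):

```console
$ docker info | grep 'Root Dir'
```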
Swarm services use a declarative model, which means that you define the desired state of the service, and rely upon Docker to maintain this state. This state includes information such as (but not limited to) the image name and tag the service containers should run, how many replicas participate in the service, and whether any ports are exposed to clients outside the swarm.
For an overview of swarm mode, see Swarm mode key concepts. For an overview of how services work, see How services work.
To create a single-replica service with no extra configuration, you only need to supply the image name. This command starts an Nginx service with a randomly-generated name and no published ports. This is a naive example, since you can’t interact with the Nginx service.
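```console
$ docker service create nginx
```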
The service is scheduled on an available node. To confirm that the service was created and started successfully, use the `docker service ls` command:
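```console
$ docker service ls
```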
Created services do not always run right away. A service can be in a pending state if its image is unavailable, if no node meets the requirements you configure for the service, or other reasons. See Pending services for more information.
To provide a name for your service, use the `--name` flag:
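```console
$ docker service create --name my_web nginx
```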
Just like with standalone containers, you can specify a command that the service’s containers should run, by adding it after the image name. This example starts a service called `helloworld` which uses an `alpine` image and runs the command `ping docker.com`:
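```console
$ docker service create --name helloworld alpine ping docker.com
```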
You can also specify an image tag for the service to use. This example modifies the previous one to use the `alpine:3.6` tag:
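```console
$ docker service create --name helloworld alpine:3.6 ping docker.com
```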
For more details about image tag resolution, see Specify the image version the service should use.
Swarm now allows using a Docker Config as a gMSA credential spec - a requirement for Active Directory-authenticated applications. This reduces the burden of distributing credential specs to the nodes they’re used on.
The following example assumes that a gMSA and its credential spec (called credspec.json) already exist, and that the nodes being deployed to are correctly configured for the gMSA.
To use a Config as a credential spec, first create the Docker Config containing the credential spec:
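```console
$ docker config create credspec credspec.json
```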
Now, you should have a Docker Config named credspec, and you can create a service using this credential spec. To do so, use the --credential-spec flag with the config name, like this:
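```console
# <your-image> stands in for the Windows image your service runs.
$ docker service create --credential-spec="config://credspec" <your-image>
```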
Your service will use the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the --config flag), the credential spec will not be mounted into the container.
If your image is available on a private registry which requires login, use the `--with-registry-auth` flag with `docker service create`, after logging in. If your image is stored on registry.example.com, which is a private registry, use a command like the following:
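```console
$ docker login registry.example.com

# The service and image names below are illustrative.
$ docker service create \
  --with-registry-auth \
  --name my_service \
  registry.example.com/acme/my_image:latest
```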
This passes the login token from your local client to the swarm nodes where the service is deployed, using the encrypted WAL logs. With this information, the nodes are able to log into the registry and pull the image.
In Enterprise Edition 3.0, security is improved through the centralized distribution and management of Group Managed Service Account (gMSA) credentials using Docker Config functionality. Swarm now allows using a Docker Config as a gMSA credential spec, which reduces the burden of distributing credential specs to the nodes on which they are used.
Note: This option is only applicable to services using Windows containers.
Credential spec files are applied at runtime, eliminating the need for host-based credential spec files or registry entries - no gMSA credentials are written to disk on worker nodes. You can make credential specs available to Docker Engine running swarm kit worker nodes before a container starts. When deploying a service using a gMSA-based config, the credential spec is passed directly to the runtime of containers in that service.
The `--credential-spec` must be one of the following formats:

- `file://<filename>`: The referenced file must be present in the `CredentialSpecs` subdirectory in the Docker data directory, which defaults to `C:\ProgramData\Docker` on Windows. For example, specifying `file://spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.
- `registry://<value-name>`: The credential spec is read from the Windows registry on the daemon’s host.
- `config://<config-name>`: The config name is automatically converted to the config ID in the CLI. The credential spec contained in the specified config is used.

The following simple example retrieves the gMSA name and JSON contents from your Active Directory (AD) instance:
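```powershell
# Sketch: assumes the ActiveDirectory and CredentialSpec PowerShell modules
# are installed, and that a gMSA named "mygmsa" already exists.
$name = "mygmsa"

# Confirm the gMSA exists in AD and show its name
Get-ADServiceAccount -Identity $name | Select-Object Name

# Generate a credential spec JSON file for the account
New-CredentialSpec -AccountName $name

# List the generated credential spec files (shows the JSON path)
Get-CredentialSpec
```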
Make sure that the nodes to which you are deploying are correctly configured for the gMSA.
To use a Config as a credential spec, create a Docker Config from a credential spec file named `credspec.json`. You can specify any name for the name of the config.
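```console
$ docker config create credspec credspec.json
```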
Now you can create a service using this credential spec. Specify the `--credential-spec` flag with the config name:
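```console
# <your-image> stands in for the Windows image your service runs.
$ docker service create --credential-spec="config://credspec" <your-image>
```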
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker Config (used by passing the --config flag), the credential spec is not mounted into the container.
You can change almost everything about an existing service using the `docker service update` command. When you update a service, Docker stops its containers and restarts them with the new configuration.
Since Nginx is a web service, it works much better if you publish port 80 to clients outside the swarm. You can specify this when you create the service, using the `-p` or `--publish` flag. When updating an existing service, the flag is `--publish-add`. There is also a `--publish-rm` flag to remove a port that was previously published.
Assuming that the `my_web` service from the previous section still exists, use the following command to update it to publish port 80.
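A sketch (publishing the service’s port 80 at port 8080 on the swarm; the published port number is an arbitrary choice):

```console
$ docker service update --publish-add published=8080,target=80 my_web
```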
To verify that it worked, use `docker service ls`:
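```console
$ docker service ls
```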
For more information on how publishing ports works, see publish ports.
You can update almost every configuration detail about an existing service, including the image name and tag it runs. See Update a service’s image after creation.
To remove a service, use the `docker service remove` command. You can remove a service by its ID or name, as shown in the output of the `docker service ls` command. The following command removes the `my_web` service.
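```console
$ docker service remove my_web
```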
The following sections provide details about service configuration. This topic does not cover every flag or scenario. In almost every instance where you can define a configuration at service creation, you can also update an existing service’s configuration in a similar way.
See the command-line references for `docker service create` and `docker service update`, or run one of those commands with the `--help` flag.
You can configure the following options for the runtime environment in the container:

- environment variables, using the `--env` flag
- the working directory inside the container, using the `--workdir` flag
- the username or UID, using the `--user` flag

The following service’s containers have an environment variable `$MYVAR` set to `myvalue`, run from the `/tmp/` directory, and run as the `my_user` user.
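```console
$ docker service create --name helloworld \
  --env MYVAR=myvalue \
  --workdir /tmp \
  --user my_user \
  alpine ping docker.com
```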
To update the command an existing service runs, you can use the `--args` flag. The following example updates an existing service called `helloworld` so that it runs the command `ping docker.com` instead of whatever command it was running before:
When you create a service without specifying any details about the version of the image to use, the service uses the version tagged with the `latest` tag. You can force the service to use a specific version of the image in a few different ways, depending on your desired outcome.
An image version can be expressed in several different ways:
If you specify a tag, the manager (or the Docker client, if you use content trust) resolves that tag to a digest. When the request to create a container task is received on a worker node, the worker node only sees the digest, not the tag.
Some tags represent discrete releases, such as `ubuntu:16.04`. Tags like this almost always resolve to a stable digest over time. It is recommended that you use this kind of tag when possible.
Other types of tags, such as `latest` or `nightly`, may resolve to a new digest often, depending on how often an image’s author updates the tag. It is not recommended to run services using a tag which is updated frequently, to prevent different service replica tasks from using different image versions.
If you don’t specify a version at all, by convention the image’s `latest` tag is resolved to a digest. Workers use the image at this digest when creating the service task.
Thus, the following two commands are equivalent:
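```console
$ docker service create --name="myservice" ubuntu
$ docker service create --name="myservice" ubuntu:latest
```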
If you specify a digest directly, that exact version of the image is always used when creating service tasks.
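For example (a sketch; `<digest>` stands in for a real sha256 digest value):

```console
$ docker service create --name="myservice" ubuntu:16.04@sha256:<digest>
```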
When you create a service, the image’s tag is resolved to the specific digest the tag points to at the time of service creation. Worker nodes for that service use that specific digest forever unless the service is explicitly updated. This feature is particularly important if you do use often-changing tags such as `latest`, because it ensures that all service tasks use the same version of the image.
Note: If content trust is enabled, the client actually resolves the image’s tag to a digest before contacting the swarm manager, to verify that the image is signed. Thus, if you use content trust, the swarm manager receives the request pre-resolved. In this case, if the client cannot resolve the image to a digest, the request fails.
If the manager can’t resolve the tag to a digest, each worker node is responsible for resolving the tag to a digest, and different nodes may use different versions of the image. If this happens, a warning like the following is logged, substituting the placeholders for real information.
To see an image’s current digest, issue the command `docker inspect <IMAGE>:<TAG>` and look for the `RepoDigests` line. The following is the current digest for `ubuntu:latest` at the time this content was written. The output is truncated for clarity.
After you create a service, its image is never updated unless you explicitly run `docker service update` with the `--image` flag as described below. Other update operations such as scaling the service, adding or removing networks or volumes, renaming the service, or any other type of update operation do not update the service’s image.
Each tag represents a digest, similar to a Git hash. Some tags, such as `latest`, are updated often to point to a new digest. Others, such as `ubuntu:16.04`, represent a released software version and are not expected to update to point to a new digest often if at all. When you create a service, it is constrained to create tasks using a specific digest of an image until you update the service using `service update` with the `--image` flag.
When you run `service update` with the `--image` flag, the swarm manager queries Docker Hub or your private Docker registry for the digest the tag currently points to and updates the service tasks to use that digest.
Note: If you use content trust, the Docker client resolves the image tag to a digest, and the swarm manager receives the image name and digest, rather than a tag.
Usually, the manager can resolve the tag to a new digest and the service updates, redeploying each task to use the new image. If the manager can’t resolve the tag or some other problem occurs, the next two sections outline what to expect.
If the swarm manager can resolve the image tag to a digest, it instructs the worker nodes to redeploy the tasks and use the image at that digest.

- If a worker has cached the image at that digest, it uses it.
- If not, it attempts to pull the image from Docker Hub or the private registry.
- If it succeeds, the task is deployed using the new image.
- If the worker fails to pull the image, the service fails to deploy on that worker node. Docker tries again to deploy the task, possibly on a different worker node.
If the swarm manager cannot resolve the image to a digest, all is not lost:

- The manager instructs the worker nodes to redeploy the tasks using the image at that tag.
- If the worker has a locally cached image that resolves to that tag, it uses that image.
- If the worker does not have a locally cached image that resolves to the tag, the worker tries to connect to Docker Hub or the private registry to pull the image at that tag.
  - If this succeeds, the worker uses that image.
  - If this fails, the task fails to deploy and the manager tries again to deploy the task, possibly on a different worker node.
When you create a swarm service, you can publish that service’s ports to hosts outside the swarm in two ways:
- You can rely on the routing mesh. When you publish a service port, the swarm makes the service accessible at the target port on every node, regardless of whether there is a task for the service running on that node or not. This is less complex and is the right choice for many types of services.
- You can publish a service task’s port directly on the swarm node where that service is running. This bypasses the routing mesh and provides the maximum flexibility, including the ability for you to develop your own routing framework. However, you are responsible for keeping track of where each task is running and routing requests to the tasks, and load-balancing across the nodes.
Keep reading for more information and use cases for each of these methods.
To publish a service’s ports externally to the swarm, use the `--publish <PUBLISHED-PORT>:<SERVICE-PORT>` flag. The swarm makes the service accessible at the published port on every swarm node. If an external host connects to that port on any swarm node, the routing mesh routes it to a task. The external host does not need to know the IP addresses or internally-used ports of the service tasks to interact with the service. When a user or process connects to a service, any worker node running a service task may respond. For more details about swarm service networking, see Manage swarm service networks.
Imagine that you have a 10-node swarm, and you deploy an Nginx service running three tasks:
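```console
$ docker service create --name my_web \
  --replicas 3 \
  --publish published=8080,target=80 \
  nginx
```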
Three tasks run on up to three nodes. You don’t need to know which nodes are running the tasks; connecting to port 8080 on any of the 10 nodes connects you to one of the three `nginx` tasks. You can test this using `curl`. The following example assumes that `localhost` is one of the swarm nodes. If this is not the case, or `localhost` does not resolve to an IP address on your host, substitute the host’s IP address or resolvable host name.
The HTML output is truncated:
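```console
$ curl localhost:8080

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...truncated...
</html>
```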
Subsequent connections may be routed to the same swarm node or a different one.
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the `mode=host` option to the `--publish` flag.
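For example (a sketch; the `dns-cache` service and image names are illustrative):

```console
$ docker service create --name dns-cache \
  --publish published=53,target=53,protocol=udp,mode=host \
  dns-cache
```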
Note: If you publish a service’s ports directly on the swarm node using `mode=host` and also set `published=<PORT>`, this creates an implicit limitation that you can only run one task for that service on a given swarm node. You can work around this by specifying `published` without a port definition, which causes Docker to assign a random port for each task.
In addition, if you use `mode=host` and you do not use the `--mode=global` flag on `docker service create`, it is difficult to know which nodes are running the service to route work to them.
Example: run an `nginx` web server service on every swarm node. nginx is an open source reverse proxy, load balancer, HTTP cache, and a web server. If you run nginx as a service using the routing mesh, connecting to the nginx port on any swarm node shows you the web page for (effectively) a random swarm node running the service.
The following example runs nginx as a service on each node in your swarm and exposes nginx port locally on each swarm node.
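```console
$ docker service create \
  --mode global \
  --publish mode=host,target=80,published=8080 \
  --name=nginx \
  nginx:latest
```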
You can reach the nginx server on port 8080 of every swarm node. If you add a node to the swarm, an nginx task is started on it. You cannot start another service or container on any swarm node which binds to port 8080.
Note: This is a naive example. Creating an application-layer routing framework for a multi-tiered service is complex and out of scope for this topic.
You can use overlay networks to connect one or more services within the swarm.
First, create an overlay network on a manager node using the `docker network create` command with the `--driver overlay` flag.
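```console
$ docker network create --driver overlay my-network
```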
After you create an overlay network in swarm mode, all manager nodes have access to the network.
You can create a new service and pass the `--network` flag to attach the service to the overlay network:
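```console
# The service name my-web is illustrative.
$ docker service create \
  --replicas 3 \
  --name my-web \
  --network my-network \
  nginx
```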
The swarm extends `my-network` to each node running the service.
You can also connect an existing service to an overlay network using the `--network-add` flag.
To disconnect a running service from a network, use the `--network-rm` flag.
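For example (a sketch, assuming the `my-web` service from above):

```console
$ docker service update --network-add my-network my-web
$ docker service update --network-rm my-network my-web
```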
For more information on overlay networking and service discovery, refer to Attach services to an overlay network and Docker swarm mode overlay network security model.
To create a service with access to Docker-managed secrets, use the `--secret` flag. For more information, see Manage sensitive strings (secrets) for Docker services.
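A minimal sketch (assuming a secret named `my_secret` already exists; the service and image names are illustrative):

```console
$ docker service create --name my_service --secret my_secret nginx
```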
Docker allows you to specify a swarm service’s isolation mode. This setting applies to Windows hosts only and is ignored for Linux hosts. The isolation mode can be one of the following:
- `default`: Use the default isolation mode configured for the Docker host, as configured by the `--exec-opt` flag or `exec-opts` array in `daemon.json`. If the daemon does not specify an isolation technology, `process` is the default for Windows Server, and `hyperv` is the default (and only) choice for Windows 10.
- `process`: Run the service tasks as a separate process on the host. Note: `process` isolation mode is only supported on Windows Server. Windows 10 only supports `hyperv` isolation mode.
- `hyperv`: Run the service tasks as isolated `hyperv` tasks. This increases overhead but provides more isolation.
You can specify the isolation mode when creating or updating a new service using the `--isolation` flag.
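For example (a sketch; the service name and `<windows-image>` are placeholders):

```console
$ docker service create --name my_win_service \
  --isolation process \
  <windows-image>
```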
Swarm services provide a few different ways for you to control scale and placement of services on different nodes.
You can specify whether the service needs to run a specific number of replicas or should run globally on every worker node. See Replicated or global services.
You can configure the service’s CPU or memory requirements, and the service only runs on nodes which can meet those requirements.
Placement constraints let you configure the service to run only on nodes with specific (arbitrary) metadata set, and cause the deployment to fail if appropriate nodes do not exist. For instance, you can specify that your service should only run on nodes where an arbitrary label `pci_compliant` is set to `true`.
Placement preferences let you apply an arbitrary label with a range of values to each node, and spread your service’s tasks across those nodes using an algorithm. Currently, the only supported algorithm is `spread`, which tries to place them evenly. For instance, if you label each node with a label `rack` which has a value from 1-10, then specify a placement preference keyed on `rack`, then service tasks are placed as evenly as possible across all nodes with the label `rack`, after taking other placement constraints, placement preferences, and other node-specific limitations into account.
Unlike constraints, placement preferences are best-effort, and a service does not fail to deploy if no nodes can satisfy the preference. If you specify a placement preference for a service, nodes that match that preference are ranked higher when the swarm managers decide which nodes should run the service tasks. Other factors, such as high availability of the service, also factor into which nodes are scheduled to run service tasks. For example, if you have N nodes with the rack label (and then some others), and your service is configured to run N+1 replicas, the +1 is scheduled on a node that doesn’t already have the service on it if there is one, regardless of whether that node has the `rack` label or not.
Swarm mode has two types of services: replicated and global. For replicated services, you specify the number of replica tasks for the swarm manager to schedule onto available nodes. For global services, the scheduler places one task on each available node that meets the service’s placement constraints and resource requirements.
You control the type of service using the `--mode` flag. If you don’t specify a mode, the service defaults to `replicated`. For replicated services, you specify the number of replica tasks you want to start using the `--replicas` flag. For example, to start a replicated nginx service with 3 replica tasks:
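```console
$ docker service create \
  --name my_web \
  --replicas 3 \
  nginx
```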
To start a global service on each available node, pass `--mode global` to `docker service create`. Every time a new node becomes available, the scheduler places a task for the global service on the new node. For example, to start a service that runs alpine on every node in the swarm:
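```console
$ docker service create \
  --name myservice \
  --mode global \
  alpine top
```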
Service constraints let you set criteria for a node to meet before the scheduler deploys a service to the node. You can apply constraints to the service based upon node attributes and metadata or engine metadata. For more information on constraints, refer to the `docker service create` CLI reference.
To reserve a given amount of memory or number of CPUs for a service, use the `--reserve-memory` or `--reserve-cpu` flags. If no available nodes can satisfy the requirement (for instance, if you request 4 CPUs and no node in the swarm has 4 CPUs), the service remains in a pending state until an appropriate node is available to run its tasks.
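A sketch mirroring the 4-CPU scenario above (the service name, image, and memory value are illustrative):

```console
$ docker service create \
  --name my_db \
  --reserve-cpu 4 \
  --reserve-memory 4GB \
  redis
```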
If your service attempts to use more memory than the swarm node has available, you may experience an Out Of Memory Exception (OOME) and a container, or the Docker daemon, might be killed by the kernel OOM killer. To prevent this from happening, ensure that your application runs on hosts with adequate memory and see Understand the risks of running out of memory.
Swarm services allow you to use resource constraints, placement preferences, andlabels to ensure that your service is deployed to the appropriate swarm nodes.
Use placement constraints to control the nodes a service can be assigned to. In the following example, the service only runs on nodes with the label `region` set to `east`. If no appropriately-labelled nodes are available, tasks will wait in `Pending` until they become available. The `--constraint` flag uses an equality operator (`==` or `!=`). For replicated services, it is possible that all services run on the same node, or each node only runs one replica, or that some nodes don’t run any replicas. For global services, the service runs on every node that meets the placement constraint and any resource requirements.
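A sketch of such a service (the `my-nginx` name is illustrative):

```console
$ docker service create \
  --name my-nginx \
  --replicas 5 \
  --constraint node.labels.region==east \
  nginx
```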
You can also use the `constraint` service-level key in a `docker-compose.yml` file.
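A minimal compose sketch (assuming the compose v3 file format; the service definition is illustrative):

```yaml
version: "3.8"
services:
  web:
    image: nginx
    deploy:
      placement:
        constraints:
          - "node.labels.region==east"
```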
If you specify multiple placement constraints, the service only deploys onto nodes where they are all met. The following example limits the service to run on all nodes where `region` is set to `east` and `type` is not set to `devel`:
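```console
$ docker service create \
  --name my-nginx \
  --mode global \
  --constraint node.labels.region==east \
  --constraint node.labels.type!=devel \
  nginx
```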
You can also use placement constraints in conjunction with placement preferences and CPU/memory constraints. Be careful not to use settings that are not possible to fulfill.
For more information on constraints, refer to the `docker service create` CLI reference.
While placement constraints limit the nodes a service can run on, placement preferences try to place tasks on appropriate nodes in an algorithmic way (currently, only spread evenly). For instance, if you assign each node a `rack` label, you can set a placement preference to spread the service evenly across nodes with the `rack` label, by value. This way, if you lose a rack, the service is still running on nodes on other racks.
Placement preferences are not strictly enforced. If no node has the label you specify in your preference, the service is deployed as though the preference were not set.
Placement preferences are ignored for global services.
The following example sets a preference to spread the deployment across nodes based on the value of the `datacenter` label. If some nodes have `datacenter=us-east` and others have `datacenter=us-west`, the service is deployed as evenly as possible across the two sets of nodes.
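```console
# The service name redis_2 is illustrative.
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  redis:3.0.6
```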
Note on missing or null labels: Nodes which are missing the label used to spread still receive task assignments. As a group, these nodes receive tasks in equal proportion to any of the other groups identified by a specific label value. In a sense, a missing label is the same as having the label with a null value attached to it. If the service should only run on nodes with the label being used for the spread preference, the preference should be combined with a constraint.
You can specify multiple placement preferences, and they are processed in the order they are encountered. The following example sets up a service with multiple placement preferences. Tasks are spread first over the various datacenters, and then over racks (as indicated by the respective labels):
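```console
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6
```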
You can also use placement preferences in conjunction with placement constraints or CPU/memory constraints. Be careful not to use settings that are not possible to fulfill.
[Diagram: how placement preferences work]
When updating a service with `docker service update`, `--placement-pref-add` appends a new placement preference after all existing placement preferences. `--placement-pref-rm` removes an existing placement preference that matches the argument.
When you create a service, you can specify a rolling update behavior for how the swarm should apply changes to the service when you run `docker service update`. You can also specify these flags as part of the update, as arguments to `docker service update`.
The `--update-delay` flag configures the time delay between updates to a service task or sets of tasks. You can describe the time `T` as a combination of the number of seconds `Ts`, minutes `Tm`, or hours `Th`. So `10m30s` indicates a 10 minute 30 second delay.
By default the scheduler updates 1 task at a time. You can pass the `--update-parallelism` flag to configure the maximum number of service tasks that the scheduler updates simultaneously.
When an update to an individual task returns a state of `RUNNING`, the scheduler continues the update by continuing to another task until all tasks are updated. If, at any time during an update a task returns `FAILED`, the scheduler pauses the update. You can control the behavior using the `--update-failure-action` flag for `docker service create` or `docker service update`.
In the example service below, the scheduler applies updates to a maximum of 2 replicas at a time. When an updated task returns either `RUNNING` or `FAILED`, the scheduler waits 10 seconds before stopping the next task to update:
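```console
$ docker service create \
  --replicas 10 \
  --name my_web \
  --update-delay 10s \
  --update-parallelism 2 \
  --update-failure-action continue \
  alpine
```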
The `--update-max-failure-ratio` flag controls what fraction of tasks can fail during an update before the update as a whole is considered to have failed. For example, with `--update-max-failure-ratio 0.1 --update-failure-action pause`, after 10% of the tasks being updated fail, the update is paused.
An individual task update is considered to have failed if the task doesn’t start up, or if it stops running within the monitoring period specified with the `--update-monitor` flag. The default value for `--update-monitor` is 30 seconds, which means that a task failing in the first 30 seconds after it’s started counts towards the service update failure threshold, and a failure after that is not counted.
In case the updated version of a service doesn’t function as expected, it’s possible to manually roll back to the previous version of the service using `docker service update`’s `--rollback` flag. This reverts the service to the configuration that was in place before the most recent `docker service update` command.
Other options can be combined with `--rollback`; for example, `--update-delay 0s` to execute the rollback without a delay between tasks:
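```console
$ docker service update \
  --rollback \
  --update-delay 0s \
  my_web
```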
You can configure a service to roll back automatically if a service update fails to deploy. See Automatically roll back if an update fails.
Manual rollback is handled at the server side, which allows manually-initiated rollbacks to respect the new rollback parameters. Note that `--rollback` cannot be used in conjunction with other flags to `docker service update`.
You can configure a service in such a way that if an update to the service causes redeployment to fail, the service can automatically roll back to the previous configuration. This helps protect service availability. You can set one or more of the following flags at service creation or update. If you do not set a value, the default is used.
Flag | Default | Description |
---|---|---|
--rollback-delay | 0s | Amount of time to wait after rolling back a task before rolling back the next one. A value of 0 means to roll back the second task immediately after the first rolled-back task deploys. |
--rollback-failure-action | pause | When a task fails to roll back, whether to pause or continue trying to roll back other tasks. |
--rollback-max-failure-ratio | 0 | The failure rate to tolerate during a rollback, specified as a floating-point number between 0 and 1. For instance, given 5 tasks, a failure ratio of .2 would tolerate one task failing to roll back. A value of 0 means no failures are tolerated, while a value of 1 means any number of failures is tolerated. |
--rollback-monitor | 5s | Duration after each task rollback to monitor for failure. If a task stops before this time period has elapsed, the rollback is considered to have failed. |
--rollback-parallelism | 1 | The maximum number of tasks to roll back in parallel. By default, one task is rolled back at a time. A value of 0 causes all tasks to be rolled back in parallel. |
The following example configures a `redis` service to roll back automatically if a `docker service update` fails to deploy. Two tasks can be rolled back in parallel. Tasks are monitored for 20 seconds after rollback to be sure they do not exit, and a maximum failure ratio of 20% is tolerated. Default values are used for `--rollback-delay` and `--rollback-failure-action`.
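```console
# The service name my_redis is illustrative.
$ docker service create --name=my_redis \
  --replicas=5 \
  --rollback-parallelism=2 \
  --rollback-monitor=20s \
  --rollback-max-failure-ratio=.2 \
  redis:latest
```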
For best performance and portability, you should avoid writing important data directly into a container’s writable layer, instead using data volumes or bind mounts. This principle also applies to services.
You can create two types of mounts for services in a swarm, `volume` mounts or `bind` mounts. Regardless of which type of mount you use, configure it using the `--mount` flag when you create a service, or the `--mount-add` or `--mount-rm` flag when updating an existing service. The default is a data volume if you don’t specify a type.
Data volumes are storage that exist independently of a container. The lifecycle of data volumes under swarm services is similar to that under containers. Volumes outlive tasks and services, so their removal must be managed separately. Volumes can be created before deploying a service, or if they don’t exist on a particular host when a task is scheduled there, they are created automatically according to the volume specification on the service.
To use existing data volumes with a service, use the `--mount` flag:
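```console
$ docker service create \
  --mount src=<VOLUME-NAME>,dst=<CONTAINER-PATH> \
  --name myservice \
  <IMAGE>
```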
If a volume with the same `<VOLUME-NAME>` does not exist when a task is scheduled to a particular host, then one is created. The default volume driver is `local`. To use a different volume driver with this create-on-demand pattern, specify the driver and its options with the `--mount` flag:
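```console
$ docker service create \
  --mount type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=<DRIVER>,volume-opt=<KEY0>=<VALUE0>,volume-opt=<KEY1>=<VALUE1> \
  --name myservice \
  <IMAGE>
```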
For more information on how to create data volumes and the use of volume drivers, see Use volumes.
Bind mounts are file system paths from the host where the scheduler deploys the container for the task. Docker mounts the path into the container. The file system path must exist before the swarm initializes the container for the task.
The following examples show bind mount syntax:
To mount a read-write bind:
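```console
$ docker service create \
  --mount type=bind,src=<HOST-PATH>,dst=<CONTAINER-PATH> \
  --name myservice \
  <IMAGE>
```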
To mount a read-only bind:
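```console
$ docker service create \
  --mount type=bind,src=<HOST-PATH>,dst=<CONTAINER-PATH>,readonly \
  --name myservice \
  <IMAGE>
```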
Important: Bind mounts can be useful but they can also cause problems. In most cases, it is recommended that you architect your application such that mounting paths from the host is unnecessary. The main risks include the following:
- If you bind mount a host path into your service’s containers, the path must exist on every swarm node. The Docker swarm mode scheduler can schedule containers on any machine that meets resource availability requirements and satisfies all constraints and placement preferences you specify.
- The Docker swarm mode scheduler may reschedule your running service containers at any time if they become unhealthy or unreachable.
- Host bind mounts are non-portable. When you use bind mounts, there is no guarantee that your application runs the same way in development as it does in production.
You can use templates for some flags of `service create`, using the syntax provided by Go’s text/template package.

The following flags are supported:

- `--hostname`
- `--mount`
- `--env`
Valid placeholders for the Go template are:
Placeholder | Description |
---|---|
.Service.ID | Service ID |
.Service.Name | Service name |
.Service.Labels | Service labels |
.Node.ID | Node ID |
.Node.Hostname | Node hostname |
.Task.Name | Task name |
.Task.Slot | Task slot |
This example sets the template of the created containers based on the service’s name and the ID of the node where the container is running:
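```console
# The service name hosttempl is illustrative.
$ docker service create --name hosttempl \
  --hostname="{{.Service.Name}}-{{.Node.ID}}" \
  busybox top
```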
To see the result of using the template, use the `docker service ps` and `docker inspect` commands.
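For example (a sketch; `<container-id>` stands in for the task’s container ID):

```console
$ docker service ps hosttempl
$ docker inspect --format="{{.Config.Hostname}}" <container-id>
```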