Using Docker to Run Multiple Elasticsearch Instances

1 year ago by Milos Zdravkovic


We'll assume that you already have Docker up and running. If not, brief instructions on how to set it up on Ubuntu 16.04 can be found at the end of this post.

Note: Unlike $, the # prompt marks commands for which you need to acquire root privileges. You can also prefix these commands with sudo (if available on your system) to run them.

First, let's list all available Elasticsearch images and pull down the "official" one.

# docker search elasticsearch
NAME                              DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
elasticsearch                     Elasticsearch is a powerful open source se...   1674      [OK]       
itzg/elasticsearch                Provides an easily configurable Elasticsea...   37                   [OK]
nshou/elasticsearch-kibana        Elasticsearch-2.3.4 Kibana-4.5.3                15                   [OK]
barnybug/elasticsearch            Latest Elasticsearch 1.7.2 and previous re...   15                   [OK]
digitalwonderland/elasticsearch   Latest Elasticsearch with Marvel & Kibana       14                   [OK]
monsantoco/elasticsearch          ElasticSearch Docker image                      9                    [OK]
lmenezes/elasticsearch-kopf       elasticsearch kopf                              8                    [OK]
million12/elasticsearch           Elasticsearch (CentOS 7)                        6                    [OK]
mesoscloud/elasticsearch          [UNMAINTAINED] Elasticsearch                    5                    [OK]
pires/elasticsearch               Elasticsearch (1.7.0) cluster on top of Ku...   3                    [OK]
blacktop/elasticsearch            Alpine Linux based Elasticsearch Docker Image   2                    [OK]
chialab/elasticsearch             Elasticsearch image with Marvel plugin.         1                    [OK]
visity/elasticsearch-curator      Automated build for docker-elasticsearch-c...   1                    [OK]
khezen/elasticsearch              Elasticsearch Docker image including Shiel...   1                    [OK]
shifudao/elasticsearch            elasticsearch for shifudao test environmen...   0                    [OK]
bryanhong/elasticsearch           Elasticsearch, standalone or clustered          0                    [OK]
glampinghub/elasticsearch         ElasticSearch 0.9 WIP                           0                    [OK]
findspire/elasticsearch           Elasticsearch for our dev environment           0                    [OK]
synopsis/elasticsearch            Docker image with elasticsearch                 0                    [OK]
phase2/elasticsearch              elasticsearch image to allow for extra plu...   0                    [OK]
livingdocs/elasticsearch          The elasticsearch setup we at Livingdocs c...   0                    [OK]
meedan/elasticsearch              elasticsearch with elasticsearch-gui            0                    [OK]
1science/elasticsearch            Elasticsearch Docker images based on Alpin...   0                    [OK]
drupaldocker/elasticsearch        Elasticsearch for Drupal                        0                    [OK]
ianneub/elasticsearch                                                             0                    [OK]

# docker pull elasticsearch
Using default tag: latest
latest: Pulling from library/elasticsearch

6a5a5368e0c2: Pull complete 
7b9457ec39de: Pull complete 
d5cc639e6fca: Pull complete 
2cac98b7f5b9: Pull complete 
bf96dd67c9aa: Pull complete 
ab05ba8362e2: Pull complete 
fa7e8f9f253c: Pull complete 
1bcda778c27e: Pull complete 
7ef4f9486437: Pull complete 
70899aa48619: Pull complete 
86c562ccbdcd: Pull complete 
f6bb1563ea2d: Pull complete 
0c3036c1ad72: Pull complete 
d5bc2f99845f: Pull complete 
Digest: sha256:fe78f31641f2c17276a001b78d83b74500d910c9619eaa3f6279dbfaf1914c0d
Status: Downloaded newer image for elasticsearch:latest

If everything went as expected, elasticsearch should be visible in the list of images installed on your system:

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
elasticsearch       latest              22287ab1f811        2 weeks ago         342.8 MB

Now, let's say we want to create three Elasticsearch instances called Alpha, Beta and Gamma. In order for the data stored on them to survive across reboots, we need to create persistent storage volumes as follows:

# docker volume create --name Alpha
# docker volume create --name Beta
# docker volume create --name Gamma
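As a quick sanity check, you can list the volumes you just created and see where Docker keeps their data on the host (both subcommands are part of the standard docker volume CLI):

```shell
# All three names should appear in the output
docker volume ls

# Shows the volume's mountpoint on the host filesystem
docker volume inspect Alpha
```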

The next thing to solve is the port conflict. Containers are isolated environments, so all three instances can safely bind to tcp/9200 (Elasticsearch's default) even though they are running on the same host. But in order for all three of them to be reachable from outside their containers, we need to give Docker three different external ports using the -p flag, as shown below:

# docker run -d -p -v Alpha:/usr/share/elasticsearch/data elasticsearch
# docker run -d -p -v Beta:/usr/share/elasticsearch/data elasticsearch
# docker run -d -p -v Gamma:/usr/share/elasticsearch/data elasticsearch

We now have three Elastic instances up and running. The -d flag detaches the instances from our terminal (otherwise, we'd end up attached "inside" the container). The -v flag specifies the volume to use and its mount path inside the container. The last argument (in this case elasticsearch) is the name of the image we're instantiating.
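Since we didn't name the containers, Docker assigns them random names (you'll see examples in the docker ps output later). As a sketch, you could also pass --name so each container is easier to manage; the name alpha here is just an illustration:

```shell
# Named variant of the Alpha instance
docker run -d --name alpha -p \
    -v Alpha:/usr/share/elasticsearch/data elasticsearch

# The name can then replace the container ID in later commands
docker stop alpha
docker start alpha
```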

Elasticsearch should not be visible to anyone you don't trust, since it doesn't come with any kind of authorization or access control. That's why we are restricting access to localhost only, by explicitly binding to in the -p argument. The external ports for Alpha, Beta and Gamma in this example are 9201, 9202 and 9203, respectively.

You can place the last three commands in /etc/rc.local. This is probably the lamest but still the most universal way to start your containers on each boot.
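If you'd rather not touch /etc/rc.local, Docker's own restart policies can do the same job; this is a sketch of the Alpha instance using the --restart flag:

```shell
# --restart=always brings the container back up whenever the Docker
# daemon starts, which includes every reboot
docker run -d --restart=always -p \
    -v Alpha:/usr/share/elasticsearch/data elasticsearch
```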


To see if Alpha, Beta and Gamma are responding, run the following commands:

$ curl localhost:9201
$ curl localhost:9202
$ curl localhost:9203

The response to each of these commands should look something like this:

  "name" : "Moon Knight",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Sb-UdwbFQ4SLKT4eHFM6uA",
  "version" : {
    "number" : "2.4.1",
    "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16",
    "build_timestamp" : "2016-09-27T18:57:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  "tagline" : "You Know, for Search"

If you are not getting any response, use the following command to check whether the containers are actually up and running:

# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
c9605a58a477        elasticsearch       "/docker-entrypoint.s"   4 seconds ago       Up 3 seconds        9300/tcp,>9200/tcp   romantic_williams
7623ce0e1796        elasticsearch       "/docker-entrypoint.s"   14 seconds ago      Up 14 seconds       9300/tcp,>9200/tcp   agitated_golick
636f9a9120ee        elasticsearch       "/docker-entrypoint.s"   7 minutes ago       Up 7 minutes        9300/tcp,>9200/tcp   serene_goldwasser

Put some random data in each of the instances. For example:

$ curl -XPOST 'localhost:9201/laraget/example?pretty' -d '{"nubmers": "one two three"}'
$ curl -XPOST 'localhost:9202/laraget/example?pretty' -d '{"numbers": "uno due tre"}'
$ curl -XPOST 'localhost:9203/laraget/example?pretty' -d '{"numbers": "un deux trois"}'

Reboot the system and test if the data is still there:

$ curl -XPOST 'localhost:9201/laraget/example/_search?pretty' -d '{}'
{
  "took" : 10,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "laraget",
      "_type" : "example",
      "_id" : "AVfN793atYyUChxTrjPl",
      "_score" : 1.0,
      "_source" : {
        "nubmers" : "one two three"
      }
    } ]
  }
}

Try to access these instances from some remote host using similar curl commands (don't forget to replace localhost with the actual IP address of your Docker machine). If everything is as expected, you should not get any response:

$ curl <docker-host-ip>:9202
curl: (7) Failed to connect to <docker-host-ip> port 9202: Connection refused

Cleaning up the mess

The list of currently running containers can be obtained by:

# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
f8c6c8a248bb        elasticsearch       "/docker-entrypoint.s"   5 minutes ago       Up 5 minutes        9300/tcp,>9200/tcp   happy_wilson
a873e63a80f9        elasticsearch       "/docker-entrypoint.s"   5 minutes ago       Up 5 minutes        9300/tcp,>9200/tcp   suspicious_austin
90a56460d1ec        elasticsearch       "/docker-entrypoint.s"   5 minutes ago       Up 5 minutes        9300/tcp,>9200/tcp   silly_bose

Stopping containers is easy. Just specify the ID obtained from the output of the previous command:

# docker stop a873e63a80f9

To prevent them from starting automatically on boot remove the appropriate line from /etc/rc.local.

Volumes can be destroyed with a command like this:

# docker volume rm Beta

Note that this command will fail if the volume is currently in use. Stopped leftover ("zombie") containers that still reference the volume are also a known cause of failure:

# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                                NAMES
295690cb339b        elasticsearch       "/docker-entrypoint.s"   9 seconds ago       Up 8 seconds                9300/tcp,>9200/tcp   berserk_lamport
a2fa8f73e11f        elasticsearch       "/docker-entrypoint.s"   10 seconds ago      Up 9 seconds                9300/tcp,>9200/tcp   goofy_tesla
216ebc785428        elasticsearch       "/docker-entrypoint.s"   11 seconds ago      Up 10 seconds               9300/tcp,>9200/tcp   amazing_hugle
6596df09fc9c        elasticsearch       "/docker-entrypoint.s"   38 seconds ago      Exited (0) 23 seconds ago                                        zen_euclid
1675c49aaccb        elasticsearch       "/docker-entrypoint.s"   39 seconds ago      Exited (0) 23 seconds ago                                        fervent_yalow
7787c022a214        elasticsearch       "/docker-entrypoint.s"   39 seconds ago      Exited (0) 23 seconds ago                                        kickass_aryabhata

You can remove a "zombie" by running the docker rm command as shown in the example below:

# docker rm 6596df09fc9c 1675c49aaccb 7787c022a214
# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
295690cb339b        elasticsearch       "/docker-entrypoint.s"   3 minutes ago       Up 3 minutes        9300/tcp,>9200/tcp   berserk_lamport
a2fa8f73e11f        elasticsearch       "/docker-entrypoint.s"   3 minutes ago       Up 3 minutes        9300/tcp,>9200/tcp   goofy_tesla
216ebc785428        elasticsearch       "/docker-entrypoint.s"   3 minutes ago       Up 3 minutes        9300/tcp,>9200/tcp   amazing_hugle
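If zombies pile up, removing them one by one gets tedious. A common pattern is to filter for exited containers and pipe their IDs straight into docker rm (check the ID list first before running this on anything important):

```shell
# -a lists all containers, -q prints only IDs, and the status filter
# keeps just the exited ones; xargs -r skips docker rm on an empty list
docker ps -aq --filter status=exited | xargs -r docker rm
```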


Installing Docker on Ubuntu 16.04

Add the GPG key and the official Docker repository to the system:

# apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo "deb ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list

Download package information from all configured sources and confirm that the docker-engine package will be installed from the Docker repo instead of the default Ubuntu 16.04 repo:

# apt-get update
# apt-cache policy docker-engine
docker-engine:
  Installed: (none)
  Candidate: 1.11.1-0~xenial
  Version table:
     1.11.1-0~xenial 500
        500 ubuntu-xenial/main amd64 Packages
     1.11.0-0~xenial 500
        500 ubuntu-xenial/main amd64 Packages

If the candidate comes from the Docker repository, proceed with the actual installation:

# apt-get install -y docker-engine

Check the current and boot status by running the following two commands:

$ systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-10-16 10:10:27 CEST; 44min ago
 Main PID: 2639 (dockerd)
    Tasks: 17
   Memory: 19.8M
      CPU: 1.105s
   CGroup: /system.slice/docker.service
           ├─2639 /usr/bin/dockerd -H fd://
           └─2647 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcon

Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.267675753+02:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.268233729+02:00" level=warning msg="Your kernel does not support swap memory limit."
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.268954023+02:00" level=info msg="Loading containers: start."
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.293562780+02:00" level=info msg="Firewalld running: false"
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.515648940+02:00" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be us
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.653058648+02:00" level=info msg="Loading containers: done."
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.653216556+02:00" level=info msg="Daemon has completed initialization"
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.653248736+02:00" level=info msg="Docker daemon" commit=bb80604 graphdriver=aufs version=1.12.2
Oct 16 10:10:27 Elastic-Blog dockerd[2639]: time="2016-10-16T10:10:27.658162829+02:00" level=info msg="API listen on /var/run/docker.sock"
Oct 16 10:10:27 Elastic-Blog systemd[1]: Started Docker Application Container Engine.
$ systemctl is-enabled docker
enabled

If Docker is not in the active (running) state, or is not enabled on boot, you can fix that by running:

# systemctl start docker
# systemctl enable docker
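Once the service is up, a quick way to verify the whole installation end to end is Docker's own hello-world test image:

```shell
# Pulls a tiny test image, runs it, and removes the container afterwards;
# success prints a greeting from the container
docker run --rm hello-world
```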
