Here is a short guide covering the steps I followed to deploy an Elasticsearch cluster, using SaltStack to manage the configuration.


For the purposes of testing, I will be using a few locally-installed virtual machines. Specifically, we'll have:

  • A machine running salt master
  • Three elasticsearch nodes

Creating the VMs

To create the machines, I just installed a bare-bones Debian Wheezy image (under KVM) and cloned it four times:

virt-clone -o TemplateWheezy -n es-cluster-salt-master -f /mnt/virtualmachines/es-cluster-salt-master.img -m '52:54:00:ee:55:f0'
for id in 01 02 03; do
    virt-clone -o TemplateWheezy -n es-cluster-node-"$id" -f /mnt/virtualmachines/es-cluster-node-"$id".img -m '52:54:00:ee:55:'"$id"
done
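When generating names and MAC addresses from a loop variable like this, a cheap dry run is to echo the commands first and eyeball the result before executing anything:

```shell
# Dry run: print the clone commands to verify the generated names and MACs
for id in 01 02 03; do
    echo virt-clone -o TemplateWheezy -n es-cluster-node-"$id" \
        -f /mnt/virtualmachines/es-cluster-node-"$id".img -m '52:54:00:ee:55:'"$id"
done
```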

To properly set up the network, I then added static entries for all the machines in the cluster:

virsh net-edit default

<host mac='52:54:00:ee:55:f0' name='es-cluster-salt-master' ip=''/>
<host mac='52:54:00:ee:55:01' name='es-cluster-node-01' ip=''/>
<host mac='52:54:00:ee:55:02' name='es-cluster-node-02' ip=''/>
<host mac='52:54:00:ee:55:03' name='es-cluster-node-03' ip=''/>

The <host> entries go inside the network's <dhcp> element; fill in each ip attribute with a free address from the network's range.

Now we can start the VMs:

virsh start es-cluster-salt-master
for id in 01 02 03; do
    virsh start es-cluster-node-"$id"
done

You can then make sure the machines are responding:

ping -c1 es-cluster-salt-master
ping -c1 es-cluster-node-01
ping -c1 es-cluster-node-02
ping -c1 es-cluster-node-03
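The same checks can be run in one loop (assuming the host names registered in the libvirt network above; -W2 caps the wait at two seconds per host):

```shell
# Ping every machine once, reporting unreachable hosts instead of aborting
for h in es-cluster-salt-master es-cluster-node-01 es-cluster-node-02 es-cluster-node-03; do
    if ping -c1 -W2 "$h" >/dev/null 2>&1; then
        echo "$h ok"
    else
        echo "$h UNREACHABLE"
    fi
done
```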

Troubleshooting: if DHCP seems to be misbehaving, make sure you remove the cached dnsmasq lease files:

virsh net-destroy default
rm /var/lib/libvirt/dnsmasq/default.*
virsh net-start default

If you haven't already done it in the template, this is a good moment to install OpenSSH on the VMs. If it was installed in the template, you might want to regenerate the SSH host keys so that each machine gets its own.

Preparing the machines

First of all, configure the FQDNs of the machines:

hostname salt-master  # or node-XX, ...
hostname > /etc/hostname
sed 's/^127\.0\.1\.1\s.*/127.0.1.1\t'"$(hostname)"'.es-cluster.local '"$(hostname)"'/' -i /etc/hosts
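To see what that sed invocation does, you can try the same rewrite on a scratch copy of a minimal /etc/hosts (the hostname is hardcoded to node-01 here for the sake of the example):

```shell
# Demonstrate the /etc/hosts rewrite on a scratch file
host=node-01
tmp=$(mktemp)
printf '127.0.0.1\tlocalhost\n127.0.1.1\ttemplate-name\n' > "$tmp"
# Replace the 127.0.1.1 line with the new FQDN and short name
sed -i 's/^127\.0\.1\.1\s.*/127.0.1.1\t'"$host"'.es-cluster.local '"$host"'/' "$tmp"
cat "$tmp"   # the 127.0.1.1 line now reads: 127.0.1.1<TAB>node-01.es-cluster.local node-01
rm -f "$tmp"
```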

To check:

$ hostname -f

Installing saltstack

I just followed the official installation guide for Debian:

echo 'deb http://debian.saltstack.com/debian wheezy-saltstack main' > /etc/apt/sources.list.d/saltstack.list
wget -q -O- 'http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key' | apt-key add -
apt-get update
apt-get update

On the master:

apt-get install salt-master

On the minions:

# If you don't have a proper DNS, point the name "salt" at the master
# (replace the placeholder with your master's IP)
echo '<ip-of-the-salt-master> salt' >> /etc/hosts

apt-get install salt-minion

hostname -f > /etc/salt/minion_id

Configure minions to reach the master

If you want to use a DNS name different from the default salt, change the master option in /etc/salt/minion:

master: salt-master.es-cluster.local

(I usually set a CNAME on the internal DNS to make the name salt point to the correct machine, and leave the default value in the minions' configuration.)

Register the minion keys on the master

root@salt-master:~# salt-key -L
Accepted Keys:
Unaccepted Keys:
node-01.es-cluster.local
node-02.es-cluster.local
node-03.es-cluster.local
Rejected Keys:

root@salt-master:~# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
node-01.es-cluster.local
node-02.es-cluster.local
node-03.es-cluster.local
Proceed? [n/Y] y
Key for minion node-01.es-cluster.local accepted.
Key for minion node-02.es-cluster.local accepted.
Key for minion node-03.es-cluster.local accepted.

Finish setting up the machines

First, check that the minions are responding, by issuing:

salt '*' test.ping

You can also use this to check the status of the minions:

salt-run manage.status

Configure grains on cluster nodes

We add some extra grains on the cluster machines in order to:

  • keep track of the configuration we want on each machine
  • store some configuration, such as which cluster the machine belongs to

In real life, we might want to configure other things, for example to identify the physical location of the server; the cluster names will then be decided in the SLS files depending on those values.

Add this to the minion configuration files, then restart the salt-minion service so the new grains are picked up:

grains:
    roles:
        - elasticsearch
    elasticsearch:
        cluster: es-cluster-local-01

Writing states

Now it's time to prepare the state (SLS) files that will be used to manage the cluster.


On the salt master:

mkdir /srv/salt
cd /srv/salt

Creating the "top" file


Create /srv/salt/top.sls:

base:
    '*':
        - common_packages
    'roles:elasticsearch':
        - match: grain
        - elasticsearch


The common_packages state

This is used mostly to configure common stuff we want on each machine: editor, configuration files, etc. This is mine, in /srv/salt/common_packages/init.sls:

common_packages:
    pkg:
        - installed
        - names:
            - git
            - etckeeper
            - tmux
            - htop
            - tree
            - emacs23-nox
            - yaml-mode

common_scripts:
    git:
        - latest
        - name: <url-of-your-scripts-repository>
        - rev: master
        - target: /opt/CommonScripts

/root/.bashrc:
    file:
        - managed
        - source: salt://conf/bashrc

And, of course, the bashrc file, in /srv/salt/conf/bashrc:

# Standard ~/.bashrc
# Generated via Salt

export EDITOR='emacs'
alias e=emacs

if [ -e /opt/CommonScripts/Configs/bash/bash_aliases ]; then
    . /opt/CommonScripts/Configs/bash/bash_aliases
fi

if [ -e /opt/CommonScripts/Configs/bash/ ]; then
   eval $( python /opt/CommonScripts/Configs/bash/ )
fi

if [ -e ~/.bashrc_local ]; then
    . ~/.bashrc_local
fi
Prerequisite: Oracle Java

Installing Java from Oracle on Debian is tricky, due to licensing problems, but luckily the java-package tool can build a proper Debian package from the upstream tarball.

Install the tools to build java packages:

apt-get install java-package

Download an appropriate JRE tarball from the Oracle download page:

wget -O jre-7u60-linux-x64.tar.gz '<jre-download-url-from-the-oracle-site>'

As a normal user (root wouldn't work for security reasons), run this to create the .deb package:

make-jpkg jre-7u60-linux-x64.tar.gz

then answer the questions asked interactively.

After that, copy the resulting package to /srv/salt/java/oracle-j2re1.7_1.7.0+update60_amd64.deb

Then configure /srv/salt/java/init.sls to install the package:

oracle_java_pkg:
    pkg:
        - installed
        - sources:
            - oracle-j2re1.7: salt://java/oracle-j2re1.7_1.7.0+update60_amd64.deb

The elasticsearch configuration

The most important part is the /srv/salt/elasticsearch/init.sls file:

# Include the ``java`` sls in order to use oracle_java_pkg
include:
    - java

# Note: this is only valid for the Debian repo / package.
# You should add a conditional on grains['os'] for yum-based distros.
elasticsearch_repo:
    pkgrepo:
        - managed
        - humanname: Elasticsearch Official Debian Repository
        - name: deb <elasticsearch-debian-repository-url> stable main
        - dist: stable
        - key_url: salt://elasticsearch/GPG-KEY-elasticsearch
        - file: /etc/apt/sources.list.d/elasticsearch.list

elasticsearch:
    pkg:
        - installed
        - require:
            - pkg: oracle_java_pkg
            - pkgrepo: elasticsearch_repo
    service:
        - running
        - enable: True
        - require:
            - pkg: elasticsearch
            - file: /etc/elasticsearch/elasticsearch.yml

/etc/elasticsearch/elasticsearch.yml:
    file:
        - managed
        - user: root
        - group: root
        - mode: 644
        - template: jinja
        - source: salt://elasticsearch/elasticsearch.yml

Download the elasticsearch repository key and store it as /srv/salt/elasticsearch/GPG-KEY-elasticsearch:

wget -O /srv/salt/elasticsearch/GPG-KEY-elasticsearch http://packages.elasticsearch.org/GPG-KEY-elasticsearch

Now, the elasticsearch configuration template, /srv/salt/elasticsearch/elasticsearch.yml:

# Elasticsearch configuration for {{ grains['fqdn'] }}
# Cluster: {{ grains['elasticsearch']['cluster'] }}

cluster.name: {{ grains['elasticsearch']['cluster'] }}
node.name: "{{ grains['fqdn'] }}"
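Assuming the grains configured earlier, for the first node the template should render to something like:

```yaml
# Elasticsearch configuration for node-01.es-cluster.local
# Cluster: es-cluster-local-01

cluster.name: es-cluster-local-01
node.name: "node-01.es-cluster.local"
```

Every node gets the same cluster.name (so they join the same cluster) and a unique node.name derived from its FQDN.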

Deploying configuration on minions

It's as easy as running:

salt '*' state.highstate

If you want more compact output, you can add the --state-output=terse argument to the above command.

Once the command completes, you should have your Elasticsearch cluster deployed, up and running.

Bonus: os-level targeting

If you need to match only a certain operating system in the top.sls, you can use compound matching like this:

'G@os:Debian and G@oscodename:wheezy':
    - match: compound
    - elasticsearch

This will match all the minions running Debian Wheezy.