rudijs.github.com

Web Development, Web Operations, DevOps Blog

ELK V2 Quickstart with Official Docker Images: Elasticsearch v2, Logstash v2, Kibana v4

Overview

Elasticsearch and Logstash have both released new major versions (v2). This post demonstrates the ELK stack using them with Docker.

We'll be using official Docker images from hub.docker.com.

Using the ELK stack (Elasticsearch, Logstash and Kibana) we'll implement a centralized logging system.

TL;DR

In just a few commands, use Docker to get an ELK v2 stack up and running.

Method

The following method has been tested with Ubuntu 14.04.

Whichever Linux distro you choose, the only pre-requisite to follow along here is to have Docker installed.

A nice feature when using these Docker images to launch containers is that you can specify the exact versions you require.

The ELK stack is three pieces of software, each updating independently, so it's very nice to be able to set the exact versions you want.

You can follow along by typing or copy-pasting the commands below.

Step 1 - Download

We'll be downloading and launching three Docker images: Elasticsearch, Logstash and Kibana.

We'll download each image one at a time (you can also pull all three in one command, or simply let Docker pull them at run time; see the sketch after the list below):

  • sudo docker pull elasticsearch:2.1.0
  • sudo docker pull logstash:2.1.0
  • sudo docker pull kibana:4.3.0
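
If you'd prefer to pull all three in one go, a minimal bash sketch (same versions as above):

for image in elasticsearch:2.1.0 logstash:2.1.0 kibana:4.3.0; do
  sudo docker pull "$image"
done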

Step 2 - Elasticsearch

  • Create a directory to hold the persisted index data.
  • mkdir esdata
  • Run a Docker container, bind-mount the esdata directory as a volume, expose port 9200, and listen on all IPs:
  • sudo docker run -d --name elasticsearch -v "$PWD/esdata":/usr/share/elasticsearch/data -p 9200:9200 elasticsearch:2.1.0 -Des.network.bind_host=0.0.0.0
  • You should see some output like:
  • f624c4ea0f532b8022d948befdb81299e08c57e3e3e50c75976f66366ec423a8
  • Check the container is running OK:
  • sudo docker ps
  • You should see output similar to:
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
391370901f42        elasticsearch:2.1.0   "/docker-entrypoint.     7 seconds ago       Up 6 seconds        9300/tcp, 0.0.0.0:9200->9200/tcp   elasticsearch
  • We can also look at the start-up output from the Elasticsearch container.
  • sudo docker logs elasticsearch

You should see output similar to:

[2015-11-30 07:43:46,075][INFO ][node                     ] [Water Wizard] version[2.1.0], pid[1], build[72cd1f1/2015-11-18T22:40:03Z]
[2015-11-30 07:43:46,075][INFO ][node                     ] [Water Wizard] initializing ...
[2015-11-30 07:43:46,209][INFO ][plugins                  ] [Water Wizard] loaded [], sites []
[2015-11-30 07:43:46,296][INFO ][env                      ] [Water Wizard] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/disk/by-uuid/307721ef-5d43-483d-916c-8d84ea413439)]], net usable_space [16.7gb], net total_space [39.3gb], spins? [possibly], types [ext4]
[2015-11-30 07:43:50,919][INFO ][node                     ] [Water Wizard] initialized
[2015-11-30 07:43:50,948][INFO ][node                     ] [Water Wizard] starting ...
[2015-11-30 07:43:51,277][WARN ][common.network           ] [Water Wizard] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.23}
[2015-11-30 07:43:51,278][INFO ][transport                ] [Water Wizard] publish_address {172.17.0.23:9300}, bound_addresses {[::]:9300}
[2015-11-30 07:43:51,336][INFO ][discovery                ] [Water Wizard] elasticsearch/IfHSxUEDRb-h4vxP3g_FVA
[2015-11-30 07:43:54,466][INFO ][cluster.service          ] [Water Wizard] new_master {Water Wizard}{IfHSxUEDRb-h4vxP3g_FVA}{172.17.0.23}{172.17.0.23:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-30 07:43:54,593][WARN ][common.network           ] [Water Wizard] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.23}
[2015-11-30 07:43:54,594][INFO ][http                     ] [Water Wizard] publish_address {172.17.0.23:9200}, bound_addresses {[::]:9200}
[2015-11-30 07:43:54,597][INFO ][node                     ] [Water Wizard] started
[2015-11-30 07:43:54,651][INFO ][gateway                  ] [Water Wizard] recovered [0] indices into cluster_state

Elasticsearch should now be running on port 9200.

To test, point your browser at http://localhost:9200.

You should see output similar to the following, with a status code of 200.

{
  "name" : "Dragon Man",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
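
If you prefer the command line, a couple of quick checks (assuming the default port mapping above):

# Root endpoint - same JSON as the browser test
curl -s http://localhost:9200/

# Cluster health - expect "green" or "yellow" on a single node
curl -s 'http://localhost:9200/_cluster/health?pretty'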

Step 3 - Logstash

  • Create a directory for your logstash configuration files.
  • mkdir -p logstash/conf.d/
  • Create an input logstash configuration file logstash/conf.d/input.conf with this content:
input {
    file {
        type => "test"
        path => [
            "/host/var/log/test.log"
            ]
    }
}
  • Create an output logstash configuration file logstash/conf.d/output.conf with this content:
output {
    elasticsearch {
        hosts => ["localhost"]
    }
}
  • For this use case, our Logstash Docker container will monitor a log file on the host machine.
  • Create a directory for log files that our Logstash Docker container will monitor.
  • mkdir -p var/log
  • Start the Logstash Docker container. It will watch the test.log file in the var/log directory we just created.
  • sudo docker run -d --name logstash -v $PWD/logstash/conf.d:/etc/logstash/conf.d:ro -v $PWD/var/log:/host/var/log:ro --net host logstash:2.1.0 logstash -f /etc/logstash/conf.d --debug
  • Note: We've used the --debug flag for this demonstration so we can check Logstash's start-up process and watch for any errors:
  • sudo docker logs -f logstash

To test your Logstash to Elasticsearch connection, run the following command in a new shell:

  • echo 101 > var/log/test.log
  • Now let's check Elasticsearch:
  • curl localhost:9200/logstash-*/_search?pretty=true
  • You should see JSON output containing a "_source" property whose "message" is 101 (a more targeted query is sketched after the output below).
{
  "took" : 42,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "logstash-2015.11.30",
      "_type" : "test",
      "_id" : "AVFWoIPrYVXvU5tGKQQM",
      "_score" : 1.0,
      "_source":{"message":"101","@version":"1","@timestamp":"2015-11-30T04:22:16.361Z","host":"tesla","path":"/host/var/log/test.log","type":"test"}
    } ]
  }
}
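
To query for the test entry specifically, a small hedged example using the query-string search API (the message field comes from the Logstash output shown above):

curl -s 'localhost:9200/logstash-*/_search?q=message:101&pretty'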

Step 4 - Kibana

  • sudo docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL=http://localhost:9200 --net host kibana:4.3.0

Kibana should now be running on port 5601.

To test, point your web browser at http://localhost:5601.

You should see the Kibana UI.

Click the green Create button to create the Kibana index, then click Discover in the main top menu to load the log entries from Elasticsearch.

We can now start to explore some more.

Let's start by setting Kibana to auto-refresh: click "Last 15 minutes" in the top right.

Then click "Auto-refresh" and set it to '5 seconds'.

Now let's create a new log entry, switch to the terminal command line and enter in:

echo 201 >> var/log/test.log

Back in Kibana, within 5 seconds or so, we should see the 201 log entry.
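
To generate a steady stream of entries and watch them appear with auto-refresh on, a small bash sketch (using the file path created earlier):

for i in $(seq 1 10); do
  echo "auto-refresh test $i" >> var/log/test.log
  sleep 1
done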

Summary

I hope this post gets you up and running quickly and painlessly - ready to explore more of the power of the ELK stack.

A good next step is to follow up with the online documentation.

Comments and feedback are very much welcome.

If I've overlooked anything, if you see room for improvement, or if you spot any errors, please let me know.

Thanks!

Docker Elasticsearch v2 Geolocation Sort and Filter

Overview

Now that Elasticsearch v2.0.0 is out, let's take a look at how quick and easy it is to explore its features using Docker.

This post will demonstrate two Elasticsearch Geolocation features using the official Docker image with Linux Ubuntu 14.04:

  1. Sorting by Distance
  2. Geo Distance Filter

All the following steps are commands that you will run from a terminal.

You can type them out or copy and paste them directly.

Setup

Clone the code repository and cd into it:

Start an Elasticsearch Docker container.

This command will download the official Elasticsearch v2.0.0 image from hub.docker.com and start a container instance:

  • sudo docker run -d --name elasticsearch -v "$PWD/esdata":/usr/share/elasticsearch/data -p 9200:9200 elasticsearch:2.0.0 -Des.network.bind_host=0.0.0.0

Create

Create a new Elasticsearch index

  • curl -XPUT http://localhost:9200/geo

Create a mapping on the new index for our data (an illustrative mapping sketch follows the commands below):

  • mapping_place.json
  • curl -XPUT localhost:9200/geo/_mapping/place --data-binary @mapping_place.json
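
The repository's mapping_place.json isn't reproduced here. Purely as an illustration (the field names are my assumption, not necessarily the repo's), a geo_point mapping for the place type might look like this, written out with a heredoc:

cat > my_mapping_place.json <<'EOF'
{
  "place": {
    "properties": {
      "name":     { "type": "string" },
      "location": { "type": "geo_point" }
    }
  }
}
EOF

It would then be applied with the same curl command as above.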

Geo Location Data

For this demonstration we'll be using the east coast of Australia, from Cairns to Hobart.

Australian East Coast

Add some 'places' to our index

Click the links to the place_ files below to view the JSON documents being sent to the server; an illustrative example of one such document follows the list.

  • place_brisbane.json
  • curl -XPOST http://localhost:9200/geo/place/ --data-binary @place_brisbane.json
  • place_sydney.json
  • curl -XPOST http://localhost:9200/geo/place/ --data-binary @place_sydney.json
  • place_melbourne.json
  • curl -XPOST http://localhost:9200/geo/place/ --data-binary @place_melbourne.json
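
As an illustration only (coordinates are approximate and field names assume the mapping sketch above, not the repo's actual files), one such document might look like:

cat > my_place_brisbane.json <<'EOF'
{
  "name": "Brisbane",
  "location": { "lat": -27.47, "lon": 153.03 }
}
EOF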

Search

Search all data (no sorting or filters):

  • curl -s -XGET http://localhost:9200/geo/place/_search
  • Results: Brisbane, Sydney, Melbourne

The next searches use JSON query objects that we POST to the server.

Click the links to the query_ files below to view the JSON query objects being sent to the server.

Search and sort by distance (a sketch of this kind of query follows the list):

  • Search from Cairns in Far North Queensland (Top of the map): query_distance_from_cairns.json
  • curl -s -XPOST http://localhost:9200/geo/place/_search --data-binary @query_distance_from_cairns.json
  • Results: Brisbane, Sydney, Melbourne
  • Search from Hobart in Tasmania (Bottom of the map): query_distance_from_hobart.json
  • curl -s -XPOST http://localhost:9200/geo/place/_search --data-binary @query_distance_from_hobart.json
  • Results: Melbourne, Sydney, Brisbane
  • Search from Canberra, the National Capital, which is nearer to Sydney: query_distance_from_canberra.json
  • curl -s -XPOST http://localhost:9200/geo/place/_search --data-binary @query_distance_from_canberra.json
  • Results: Sydney, Melbourne, Brisbane
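
The query_*.json files aren't reproduced here. As a rough illustration of the shape of a distance sort in Elasticsearch 2.x (the "location" field name and the Cairns coordinates are my assumptions, not the repo's files):

cat > my_sort_from_cairns.json <<'EOF'
{
  "sort": [
    {
      "_geo_distance": {
        "location": { "lat": -16.92, "lon": 145.77 },
        "order": "asc",
        "unit": "km"
      }
    }
  ]
}
EOF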

Search, sort and filter by distance (again, an illustrative query follows the list):

  • Search from Hobart in Tasmania (Bottom of the map) and limit the distance range to 1,500km: query_distance_from_hobart_filter_1500km.json
  • curl -s -XPOST http://localhost:9200/geo/place/_search --data-binary @query_distance_from_hobart_filter_1500km.json
  • Results: Melbourne, Sydney
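
Again as an illustration only (field name and Hobart coordinates are my assumptions), a sorted search limited to 1,500 km could combine a bool filter with the same distance sort:

cat > my_filter_from_hobart.json <<'EOF'
{
  "query": {
    "bool": {
      "filter": {
        "geo_distance": {
          "distance": "1500km",
          "location": { "lat": -42.88, "lon": 147.33 }
        }
      }
    }
  },
  "sort": [
    {
      "_geo_distance": {
        "location": { "lat": -42.88, "lon": 147.33 },
        "order": "asc",
        "unit": "km"
      }
    }
  ]
}
EOF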

Summary

Deploying Elasticsearch v2.0.0 with Docker and using Elasticsearch's Geolocation features is clean, simple and very powerful.

For complete details on the tools and options available, consult the official documentation.

I hope this post gets you up and running quickly and painlessly - ready to explore more of the power of Elasticsearch.

Comments and feedback are very much welcome.

If I've overlooked anything, if you see room for improvement, or if you spot any errors, please let me know.

Thanks!

Docker ELK Quickstart: Elasticsearch, Logstash, Kibana

Overview

In this post we'll look at a quick start 'how to' with Docker and the ELK stack.

The ELK stack is, of course, Elasticsearch, Logstash and Kibana.

We'll be using this stack to implement a centralized logging system.

This post draws inspiration from the official Elastic examples GitHub repo at github.com/elastic/examples.

TL;DR

In just a few commands, use Docker to get an ELK stack up and running.

Method

The following method has been tested, and is being used, with Ubuntu 14.04.

Installation

Install Docker

If you don't have docker installed already, go ahead and install it.

You can find instructions for your platform in the official installation docs.

Note: Once you have docker installed, we'll be using the command line for all install and setup steps below.

You will need to open a new shell window and type or copy and paste the following commands:

Install Elasticsearch

Install Logstash

Install Kibana
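
The pull commands aren't shown above; assuming the official images on hub.docker.com (the Summary below suggests pinning to specific versions for production), installation amounts to:

sudo docker pull elasticsearch
sudo docker pull logstash
sudo docker pull kibana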

Test Installation

Elasticsearch

  • Create a directory to hold the persisted index data.
  • mkdir esdata
  • Run a Docker container, bind-mount the esdata directory as a volume, and expose port 9200:
  • sudo docker run -d --name elasticsearch -v "$PWD/esdata":/usr/share/elasticsearch/data -p 9200:9200 elasticsearch
  • You should see some output like:
  • f624c4ea0f532b8022d948befdb81299e08c57e3e3e50c75976f66366ec423a8
  • Check the container is running OK:
  • sudo docker ps
  • You should see output similar to:
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                NAMES
81acb79909b2        elasticsearch       "/docker-entrypoint.   4 seconds ago       Up 4 seconds        0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch
  • We can also look at the start-up output from the Elasticsearch container.
  • sudo docker logs elasticsearch

You should see output like:

[2015-10-04 01:30:24,859][INFO ][node                     ] [Darkoth] version[1.7.2], pid[1], build[e43676b/2015-09-14T09:49:53Z]
[2015-10-04 01:30:24,860][INFO ][node                     ] [Darkoth] initializing ...
[2015-10-04 01:30:24,901][INFO ][plugins                  ] [Darkoth] loaded [], sites []
[2015-10-04 01:30:24,924][INFO ][env                      ] [Darkoth] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/crypt2)]], net usable_space [101.5gb], net total_space [114gb], types [ext4]
[2015-10-04 01:30:26,507][INFO ][node                     ] [Darkoth] initialized
[2015-10-04 01:30:26,507][INFO ][node                     ] [Darkoth] starting ...
[2015-10-04 01:30:26,546][INFO ][transport                ] [Darkoth] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.12:9300]}
[2015-10-04 01:30:26,556][INFO ][discovery                ] [Darkoth] elasticsearch/MoTbiQ-ZQ42H5KmQiSDznQ
[2015-10-04 01:30:30,320][INFO ][cluster.service          ] [Darkoth] new_master [Darkoth][MoTbiQ-ZQ42H5KmQiSDznQ][896132e24bd7][inet[/172.17.0.12:9300]], reason: zen-disco-join (elected_as_master)
[2015-10-04 01:30:30,348][INFO ][http                     ] [Darkoth] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.12:9200]}
[2015-10-04 01:30:30,348][INFO ][node                     ] [Darkoth] started
[2015-10-04 01:30:30,363][INFO ][gateway                  ] [Darkoth] recovered [0] indices into cluster_state

Elasticsearch should now be running on port 9200.

To test, point your browser at http://localhost:9200.

You should see output similar to the following, with a status code of 200.

{
  "status" : 200,
  "name" : "Letha",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

Logstash

  • Create a directory for your logstash configuration files.
  • mkdir -p logstash/conf.d/
  • Create an input logstash configuration file logstash/conf.d/input.conf with this content:
input {
    file {
        type => "test"
        path => [
            "/host/var/log/test.log"
            ]
    }
}
  • Create an output logstash configuration file logstash/conf.d/output.conf with this content:
output {
    elasticsearch {
        host => ["localhost"]
        protocol => "http"
    }
}
  • For this use case, our Logstash Docker container will monitor a log file on the host machine.
  • Create a directory for log files that our Logstash Docker container will monitor.
  • mkdir -p var/log
  • Start the Logstash Docker container. It will watch the test.log file in the var/log directory we just created.
  • sudo docker run -d --name logstash -v $PWD/logstash/conf.d:/etc/logstash/conf.d:ro -v $PWD/var/log:/host/var/log --net host logstash logstash -f /etc/logstash/conf.d --debug
  • We've used the --debug flag so we can check Logstash's start-up process and watch for any errors:
  • sudo docker logs -f logstash

To test your Logstash to Elasticsearch installation, run the following command in a new shell:

  • echo 101 > var/log/test.log
  • Now let's check Elasticsearch:
  • curl localhost:9200/logstash-*/_search?pretty=true
  • You should see JSON output containing a "_source" property whose "message" is 101.
{
  "took" : 36,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "logstash-2015.10.04",
      "_type" : "test",
      "_id" : "AVAwf1qDktIqdRR8yR3P",
      "_score" : 1.0,
      "_source":{"message":"101","@version":"1","@timestamp":"2015-10-04T01:37:43.554Z","host":"rudi-Lenovo-Y50-70","path":"/host/var/log/test.log","type":"test"}
    } ]
  }
}

Kibana

  • sudo docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL=http://localhost:9200 --net host kibana

Kibana should now be running on port 5601.

To test, point your web browser at http://localhost:5601.

You should see the Kibana UI.

Click the green Create button to create the Kibana index, then click Discover in the main top menu to load the log entries from Elasticsearch.

We can now start to explore some more.

Let's start by setting Kibana to auto-refresh: click "Last 15 minutes" in the top right.

Then click "Auto-refresh" and set it to '5 seconds'.

Now let's create a new log entry, switch to the terminal command line and enter in:

echo 201 >> var/log/test.log

Back in Kibana, within 5 seconds or so, we should see the 201 log entry.

Summary

In my experience, once you know how to use and are comfortable with Docker, building and deploying an ELK stack is very quick and easy.

The steps described above are solid, but I'd personally tweak them for production use.

For example:

  1. Docker has other features you can use, like linking containers, so you don't need to expose ports publicly (see the sketch after this list).
  2. Using the '--net host' flag might also not be the best option for production.
  3. Pin the Docker images you use to a specific version, e.g. sudo docker pull logstash:1.5.2
  4. If you have many machines, run your own private Docker registry so that your deployments are faster.
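
As a sketch of point 1 (not the exact setup used in this post), Kibana could reach Elasticsearch through a link instead of host networking, so only Kibana's port is published:

sudo docker run -d --name elasticsearch -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch
sudo docker run -d --name kibana --link elasticsearch:elasticsearch \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 -p 5601:5601 kibana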

Anyway, I hope this post gets you up and running quickly and painlessly - ready to explore more of the power of the ELK stack.

Comments and feedback are very much welcome.

If I've overlooked anything, if you see room for improvement, or if you spot any errors, please let me know.

Thanks!

Couchbase Node.js SDK Callbacks to Promises

Overview

Where possible, I prefer Promises over callbacks when coding in Node.js.

The Couchbase Node.js SDK 2.0 documentation examples use the callback style of coding.

Here is an approach to "roll your own Couchbase Node.js Promises".

The following Node.js code uses Q to convert those code examples to Promises.

These examples use dependency injection, passing dependencies into the functions rather than referencing global variables.

The final example will use ramda.js and curry the functions.

Setup

'use strict';

var assert = require('assert'),
  couchbase = require('couchbase'),
  Q = require('q'),
  R = require('ramda');

var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('default');

Code - Callbacks to Promises

// myBucket.insert('document_name', {some:'value'}, function(err, res) {
// console.log('Success!');
// });

function bucketInsert(bucket, documentName, documentValue) {

  assert.equal(typeof bucket, 'object', 'argument bucket must be an object');
  assert.equal(typeof documentName, 'string', 'argument documentName must be a string');
  assert.equal(typeof documentValue, 'object', 'argument documentValue must be an object');

  return Q.ninvoke(bucket, 'insert', documentName, documentValue).then(function (res) {

    if (!res.cas) {
      throw new Error('Bucket Insert Failed.');
    }

    return res;

  });

}

// myBucket.get('document_name', function(err, res) {
// console.log('Value: ', res.value);
// });

function bucketGet(bucket, documentName) {

  assert.equal(typeof bucket, 'object', 'argument bucket must be an object');
  assert.equal(typeof documentName, 'string', 'argument documentName must be a string');

  return Q.ninvoke(bucket, 'get', documentName).then(function (res) {

    if (!res.cas || !res.value) {
      throw new Error('Bucket Get Failed.');
    }

    return res.value;

  });

}

Usage

/* Insert */
bucketInsert(bucket, 'document_name', {some: 'value'})
  .then(function (res) {
    console.log('Success!');
  })
  .catch(function (err) {
    console.log(err);
  });

// outputs:
// Success!


/* Get */
bucketGet(bucket, 'document_name')
  .then(function (res) {
    console.log(res);
  })
  .catch(function (err) {
    console.log(err);
  });

// outputs
// { some: 'value' }


/* Chained Insert and Get */
bucketInsert(bucket, 'document_name2', {some: 'value2'})
  .then(function () {
    return bucketGet(bucket, 'document_name2')
  })
  .then(function (res) {
    console.log(res);
  })
  .catch(function (err) {
    console.log(err);
  });

// outputs
// { some: 'value2' }


/* Curry and Chain Insert and Get */
var curriedInsert = R.curry(bucketInsert),
  insert = curriedInsert(bucket);

var curriedGet = R.curry(bucketGet),
  get = curriedGet(bucket);

insert('document_name3', {some: 'value3'})
  .then(function () {
    return get('document_name3')
  })
  .then(function (res) {
    console.log(res);
  })
  .catch(function (err) {
    console.log(err);
  });

// outputs
// { some: 'value3' }  

Summary

This approach works well for me and I'm quite pleased with it.

The next post will demonstrate Unit testing this code with Mocha, Sinon.js and Chai.js.

I hope this helps; comments and feedback are very much welcome.

If I've overlooked anything, if you see room for improvement, or if you spot any errors, please let me know.

Thanks!

Restricting Docker Container Access with Iptables

Overview

The Docker networking documentation shows how to easily restrict external container access to a single IP using Iptables.

Docker’s forward rules permit all external source IPs by default. 

To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER filter chain. 

For example, to restrict external access such that only source IP 8.8.8.8 can access the containers, the following rule could be added:

$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP

What's the best approach for allowing, say, two external IP addresses?

Adding a second negated rule for another IP won't work: the first negation already matches (and drops) traffic from the second IP, as illustrated below.
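
For example (ext_if is the placeholder interface name from the Docker docs), stacking two negated DROP rules locks everyone out: whichever address a packet comes from, it still matches the other negated rule and is dropped.

# Does NOT allow both 8.8.8.8 and 8.8.4.4 - each still hits the other's negated DROP rule
iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
iptables -I DOCKER -i ext_if ! -s 8.8.4.4 -j DROP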

One approach is to create a PRE_DOCKER chain, ending in DROP or REJECT, that is evaluated before the DOCKER chain.

This post details a method using this approach that has been working well for my use case on Ubuntu 14.04 with Docker v1.7.

Method

To begin, create a bash script; let's name it docker_iptables.sh (mode 0755, executable).

This script will grant:

  • Public access to HTTP and HTTPS only.
  • Two admin IPs access to all running containers.
  • Two LAN IPs access to all running containers.

Everything else will be rejected.

This script must run after docker starts or restarts.

#!/usr/bin/env bash

# Usage:
# timeout 10 docker_iptables.sh
#
# Use the timeout utility (GNU coreutils) to prevent an infinite loop (see below)

if [ ! -x /usr/bin/docker ]; then
    exit
fi

# Check if the PRE_DOCKER chain exists, if it does there's an existing reference to it.
iptables -C FORWARD -o docker0 -j PRE_DOCKER

if [ $? -eq 0 ]; then
    # Remove reference (will be re-added again later in this script)
    iptables -D FORWARD -o docker0 -j PRE_DOCKER
    # Flush all existing rules
    iptables -F PRE_DOCKER
else
    # Create the PRE_DOCKER chain
    iptables -N PRE_DOCKER
fi

# Default action
iptables -I PRE_DOCKER -j DROP

# Docker Containers Public Admin access (insert your IPs here)
iptables -I PRE_DOCKER -i eth0 -s 192.184.41.144 -j ACCEPT
iptables -I PRE_DOCKER -i eth0 -s 120.29.76.14 -j ACCEPT

# Docker Containers Restricted LAN Access (insert your LAN IP range or multiple IPs here)
iptables -I PRE_DOCKER -i eth1 -s 192.168.1.101 -j ACCEPT
iptables -I PRE_DOCKER -i eth1 -s 192.168.1.102 -j ACCEPT

# Docker internal use
iptables -I PRE_DOCKER -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -I PRE_DOCKER -i docker0 ! -o docker0 -j ACCEPT
iptables -I PRE_DOCKER -m state --state RELATED -j ACCEPT
iptables -I PRE_DOCKER -i docker0 -o docker0 -j ACCEPT

# Docker container named www-nginx public access policy
WWW_IP_CMD="/usr/bin/docker inspect --format={{.NetworkSettings.IPAddress}} www-nginx"
WWW_IP=$($WWW_IP_CMD)

# Double check, wait for docker socket (upstart docker.conf already does this)
while [ ! -e "/var/run/docker.sock" ]; do echo "Waiting for /var/run/docker.sock..."; sleep 1; done

# Wait for docker web server container IP
while [ -z "$WWW_IP" ]; do echo "Waiting for www-nginx IP..."; WWW_IP=$($WWW_IP_CMD); done

# Insert web server container filter rules
iptables -I PRE_DOCKER -i eth0 -p tcp -d $WWW_IP --dport 80  -j ACCEPT
iptables -I PRE_DOCKER -i eth0 -p tcp -d $WWW_IP --dport 443 -j ACCEPT

# Finally, insert the jump to the PRE_DOCKER chain ahead of the DOCKER chain in the FORWARD chain.
iptables -I FORWARD -o docker0 -j PRE_DOCKER

It's important to note that the suggested usage wraps this script in the timeout command (from GNU coreutils).

This prevents the script from hanging indefinitely if something goes wrong while waiting for Docker.

Note: Because Docker forwards to port 80, we wait for the web container's IP address and use it as the destination in the rule.

Without the destination IP, a single port-80 rule would grant access to every container that forwards port 80.

Usage

Using Upstart on Ubuntu 14.04, add this job as /etc/init/docker-iptables.conf.

Whenever Docker starts or restarts, it will run the docker_iptables.sh script.

Note: Adjust the script path and email address to suit your environment.

description "Start Docker Custom Iptables Rules"
author "your@email.com"

start on started docker

script
    /usr/bin/timeout 30 /home/ubuntu/docker_iptables.sh
end script
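
Once Docker is up and the job has run, you can verify the chain is in place with something like:

sudo iptables -L PRE_DOCKER -n -v --line-numbers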

Summary

So far this approach has been working well for me.

I'm using Firehol to manage my firewall policies.

I've updated the firehol start/stop/restart init script to restart docker.

On Docker's start/restart Upstart event, docker_iptables.sh (the script above) will run.

This three-stage process:

  • Reloads the iptables firewall policies.
  • Restarts all docker containers.
  • Then filters access to the running docker containers.

I hope this helps; comments and feedback are very much welcome.

If I've overlooked anything, if you see room for improvement, or if you spot any errors, please let me know.

Thanks!