Tuesday, September 27, 2016

Docker 1.12 swarm mode elastic load balancing


Docker Built-In Orchestration Ready For Production: Docker 1.12 Goes GA describes the native swarm mode feature that integrates cluster management, virtual networking, and policy-based deployment of services.

This article will demonstrate how real-time streaming telemetry can be used to construct an elastic load balancing solution that dynamically adjusts service capacity to match changing demand.

Getting started with swarm mode describes the steps to configure a swarm cluster. For example, the following command, issued on any of the manager nodes, deploys a web service on the cluster:
docker service create --replicas 2 -p 80:80 --name apache httpd:2.4
And the following command raises the number of containers in the service pool from 2 to 4:
docker service scale apache=4
Asynchronous Docker metrics describes how sFlow telemetry provides the real-time visibility required for elastic load balancing. The diagram shows how streaming telemetry allows the sFlow-RT controller to determine the load on the service pool so that it can use the Docker service API to automatically increase or decrease the size of the pool as demand changes. Elastic load balancing of the service pools ensures consistent service levels by adding additional resources if demand increases. In addition, efficiency is improved by releasing resources when demand drops so that they can be used by other services. Finally, global visibility into all resources and services makes it possible to load balance between services, reducing service pools for non-critical services to release resources during peak demand.

The first step is to install and configure Host sFlow agents on each of the nodes in the Docker swarm cluster. The following /etc/hsflowd.conf file configures Host sFlow to monitor Docker and send sFlow telemetry to a designated collector (in this case 10.0.0.162):
sflow {
  # sample 1-in-400 packets and export counters every 10 seconds
  sampling = 400
  polling = 10
  collector { ip = 10.0.0.162 }
  # monitor Docker containers and traffic on the Docker bridges
  docker { }
  pcap { dev = docker0 }
  pcap { dev = docker_gwbridge }
}
Note: The configuration file is identical for all nodes in the cluster, making it easy to automate the installation and configuration of sFlow monitoring using Puppet, Chef, Ansible, etc.

Verify that the sFlow measurements are arriving at the collector node (10.0.0.162) using sflowtool:
docker run -p 6343:6343/udp sflow/sflowtool
The following elb.js script implements elastic load balancer functionality using the sFlow-RT real-time analytics engine:
var api = "https://10.0.0.134:2376";
var certs = '/tls/';
var service = 'apache';

var replicas_min = 1;      // minimum pool size
var replicas_max = 10;     // maximum pool size
var util_min = 0.5;        // scale down below this average CPU utilization
var util_max = 1;          // scale up above this average CPU utilization
var bytes_min = 50000;     // scale down below this network rate (bytes per second)
var bytes_max = 100000;    // scale up above this network rate (bytes per second)
var enabled = false;       // autoscaling is off until enabled via the REST API

function getInfo(name) {
  var info = null;
  var url = api+'/services/'+name;
  try { info = JSON.parse(http2({url:url, certs:certs}).body); }
  catch(e) { logWarning("cannot get " + url + " error=" + e); }
  return info;
}

function setReplicas(name,count,info) {
  var version = info["Version"]["Index"];
  var spec = info["Spec"];
  spec["Mode"]["Replicated"]["Replicas"]=count;
  var url = api+'/v1.24/services/'+info["ID"]+'/update?version='+version;
  try {
    http2({
      url:url, certs:certs, method:'POST',
      headers:{'Content-Type':'application/json'},
      body:JSON.stringify(spec)
    });
  }
  catch(e) { logWarning("cannot post to " + url + " error=" + e); }
  logInfo(service+" set replicas="+count);
}

var hostpat = service+'\\..*'; // match container instance names, e.g. apache.1.<taskid>
setIntervalHandler(function() {
  var info = getInfo(service);
  if(!info) return;

  var replicas = info["Spec"]["Mode"]["Replicated"]["Replicas"];
  if(!replicas) {
    logWarning("no active members for service=" + service);
    return;
  }

  var res = metric(
    'ALL', 'avg:vir_cpu_utilization,avg:vir_bytes_in,avg:vir_bytes_out',
    {'vir_host_name':[hostpat],'vir_cpu_state':['running']}
  );

  var n = res[0].metricN;

  // we aren't seeing all the containers (yet)
  if(replicas !== n) return;

  var util = res[0].metricValue;
  var bytes = res[1].metricValue + res[2].metricValue;

  if(!enabled) return;

  // load balance
  if(replicas < replicas_max && (util > util_max || bytes > bytes_max)) {
    setReplicas(service,replicas+1,info);
  }
  else if(replicas > replicas_min && util < util_min && bytes < bytes_min) {
    setReplicas(service,replicas-1,info);
  }
},2);

setHttpHandler(function(req) {
  enabled = req.query && req.query.state && req.query.state[0] === 'enabled';
  return enabled ? "enabled" : "disabled";
});
Some notes on the script:
  1. The setReplicas(name,count,info) function uses the Docker Remote API to implement functionality equivalent to the docker service scale name=count command shown earlier. The REST API is accessible at https://10.0.0.134:2376 in this example.
  2. The setIntervalHandler() function runs every 2 seconds, retrieving metrics for the service pool and scaling the number of replicas in the service up or down based on thresholds.
  3. The setHttpHandler() function exposes a simple REST API for enabling / disabling the load balancer functionality. The API can easily be extended to allow thresholds to be set, to report statistics, etc. (a sketch appears after the curl examples below).
  4. The certificates (key.pem, cert.pem, and ca.pem) required to authenticate API requests must be present in the /tls/ directory.
  5. The thresholds are set to unrealistically low values for the purpose of this demonstration.
  6. The script can easily be extended to load balance multiple services simultaneously (see the sketch following these notes).
  7. Writing Applications provides additional information on sFlow-RT scripting.
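For example, note 6 could be addressed by replacing the single-service interval handler with a loop over a table of services. The following untested sketch reuses the getInfo() and setReplicas() functions above; the services table and per-service thresholds are assumptions, not part of the original script:
var services = {
  'apache': {replicas_min:1, replicas_max:10, util_min:0.5, util_max:1,
             bytes_min:50000, bytes_max:100000}
  // additional services and their thresholds can be added here
};

setIntervalHandler(function() {
  if(!enabled) return;
  for(var name in services) {
    var t = services[name];
    var info = getInfo(name);
    if(!info) continue;
    var replicas = info["Spec"]["Mode"]["Replicated"]["Replicas"];
    if(!replicas) continue;
    var res = metric(
      'ALL', 'avg:vir_cpu_utilization,avg:vir_bytes_in,avg:vir_bytes_out',
      {'vir_host_name':[name+'\\..*'],'vir_cpu_state':['running']}
    );
    if(replicas !== res[0].metricN) continue; // not all containers visible yet
    var util = res[0].metricValue;
    var bytes = res[1].metricValue + res[2].metricValue;
    if(replicas < t.replicas_max && (util > t.util_max || bytes > t.bytes_max)) {
      setReplicas(name,replicas+1,info);
    }
    else if(replicas > t.replicas_min && util < t.util_min && bytes < t.bytes_min) {
      setReplicas(name,replicas-1,info);
    }
  }
},2);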
Run the controller:
docker run -v `pwd`/tls:/tls -v `pwd`/elb.js:/sflow-rt/elb.js \
 -e "RTPROP=-Dscript.file=elb.js" -p 8008:8008 -p 6343:6343/udp -d sflow/sflow-rt
The autoscaling functionality can be enabled:
curl "http://localhost:8008/script/elb.js/json?state=enabled"
and disabled:
curl "http://localhost:8008/script/elb.js/json?state=disabled"
using the REST API exposed by the script.
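As mentioned in note 3, the HTTP handler could also accept threshold settings. The following untested sketch replaces the handler in the script above; it assumes query values arrive as arrays of strings and that the handler's return value is serialized as JSON:
setHttpHandler(function(req) {
  if(req.query) {
    if(req.query.state) enabled = req.query.state[0] === 'enabled';
    if(req.query.util_max) util_max = parseFloat(req.query.util_max[0]);
    if(req.query.util_min) util_min = parseFloat(req.query.util_min[0]);
  }
  // report the current settings
  return {enabled:enabled, util_min:util_min, util_max:util_max};
});
A threshold could then be adjusted with a command like:
curl "http://localhost:8008/script/elb.js/json?util_max=0.8"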
The chart above shows the results of a simple test to demonstrate the elastic load balancer function. First, ab, the Apache HTTP server benchmarking tool, was used to generate load on the apache service running under Docker swarm:
ab -rt 60 -n 300000 -c 4 http://10.0.0.134/
Next, the test was repeated with the elastic load balancer enabled. The chart clearly shows that the load balancer is keeping the average network load on each container under control.
2016-09-24T00:57:10+0000 INFO: Listening, sFlow port 6343
2016-09-24T00:57:10+0000 INFO: Listening, HTTP port 8008
2016-09-24T00:57:10+0000 INFO: elb.js started
2016-09-24T01:00:17+0000 INFO: apache set replicas=2
2016-09-24T01:00:23+0000 INFO: apache set replicas=3
2016-09-24T01:00:27+0000 INFO: apache set replicas=4
2016-09-24T01:00:33+0000 INFO: apache set replicas=5
2016-09-24T01:00:41+0000 INFO: apache set replicas=6
2016-09-24T01:00:47+0000 INFO: apache set replicas=7
2016-09-24T01:00:59+0000 INFO: apache set replicas=8
2016-09-24T01:01:29+0000 INFO: apache set replicas=7
2016-09-24T01:01:33+0000 INFO: apache set replicas=6
2016-09-24T01:01:35+0000 INFO: apache set replicas=5
2016-09-24T01:01:39+0000 INFO: apache set replicas=4
2016-09-24T01:01:43+0000 INFO: apache set replicas=3
2016-09-24T01:01:45+0000 INFO: apache set replicas=2
2016-09-24T01:01:47+0000 INFO: apache set replicas=1
The sFlow-RT log shows that containers are added to the apache service to handle the increased load and removed once demand decreases.

This example relied on a small subset of the information available from the sFlow telemetry stream. In addition to container resource utilization, the Host sFlow agent exports an extensive set of metrics from the nodes in the Docker swarm cluster. If the nodes are virtual machines running in a public or private cloud, the metrics can be used to perform elastic load balancing of the virtual machine pool making up the cluster, increasing the cluster size if demand increases and reducing cluster size when demand decreases. In addition, poorly performing instances can be detected and removed from the cluster (see Stop thief! for an example).
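As a minimal illustration, the same interval handler pattern could watch node level metrics. The load_one metric is a standard Host sFlow metric; the threshold and the scaling action in this sketch are placeholders, not part of the demonstration above:
// minimal sketch: watch the average load across the cluster nodes
setIntervalHandler(function() {
  var res = metric('ALL','avg:load_one'); // standard Host sFlow load average
  if(res[0].metricValue && res[0].metricValue > 2) {
    // placeholder: invoke a cloud provider API to add a node to the cluster
    logInfo("cluster load high, additional node required");
  }
},10);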
The sFlow agents also efficiently report on traffic flowing within and between microservices running on the swarm cluster. For example, the following command:
docker run -p 6343:6343/udp -p 8008:8008 -d sflow/top-flows
launches the top-flows application to show an up-to-the-second view of active flows in the network.

Comprehensive real-time analytics is critical to effectively managing agile container-based infrastructure. Open source Host sFlow agents provide a lightweight method of instrumenting the infrastructure that unifies network and system monitoring to deliver a full set of standard metrics to performance management applications.

Monday, September 26, 2016

Asynchronous Docker metrics

Docker allows large numbers of lightweight containers to be started and stopped within seconds, creating an agile infrastructure that can rapidly adapt to changing requirements. However, the rapidly changing population of containers poses a challenge to traditional monitoring methods, which struggle to keep pace with the changes. For example, periodic polling takes time to detect new containers and can miss short-lived containers entirely.

This article describes how the latest version of the Host sFlow agent is able to track the performance of a rapidly changing population of Docker containers and export a real-time stream of standard sFlow metrics.
The diagram above shows the life cycle status events associated with a container. The Docker Remote API provides a set of methods that allow the Host sFlow agent to communicate with the Docker daemon to list containers and receive asynchronous container status events. The Host sFlow agent uses the events to keep track of running containers and periodically exports CPU, memory, network and disk performance counters for each container.

The diagram at the beginning of this article shows the sequence of messages, going from top to bottom, required to track a container. The Host sFlow agent first registers for container lifecycle events before asking for all the currently running containers. Later, when a new container is started, Docker immediately sends an event to the Host sFlow agent, which requests additional information (such as the container process identifier - PID) that it can use to retrieve performance counters from the operating system. Initial counter values are retrieved and exported along with container identity information as an sFlow counters message and a polling task for the new container is initiated. Container counters are periodically retrieved and exported while the container continues to run (2 polling intervals are shown in the diagram). When the Host sFlow agent receives an event from Docker indicating that the container is being stopped, it retrieves the final values of the performance counters, exports a final sFlow message, and removes the polling task for the container.
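The pattern can be summarized in pseudocode. The following is a simplified JavaScript-style sketch of the sequence described above, not the actual hsflowd implementation (which is written in C); onContainerEvent(), listContainers(), inspectContainer(), and exportCounters() are hypothetical helpers, and a Node-style timer API is assumed:
// simplified sketch of asynchronously triggered periodic counter export
var pollers = {};

function startPolling(id) {
  var info = inspectContainer(id);  // e.g. retrieve the container PID
  exportCounters(info);             // initial counter sample
  pollers[id] = setInterval(function() { exportCounters(info); }, 10000);
}

function stopPolling(id) {
  exportCounters(inspectContainer(id)); // final counter sample
  clearInterval(pollers[id]);           // remove the polling task
  delete pollers[id];
}

onContainerEvent(function(evt) {    // asynchronous status events from Docker
  if('start' === evt.status) startPolling(evt.id);
  else if('die' === evt.status) stopPolling(evt.id);
});

listContainers().forEach(startPolling); // pick up containers already running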

This method of asynchronously triggered periodic counter export allows an sFlow collector to accurately track rapidly changing container populations in large scale deployments. The diagram only shows the sequence of events relating to monitoring a single container. Docker network visibility demonstration shows the full range of network traffic and system performance information being exported.

Detailed real-time visibility is essential for fully realizing the benefits of agile container infrastructure, providing the feedback needed to track and automatically optimize the performance of large scale microservice deployments.

Saturday, September 17, 2016

Triggered remote packet capture using filtered ERSPAN

Packet brokers are typically deployed as a dedicated network connecting network taps and SPAN/mirror ports to packet analysis applications such as Wireshark, Snort, etc.

Traditional hierarchical network designs were relatively straightforward to monitor using a packet broker since traffic flowed through a small number of core switches and so a small number of taps provided network wide visibility. The move to leaf and spine fabric architectures eliminates the performance bottleneck of core switches to deliver low latency and high bandwidth connectivity to data center applications. However, traditional packet brokers are less attractive since spreading traffic across many links with equal cost multi-path (ECMP) routing means that many more links need to be monitored.

This article will explore how the remote Selective Spanning capability in Cumulus Linux 3.0, combined with industry standard sFlow telemetry embedded in commodity switch hardware, provides a cost effective alternative to traditional packet brokers.

Cumulus Linux uses iptables rules to specify packet capture sessions. For example, the following rule forwards packets with source IP 20.0.0.2 and destination IP 20.0.1.2 to a packet analyzer on host 20.0.2.2:
-A FORWARD --in-interface swp+ -s 20.0.0.2 -d 20.0.1.2 -j ERSPAN --src-ip 90.0.0.1 --dst-ip 20.0.2.2
REST API for Cumulus Linux ACLs describes a simple Python wrapper that exposes iptables through a RESTful API. For example, the following command remotely installs the capture rule on switch 10.0.0.233:
curl -H "Content-Type:application/json" -X PUT --data \
  '["[iptables]","-A FORWARD --in-interface swp+ -s 20.0.0.2 -d 20.0.1.2 -j ERSPAN --src-ip 90.0.0.1 --dst-ip 20.0.2.2"]' \
  http://10.0.0.233:8080/acl/capture1
The following command deletes the rule:
curl -X DELETE http://10.0.0.233:8080/acl/capture1
Selective Spanning makes it possible to turn every switch and port in the network into a capture device. However, it is important to carefully select which traffic to capture since the aggregate bandwidth of an ECMP fabric is measured in Terabits per second - far more traffic than can be handled by typical packet analyzers.
SDN packet broker likens the role that sFlow plays in steering the capture network to that of a finderscope, the small wide-angle telescope used to provide an overview of the sky and guide a telescope to its target. The article goes on to describe some of the benefits of combining sFlow analytics with selective packet capture:
  1. Offload The capture network is a limited resource, both in terms of bandwidth and in the number of flows that can be simultaneously captured. Offloading as many tasks as possible to the sFlow analyzer frees up resources in the capture network, allowing the resources to be applied where they add most value. A good sFlow analyzer delivers data center wide visibility that can address many traffic accounting, capacity planning and traffic engineering use cases. In addition, many of the packet analysis tools (such as Wireshark) can accept sFlow data directly, further reducing the cases where a full capture is required.
  2. Context Data center wide monitoring using sFlow provides context for triggering packet capture. For example, sFlow monitoring might show an unusual packet size distribution for traffic to a particular service. Queries to the sFlow analyzer can identify the set of switches and ports involved in providing the service and identify a set of attributes that can be used to selectively capture the traffic.
  3. DDoS Certain classes of event such as DDoS flood attacks may be too large for the capture network to handle. DDoS mitigation with Cumulus Linux frees the capture network to focus on identifying more serious application layer attacks.
The diagram at the top of this article shows an example of using sFlow to target selective capture of traffic to blacklisted addresses. In this example sFlow-RT is used to perform real-time sFlow analytics. The following emerging.js script instructs sFlow-RT to download the Emerging Threats blacklist and identify any local hosts that are communicating with addresses in the blacklist. A full packet capture is triggered when a potentially compromised host is detected:
var wireshark = '10.0.0.70';
var idx=0;
function capture(localIP,remoteIP,agent) {
  var acl = [
    '[iptables]',
    '# emerging threat capture',
    '-A FORWARD --in-interface swp+ -s '+localIP+' -d '+remoteIP 
    +' -j ERSPAN --src-ip '+agent+' --dst-ip '+wireshark,
    '-A FORWARD --in-interface swp+ -s '+remoteIP+' -d '+localIP 
    +' -j ERSPAN --src-ip '+agent+' --dst-ip '+wireshark
  ];
  var id = 'emrg'+idx++;
  logWarning('capturing '+localIP+' rule '+id+' on '+agent);
  http('http://'+agent+':8080/acl/'+id,
        'PUT','application/json',JSON.stringify(acl));
}

var groups = {};
function loadGroup(name,url) {
  try {
    var res, cidrs = [], str = http(url);
    var reg = /^(\d{1,3}\.){3}\d{1,3}(\/\d{1,2})?$/mg;
    while((res = reg.exec(str)) != null) cidrs.push(res[0]);
    if(cidrs.length > 0) groups[name]=cidrs;
  } catch(e) {
    logWarning("failed to load " + url + ", " + e);
  }
}

loadGroup('compromised',
  'https://rules.emergingthreats.net/blockrules/compromised-ips.txt');
loadGroup('block',
  'https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt');
setGroups('emerging',groups);

setFlow('emerging',
  {keys:'ipsource,ipdestination,group:ipdestination:emerging',value:'frames',
   log:true,flowStart:true});

setFlowHandler(function(rec) {
  var [localIP,remoteIP,group] = rec.flowKeys.split(',');
  try { capture(localIP,remoteIP,rec.agent); }
  catch(e) { logWarning("failed to capture " + e); }
});
Some comments about the script:
  1. The script uses sFlow telemetry to identify the potentially compromised host and the location (agent) observing the traffic.
  2. The location information is required so that the capture rule can be installed on a switch that is in the traffic path.
  3. The application has been simplified for clarity. In production, the blacklist information would be periodically updated and the capture sessions would be tracked so that they can be deleted when they are no longer required (see the sketch following these notes).
  4. Writing Applications provides an introduction to sFlow-RT's API.
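For example, note 3 might be addressed along the following lines. This is an untested sketch: the session table, the timeout value, and the use of sFlow-RT's http() function to issue a DELETE against the ACL REST API are assumptions; loadGroup() and setGroups() are the functions defined in the script above:
// hypothetical extension: refresh blacklists and age out capture sessions
var sessions = {};                     // rule id -> {agent:..., created:...}
var session_timeout = 60 * 60 * 1000;  // expire capture sessions after one hour

// record each session when it is created (called from capture())
function recordSession(id,agent) {
  sessions[id] = {agent:agent, created:Date.now()};
}

setIntervalHandler(function() {
  // periodically refresh the blacklists
  loadGroup('compromised',
    'https://rules.emergingthreats.net/blockrules/compromised-ips.txt');
  loadGroup('block',
    'https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt');
  setGroups('emerging',groups);

  // delete expired capture sessions using the ACL REST API
  var now = Date.now();
  for(var id in sessions) {
    if(now - sessions[id].created > session_timeout) {
      try { http('http://'+sessions[id].agent+':8080/acl/'+id,'DELETE'); }
      catch(e) { logWarning("failed to delete rule " + id + ", " + e); }
      delete sessions[id];
    }
  }
},3600);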
Configure sFlow on the Cumulus switches to stream telemetry to a host running Docker. Next, log into the host and run the following command in a directory containing the emerging.js script:
docker run -v "$PWD/emerging.js":/sflow-rt/emerging.js \
 -e "RTPROP=-Dscript.file=emerging.js" -p 6343:6343/udp sflow/sflow-rt
Note: Deploying analytics as a Docker service is a convenient method of packaging and running sFlow-RT. However, you can also download and install sFlow-RT as a package.

Once the software is running, you should see output similar to the following:
2016-09-17T22:19:16+0000 INFO: Listening, sFlow port 6343
2016-09-17T22:19:16+0000 INFO: Listening, HTTP port 8008
2016-09-17T22:19:16+0000 INFO: emerging.js started
2016-09-17T22:19:44+0000 WARNING: capturing 10.0.0.162 rule emrg0 on 10.0.0.253
The last line shows that traffic from host 10.0.0.162 to a blacklisted address has been detected and that a Selective Spanning session has been configured on switch 10.0.0.253 to capture packets and send them to the host running Wireshark (10.0.0.70) for further analysis.