Saturday, February 27, 2016

Open vSwitch version 2.5 released

The recent Open vSwitch version 2.5 release includes significant network virtualization enhancements:
   - sFlow agent now reports tunnel and MPLS structures.
   ...
   - Add experimental version of OVN.  OVN, the Open Virtual Network, is a
     system to support virtual network abstraction.  OVN complements the
     existing capabilities of OVS to add native support for virtual network
     abstractions, such as virtual L2 and L3 overlays and security groups.
The sFlow Tunnel Structures specification enhances visibility into network virtualization by capturing the encapsulation/decapsulation actions performed by tunnel end points. In many network virtualization implementations, VXLAN, GRE, and Geneve tunnels terminate in Open vSwitch, so the new feature has broad application.

The second related feature is the inclusion of the Open Virtual Network (OVN), providing a simple method of building virtual networks for OpenStack and Docker.

Earlier articles on this blog provide additional background.

Friday, February 26, 2016

Linux bridge, macvlan, ipvlan, adapters

The open source Host sFlow project added a feature to efficiently monitor traffic on Linux host network interfaces: network adapters, the Linux bridge, macvlan, ipvlan, etc. High-performance sFlow traffic monitoring is made possible by the random packet sampling support included in the Berkeley Packet Filter (BPF) implementation in recent Linux kernels (3.19 or later).

In addition to the new BPF capability, hsflowd has a couple of other ways to monitor traffic:
  • iptables, add a statistic rule to the iptables firewall to enable traffic monitoring (a sketch of such a rule follows this list)
  • Open vSwitch, which has built-in sFlow instrumentation that can be configured by hsflowd.
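As a rough illustration of the iptables method, the following rule is a minimal sketch (the sampling probability and the NFLOG group number are placeholder values that would need to match the corresponding settings in the hsflowd configuration). It randomly samples roughly 1 in 256 packets arriving at the host and delivers them to an NFLOG group:
# sample ~1 in 256 inbound packets and deliver them to NFLOG group 5
sudo iptables -I INPUT -m statistic --mode random --probability 0.00390625 \
  -j NFLOG --nflog-group 5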
The BPF sampling mechanism is less complex to configure than iptables and can be used to monitor any Linux network device, including network adapters (e.g. eth0) and the Linux bridge (e.g. docker0). Monitoring a network adapter also provides visibility into the lightweight macvlan and ipvlan network virtualization technologies that are likely to become more prevalent in the Linux container ecosystem; see Using Docker with macvlan Interfaces.

The following commands build and install hsflowd on an Ubuntu 14.04 host:
sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install libpcap-dev
sudo apt-get install git
git clone https://github.com/sflow/host-sflow
cd host-sflow
make
sudo make install
Installing Host sFlow on a Linux server provides basic instructions for configuring the Host sFlow agent (hsflowd). To monitor traffic on the host, edit the /etc/hsflowd.conf file to configure the sFlow collector and enable packet sampling on eth0:
pcap { dev = eth0 }
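For context, a minimal /etc/hsflowd.conf combining the collector setting with packet sampling might look like the following sketch (the collector address 10.0.0.50 is a placeholder for your sFlow analyzer; substitute docker0 for eth0 to monitor the Docker bridge):
sflow {
  # placeholder: address of the sFlow analyzer receiving the telemetry
  collector { ip = 10.0.0.50 }
  # enable BPF packet sampling on eth0
  pcap { dev = eth0 }
}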
Now start the daemon:
sudo hsflowd start
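One simple way to confirm that the agent is exporting data is to watch for outgoing sFlow datagrams, which are sent to UDP port 6343 by default (this assumes the analyzer is reached via eth0):
# capture a few outgoing sFlow datagrams to verify that export is working
sudo tcpdump -i eth0 -c 5 udp port 6343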
At this point packets traversing eth0 will be sampled and sent to an sFlow analyzer as part of the standard sFlow telemetry stream. For example, using sFlow-RT with the top-flows application as the sFlow analyzer produces a table of the top traffic flows.
There are numerous server monitoring agents available in the open source community that export host statistics (CPU, memory, disk) similar to those exported by the Host sFlow agent. Host sFlow differs by also including network traffic visibility, using the same packet sampling mechanism supported by most data center switches. Significant advances are extending visibility into the physical network; for example, Broadcom BroadView Instrumentation tracks buffer utilization and microbursts that affect application performance.
A common standard for monitoring physical and virtual network and server infrastructure reduces operational complexity. Visibility into network activity is critical to understanding the performance of scale-out applications that drive large amounts of East-West traffic. Host sFlow, along with support for sFlow in the physical network, delivers scalable data center wide telemetry to SDN and DevOps tools so that they can better orchestrate the allocation of resources to maximize performance and reduce costs.

Sunday, February 21, 2016

CloudFlare DDoS Mitigation Pipeline

The Usenix Enigma 2016 talk from Marek Majkowski describes CloudFlare's automated DDoS mitigation solution. CloudFlare provides reverse proxy services for millions of web sites and their customers are frequently targets of DDoS attacks. The talk is well worth watching in its entirety to learn about their experiences.
Network switches stream standard sFlow data to CloudFlare's "Gatebot" Reactive Automation component, which analyzes the data to identify attack vectors. Berkeley Packet Filter (BPF) rules are constructed to target specific attacks and apply customer specific mitigation policies. The rules are automatically installed in iptables firewalls on the CloudFlare servers.
A chart from the talk shows that over a three month period CloudFlare's mitigation system handled between 30 and 300 attacks per day.
Attack volumes mitigated regularly hit 100 million packets per second and reach peaks of over 150 million packets per second. These large attacks can cause significant damage, and automated mitigation is critical to reducing their impact.

Elements of the CloudFlare solution are readily accessible to anyone interested in building DDoS mitigation solutions. Industry standard sFlow instrumentation is widely supported by switch vendors. Download sFlow-RT analytics software and combine real-time DDoS detection with business policies to automate mitigation actions. A number of DDoS mitigation examples described on this blog provide useful starting points for implementing an automated solution.
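To give a flavor of the detection side, the following sFlow-RT script is a minimal sketch, not CloudFlare's actual logic; the flow definition, threshold value, and event name are illustrative. It flags any destination IP address receiving more than 100,000 packets per second:
// define a flow metric: packets per second by destination address
setFlow('ddos_target', {keys:'ipdestination', value:'frames'});
// fire an event when any single destination exceeds 100K packets/s
setThreshold('attack', {metric:'ddos_target', value:100000, byFlow:true});
setEventHandler(function(evt) {
  // evt.flowKey contains the targeted address; a production system would
  // construct and install a mitigation rule at this point
  logInfo("DDoS target detected: " + evt.flowKey);
}, ['attack']);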

Monday, February 1, 2016

SignalFx

SignalFx is an example of a cloud-based analytics service. SignalFx provides a REST API for uploading metrics and a web portal that makes it simple to combine and trend data and to build and share dashboards.

This article describes a proof of concept demonstrating how SignalFx's cloud service can be used to cost effectively monitor large scale cloud infrastructure by leveraging standard sFlow instrumentation. SignalFx offers a free 14 day trial, making it easy to evaluate solutions based on this demonstration.

The measurement pipeline is straightforward: standard sFlow measurements from hosts, hypervisors, virtual machines, containers, load balancers, web servers and network switches stream to the sFlow-RT real-time analytics engine, and metrics are pushed from sFlow-RT to SignalFx using the REST API.

Over 40 vendors implement the sFlow standard, and compatible products are listed on sFlow.org. The open source Host sFlow agent exports standard sFlow metrics from hosts, virtual machines, containers, and local services. For additional background, the Velocity conference talk provides an introduction to sFlow and a case study from a large social networking site.

SignalFx's service is priced based on the number of data points that they need to store and they estimate a cost of $15 per host per month to record comprehensive host statistics at 10 second granularity. Collecting metrics from a cluster of 1,000 hosts would cost as much as $15,000 per month.
There are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of the metrics collection service. For example, in large scale cloud environments the metrics for individual members of a dynamic pool aren't necessarily worth trending, since virtual machines are frequently added and removed. Instead, sFlow-RT tracks all the members of the pool, calculates summary statistics for the pool, and logs the summary statistics. This pre-processing can significantly reduce storage requirements, reducing costs and increasing query performance. The sFlow-RT analytics software also calculates traffic flow metrics, hot/missed Memcache keys, and top URLs; exports events via syslog to Splunk, Logstash, etc.; and provides access to detailed metrics through its REST API.
The following steps were involved in setting up the proof of concept.

First, register for a free trial at SignalFx.com.

Download and install sFlow-RT.

Create a signalfx.js script in the sFlow-RT home directory with the following lines (use the token from your SignalFx account):
var url = "https://ingest.signalfx.com/v2/datapoint";
var token = "YOUR_APP_API_TOKEN";

setIntervalHandler(function() {
  // cluster-wide summary statistics to calculate (min, quartiles, max)
  var metrics = ['min:load_one','q1:load_one','med:load_one',
                 'q3:load_one','max:load_one'];
  // query sFlow-RT for the summary values across all Linux agents
  var vals = metric('ALL',metrics,{os_name:['linux']});
  var gauges = [];
  for each (var val in vals) {
     gauges.push({
       // SignalFx metric names are limited to alphanumerics and underscores
       metric: val.metricName.replace(/[^a-zA-Z0-9_]/g,"_"),
       dimensions:{cluster:"Linux"},
       value: val.metricValue
     });
  }
  var body = {"gauge":gauges};
  // POST the data points to the SignalFx ingest REST API
  var req = {
    url:url,
    operation:'post',
    headers: {
      'Content-Type':'application/json',
      'X-SF-TOKEN':token
    },
    body: JSON.stringify(body)
  };
  try { http2(req); }
  catch(e) { logWarning("metric upload failed " + e); }
} , 10); 
Add the following sFlow-RT configuration entry to load the script:
script.file=signalfx.js
Now start sFlow-RT. Cluster performance metrics describes the summary metrics that sFlow-RT can calculate. In this case, the minimum, maximum, and quartiles of the load average for the cluster are being calculated and pushed to SignalFx every 10 seconds.

Install Host sFlow agents on the physical or virtual machines in your cluster and direct them to send metrics to the sFlow-RT host. The installation steps can be easily automated using orchestration tools like Puppet, Chef, Ansible, etc.

Physical and virtual switches in the cluster can be configured to send sFlow to sFlow-RT in order to add traffic metrics to the mix, exporting metrics that characterize traffic between service tiers, etc. However, in public cloud environments, traffic flow information is typically not available from the network; the articles Amazon Elastic Compute Cloud (EC2) and Rackspace cloudservers describe how Host sFlow agents can be configured to monitor traffic between virtual machines in the cloud.
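As a sketch of how such a traffic metric could be defined, the following line (the flow name and keys are illustrative) could be added to the signalfx.js script to track byte rates between address pairs, which can then be summarized and exported like any other metric:
// illustrative flow definition: bytes by source/destination address pair
setFlow('tierbytes', {keys:'ipsource,ipdestination', value:'bytes'});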
Metrics should start appearing in SignalFx as soon as the Host sFlow agents are started.

In this example, sFlow-RT is exporting 5 metrics to summarize cluster performance, reducing the total cost of monitoring the 1,000 host cluster to less than $15 per month. Of course there are likely to be more metrics that you will want to track, but the ability to selectively log high value metrics provides a way to control costs and maximize benefits.

If you are managing physical infrastructure then sFlow provides a simple way to incorporate network telemetry. For example, add the following metrics to the script to summarize network health (a sketch follows the list):
  • max:ifinutilization
  • max:ifoututilization
  • sum:ifindiscards
  • sum:ifinerrors
  • sum:ifoutdiscards
  • sum:ifouterrors
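A minimal sketch of the change, extending the metrics array in the signalfx.js script shown earlier:
  // extend the cluster summary with network health statistics
  var metrics = ['min:load_one','q1:load_one','med:load_one',
                 'q3:load_one','max:load_one',
                 'max:ifinutilization','max:ifoututilization',
                 'sum:ifindiscards','sum:ifinerrors',
                 'sum:ifoutdiscards','sum:ifouterrors'];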
A network connecting 1,000 physical hosts would have considerably more than 1,000 switch ports, and summarizing the per-port statistics greatly reduces the cost of monitoring the network. For a catalog of network, host, and application metrics, see Metrics.