Wednesday, April 23, 2014

Mininet integrated hybrid OpenFlow testbed

Figure 1: Hybrid Programmable Forwarding Planes
Integrated hybrid OpenFlow combines OpenFlow and existing distributed routing protocols to deliver robust software defined networking (SDN) solutions. Performance optimizing hybrid OpenFlow controller describes how the sFlow and OpenFlow standards combine to deliver visibility and control to address challenges including: DDoS mitigation, ECMP load balancing, LAG load balancing, and large flow marking.

A number of vendors support sFlow and integrated hybrid OpenFlow today; examples described on this blog include Alcatel-Lucent, Brocade, and Hewlett-Packard. However, building a physical testbed is expensive and time-consuming. This article describes how to build an sFlow and hybrid OpenFlow testbed using free Mininet network emulation software. The testbed emulates ECMP leaf and spine data center fabrics and provides a platform for experimenting with analytics driven feedback control using the sFlow-RT hybrid OpenFlow controller.

First, build an Ubuntu 13.04 / 13.10 virtual machine, then follow the instructions for installing Mininet - Option 3: Installation from Packages.

Next, install an Apache web server:
sudo apt-get install apache2
Install the sFlow-RT integrated hybrid OpenFlow controller, either on the Mininet virtual machine, or on a different system (Java 1.6+ is required to run sFlow-RT):
wget http://www.inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
Copy the leafandspine.py script from the sflow-rt/extras directory to the Mininet virtual machine.

The following options are available:
./leafandspine.py --help
Usage: leafandspine.py [options]

Options:
  -h, --help            show this help message and exit
  --spine=SPINE         number of spine switches, default=2
  --leaf=LEAF           number of leaf switches, default=2
  --fanout=FANOUT       number of hosts per leaf switch, default=2
  --collector=COLLECTOR
                        IP address of sFlow collector, default=127.0.0.1
  --controller=CONTROLLER
                        IP address of controller, default=127.0.0.1
  --topofile=TOPOFILE   file used to write out topology, default topology.txt
Figure 2 shows a simple leaf and spine topology consisting of four hosts and four switches:
Figure 2: Simple leaf and spine topology
The following command builds the topology and specifies a remote host (10.0.0.162) running sFlow-RT as the hybrid OpenFlow controller and sFlow collector:
sudo ./leafandspine.py --collector 10.0.0.162 --controller 10.0.0.162 --topofile /var/www/topology.json
Note: All the links are configured to 10Mbit/s and the sFlow sampling rate is set to 1-in-10. These settings are equivalent to a 10Gbit/s network with a 1-in-10,000 sampling rate - see Large flow detection.
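The equivalence follows from keeping the sampling ratio proportional to link speed, so that a flow of 10% of link bandwidth generates samples at roughly the same rate whatever the link speed. The following fragment is a minimal illustration of the relationship implied by the two settings above (samplingRate() is a hypothetical helper written for this article, not part of the sFlow-RT API):

// Keep the sampling ratio proportional to link speed:
// 10Mbit/s -> 1-in-10, 10Gbit/s -> 1-in-10,000
function samplingRate(linkSpeedBitsPerSecond) {
  return Math.max(1, Math.round(linkSpeedBitsPerSecond / 1000000));
}

logInfo("10Mbit/s link: 1-in-" + samplingRate(10000000));
logInfo("10Gbit/s link: 1-in-" + samplingRate(10000000000));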

The network topology is written to /var/www/topology.json, making it accessible through HTTP. For example, the following command retrieves the topology from the Mininet VM (10.0.0.61):
curl http://10.0.0.61/topology.json
{"nodes": {"s3": {"ports": {"s3-eth4": {"ifindex": "392", "name": "s3-eth4"}, "s3-eth3": {"ifindex": "390", "name": "s3-eth3"}, "s3-eth2": {"ifindex": "402", "name": "s3-eth2"}, "s3-eth1": {"ifindex": "398", "name": "s3-eth1"}}, "tag": "edge", "name": "s3", "agent": "10.0.0.61", "dpid": "0000000000000003"}, "s2": {"ports": {"s2-eth1": {"ifindex": "403", "name": "s2-eth1"}, "s2-eth2": {"ifindex": "405", "name": "s2-eth2"}}, "name": "s2", "agent": "10.0.0.61", "dpid": "0000000000000002"}, "s1": {"ports": {"s1-eth1": {"ifindex": "399", "name": "s1-eth1"}, "s1-eth2": {"ifindex": "401", "name": "s1-eth2"}}, "name": "s1", "agent": "10.0.0.61", "dpid": "0000000000000001"}, "s4": {"ports": {"s4-eth2": {"ifindex": "404", "name": "s4-eth2"}, "s4-eth3": {"ifindex": "394", "name": "s4-eth3"}, "s4-eth1": {"ifindex": "400", "name": "s4-eth1"}, "s4-eth4": {"ifindex": "396", "name": "s4-eth4"}}, "tag": "edge", "name": "s4", "agent": "10.0.0.61", "dpid": "0000000000000004"}}, "links": {"s2-eth1": {"ifindex1": "403", "ifindex2": "402", "node1": "s2", "node2": "s3", "port2": "s3-eth2", "port1": "s2-eth1"}, "s2-eth2": {"ifindex1": "405", "ifindex2": "404", "node1": "s2", "node2": "s4", "port2": "s4-eth2", "port1": "s2-eth2"}, "s1-eth1": {"ifindex1": "399", "ifindex2": "398", "node1": "s1", "node2": "s3", "port2": "s3-eth1", "port1": "s1-eth1"}, "s1-eth2": {"ifindex1": "401", "ifindex2": "400", "node1": "s1", "node2": "s4", "port2": "s4-eth1", "port1": "s1-eth2"}}}
Don't start sFlow-RT yet; it should only be started after Mininet has finished building the topology.

Verify connectivity before starting sFlow-RT:
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3 h4 
h2 -> h1 h3 h4 
h3 -> h1 h2 h4 
h4 -> h1 h2 h3 
*** Results: 0% dropped (12/12 received)
This test demonstrates that the Mininet topology has been constructed with a set of default forwarding rules that provide connectivity without the need for an OpenFlow controller - emulating the behavior of a network of integrated hybrid OpenFlow switches.

The following sFlow-RT script ecmp.js demonstrates ECMP load balancing in the emulated network:
include('extras/json2.js');

// Define large flow as greater than 1Mbits/sec for 1 second or longer
var bytes_per_second = 1000000/8;
var duration_seconds = 1;

var top = JSON.parse(http("http://10.0.0.61/topology.json"));
setTopology(top);

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:2}
);

setEventHandler(function(evt) {
 var rec = topologyInterfaceToLink(evt.agent,evt.dataSource);
 if(!rec || !rec.link) return;
 var link = topologyLink(rec.link);
 logInfo(link.node1 + "-" + link.node2 + " " + evt.flowKey);
},['elephant']);
Modify the sFlow-RT start.sh script to include the following arguments:
RT_OPTS="-Dopenflow.controller.start=yes -Dopenflow.controller.flushRules=no"
SCRIPTS="-Dscript.file=ecmp.js"
Some notes on the script:
  1. The topology is retrieved by making an HTTP request to the Mininet VM (10.0.0.61)
  2. The 1Mbits/s threshold for large flows was selected because it represents 10% of the bandwidth of the 10Mbits/s links in the emulated network
  3. The event handler prints the link the flow traversed - identifying the link by the pair of switches it connects
Start sFlow-RT:
./start.sh
Now generate some large flows between h1 and h3 using the Mininet iperf command:
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.58 Mbits/sec', '10.8 Mbits/sec']
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.58 Mbits/sec', '10.8 Mbits/sec']
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.59 Mbits/sec', '10.3 Mbits/sec']
The following results were logged by sFlow-RT:
2014-04-21T19:00:36-0700 INFO: ecmp.js started
2014-04-21T19:01:16-0700 INFO: s1-s3 10.0.0.1,10.0.1.1,49240,5001
2014-04-21T19:01:16-0700 INFO: s1-s4 10.0.0.1,10.0.1.1,49240,5001
2014-04-21T20:53:19-0700 INFO: s2-s4 10.0.0.1,10.0.1.1,49242,5001
2014-04-21T20:53:19-0700 INFO: s2-s3 10.0.0.1,10.0.1.1,49242,5001
2014-04-21T20:53:29-0700 INFO: s1-s3 10.0.0.1,10.0.1.1,49244,5001
2014-04-21T20:53:29-0700 INFO: s1-s4 10.0.0.1,10.0.1.1,49244,5001
The results demonstrate that the emulated leaf and spine network is performing equal cost multi-path (ECMP) forwarding - different flows between the same pair of hosts take different paths across the fabric (the highlighted lines correspond to the paths shown in Figure 2).
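To quantify how evenly the fabric is spreading traffic, the event handler in ecmp.js can be extended to keep a per-link count of large flow events. The following variation is a minimal sketch (it replaces the setEventHandler call in ecmp.js and assumes the setFlow, setThreshold and setTopology calls shown above; the counts object is ordinary JavaScript state, not an sFlow-RT feature):

// Count large flow events per inter-switch link to check ECMP balance
var counts = {};

setEventHandler(function(evt) {
 var rec = topologyInterfaceToLink(evt.agent,evt.dataSource);
 if(!rec || !rec.link) return;
 var link = topologyLink(rec.link);
 var name = link.node1 + "-" + link.node2;
 counts[name] = (counts[name] || 0) + 1;
 logInfo(name + " count=" + counts[name] + " " + evt.flowKey);
},['elephant']);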
Open vSwitch in Mininet is the key to this emulation, providing both sFlow and multi-path forwarding support.
The following script implements the large flow marking example described in Performance optimizing hybrid OpenFlow controller:
include('extras/json2.js');

// Define large flow as greater than 1Mbits/sec for 1 second or longer
var bytes_per_second = 1000000/8;
var duration_seconds = 1;

var idx = 0;

var top = JSON.parse(http("http://10.0.0.61/topology.json"));
setTopology(top);

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:4}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ds = evt.dataSource;
 if(topologyInterfaceToLink(agent,ds)) return;

 var ports = ofInterfaceToPort(agent,ds);
 if(ports && ports.length == 1) {
  var dpid = ports[0].dpid;
  var id = "mark" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
    priority:1000, idleTimeout:2,
    match:{dl_type:2048, nw_proto:6, nw_src:k[0], nw_dst:k[1],
           tp_src:k[2], tp_dst:k[3]},
    actions:["set_nw_tos=128","output=normal"]
  };
  setOfRule(dpid,id,rule);
 }
},['elephant']);

setFlow('tos0',{value:'bytes',filter:'iptos=00000000',t:1});
setFlow('tos128',{value:'bytes',filter:'iptos=10000000',t:1});
Some notes on the script:
  1. The topologyInterfaceToLink() function looks up link information based on agent and interface. The event handler uses this function to exclude inter-switch links, applying controls to ingress ports only.
  2. The OpenFlow rule priority for rules created by controller scripts must be greater than 500 to override the default rules created by leafandspine.py
  3. The tos0 and tos128 flow definitions have been added so that the re-marking can be seen (see the sketch below).
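For example, a threshold on the tos128 flow can be used to log when re-marked traffic is actually observed, confirming that the OpenFlow rules are taking effect. The following is a minimal sketch (the 'remarked' threshold name and the 100Kbits/s trigger value are arbitrary illustrative choices; depending on whether multiple event handlers are supported, the handler below could instead be folded into the existing 'elephant' handler):

// Log an event whenever re-marked (tos=128) traffic exceeds 100Kbits/s
setThreshold('remarked', {metric:'tos128', value:100000/8, timeout:2});

setEventHandler(function(evt) {
 logInfo("re-marked traffic observed on " + evt.agent + " " + evt.dataSource);
},['remarked']);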
Restart sFlow-RT with the new script and use a web browser to view the default tos0 and the re-marked tos128 traffic.
Figure 3: Marking large flows
Use iperf to generate traffic between h1 and h3 (the traffic needs to cross more than one switch so it can be observed before and after marking). The screen capture in Figure 3 demonstrates that the controller immediately detects and marks large flows.

Saturday, April 19, 2014

Configuring Mellanox switches

The following commands configure a Mellanox switch (10.0.0.252) to sample packets at 1-in-10000, poll counters every 30 seconds and send sFlow to an analyzer (10.0.0.50) using the default sFlow port 6343:
sflow enable
sflow agent-ip 10.0.0.252
sflow collector-ip 10.0.0.50
sflow sampling-rate 10000
sflow counter-poll-interval 30
For each interface:
interface ethernet 1/1 sflow enable
A previous posting discussed the selection of sampling rates. Additional information can be found on the Mellanox web site.

See Trying out sFlow for suggestions on getting started with sFlow monitoring and reporting.

Sunday, April 6, 2014

DDoS mitigation hybrid OpenFlow controller

Performance optimizing hybrid OpenFlow controller describes the growing split in the SDN controller market between edge controllers using virtual switches to deliver network virtualization (e.g. VMware NSX, Nuage Networks, Juniper Contrail, etc.) and fabric controllers that optimize performance of the physical network. The article provides an example using InMon's sFlow-RT controller to detect and mark large "elephant" flows so that they don't interfere with latency sensitive small "mice" flows.

This article describes an additional example, using the sFlow-RT controller to implement the ONS 2014 SDN Idol winning distributed denial of service (DDoS) mitigation solution - Real-time SDN Analytics for DDoS mitigation.
Figure 1: ISP/IX Market Segment
Figure 1 shows how service providers are ideally positioned to mitigate large flood attacks directed at their customers. The mitigation solution involves an SDN controller that rapidly detects and filters out attack traffic and protects the customer's Internet access.
Figure 2: Novel DDoS Mitigation solution using Real-time SDN Analytics
Figure 2 shows the elements of the control system in the SDN Idol demonstration. The addition of an embedded OpenFlow controller in sFlow-RT allows the entire DDoS mitigation system to be collapsed into the following sFlow-RT JavaScript application:
// Define large flow as greater than 100Mbits/sec for 1 second or longer
var bytes_per_second = 100000000/8;
var duration_seconds = 1;

var idx = 0;

setFlow('udp_target',
 {keys:'ipdestination,udpsourceport',
  value:'bytes', filter:'direction=egress', t:duration_seconds}
);

setThreshold('attack',
 {metric:'udp_target', value:bytes_per_second, byFlow:true, timeout:2, 
  filter:{ifspeed:[1000000000]}}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ports = ofInterfaceToPort(agent);
 if(ports && ports.length == 1) {
  var dpid = ports[0].dpid;
  var id = "drop" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
   priority:500, idleTimeout:20, hardTimeout:3600,
   match:{dl_type:2048, nw_proto:17, nw_dst:k[0], tp_src:k[1]},
   actions:[]
  };
  setOfRule(dpid,id,rule);
 }
},['attack']);
The following command line arguments load the script and enable OpenFlow on startup:
-Dscript.file=ddos.js -Dopenflow.controller.start=yes
Some notes on the script:
  1. The 100Mbits/s threshold for large flows was selected because it represents 10% of the bandwidth of the 1Gigabit access ports on the network
  2. The setFlow filter specifies egress flows since the goal is to filter flows as they converge on customer-facing egress ports.
  3. The setThreshold filter specifies that thresholds are only applied to 1Gigabit access ports
  4. The OpenFlow rule generated in setEventHandler matches the destination address and source port associated with the DDoS attack and includes an idleTimeout of 20 seconds and a hardTimeout of 3600 seconds. This means that OpenFlow rules are automatically removed by the switch when the flow becomes idle without any further intervention from the controller. If the attack is still in progress when the hardTimeout expires and the rule is removed, the attack will immediately be detected by the controller and a new rule will be installed.
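As a refinement, a service provider would typically only install drop rules for destinations it is responsible for protecting. The following fragment is a minimal sketch of how the event handler above could check the flow key against a list of protected customer prefixes before acting (the protectedNets list and isProtected() helper are illustrative additions, not part of sFlow-RT):

// Only mitigate attacks directed at protected customer addresses
var protectedNets = ['10.100.10.', '10.100.11.'];  // illustrative /24 prefixes

function isProtected(ip) {
  for(var i = 0; i < protectedNets.length; i++) {
    if(ip.indexOf(protectedNets[i]) === 0) return true;
  }
  return false;
}

// inside the setEventHandler function, before building the drop rule:
// var k = evt.flowKey.split(',');
// if(!isProtected(k[0])) return;  // ignore traffic to unprotected destinations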
The nping tool can be used to simulate DDoS attacks to test the application. The following script simulates a series of DNS reflection attacks:
while true; do nping --udp --source-port 53 --data-length 1400 --rate 2000 --count 700000 --no-capture --quiet 10.100.10.151; sleep 40; done
The following screen capture shows a basic test setup and results:
The chart at the top right of the screen capture shows attack traffic mixed with normal traffic arriving at the edge switch. The switch sends a continuous stream of measurements to the sFlow-RT controller running the DDoS mitigation application. When an attack is detected, an OpenFlow rule is pushed to the switch to block the traffic. The chart at the bottom right trends traffic on the protected customer link, showing that normal traffic is left untouched, but attack traffic is immediately detected and removed from the link.
Note: While this demonstration only used a single switch, the solution easily scales to hundreds of switches and thousands of edge ports.
This example, along with the large flow marking example, demonstrates that basing the sFlow-RT fabric controller on widely supported sFlow and OpenFlow standards and including an open, standards based, programming environment (JavaScript / ECMAScript) makes sFlow-RT an ideal platform for rapidly developing and deploying traffic engineering SDN applications in existing networks.

Thursday, April 3, 2014

Cisco, ACI, OpFlex and OpenDaylight

Cisco's April 2nd, 2014 announcement - Cisco and Industry Leaders Will Deliver Open, Multi-Vendor, Standards-Based Networks for Application Centric Infrastructure with OpFlex Protocol - has drawn mixed reviews from industry commentators.

In, Cisco Submits Its (Very Different) SDN to IETF & OpenDaylight, SDNCentral editor Craig Matsumoto comments, "You know how, early on, people were all worried Cisco would 'take over' OpenDaylight? This is pretty much what they were talking about. It’s not a 'takeover,' literally, but OpFlex and the group policy concept steer OpenDaylight into a new direction that it otherwise wouldn’t have, one that Cisco happens to already have taken."

CIMI Corp. President, Tom Nolle, remarks "We’re all in business to make money, and if Cisco takes a position in a key market like SDN that seems to favor…well…doing nothing much different, you have to assume they have good reason to believe that their approach will resonate with buyers." - Cisco’s OpFlex: We Have Sound AND Fury

This article will look at some of the architectural issues raised by Cisco's announcement based on the following documents:
The diagram at the top of this article illustrates the architecture of Cisco's OpenDaylight proposal. The crack in the diagram was added to show the split between Cisco's proposed additions and existing OpenDaylight components. It is clear that Cisco has simply bolted a new controller to the side of the existing OpenDaylight controller: the ACI controller on the left has a native Southbound API (OpFlex) and treats the existing OpenDaylight controller as a Southbound plug-in (the arrow that connects the Affinity Decomposer module to the existing Affinity Service module). The existing OpenDaylight controller is marginalized by relegating its role to managing Traditional Network Elements, implying that next generation SDN revolves around devices that support the OpFlex protocol exclusively.

What is the function of Cisco's new controller? The press release states, "ACI is the first data center and cloud solution to offer full visibility and integrated management of both physical and virtual networked IT resources, accelerating application deployment through a dynamic, application-aware network policy model." However, if you look a little deeper - Cisco Application Policy Infrastructure Controller Data Center Policy Model - the underlying architecture of ACI is based on promise theory.

Promise theory underpins many data center orchestration tools, including: CFEngine, Puppet, Chef, Ansible, and Salt. These automation tools are an important part of the DevOps toolkit - providing a way to rapidly reconfigure resources and roll out new services. Does it make sense to create a new controller and protocol just to manage network equipment?
The DevOps movement has revolutionized the data center by breaking down silos, merging application development and IT operations to increase the speed and agility of service creation and delivery.
An alternative to creating a new, network only, orchestration system is to open up network equipment to the orchestration tools that DevOps teams already use. The article, Dell, Cumulus, Open Source, Open Standards, and Unified Management, discusses the trend toward open, Linux-based, switch platforms. An important benefit of this move to open networking platforms is that the same tools that are today used to manage Linux servers can also be used to manage the configuration of the network - for example, Cumulus Architecture currently lists Puppet, Chef and CFEngine as options for network automation. Eliminating the need to deploy and coordinate separate network and system orchestration tools significantly reduces operational complexity and increases agility; breaking down the network silo to facilitate the creation of a NetDevOps team.
While it might be argued that Cisco's ACI/OpFlex is better at configuring network devices than existing DevOps tools, the fierce competition and rapid pace of innovation in the DevOps space is likely to outpace Cisco's efforts to standardize the OpFlex protocol in the IETF.
Finally, it is not clear how serious Cisco is about its ACI architecture. Cisco Nexus 3000 series switches are based on standard merchant silicon hardware and support open, multi-vendor, standards and APIs, including: sFlow, OpenFlow, Linux Containers, XML, JSON, Puppet, Chef, Python, and OpenStack. Nexus 9000 series switches, the focus of Cisco's ACI strategy, include custom Cisco hardware to support ACI but also contain merchant silicon, allowing the switches to be run in either ACI or NX-OS mode. The value of open platforms is compelling and I expect Cisco's customers will favor NX-OS mode on the Nexus 9000 series and push Cisco to provide feature parity with the Nexus 3000 series.

Tuesday, March 25, 2014

Integrated hybrid OpenFlow control of HP switches

Performance optimizing hybrid OpenFlow controller describes InMon's sFlow-RT controller. The controller makes use of the sFlow and OpenFlow standards and is optimized for real-time traffic engineering applications that manage large traffic flows, including: DDoS mitigation, ECMP load balancing, LAG load balancing, large flow marking etc.

The previous article provided an example of large flow marking using an Alcatel-Lucent OmniSwitch 6900 switch. This article discusses how to replicate the example using HP Networking switches.

At present, the following HP switch models are listed as having OpenFlow support:
  • FlexFabric 12900 Switch Series
  • 12500 Switch Series
  • FlexFabric 11900 Switch Series
  • 8200 zl Switch Series
  • HP FlexFabric 5930 Switch Series
  • 5920 Switch Series
  • 5900 Switch Series
  • 5400 zl Switch Series
  • 3800 Switch Series
  • HP 3500 and 3500 yl Switch Series
  • 2920 Switch Series 
Note: All of the above HP switches (and many others) support the sFlow standard - see sFlow Products: Network Equipment @ sFlow.org.

HP's OpenFlow implementation supports integrated hybrid mode - provided the OpenFlow controller pushes a default low priority OpenFlow rule that matches all packets and applies the NORMAL action (i.e. instructs the switch to apply default switching / routing forwarding to the packets).
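For reference, the effect of that default rule can be sketched using the setOfRule() function shown in the controller scripts below. This is an illustration only - the wild card match syntax and the dpid value are assumptions - and in practice the openflow.controller.addNormal option described later in this article installs the rule automatically when a switch connects:

// Illustrative only: a lowest priority rule that matches all packets and
// applies the NORMAL (default switching / routing) action.
// The empty match and the dpid value are assumptions for this sketch.
var normalRule = {
  priority:0,               // lower than any rule pushed by the controller scripts
  match:{},                 // wild card - match all packets
  actions:["output=normal"]
};
setOfRule("0000000000000001", "default-normal", normalRule);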

In this example, an HP 5400 zl switch is used to run a slightly modified version of the sFlow-RT controller JavaScript application described in Performance optimizing hybrid OpenFlow controller:
// Define large flow as greater than 100Mbits/sec for 0.2 seconds or longer
var bytes_per_second = 100000000/8;
var duration_seconds = 0.2;

var idx = 0;

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', filter:'direction=ingress', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:2, 
  filter:{ifspeed:[1000000000]}}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ports = ofInterfaceToPort(agent);
 if(ports && ports.length == 1) {
  var dpid = ports[0].dpid;
  var id = "mark" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
   priority:500, idleTimeout:20,
   match:{dl_type:2048, nw_proto:6, nw_src:k[0], nw_dst:k[1],
          tp_src:k[2], tp_dst:k[3]},
   actions:["set_nw_tos=128","output=normal"]
  };
  setOfRule(dpid,id,rule);
 }
},['elephant']);
The idleTimeout was increased from 2 to 20 seconds since the switch has a default Probe Interval of 10 seconds (the interval between OpenFlow counter updates). If the OpenFlow rule idleTimeout is set shorter than the Probe Interval, the rule can appear idle between counter refreshes and the switch will remove it before the flow ends.
Mar. 27, 2014 Update: The HP Switch Software OpenFlow Administrator's Guide K/KA/WB 15.14, Appendix B Implementation Notes, describes the effect of the probe interval on idle timeouts and explains how to change the default (using openflow hardware statistics refresh rate), but warns that shorter refresh rates will increase CPU load on the switch.
The following command line arguments load the script and enable OpenFlow on startup:
-Dscript.file=ofmark.js \
-Dopenflow.controller.start=yes \
-Dopenflow.controller.addNormal=yes
The additional openflow.controller.addNormal argument instructs the sFlow-RT controller to install the wild card OpenFlow NORMAL rule automatically when the switch connects.

The screen capture at the top of the page shows a mixture of small flows "mice" and large flows "elephants" generated by a server connected to the HP 5406 zl switch. The graph at the bottom right shows the mixture of unmarked traffic being sent to the switch. The sFlow-RT controller receives a stream of sFlow measurements from the switch and detects each elephant flow in real time, immediately installing an OpenFlow rule that matches the flow and instructing the switch to mark the flow by setting the IP type of service bits. The traffic upstream of the switch is shown in the top right chart and it can be clearly seen that each elephant flow has been identified and marked, while the mice have been left unmarked.
Note: While this demonstration only used a single switch, the solution easily scales to hundreds of switches and thousands of edge ports.
The results from the HP switch are identical to those obtained with the Alcatel-Lucent switch, demonstrating the multi-vendor interoperability provided by the sFlow and OpenFlow standards. In addition, sFlow-RT's support for an open, standards based, programming environment (JavaScript / ECMAScript) makes it an ideal platform for rapidly developing and deploying traffic engineering SDN applications in existing networks.

Monday, March 24, 2014

Performance optimizing hybrid OpenFlow controller

The latest release of InMon's sFlow-RT controller adds integrated hybrid OpenFlow support - optimized for real-time traffic engineering applications that manage large traffic flows, including: DDoS mitigation, ECMP load balancing, LAG load balancing, large flow marking etc.

This article discusses the evolving architecture of software defined networking (SDN) and the role of analytics and traffic engineering. InMon's sFlow-RT controller is used to provide practical examples of the architecture.
Figure 1: Fabric: A Retrospective on Evolving SDN
The article, Fabric: A Retrospective on Evolving SDN by Martin Casado, Teemu Koponen, Scott Shenker, and Amin Tootoonchian, makes the case for a two tier software defined networking (SDN) architecture; comprising a smart edge and an efficient core. The article, Pragmatic software defined networking on this blog, examines how the edge is moving into virtual switches, with tunneling (VxLAN, NVGRE, GRE, STT) used to virtualize the network and decouple the edge from the core. As complex policy decisions move to the network edge, the core fabric is left with the task of efficiently managing physical resources in order to deliver low latency, high bandwidth connectivity between edge switches.

First generation SDN controllers were designed before the edge / core split became apparent and contain a complex set of features that limit their performance and scaleability. The sFlow-RT controller represents a second generation design, targeted specifically at the challenge of optimizing the performance of the physical network fabric.

Equal cost multi-path routing (ECMP), link aggregation (LAG) and multi-chassis link aggregation (MLAG) provide methods for spreading traffic across multiple paths in data center network fabrics in order to deliver the bandwidth needed to support big data and cloud workloads. However, these methods of load balancing do not handle all types of traffic well - in particular, long duration, high bandwidth "elephant" flows - see Of Mice and Elephants by Martin Casado and Justin Pettit with input from Bruce Davie, Teemu Koponen, Brad Hedlund, Scott Lowe, and T. Sridhar. The article SDN and large flows on this blog reviews traffic studies and discusses the benefits of actively managing large flows using an SDN controller.

The sFlow-RT controller makes use of two critical technologies that allow it to actively manage large flows in large scale production networks:
  1. sFlow - The sFlow standard is widely supported by data center switch vendors and merchant silicon and provides the scaleability and rapid detection of large flows needed for effective real-time control.
  2. Integrated hybrid OpenFlow - With integrated hybrid OpenFlow, the normal forwarding mechanisms in switches are the default for handling traffic (ECMP, LAG, MLAG), i.e. packets are never sent to the controller to make forwarding decisions. OpenFlow is used to selectively override default forwarding in order to improve performance. This approach is extremely scaleable and robust - there is minimal overhead associated with maintaining the OpenFlow connections between the controller and the switches, and the network will still continue to forward traffic if the controller fails. 
A demonstration using an older version of sFlow-RT with an embedded JavaScript DDoS mitigation application and an external OpenFlow controller (Open Daylight) won the SDN Idol competition at the 2014 Open Networking Summit - see Real-time SDN Analytics for DDoS mitigation for a video of the live demonstration and details of the competition.
Much of the complexity in developing SDN control applications involves sharing and distributing state between the analytics engine, application, and OpenFlow controller. Modifying the DDoS mitigation scripts presented on this blog to use sFlow-RT's embedded OpenFlow controller is straightforward and delivers the performance, scaleability and robustness needed to move the DDoS mitigation applications into production.
However, in this article, the interesting problem of detecting and marking elephant flows described in Of Mice and Elephants and Marking large flows will be used to demonstrate the sFlow-RT controller. Before looking at how sFlow-RT handles large flow marking, it is worth looking at existing approaches. The article, Elephant Flow Mitigation via Virtual-Physical Communication, on VMware's Network Virtualization blog describes how Open vSwitch was modified to include an "NSX Elephant agent" to detect large flows. When a large flow is detected, a notification is sent to the HP VAN SDN Controller that is running a special application that responds to the large flow notifications. The industry’s first east-west federated solution on HP's Networking blog further describes the solution, providing a demonstration in which large flows are given a different priority marking to small flows.

In contrast, the following sFlow-RT controller JavaScript application implements large flow marking:
// Define large flow as greater than 100Mbits/sec for 0.2 seconds or longer
var bytes_per_second = 100000000/8;
var duration_seconds = 0.2;

var idx = 0;

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', filter:'direction=ingress', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:2, 
  filter:{ifspeed:[1000000000]}}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ports = ofInterfaceToPort(agent);
 if(ports && ports.length == 1) {
  var dpid = ports[0].dpid;
  var id = "mark" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
   priority:500, idleTimeout:2,
   match:{dl_type:2048, nw_proto:6, nw_src:k[0], nw_dst:k[1],
          tp_src:k[2], tp_dst:k[3]},
   actions:["set_nw_tos=128","output=normal"]
  };
  setOfRule(dpid,id,rule);
 }
},['elephant']);
The following command line arguments load the script and enable OpenFlow on startup:
-Dscript.file=ofmark.js -Dopenflow.controller.start=yes
Some notes on the script:
  1. The 100Mbits/s threshold for large flows was selected because it represents 10% of the bandwidth of the 1Gigabit access ports on the network
  2. The setFlow filter specifies ingress flows since the goal is to mark flows as they enter the network
  3. The setThreshold filter specifies that thresholds are only applied to 1Gigabit access ports
  4. The OpenFlow rule generated in setEventHandler exactly matches the addresses and ports in each TCP connection and includes an idleTimeout of 2 seconds. This means that OpenFlow rules are automatically removed by the switch when the flow becomes idle without any further intervention from the controller.
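One practical refinement: if the 'elephant' threshold re-fires for a flow that already has a marking rule installed, the handler can skip the duplicate rather than re-pushing the rule. The following is a minimal sketch using ordinary JavaScript state (the marked object and the 30 second expiry are illustrative choices, not sFlow-RT features):

// Remember flows that already have a marking rule so duplicates are skipped
var marked = {};   // flowKey -> time the rule was installed (milliseconds)

// at the start of the setEventHandler function in ofmark.js:
// var now = (new Date()).getTime();
// if(marked[evt.flowKey] && now - marked[evt.flowKey] < 30000) return;
// marked[evt.flowKey] = now;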
The iperf tool can be used to generate a sequence of large flows to test the controller:
while true; do iperf -c 10.100.10.152 -i 20 -t 20; sleep 20; done
The following screen capture shows a basic test setup and results:
The screen capture shows a mixture of small flows "mice" and large flows "elephants" generated by a server connected to an edge switch (in this case an Alcatel-Lucent OmniSwitch 6900). The graph at the bottom right shows the mixture of unmarked traffic being sent to the switch. The sFlow-RT controller receives a stream of sFlow measurements from the switch and detects each elephant flow in real time, immediately installing an OpenFlow rule that matches the flow and instructing the switch to mark the flow by setting the IP type of service bits. The traffic upstream of the switch is shown in the top right chart and it can be clearly seen that each elephant flow has been identified and marked, while the mice have been left unmarked.
Note: While this demonstration only used a single switch, the solution easily scales to hundreds of switches and thousands of edge ports.
The sFlow-RT large flow marking application is much simpler than the large flow marking solution demonstrated by HP and VMware - which requires special purpose instrumentation embedded in a customized virtual switch and complex inter-controller communication. In contrast, the sFlow-RT controller leverages standard sFlow instrumentation built into commodity network devices to detect large flows in real time. Over 40 network vendors support the sFlow standard, including most of the NSX network gateway services partners and the HP 5930AF used in the HP / VMware demo. In addition, sFlow is widely available in virtual switches, including: Open vSwitch, Hyper-V Virtual Switch, IBM Distributed Virtual Switch 5000v, and HP FlexFabric Virtual Switch 5900v.

While it is possible to mark large flows based on a simple notification - as was demonstrated by HP/VMware - load balancing requires complete visibility into all links in the fabric. Using sFlow instrumentation embedded in the network devices, sFlow-RT has the real-time view of the utilization on all links and the information on all large flows and their paths across the fabric needed for load balancing.
Why sFlow? What about other measurement protocols like Cisco NetFlow, IPFIX, SNMP, or using OpenFlow counters? The sFlow standard is uniquely suited to real-time traffic engineering because it provides the low latency, scaleability, and flexibility needed to support SDN traffic engineering applications. For a detailed discussion of the requirements for analytics driven control, see Performance Aware SDN.
Basing the sFlow-RT fabric controller on widely supported sFlow and OpenFlow standards and including an open, standards based, programming environment (JavaScript / ECMAScript) makes sFlow-RT an ideal platform for rapidly developing and deploying traffic engineering SDN applications in existing networks.

Tuesday, March 4, 2014

ONS2014 SDN Idol finalist demonstrations


The video of the ONS 2014 SDN Idol final demonstrations has been released (the demonstrations were presented live at the Open Networking Summit on Monday, March 3, 02:30P - 04:00P).

The first demo presented is Real-time SDN Analytics for DDoS mitigation, a joint Brocade / InMon solution that combines real-time sFlow analytics and OpenFlow with SDN so that service providers can deliver large scale distributed denial of service (DDoS) attack mitigation services to their enterprise customers using their existing network infrastructure. DDoS mitigation is particularly topical: two weeks ago, a large attack was targeted at CloudFlare, DDoS Attack Hits 400 Gbit/s, Breaks Record, and this past week, Meetup.com was hit with a large persistent attack, Meetup Suffering Significant DDoS Attack, Taking It Offline For Days. The SDN DDoS mitigation solution can address these large attacks by leveraging the multi-Terabit, line-rate, monitoring and filtering capabilities in the network switches.
ONS2014 Announces Finalists for SDN Idol 2014 provides some sFlow-related trivia about the finalists.
An expert panel of judges selected the finalists:

The finalists were selected based on the following criteria:
Voting is open to ONS delegates and will take place during this evening's reception; the winner will be announced tomorrow.

March 5, 2014 Update: The Brocade/InMon DDoS mitigation as a service solution was voted SDN Idol winner - Brocade Crowned Winner of SDN Idol 2014 at Open Networking Summit 2014 - Real-Time SDN Analytics for DDoS Mitigation Bags Top Honors