Wednesday, April 23, 2014

Mininet integrated hybrid OpenFlow testbed

Figure 1: Hybrid Programmable Forwarding Planes
Integrated hybrid OpenFlow combines OpenFlow and existing distributed routing protocols to deliver robust software defined networking (SDN) solutions. Performance optimizing hybrid OpenFlow controller describes how the sFlow and OpenFlow standards combine to deliver visibility and control to address challenges including: DDoS mitigation, ECMP load balancing, LAG load balancing, and large flow marking.

A number of vendors support sFlow and integrated hybrid OpenFlow today; examples described on this blog include Alcatel-Lucent, Brocade, and Hewlett-Packard. However, building a physical testbed is expensive and time-consuming. This article describes how to build an sFlow and hybrid OpenFlow testbed using the free Mininet network emulation software. The testbed emulates ECMP leaf and spine data center fabrics and provides a platform for experimenting with analytics-driven feedback control using the sFlow-RT hybrid OpenFlow controller.

First, build an Ubuntu 13.04 / 13.10 virtual machine, then follow the Mininet installation instructions - Option 3: Installation from Packages.
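On Ubuntu the package-based install reduces to a single command, and the mn self-test provides a quick sanity check (a minimal sketch, assuming the standard Ubuntu repositories):
sudo apt-get install mininet
sudo mn --test pingall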

Next, install an Apache web server:
sudo apt-get install apache2
Install the sFlow-RT integrated hybrid OpenFlow controller, either on the Mininet virtual machine, or on a different system (Java 1.6+ is required to run sFlow-RT):
wget http://www.inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
Copy the leafandspine.py script from the sflow-rt/extras directory to the Mininet virtual machine.
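For example, if sFlow-RT is installed on a separate host, the script can be copied to the Mininet VM with scp (the mininet user name is an assumption; substitute the account and address for your setup):
scp sflow-rt/extras/leafandspine.py mininet@10.0.0.61: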

The following options are available:
./leafandspine.py --help
Usage: leafandspine.py [options]

Options:
  -h, --help            show this help message and exit
  --spine=SPINE         number of spine switches, default=2
  --leaf=LEAF           number of leaf switches, default=2
  --fanout=FANOUT       number of hosts per leaf switch, default=2
  --collector=COLLECTOR
                        IP address of sFlow collector, default=127.0.0.1
  --controller=CONTROLLER
                        IP address of controller, default=127.0.0.1
  --topofile=TOPOFILE   file used to write out topology, default topology.txt
Figure 2 shows a simple leaf and spine topology consisting of four hosts and four switches:
Figure 2: Simple leaf and spine topology
The following command builds the topology and specifies a remote host (10.0.0.162) running sFlow-RT as the hybrid OpenFlow controller and sFlow collector:
sudo ./leafandspine.py --collector 10.0.0.162 --controller 10.0.0.162 --topofile /var/www/topology.json
Note: All the links are configured to 10Mbit/s and the sFlow sampling rate is set to 1-in-10. Scaling the link speed and sampling rate down by the same factor of 1,000 makes these settings equivalent to a 10Gbit/s network with a 1-in-10,000 sampling rate - see Large flow detection.

The network topology is written to /var/www/topology.json, making it accessible through HTTP. For example, the following command retrieves the topology from the Mininet VM (10.0.0.61):
curl http://10.0.0.61/topology.json
{"nodes": {"s3": {"ports": {"s3-eth4": {"ifindex": "392", "name": "s3-eth4"}, "s3-eth3": {"ifindex": "390", "name": "s3-eth3"}, "s3-eth2": {"ifindex": "402", "name": "s3-eth2"}, "s3-eth1": {"ifindex": "398", "name": "s3-eth1"}}, "tag": "edge", "name": "s3", "agent": "10.0.0.61", "dpid": "0000000000000003"}, "s2": {"ports": {"s2-eth1": {"ifindex": "403", "name": "s2-eth1"}, "s2-eth2": {"ifindex": "405", "name": "s2-eth2"}}, "name": "s2", "agent": "10.0.0.61", "dpid": "0000000000000002"}, "s1": {"ports": {"s1-eth1": {"ifindex": "399", "name": "s1-eth1"}, "s1-eth2": {"ifindex": "401", "name": "s1-eth2"}}, "name": "s1", "agent": "10.0.0.61", "dpid": "0000000000000001"}, "s4": {"ports": {"s4-eth2": {"ifindex": "404", "name": "s4-eth2"}, "s4-eth3": {"ifindex": "394", "name": "s4-eth3"}, "s4-eth1": {"ifindex": "400", "name": "s4-eth1"}, "s4-eth4": {"ifindex": "396", "name": "s4-eth4"}}, "tag": "edge", "name": "s4", "agent": "10.0.0.61", "dpid": "0000000000000004"}}, "links": {"s2-eth1": {"ifindex1": "403", "ifindex2": "402", "node1": "s2", "node2": "s3", "port2": "s3-eth2", "port1": "s2-eth1"}, "s2-eth2": {"ifindex1": "405", "ifindex2": "404", "node1": "s2", "node2": "s4", "port2": "s4-eth2", "port1": "s2-eth2"}, "s1-eth1": {"ifindex1": "399", "ifindex2": "398", "node1": "s1", "node2": "s3", "port2": "s3-eth1", "port1": "s1-eth1"}, "s1-eth2": {"ifindex1": "401", "ifindex2": "400", "node1": "s1", "node2": "s4", "port2": "s4-eth1", "port1": "s1-eth2"}}}
Don't start sFlow-RT yet; it should only be started after Mininet has finished building the topology.

Verify connectivity before starting sFlow-RT:
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3 h4 
h2 -> h1 h3 h4 
h3 -> h1 h2 h4 
h4 -> h1 h2 h3 
*** Results: 0% dropped (12/12 received)
This test demonstrates that the Mininet topology has been constructed with a set of default forwarding rules that provide connectivity without the need for an OpenFlow controller - emulating the behavior of a network of integrated hybrid OpenFlow switches.
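The default rules installed by leafandspine.py can be inspected from the Mininet CLI using the Open vSwitch ovs-ofctl utility (a quick check; the exact rules depend on the version of the script, and the -O OpenFlow13 option may be needed if the switches are configured for OpenFlow 1.3):
mininet> sh ovs-ofctl dump-flows s1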

The following sFlow-RT script ecmp.js demonstrates ECMP load balancing in the emulated network:
// Define large flow as greater than 1Mbits/sec for 1 second or longer
var bytes_per_second = 1000000/8;
var duration_seconds = 1;

var top = JSON.parse(http("http://10.0.0.61/topology.json"));
setTopology(top);

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:2}
);

setEventHandler(function(evt) {
 var rec = topologyInterfaceToLink(evt.agent,evt.dataSource);
 if(!rec || !rec.linkname) return;
 var link = topologyLink(rec.linkname);
 logInfo(link.node1 + "-" + link.node2 + " " + evt.flowKey);
},['elephant']);
Modify the sFlow-RT start.sh script to include the following arguments:
RT_OPTS="-Dopenflow.start=yes -Dopenflow.flushRules=no"
SCRIPTS="-Dscript.file=ecmp.js"
Some notes on the script:
  1. The topology is retrieved by making an HTTP request to the Mininet VM (10.0.0.61)
  2. The 1Mbits/s threshold for large flows was selected because it represents 10% of the bandwidth of the 10Mbits/s links in the emulated network
  3. The event handler prints the link the flow traversed - identifying the link by the pair of switches it connects
Start sFlow-RT:
./start.sh
Now generate some large flows between h1 and h3 using the Mininet iperf command:
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.58 Mbits/sec', '10.8 Mbits/sec']
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.58 Mbits/sec', '10.8 Mbits/sec']
mininet> iperf h1 h3
*** Iperf: testing TCP bandwidth between h1 and h3
*** Results: ['9.59 Mbits/sec', '10.3 Mbits/sec']
The following results were logged by sFlow-RT:
2014-04-21T19:00:36-0700 INFO: ecmp.js started
2014-04-21T19:01:16-0700 INFO: s1-s3 10.0.0.1,10.0.1.1,49240,5001
2014-04-21T19:01:16-0700 INFO: s1-s4 10.0.0.1,10.0.1.1,49240,5001
2014-04-21T20:53:19-0700 INFO: s2-s4 10.0.0.1,10.0.1.1,49242,5001
2014-04-21T20:53:19-0700 INFO: s2-s3 10.0.0.1,10.0.1.1,49242,5001
2014-04-21T20:53:29-0700 INFO: s1-s3 10.0.0.1,10.0.1.1,49244,5001
2014-04-21T20:53:29-0700 INFO: s1-s4 10.0.0.1,10.0.1.1,49244,5001
The results demonstrate that the emulated leaf and spine network is performing equal cost multi-path (ECMP) forwarding - different flows between the same pair of hosts take different paths across the fabric (the logged links correspond to the paths shown in Figure 2).
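The flow measurements driving these log messages can also be queried directly from sFlow-RT's REST API, for example (assuming sFlow-RT's default REST port of 8008 on 10.0.0.162):
curl http://10.0.0.162:8008/metric/ALL/max:tcp/json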
Open vSwitch in Mininet is the key to this emulation, providing sFlow and multi-path forwarding support.
The following script implements the large flow marking example described in Performance optimizing hybrid OpenFlow controller:
include('extras/leafandspine-hybrid.js');

// Define large flow as greater than 1Mbits/sec for 1 second or longer
var bytes_per_second = 1000000/8;
var duration_seconds = 1;

var idx = 0;

var top = JSON.parse(http("http://10.0.0.61/topology.json"));
setTopology(top);

setFlow('tcp',
 {keys:'ipsource,ipdestination,tcpsourceport,tcpdestinationport',
  value:'bytes', t:duration_seconds}
);

setThreshold('elephant',
 {metric:'tcp', value:bytes_per_second, byFlow:true, timeout:4}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ds = evt.dataSource;
 if(topologyInterfaceToLink(agent,ds)) return;

 var port = ofInterfaceToPort(agent,ds);
 if(port) {
  var dpid = port.dpid;
  var id = "mark" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
    priority:1000, idleTimeout:2,
    match:{eth_type:2048, ip_proto:6, ip_src:k[0], ip_dst:k[1],
           tcp_src:k[2], tcp_dst:k[3]},
    actions:["set_ip_dscp=32","output=normal"]
  };
  setOfRule(dpid,id,rule);
 }
},['elephant']);

setFlow('tos0',{value:'bytes',filter:'ipdscp=0',t:1});
setFlow('tos128',{value:'bytes',filter:'ipdscp=32',t:1});
Some notes on the script:
  1. The topologyInterfaceToLink() function looks up link information based on agent and interface. The event handler uses this function to exclude inter-switch links, applying controls to ingress ports only.
  2. The OpenFlow rule priority for rules created by controller scripts must be greater than 500 to override the default rules created by leafandspine.py
  3. The tos0 and tos128 flow definitions have been added so that the re-marking can be seen.
Restart sFlow-RT with the new script and use a web browser to view the default tos0 and the re-marked tos128 traffic.
Figure 3: Marking large flows
Use iperf to generate traffic between h1 and h3 (the traffic needs to cross more than one switch so it can be observed before and after marking). The screen capture in Figure 3 demonstrates that the controller immediately detects and marks large flows.
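The same measurements can also be retrieved from sFlow-RT's REST API, for example (assuming the default REST port of 8008):
curl http://10.0.0.162:8008/metric/ALL/sum:tos0,sum:tos128/json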

Saturday, April 19, 2014

Configuring Mellanox switches

The following commands configure a Mellanox switch (10.0.0.252) to sample packets at 1-in-10000, poll counters every 30 seconds and send sFlow to an analyzer (10.0.0.50) using the default sFlow port 6343:
sflow enable
sflow agent-ip 10.0.0.252
sflow collector-ip 10.0.0.50
sflow sampling-rate 10000
sflow counter-poll-interval 30
For each interface:
interface ethernet 1/1 sflow enable
A previous posting discussed the selection of sampling rates. Additional information can be found on the Mellanox web site.

See Trying out sFlow for suggestions on getting started with sFlow monitoring and reporting.

Sunday, April 6, 2014

DDoS mitigation hybrid OpenFlow controller

Performance optimizing hybrid OpenFlow controller describes the growing split in the SDN controller market between edge controllers using virtual switches to deliver network virtualization (e.g. VMware NSX, Nuage Networks, Juniper Contrail, etc.) and fabric controllers that optimize performance of the physical network. The article provides an example using InMon's sFlow-RT controller to detect and mark large "elephant" flows so that they don't interfere with latency sensitive small "mice" flows.

This article describes an additional example, using the sFlow-RT controller to implement the ONS 2014 SDN Idol winning distributed denial of service (DDoS) mitigation solution - Real-time SDN Analytics for DDoS mitigation.
Figure 1: ISP/IX Market Segment
Figure 1 shows how service providers are ideally positioned to mitigate large flood attacks directed at their customers. The mitigation solution involves an SDN controller that rapidly detects and filters out attack traffic and protects the customer's Internet access.
Figure 2: Novel DDoS Mitigation solution using Real-time SDN Analytics
Figure 2 shows the elements of the control system in the SDN Idol demonstration. The addition of an embedded OpenFlow controller in sFlow-RT allows the entire DDoS mitigation system to be collapsed into the following sFlow-RT JavaScript application:
// Define large flow as greater than 100Mbits/sec for 1 second or longer
var bytes_per_second = 100000000/8;
var duration_seconds = 1;

var idx = 0;

setFlow('udp_target',
 {keys:'ipdestination,udpsourceport',
  value:'bytes', filter:'direction=egress', t:duration_seconds}
);

setThreshold('attack',
 {metric:'udp_target', value:bytes_per_second, byFlow:true, timeout:2, 
  filter:{ifspeed:[1000000000]}}
);

setEventHandler(function(evt) {
 var agent = evt.agent;
 var ports = ofInterfaceToPort(agent);
 if(ports && ports.length == 1) {
  var dpid = ports[0].dpid;
  var id = "drop" + idx++;
  var k = evt.flowKey.split(',');
  var rule= {
   priority:500, idleTimeout:20, hardTimeout:3600,
   match:{dl_type:2048, nw_proto:17, nw_dst:k[0], tp_src:k[1]},
   actions:[]
  };
  setOfRule(dpid,id,rule);
 }
},['attack']);
The following command line arguments load the script and enable OpenFlow on startup:
-Dscript.file=ddos.js -Dopenflow.start=yes
Some notes on the script:
  1. The 100Mbits/s threshold for large flows was selected because it represents 10% of the bandwidth of the 1Gigabit access ports on the network
  2. The setFlow filter specifies egress flows since the goal is to filter flows as they converge on customer-facing egress ports.
  3. The setThreshold filter specifies that thresholds are only applied to 1Gigabit access ports
  4. The OpenFlow rule generated in setEventHandler matches the destination address and source port associated with the DDoS attack and includes an idleTimeout of 20 seconds and a hardTimeout of 3600 seconds. This means that OpenFlow rules are automatically removed by the switch when the flow becomes idle, without any further intervention from the controller. If the attack is still in progress when the hardTimeout expires and the rule is removed, the attack will immediately be detected by the controller and a new rule will be installed.
The nping tool can be used to simulate DDoS attacks to test the application. The following script simulates a series of DNS reflection attacks:
while true; do nping --udp --source-port 53 --data-length 1400 --rate 2000 --count 700000 --no-capture --quiet 10.100.10.151; sleep 40; done
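While the simulated attack is running, the attack events raised by the threshold can be retrieved through sFlow-RT's REST API (a sketch, assuming the controller's default REST port of 8008 on the local host):
curl "http://localhost:8008/events/json?maxEvents=10&timeout=60"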
The following screen capture shows a basic test setup and results:
The chart at the top right of the screen capture shows attack traffic mixed with normal traffic arriving at the edge switch. The switch sends a continuous stream of measurements to the sFlow-RT controller running the DDoS mitigation application. When an attack is detected, an OpenFlow rule is pushed to the switch to block the traffic. The chart at the bottom right trends traffic on the protected customer link, showing that normal traffic is left untouched, but attack traffic is immediately detected and removed from the link.
Note: While this demonstration only used a single switch, the solution easily scales to hundreds of switches and thousands of edge ports.
This example, along with the large flow marking example, demonstrates that basing the sFlow-RT fabric controller on widely supported sFlow and OpenFlow standards and including an open, standards based, programming environment (JavaScript / ECMAScript) makes sFlow-RT an ideal platform for rapidly developing and deploying traffic engineering SDN applications in existing networks.

Thursday, April 3, 2014

Cisco, ACI, OpFlex and OpenDaylight

Cisco's April 2nd, 2014 announcement - Cisco and Industry Leaders Will Deliver Open, Multi-Vendor, Standards-Based Networks for Application Centric Infrastructure with OpFlex Protocol - has drawn mixed reviews from industry commentators.

In Cisco Submits Its (Very Different) SDN to IETF & OpenDaylight, SDNCentral editor Craig Matsumoto comments, "You know how, early on, people were all worried Cisco would 'take over' OpenDaylight? This is pretty much what they were talking about. It’s not a 'takeover,' literally, but OpFlex and the group policy concept steer OpenDaylight into a new direction that it otherwise wouldn’t have, one that Cisco happens to already have taken."

CIMI Corp. President, Tom Nolle, remarks "We’re all in business to make money, and if Cisco takes a position in a key market like SDN that seems to favor…well…doing nothing much different, you have to assume they have good reason to believe that their approach will resonate with buyers." - Cisco’s OpFlex: We Have Sound AND Fury

This article will look at some of the architectural issues raised by Cisco's announcement based on the following documents:
The diagram at the top of this article illustrates the architecture of Cisco's OpenDaylight proposal. The crack in the diagram was added to show the split between Cisco's proposed additions and the existing OpenDaylight components. It is clear that Cisco has simply bolted a new controller onto the side of the existing OpenDaylight controller: the ACI controller on the left has a native Southbound API (OpFlex) and treats the existing OpenDaylight controller as a Southbound plug-in (the arrow that connects the Affinity Decomposer module to the existing Affinity Service module). The existing OpenDaylight controller is marginalized, its role relegated to managing Traditional Network Elements, implying that next generation SDN revolves around devices that support the OpFlex protocol exclusively.

What is the function of Cisco's new controller? The press release states, "ACI is the first data center and cloud solution to offer full visibility and integrated management of both physical and virtual networked IT resources, accelerating application deployment through a dynamic, application-aware network policy model." However, if you look a little deeper - Cisco Application Policy Infrastructure Controller Data Center Policy Model - the underlying architecture of ACI is based on promise theory.

Promise theory underpins many data center orchestration tools, including: CFEngine, Puppet, Chef, Ansible, and Salt. These automation tools are an important part of the DevOps toolkit - providing a way to rapidly reconfigure resources and roll out new services. Does it make sense to create a new controller and protocol just to manage network equipment?
The DevOps movement has revolutionized the data center by breaking down silos, merging application development and IT operations to increase the speed and agility of service creation and delivery.
An alternative to creating a new, network-only, orchestration system is to open up network equipment to the orchestration tools that DevOps teams already use. The article, Dell, Cumulus, Open Source, Open Standards, and Unified Management, discusses the trend toward open, Linux-based, switch platforms. An important benefit of this move to open networking platforms is that the same tools that are used today to manage Linux servers can also be used to manage the configuration of the network - for example, Cumulus Architecture currently lists Puppet, Chef and CFEngine as options for network automation. Eliminating the need to deploy and coordinate separate network and system orchestration tools significantly reduces operational complexity and increases agility, breaking down the network silo and facilitating the creation of a NetDevOps team.
While it might be argued that Cisco's ACI/OpFlex is better at configuring network devices than existing DevOps tools, the fierce competition and rapid pace of innovation in the DevOps space are likely to outpace Cisco's efforts to standardize the OpFlex protocol in the IETF.
Finally, it is not clear how serious Cisco is about its ACI architecture. Cisco Nexus 3000 series switches are based on standard merchant silicon hardware and support open, multi-vendor, standards and APIs, including: sFlow, OpenFlow, Linux Containers, XML, JSON, Puppet, Chef, Python, and OpenStack. Nexus 9000 series switches, the focus of Cisco's ACI strategy, include custom Cisco hardware to support ACI but also contain merchant silicon, allowing the switches to be run in either ACI or NX-OS mode. The value of open platforms is compelling and I expect Cisco's customers will favor NX-OS mode on the Nexus 9000 series and push Cisco to provide feature parity with the Nexus 3000 series.