Thursday, January 31, 2013

Down the rabbit hole

The article, Tunnels, describes the use of tunneling protocols such as GRE, NVGRE and VXLAN to create virtual networks in cloud environments. Tunneling is also an important tool in addressing challenges posed by IPv6 migration. However, while tunnels are an effective way to virtualize networking, they pose difficult challenges for application development and operations (DevOps) teams trying to optimize network performance and for network administrators who no longer have visibility into the applications running over the physical infrastructure.

This article uses sFlow-RT to demonstrate how sFlow monitoring, built into the physical and virtual network infrastructure, can be used to provide comprehensive visibility into tunneled traffic to application, operations and networking teams.

Note: The sFlow-RT analytics module is primarily intended to be used in automated performance aware software defined networking applications. However, it also provides a rudimentary web based user interface that can be used to demonstrate the visibility into tunneled traffic offered by the sFlow standard.

Application performance

One of the reasons that tunnels are popular for network virtualization is that they provide a useful abstraction that hides the underlying physical network topology. However, while this abstraction offers significant operational flexibility, lack of visibility into the physical network can result in poorly placed workloads, inefficient use of resources, and consequent performance problems (see NUMA).

In this example, consider the problem faced by a system manager troubleshooting poor throughput between two virtual machines: 10.0.201.1 and 10.0.201.2.
Figure 1: Tracing a tunneled flow
Figure 1 shows the Flows table with the following flow definition:
  1. Name: trace
  2. Keys: ipsource,ipdestination,ipprotocol
  3. Value: frames
  4. Filter: ipsource.1=10.0.201.1&ipdestination.1=10.0.201.2
These settings define a new flow called trace that matches traffic in which the inner (tenant) addresses are 10.0.201.1 and 10.0.201.2 and reports the outer IP addresses.

Note: ipsource.1 has a suffix of 1, indicating a reference to the inner address. It is possible to have nested tunnels such that the inner, inner ipsource address would be indicated as ipsource.2 etc.
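The same flow definition can also be created programmatically through sFlow-RT's REST API, using the pattern shown by the scripts later on this blog. The following is a minimal sketch, assuming sFlow-RT is listening on localhost:8008:
import requests
import json

# Sketch: create the 'trace' flow definition shown above via the REST API,
# assuming sFlow-RT is listening on localhost:8008.
trace = {'keys':'ipsource,ipdestination,ipprotocol',
         'value':'frames',
         'filter':'ipsource.1=10.0.201.1&ipdestination.1=10.0.201.2'}
r = requests.put('http://localhost:8008/flow/trace/json', data=json.dumps(trace))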

Figure 2: Outer addresses of a tunneled flow
Clicking on the flow in the Flows table brings up the chart shown in Figure 2. The chart shows a flow of approximately 15K packets per second and identifies the outer ipsource, ipdestination and ipprotocol as 10.0.0.151, 10.0.0.152 and 47 respectively.

Note: The IP protocol of 47 indicates that this is a GRE tunnel.
Figure 3: All data sources observing a flow
The sFlow-RT module has a REST/HTTP API and editing the URL modifies the query to reveal additional information. Figure 3 shows the effect of changing the query from metric to dump. The dump output shows each switch (Agent) and port (Data Source) that saw the traffic. In this case the traffic was seen traversing 2 virtual switches 10.0.0.28 and 10.0.0.20, and a physical switch 10.0.0.253.

Given the switch and port information, follow up queries could be constructed to look at utilizations, errors and discards on the links to see if there are network problems affecting the traffic.
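For example, a script could retrieve counter metrics for each agent and port reported in the dump. The following is a minimal sketch using the metric API shown in later examples; the port (data source) index and the counter metric names are illustrative assumptions:
import requests

# Sketch: check link health on a switch port identified in Figure 3. The data
# source index and the counter metric names (ifinutilization, ifindiscards,
# ifinerrors) are assumptions and may differ.
agent = '10.0.0.253'    # physical switch observed in the dump
datasource = '2'        # hypothetical port index on that switch
for metric in ['ifinutilization', 'ifindiscards', 'ifinerrors']:
  r = requests.get('http://localhost:8008/metric/' + agent + '/' + datasource + '.' + metric + '/json')
  print(r.json())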

Network performance

Tunnels hide the applications using the network from network managers, making it difficult to manage capacity, assess the impact of network performance problems and maintain security.

Consider the same example, but this time from a network manager's perspective, having identified a large flow from address 10.0.0.151 to 10.0.0.152.
Figure 4: Looking into a tunnel
Figure 4 shows the Flows table with the following definition:
  1. Name: inside
  2. Keys: ipsource.1,ipdestination.1,stack
  3. Value: frames
  4. Filter: ipsource=10.0.0.151&ipdestination=10.0.0.152
These settings define a new flow called inside that matches traffic in which the outer addresses are 10.0.0.151 and 10.0.0.152 and reports the inner (tenant) addresses.
Figure 5: Inner addresses in a tunneled flow
Again, clicking on the entry in the Flows table brings up the chart shown in Figure 5. The chart shows a flow of 15K packets per second and identifies the inner ipsource.1, ipdestination.1 and stack as 10.0.201.1, 10.0.201.2 and eth.ip.gre.ip.tcp respectively.

Given the inner IP addresses and stack, follow up queries can identify the TCP port, server names, application names, CPU loads etc. needed to understand the application demand driving traffic and determine possible actions (moving a virtual machine for example).

Automation

This was a trivial example; in practice, tunneled topologies are more complex and cloud data centers are far too large to be managed using manual processes like the one demonstrated here. sFlow-RT provides visibility into large, complex, multi-layered environments, including QinQ, TRILL, VXLAN, NVGRE and 6over4. Programmatic access to performance data through sFlow-RT's REST API allows cloud orchestration and software defined networking (SDN) controllers to incorporate real-time network, server and application visibility to automatically load balance and optimize workloads.

Monday, January 21, 2013

Memcache hot keys and cluster load balancing

Figure 1: Link saturation due to hot Memcache key (from Etsy Code as Craft blog)
The article, mctop – a tool for analyzing memcache get traffic, on Etsy's Code as Craft blog describes the problems that can occur in Memcache clusters when traffic associated with hot/popular keys results in network congestion and poor application performance.

Figure 1 from the Etsy article shows traffic on the link to a server hosting a hot key. The clipping on the chart as traffic reaches 960 Mbits/s indicates that the link is saturated. The chart also shows the drop in traffic when client code was modified to reduce accesses to the hot key.

When looking at access patterns in a Memcache cluster, it is important to understand how clients decide which member of the cluster to access for a particular key. Generally, a hash function is computed on the key, and based on the value of the hash, the client selects a server in the cluster, e.g.
index = hash(key) % len(cluster)   # map the key to a position in the server list
selected_server = cluster[index]   # every client independently picks the same server
value = selected_server.get(key)
Using a hash function randomly distributes keys across the cluster and allows clients to independently determine which server to access for a given key, resulting in a cache architecture that can be scaled out to very large sizes.
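To make the mechanism concrete, the following sketch shows how every client independently computes the same key to server mapping. The three server cluster is hypothetical, and CRC32 stands in for whatever hash function the client library actually uses:
import zlib

# Hypothetical three server cluster; any client with the same ordered server
# list and hash function maps a given key to the same server.
cluster = ['10.0.0.141:11211', '10.0.0.142:11211', '10.0.0.143:11211']

def server_for(key):
  index = zlib.crc32(key.encode()) % len(cluster)
  return cluster[index]

print(server_for('session.time'))   # every client selects the same server for this key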
Figure 2: Memcache traffic between clients and servers
Figure 2 illustrates how hot keys cause traffic patterns to concentrate on a single server in the cluster. Each line color represents traffic associated with a particular key. While there may be millions of keys in the cache, most keys are infrequently accessed. A much smaller number of frequently accessed keys - the hot keys - dominates when looking at traffic patterns. For example, traffic associated with a hot key, shown in red in the diagram, is driven by frequent access to the key by many clients.

There are interesting similarities between traffic patterns generated by hot keys and the challenge of Load balancing LAG/ECMP groups described in a previous article. The article describes how performance aware software defined networking (SDN) can be used to detect and redirect large traffic flows. The remainder of this article will examine whether SDN techniques can be applied to manage Memcache performance.

The first step is to include instrumentation by deploying Memcache servers with integrated support for the sFlow standard (just as switches supporting the sFlow standard are used to provide real-time measurements in the LAG/ECMP article). The sFlow-RT analytics engine is used to generate actionable metrics, for example to alert when traffic to a key exceeds a threshold.

The following Python script, memcache.py, generates notifications of hot keys and missed keys that can be used to detect performance problems and identify the particular keys and servers affected:
import requests
import json

hotkey = {'keys':'memcachekey', 'value':'bytes'}
missedkey = {'keys':'memcachekey', 'value':'requests', 'filter':'memcachestatus=NOT_FOUND'}
hotkeythreshold = {'metric':'hotkey', 'value':1000000}
missedkeythreshold = {'metric':'missedkey', 'value':20}

rt='http://localhost:8008'
r = requests.put(rt + '/flow/hotkey/json',data=json.dumps(hotkey))
r = requests.put(rt + '/flow/missedkey/json',data=json.dumps(missedkey))
r = requests.put(rt + '/threshold/hotkey/json',data=json.dumps(hotkeythreshold))
r = requests.put(rt + '/threshold/missedkey/json',data=json.dumps(missedkeythreshold))
eventurl = rt + '/events/json?maxEvents=10&timeout=60'
eventID = -1
while 1 == 1:
  r = requests.get(eventurl + "&eventID=" + str(eventID))
  if r.status_code != 200: break
  events = r.json()
  if len(events) == 0: continue

  eventID = events[0]["eventID"]
  events.reverse()
  for e in events:
    r = requests.get(rt + '/metric/' + e['agent'] + '/' + e['dataSource'] + '.' + e['metric'] + '/json')
    metrics = r.json()
    if len(metrics) > 0:
      evtMetric = metrics[0]
      evtKeys = evtMetric.get('topKeys',None)
      if(evtKeys and len(evtKeys) > 0):
        topKey = evtKeys[0]
        key = topKey.get('key', None)
        value = topKey.get('value',None)
        print e['metric'] + ',' + e['agent'] + ',' + key + ',' + str(value)
The following output shows the results produced as the script generates notifications:
$ python memcache.py 
missedkey,10.0.0.151,session.uesr_id,33.7777777778
hotkey,10.0.0.143,session.time,1481.28081712
The script has identified a hot key, session.time, the member of the cluster hosting the key, 10.0.0.143, and the amount of traffic to the key, 1481 bytes per second.

Note: The script also identified the key, session.uesr_id, as having a high miss rate. It is pretty clear that this is a typo and a client is using the wrong key name; it should be session.user_id. Correcting the error increases the cache hit rate, reduces load on the databases and improves application response time, see Memcached missed keys.

The problems identified in the hot key example on the Etsy blog and the missed key example above were corrected manually: by changing the application logic around accessing the key in the first case, and by correcting a typo in the second. However, it's interesting to consider whether it might be possible to automatically load balance traffic on the Memcache cluster by programmatically changing how keys are mapped to servers in the cluster, similar to the way in which OpenFlow is used in the LAG article to control switch forwarding.

Memcache clients already need to know the ordered list of the servers in the cluster in order to implement the hash-based load balancing mechanism. If in addition, a set of matching rules (analogous to OpenFlow match rules) could be applied to keys to specifically override the server selection decision, then it would be possible to automatically load balance the cluster by evenly distributing high usage keys, e.g.
selected_server = lookuprule(key)    # check for an override rule first
if not selected_server:
  index = hash(key) % len(cluster)   # fall back to hash-based selection
  selected_server = cluster[index]
value = selected_server.get(key)
Note: A workable solution isn't quite this simple; it would also need to move the data to the new server before applying a new rule in order to avoid a cache miss storm.

Memcache hot keys is an interesting example that demonstrates the close relationship between network, server and application performance. A siloed organization would find it difficult to address this issue: the networking team, looking at the problem in isolation, would see it as a capacity problem and might propose an expensive and disruptive upgrade; the Memcache administrator (without network visibility) might just see a chronic and inexplicable performance problem; and the application team (relying on the Memcache cluster to improve the performance and scalability of the web site) would see chronic performance problems affecting site users.
Figure 3: Typical Web 2.0 application architecture
While this article focused primarily on Memcache performance, the cache is only one element in a more complex application architecture. Figure 3 shows the elements in a Web 2.0 data center (e.g. Facebook, Twitter, Wikipedia, YouTube, etc.). A cluster of web servers handles requests from users. Typically, the application logic for the web site will run on the web servers in the form of server side scripts (PHP, Ruby, ASP, etc.). The web applications access the database to retrieve and update user data. Since the database can quickly become a bottleneck, the cache is used to store the results of database queries.

Combining sFlow solutions for monitoring network devices, hosts, web servers, Memcache servers and the applications built on this infrastructure delivers the unified visibility needed to manage data center wide performance and lays the foundation for a software defined data center (SDDC).

Saturday, January 19, 2013

Load balancing LAG/ECMP groups

Figure 1: Hash collision on a link aggregation group
The Internet Draft, draft-krishnan-opsawg-large-flow-load-balancing, is a good example of the type of problem that can be addressed using performance aware software defined networking (SDN). The Internet Draft describes the need for real-time analytics to drive load balancing of long lived flows in LAG/ECMP groups.

The draft describes the challenge of managing long lived connections in the context of service provider backbones, but similar problems occur in the data center where long lived storage connections (iSCSI/FCoE) and network virtualization tunnels (VxLAN, NVGRE, STT, GRE etc) are responsible for a significant fraction of data center traffic.

The challenge posed by long lived flows is best illustrated by a specific example. The article, Link aggregation, provides a basic overview of LAG/MLAG topologies. Figure 1 shows a detailed view of an aggregation group consisting of 4 links connecting Switch A and Switch C and is used to illustrate the problem posed by long lived flows.

To ensure that packets in a flow arrive in order at their destination, Switch C computes a hash function over selected fields in the packets (e.g. L2: source and destination MAC address, L3: source and destination IP address, or L4: source and destination IP address + source and destination TCP/UDP ports) and picks a link based on the value of the hash, e.g.
index = hash(packet fields) % linkgroup.size
selected_link = linkgroup[index]
Hash based load balancing is easily implemented in hardware and works reasonably well, as long as traffic consists of large numbers of short duration flows, since the hash function randomly distributes flows across the members of the group. However, consider the problem posed by the two long lived high traffic flows, shown in the diagram as Packet Flow 1 and Packet Flow 2. There is a 1 in 4 chance that the two flows will be assigned the same group member and they will share this link for their lifetime.
Figure 2: from article Link aggregation
If you consider the network topology in Figure 2, there may be many aggregation groups in the path shared by the two flows, with a chance of collision on each hop. In addition, when you consider the large number of long lived storage and tunneled flows in the data center, the probability that busy flows will collide on at least one aggregation group along the path is high.
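The likelihood of a collision is easy to estimate: with random hash-based assignment, the probability that n large flows all land on different members of a k-link group is (k/k) × ((k-1)/k) × ... × ((k-n+1)/k). The following illustrative calculation shows how quickly the collision probability grows:
# Illustrative calculation: probability that at least two of n large flows
# hash onto the same member of a k-link aggregation group.
def collision_probability(n, k):
  p_distinct = 1.0
  for i in range(n):
    p_distinct *= float(k - i) / k
  return 1.0 - p_distinct

print(collision_probability(2, 4))   # 0.25, the 1 in 4 chance described above
print(collision_probability(4, 4))   # ~0.91 with four large flows on the group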

Collisions between high traffic flows can result in chronic performance problems and poorly balanced load on the links in the group. The link carrying colliding flows may become overloaded, experiencing packet loss and delay, while other links in the group may be lightly loaded with plenty of spare capacity. The challenge is identifying links with flow collisions and changing the path selection used by the switches in order to use spare capacity in the link group.
Figure 3: Elements of an SDN stack to load balance aggregation groups
Figure 3 shows how performance aware SDN can be used to load balance long lived connections and increase performance across the data center. A multi-path SDN load balancing system would consist of the following elements:
  1. Measurement - The sFlow standard provides multi-vendor, scaleable, low latency monitoring of the entire network infrastructure.
  2. Analytics - The sFlow-RT real-time analytics engine receives the sFlow measurements and rapidly identifies large flows (a minimal detection sketch follows this list). In addition, the analytics engine provides the detailed information on link aggregation topology and health needed to choose alternate paths.
  3. SDN application - The SDN application implements a load balancing algorithm, immediately responding to large flows with commands to the OpenFlow controller.
  4. Controller - The OpenFlow controller translates high level instructions to re-route flows into low level OpenFlow commands.
  5. Configuration - The OpenFlow protocol provides a fast, programmatic means for the controller to re-configure forwarding in the network devices.
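As a minimal sketch of the measurement and analytics steps, the following script asks sFlow-RT to track flows by address pair and to generate an event whenever a flow exceeds a threshold; the flow keys and the 10,000 packets per second threshold value are illustrative assumptions:
import requests
import json

# Sketch: configure large flow detection in sFlow-RT. The flow keys and the
# threshold value (10,000 packets per second) are illustrative assumptions.
rt = 'http://localhost:8008'
large = {'keys':'ipsource,ipdestination', 'value':'frames'}
threshold = {'metric':'large', 'value':10000}

r = requests.put(rt + '/flow/large/json', data=json.dumps(large))
r = requests.put(rt + '/threshold/large/json', data=json.dumps(threshold))
# An SDN application would then poll the /events/json API (as in later examples)
# and instruct the OpenFlow controller to re-route the reported flows.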
Figure 4: Load balance link aggregation group by moving large flow
Figure 4 shows the result of applying dynamic load balancing to the aggregation group. The controller detected that the link connecting Switch A, port 2 to Switch C, port 1 was heavily utilized and identified the two colliding flows responsible for the traffic. The controller selected the alternate link connecting Switch A, port 2 to Switch C, port 3 as being underutilized and used OpenFlow to reconfigure the forwarding tables in Switch C to direct Packet Flow 2 to this alternate path. The result is that traffic is evenly spread over the link group members, increasing effective capacity and improving performance by lowering packet loss and delay across the group.

Load balancing is a continuous process; the traffic carried by each of the long lived flows is continually changing, and different flows will collide at different times and in different places. To be effective, the control system needs to have pervasive visibility into traffic and control of switch forwarding. Selecting switches that support both the OpenFlow and sFlow standards creates a solid foundation for deploying performance aware software defined networking solutions like the one described in this article.

Load balancing isn't just an issue for link aggregation (LAG/MLAG) topologies; the same issues occur with equal cost multi-path routing (ECMP) and WAN traffic optimization. Other applications for performance aware SDN include denial of service mitigation, multi-tenant performance isolation and workload placement.

Tuesday, January 8, 2013

Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX

Figure 1: Low latency software defined networking control loop
The articles SDN and delay and Delay and stability describe the critical importance of low measurement delay in constructing stable and effective controls. This article will examine the difference in measurement latency between sFlow and NetFlow/IPFIX and their relative suitability for driving control decisions.
Figure 2: sFlow and NetFlow agent architectures
Figure 2 shows the architectural differences between the sFlow and NetFlow/IPFIX instrumentation in a switch:
  1. NetFlow/IPFIX Cisco NetFlow and IPFIX (the IETF standard based on NetFlow) define a protocol for exporting flow records. A flow record summarizes a set of packets that share common attributes - for example, a typical flow record includes ingress interface, source IP address, destination IP address, IP protocol, source TCP/UDP port, destination TCP/UDP port, IP ToS, start time, end time, packet count and byte count. Figure 2 shows the steps performed by the switch in order to construct flow records. First, the stream of packets is likely to be sampled (particularly in high-speed switches). Next, the sampled packet header is decoded to extract key fields. A hash function is computed over the keys in order to look up the flow record in the flow cache. If an existing record is found, its values are updated, otherwise a record is created for the new flow. Records are flushed from the cache based on protocol information (e.g. if a FIN flag is seen in a TCP packet), a timeout, inactivity, or when the cache is full. The flushed records are finally sent to the traffic analysis application (a simplified sketch of this cache behavior follows this list).
  2. sFlow With sFlow monitoring, the decode, hash, flow cache and flush functionality is no longer implemented on the switch. Instead, sampled packet headers are immediately sent to the traffic analysis application, which decodes the packets and analyzes the data. In addition, sFlow provides a polling function, periodically sending standard interface counters to the traffic analysis applications, eliminating the need for SNMP polling, see Link utilization.
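The following is a simplified sketch (not vendor code) of the flow cache behavior described in step 1. Its only purpose is to show why flow records reach the analyzer late: they only become visible when they are flushed from the cache:
ACTIVE_TIMEOUT = 60     # seconds; a one minute active timeout is a common minimum

flow_cache = {}         # flow keys -> {'packets':n, 'bytes':b, 'start':t}

def update(keys, length, now):
  # called for every (sampled) packet; updates the cached flow record
  rec = flow_cache.setdefault(keys, {'packets':0, 'bytes':0, 'start':now})
  rec['packets'] += 1
  rec['bytes'] += length

def flush(now, export):
  # the traffic analyzer only sees a flow when its record is flushed
  for keys, rec in list(flow_cache.items()):
    if now - rec['start'] >= ACTIVE_TIMEOUT:
      export(keys, rec)
      del flow_cache[keys]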
The flow cache introduces significant measurement delay for NetFlow/IPFIX based monitoring since the measurements are only accessible to management applications once they are flushed from the cache and sent to a traffic analyzer. In contrast, sFlow has no cache - measurements are immediately sent and can be quickly acted upon, resulting in extremely low measurement delay.

Open vSwitch is a useful testbed for demonstrating the impact of the flow cache on measurement delay since it can simultaneously export both NetFlow and sFlow, allowing a side-by-side comparison. The article, Comparing sFlow and NetFlow in a vSwitch, describes how to configure sFlow and NetFlow on the Open vSwitch and demonstrates some of the differences between the two measurement technologies. However, this article focuses on the specific issue of measurement delay.

Figure 3 shows the experimental setup, with sFlow directed to InMon sFlow-RT and NetFlow directed to SolarWinds Real-Time NetFlow Analyzer.

Note: Both tools are available at no charge, making it easy for anyone to reproduce these results.


Figure 3: Latency of large flow detection using sFlow and NetFlow
The charts in Figure 3 show how each technology reports on a large data transfer. The charts have been aligned to have the same time axis so you can easily compare them. The vertical blue line indicates the start of the data transfer.
  1. sFlow By analyzing the continuous stream of sFlow messages from the switch, sFlow-RT immediately detects and continuously tracks the data transfer from the moment it starts to its completion just over two minutes later.
  2. NetFlow The Real-Time NetFlow Analyzer doesn't report on the transfer until it receives the first NetFlow record 60 seconds after the data transfer started, indicated by the first vertical red line. The 60 second delay corresponds to the active timeout used to flush records from the flow cache. A second NetFlow record, indicated by the second red line, is responsible for the second spike 60 seconds later, and a final NetFlow record, received after the transfer completes and indicated by the third red line, is responsible for the third spike in the chart.
Note: A one minute active timeout is the lowest configurable value on many Cisco switches (the default is 30 minutes), see Configuring NetFlow and NetFlow Data Export.

The large measurement delay imposed by the NetFlow/IPFIX flow cache makes the technology unsuitable for SDN control applications. The measurement delay can lead to instability since the controller is never sure of the current traffic levels and may be taking action based on stale data reported for flows that are no longer active.

In contrast, the sFlow measurement system quickly detects and continuously tracks large flows, allowing an SDN traffic management application to reconfigure switches and balance the paths that active flows take across the network.

Monday, January 7, 2013

SDN and delay

Figure 1: Components of delay in a feedback control system
The article, Delay and stability, describes the critical importance of low latency in building stable and effective feedback control systems. The article lists the different components of delay, which are drawn in the timeline shown in Figure 1.

Feedback control describes how a system responds to changes, or disturbances. For example, consider how you might respond to a denial of service attack. The first component of delay is the measurement delay: the time taken by the monitoring system to detect the attack and generate actionable information (for example, the IP addresses of the attacker and victim). Next there is planning delay: the time taken to decide how to respond to the attack, for example weighing alternatives and deciding to null route traffic. Implementing the plan involves configuration delay as commands are entered at the router's command line. Next, there is a delay as the route propagates, and finally the effect of the control is seen as the denial of service traffic is dropped by upstream routers.
Figure 2: Low latency software defined networking control loop
The software defined networking (SDN) control system, shown in Figure 2, significantly reduces the time taken to execute the control loop. The sFlow standard provides multi-vendor, scaleable, low latency monitoring of the entire network, server and application infrastructure. The sFlow-RT real-time analytics engine detects the denial of service attack within seconds and provides actionable information to an SDN application which automates the planning process and immediately responds with commands to the Controller. The OpenFlow protocol provides a fast, programmatic means for the controller to re-configure forwarding in the network devices, significantly reducing the configuration delay.

Performance aware software defined networking solutions reduce response times from minutes to seconds. The article, Delay and stability, describes why low latency is an essential prerequisite to creating stable feedback control systems. The increased speed of response provided by sFlow and OpenFlow allows new classes of problem to be addressed, like dynamic load balancing, that significantly improve efficiency and performance by adapting the network to rapidly changing demands.

Network virtualization is one of the major applications for software defined networking. The article, Tunnels, describes how traffic in a virtual network is tunneled over a shared physical network (and how standard sFlow monitoring is able to observe the tunneled traffic). While virtual networks are logically separate, they share the same physical infrastructure. Feedback control is essential to load balance traffic between virtual networks to ensure quality of service, reduce costs and increase scalability by optimizing the use of the shared physical network assets.

Sunday, January 6, 2013

Performance aware software defined networking

Figure 1: Performance aware SDN applications with sFlow-RT
The article, Software defined networking, described reasons for including the sFlow standard as the visibility protocol in an SDN stack. InMon's sFlow-RT module delivers real-time network, host and application visibility to SDN applications, making possible the development of new classes of performance aware application, such as load balancing and denial of service protection.

Figure 1 shows how sFlow-RT fits into an SDN stack. The sFlow-RT module receives a continuous stream of sFlow datagrams from network devices and converts them into actionable metrics, accessible through a REST (Representational State Transfer) API. REST Applications make use of the API to detect changing network traffic and adapt the network by pushing controls to an OpenFlow controller (for example, the open source Floodlight OpenFlow Controller). The controller uses the OpenFlow protocol to communicate with the network devices and reconfigure their forwarding behavior.

Note: While this example focuses on network visibility, sFlow integrates network, server and application monitoring and sFlow-RT delivers real-time visibility into these metrics, allowing the SDN controller to be server and application aware.
Figure 2: Denial of service protection example
Figure 2 shows a simplified REST Application for protecting against denial of service attacks. The numbered arrows indicate REST commands used by the application to monitor the network and deploy controls.

1. define address groups
Address groups (defined using CIDR notation) are a useful way of categorizing traffic. In this case, allowing internal and external addresses to be identified:
curl -H "Content-Type:application/json" -X PUT --data "{external:['0.0.0.0/0'], internal:['10.0.0.0/8']}" http://localhost:8008/group/json
2. define flows
Flows are defined by naming the packet attributes used to group packets into flows (keys), a value to associate with the flow, and a filter to select specific traffic. In this case we are interested in defining incoming flows, i.e. from external source addresses to internal destination addresses:
curl -H "Content-Type:application/json" -X PUT --data "{keys:'ipsource,ipdestination', value:'frames', filter:'sourcegroup=external&destinationgroup=internal'}" http://localhost:8008/flow/incoming/json
3. define thresholds
The following command defines a threshold of 1000 packets per second on any incoming flow:
curl -H "Content-Type:application/json" -X PUT --data "{metric:'incoming', value:1000}" http://localhost:8008/threshold/incoming/json
4. receive threshold event
The application polls for events, using "long polling" to receive asynchronous notification of new events. The following command asks for any events after eventID=4, the most recent event received, waiting up to 60 seconds for a result:
curl "http://localhost:8008/events/json?eventID=4&timeout=60"
The following event shows that an incoming flow generating 1,531 packets per second has exceeded the threshold:
[{
 "agent": "10.0.0.16",
 "dataSource": "4",
 "eventID": 5,
 "metric": "incoming",
 "threshold": 1000,
 "thresholdID": "incoming",
 "timestamp": 1357169369479,
 "value": 1531.149418835524
}]
5. monitor flow
Additional information about the flow is retrieved using the information from the event, including: agent, datasource and metric:
curl http://localhost:8008/metric/10.0.0.16/4.incoming/json
The following result shows that the value is still increasing and identifies the specific flow that exceeded the threshold, a flow from external address 192.168.1.1 to internal address 10.0.0.151:
[{
 "agent": "10.0.0.16",
 "dataSource": "4",
 "metricName": "incoming",
 "metricValue": 1582.93965044338071,
 "topKeys": [
  {
   "key": "192.168.1.1,10.0.0.151",
   "updateTime": 1357169662500,
   "value": 1582.93965044338071
  },
  {
   "key": "192.168.1.4,10.0.0.151",
   "updateTime": 1357169665500,
   "value": 46.552918457198984
  }
 ],
 "updateTime": 1357169665500
}]
6. deploy control
The OpenFlow controller is instructed to drop traffic from the external attacker (192.168.1.1).
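For example, a simple application might push a static drop rule through the controller's REST interface. The following sketch assumes Floodlight's static flow pusher; the URL, switch DPID and field names are assumptions that vary between controller versions and should be adapted to the controller actually in use:
import requests
import json

# Sketch only: ask the OpenFlow controller to drop traffic from the attacker.
# The endpoint, DPID and field names are assumptions (Floodlight's static flow
# pusher interface differs between versions); adapt to the controller in use.
rule = {'switch':'00:00:00:00:00:00:00:01',   # hypothetical DPID of the edge switch
        'name':'block-192.168.1.1',
        'ether-type':'0x800',
        'src-ip':'192.168.1.1',
        'active':'true',
        'actions':''}                         # empty action list drops matching packets
r = requests.post('http://localhost:8080/wm/staticflowentrypusher/json', data=json.dumps(rule))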

7. monitor flow
Continue to monitor the flow to verify that the control has taken effect.

8. release control
At some later time the control is removed in order to release flow table entries tied up in blocking the attack. If the attacker returns, a new event will be generated, triggering a new control.

The following Python script, ddos.py, combines the steps to demonstrate a simple application:
import requests
import json

groups = {'external':['0.0.0.0/0'],'internal':['10.0.0.0/8']}
flows = {'keys':'ipsource,ipdestination','value':'frames','filter':'sourcegroup=external&destinationgroup=internal'}
threshold = {'metric':'incoming','value':1000}

target = 'http://localhost:8008'

r = requests.put(target + '/group/json',data=json.dumps(groups))
r = requests.put(target + '/flow/incoming/json',data=json.dumps(flows))
r = requests.put(target + '/threshold/incoming/json',data=json.dumps(threshold))

eventurl = target + '/events/json?maxEvents=10&timeout=60'
eventID = -1
while 1 == 1:
  r = requests.get(eventurl + '&eventID=' + str(eventID))
  if r.status_code != 200: break
  events = r.json()
  if len(events) == 0: continue

  eventID = events[0]["eventID"]
  for e in events:
    if 'incoming' == e['metric']:
      r = requests.get(target + '/metric/' + e['agent'] + '/' + e['dataSource'] + '.' + e['metric'] + '/json')
      metric = r.json()
      if len(metric) > 0:
        print metric[0]["topKeys"][0]["key"]
Running the script generates the following output within seconds of a large flow starting:
$ python ddos.py 
192.168.1.1,10.0.0.151
sFlow-RT is free for non-commercial use. Please try out the software. Comments, questions and general discussion of performance aware SDN are welcome on the sFlow-RT group.