UDP protocols such as sFlow, syslog, NetFlow, IPFIX and SNMP traps have many advantages for large scale network and system monitoring, see
Push vs Pull. In a typical deployment each managed element is configured to send UDP packets to a designated collector (specified by an IP address and port). For example, in a simple sFlow monitoring system all the switches might be configured to send sFlow data to UDP port 6343 on the host running the sFlow analysis application. Complex deployments may require multiple analysis applications, for example: a first application providing
analytics for software defined networking, a second focused on
host performance, a third addressing
packet capture and security, and a fourth looking at
application performance. In addition, a second copy of each application may be required for redundancy. The challenge is getting copies of the data to all the application instances in an efficient manner.
There are a number of approaches to replicating UDP data, each with limitations:
- IP Multicast - if the data is sent to an IP multicast address then each application could subscribe to the multicast channel and receive a copy of the data. This sounds great in theory, but in practice configuring and maintaining IP multicast connectivity can be a challenge. In addition, all the agents and collectors would need to support IP multicast. IP multicast also doesn't address the situation where you have multiple applications running on a single host, since each application has to receive the UDP data on a different port.
- Replicate at source - each agent could be configured to send a copy of the data to each application. Replicating at source is a configuration challenge (all agents need to be reconfigured if you add an additional application). This approach is also wasteful of bandwidth - multiple copies of the same data are sent across the network.
- Replicate at destination - a UDP replicator, or "samplicator", application receives the stream of UDP messages, copies them and resends them to each of the applications. This functionality may be deployed as a stand-alone application, or as an integrated function within an analysis application. The replicator is a single point of failure - if it is shut down, none of the applications receive data. The replicator also adds delay to the measurements and, at high data rates, can significantly increase the UDP loss rate since each datagram is received, sent, and received again.
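To make the replicate-at-destination trade-offs concrete, here is a minimal sketch of a "samplicator" in Python (the function names are my own, not taken from any particular tool). Note that every datagram is received and resent in user space, which is exactly where the added delay and potential loss come from:

```python
import socket

def replicate_one(rx_sock, tx_sock, destinations):
    # Receive a single datagram and resend an identical copy to each
    # (host, port) destination.
    data, _ = rx_sock.recvfrom(65535)
    for dest in destinations:
        tx_sock.sendto(data, dest)
    return data

def run_replicator(listen_port, destinations):
    # Stand-alone replicator loop: every datagram arriving on
    # listen_port is copied to each configured application.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", listen_port))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        replicate_one(rx, tx, destinations)
```

For example, `run_replicator(6343, [("10.0.0.2", 6343), ("10.0.0.3", 6343)])` would fan sFlow datagrams out to two collectors - and illustrates why the replicator host becomes a single point of failure.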
This article will examine a fourth option, using software defined networking (SDN) techniques to replicate and distribute data within the network. The
Open vSwitch is implemented in the Linux kernel and includes OpenFlow and network virtualization features that will be used to build the replication network.
First, you will need a server (or virtual machine) running a recent version of Linux. Next
download and install Open vSwitch.
Next, configure the Open vSwitch to handle networking for the server:
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ifconfig eth0 0
ifconfig br0 10.0.0.1/24
Now configure the UDP agents to send their data to 10.0.0.1. You should be able to run a collector application for each service port (e.g. sFlow 6343, syslog 514, etc.).
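A quick way to confirm that datagrams are actually arriving on a given service port is a minimal UDP listener; the following Python sketch (my own helper, not part of Open vSwitch or any collector) reports the source and size of the first datagram seen:

```python
import socket

def listen_once(port, timeout=5.0, addr="0.0.0.0"):
    # Bind to a collector port and report (source address, payload size)
    # for the first datagram received, or None if nothing arrives
    # before the timeout expires.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((addr, port))
    s.settimeout(timeout)
    try:
        data, (src, _) = s.recvfrom(65535)
        return src, len(data)
    except socket.timeout:
        return None
    finally:
        s.close()
```

For example, `listen_once(6343)` should return the address of one of the configured agents once the switches start sending sFlow.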
The first case to consider is replicating the datagrams to a second port on the server (sending packets to
App 1 and
App 2 in the diagram). First, use the
ovs-vsctl command to list the OpenFlow port numbers on the virtual switch:
% ovs-vsctl --format json --columns name,ofport list Interface
{"data":[["eth0",1],["br1",65534]],"headings":["name","ofport"]}
We are interested in replicating packets received on
eth0, and the output shows that the corresponding OpenFlow port is 1.
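The JSON output format is easy to consume programmatically. As a small aid, here is a Python sketch (the helper name is my own) that turns the output into a name-to-ofport table:

```python
import json

def ofport_map(vsctl_json):
    # Convert `ovs-vsctl --format json --columns name,ofport list Interface`
    # output into a {interface name: OpenFlow port} dictionary.
    doc = json.loads(vsctl_json)
    name_col = doc["headings"].index("name")
    port_col = doc["headings"].index("ofport")
    return {row[name_col]: row[port_col] for row in doc["data"]}
```

Running it against the output above yields `{"eth0": 1, "br0": 65534}`, confirming that eth0 is OpenFlow port 1.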
The Open vSwitch provides a command line utility
ovs-ofctl that uses the OpenFlow protocol to configure forwarding rules in the vSwitch. The following OpenFlow rule will replicate sFlow datagrams:
in_port=1 dl_type=0x0800 nw_proto=17 tp_dst=6343 actions=LOCAL,mod_tp_dst:7343,normal
The match part of the rule looks for packets received on port 1 (
in_port=1), where the Ethernet type is IPv4 (
dl_type=0x0800), the IP protocol is UDP (
nw_proto=17), and the destination UDP port is 6343 (
tp_dst=6343). The actions section of the rule is the key to building the replication function. The LOCAL action delivers the original packet as intended. The destination port is then changed to 7343 (
mod_tp_dst:7343) and the modified packet is sent through the
normal processing path to be delivered to the application.
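When several clone ports are needed, the rule text can be generated rather than written by hand. The following Python sketch (the helper name is my own invention) emits rules in the same form as the one above:

```python
def replication_flow(in_port, udp_dst, clone_ports):
    # Build an ovs-ofctl flow entry that delivers the original datagram
    # (LOCAL) and emits one port-rewritten copy per clone port.
    match = "in_port={} dl_type=0x0800 nw_proto=17 tp_dst={}".format(in_port, udp_dst)
    actions = ["LOCAL"]
    for port in clone_ports:
        actions.append("mod_tp_dst:{}".format(port))
        actions.append("normal")
    return "{} actions={}".format(match, ",".join(actions))
```

For example, `replication_flow(1, 6343, [7343])` reproduces the rule shown above, and adding further ports to the list generates one modified copy per application.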
Save this rule to a file, say
replicate.txt, and then use
ovs-ofctl to apply the rule to
br0:
ovs-ofctl add-flows br0 replicate.txt
At this point a second sFlow analysis application listening for sFlow datagrams on port 7343 should start receiving data -
sflowtool is a convenient way to verify that the packets are being received:
sflowtool -p 7343
The second case to consider is replicating the datagrams to a remote host (sending packets to App 3 in the diagram). The following rule extends the previous one to send an additional copy to 10.0.0.2:
in_port=1 dl_type=0x0800 nw_proto=17 tp_dst=6343 actions=LOCAL,mod_tp_dst:7343,normal,mod_nw_src:10.0.0.1,mod_nw_dst:10.0.0.2,normal
The extended rule includes additional actions that modify the source address of the packets (
mod_nw_src:10.0.0.1) and the destination IP address (
mod_nw_dst:10.0.0.2) and sends the packet through the
normal processing path. Since we are relying on the routing functionality in the Linux stack to deliver the packet, make sure that routing is enabled - see
How to Enable IP Forwarding in Linux.
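On Linux the forwarding setting is exposed through procfs, so it can be checked programmatically as well as with sysctl. A minimal Python sketch (my own helper, assuming the standard sysctl path):

```python
def ip_forwarding_enabled(path="/proc/sys/net/ipv4/ip_forward"):
    # True/False according to the sysctl file on Linux; None if the
    # file is unavailable (e.g. on other platforms).
    try:
        with open(path) as f:
            return f.read().strip() == "1"
    except OSError:
        return None
```

If this returns False, `sysctl -w net.ipv4.ip_forward=1` (run as root) enables forwarding until the next reboot.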
Unicast reverse path filtering (uRPF) is a mechanism that routers use to drop spoofed packets (i.e. packets where the source address doesn't belong to the subnet on the access port the packet was received on). uRPF should be enabled wherever practical because spoofing is used in a variety of security and denial of service attacks, e.g. DNS amplification attacks. By modifying the IP source address to be the address of the forwarding host (10.0.0.1) rather than the original source IP address, the OpenFlow rule ensures that the packet will pass through uRPF filters, both on the host and on the access router. Rewriting the sFlow source address does not cause any problems because the sFlow protocol identifies the original source of the data within its payload and doesn't rely on the IP source address. However, other UDP protocols (for example, NetFlow/IPFIX) rely on the IP source address to identify the source of the data. In this case, removing the mod_nw_src action will leave the IP source address unchanged, but the packet may well be dropped by uRPF filters. Newer Linux distributions implement strict uRPF by default; however, it can be disabled if necessary - see Reverse Path Filtering.
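The per-interface reverse path filter setting is also exposed through procfs, so it can be inspected in the same way. A Python sketch (my own helper, assuming the standard sysctl layout):

```python
import os

def rp_filter_mode(interface="all", root="/proc/sys/net/ipv4/conf"):
    # Reverse path filter setting for an interface: 0 = off,
    # 1 = strict uRPF, 2 = loose uRPF; None if unavailable.
    try:
        with open(os.path.join(root, interface, "rp_filter")) as f:
            return int(f.read().strip())
    except OSError:
        return None
```

For example, `rp_filter_mode("eth0")` returning 1 indicates strict filtering on that interface, which is the case where unmodified NetFlow/IPFIX source addresses would be dropped.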
This article has only scratched the surface of capabilities of the Open vSwitch. In situations where passing the raw packets across the network isn't feasible the Open vSwitch can be configured to send the packets over a tunnel (sending packets to
App 4 in the diagram). Tunnels, in conjunction with OpenFlow, can be used to create a virtual UDP distribution overlay network with its own addressing scheme and topology - Open vSwitch is used by a number of network virtualization vendors (e.g. VMware NSX). In addition, more complex filters can also be implemented, forwarding datagrams based on source subnet to different collectors etc.
The replication functions don't need to be performed in software in the virtual switch. OpenFlow rules can be pushed to OpenFlow capable hardware switches, which can perform the replication or source-based forwarding functions at wire speed. A full-blown controller-based solution isn't necessarily required; the
ovs-ofctl command can be used to push OpenFlow rules to physical switches.
More generally, building flexible UDP datagram distribution and replication networks is an interesting use case for software defined networking. The power of software defined networking is that you can adapt the network behavior to suit the needs of the application - in this case overcoming the limitations of existing UDP distribution solutions by modifying the behavior of the network.