Monday, September 9, 2019

Packet analysis using Docker

Why use sFlow for packet analysis? To rephrase the Heineken slogan, sFlow reaches the parts of the network that other technologies cannot reach. Industry standard sFlow is widely supported by switch vendors, embedding wire-speed packet monitoring throughout the network. With sFlow, any link or group of links can be remotely monitored. The alternative approach of physically attaching a probe to a SPAN/Mirror port is becoming much less feasible with increasing network sizes (tens of thousands of switch ports) and link speeds (10, 100, and 400 Gigabits per second). Using sFlow for packet capture doesn't replace traditional packet analysis; instead, sFlow extends the capabilities of existing packet capture tools into the high speed switched network.

This article describes the sflow/tcpdump and sflow/tshark Docker images, which provide a convenient way to analyze packets captured using sFlow.

Run the following command to analyze packets using tcpdump:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tcpdump

19:06:42.000000 ARP, Reply 10.0.0.254 is-at c0:ea:e4:89:b0:98 (oui Unknown), length 64
19:06:42.000000 IP 10.0.0.236.548 > 10.0.0.70.61719: Flags [P.], seq 3380015689:3380015713, ack 515038158, win 41992, options [nop,nop,TS val 1720029042 ecr 904769627], length 24
19:06:42.000000 IP 10.0.0.236.548 > 10.0.0.70.61719: Flags [P.], seq 149816:149832, ack 510628, win 41992, options [nop,nop,TS val 1720029087 ecr 904770068], length 16
The normal tcpdump options can be used. For example, to select DNS packets:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tcpdump -vv port 53
reading from file -, link-type EN10MB (Ethernet)
19:08:49.000000 IP (tos 0x0, ttl 64, id 22316, offset 0, flags [none], proto UDP (17), length 65)
    10.0.0.70.43801 > dns.google.53: [udp sum ok] 35941+ A? clients2.google.com. (37)
19:09:00.000000 IP (tos 0x0, ttl 255, id 16813, offset 0, flags [none], proto UDP (17), length 66)
    10.0.0.64.50675 > 10.0.0.1.53: [udp sum ok] 57874+ AAAA? p49-imap.mail.me.com. (38)
The following command selects TCP SYN packets:
$ docker run -p 6343:6343/udp sflow/tcpdump 'tcp[tcpflags] == tcp-syn'
reading from file -, link-type EN10MB (Ethernet)
19:10:37.000000 IP 10.0.0.30.46786 > 10.0.0.162.1179: Flags [S], seq 2993962362, win 29200, options [mss 1460,sackOK,TS val 20531427 ecr 0,nop,wscale 9], length 0
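Filters can also select traffic by address. For example, the following command (using host 10.0.0.70 from the earlier output) captures traffic to or from a single host:
$ docker run -p 6343:6343/udp sflow/tcpdump host 10.0.0.70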
Capture 10 packets to a file and then exit:
$ docker run -v $PWD:/pcap -p 6343:6343/udp sflow/tcpdump -w /pcap/packets.pcap -c 10
reading from file -, link-type EN10MB (Ethernet)
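The resulting packets.pcap file contains standard pcap records, so it can be opened in Wireshark or read back on the host with tcpdump, for example:
$ tcpdump -r packets.pcap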
A tcpdump Tutorial with Examples — 50 Ways to Isolate Traffic provides an overview of the capabilities of tcpdump with useful examples.

Run the following command to analyze packets using tshark, a terminal-based version of Wireshark:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark
Capturing on '-'
    1   0.000000   10.0.0.236 → 10.0.0.70    AFP 1518 [Reply without query?]
    2   0.000000   10.0.0.236 → 10.0.0.70    AFP 1518 [Reply without query?]
    3   0.000000   10.0.0.114 → 10.0.0.72    SSH 1518 Server: Encrypted packet (len=1448)
Packets can be filtered using Display Filters. For example, the following command selects DNS traffic:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -Y 'dns'
Capturing on '-'
  328  22.000000      8.8.8.8 → 10.0.0.70    DNS 136 Standard query response 0xfce4 AAAA img.youtube.com CNAME ytimg.l.google.com AAAA
  472  36.000000    10.0.0.52 → 10.0.0.1     DNS 79 Standard query 0x173e AAAA www.nytimes.com
Print the IP source, destination, protocol, and packet length fields:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -T fields -e ip.src -e ip.dst -e ip.proto -e ip.len
Capturing on '-'
10.0.0.70 10.0.0.236 6 1500
10.0.0.236 10.0.0.70 6 52
10.0.0.70 10.0.0.236 6 1500
10.0.0.236 10.0.0.70 6 52
10.0.0.70 10.0.0.236 6 1500
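The field output can be converted to CSV using tshark's -E options. For example, the following variant of the command adds a header row and comma separators:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -T fields -e ip.src -e ip.dst -e ip.proto -e ip.len -E header=y -E separator=,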
Capture 100 packets and print a summary of the protocols:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -q -z io,phs -c 100
Capturing on '-'
100 packets captured

===================================================================
Protocol Hierarchy Statistics
Filter: 

eth                                      frames:100 bytes:85721
  ip                                     frames:99 bytes:85657
    tcp                                  frames:97 bytes:85119
      dsi                                frames:61 bytes:82122
        _ws.short                        frames:54 bytes:77180
        afp                              frames:6 bytes:4856
          _ws.short                      frames:5 bytes:4766
      _ws.short                          frames:15 bytes:1050
      http                               frames:1 bytes:499
        _ws.short                        frames:1 bytes:499
      iscsi                              frames:1 bytes:118
        iscsi.flags                      frames:1 bytes:118
          scsi                           frames:1 bytes:118
            _ws.short                    frames:1 bytes:118
    ipv6                                 frames:2 bytes:538
      tcp                                frames:2 bytes:538
        tls                              frames:2 bytes:538
          _ws.short                      frames:2 bytes:538
  arp                                    frames:1 bytes:64
    _ws.short                            frames:1 bytes:64
===================================================================
Capture 100 packets and print a summary of the IP traffic by address:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -q -z endpoints,ip -c 100
Capturing on '-'
100 packets captured

================================================================================
IPv4 Endpoints
Filter:
                       |  Packets  | |  Bytes  | | Tx Packets | | Tx Bytes | | Rx Packets | | Rx Bytes |
10.0.0.70                     95         81713         44           25507          51           56206   
10.0.0.236                    91         80820         50           55956          41           24864   
10.0.0.30                      6          2369          2            1508           4             861   
10.0.0.16                      1           587          1             587           0               0   
10.0.0.28                      1           587          0               0           1             587   
10.0.0.160                     1          1258          0               0           1            1258   
10.0.0.172                     1           218          1             218           0               0   
================================================================================
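The -z option supports many other statistics. For example, the following command summarizes the traffic between pairs of IP addresses:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -q -z conv,ip -c 100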
The following command prints packet decodes as JSON:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -T json
Capturing on '-'
[
  {
    "_index": "packets-2019-09-06",
    "_type": "pcap_file",
    "_score": null,
    "_source": {
      "layers": {
        "frame": {
          "frame.interface_id": "0",
          "frame.interface_id_tree": {
            "frame.interface_name": "-"
          },
          "frame.encap_type": "1",
          "frame.time": "Sep  6, 2019 19:41:12.000000000 UTC",
          "frame.offset_shift": "0.000000000",
          "frame.time_epoch": "1567798872.000000000",
          "frame.time_delta": "0.000000000",
          "frame.time_delta_displayed": "0.000000000",
          "frame.time_relative": "0.000000000",
          "frame.number": "1",
          "frame.len": "64",
          "frame.cap_len": "60",
          "frame.marked": "0",
          "frame.ignored": "0",
          "frame.protocols": "eth:ethertype:arp"
        },
        "eth": {
          "eth.dst": "70:10:6f:d8:13:30",
          "eth.dst_tree": {
            "eth.dst_resolved": "HewlettP_d8:13:30",
            "eth.addr": "70:10:6f:d8:13:30",
            "eth.addr_resolved": "HewlettP_d8:13:30",
            "eth.lg": "0",
            "eth.ig": "0"
          },
          "eth.src": "98:4b:e1:03:4a:61",
          "eth.src_tree": {
            "eth.src_resolved": "HewlettP_03:4a:61",
            "eth.addr": "98:4b:e1:03:4a:61",
            "eth.addr_resolved": "HewlettP_03:4a:61",
            "eth.lg": "0",
            "eth.ig": "0"
          },
          "eth.type": "0x00000806",
          "eth.padding": "00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00"
        },
        "arp": {
          "arp.hw.type": "1",
          "arp.proto.type": "0x00000800",
          "arp.hw.size": "6",
          "arp.proto.size": "4",
          "arp.opcode": "1",
          "arp.src.hw_mac": "98:4b:e1:03:4a:61",
          "arp.src.proto_ipv4": "10.0.0.30",
          "arp.dst.hw_mac": "00:00:00:00:00:00",
          "arp.dst.proto_ipv4": "10.0.0.232"
        },
        "_ws.short": "[Packet size limited during capture: Ethertype truncated]"
      }
    }
  },
The tshark -T ek option formats the JSON output as a single line per packet, making the output easy to parse in scripts. For example, the following emerging.py script downloads the Emerging Threats compromised IP address database, parses the JSON records, checks to see if source and destination addresses can be found in the database, and prints out information on any matches:
#!/usr/bin/env python

from sys import stdin
from json import loads
from requests import get

# download the Emerging Threats compromised IP address list
blacklist = set()
r = get('https://rules.emergingthreats.net/blockrules/compromised-ips.txt')
for line in r.iter_lines():
  blacklist.add(line)

# read tshark -T ek output, one JSON record per line, from stdin
for line in stdin:
  msg = loads(line)
  try:
    time = msg['timestamp']
    layers = msg['layers']
    ip = layers["ip"]
    src = ip["ip_ip_src"]
    dst = ip["ip_ip_dst"]
    if src in blacklist or dst in blacklist:
      print "%s %s %s" % (time,src,dst)
  except KeyError:
    pass
The following command runs the script:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tshark -T ek | ./emerging.py
See the TShark man page for more options.

Forwarding using sFlow-RT describes how to set up and tear down sFlow streams using the sFlow-RT analytics engine. This is a simple way to direct a stream of sFlow to a desktop running the sflow/tcpdump or sflow/tshark images. For example, suppose the analyzer is running on host 10.0.0.30 and sFlow-RT is running on host 10.0.0.1; the following command would start a session:
curl -H "Content-Type:application/json" -X PUT --data '{"address":"10.0.0.30"}' \
http://10.0.0.1:8008/forwarding/tcpdump/json
and the following command would end the session:
curl -X DELETE http://10.0.0.1:8008/forwarding/tcpdump/json
Note: The sflow/sflow-rt Docker image is a convenient way to run sFlow-RT:
docker run -p 8008:8008 -p 6343:6343/udp sflow/sflow-rt
Finally, Triggered remote packet capture using filtered ERSPAN shows how the broad visibility provided by sFlow can be combined with hardware filtering to trigger full packet capture of selected traffic.

Friday, September 6, 2019

Running sflowtool using Docker

The sflowtool command line utility is used to convert standard sFlow records into a variety of different formats. While there are a large number of native sFlow analysis applications, familiarity with sflowtool is worthwhile since it provides a simple way to verify receipt of sFlow, understand the contents of the sFlow telemetry stream, and build simple applications through custom scripting.

The sflow/sflowtool Docker image provides a simple way to run sflowtool. Run the following command to print the contents of sFlow packets:
$ docker run -p 6343:6343/udp sflow/sflowtool
startDatagram =================================
datagramSourceIP 10.0.0.111
datagramSize 144
unixSecondsUTC 1321922602
datagramVersion 5
agentSubId 0
agent 10.0.0.20
packetSequenceNo 3535127
sysUpTime 270660704
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:2
sampleType COUNTERSSAMPLE
sampleSequenceNo 228282
sourceId 0:14
counterBlock_tag 0:1
ifIndex 14
networkType 6
ifSpeed 100000000
ifDirection 0
ifStatus 3
ifInOctets 4839078
ifInUcastPkts 15205
ifInMulticastPkts 0
ifInBroadcastPkts 4294967295
ifInDiscards 0
ifInErrors 0
ifInUnknownProtos 4294967295
ifOutOctets 149581962744
ifOutUcastPkts 158884229
ifOutMulticastPkts 4294967295
ifOutBroadcastPkts 4294967295
ifOutDiscards 101
ifOutErrors 0
ifPromiscuousMode 0
endSample   ----------------------
endDatagram   =================================
The -g option flattens the output so that it is more easily filtered using grep:
$ docker run -p 6343:6343/udp sflow/sflowtool -g | grep ifInOctets
2019-09-03T22:37:21+0000 10.0.0.231 0 3203000 0:6 0:2 0:1 ifInOctets 0
2019-09-03T22:37:23+0000 10.0.0.232 0 7242462 0:5 0:2 0:1 ifInOctets 53791415069
2019-09-03T22:37:23+0000 10.0.0.253 0 8178007 0:7 0:2 0:1 ifInOctets 31663763747
2019-09-03T22:37:23+0000 10.0.0.253 0 8178007 0:3 0:2 0:1 ifInOctets 1333603780050
2019-09-03T22:37:26+0000 10.0.0.253 0 8178008 0:1 0:2 0:1 ifInOctets 9116481296
The -L option prints out CSV records with the selected fields:
$ docker run -p 6343:6343/udp sflow/sflowtool -L agent,ifIndex,ifInOctets
10.0.0.253,23,432680126074
10.0.0.30,2,54056144719
10.0.0.253,21,3860664000830
10.0.0.253,3,1345269893416
10.0.0.253,2,1910370790761
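The CSV records are easily processed using standard command line tools. For example, the following sketch uses awk to select records from a single agent (the address 10.0.0.253 is taken from the output above):
$ docker run -p 6343:6343/udp sflow/sflowtool -L agent,ifIndex,ifInOctets | awk -F, '$1=="10.0.0.253" {print $2,$3}'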
The -J option prints out the decoded sFlow datagrams as JSON (with a blank line between each datagram):
$ docker run -p 6343:6343/udp sflow/sflowtool -J
{
 "datagramSourceIP":"172.17.0.1",
 "datagramSize":"1388",
 "unixSecondsUTC":"1567707952",
 "localtime":"2019-09-05T18:25:52+0000",
 "datagramVersion":"5",
 "agentSubId":"0",
 "agent":"10.0.0.253",
 "packetSequenceNo":"8254753",
 "sysUpTime":"165436226",
 "samplesInPacket":"8",
 "samples":[{
   "sampleType_tag":"0:1",
   "sampleType":"FLOWSAMPLE",
   "sampleSequenceNo":"2594544",
   "sourceId":"0:3",
   "meanSkipCount":"500",
   "samplePool":"1622164761",
   "dropEvents":"584479",
   "inputPort":"21",
   "outputPort":"3",
   "elements":[{
     "flowBlock_tag":"0:1",
     "flowSampleType":"HEADER",
     "headerProtocol":"1",
     "sampledPacketSize":"118",
     "strippedBytes":"4",
     "headerLen":"116",
...
The -j option formats the JSON output as a single line per datagram, making the output easy to parse in scripts. For example, the following emerging.py script downloads the Emerging Threats compromised IP address database, parses the JSON records, checks to see if source and destination addresses can be found in the database, and prints out information on any matches:
#!/usr/bin/env python

from sys import stdin
from json import loads
from requests import get

# download the Emerging Threats compromised IP address list
blacklist = set()
r = get('https://rules.emergingthreats.net/blockrules/compromised-ips.txt')
for line in r.iter_lines():
  blacklist.add(line)

# read sflowtool -j output, one JSON datagram per line, from stdin
for line in stdin:
  datagram = loads(line)
  localtime = datagram["localtime"]
  samples = datagram["samples"]
  for sample in samples:
    sampleType = sample["sampleType"]
    elements = sample["elements"]
    if sampleType == "FLOWSAMPLE":
      for element in elements:
        tag = element["flowBlock_tag"]
        # tag 0:1 identifies a sampled packet header record
        if tag == "0:1":
          try:
            src = element["srcIP"]
            dst = element["dstIP"]
            if src in blacklist or dst in blacklist:
              print "%s %s %s" % (localtime,src,dst)
          except KeyError:
            pass
Run the command:
docker run -p 6343:6343/udp sflow/sflowtool -j | ./emerging.py
These are just a few examples; see the sflowtool home page for additional information.

Forwarding using sFlow-RT describes how to set up and tear down sFlow streams using the sFlow-RT analytics engine. This is a simple way to direct a stream of sFlow to a desktop running sflowtool. For example, suppose sflowtool is running on host 10.0.0.30 and sFlow-RT is running on host 10.0.0.1; the following command would start a session:
curl -H "Content-Type:application/json" -X PUT --data '{"address":"10.0.0.30"}' \
http://10.0.0.1:8008/forwarding/sflowtool/json
and the following command would end the session:
curl -X DELETE http://10.0.0.1:8008/forwarding/sflowtool/json
Note: The sflow/sflow-rt Docker image is a convenient way to run sFlow-RT:
docker run -p 8008:8008 -p 6343:6343/udp sflow/sflow-rt

Tuesday, September 3, 2019

Forwarding using sFlow-RT

The diagrams show two different configurations for sFlow monitoring:
  1. Without Forwarding: Each sFlow agent is configured to stream sFlow telemetry to each of the analysis applications. This configuration is appropriate when a small number of applications is used to continuously monitor performance. However, the overhead on the network and agents increases as analyzers are added. Often it is not possible to increase the number of analyzers since many embedded sFlow agents have limited resources and only support a small number of sFlow streams. In addition, the complexity of configuring each agent to add or remove an analysis application can be significant since agents may reside in Ethernet switches, routers, servers, hypervisors and applications on many different platforms from a variety of vendors.
  2. With Forwarding: All the agents are configured to send sFlow to a forwarding module which resends the data to the analysis applications. Analyzers can be added and removed simply by reconfiguring the forwarder, without any changes to the agent configurations.
There are many variations between these two extremes. Typically there will be one or two analyzers used for continuous monitoring and additional tools, like Wireshark, might be deployed for troubleshooting when the continuous monitoring tools detect anomalies.

This article will demonstrate how to forward sFlow using sFlow-RT.

Download and install the software and configure the sFlow agents to stream telemetry to the sFlow-RT instance.
The sFlow-RT status page, accessible on HTTP port 8008, can be used to verify that sFlow is being received from the agents. Click on the API tab, then click on the Open REST API Explorer button to access documentation on the sFlow-RT REST API.
The following REST API call creates a forwarding session, SessionA, directing a stream of sFlow to analyzer 10.0.0.30:
curl -H "Content-Type:application/json" -X PUT --data '{"address":"10.0.0.30"}' \
http://127.0.0.1:8008/forwarding/SessionA/json
Create a second session, SessionB, to a non-standard port, 7343:
curl -H "Content-Type:application/json" \
-X PUT --data '{"address":"10.0.0.30","port":7343}' \
http://127.0.0.1:8008/forwarding/SessionB/json
Create a third session, SessionC, to forward sFlow from a selected agent, 10.0.0.254:
curl -H "Content-Type:application/json" \
-X PUT --data '{"address":"10.0.0.30","port":8343,"agents":["10.0.0.254"]}' \
http://127.0.0.1:8008/forwarding/SessionC/json
List all forwarding sessions:
curl http://127.0.0.1:8008/forwarding/json
Delete forwarding session SessionB:
curl -X DELETE http://127.0.0.1:8008/forwarding/SessionB/json
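Listing the sessions again confirms that SessionB has been removed:
curl http://127.0.0.1:8008/forwarding/json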
In addition, sFlow-RT supports the complex filtering and forwarding operations needed to stream per-tenant views of the sFlow telemetry in a shared network; see Multi-tenant sFlow.
Finally, the streaming analytics capabilities of sFlow-RT can be used to simultaneously deliver metrics to time series databases (e.g. Prometheus and Grafana), send events to SIEM tools like Splunk or Logstash (e.g. Exporting events using syslog), and export flow data (e.g. sFlow to IPFIX/NetFlow) while also running embedded applications to visualize data, mitigate DDoS attacks, and optimize routing.

Tuesday, August 13, 2019

sFlow-RT 3.0 released

The sFlow-RT 3.0 release has a simplified user interface that focuses on metrics needed to manage the performance of the sFlow-RT analytics software and installed applications.

Applications are available that replace features from the previous 2.3 release. The following instructions show how to install sFlow-RT 3.0 along with basic data exploration applications.

On a system with Java 1.8+ installed:
wget https://inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
./sflow-rt/get-app.sh sflow-rt flow-trend
./sflow-rt/get-app.sh sflow-rt browse-metrics
./sflow-rt/start.sh
On a system with Docker installed:
mkdir app
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt flow-trend
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt browse-metrics
docker run -v $PWD/app:/sflow-rt/app -p 6343:6343/udp -p 8008:8008 sflow/sflow-rt
The product user interface can be accessed on port 8008. The Status page, shown at the top of this article, displays key metrics about the performance of the software.
The Apps tab lists the two applications we installed, browse-metrics and flow-trend, and the green color of the buttons indicates both applications are healthy.

Click on the flow-trend button to open the application and trend traffic flows in real-time. The RESTflow article describes the flow analytics capabilities of sFlow-RT in detail.
Click on the browse-metrics button to open the application and trend statistics in real-time. The Cluster performance metrics article describes the metrics analytics capabilities of sFlow-RT in more detail.
The API tab provides a link to Writing Applications, an introductory article on programming sFlow-RT.
Click on the Open REST API Explorer button to access documentation on the sFlow-RT REST API and make queries.

Applications lists additional applications that can be downloaded to export metrics to Prometheus, mitigate DDoS attacks, report on performance of leaf and spine networks, monitor an Internet exchange network, visualize real-time flows, etc.

Friday, July 12, 2019

Arista BGP FlowSpec


The video of a talk by Peter Lundqvist from DKNOG9 describes BGP FlowSpec, use cases, and details of Arista's implementation.

FlowSpec for real-time control and sFlow telemetry for real-time visibility is a powerful combination that can be used to automate DDoS mitigation and traffic engineering. The article, Real-time DDoS mitigation using sFlow and BGP FlowSpec, gives an example using the sFlow-RT analytics software.

EOS 4.22 includes support for BGP FlowSpec. This article uses a virtual machine running vEOS-4.22 to demonstrate how to configure FlowSpec and sFlow so that the switch can be controlled by an sFlow-RT application (such as the DDoS mitigation application referenced earlier).

The following output shows the EOS configuration statements related to sFlow and FlowSpec:
!
service routing protocols model multi-agent
!
sflow sample 16384
sflow polling-interval 30
sflow destination 10.0.0.70
sflow run
!
interface Ethernet1
   flow-spec ipv4 ipv6
!
interface Management1
   ip address 10.0.0.96/24
!
ip routing
!
router bgp 65096
   router-id 10.0.0.96
   neighbor 10.0.0.70 remote-as 65070
   neighbor 10.0.0.70 transport remote-port 1179
   neighbor 10.0.0.70 send-community extended
   neighbor 10.0.0.70 maximum-routes 12000 
   !
   address-family flow-spec ipv4
      neighbor 10.0.0.70 activate
   !
   address-family flow-spec ipv6
      neighbor 10.0.0.70 activate
The following JavaScript statement configures the FlowSpec connection on the sFlow-RT side:
bgpAddNeighbor("10.0.0.96","65070","10.0.0.70",{flowspec:true,flowspec6:true});
The FlowSpec functionality is exposed through sFlow-RT's REST API.
The sFlow-RT REST API Explorer is a simple way to exercise the FlowSpec functionality. In this case we are going to push a rule that blocks traffic from UDP port 53 targeted at host 10.0.0.1. This type of rule is typically used to block a DNS amplification attack.

The following output on the switch verifies that the rule has been received:
localhost#sho bgp flow-spec ipv4 detail
BGP Flow Specification rules for VRF default
Router identifier 10.0.0.96, local AS number 65096
BGP Flow Specification Matching Rule for 10.0.0.1/32;*;IP:17;SP:53;
 Rule identifier: 3851506952
 Matching Rule:
   Destination Prefix: 10.0.0.1/32
   Source Prefix: *
   IP Protocol: 17
   Source Port: 53
 Paths: 1 available
  65070
    from 10.0.0.70 (10.0.0.70)
      Origin IGP, metric -, localpref 100, weight 0, valid, external, best
      Actions: Drop
In practice the process of adding and removing filtering rules can be completely automated by an sFlow-RT application. The combination of real-time sFlow analytics with the real-time control provided by FlowSpec allows DDoS attacks to be detected and blocked within seconds.

Friday, June 14, 2019

Mininet flow analytics with custom scripts

Mininet flow analytics describes how to use the sflow.py helper script that ships with the sFlow-RT analytics engine to enable sFlow telemetry, e.g.
sudo mn --custom sflow-rt/extras/sflow.py --link tc,bw=10 \
--topo tree,depth=2,fanout=2
Mininet, ONOS, and segment routing provides an example using a Custom Topology, e.g.
sudo env ONOS=10.0.0.73 mn --custom sr.py,sflow-rt/extras/sflow.py \
--link tc,bw=10 --topo=sr '--controller=remote,ip=$ONOS,port=6653'
This article describes how to incorporate sFlow monitoring in a fully custom Mininet script. Consider the following simpletest.py script based on Working with Mininet:
#!/usr/bin/python                                                                            
                                                                                             
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def build(self, n=2):
        switch = self.addSwitch('s1')
        # Python's range(N) generates 0..N-1
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

def simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    net = Mininet(topo)
    net.start()
    print "Dumping host connections"
    dumpNodeConnections(net.hosts)
    print "Testing bandwidth between h1 and h4"
    h1, h4 = net.get( 'h1', 'h4' )
    net.iperf( (h1, h4) )
    net.stop()

if __name__ == '__main__':
    # Tell mininet to print useful information
    setLogLevel('info')
    simpleTest()
The following version of the script adds the additional imports, the sflow.py helper script, and the rate-limited link definition needed to incorporate sFlow telemetry:
#!/usr/bin/python                                                                            
                                                                                             
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel
from mininet.util import customClass
from mininet.link import TCLink

# Compile and run sFlow helper script
# - configures sFlow on OVS
# - posts topology to sFlow-RT
execfile('sflow-rt/extras/sflow.py') 

# Rate limit links to 10Mbps
link = customClass({'tc':TCLink}, 'tc,bw=10')

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def build(self, n=2):
        switch = self.addSwitch('s1')
        # Python's range(N) generates 0..N-1
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

def simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    net = Mininet(topo,link=link)
    net.start()
    print "Dumping host connections"
    dumpNodeConnections(net.hosts)
    print "Testing bandwidth between h1 and h4"
    h1, h4 = net.get( 'h1', 'h4' )
    net.iperf( (h1, h4) )
    net.stop()

if __name__ == '__main__':
    # Tell mininet to print useful information
    setLogLevel('info')
    simpleTest()
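Note: the sflow.py helper script posts the topology to sFlow-RT, so sFlow-RT should already be running on the Mininet host. For example, using the Docker image described earlier:
docker run -p 8008:8008 -p 6343:6343/udp sflow/sflow-rt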
When running the script, the *** Enabling sFlow and *** Sending topology messages in the output confirm that sFlow has been enabled and the topology has been posted to sFlow-RT:
pp@mininet:~$ sudo ./simpletest.py
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4 
*** Adding switches:
s1 
*** Adding links:
(10.00Mbit) (10.00Mbit) (h1, s1) (10.00Mbit) (10.00Mbit) (h2, s1) (10.00Mbit) (10.00Mbit) (h3, s1) (10.00Mbit) (10.00Mbit) (h4, s1) 
*** Configuring hosts
h1 h2 h3 h4 
*** Starting controller
c0 
*** Starting 1 switches
s1 ...(10.00Mbit) (10.00Mbit) (10.00Mbit) (10.00Mbit) 
*** Enabling sFlow:
s1
*** Sending topology
Dumping host connections
h1 h1-eth0:s1-eth1
h2 h2-eth0:s1-eth2
h3 h3-eth0:s1-eth3
h4 h4-eth0:s1-eth4
Testing bandwidth between h1 and h4
*** Iperf: testing TCP bandwidth between h1 and h4 
*** Results: ['6.32 Mbits/sec', '6.55 Mbits/sec']
*** Stopping 1 controllers
c0 
*** Stopping 4 links
....
*** Stopping 1 switches
s1 
*** Stopping 4 hosts
h1 h2 h3 h4 
*** Done
Mininet dashboard and Mininet weathermap describe the sFlow-RT Mininet Dashboard application shown at the top of this article. The tool provides a real-time visualization of traffic flowing over the Mininet network. Writing Applications describes how to develop custom analytics applications for sFlow-RT.

Wednesday, May 8, 2019

Secure forwarding of sFlow using ssh

Typically sFlow datagrams are sent unencrypted from agents embedded in switches and routers to a local collector/analyzer. Sending sFlow datagrams over the management VLAN or out of band management network generally provides adequate isolation and security within the site. Inter-site traffic within an organization is typically carried over a virtual private network (VPN) which encrypts the data and protects it from eavesdropping.

This article describes a simple method of carrying sFlow datagrams over an encrypted ssh connection which can be useful in situations where a VPN is not available, for example, sending sFlow to an analyzer in the public cloud, or to an external consultant.

The diagram shows the elements of the solution. A collector on the site receives sFlow datagrams from the network devices and uses the sflow_fwd.py script to convert the datagrams into line delimited hexadecimal strings that are sent over an ssh connection to another instance of sflow_fwd.py running on the analyzer that converts the hexadecimal strings back to sFlow datagrams.

The following sflow_fwd.py Python script accomplishes the task:
#!/usr/bin/python

import socket
import sys
import argparse

parser = argparse.ArgumentParser(description='Serialize/deserialize sFlow')
parser.add_argument('-c', '--collector', default='')
parser.add_argument('-s', '--server')
parser.add_argument('-p', '--port', type=int, default=6343)
args = parser.parse_args()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

if args.server is not None:
  # deserialize: read hex strings from stdin and resend as sFlow datagrams
  while True:
    line = sys.stdin.readline()
    if not line:
      break
    buf = bytearray.fromhex(line[:-1])
    sock.sendto(buf, (args.server, args.port))
else:
  # serialize: receive sFlow datagrams and write hex strings to stdout
  sock.bind((args.collector, args.port))
  while True:
    buf = sock.recv(2048)
    if not buf:
      break
    print buf.encode('hex')
    sys.stdout.flush()
Create a user account on both the collector and analyzer machines; in this example the user is pp. Next, copy the script to both machines.

If you log into the collector machine, the following command will send sFlow to the analyzer machine:
./sflow_fwd.py | ssh pp@analyzer './sflow_fwd.py -s 127.0.0.1'
If you log into the analyzer machine, the following command will retrieve sFlow from the collector machine:
ssh pp@collector './sflow_fwd.py' | ./sflow_fwd.py -s 127.0.0.1
If a permanent connection is required, it is relatively straightforward to create a daemon using systemd. In this example, the service is installed on the collector machine by performing the following steps:
First, log into the collector and generate an ssh key:
ssh-keygen
Next, install the key on the analyzer system:
ssh-copy-id pp@analyzer
Now create the systemd service file, /etc/systemd/system/sflow-tunnel.service:
[Unit]
Description=sFlow tunnel
After=network.target

[Service]
Type=simple
User=pp
ExecStart=/bin/sh -c "/home/pp/sflow_fwd.py | /usr/bin/ssh pp@analyzer './sflow_fwd.py -s 127.0.0.1'"
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
Finally, use the systemctl command to enable and start the daemon:
sudo systemctl enable sflow-tunnel.service
sudo systemctl start sflow-tunnel.service
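The systemctl command can also be used to verify that the service is running:
sudo systemctl status sflow-tunnel.service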
A simple way to confirm that sFlow is arriving on the analyzer machine is to use sflowtool.
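For example, run the sflow/sflowtool Docker image described above to print the decoded datagrams as they arrive:
docker run -p 6343:6343/udp sflow/sflowtool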

There are numerous articles on this blog describing how the sFlow-RT analytics software can be used to integrate sFlow telemetry with popular metrics and SIEM (security information and event management) tools.