The sFlow-RT real-time analytics platform receives a continuous telemetry stream from sFlow Agents embedded in network devices, hosts, and applications and converts the raw measurements into actionable metrics accessible through open APIs; see Writing Applications.
Application development is greatly simplified if you can emulate the infrastructure you want to monitor on your development machine.
Mininet flow analytics, Mininet dashboard, and Mininet weathermap describe how to use the open source Mininet network emulator to simulate networks and generate a live stream of standard sFlow telemetry data.
This article describes how to use Docker containers as a development platform.
Docker Desktop provides a convenient method of running Docker on Mac and Windows desktops. These instructions assume you have already installed Docker.
Start a Host sFlow agent using the pre-built sflow/host-sflow image:
docker run --rm -d -e "COLLECTOR=host.docker.internal" -e "SAMPLING=10" \
--net=host -v /var/run/docker.sock:/var/run/docker.sock:ro \
--name=host-sflow sflow/host-sflow
Note: Host, Docker, Swarm and Kubernetes monitoring describes how to deploy Host sFlow agents to monitor large scale container environments.
Start an iperf3 server using the pre-built sflow/iperf3 image:
docker run --rm -d -p 5201:5201 --name iperf3 sflow/iperf3 -s
In a separate terminal window, run the following command to start sFlow-RT:
docker run --rm -p 8008:8008 -p 6343:6343/udp --name sflow-rt sflow/prometheus
Note: The sflow/prometheus image is based on sflow/sflow-rt, adding applications for browsing and exporting sFlow analytics. The sflow/sflow-rt page provides instructions for packaging your own applications with sFlow-RT.
It is helpful to run sFlow-RT in the foreground during development so that you can see the log messages:
2020-03-09T23:31:23Z INFO: Starting sFlow-RT 3.0-1477
2020-03-09T23:31:23Z INFO: Version check, running latest
2020-03-09T23:31:24Z INFO: Listening, sFlow port 6343
2020-03-09T23:31:24Z INFO: Listening, HTTP port 8008
2020-03-09T23:31:24Z INFO: DNS server 192.168.65.1
2020-03-09T23:31:24Z INFO: app/prometheus/scripts/export.js started
2020-03-09T23:31:24Z INFO: app/browse-flows/scripts/top.js started
The web user interface can be accessed at http://localhost:8008/.
The sFlow Agents count (at the top left) verifies that sFlow is being received from the Host sFlow agent. Access the pre-installed Browse Flows application at:
http://localhost:8008/app/browse-flows/html/index.html?keys=ipsource%2Cipdestination&value=bps
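The data shown in the Browse Flows interface can also be retrieved programmatically over the REST API. The following sketch assumes the sFlow-RT instance started above and uses the /flow and /activeflows endpoints; the flow name topflows and the returned field names are illustrative assumptions (see Writing Applications for the definitive API):

```python
import requests

SFLOW_RT = 'http://localhost:8008'  # the Docker instance started above

def flow_url(name):
  # REST endpoint used to define (PUT) a named flow
  return '%s/flow/%s/json' % (SFLOW_RT, name)

def activeflows_url(name, max_flows=5):
  # REST endpoint returning the current top flows for a definition
  return '%s/activeflows/ALL/%s/json?maxFlows=%d' % (SFLOW_RT, name, max_flows)

if __name__ == '__main__':
  # define the same flow the Browse Flows URL requests, then poll it
  requests.put(flow_url('topflows'),
               json={'keys':'ipsource,ipdestination', 'value':'bps'})
  for flow in requests.get(activeflows_url('topflows')).json():
    print(flow['key'], flow['value'])  # field names assumed for illustration
```

Increasing the maxFlows parameter widens the result set when more concurrent flows are active.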
Run the following command in the original terminal to initiate a test and generate traffic:
docker run --rm sflow/iperf3 -c host.docker.internal
You should immediately see a spike in traffic like that shown in the Flow Browser screen capture. See RESTflow for an overview of the sFlow-RT flow analytics architecture and Defining Flows for a detailed description of the options available when defining flows.
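For example, a flow definition can include a filter to restrict which traffic is tracked. The fragment below is a sketch assuming the sFlow-RT instance above; the tcpdestinationport filter attribute is one of the options covered in Defining Flows, and the flow name iperf is arbitrary:

```python
import requests

# hypothetical flow definition: bytes/s of TCP traffic to the iperf3
# port (5201), keyed by source and destination address
iperf_flow = {
  'keys': 'ipsource,ipdestination',
  'value': 'bytes',
  'filter': 'tcpdestinationport=5201'
}

if __name__ == '__main__':
  # register the filtered flow with the sFlow-RT instance started above
  requests.put('http://localhost:8008/flow/iperf/json', json=iperf_flow)
```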
The ability to rapidly detect and act on traffic flows addresses many important challenges, for example:
Real-time DDoS mitigation using BGP RTBH and FlowSpec,
Triggered remote packet capture using filtered ERSPAN,
Exporting events using syslog,
Black hole detection, and
Troubleshooting connectivity problems in leaf and spine fabrics.
The following elephant.js script uses the embedded JavaScript API to detect and log the start of flows greater than 10Mbits/second:
setFlow('elephant',
  {keys:'ipsource,ipdestination', value:'bytes'});
setThreshold('elephant_threshold',
  {metric:'elephant', value:10000000/8, byFlow:true, timeout:1});
setEventHandler(function(evt) {
  logInfo(evt.flowKey);
}, ['elephant_threshold']);
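The threshold value of 10000000/8 converts the 10Mbits/second target into the bytes per second units reported by the flow's bytes value. A quick check of the arithmetic:

```python
# the flow value 'bytes' is reported in bytes per second, so a
# 10 Mbit/s elephant flow target must be expressed in bytes/s
target_bits_per_sec = 10000000
threshold_bytes_per_sec = target_bits_per_sec / 8
print(threshold_bytes_per_sec)  # 1250000.0
```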
Use control+c to stop the sFlow-RT instance and run the following command to include the elephant.js script:
docker run --rm -v $PWD/elephant.js:/sflow-rt/elephant.js \
-p 8008:8008 -p 6343:6343/udp --name sflow-rt \
sflow/prometheus -Dscript.file=elephant.js
Run the iperf3 test again and you should immediately see the flows logged:
2020-03-10T05:30:15Z INFO: Starting sFlow-RT 3.0-1477
2020-03-10T05:30:16Z INFO: Version check, running latest
2020-03-10T05:30:16Z INFO: Listening, sFlow port 6343
2020-03-10T05:30:17Z INFO: Listening, HTTP port 8008
2020-03-10T05:30:17Z INFO: DNS server 192.168.65.1
2020-03-10T05:30:17Z INFO: elephant.js started
2020-03-10T05:30:17Z INFO: app/browse-flows/scripts/top.js started
2020-03-10T05:30:17Z INFO: app/prometheus/scripts/export.js started
2020-03-10T05:30:25Z INFO: 172.17.0.4,192.168.1.242
2020-03-10T05:30:26Z INFO: 172.17.0.1,172.17.0.2
Alternatively, the following elephant.py script uses the REST API to perform the same function:
#!/usr/bin/env python
import requests

requests.put(
  'http://localhost:8008/flow/elephant/json',
  json={'keys':'ipsource,ipdestination', 'value':'bytes'}
)
requests.put(
  'http://localhost:8008/threshold/elephant_threshold/json',
  json={'metric':'elephant', 'value': 10000000/8, 'byFlow':True, 'timeout': 1}
)

eventurl = 'http://localhost:8008/events/json'
eventurl += '?thresholdID=elephant_threshold&maxEvents=10&timeout=60'
eventID = -1
while True:
  r = requests.get(eventurl + '&eventID=' + str(eventID))
  if r.status_code != 200: break
  events = r.json()
  if len(events) == 0: continue
  eventID = events[0]['eventID']
  events.reverse()
  for e in events:
    print(e['flowKey'])
Run the Python script and run another iperf3 test:
./elephant.py
172.17.0.4,192.168.1.242
172.17.0.1,172.17.0.2
Another option is to replay sFlow telemetry captured in the form of a PCAP file. The Fabric View application contains an example that can be extracted:
curl -O https://raw.githubusercontent.com/sflow-rt/fabric-view/master/demo/ecmp.pcap
Now run sFlow-RT:
docker run --rm -v $PWD/ecmp.pcap:/sflow-rt/sflow.pcap \
-p 8008:8008 --name sflow-rt sflow/prometheus -Dsflow.file=sflow.pcap
Run the elephant.py script:
./elephant.py
10.4.1.2,10.4.2.2
10.4.1.2,10.4.2.2
Note: Fabric View contains a detailed description of the captured data.
Data from a production network can be captured using tcpdump:
tcpdump -i eth0 -s 0 -c 10000 -w sflow.pcap udp port 6343
For example, the above command captures 10000 sFlow datagrams (UDP port 6343) from Ethernet interface eth0 and stores them in the file sflow.pcap.
The sFlow-RT analytics engine converts raw sFlow telemetry into useful metrics that can be imported into a time series database.
curl http://localhost:8008/prometheus/metrics/ALL/ALL/txt
The above command retrieves metrics for all of the hosts in Prometheus export format.
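A short script can verify that the export is working end to end. This sketch assumes the sFlow-RT instance above; the parser is deliberately minimal, skipping comment lines and assuming label values contain no spaces:

```python
import requests

def parse_metrics(text):
  # minimal parser for the Prometheus text exposition format:
  # skip blank and comment lines, split the rest into (name+labels, value);
  # assumes label values contain no spaces
  metrics = {}
  for line in text.splitlines():
    line = line.strip()
    if not line or line.startswith('#'):
      continue
    name, _, value = line.rpartition(' ')
    metrics[name] = float(value)
  return metrics

if __name__ == '__main__':
  # fetch the metrics from the sFlow-RT instance started above
  r = requests.get('http://localhost:8008/prometheus/metrics/ALL/ALL/txt')
  for name, value in parse_metrics(r.text).items():
    print(name, value)
```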
Prometheus exporter describes how to run the Prometheus time series database and build Grafana dashboards using metrics retrieved from sFlow-RT.
Flow metrics with Prometheus and Grafana extends the example to include packet flow data.
InfluxDB 2.0 shows how to import and query metrics using InfluxDB.
It only takes a few minutes to try out these examples. Work through Writing Applications to learn about the capabilities of the sFlow-RT analytics engine and how to package applications. Publish applications on GitHub and use the sflow/sflow-rt Docker image to deploy them in production. Join the sFlow-RT Community to ask questions and share information about new applications.