Tuesday, March 10, 2020

Docker testbed

The sFlow-RT real-time analytics platform receives a continuous telemetry stream from sFlow Agents embedded in network devices, hosts and applications and converts the raw measurements into actionable metrics, accessible through open APIs (see Writing Applications).

Application development is greatly simplified if you can emulate the infrastructure you want to monitor on your development machine. Mininet flow analytics, Mininet dashboard, and Mininet weathermap describe how to use the open source Mininet network emulator to simulate networks and generate a live stream of standard sFlow telemetry data.

This article describes how to use Docker containers as a development platform. Docker Desktop provides a convenient method of running Docker on Mac and Windows desktops. These instructions assume you have already installed Docker.

First, find your host address (e.g. hostname -I, ifconfig en0, etc. depending on operating system), then open a terminal window and set the shell variable MY_IP:
MY_IP=10.0.0.70
Start a Host sFlow agent using the pre-built sflow/host-sflow image:
docker run --rm -d -e "COLLECTOR=$MY_IP" -e "SAMPLING=10" \
--net=host -v /var/run/docker.sock:/var/run/docker.sock:ro \
--name=host-sflow sflow/host-sflow
Note: Host, Docker, Swarm and Kubernetes monitoring describes how to deploy Host sFlow agents to monitor large scale container environments.
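The SAMPLING=10 setting configures 1-in-10 packet sampling in the agent. Scaling sampled counts back up to an estimate of total traffic is simple multiplication; the following Python sketch is purely illustrative (the function name is an assumption, not part of the Host sFlow agent):

```python
# Illustrative scaling of packet-sampled counts, assuming 1-in-N sampling.
SAMPLING_RATE = 10  # matches SAMPLING=10 passed to the agent


def estimate_total(sampled_packets, sampled_bytes, rate=SAMPLING_RATE):
    """Scale sampled packet and byte counts up to an estimate of the total."""
    return sampled_packets * rate, sampled_bytes * rate


# 100 sampled packets totalling 150,000 bytes imply roughly
# 1,000 packets and 1,500,000 bytes on the wire.
packets, octets = estimate_total(100, 150000)
```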

Start an iperf3 server using the pre-built sflow/iperf3 image:
docker run --rm -d -p 5201:5201 --name iperf3 sflow/iperf3 -s
In a separate terminal window, run the following command to start sFlow-RT:
docker run --rm -p 8008:8008 -p 6343:6343/udp --name sflow-rt sflow/prometheus
Note: The sflow/prometheus image is based on sflow/sflow-rt, adding applications for browsing and exporting sFlow analytics. The sflow/sflow-rt page provides instructions for packaging your own applications with sFlow-RT.

It is helpful to run sFlow-RT in the foreground during development so that you can see the log messages:
2020-03-09T23:31:23Z INFO: Starting sFlow-RT 3.0-1477
2020-03-09T23:31:23Z INFO: Version check, running latest
2020-03-09T23:31:24Z INFO: Listening, sFlow port 6343
2020-03-09T23:31:24Z INFO: Listening, HTTP port 8008
2020-03-09T23:31:24Z INFO: DNS server 192.168.65.1
2020-03-09T23:31:24Z INFO: app/prometheus/scripts/export.js started
2020-03-09T23:31:24Z INFO: app/browse-flows/scripts/top.js started
The web user interface can be accessed at http://localhost:8008/.
The sFlow Agents count (at the top left) verifies that sFlow is being received from the Host sFlow agent. Access the pre-installed Browse Flows application at http://localhost:8008/app/browse-flows/html/index.html?keys=ipsource%2Cipdestination&value=bps
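The query string in the Browse Flows URL simply encodes the flow definition (keys and value). As an illustration, the same URL can be built with Python's standard library (localhost and port 8008 as above):

```python
from urllib.parse import urlencode

# Build the Browse Flows URL used above; keys and value mirror the
# flow definition (the comma in the keys list is percent-encoded).
params = {'keys': 'ipsource,ipdestination', 'value': 'bps'}
url = ('http://localhost:8008/app/browse-flows/html/index.html?'
       + urlencode(params))
# url ends with '?keys=ipsource%2Cipdestination&value=bps'
```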
Run the following command in the original terminal to initiate a test and generate traffic:
docker run --rm sflow/iperf3 -c $MY_IP
You should immediately see a spike in traffic like that shown in the Flow Browser screen capture. See RESTflow for an overview of the sFlow-RT flow analytics architecture and Defining Flows for a detailed description of the options available when defining flows.

The ability to rapidly detect and act on traffic flows addresses many important challenges, for example: Real-time DDoS mitigation using BGP RTBH and FlowSpec, Triggered remote packet capture using filtered ERSPAN, Exporting events using syslog, Black hole detection, and Troubleshooting connectivity problems in leaf and spine fabrics.

The following elephant.js script uses the embedded JavaScript API to detect and log the start of flows greater than 10Mbits/second:
setFlow('elephant',
  {keys:'ipsource,ipdestination',value:'bytes'});
setThreshold('elephant_threshold',
  {metric:'elephant', value: 10000000/8, byFlow: true, timeout: 1});
setEventHandler(function(evt)
  { logInfo(evt.flowKey); }, ['elephant_threshold']);
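The threshold value is expressed in bytes per second, which is why the 10Mbits/second target is written as 10000000/8. A quick check of the arithmetic:

```python
# The 'bytes' flow value is measured in bytes/second, so convert the
# target rate of 10 Mbits/s into bytes/s for the threshold setting.
BITS_PER_SECOND = 10_000_000
threshold_bytes_per_second = BITS_PER_SECOND // 8  # 1,250,000 bytes/s
```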
Use control+c to stop the sFlow-RT instance and run the following command to include the elephant.js script:
docker run --rm -v $PWD/elephant.js:/sflow-rt/elephant.js \
-p 8008:8008 -p 6343:6343/udp --name sflow-rt \
sflow/prometheus -Dscript.file=elephant.js
Run the iperf3 test again and you should immediately see the flows logged:
2020-03-10T05:30:15Z INFO: Starting sFlow-RT 3.0-1477
2020-03-10T05:30:16Z INFO: Version check, running latest
2020-03-10T05:30:16Z INFO: Listening, sFlow port 6343
2020-03-10T05:30:17Z INFO: Listening, HTTP port 8008
2020-03-10T05:30:17Z INFO: DNS server 192.168.65.1
2020-03-10T05:30:17Z INFO: elephant.js started
2020-03-10T05:30:17Z INFO: app/browse-flows/scripts/top.js started
2020-03-10T05:30:17Z INFO: app/prometheus/scripts/export.js started
2020-03-10T05:30:25Z INFO: 172.17.0.4,192.168.1.242
2020-03-10T05:30:26Z INFO: 172.17.0.1,172.17.0.2
Alternatively, the following elephant.py script uses the REST API to perform the same function:
#!/usr/bin/env python
import requests

# Define the flow to track
requests.put(
  'http://localhost:8008/flow/elephant/json',
  json={'keys':'ipsource,ipdestination', 'value':'bytes'}
)
# Set a per-flow threshold of 10Mbits/second (1,250,000 bytes/second)
requests.put(
  'http://localhost:8008/threshold/elephant_threshold/json',
  json={'metric':'elephant', 'value': 10000000/8, 'byFlow':True, 'timeout': 1}
)
# Long poll for threshold events
eventurl = 'http://localhost:8008/events/json'
eventurl += '?thresholdID=elephant_threshold&maxEvents=10&timeout=60'
eventID = -1
while True:
  r = requests.get(eventurl + '&eventID=' + str(eventID))
  if r.status_code != 200: break
  events = r.json()
  if len(events) == 0: continue

  # Remember the most recent eventID and print events in time order
  eventID = events[0]['eventID']
  events.reverse()
  for e in events:
    print(e['flowKey'])
Run the Python script and run another iperf3 test:
./elephant.py 
172.17.0.4,192.168.1.242
172.17.0.1,172.17.0.2
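Each printed flowKey is the comma-separated list of key values, in the order given by the flow's keys definition. A small illustrative sketch (the helper name is an assumption) of turning a flowKey back into a dictionary:

```python
# Map a flowKey string back onto the keys used in the flow definition.
FLOW_KEYS = ['ipsource', 'ipdestination']  # matches keys:'ipsource,ipdestination'


def parse_flow_key(flow_key, keys=FLOW_KEYS):
    """Split a comma-separated flowKey into a {key: value} dictionary."""
    return dict(zip(keys, flow_key.split(',')))


flow = parse_flow_key('172.17.0.4,192.168.1.242')
# flow == {'ipsource': '172.17.0.4', 'ipdestination': '192.168.1.242'}
```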
Another option is to replay sFlow telemetry captured in the form of a PCAP file. The Fabric View application contains an example that can be downloaded:
curl -O https://raw.githubusercontent.com/sflow-rt/fabric-view/master/demo/ecmp.pcap
Now run sFlow-RT:
docker run --rm -v $PWD/ecmp.pcap:/sflow-rt/sflow.pcap \
-p 8008:8008 --name sflow-rt sflow/prometheus -Dsflow.file=sflow.pcap
Run the elephant.py script:
./elephant.py                                                            
10.4.1.2,10.4.2.2
10.4.1.2,10.4.2.2
Note: Fabric View contains a detailed description of the captured data.

Data from a production network can be captured using tcpdump:
tcpdump -i eth0 -s 0 -c 10000 -w sflow.pcap udp port 6343
For example, the above command captures 10000 sFlow datagrams (UDP port 6343) from Ethernet interface eth0 and stores them in the file sflow.pcap.
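The capture is stored in the classic pcap format written by tcpdump -w. As an aside, the 24-byte pcap global header is easy to sanity check with Python's standard library; the is_pcap helper below is purely illustrative:

```python
import struct

# Classic pcap global header magic number (may be little- or big-endian).
PCAP_MAGIC = 0xa1b2c3d4


def is_pcap(header_bytes):
    """Return True if the first 24 bytes look like a pcap global header."""
    if len(header_bytes) < 24:
        return False
    magic_le, = struct.unpack('<I', header_bytes[:4])
    magic_be, = struct.unpack('>I', header_bytes[:4])
    return PCAP_MAGIC in (magic_le, magic_be)


# Synthetic little-endian header: magic, version 2.4, thiszone, sigfigs,
# snaplen 65535, linktype 1 (Ethernet) -- the shape tcpdump -w writes.
header = struct.pack('<IHHiIII', PCAP_MAGIC, 2, 4, 0, 0, 65535, 1)
```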

The sFlow-RT analytics engine converts raw sFlow telemetry into useful metrics that can be imported into a time series database.
curl http://localhost:8008/prometheus/metrics/ALL/ALL/txt
The above command retrieves metrics for all the hosts in Prometheus export format. Prometheus exporter describes how to run the Prometheus time series database and build Grafana dashboards using metrics retrieved from sFlow-RT. Flow metrics with Prometheus and Grafana extends the example to include packet flow data. InfluxDB 2.0 shows how to import and query metrics using InfluxDB.
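As a sketch of how Prometheus might scrape these metrics, a minimal prometheus.yml scrape job could look like the following (the job name and target are assumptions for a local setup):

```yaml
# Hypothetical prometheus.yml scrape job pulling sFlow-RT metrics.
scrape_configs:
  - job_name: 'sflow-rt'
    metrics_path: /prometheus/metrics/ALL/ALL/txt
    static_configs:
      - targets: ['localhost:8008']
```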

It only takes a few minutes to try out these examples. Work through Writing Applications to learn about the capabilities of the sFlow-RT analytics engine and how to package applications. Publish applications on GitHub and use the sflow/sflow-rt Docker image to deploy them in production. Join the sFlow-RT Community to ask questions and post information about new applications.

2 comments:

  1. Hi Peter,

    I'm trying to get streaming telemetry from Apache servers. For this purpose I installed hsflowd and the sFlow Apache module. Then I set up sFlow-RT (one in Docker, another on a CentOS 7 VM). After proper configuration I verified that sFlow-RT is receiving the necessary HTTP flow samples. I even defined a flow in sFlow-RT via the REST API, but I can't fetch it as a metric in Prometheus. I created the flow like this:
    curl -H "Content-Type:application/json" -X PUT --data "{keys:'httphost',value:'requests',t:10,n:5}" http://10.2.100.5:8989/flow/max_reqs/json

    Then I verify that this metric appears in JSON format:
    curl http://10.2.100.5:8989/metric/ALL/max_reqs/json
    [{
      "agent": "10.2.30.4",
      "metricName": "max_reqs",
      "topKeys": [
        {
          "lastUpdate": 4457,
          "value": 0.3348197580801788,
          "key": "uat.domain.com"
        },
        {
          "lastUpdate": 3485,
          "value": 0.325180637519444,
          "key": "fnc.uat.domain.com"
        },
        {
          "lastUpdate": 5380,
          "value": 0.32078859765916423,
          "key": "uat.domain.dk"
        },
        {
          "lastUpdate": 39031,
          "value": 0.007923501459216283,
          "key": "uat.domain.com"
        },
        {
          "lastUpdate": 66832,
          "value": 0.0012150408865935645,
          "key": "uat.domain.com"
        }
      ],
      "metricN": 1,
      "lastUpdate": 3485,
      "lastUpdateMax": 3485,
      "metricValue": 0.3348197580801788,
      "dataSource": "3.80",
      "lastUpdateMin": 3485
    }]

    But the problem is that I can't make Prometheus fetch this metric from sFlow-RT. I've tried to install sflow-rt/prometheus exporter as an application and reference
    metric via "/app/prometheus/scripts/export.js/flows/ALL/txt" in prometheus.yml - it returns HTTP 400 error. If I reference this metric in new style via "/prometheus/metrics/ALL/ALL/txt" it doesn't find the metric itself. In a last example, should I use PHP script that is posted here https://blog.sflow.com/2018/03/prometheus-and-grafana.html to convert JSON metrics to Prometheus txt format? If so, where should I put this script? Thank you.

    Replies
    1. Flow metrics with Prometheus and Grafana describes how to use the prometheus app. Instead of defining the flow using sFlow-RT's REST API, you define the flow parameters in the Prometheus scrape task and the prometheus app instantiates the flow and returns results in Prometheus export format.
