Monday, December 6, 2021

Real-time Kubernetes cluster monitoring example

The Sunburst GPU chart updates every second to show a real-time view of the share of GPU resources being consumed by namespaces operating on the Nautilus hyperconverged Kubernetes cluster. The Nautilus cluster tightly couples distributed storage, GPU, and CPU resources that are shared among the participating research organizations.

The Sunburst Process chart provides an up to the second view of the cluster-wide share of CPU resources used by each namespace.

The Sunburst DNS chart shows a real-time view of network activity generated by each namespace. The chart is produced by looking up DNS names for network addresses observed in packet flows using the Kubernetes DNS service. The domain names contain information about the namespace, service, and node generating the packets. Most traffic is exchanged between nodes within the cluster (identified as local). The external (not local) traffic is also shown by DNS name.
The Sunburst Protocols chart shows the different network protocols being used to communicate between nodes in the cluster. The chart shows the IP over IP tunnel traffic used for network virtualization.
Clicking on a segment in the Sunburst Protocols chart allows the selected traffic to be examined in detail using the Flow Browser. In this example, DNS names are again used to translate raw packet flow data into inter-namespace flows. See Defining Flows for information on the flow analytics capabilities that can be explored using the browse-flows application.
The Discard Browser provides a detailed view of any network packets dropped in the cluster. In this chart inter-namespace dropped packets are displayed, identifying the haproxy service as the largest source of dropped packets. 
The final chart shows an up to the second view of the average power consumed by a GPU in the cluster (approximately 250 Watts per GPU).
The diagram shows the elements of the monitoring solution. Host sFlow agents deployed on each node in the Kubernetes cluster stream standard sFlow telemetry to an instance of the sFlow-RT real-time analytics software, which provides cluster-wide metrics through a REST API, where they can be viewed directly or imported into time series databases like Prometheus and trended in dashboards using tools like Grafana.
Note: sFlow is widely supported by network switches and routers. Enable sFlow monitoring in the physical network infrastructure for end-to-end visibility.

Create the following sflow-rt.yml file to deploy the pre-built sflow/prometheus Docker image, bundling sFlow-RT with the applications used in this article:

apiVersion: v1
kind: Service
metadata:
  name: sflow-rt-sflow
spec:
  type: NodePort
  selector:
    name: sflow-rt
  ports:
    - protocol: UDP
      port: 6343
---
apiVersion: v1
kind: Service
metadata:
  name: sflow-rt-rest
spec:
  type: LoadBalancer
  selector:
    name: sflow-rt
  ports:
    - protocol: TCP
      port: 8008
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sflow-rt
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sflow-rt
  template:
    metadata:
      labels:
        name: sflow-rt
    spec:
      containers:
      - name: sflow-rt
        image: sflow/prometheus:latest
        ports:
          - name: http
            protocol: TCP
            containerPort: 8008
          - name: sflow
            protocol: UDP
            containerPort: 6343

Run the following command to deploy the service:

kubectl apply -f sflow-rt.yml
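Optionally, verify that both services were created; the service names in the command below come from the sflow-rt.yml file above:

kubectl get services sflow-rt-sflow sflow-rt-rest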

Now create the following host-sflow.yml file to deploy the pre-built sflow/host-sflow Docker image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-sflow
spec:
  selector:
    matchLabels:
      name: host-sflow
  template:
    metadata:
      labels:
        name: host-sflow
    spec:
      restartPolicy: Always
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: host-sflow
        image: sflow/host-sflow:latest
        env:
          - name: COLLECTOR
            value: "sflow-rt-sflow"
          - name: SAMPLING
            value: "10"
          - name: NET
            value: "host"
          - name: DROPMON
            value: "enable"
        volumeMounts:
          - mountPath: /var/run/docker.sock
            name: docker-sock
            readOnly: true
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock

Run the following command to deploy the agents:

kubectl apply -f host-sflow.yml

Telemetry should immediately start streaming as a Host sFlow agent is started on each node in the cluster.
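To confirm that the DaemonSet has scheduled an agent on every node, a command along the following lines lists the pods (the name=host-sflow label comes from the host-sflow.yml file above):

kubectl get pods -l name=host-sflow -o wide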

Note: Exporting GPU performance metrics from the NVIDIA GPUs in the Nautilus cluster requires a special version of the Host sFlow agent built using the NVIDIA supplied Docker image that includes GPU drivers, see https://gitlab.nrp-nautilus.io/prp/sflow/

Access the sFlow-RT web user interface to confirm that telemetry is being received.

The sFlow-RT Status page confirms that telemetry is being received from all 180 nodes in the cluster.
Note: If you don't currently have access to a production Kubernetes cluster, you can experiment with this solution using Docker Desktop, see Kubernetes testbed.
The charts shown in this article are accessed via the sFlow-RT Apps tab.

The sFlow-RT applications are designed to explore the available metrics, but don't provide persistent storage. The Prometheus export functionality allows metrics to be recorded in a time series database to drive operational dashboards, see Flow metrics with Prometheus and Grafana.
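For example, the metrics available to a Prometheus scrape can be previewed by querying the export endpoint directly. The sketch below assumes the sflow-rt-rest service has been exposed at SFLOW_RT_IP (a placeholder; substitute the address assigned by your load balancer):

curl http://SFLOW_RT_IP:8008/prometheus/metrics/ALL/ALL/txt  # SFLOW_RT_IP is a placeholder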

Monday, November 1, 2021

Sunburst

The recently released open source Sunburst application provides a real-time visualization of the protocols running on a network. The Sunburst application runs on the sFlow-RT real-time analytics platform, which receives standard streaming sFlow telemetry from switches and routers throughout the network to provide comprehensive visibility.
docker run -p 8008:8008 -p 6343:6343/udp sflow/prometheus
The pre-built sflow/prometheus Docker image packages sFlow-RT with the applications for exploring real-time sFlow analytics. Run the command above, configure network devices to send sFlow to the application on UDP port 6343 (the default sFlow port) and connect with a web browser to port 8008 to access the user interface.
 
The chart at the top of this article demonstrates the visibility that sFlow can provide into nested protocol stacks that result from network virtualization. For example, the most deeply nested set of protocols shown in the chart is:
  1. eth: Ethernet
  2. q: IEEE 802.1Q VLAN
  3. trill: Transparent Interconnection of Lots of Links (TRILL)
  4. eth: Ethernet
  5. q: IEEE 802.1Q VLAN
  6. ip: Internet Protocol (IP) version 4
  7. udp: User Datagram Protocol (UDP)
  8. vxlan: Virtual eXtensible Local Area Network (VXLAN)
  9. eth: Ethernet
  10. ip: Internet Protocol (IP) version 4
  11. esp: IPsec Encapsulating Security Payload (ESP)
Click on a segment in the sunburst chart to further explore the selected protocol using the Flow Browser application.
The Flow Browser allows the full set of flow attributes to be explored, see Defining Flows for details. In this example, the filter was added by clicking on a segment in the Sunburst application and additional keys were entered to show inner and outer IP addresses in the tunnel.
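Flows can also be defined programmatically instead of interactively. The following sketch uses the sFlow-RT REST API to create a flow named tunnel tracking bytes between inner and outer addresses; the key names are illustrative (see Defining Flows for the exact syntax used to reference inner and outer header fields):

curl -X PUT -H "Content-Type: application/json" \
--data '{"keys":"ipsource,ipdestination,ipsource.1,ipdestination.1","value":"bytes"}' \
http://localhost:8008/flow/tunnel/json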

Thursday, October 21, 2021

InfluxDB Cloud


InfluxDB Cloud is a cloud hosted version of InfluxDB. The free tier makes it easy to try out the service and has enough capability to satisfy simple use cases. In this article we will explore how metrics based on sFlow streaming telemetry can be pushed into InfluxDB Cloud.

The diagram shows the elements of the solution. Agents in host and network devices are configured to stream sFlow telemetry to an sFlow-RT real-time analytics engine instance. The Telegraf Agent queries sFlow-RT's REST API for metrics and pushes them to InfluxDB Cloud.

docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus

Use Docker to run the pre-built sflow/prometheus image which packages sFlow-RT with the sflow-rt/prometheus application. Configure sFlow agents to stream data to this instance.

Create an InfluxDB Cloud account. Click the Data tab. Click on the Telegraf option and the InfluxDB Output Plugin button to get the URL to post data. Click the API Tokens option and generate a token.
[agent]
  interval = "15s"
  round_interval = true
  metric_batch_size = 5000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "1s"
  hostname = ""
  omit_hostname = true

[[outputs.influxdb_v2]]
  urls = ["INFLUXDB_CLOUD_URL"]
  token = "INFLUXDB_CLOUD_TOKEN"
  organization = "INFLUXDB_CLOUD_USER"
  bucket = "sflow"

[[inputs.prometheus]]
  urls = ["http://host.docker.internal:8008/prometheus/metrics/ALL/ifinutilization,ifoututilization/txt"]
  metric_version = 2

Create a telegraf.conf file. Substitute INFLUXDB_CLOUD_URL, INFLUXDB_CLOUD_TOKEN, and INFLUXDB_CLOUD_USER with values retrieved from the InfluxDB Cloud account.

docker run -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
-d --name telegraf telegraf

Use Docker to run the telegraf agent.
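While waiting for data to arrive, it can be useful to confirm that sFlow-RT is serving the metrics Telegraf has been configured to scrape. The following query uses the same path as the inputs.prometheus urls setting, accessed from the Docker host:

curl "http://localhost:8008/prometheus/metrics/ALL/ifinutilization,ifoututilization/txt"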

Data should start appearing in InfluxDB Cloud. Use the Explore tab to see what data is available and to create charts. In this case we are plotting ingress / egress utilization for each switch port in the network.

Telegraf sFlow input plugin describes why you would normally bypass Telegraf and have InfluxDB directly retrieve metrics from sFlow-RT. However, in the case of InfluxDB Cloud, Telegraf acts as a secure gateway, retrieving metrics locally using the inputs.prometheus module, and forwarding to the InfluxDB Cloud using the outputs.influxdb_v2 module. InfluxDB 2.0 released describes the settings used in the inputs.prometheus module.

Modify the urls setting in the inputs.prometheus section of the telegraf.conf file to add additional metrics and/or define flows.

There are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of the metrics collection service. For example, in large scale cloud environments the metrics for each member of a dynamic pool aren't necessarily worth trending since virtual machines / containers are frequently added and removed. Instead, sFlow-RT can be instructed to track all the members of the pool, calculate summary statistics for the pool, and log the summary statistics. This pre-processing can significantly reduce storage requirements, lowering costs and increasing query performance.

Host, Docker, Swarm and Kubernetes monitoring describes how to deploy sFlow agents to monitor compute infrastructure.

The sFlow-RT Prometheus Exporter application exposes a REST API that allows metrics to be summarized, filtered, and synthesized. Exposing these capabilities through a REST API allows the Telegraf inputs.prometheus module to control the behavior of the sFlow-RT analytics pipeline and retrieve a small set of high value metrics tailored to your requirements.
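For example, the core sFlow-RT REST API accepts aggregation prefixes on metric names, so a single query can summarize a metric across every monitored interface rather than exporting per-interface values. The following is a sketch; consult the sFlow-RT REST API documentation for the full set of aggregation operators:

curl http://localhost:8008/metric/ALL/max:ifinutilization/json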

Wednesday, October 20, 2021

Telegraf sFlow input plugin

The Telegraf agent is bundled with an sFlow Input Plugin for importing sFlow telemetry into the InfluxDB time series database. However, the plugin has major caveats that severely limit the value that can be derived from sFlow telemetry.

Currently only Flow Samples of Ethernet / IPv4 & IPv4 TCP & UDP headers are turned into metrics. Counters and other header samples are ignored.

Series Cardinality Warning

This plugin may produce a high number of series which, when not controlled for, will cause high load on your database.

InfluxDB 2.0 released describes how to use sFlow-RT to convert sFlow telemetry into useful InfluxDB metrics.

Using sFlow-RT overcomes the limitations of the Telegraf sFlow Input Plugin, making it possible to fully realize the value of sFlow monitoring:

  • Counters are a major component of sFlow, efficiently streaming detailed network counters that would otherwise need to be polled via SNMP. Counter telemetry is ingested by sFlow-RT and used to compute an extensive set of Metrics that can be imported into InfluxDB.
  • Flow Samples are fully decoded by sFlow-RT, yielding visibility that extends beyond the basic Ethernet / IPv4 / TCP / UDP header metrics supported by the Telegraf plugin to include ARP, ICMP, IPv6, DNS, VxLAN tunnels, etc. The high cardinality of raw flow data is mitigated by sFlow-RT's programmable real-time flow analytics pipeline, exposing high value, low cardinality, flow metrics tailored to business requirements.
In addition, there are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of InfluxDB. For example, in large scale cloud environments the metrics for each member of a dynamic pool aren't necessarily worth trending since virtual machines / containers are frequently added and removed. Instead, sFlow-RT can be instructed to track all the members of the pool, calculate summary statistics for the pool, and log the summary statistics. This pre-processing can significantly reduce storage requirements, lowering costs and increasing query performance.

Tuesday, October 12, 2021

Grafana Cloud


Grafana Cloud is a cloud hosted version of Grafana, Prometheus, and Loki. The free tier makes it easy to try out the service and has enough capability to satisfy simple use cases. In this article we will explore how metrics based on sFlow streaming telemetry can be pushed into Grafana Cloud.

The diagram shows the elements of the solution. Agents in host and network devices are configured to stream sFlow telemetry to an sFlow-RT real-time analytics engine instance. The Grafana Agent queries sFlow-RT's REST API for metrics and pushes them to Grafana Cloud.
docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus
Use Docker to run the pre-built sflow/prometheus image which packages sFlow-RT with the sflow-rt/prometheus application. Configure sFlow agents to stream data to this instance.
Create a Grafana Cloud account. Click on the Agent button on the home page to get the configuration settings for the Grafana Agent.
Click on the Prometheus button to get the configuration to forward metrics from the Grafana Agent.
Enter a name and click on the Create API key button to generate configuration settings that include a URL, username, and password that will be used in the Grafana Agent configuration.
server:
  log_level: info
  http_listen_port: 12345
prometheus:
  wal_directory: /tmp/wal
  global:
    scrape_interval: 15s
  configs:
    - name: agent
      host_filter: false
      scrape_configs:
        - job_name: 'sflow-rt-analyzer'
          metrics_path: /prometheus/analyzer/txt
          static_configs:
            - targets: ['host.docker.internal:8008']
        - job_name: 'sflow-rt-metrics'
          metrics_path: /prometheus/metrics/ALL/ALL/txt
          static_configs:
            - targets: ['host.docker.internal:8008']
          metric_relabel_configs:
            - source_labels: ['agent', 'datasource']
              separator: ':'
              target_label: instance
        - job_name: 'sflow-rt-countries'
          metrics_path: /app/prometheus/scripts/export.js/flows/ALL/txt
          static_configs:
            - targets: ['host.docker.internal:8008']
          params:
            metric: ['sflow_country_bps']
            key: ['null:[country:ipsource:both]:unknown','null:[country:ipdestination:both]:unknown']
            label: ['src','dst']
            value: ['bytes']
            scale: ['8']
            aggMode: ['sum']
            minValue: ['1000']
            maxFlows: ['100']
        - job_name: 'sflow-rt-asns'
          metrics_path: /app/prometheus/scripts/export.js/flows/ALL/txt
          static_configs:
            - targets: ['host.docker.internal:8008']
          params:
            metric: ['sflow_asn_bps']
            key: ['null:[asn:ipsource:both]:unknown','null:[asn:ipdestination:both]:unknown']
            label: ['src','dst']
            value: ['bytes']
            scale: ['8']
            aggMode: ['sum']
            minValue: ['1000']
            maxFlows: ['100']
      remote_write:
        - url: API_URL
          basic_auth:
            username: API_USERID
            password: API_KEY
Create an agent.yaml configuration file. Substitute the API_URL, API_USERID, and API_KEY with values from the API Key settings obtained previously.
docker run -v $PWD/data:/etc/agent/data -v $PWD/agent.yaml:/etc/agent/agent.yaml \
--name grafana-agent -d grafana/agent
Use Docker to run the Grafana Agent.
Data should start appearing in Grafana Cloud. Install the sFlow-RT Health, sFlow-RT Countries and Networks, and sFlow-RT Network Interfaces dashboards to view the data. For example, the Countries and Networks dashboard above shows traffic entering and leaving your network broken out by network and country. Flow metrics with Prometheus and Grafana describes how to build Prometheus scrape_configs that will cause sFlow-RT to export custom traffic flow metrics. 
There are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of the metrics collection service. For example, in large scale cloud environments the metrics for each member of a dynamic pool aren't necessarily worth trending since virtual machines / containers are frequently added and removed. Instead, sFlow-RT can be instructed to track all the members of the pool, calculate summary statistics for the pool, and log the summary statistics. This pre-processing can significantly reduce storage requirements, lowering costs and increasing query performance.
Host, Docker, Swarm and Kubernetes monitoring describes how to deploy sFlow agents to monitor compute infrastructure.
The sFlow-RT Prometheus Exporter application exposes a REST API that allows metrics to be summarized, filtered, and synthesized. Exposing these capabilities through a REST API allows Prometheus scrape_configs to control the behavior of the sFlow-RT analytics pipeline and retrieve a small set of high value metrics tailored to your requirements.

Thursday, October 7, 2021

DDoS protection quickstart guide

DDoS Protect is an open source denial of service mitigation tool that uses industry standard sFlow telemetry from routers to detect attacks and automatically deploy BGP remotely triggered blackhole (RTBH) and BGP Flowspec filters to block attacks within seconds.

This document pulls together links to a number of articles that describe how you can quickly try out DDoS Protect and get it running in your environment.

DDoS Protect is a lightweight solution that uses standard telemetry and control (sFlow and BGP) capabilities of routers to automatically block disruptive volumetric denial of service attacks. You can quickly evaluate the technology on your laptop or in a test lab. The solution leverages standard features of modern routing hardware to scale easily to large high traffic networks.

Monday, September 20, 2021

Containernet

Containernet is a fork of the Mininet network emulator that uses Docker containers as hosts in emulated network topologies.

Multipass describes how to build a Mininet testbed that provides real-time traffic visibility using sFlow-RT. This article adapts the testbed for Containernet.

multipass launch --name=containernet bionic
multipass exec containernet -- sudo apt update
multipass exec containernet -- sudo apt -y install ansible git aptitude default-jre
multipass exec containernet -- git clone https://github.com/containernet/containernet.git
multipass exec containernet -- sudo ansible-playbook -i "localhost," -c local containernet/ansible/install.yml
multipass exec containernet -- sudo /bin/sh -c "cd containernet; make develop"
multipass exec containernet -- wget https://inmon.com/products/sFlow-RT/sflow-rt.tar.gz
multipass exec containernet -- tar -xzf sflow-rt.tar.gz
multipass exec containernet -- ./sflow-rt/get-app.sh sflow-rt mininet-dashboard

Run the above commands in a terminal to create the Containernet virtual machine. 

multipass list

List the virtual machines

Name                    State             IPv4             Image
primary                 Stopped           --               Ubuntu 20.04 LTS
containernet            Running           192.168.64.12    Ubuntu 18.04 LTS
                                          172.17.0.1

Find the IP address of the containernet virtual machine we just created (192.168.64.12).

multipass exec containernet -- ./sflow-rt/start.sh

Start sFlow-RT. Use a web browser to connect to the VM and access the Mininet Dashboard application running on sFlow-RT, in this case http://192.168.64.12:8008/app/mininet-dashboard/html/

Open a second shell.

multipass shell containernet

Connect to the Containernet virtual machine.

cp containernet/examples/containernet_example.py .

Copy the Containernet "Get started" example script.

#!/usr/bin/python
"""
This is the most simple example to showcase Containernet.
"""
from mininet.net import Containernet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel
setLogLevel('info')

exec(open("./sflow-rt/extras/sflow.py").read())

net = Containernet(controller=Controller)
info('*** Adding controller\n')
net.addController('c0')
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty")
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty")
info('*** Adding switches\n')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
info('*** Creating links\n')
net.addLink(d1, s1)
net.addLink(s1, s2, cls=TCLink, delay='100ms', bw=1)
net.addLink(s2, d2)
info('*** Starting network\n')
net.start()
info('*** Testing connectivity\n')
net.ping([d1, d2])
info('*** Running CLI\n')
CLI(net)
info('*** Stopping network')
net.stop()

Edit the copy, adding the exec(open("./sflow-rt/extras/sflow.py").read()) line shown in the listing above to enable sFlow monitoring.

sudo python3 containernet_example.py

Run the Containernet example script.

Finally, the network topology will appear under the Mininet Dashboard topology tab.

Tuesday, August 31, 2021

Netdev 0x15


The recent Netdev 0x15 conference included a number of papers diving into the technology behind Linux as a network operating system. Slides and videos are now available on the conference web site.
Network wide visibility with Linux networking and sFlow describes the Linux switchdev driver used to integrate network hardware with Linux. The talk focuses on network telemetry, showing how standard Linux APIs are used to configure hardware instrumentation and stream telemetry using the industry standard sFlow protocol for data center wide visibility.
Switchdev in the wild describes Yandex's experience of deploying Linux switchdev based switches in production at scale. The diagram from the talk shows the three layer leaf and spine network architecture used in their data centers. Yandex operates multiple data centers, each containing up to 100,000 servers.
Switchdev Offload Workshop provides updates about the latest developments in the switchdev community. 
FRR Workshop discusses the latest development in the FRRouting project, the open source routing software that is now a de facto standard on Linux network operating systems.

Wednesday, August 18, 2021

Nokia Service Router Linux


Nokia Service Router Linux (SR-Linux) is an open source network operating system running on Nokia's merchant silicon based data center switches.

The following commands configure SR-Linux to sample packets at 1-in-10000, poll counters every 20 seconds and stream standard sFlow telemetry to an analyzer (192.168.10.20) using the default sFlow port 6343:

system {
    sflow {
        admin-state enable
        sample-rate 10000
        collector 1 {
            collector-address 192.168.10.20
            network-instance default
            source-address 192.168.1.1
            port 6343
        }
    }
}

For each interface:

interface ethernet-1/1 {
    admin-state enable
    sflow {
        admin-state enable
    }
}

Enable sFlow on all switches and ports in the data center fabric for comprehensive visibility.

An instance of the sFlow-RT real-time analytics software converts the raw sFlow telemetry into actionable measurements to drive operational dashboards and automation (e.g. DDoS mitigation, traffic engineering, etc.).
docker run --name sflow-rt -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus
A simple way to get started is to run the Docker sflow/prometheus image on the sFlow analyzer host (192.168.10.20 in the example config) to run sFlow-RT with useful applications to explore the telemetry. Access the web interface at http://192.168.10.20:8008.
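To confirm that the switch is streaming telemetry, the sFlow-RT REST API can be queried for the list of sFlow agents it is receiving data from, for example:

curl http://192.168.10.20:8008/agents/json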

Tuesday, June 15, 2021

DDoS mitigation using a Linux switch

Linux as a network operating system describes the benefits of using standard Linux as a network operating system for hardware switches. A key benefit is that the behavior of the physical network can be efficiently emulated using standard Linux virtual machines and/or containers.

In this article, CONTAINERlab will be used to create a simple testbed that can be used to develop a real-time DDoS mitigation controller. This solution is highly scalable. Each hardware switch can monitor and filter terabits per second of traffic and a single controller instance can monitor and control hundreds of switches.

Create test network

The following ddos.yml file specifies the testbed topology (shown in the screen shot at the top of this article):

name: ddos
topology:
  nodes:
    router:
      kind: linux
      image: sflow/frr
    attacker:
      kind: linux
      image: sflow/hping3
    victim:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["router:swp1","attacker:eth1"]
    - endpoints: ["router:swp2","victim:eth1"]

Run the following command to run the emulation:

sudo containerlab deploy ddos.yml

Configure interfaces on router:

interface swp1
 ip address 192.168.1.1/24
!
interface swp2
 ip address 192.168.2.1/24
!

Configure attacker interface:

ip addr add 192.168.1.2/24 dev eth1
ip route add 192.168.2.0/24 via 192.168.1.1

Configure victim interface:

ip addr add 192.168.2.2/24 dev eth1
ip route add 192.168.1.0/24 via 192.168.2.1

Verify connectivity between the attacker and the victim:

sudo docker exec -it clab-ddos-attacker ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=64 time=0.069 ms

Install visibility and control applications on router

The advantage of using Linux as a network operating system is that you can develop and install software to tailor the network to address specific requirements. In this case, for DDoS mitigation, we need real-time visibility to detect DDoS attacks and real-time control to filter out the attack traffic.

Open a shell on router:

sudo docker exec -it clab-ddos-router sh

Install and configure Host sFlow agent:

apk --update add build-base linux-headers openssl-dev dbus-dev gcc git
git clone https://github.com/sflow/host-sflow.git
cd host-sflow
make FEATURES="DENT"
make install

Edit /etc/hsflowd.conf

sflow {
  agent = eth0
  collector { ip=172.20.20.1 udpport=6343 }
  dent { sw=on switchport=swp.* }
}

Note: On a hardware switch, set sw=off to offload packet sampling to hardware.

Start hsflowd:

hsflowd

Download and run the tc_server Python script for adding and removing tc flower filters using a REST API:

wget https://raw.githubusercontent.com/sflow-rt/tc_server/master/tc_server
nohup python3 tc_server > /dev/null &

The following command shows the Linux tc filters used in this example:

# tc filter show dev swp1 ingress
filter protocol all pref 1 matchall chain 0 
filter protocol all pref 1 matchall chain 0 handle 0x1 
  not_in_hw
	action order 1: sample rate 1/10000 group 1 trunc_size 128 continue
	 index 3 ref 1 bind 1

filter protocol ip pref 14 flower chain 0 
filter protocol ip pref 14 flower chain 0 handle 0x1 
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.2.2
  src_port 53
  not_in_hw
	action order 1: gact action drop
	 random type none pass val 0
	 index 1 ref 1 bind 1

The output shows the standard Linux tc-matchall and tc-flower filters used to monitor and drop traffic on the router. The Host sFlow agent automatically installs a matchall rule on each interface in order to sample packets. The tc_server script adds and removes flower filters to drop unwanted traffic. On a hardware router, the filters are offloaded by the Linux switchdev driver to the router ASIC for line rate performance.

Test REST API

Add filter:

curl -X PUT -H "Content-Type: application/json" \
--data '{"ip_proto":"udp","dst_ip":"10.0.2.2","src_port":"53"}' \
http://clab-ddos-router:8081/swp1/10

Show filters:

curl http://clab-ddos-router:8081/swp1

Remove filter:

curl -X DELETE http://clab-ddos-router:8081/swp1/10
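Behind the REST API, tc_server manages standard Linux tc flower rules. For reference, a rule equivalent to the filter shown in the earlier tc filter show output could be added manually with a command along these lines (a sketch; the preference value is normally chosen by the script):

tc filter add dev swp1 ingress protocol ip pref 14 flower \
ip_proto udp dst_ip 192.168.2.2 src_port 53 action drop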

Build an automated DDoS mitigation controller

The following sFlow-RT ddos.js script automatically detects and drops UDP amplification attacks:

var block_minutes = 1;
var thresh = 10000;

setFlow('udp_target',{keys:'ipdestination,udpsourceport',value:'frames'});

setThreshold('attack',{metric:'udp_target', value:thresh, byFlow:true, timeout:2});

var id = 10;
var controls = {};
setEventHandler(function(evt) {
  var key = evt.flowKey;
  if(controls[key]) return;

  var prt = ifName(evt.agent,evt.dataSource);
  if(!prt) return;

  var [dst_ip,src_port] = key.split(',');
  var filter = {
    // uncomment following line for hardware routers
    // 'skip_sw':'skip_sw',
    'ip_proto':'udp',
    'dst_ip':dst_ip,
    'src_port':src_port
  };
  var url = 'http://'+evt.agent+':8081/'+prt+'/'+id++;
  try {
    http(url,'put','application/json',JSON.stringify(filter));
  } catch(e) {
    logWarning(url + ' put failed');
  }
  controls[key] = {time:evt.timestamp, evt:evt, url:url};
  logInfo('block ' + evt.flowKey);
},['attack']);

setIntervalHandler(function(now) {
  for(var key in controls) {
    var control = controls[key];
    if(now - control.time < 1000 * 60 * block_minutes) continue;
    var evt = control.evt;
    if(thresholdTriggered(evt.thresholdID,evt.agent,evt.dataSource+'.'+evt.metric,evt.flowKey)) {
      // attack is ongoing - keep control
      continue;
    }
    try {
      http(control.url,'delete');
    } catch(e) {
      logWarning(control.url + ' delete failed');
    }
    delete controls[key];
    logInfo('allow '+control.evt.flowKey);
  }
});

See Writing Applications for more information on the script.

Run the controller script on the CONTAINERlab host using the sFlow-RT real-time analytics engine:

sudo docker run --network=host -v $PWD/ddos.js:/sflow-rt/ddos.js \
sflow/prometheus -Dscript.file=ddos.js

Verify that sFlow is being received by checking the sFlow-RT status page, http://containerlab_ip:8008/

Test controller

Monitor for attack traffic on the victim:

sudo docker exec -it clab-ddos-victim sh
apk --update add tcpdump
tcpdump -n -i eth1 udp port 53

Start attack:

sudo docker exec -it clab-ddos-attacker \
hping3 --flood --udp -k -s 53 --rand-source 192.168.2.2

There should be a brief flurry of packets seen at the victim before the controller detects and blocks the attack. The entire period between launching the attack and the attack traffic being blocked is under a second.

Thursday, May 20, 2021

Linux as a network operating system


NVIDIA Linux Switch enables any standard Linux distribution to be used as the operating system on the NVIDIA Spectrum™ switches. Unlike network operating systems that are Linux based, where you are limited to a specific version of Linux and control of the hardware is restricted to vendor specific software modules, Linux Switch allows you to install an unmodified version of your favorite Linux distribution along with familiar Linux monitoring and orchestration tools. 

The key to giving Linux control of the switch hardware is the switchdev module - a standard part of recent Linux kernels. Linux switchdev is an in-kernel driver model for switch devices which offload the forwarding (data) plane from the kernel. Integrating switch ASIC drivers in the Linux kernel makes switch ports appear as additional Linux network interfaces that can be configured and managed using standard Linux tools.

The mlxsw wiki provides instructions for installing Linux using ONIE or PXE boot on Mellanox switch hardware, for example, on NVIDIA® Spectrum®-3 based SN4000 series switches, providing 1G - 400G port speeds to handle scale-out data center applications.

Major benefits of using standard Linux as the switch operating system include:

  • no licensing fees, feature restrictions, or license management complexity associated with proprietary network operating systems
  • large ecosystem of open source and commercial software available for Linux
  • software updates and security patches available through Linux distribution
  • install the same Linux distribution on the switches and servers to reduce operational complexity and leverage existing expertise
  • run instances of the Linux distribution as virtual machines or containers to test configurations and develop automation scripts
  • standard Linux APIs, and availability of Linux developers, lowers the barrier to customization, making it possible to tailor network behavior to address application / business requirements

The switchdev driver for NVIDIA Spectrum ASICs exposes advanced dataplane instrumentation through standard Linux APIs. This article will explore how the open source Host sFlow agent uses the standard Linux APIs to stream real-time telemetry from the ASIC using industry standard sFlow.

The diagram shows the elements of the solution. Host sFlow agents installed on servers and switches stream sFlow telemetry to an instance of the sFlow-RT real-time analytics engine. The analytics provide a comprehensive, up to the second, view of performance to drive automation.

Note: If you are unfamiliar with sFlow, or want to hear about the latest developments, Real-time network telemetry for automation provides an overview and includes a demonstration of monitoring and troubleshooting network and system performance of a GPU cluster.

Download the latest Host sFlow agent sources:

git clone https://github.com/sflow/host-sflow.git

INSTALL.Linux provides information on compiling Host sFlow on Linux. The following instructions assume a DEB based distribution (Debian, Ubuntu):

cd host-sflow/
make deb FEATURES=DENT

It isn't necessary to install development tools on the switch. All major Linux distributions are available as Docker images. Select a Docker image that matches the operating system version on the switch and use it to build the package.

Copy the resulting hsflowd package to the switch and install:

sudo dpkg -i hsflowd_2.0.34-3_amd64.deb

Next, edit the /etc/hsflowd.conf file to configure the agent:

sflow {
  collector { ip=10.0.0.1 }
  systemd { }
  psample { group=1 egress=on }
  dropmon { group=1 start=on sw=off hw=on }
  dent { sw=off switchport=swp.* }
}

In this case, 10.0.0.1 is the address of the sFlow collector and swp.* is a regular expression used to identify front panel switch ports. The systemd{} module monitors services running on the switch - see Monitoring Linux services, the psample{} module receives randomly sampled packets from the switch ASIC - see Linux 4.11 kernel extends packet sampling support, the dropmon{} module receives dropped packet notifications - see Using sFlow to monitor dropped packets, and the dent{} module automatically configures packet sampling of traffic on front panel switch ports - see Packet Sampling.

Note: The same configuration file can be used for every switch in the network, making configuration of the agents easy to automate.

Enable and start the agent.

sudo systemctl enable hsflowd.service
sudo systemctl start hsflowd.service

Finally, use the pre-built sflow/prometheus Docker image to start a copy of the sFlow-RT real-time analytics software on the collector host (10.0.0.1):

docker run -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus

The web interface is accessible on port 8008.

The included Metric Browser application lets you explore the metrics that are being streamed. The chart updates in real-time as data arrives and in this case identifies the interface in the network with the greatest utilization. The standard set of metrics exported by the Host sFlow agent includes interface counters as well as host cpu, memory, disk and service performance metrics. Metrics lists the set of available metrics.
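The same question can be answered programmatically. For example, the following query against the sFlow-RT REST API (a sketch, using the collector address from the configuration above) returns the largest ingress and egress utilization values observed across all monitored interfaces:

curl http://10.0.0.1:8008/metric/ALL/max:ifinutilization,max:ifoututilization/json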

The included Flow Browser application provides an up to the second view of traffic flows. Defining Flows describes the fields that can be used to break out traffic.

Note: The NVIDIA Spectrum 2/3 ASIC includes packet transit delay, selected queue and queue depth with each sampled packet. This information is delivered via the Linux PSAMPLE netlink channel to the Host sFlow agent and included in the sFlow telemetry. These fields are accessible when defining flows in sFlow-RT. See Transit delay and queueing for details.

The included Discard Browser is used to explore packets that are being dropped in the network.

Note: The NVIDIA Spectrum 2/3 ASIC includes instrumentation to capture dropped packets and the reason they were dropped. The information is delivered via the Linux drop_monitor netlink channel to the Host sFlow agent and included in the sFlow telemetry. See Real-time trending of dropped packets for more information.

The included Prometheus application exports metrics to the Prometheus time series database where they can be used to drive Grafana dashboards (e.g. sFlow-RT Countries and Networks, sFlow-RT Health, and sFlow-RT Network Interfaces).

Linux as a network operating system is an exciting advancement if you are interested in simplifying network and system management. Using the Linux networking APIs as a common abstraction layer on servers and switches makes it possible to manage network and compute infrastructure as a unified system.

Monday, May 3, 2021

Cisco 8000 Series routers


Cisco 8000 Series routers are "400G optimized platforms that scale from 10.8 Tbps to 260 Tbps." The routers are built around Cisco Silicon One™ ASICs. The Silicon One ASIC includes the instrumentation needed to support industry standard sFlow real-time streaming telemetry.
Note: The Cisco 8000 Series routers also support Cisco Netflow. Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX describes why you should choose sFlow if you are interested in real-time monitoring and control applications.
The following commands configure a Cisco 8000 series router to sample packets at 1-in-20,000 and stream telemetry to an sFlow analyzer (192.127.0.1) on UDP port 6343.
flow exporter-map SF-EXP-MAP-1
 version sflow v5
 !
 packet-length 1468
 transport udp 6343
 source GigabitEthernet0/0/0/1
 destination 192.127.0.1
 dfbit set
!

Configure the sFlow analyzer address in an exporter-map.

flow monitor-map SF-MON-MAP
 record sflow
 sflow options
  extended-router
  extended-gateway
  if-counters polling-interval 300
  input ifindex physical
  output ifindex physical
 !
 exporter SF-EXP-MAP-1
!

Configure sFlow options in a monitor-map.

sampler-map SF-SAMP-MAP
 random 1 out-of 20000
!

Define the sampling rate in a sampler-map.

interface GigabitEthernet0/0/0/3
 flow datalinkframesection monitor-map SF-MON-MAP sampler SF-SAMP-MAP ingress

Enable sFlow on each interface for complete visibility into network traffic.

The above configuration instructions are for IOS-XR. Cisco goes SONiC on Cisco 8000 describes Cisco's support for the open source SONiC network operating system. SONiC describes how sFlow is implemented and configured on SONiC.

The diagram shows the general architecture of an sFlow monitoring deployment. All the switches stream sFlow telemetry to a central sFlow analyzer for network wide visibility. Host sFlow agents installed on servers can extend visibility into the compute infrastructure, and provide network visibility from virtual machines in the public cloud. In this instance, the sFlow-RT real-time analyzer provides an up to the second view of performance that can be used to drive operational dashboards and network automation. The recommended sFlow configuration settings are optimized for real-time monitoring of the large scale networks targeted by Cisco 8000 routers.

docker run -p 8008:8008 -p 6343:6343/udp sflow/prometheus

Getting started with sFlow-RT is very simple, for example, the above command uses the pre-built sflow/prometheus Docker image to start analyzing sFlow. Real-time DDoS mitigation using BGP RTBH and FlowSpec, Monitoring leaf and spine fabric performance, and Flow metrics with Prometheus and Grafana describe additional use cases for real-time sFlow analytics.

Note: There is a wide range of options for sFlow analysis. See sFlow Collectors for a list of open source and commercial software.

Cisco first introduced sFlow support in the Nexus 3000 Series in 2012. Today, there is a range of Cisco products that include sFlow support. The inclusion of sFlow instrumentation in Silicon One is likely to expand support across the range of upcoming products based on these ASICs. The broad support for sFlow by Cisco and other leading vendors (e.g. A10, Arista, Aruba, Cumulus, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, Quanta, and ZTE) makes sFlow an attractive option for multi-vendor network performance monitoring, particularly for those interested in real-time monitoring and automation.

Monday, April 5, 2021

CONTAINERlab

CONTAINERlab is a Docker orchestration tool for creating virtual network topologies. This article describes how to build and monitor the leaf and spine topology shown above.

Note: Docker testbed describes a simple testbed for experimenting with sFlow analytics using Docker Desktop, but it doesn't have the ability to construct complex topologies. 

multipass launch --cpus 2 --mem 4G --name containerlab
multipass shell containerlab

The above commands use the multipass command line tool to create an Ubuntu virtual machine and open shell access.

sudo apt update
sudo apt -y install docker.io
bash -c "$(curl -sL https://get-clab.srlinux.dev)"

Type the above commands into the shell to install CONTAINERlab.

Note: Multipass describes how to build a Mininet network emulator to experiment with software defined networking.

name: test
topology:
  nodes:
    leaf1:
      kind: linux
      image: sflow/frr
    leaf2:
      kind: linux
      image: sflow/frr
    spine1:
      kind: linux
      image: sflow/frr
    spine2:
      kind: linux
      image: sflow/frr
    h1:
      kind: linux
      image: alpine:latest
    h2:
      kind: linux
      image: alpine:latest
  links: 
    - endpoints: ["leaf1:eth1","spine1:eth1"]
    - endpoints: ["leaf1:eth2","spine2:eth1"]
    - endpoints: ["leaf2:eth1","spine1:eth2"]
    - endpoints: ["leaf2:eth2","spine2:eth2"]
    - endpoints: ["h1:eth1","leaf1:eth3"]
    - endpoints: ["h2:eth1","leaf2:eth3"]

The test.yml file shown above specifies the topology. In this case we are using FRRouting (FRR) containers for the leaf and spine switches and Alpine Linux containers for the two hosts.

sudo containerlab deploy --topo test.yml

The above command creates the virtual network and starts containers for each of the network nodes.

sudo containerlab inspect --topo test.yml

Type the command above to list the container instances in the topology.

The table shows each of the containers and the assigned IP addresses.

sudo docker exec -it clab-test-leaf1 vtysh

Type the command above to run the FRR VTY shell so that the switch can be configured.

leaf1# show running-config 
Building configuration...

Current configuration:
!
frr version 7.5_git
frr defaults datacenter
hostname leaf1
log stdout
!
interface eth3
 ip address 172.16.1.1/24
!
router bgp 65006
 bgp router-id 172.20.20.6
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 172.16.1.0/24
 exit-address-family
!
route-map ALLOW-ALL permit 100
!
ip nht resolve-via-default
!
line vty
!
end

The BGP configuration for leaf1 is shown above.

Note: We are using BGP unnumbered to simplify the configuration so peers are automatically discovered.

The other switches, leaf2, spine1, and spine2, have similar configurations.

Next we need to configure the hosts.

sudo docker exec -it clab-test-h1 sh

Open a shell on h1

ip addr add 172.16.1.2/24 dev eth1
ip route add 172.16.2.0/24 via 172.16.1.1

Configure networking on h1. The other host, h2, has a similar configuration.
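For reference, the corresponding commands on h2 would look something like this (assuming leaf2 mirrors the leaf1 configuration, with 172.16.2.1 assigned to its host-facing interface):

ip addr add 172.16.2.2/24 dev eth1
ip route add 172.16.1.0/24 via 172.16.2.1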

sudo docker exec -it clab-test-h1 ping 172.16.2.2
PING 172.16.2.2 (172.16.2.2): 56 data bytes
64 bytes from 172.16.2.2: seq=0 ttl=61 time=0.928 ms
64 bytes from 172.16.2.2: seq=1 ttl=61 time=0.160 ms
64 bytes from 172.16.2.2: seq=2 ttl=61 time=0.201 ms

Use ping to verify that there is connectivity between h1 and h2.

apk add iperf3

Install iperf3 on h1 and h2

iperf3 -s --bind 172.16.2.2

Run an iperf3 server on h2

iperf3 -c 172.16.2.2
Connecting to host 172.16.2.2, port 5201
[  5] local 172.16.1.2 port 52066 connected to 172.16.2.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.36 MBytes       
[  5]   1.00-2.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.55 MBytes       
[  5]   2.00-3.00   sec  1.44 GBytes  12.4 Gbits/sec    0   1.55 MBytes       
[  5]   3.00-4.00   sec  1.44 GBytes  12.3 Gbits/sec    0   2.42 MBytes       
[  5]   4.00-5.00   sec  1.46 GBytes  12.6 Gbits/sec    0   3.28 MBytes       
[  5]   5.00-6.00   sec  1.42 GBytes  12.2 Gbits/sec    0   3.28 MBytes       
[  5]   6.00-7.00   sec  1.44 GBytes  12.4 Gbits/sec    0   3.28 MBytes       
[  5]   7.00-8.00   sec  1.28 GBytes  11.0 Gbits/sec    0   3.28 MBytes       
[  5]   8.00-9.00   sec  1.40 GBytes  12.0 Gbits/sec    0   3.28 MBytes       
[  5]   9.00-10.00  sec  1.25 GBytes  10.7 Gbits/sec    0   3.28 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec                  receiver

Run an iperf3 test on h1

Now that we have a working test network, it's time to add some monitoring.

We will be installing sFlow agents on the switches and hosts that will stream telemetry to sFlow-RT analytics software which will provide a real-time network-wide view of performance.

sudo docker exec -it clab-test-leaf1 sh

Open a shell on leaf1

apk --update add libpcap-dev build-base linux-headers gcc git
git clone https://github.com/sflow/host-sflow.git
cd host-sflow/
make FEATURES="PCAP"
make install

Install Host sFlow agent on leaf1.

Note: The steps above could be included in a Dockerfile in order to create an image with built-in instrumentation.

vi /etc/hsflowd.conf

Edit the Host sFlow configuration file.

sflow {
  polling = 30
  sampling = 400
  collector { ip = 172.20.20.1 }
  pcap { dev = eth1 }
  pcap { dev = eth2 }
  pcap { dev = eth3 }
}

The above settings enable packet sampling on interfaces eth1, eth2 and eth3

sudo docker exec -d clab-test-leaf1 /usr/sbin/hsflowd -d

Start the Host sFlow agent on leaf1.

Install and run Host sFlow agents on the remaining switches and hosts, leaf2, spine1, spine2, h1, and h2.
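Assuming the agent has been built and /etc/hsflowd.conf configured in each container following the leaf1 steps above (on the hosts, only eth1 needs a pcap entry), a small loop can start the remaining agents:

for node in leaf2 spine1 spine2 h1 h2; do
  sudo docker exec -d clab-test-$node /usr/sbin/hsflowd -d
done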

sudo docker run --rm -d -p 6343:6343/udp -p 8008:8008 --name sflow-rt sflow/prometheus

Use the pre-built sflow/prometheus container to start an instance of sFlow-RT to collect and analyze the telemetry.

multipass list

List the multipass virtual machines.

containerlab            Running           192.168.64.7     Ubuntu 20.04 LTS
                                          172.17.0.1
                                          172.20.20.1

Use a web browser to connect to the sFlow-RT web interface. In this case at http://192.168.64.7:8008 

The sFlow-RT dashboard verifies that telemetry is being received from 6 agents (the four switches and two hosts).

The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. 

The chart shows that the traffic flows via spine2. Repeated tests showed that traffic was never taking the path via spine1, indicating that the ECMP hash function was not taking into account the TCP ports.

sudo docker exec clab-test-leaf1 sysctl -w net.ipv4.fib_multipath_hash_policy=1
sudo docker exec clab-test-leaf2 sysctl -w net.ipv4.fib_multipath_hash_policy=1

We are using a newer Linux kernel, so running the above commands changes the hashing algorithm to include the layer 4 headers, see Celebrating ECMP in Linux — part one and Celebrating ECMP in Linux — part two.

Topology describes how knowledge of network topology can be used to enhance the analytics capabilities of sFlow-RT.

{
  "links": {
    "link1": { "node1":"leaf1","port1":"eth1","node2":"spine1","port2":"eth1"},
    "link2": { "node1":"leaf1","port1":"eth2","node2":"spine2","port2":"eth1"},
    "link3": { "node1":"leaf2","port1":"eth1","node2":"spine1","port2":"eth2"},
    "link4": { "node1":"leaf2","port1":"eth2","node2":"spine2","port2":"eth2"}
  }
}

The links specification in the test.yml file can easily be converted into sFlow-RT's JSON format.
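For example, saving the JSON above as topology.json, the topology can be pushed to the running sFlow-RT instance using its REST API (a sketch, assuming sFlow-RT is listening on the local host):

curl -X PUT -H "Content-Type: application/json" --data @topology.json \
http://localhost:8008/topology/json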

CONTAINERlab is a very promising tool for efficiently emulating complex networks. CONTAINERlab supports Nokia SR-Linux, Juniper vMX, Cisco IOS XRv9k and Arista vEOS, as well as Linux containers. Many of the proprietary network operating systems are only delivered as virtual machines and Vrnetlab integration makes it possible for CONTAINERlab to run these virtual machines. However, virtual machine nodes require considerably more resources than simple containers.

Linux with open source routing software (FRRouting) is an accessible alternative to vendor routing stacks (no registration / license required, no restriction on copying means you can share images on Docker Hub, no need for virtual machines). FRRouting is popular in production network operating systems (e.g. Cumulus Linux, SONiC, DENT, etc.) and the VTY shell provides an industry standard CLI for configuration, so labs built around FRR allow realistic network configurations to be explored.

Monday, March 22, 2021

In-band Network Telemetry (INT)

The recent addition of in-band streaming telemetry (INT) measurements to the sFlow industry standard simplifies deployment by addressing the operational challenges of in-band monitoring.

The diagram shows the basic elements of In-band Network Telemetry (INT) in which the ingress switch is programmed to insert a header containing measurements to packets entering the network. Each switch in the path is programmed to append additional measurements to the packet header. The egress switch is programmed to remove the header so that the packet can be delivered to its destination. The egress switch is responsible for processing the measurements or sending them on to analytics software.

There are currently two competing specifications for in-band telemetry:

  1. In-band Network Telemetry (INT) Dataplane Specification
  2. Data Fields for In-situ OAM

Common telemetry attributes from both standards include:

  1. node id
  2. ingress port
  3. egress port
  4. transit delay (egress timestamp - ingress timestamp)
  5. queue depth

Visibility into network forwarding performance is very useful, however, there are practical issues that should be considered with the in-band telemetry approach for collecting the measurements:

  1. Transporting measurement headers is complex with different encapsulations for each transport protocol: Geneve, VxLAN, GRE, UDP, TCP etc.
  2. Addition of headers increases the size of packets and risks causing traffic to be dropped downstream due to maximum transmission unit (MTU) restrictions.
  3. The number of measurements that can be added by each switch and the number of switches adding measurements in the path needs to be limited.
  4. In-band telemetry cannot be incrementally deployed. Ideally, all devices need to participate, or at a minimum, the ingress and egress devices need to be in-band telemetry aware.
  5. In-band telemetry transports data from the data plane to the control/management planes, providing a potential attack surface that could be exploited by crafting malicious packets with fake measurement headers.
  6. There is no standard mechanism for transporting measurements from the egress switch for analysis.
  7. There is no data model to link in-band telemetry to other sources of data (NETCONF, SNMP, etc.)

The sFlow Transit Delay Structures extension addresses these issues by defining how the in-band network telemetry attributes can be exported in real-time using the industry standard sFlow protocol.

The sFlow architecture, shown at the top of this article, provides an out of band alternative for transporting the per packet forwarding plane measurements. The switch ASIC attaches performance measurements as metadata to sampled packets sent to the sFlow Agent instead of adding the measurements to the egress packet. The sFlow Agent immediately forwards the additional packet metadata as part of the standard sFlow telemetry stream to a central sFlow analyzer. The sFlow Analyzer provides a real-time view of the performance of the entire network.

Using sFlow as the telemetry transport has a number of benefits:

  1. Simple to deploy since there is no modification of packets (no issues with encapsulations, MTU, number of measurements, path length, incremental deployment, etc.)
  2. Extensibility of sFlow protocol allows additional forwarding plane measurements to augment existing sFlow measurements, fully integrating the new measurements with sFlow data exported from other switches in the network (Arista, Aruba, Cisco, Dell, Huawei, Juniper, etc.)
  3. sFlow is a unidirectional telemetry transport protocol that originates from the device management plane and can be sent out of band, limiting possible attack surfaces.
  4. Measurements are delivered in real-time directly to the sFlow Analyzer.
  5. sFlow data model links telemetry to external data (SNMP, NETCONF, OpenConfig, etc.)

Transit delay and queueing describes the new sFlow measurements in more detail and demonstrates a working implementation. The instrumentation to support these measurements is widely available in current generation network ASICs. If you are interested in visibility into network performance, ask your network vendor about their plans to implement the sFlow Transit Delay Structures extension.