Friday, June 14, 2019

Mininet flow analytics with custom scripts

Mininet flow analytics describes how to use the sflow.py helper script that ships with the sFlow-RT analytics engine to enable sFlow telemetry, e.g.
sudo mn --custom sflow-rt/extras/sflow.py --link tc,bw=10 \
--topo tree,depth=2,fanout=2
Mininet, ONOS, and segment routing provides an example using a Custom Topology, e.g.
sudo env ONOS=10.0.0.73 mn --custom sr.py,sflow-rt/extras/sflow.py \
--link tc,bw=10 --topo=sr '--controller=remote,ip=$ONOS,port=6653'
This article describes how to incorporate sFlow monitoring in a fully custom Mininet script. Consider the following simpletest.py script based on Working with Mininet:
#!/usr/bin/python                                                                            
                                                                                             
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def build(self, n=2):
        switch = self.addSwitch('s1')
        # Python's range(N) generates 0..N-1
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

def simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    net = Mininet(topo)
    net.start()
    print "Dumping host connections"
    dumpNodeConnections(net.hosts)
    print "Testing bandwidth between h1 and h4"
    h1, h4 = net.get( 'h1', 'h4' )
    net.iperf( (h1, h4) )
    net.stop()

if __name__ == '__main__':
    # Tell mininet to print useful information
    setLogLevel('info')
    simpleTest()
Add the highlighted lines below (the two additional imports, the execfile() call, and the link definition) to incorporate sFlow telemetry:
#!/usr/bin/python                                                                            
                                                                                             
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel
from mininet.util import customClass
from mininet.link import TCLink

# Compile and run sFlow helper script
# - configures sFlow on OVS
# - posts topology to sFlow-RT
execfile('sflow-rt/extras/sflow.py') 

# Rate limit links to 10Mbps
link = customClass({'tc':TCLink}, 'tc,bw=10')

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def build(self, n=2):
        switch = self.addSwitch('s1')
        # Python's range(N) generates 0..N-1
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

def simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    net = Mininet(topo,link=link)
    net.start()
    print "Dumping host connections"
    dumpNodeConnections(net.hosts)
    print "Testing bandwidth between h1 and h4"
    h1, h4 = net.get( 'h1', 'h4' )
    net.iperf( (h1, h4) )
    net.stop()

if __name__ == '__main__':
    # Tell mininet to print useful information
    setLogLevel('info')
    simpleTest()
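Note that execfile() is a Python 2 built-in and no longer exists in Python 3. If you need to run the script under Python 3 (assuming the helper script itself is Python 3 compatible), a minimal replacement sketch is:

```python
import os

# Python 3 replacement for the Python 2 built-in execfile():
# compile and execute the helper script in the given namespace.
def execfile(path, globals_dict=None):
    with open(path) as f:
        code = compile(f.read(), path, 'exec')
    exec(code, globals_dict if globals_dict is not None else globals())

# Load the sFlow helper exactly as the Python 2 script does
if os.path.exists('sflow-rt/extras/sflow.py'):
    execfile('sflow-rt/extras/sflow.py')
```
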
When running the script, the *** Enabling sFlow and *** Sending topology messages in the output confirm that sFlow has been enabled and the topology has been posted to sFlow-RT:
pp@mininet:~$ sudo ./simpletest.py
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4 
*** Adding switches:
s1 
*** Adding links:
(10.00Mbit) (10.00Mbit) (h1, s1) (10.00Mbit) (10.00Mbit) (h2, s1) (10.00Mbit) (10.00Mbit) (h3, s1) (10.00Mbit) (10.00Mbit) (h4, s1) 
*** Configuring hosts
h1 h2 h3 h4 
*** Starting controller
c0 
*** Starting 1 switches
s1 ...(10.00Mbit) (10.00Mbit) (10.00Mbit) (10.00Mbit) 
*** Enabling sFlow:
s1
*** Sending topology
Dumping host connections
h1 h1-eth0:s1-eth1
h2 h2-eth0:s1-eth2
h3 h3-eth0:s1-eth3
h4 h4-eth0:s1-eth4
Testing bandwidth between h1 and h4
*** Iperf: testing TCP bandwidth between h1 and h4 
*** Results: ['6.32 Mbits/sec', '6.55 Mbits/sec']
*** Stopping 1 controllers
c0 
*** Stopping 4 links
....
*** Stopping 1 switches
s1 
*** Stopping 4 hosts
h1 h2 h3 h4 
*** Done
Mininet dashboard and Mininet weathermap describe the sFlow-RT Mininet Dashboard application shown at the top of this article. The tool provides a real-time visualization of traffic flowing over the Mininet network. Writing Applications describes how to develop custom analytics applications for sFlow-RT.

Wednesday, May 8, 2019

Secure forwarding of sFlow using ssh

Typically sFlow datagrams are sent unencrypted from agents embedded in switches and routers to a local collector/analyzer. Sending sFlow datagrams over the management VLAN or out of band management network generally provides adequate isolation and security within the site. Inter-site traffic within an organization is typically carried over a virtual private network (VPN) which encrypts the data and protects it from eavesdropping.

This article describes a simple method of carrying sFlow datagrams over an encrypted ssh connection which can be useful in situations where a VPN is not available, for example, sending sFlow to an analyzer in the public cloud, or to an external consultant.

The diagram shows the elements of the solution. A collector on the site receives sFlow datagrams from the network devices and uses the sflow_fwd.py script to convert the datagrams into line delimited hexadecimal strings that are sent over an ssh connection to another instance of sflow_fwd.py running on the analyzer, which converts the hexadecimal strings back into sFlow datagrams.

The following sflow_fwd.py Python script accomplishes the task:
#!/usr/bin/python

import socket
import sys
import argparse

parser = argparse.ArgumentParser(description='Serialize/deserialize sFlow')
parser.add_argument('-c', '--collector', default='')
parser.add_argument('-s', '--server')
parser.add_argument('-p', '--port', type=int, default=6343)
args = parser.parse_args()

sock=socket.socket(socket.AF_INET,socket.SOCK_DGRAM)

if args.server is not None:
  while True:
    line = sys.stdin.readline()
    if not line:
      break
    buf = bytearray.fromhex(line[:-1])
    sock.sendto(buf, (args.server, args.port))
else: 
  sock.bind((args.collector,args.port))
  while True:
    buf = sock.recv(2048)
    if not buf:
      break
    print buf.encode('hex')
    sys.stdout.flush()
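The script above is written for Python 2 (print statement, str.encode('hex')). The hex round-trip at its core works the same way in Python 3, as this minimal sketch shows:

```python
# Round-trip a UDP payload through the line-delimited hexadecimal
# encoding used by sflow_fwd.py (Python 3 version).
def to_hex_line(datagram):
    """Serialize a UDP payload as one hexadecimal line."""
    return datagram.hex() + '\n'

def from_hex_line(line):
    """Recover the original UDP payload from a hex line."""
    return bytes.fromhex(line.strip())

# An arbitrary payload survives the round trip unchanged
payload = b'\x00\x00\x00\x05 example sFlow bytes'
assert from_hex_line(to_hex_line(payload)) == payload
```
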
Create a user account on both the collector and analyzer machines, in this example the user is pp. Next copy the script to both machines.

If you log into the collector machine, the following command will send sFlow to the analyzer machine:
./sflow_fwd.py | ssh pp@analyzer './sflow_fwd.py -s 127.0.0.1'
If you log into the analyzer machine, the following command will retrieve sFlow from the collector machine:
ssh pp@collector './sflow_fwd.py' | ./sflow_fwd.py -s 127.0.0.1
If a permanent connection is required, it is relatively straightforward to create a daemon using systemd. In this example, the service is installed on the collector machine by performing the following steps.
First, log into the collector and generate an ssh key:
ssh-keygen
Next, install the key on the analyzer system:
ssh-copy-id pp@analyzer
Now create the systemd service file, /etc/systemd/system/sflow-tunnel.service:
[Unit]
Description=sFlow tunnel
After=network.target

[Service]
Type=simple
User=pp
ExecStart=/bin/sh -c "/home/pp/sflow_fwd.py | /usr/bin/ssh pp@analyzer './sflow_fwd.py -s 127.0.0.1'"
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
Finally, use the systemctl command to enable and start the daemon:
sudo systemctl enable sflow-tunnel.service
sudo systemctl start sflow-tunnel.service
A simple way to confirm that sFlow is arriving on the analyzer machine is to use sflowtool.

There are numerous articles on this blog describing how the sFlow-RT analytics software can be used to integrate sFlow telemetry with popular metrics and SIEM (security information and event management) tools.

Tuesday, April 23, 2019

Prometheus exporter

Prometheus is an open source time series database optimized to collect large numbers of metrics from cloud infrastructure. This article will explore how industry standard sFlow telemetry streaming supported by network devices (Arista, Aruba, Cisco, Dell, Huawei, Juniper, etc.) and Host sFlow agents (Linux, Windows, FreeBSD, AIX, Solaris, Docker, Systemd, Hyper-V, KVM, Nutanix AHV, Xen) can be integrated with Prometheus to extend visibility into the network.

The diagram above shows the elements of the solution: sFlow telemetry streams from hosts and switches to an instance of sFlow-RT. The sFlow-RT analytics software converts the raw measurements into metrics that are accessible through a REST API. The sflow-rt/prometheus application extends the REST API to include native Prometheus exporter functionality allowing Prometheus to retrieve metrics. Prometheus stores metrics in a time series database that can be queried by Grafana to build dashboards.

The Docker sflow/prometheus image provides a simple way to run the application:
docker run --name sflow-rt -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus
Configure sFlow agents to send data to the collector, 10.0.0.70, on port 6343.

Verify that the metrics are available using cURL:
$ curl http://10.0.0.70:8008/app/prometheus/scripts/export.js/dump/ALL/ALL/txt
ifinucastpkts{agent="10.0.0.30",datasource="2",host="server",ifname="enp3s0"} 9.44
ifoutdiscards{agent="10.0.0.30",datasource="2",host="server",ifname="enp3s0"} 0
ifoutbroadcastpkts{agent="10.0.0.30",datasource="2",host="server",ifname="enp3s0"} 0
ifinerrors{agent="10.0.0.30",datasource="2",host="server",ifname="enp3s0"} 0
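Each line of the export follows the Prometheus text exposition format: a metric name, a set of key="value" labels in braces, and a numeric value. A small sketch of parsing one such line in Python (a simplified parser, not the full exposition-format grammar):

```python
import re

# Parse one line of Prometheus exposition-format output, e.g.
# ifinucastpkts{agent="10.0.0.30",ifname="enp3s0"} 9.44
METRIC_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')
LABEL_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_metric(line):
    """Return (name, labels, value) or None if the line doesn't match."""
    m = METRIC_RE.match(line.strip())
    if not m:
        return None
    name, labels, value = m.groups()
    return name, dict(LABEL_RE.findall(labels)), float(value)

line = 'ifinucastpkts{agent="10.0.0.30",host="server",ifname="enp3s0"} 9.44'
print(parse_metric(line))
```
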
If the sFlow agents don't provide host and ifname information, enable SNMP to retrieve sysName and ifName data to populate these fields:
docker run --name sflow-rt -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus \
-Dsnmp.ifname=yes
By default SNMP version 2c will be used with the public community string. Additional System Properties can be used to override these defaults.
Now define a metrics "scraping" job in the Prometheus configuration file, prometheus.yml:
global:
  scrape_interval:     15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: 'sflow-rt'
    metrics_path: /app/prometheus/scripts/export.js/dump/ALL/ALL/txt
    static_configs:
      - targets: ['10.0.0.70:8008']
Now start Prometheus:
docker run --name prometheus --rm -v $PWD/data:/prometheus \
-v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml \
-p 9090:9090 -d prom/prometheus
The screen capture above shows the Prometheus web interface (accessed on port 9090).
Grafana is open source time series analysis software. The ability to pull data from many data sources and the extensive range of charting options make Grafana an attractive tool for building operations dashboards.

The following command shows how to run Grafana under Docker:
docker run --name grafana -v $PWD/data:/var/lib/grafana \
-p 3000:3000 -d grafana/grafana
Access the Grafana web interface on port 3000, configure a data source for the Prometheus database, and start building dashboards. The screen capture above shows the same chart built earlier using the native Prometheus interface.

Wednesday, March 6, 2019

Loggly

Loggly is a cloud logging and analysis platform. This article will demonstrate how to integrate network events generated from industry standard sFlow instrumentation built into network switches.
Loggly offers a free 14 day evaluation, so you can try this example at no cost.
ICMP unreachable describes how monitoring ICMP destination unreachable messages can help identify misconfigured hosts and scanning behavior. The article uses the sFlow-RT real-time analytics software to process the raw sFlow and report on unreachable messages.

The following script, loggly.js, modifies the sFlow-RT script from the article to send events to the Loggly HTTP/S Event Endpoint:
var token = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';
  
var url = 'https://logs-01.loggly.com/inputs/'+token+'/tag/http/';

var keys = [
  'icmpunreachablenet',
  'icmpunreachablehost',
  'icmpunreachableprotocol',
  'icmpunreachableport'
];

for (var i = 0; i < keys.length; i++) {
  var key = keys[i];
  setFlow(key, {
    keys:'macsource,ipsource,macdestination,ipdestination,' + key,
    value:'frames',
    log:true,
    flowStart:true
  });
}

setFlowHandler(function(rec) {
  var keys = rec.flowKeys.split(',');
  var msg = {
    flow_type:rec.name,
    src_mac:keys[0],
    src_ip:keys[1],
    dst_mac:keys[2],
    dst_ip:keys[3],
    unreachable:keys[4]
  };

  try { http(url,'post','application/json',JSON.stringify(msg)); }
  catch(e) { logWarning(e); };
}, keys);
Some notes on the script:
  1. Modify the script to use the correct token for your Loggly account.
  2. Including MAC addresses can help identify hosts even if they spoof IP addresses.
  3. See Writing Applications for more information.
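The mapping from flow record keys to the JSON event posted by the script can be sketched in Python (field names match loggly.js; the flow key values below are illustrative):

```python
import json

# Build the JSON event that loggly.js posts for each flow record.
# flowKeys is a comma-separated string in the order declared by the
# flow definition: macsource,ipsource,macdestination,ipdestination,<key>
def make_event(flow_type, flow_keys):
    keys = flow_keys.split(',')
    return {
        'flow_type': flow_type,
        'src_mac': keys[0],
        'src_ip': keys[1],
        'dst_mac': keys[2],
        'dst_ip': keys[3],
        'unreachable': keys[4]
    }

event = make_event(
    'icmpunreachableport',
    '000af759d331,10.0.0.30,000af759d332,10.0.0.50,8080'
)
print(json.dumps(event))
```
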
Run the script using the sflow/sflow-rt docker image:
docker run -p 6343:6343/udp -v $PWD/loggly.js:/loggly.js \
sflow/sflow-rt -Dscript.file=/loggly.js
Events should now start appearing in Loggly.
The Loggly Live Tail page can be used to verify that the logs are being received. The screen capture at the start of this article shows a chart trending events by the host that triggered them, identifying 10.0.0.30 as the source of the network scan.

The loggly.js script can easily be modified to track and log different types of network activity. For example, Blacklists describes how to download a set of blacklisted addresses, match traffic against the blacklist and generate events for the matches.

Intranet DDoS attacks describes the threats posed by IoT (Internet of Things) devices and the need for visibility throughout the network in order to tackle these threats. Incorporating sFlow in the monitoring strategy extends visibility beyond the firewalls to the entire network.

In addition to generating events, sFlow analytics can be used to deliver performance metrics. The article, Cloud analytics, describes how to use sFlow-RT to send performance metrics to the Librato cloud service - also part of Solarwinds.

Monday, December 10, 2018

sFlow to JSON

The latest version of sflowtool can convert sFlow datagrams into JSON, making it easy to write scripts to process the standard sFlow telemetry streaming from devices in the network.

Download and compile the latest version of sflowtool:
git clone https://github.com/sflow/sflowtool.git
cd sflowtool/
./boot.sh 
./configure 
make
sudo make install
The -J option formats the JSON output to be human readable:
$ sflowtool -J
{
 "datagramSourceIP":"10.0.0.162",
 "datagramSize":"396",
 "unixSecondsUTC":"1544241239",
 "localtime":"2018-12-07T19:53:59-0800",
 "datagramVersion":"5",
 "agentSubId":"0",
 "agent":"10.0.0.231",
 "packetSequenceNo":"1068783",
 "sysUpTime":"1338417874",
 "samplesInPacket":"2",
 "samples":[
  {
   "sampleType_tag":"0:2",
   "sampleType":"COUNTERSSAMPLE",
   "sampleSequenceNo":"148239",
   "sourceId":"0:3",
   "elements":[
    {
     "counterBlock_tag":"0:1",
     "ifIndex":"3",
     "networkType":"6",
     "ifSpeed":"1000000000",
     "ifDirection":"1",
     "ifStatus":"3",
     "ifInOctets":"4162076356",
     "ifInUcastPkts":"16312256",
     "ifInMulticastPkts":"187789",
     "ifInBroadcastPkts":"2566",
     "ifInDiscards":"0",
     "ifInErrors":"0",
     "ifInUnknownProtos":"0",
     "ifOutOctets":"2115351089",
     "ifOutUcastPkts":"7087570",
     "ifOutMulticastPkts":"4453258",
     "ifOutBroadcastPkts":"6141715",
     "ifOutDiscards":"0",
     "ifOutErrors":"0",
     "ifPromiscuousMode":"0"
    },
    {
     "counterBlock_tag":"0:2",
     "dot3StatsAlignmentErrors":"0",
     "dot3StatsFCSErrors":"0",
     "dot3StatsSingleCollisionFrames":"0",
     "dot3StatsMultipleCollisionFrames":"0",
     "dot3StatsSQETestErrors":"0",
     "dot3StatsDeferredTransmissions":"0",
     "dot3StatsLateCollisions":"0",
     "dot3StatsExcessiveCollisions":"0",
     "dot3StatsInternalMacTransmitErrors":"0",
     "dot3StatsCarrierSenseErrors":"0",
     "dot3StatsFrameTooLongs":"0",
     "dot3StatsInternalMacReceiveErrors":"0",
     "dot3StatsSymbolErrors":"0"
    }
   ]
  },
  {
   "sampleType_tag":"0:1",
   "sampleType":"FLOWSAMPLE",
   "sampleSequenceNo":"11791",
   "sourceId":"0:3",
   "meanSkipCount":"2000",
   "samplePool":"34185160",
   "dropEvents":"0",
   "inputPort":"3",
   "outputPort":"10",
   "elements":[
    {
     "flowBlock_tag":"0:1",
     "flowSampleType":"HEADER",
     "headerProtocol":"1",
     "sampledPacketSize":"102",
     "strippedBytes":"0",
     "headerLen":"104",
     "headerBytes":"0C-AE-4E-98-0B-89-05-B6-D8-D9-A2-66-80-00-54-00-00-45-08-12-04-00-04-10-4A-FB-A0-00-00-BC-A0-00-00-EF-80-00-DE-B1-E7-26-00-20-75-04-B0-C5-00-00-00-00-96-01-20-00-00-00-00-00-01-11-21-31-41-51-61-71-81-91-A1-B1-C1-D1-E1-F1-02-12-22-32-42-52-62-72-82-92-A2-B2-C2-D2-E2-F2-03-13-23-33-43-53-63-73-1A-1D-4D-76-00-00",
     "dstMAC":"0cae4e980b89",
     "srcMAC":"05b6d8d9a266",
     "IPSize":"88",
     "ip.tot_len":"84",
     "srcIP":"10.0.0.203",
     "dstIP":"10.0.0.254",
     "IPProtocol":"1",
     "IPTOS":"0",
     "IPTTL":"64",
     "IPID":"8576",
     "ICMPType":"8",
     "ICMPCode":"0"
    },
    {
     "flowBlock_tag":"0:1001",
     "extendedType":"SWITCH",
     "in_vlan":"1",
     "in_priority":"0",
     "out_vlan":"1",
     "out_priority":"0"
    }
   ]
  }
 ]
}
The output shows the JSON representation of a single sFlow datagram containing one counter sample and one flow sample.

The -j option formats the JSON output as a single line per datagram, making it easy to parse in scripts. For example, the following Python script, flow.py, runs sflowtool and parses the JSON output:
#!/usr/bin/env python

import subprocess
from json import loads

p = subprocess.Popen(
  ['/usr/local/bin/sflowtool','-j'],
  stdout=subprocess.PIPE,
  stderr=subprocess.STDOUT
)
lines = iter(p.stdout.readline,'')
for line in lines:
  datagram = loads(line)
  localtime = datagram["localtime"]
  samples = datagram["samples"]
  for sample in samples:
    sampleType = sample["sampleType"]
    elements = sample["elements"]
    if sampleType == "FLOWSAMPLE":
      for element in elements:
        tag = element["flowBlock_tag"]
        if tag == "0:1":
          try:
            src = element["srcIP"]
            dst = element["dstIP"]
            pktsize = element["sampledPacketSize"]
            print "%s %s %s %s" % (localtime,src,dst,pktsize)
          except KeyError:
            pass
Running the script prints flow records showing time, source, destination and number of bytes:
$ ./flow.py 
2018-12-07T20:53:06-0800 10.0.0.70 10.0.0.238 110
2018-12-07T20:53:06-0800 10.0.0.70 10.0.0.238 70
2018-12-07T20:53:06-0800 10.0.0.70 10.0.0.238 70
2018-12-07T20:53:06-0800 10.0.0.238 10.0.0.70 90
The script can easily be modified to add additional fields, push data into a SIEM tool (e.g. Logstash), push counter data into a time series database (e.g. InfluxDB), or perform additional analysis in Python. For example, the following script builds on the example, downloading the Emerging Threats compromised address list and logging any flows that match the list:
#!/usr/bin/env python

import subprocess
from json import loads
from requests import get

blacklist = set()
r = get('https://rules.emergingthreats.net/blockrules/compromised-ips.txt')
for line in r.iter_lines():
  blacklist.add(line)

p = subprocess.Popen(
  ['/usr/local/bin/sflowtool','-j'],
  stdout=subprocess.PIPE,
  stderr=subprocess.STDOUT
)
lines = iter(p.stdout.readline,'')
for line in lines:
  datagram = loads(line)
  localtime = datagram["localtime"]
  samples = datagram["samples"]
  for sample in samples:
    sampleType = sample["sampleType"]
    elements = sample["elements"]
    if sampleType == "FLOWSAMPLE":
      for element in elements:
        tag = element["flowBlock_tag"]
        if tag == "0:1":
          try:
            src = element["srcIP"]
            dst = element["dstIP"]
            if src in blacklist or dst in blacklist:
              print "%s %s %s" % (localtime,src,dst)
          except KeyError:
            pass
The open source Host sFlow agent provides a convenient means of experimenting with sFlow if you don't have access to network devices. The Host sFlow agent is also a simple way to gather real-time telemetry from public cloud virtual machine instances where access to the physical network infrastructure is not permitted.

Finally, for advanced sFlow analytics, try sFlow-RT, a real-time analytics engine that exposes a REST API.

Thursday, November 15, 2018

Mininet, ONOS, and segment routing

Leaf and spine traffic engineering using segment routing and SDN and CORD: Open-source spine-leaf Fabric describe a demonstration at the 2015 Open Networking Summit using the ONOS SDN controller and a physical network of 8 switches.

This article will describe how to emulate a leaf and spine network using Mininet and configure the ONOS segment routing application to provide equal cost multi-path (ECMP) routing of flows across the fabric. The Mininet Dashboard application running on the sFlow-RT real-time analytics platform is used to provide visibility into traffic flows across the emulated network.

First, run ONOS using Docker:
docker run --name onos --rm -p 6653:6653 -p 8181:8181 -d onosproject/onos
Use the graphical interface, http://onos:8181, to enable the OpenFlow Provider Suite, Network Config Host Provider, Network Config Link Provider, and Segment Routing applications. The screen shot above shows the resulting set of enabled services.

Next, install sFlow-RT and the Mininet Dashboard application on the host running Mininet:
wget https://inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
./sflow-rt/get-app.sh sflow-rt mininet-dashboard
Start sFlow-RT:
./sflow-rt/start.sh
Download the sr.py script:
wget https://raw.githubusercontent.com/sflow-rt/onos-sr/master/sr.py
Start Mininet:
sudo env ONOS=10.0.0.73 mn --custom sr.py,sflow-rt/extras/sflow.py \
--link tc,bw=10 --topo=sr '--controller=remote,ip=$ONOS,port=6653'
The sr.py script is used to create a leaf and spine topology in Mininet and send the network configuration to the ONOS controller. The sflow.py script enables sFlow monitoring of the switches and sends the network topology to sFlow-RT.
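The wiring a leaf and spine topology implies is a full mesh between the two tiers: every leaf connects to every spine. This can be sketched in plain Python (node names are illustrative; sr.py builds the actual Mininet topology and ONOS configuration):

```python
# Compute the full mesh of leaf-spine links for a small fabric.
def leaf_spine_links(num_spines=2, num_leaves=2):
    spines = ['spine%d' % (i + 1) for i in range(num_spines)]
    leaves = ['leaf%d' % (i + 1) for i in range(num_leaves)]
    # every leaf connects to every spine
    return [(leaf, spine) for leaf in leaves for spine in spines]

# A 2x2 fabric has 4 inter-switch links
print(leaf_spine_links())
```
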

The leaf and spine topology will appear in the ONOS web interface.
The topology will also appear in the Mininet Dashboard application:
Run an iperf test using the Mininet cli:
mininet> iperf h1 h3
The path that the traffic takes is highlighted on the Mininet Dashboard topology:
In this case the traffic flowed between leaf1 and leaf2 via spine1. Since ONOS segment routing uses equal cost multi-path routing, subsequent iperf tests may take the alternative path via spine2.
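ECMP implementations typically choose a path by hashing packet header fields, so every packet in a flow follows the same path while different flows spread across the spines. A simplified sketch of the idea (not the actual ONOS hash function):

```python
import zlib

# Pick a spine for a flow by hashing its 5-tuple (simplified ECMP).
# A given flow always hashes to the same spine, so its packets stay
# in order on one path; different flows may land on different spines.
def pick_spine(src_ip, dst_ip, proto, src_port, dst_port, spines):
    key = '%s,%s,%s,%s,%s' % (src_ip, dst_ip, proto, src_port, dst_port)
    return spines[zlib.crc32(key.encode()) % len(spines)]

spines = ['spine1', 'spine2']
print(pick_spine('10.0.0.1', '10.0.0.3', 'tcp', 5001, 43210, spines))
```
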
Switch to the Charts tab to see traffic trend charts. In this case, the trend charts show the results of six iperf tests. The Traffic chart shows the top flows and the Topology charts show the busy links and the network diameter.

See Writing Applications for an introduction to programming sFlow-RT's analytics engine. Mininet flow analytics provides a simple example of detecting large (elephant) flows.

Wednesday, November 14, 2018

Real-time visibility at 400 Gigabits/s

The chart above demonstrates real-time, up to the second, flow monitoring on a 400 gigabit per second link. The chart shows that the traffic is composed of four, roughly equal, 100 gigabit per second flows.

The data was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network in the world.
This year, the SCinet network includes recently announced 400 gigabit switches from Arista Networks, see Arista Introduces 400 Gigabit Platforms. Each switch delivers 32 400G ports in a 1U form factor.
NRE-36 University of Southern California network topology for SuperComputing 2018
The switches are part of 400G demonstration network connecting USC, Caltech and StarLight booths. The chart shows traffic on a link connecting the USC and Caltech booths.

Providing the visibility needed to manage large scale high speed networks is a significant challenge. In this example, line rate traffic of 80 million packets per second is being monitored on the 400G port. The maximum packet rate for 64 byte packets on a 400 Gigabit, full duplex, link is approximately 1.2 billion packets per second (600 million in each direction). Monitoring all 32 ports requires a solution that can handle over 38 billion packets per second.
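The arithmetic can be checked: on the wire, each 64 byte frame occupies 84 bytes (672 bits) once the 8 byte preamble and 12 byte minimum inter-frame gap are included:

```python
# Maximum 64 byte packet rate on a 400 Gigabit/s link
LINK_BITS_PER_SEC = 400e9
FRAME_BYTES = 64
OVERHEAD_BYTES = 8 + 12                                # preamble + inter-frame gap
BITS_PER_FRAME = (FRAME_BYTES + OVERHEAD_BYTES) * 8    # 672 bits on the wire

pps_one_direction = LINK_BITS_PER_SEC / BITS_PER_FRAME
pps_full_duplex = 2 * pps_one_direction
pps_32_ports = 32 * pps_full_duplex

print('%.0f pps per direction' % pps_one_direction)    # ~595 million
print('%.2f billion pps full duplex' % (pps_full_duplex / 1e9))
print('%.1f billion pps for 32 ports' % (pps_32_ports / 1e9))
```
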

In this case, industry standard sFlow instrumentation built into the Broadcom Tomahawk 3 ASICs in the Arista switches provides line rate visibility. Real-time sFlow telemetry from all ports on all switches in the network streams to a central sFlow analyzer that provides network wide visibility. The overall bandwidth capacity delivered to SC18 exhibitors is 9.322 terabits per second.
The chart was generated using the open source Flow Trend application running on sFlow-RT. The sFlow-RT analytics software takes streaming sFlow telemetry from all the devices in the network, providing real-time visibility to orchestration, DevOps and SDN systems.