Monday, September 20, 2021


Containernet is a fork of the Mininet network emulator that uses Docker containers as hosts in emulated network topologies.

Multipass describes how to build a Mininet testbed that provides real-time traffic visibility using sFlow-RT. This article adapts the testbed for Containernet.

multipass launch --name=containernet bionic
multipass exec containernet -- sudo apt update
multipass exec containernet -- sudo apt -y install ansible git aptitude default-jre
multipass exec containernet -- git clone
multipass exec containernet -- sudo ansible-playbook -i "localhost," -c local containernet/ansible/install.yml
multipass exec containernet -- sudo /bin/sh -c "cd containernet; make develop"
multipass exec containernet -- wget
multipass exec containernet -- tar -xzf sflow-rt.tar.gz
multipass exec containernet -- ./sflow-rt/ sflow-rt mininet-dashboard

Run the above commands in a terminal to create the Containernet virtual machine. 

multipass list

List the virtual machines.

Name                    State             IPv4             Image
primary                 Stopped           --               Ubuntu 20.04 LTS
containernet            Running                            Ubuntu 18.04 LTS

Find the IP address of the containernet virtual machine we just created.

multipass exec containernet -- ./sflow-rt/

Start sFlow-RT. Use a web browser to connect to the VM and access the Mininet Dashboard application running on sFlow-RT.

Open a second shell.

multipass shell containernet

Connect to the Containernet virtual machine.

cp containernet/examples/ .

Copy the Containernet Get started example script.

#!/usr/bin/python
"""This is the most simple example to showcase Containernet."""
from mininet.net import Containernet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.log import info, setLogLevel

setLogLevel('info')

net = Containernet(controller=Controller)
info('*** Adding controller\n')
net.addController('c0')
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='', dimage="ubuntu:trusty")
d2 = net.addDocker('d2', ip='', dimage="ubuntu:trusty")
info('*** Adding switches\n')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
info('*** Creating links\n')
net.addLink(d1, s1)
net.addLink(s1, s2, cls=TCLink, delay='100ms', bw=1)
net.addLink(s2, d2)
info('*** Starting network\n')
net.start()
info('*** Testing connectivity\n')
net.ping([d1, d2])
info('*** Running CLI\n')
CLI(net)
info('*** Stopping network')
net.stop()

Edit the copy and add the highlighted line to enable sFlow monitoring:

sudo python3

Run the Containernet example script.

Finally, the network topology will appear under the Mininet Dashboard topology tab.

Tuesday, August 31, 2021

Netdev 0x15

The recent Netdev 0x15 conference included a number of papers diving into the technology behind Linux as a network operating system. Slides and videos are now available on the conference web site.
Network wide visibility with Linux networking and sFlow describes the Linux switchdev driver used to integrate network hardware with Linux. The talk focuses on network telemetry, showing how standard Linux APIs are used to configure hardware instrumentation and stream telemetry using the industry standard sFlow protocol for data center wide visibility.
Switchdev in the wild describes Yandex's experience of deploying Linux switchdev based switches in production at scale. The diagram from the talk shows the three layer leaf and spine network architecture used in their data centers. Yandex operates multiple data centers, each containing up to 100,000 servers.
Switchdev Offload Workshop provides updates about the latest developments in the switchdev community. 
FRR Workshop discusses the latest developments in the FRRouting project, the open source routing software that is now a de facto standard on Linux network operating systems.

Wednesday, August 18, 2021

Nokia Service Router Linux

Nokia Service Router Linux (SR-Linux) is an open source network operating system running on Nokia's merchant silicon based data center switches.

The following commands configure SR-Linux to sample packets at 1-in-10000, poll counters every 20 seconds, and stream standard sFlow telemetry to an analyzer using the default sFlow port 6343:

system {
    sflow {
        admin-state enable
        sample-rate 10000
        collector 1 {
            network-instance default
            port 6343
        }
    }
}

For each interface:

interface ethernet-1/1 {
    admin-state enable
    sflow {
        admin-state enable
    }
}

Enable sFlow on all switches and ports in the data center fabric for comprehensive visibility.
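Collectors scale the sampled packet counts back up to estimate total traffic. The following is a minimal sketch of the standard sFlow sampling arithmetic (not Nokia-specific, and the traffic figures are made up for illustration):

```python
import math

def scale_samples(num_samples, sampling_rate, interval_seconds):
    """Estimate packets/second from sampled packets: each sample
    represents sampling_rate packets on average."""
    return num_samples * sampling_rate / interval_seconds

def percent_error(num_samples):
    """95% confidence bound on the relative error of a quantity
    estimated from num_samples samples: <= 196 * sqrt(1/n) percent."""
    return 196.0 * math.sqrt(1.0 / num_samples)

# 500 samples in 10 seconds at 1-in-10000 sampling suggests an
# aggregate rate of about 500,000 packets/second...
print(scale_samples(500, 10000, 10))  # 500000.0
# ...with a worst-case error of under 9%.
print(round(percent_error(500), 1))   # 8.8
```

Higher sampling rates reduce agent and collector load at the cost of needing more time to accumulate enough samples for a given accuracy.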

An instance of the sFlow-RT real-time analytics software converts the raw sFlow telemetry into actionable measurements to drive operational dashboards and automation (e.g. DDoS mitigation, traffic engineering, etc.).
docker run --name sflow-rt -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus
A simple way to get started is to run the Docker sflow/prometheus image on the sFlow analyzer host to launch sFlow-RT along with useful applications for exploring the telemetry. Access the web interface on port 8008.
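The telemetry is also easy to consume programmatically through sFlow-RT's REST API on port 8008. The sketch below builds a query against the metrics endpoint; the host name and choice of metric are illustrative assumptions:

```python
import json
import urllib.request

def metric_url(host, agents='ALL', metric='max:ifinutilization'):
    """Build an sFlow-RT REST query for a metric across agents."""
    return 'http://%s:8008/metric/%s/%s/json' % (host, agents, metric)

def top_value(response_text):
    """Pull the metric value out of the JSON array sFlow-RT returns."""
    results = json.loads(response_text)
    return results[0]['metricValue'] if results else None

def busiest_interface_utilization(host):
    """Query a live sFlow-RT instance (network access required)."""
    with urllib.request.urlopen(metric_url(host)) as resp:
        return top_value(resp.read().decode())

print(metric_url('analyzer.example.com'))
# http://analyzer.example.com:8008/metric/ALL/max:ifinutilization/json
```

Calling busiest_interface_utilization() against a live instance returns the highest interface utilization seen across all reporting agents.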

Tuesday, June 15, 2021

DDoS mitigation using a Linux switch

Linux as a network operating system describes the benefits of using standard Linux as a network operating system for hardware switches. A key benefit is that the behavior of the physical network can be efficiently emulated using standard Linux virtual machines and/or containers.

In this article, CONTAINERlab will be used to create a simple testbed that can be used to develop a real-time DDoS mitigation controller. This solution is highly scalable: each hardware switch can monitor and filter terabits per second of traffic, and a single controller instance can monitor and control hundreds of switches.

Create test network

The following ddos.yml file specifies the testbed topology (shown in the screen shot at the top of this article):

name: ddos
topology:
  nodes:
    router:
      kind: linux
      image: sflow/frr
    attacker:
      kind: linux
      image: sflow/hping3
    victim:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["router:swp1","attacker:eth1"]
    - endpoints: ["router:swp2","victim:eth1"]

Run the following command to start the emulation:

sudo containerlab deploy ddos.yml

Configure interfaces on router:

interface swp1
 ip address
interface swp2
 ip address

Configure attacker interface:

ip addr add dev eth1
ip route add via

Configure victim interface:

ip addr add dev eth1
ip route add via

Verify connectivity between the attacker and the victim:

sudo docker exec -it clab-ddos-attacker ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.069 ms

Install visibility and control applications on router

The advantage of using Linux as a network operating system is that you can develop and install software to tailor the network to address specific requirements. In this case, for DDoS mitigation, we need real-time visibility to detect DDoS attacks and real-time control to filter out the attack traffic.

Open a shell on router:

sudo docker exec -it clab-ddos-router sh

Install and configure Host sFlow agent:

apk --update add build-base linux-headers openssl-dev dbus-dev gcc git
git clone
cd host-sflow
make install

Edit /etc/hsflowd.conf

sflow {
  agent = eth0
  collector { ip= udpport=6343 }
  dent { sw=on switchport=swp.* }
}

Note: On a hardware switch, set sw=off to offload packet sampling to hardware.

Start hsflowd:


Download and run the tc_server Python script, which adds and removes tc flower filters using a REST API:

nohup python3 tc_server > /dev/null &

The following command shows the Linux tc filters used in this example:

# tc filter show dev swp1 ingress
filter protocol all pref 1 matchall chain 0 
filter protocol all pref 1 matchall chain 0 handle 0x1 
	action order 1: sample rate 1/10000 group 1 trunc_size 128 continue
	 index 3 ref 1 bind 1

filter protocol ip pref 14 flower chain 0 
filter protocol ip pref 14 flower chain 0 handle 0x1 
  eth_type ipv4
  ip_proto udp
  src_port 53
	action order 1: gact action drop
	 random type none pass val 0
	 index 1 ref 1 bind 1

The output shows the standard Linux tc-matchall and tc-flower filters used to monitor and drop traffic on the router. The Host sFlow agent automatically installs a matchall rule on each interface in order to sample packets. The tc_server script adds and removes flower filters to drop unwanted traffic. On a hardware router, the filters are offloaded by the Linux switchdev driver to the router ASIC for line rate performance.


Add filter:

curl -X PUT -H "Content-Type: application/json" \
--data '{"ip_proto":"udp","dst_ip":"","src_port":"53"}' \

Show filters:

curl http://clab-ddos-router:8081/swp1

Remove filter:

curl -X DELETE http://clab-ddos-router:8081/swp1/10
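The same REST API can be driven from Python using only the standard library. The endpoint layout below is taken from the curl examples above; the filter body is illustrative:

```python
import json
import urllib.request

BASE = 'http://clab-ddos-router:8081'

def filter_request(port, filter_id, spec=None, method='PUT'):
    """Build the request to add (PUT) or remove (DELETE) a tc flower
    filter via the tc_server REST API."""
    url = '%s/%s/%s' % (BASE, port, filter_id)
    data = json.dumps(spec).encode() if spec is not None else None
    req = urllib.request.Request(url, data=data, method=method)
    if data:
        req.add_header('Content-Type', 'application/json')
    return req

def block_dns_amplification(port, filter_id):
    """Install a drop rule for UDP traffic sourced from port 53
    (network access to the router is required)."""
    spec = {'ip_proto': 'udp', 'src_port': '53'}
    urllib.request.urlopen(filter_request(port, filter_id, spec))

# block_dns_amplification('swp1', 10) would add the filter;
# urllib.request.urlopen(filter_request('swp1', 10, method='DELETE'))
# would remove it again.
```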

Build an automated DDoS mitigation controller

The following sFlow-RT ddos.js script automatically detects and drops UDP amplification attacks:

var block_minutes = 1;
var thresh = 10000;

// flow definition: the keys must match the [dst_ip,src_port] split below
setFlow('udp_target', {keys:'ipdestination,udpsourceport', value:'frames'});
setThreshold('attack',{metric:'udp_target', value:thresh, byFlow:true, timeout:2});

var id = 10;
var controls = {};
setEventHandler(function(evt) {
  var key = evt.flowKey;
  if(controls[key]) return;

  var prt = ifName(evt.agent,evt.dataSource);
  if(!prt) return;

  var [dst_ip,src_port] = key.split(',');
  var filter = {
    'ip_proto':'udp',
    'dst_ip':dst_ip,
    'src_port':src_port
    // uncomment following line for hardware routers
    // ,'skip_sw':'skip_sw'
  };
  var url = 'http://'+evt.agent+':8081/'+prt+'/'+id++;
  try {
    http(url,'put','application/json',JSON.stringify(filter));
  } catch(e) {
    logWarning(url + ' put failed');
  }
  controls[key] = {time:evt.timestamp, evt:evt, url:url};
  logInfo('block ' + evt.flowKey);
},['attack']);

setIntervalHandler(function(now) {
  for(var key in controls) {
    var control = controls[key];
    if(now - control.time < 1000 * 60 * block_minutes) continue;
    var evt = control.evt;
    if(thresholdTriggered(evt.thresholdID,evt.agent,evt.dataSource+'.'+evt.metric,evt.flowKey)) {
      // attack is ongoing - keep control
      continue;
    }
    try {
      http(control.url,'delete');
    } catch(e) {
      logWarning(control.url + ' delete failed');
    }
    delete controls[key];
    logInfo('allow '+control.evt.flowKey);
  }
});
See Writing Applications for more information on the script.

Run the controller script on the CONTAINERlab host using the sFlow-RT real-time analytics engine:

sudo docker run --network=host -v $PWD/ddos.js:/sflow-rt/ddos.js \
sflow/prometheus -Dscript.file=ddos.js

Verify that sFlow is being received by checking the sFlow-RT status page, http://containerlab_ip:8008/

Test controller

Monitor for attack traffic on the victim:

sudo docker exec -it clab-ddos-victim sh
apk --update add tcpdump
tcpdump -n -i eth1 udp port 53

Start attack:

sudo docker exec -it clab-ddos-attacker \
hping3 --flood --udp -k -s 53 --rand-source

There should be a brief flurry of packets seen at the victim before the controller detects and blocks the attack. The entire period between launching the attack and the attack traffic being blocked is under a second.

Thursday, May 20, 2021

Linux as a network operating system

NVIDIA Linux Switch enables any standard Linux distribution to be used as the operating system on the NVIDIA Spectrum™ switches. Unlike network operating systems that are Linux based, where you are limited to a specific version of Linux and control of the hardware is restricted to vendor specific software modules, Linux Switch allows you to install an unmodified version of your favorite Linux distribution along with familiar Linux monitoring and orchestration tools. 

The key to giving Linux control of the switch hardware is the switchdev module - a standard part of recent Linux kernels. Linux switchdev is an in-kernel driver model for switch devices which offload the forwarding (data) plane from the kernel. Integrating switch ASIC drivers in the Linux kernel makes switch ports appear as additional Linux network interfaces that can be configured and managed using standard Linux tools.

The mlxsw wiki provides instructions for installing Linux using ONIE or PXE boot on Mellanox switch hardware, for example, on NVIDIA® Spectrum®-3 based SN4000 series switches, providing 1G - 400G port speeds to handle scale-out data center applications.

Major benefits of using standard Linux as the switch operating system include:

  • no licensing fees, feature restrictions, or license management complexity associated with proprietary network operating systems
  • large ecosystem of open source and commercial software available for Linux
  • software updates and security patches available through Linux distribution
  • install the same Linux distribution on the switches and servers to reduce operational complexity and leverage existing expertise
  • run instances of the Linux distribution as virtual machines or containers to test configurations and develop automation scripts
  • standard Linux APIs, and availability of Linux developers, lowers the barrier to customization, making it possible to tailor network behavior to address application / business requirements

The switchdev driver for NVIDIA Spectrum ASICs exposes advanced dataplane instrumentation through standard Linux APIs. This article will explore how the open source Host sFlow agent uses the standard Linux APIs to stream real-time telemetry from the ASIC using industry standard sFlow.

The diagram shows the elements of the solution. Host sFlow agents installed on servers and switches stream sFlow telemetry to an instance of the sFlow-RT real-time analytics engine. The analytics provide a comprehensive, up to the second, view of performance to drive automation.

Note: If you are unfamiliar with sFlow, or want to hear about the latest developments, Real-time network telemetry for automation provides an overview and includes a demonstration of monitoring and troubleshooting network and system performance of a GPU cluster.

Download the latest Host sFlow agent sources:

git clone

INSTALL.Linux provides information on compiling Host sFlow on Linux. The following instructions assume a DEB based distribution (Debian, Ubuntu):

cd host-sflow/

It isn't necessary to install development tools on the switch. All major Linux distributions are available as Docker images. Select a Docker image that matches the operating system version on the switch and use it to build the package.

Copy the resulting hsflowd package to the switch and install:

sudo dpkg -i hsflowd_2.0.34-3_amd64.deb

Next, edit the /etc/hsflowd.conf file to configure the agent:

sflow {
  collector { ip= }
  systemd { }
  psample { group=1 egress=on }
  dropmon { group=1 start=on sw=off hw=on }
  dent { sw=off switchport=swp.* }
}

The collector ip setting specifies the address of the sFlow collector and swp.* is a regular expression used to identify front panel switch ports. The systemd{} module monitors services running on the switch - see Monitoring Linux services, the psample{} module receives randomly sampled packets from the switch ASIC - see Linux 4.11 kernel extends packet sampling support, the dropmon{} module receives dropped packet notifications - see Using sFlow to monitor dropped packets, and the dent{} module automatically configures packet sampling of traffic on front panel switch ports - see Packet Sampling.

Note: The same configuration file can be used for every switch in the network, making configuration of the agents easy to automate.

Enable and start the agent.

sudo systemctl enable hsflowd.service
sudo systemctl start hsflowd.service

Finally, use the pre-built sflow/prometheus Docker image to start a copy of the sFlow-RT real-time analytics software on the collector host:

docker run -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus

The web interface is accessible on port 8008.

The included Metric Browser application lets you explore the metrics that are being streamed. The chart updates in real time as data arrives and in this case identifies the interface in the network with the greatest utilization. The standard set of metrics exported by the Host sFlow agent includes interface counters as well as host cpu, memory, disk and service performance metrics. Metrics lists the set of available metrics.

The included Flow Browser application provides an up to the second view of traffic flows. Defining Flows describes the fields that can be used to break out traffic.

Note: The NVIDIA Spectrum 2/3 ASIC includes packet transit delay, selected queue and queue depth with each sampled packet. This information is delivered via the Linux PSAMPLE netlink channel to the Host sFlow agent and included in the sFlow telemetry. These fields are accessible when defining flows in sFlow-RT. See Transit delay and queueing for details.

The included Discard Browser is used to explore packets that are being dropped in the network.

Note: The NVIDIA Spectrum 2/3 ASIC includes instrumentation to capture dropped packets and the reason they were dropped. The information is delivered via the Linux drop_monitor netlink channel to the Host sFlow agent and included in the sFlow telemetry. See Real-time trending of dropped packets for more information.

The included Prometheus application exports metrics to the Prometheus time series database where they can be used to drive Grafana dashboards (e.g. sFlow-RT Countries and Networks, sFlow-RT Health, and sFlow-RT Network Interfaces).

Linux as a network operating system is an exciting advancement if you are interested in simplifying network and system management. Using the Linux networking APIs as a common abstraction layer on servers and switches makes it possible to manage network and compute infrastructure as a unified system.

Monday, May 3, 2021

Cisco 8000 Series routers

Cisco 8000 Series routers are "400G optimized platforms that scale from 10.8 Tbps to 260 Tbps." The routers are built around Cisco Silicon One™ ASICs. The Silicon One ASIC includes the instrumentation needed to support industry standard sFlow real-time streaming telemetry.
Note: The Cisco 8000 Series routers also support Cisco Netflow. Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX describes why you should choose sFlow if you are interested in real-time monitoring and control applications.
The following commands configure a Cisco 8000 series router to sample packets at 1-in-20,000 and stream telemetry to an sFlow analyzer on UDP port 6343.
flow exporter-map SF-EXP-MAP-1
 version sflow v5
 packet-length 1468
 transport udp 6343
 source GigabitEthernet0/0/0/1
 dfbit set

Configure the sFlow analyzer address in an exporter-map.

flow monitor-map SF-MON-MAP
 record sflow
 sflow options
  if-counters polling-interval 300
  input ifindex physical
  output ifindex physical
 exporter SF-EXP-MAP-1

Configure sFlow options in a monitor-map.

sampler-map SF-SAMP-MAP
 random 1 out-of 20000

Define the sampling rate in a sampler-map.
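To sanity-check a sampling rate before deploying it, it helps to estimate the sample stream it will generate. A back-of-the-envelope sketch (the traffic assumptions are illustrative, not from the article):

```python
def samples_per_second(packets_per_second, sampling_rate):
    """Expected sFlow samples generated per second at 1-in-N sampling."""
    return packets_per_second / sampling_rate

# A 100G port at 50% load carrying 500-byte packets moves roughly
# 12.5 million packets/second...
pps = 100e9 * 0.5 / (500 * 8)
print(pps)                             # 12500000.0
# ...which 1-in-20,000 sampling reduces to a modest sample stream.
print(samples_per_second(pps, 20000))  # 625.0
```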

interface GigabitEthernet0/0/0/3
 flow datalinkframesection monitor-map SF-MON-MAP sampler SF-SAMP-MAP ingress

Enable sFlow on each interface for complete visibility into network traffic.

The above configuration instructions are for IOS-XR. Cisco goes SONiC on Cisco 8000 describes Cisco's support for the open source SONiC network operating system. SONiC describes how sFlow is implemented and configured on SONiC.

The diagram shows the general architecture of an sFlow monitoring deployment. All the switches stream sFlow telemetry to a central sFlow analyzer for network wide visibility. Host sFlow agents installed on servers can extend visibility into the compute infrastructure, and provide network visibility from virtual machines in the public cloud. In this instance, the sFlow-RT real-time analyzer provides an up to the second view of performance that can be used to drive operational dashboards and network automation. The recommended sFlow configuration settings are optimized for real-time monitoring of the large scale networks targeted by Cisco 8000 routers.

docker run -p 8008:8008 -p 6343:6343/udp sflow/prometheus

Getting started with sFlow-RT is very simple, for example, the above command uses the pre-built sflow/prometheus Docker image to start analyzing sFlow. Real-time DDoS mitigation using BGP RTBH and FlowSpec, Monitoring leaf and spine fabric performance, and Flow metrics with Prometheus and Grafana describe additional use cases for real-time sFlow analytics.

Note: There is a wide range of options for sFlow analysis. See sFlow Collectors for a list of open source and commercial software.

Cisco first introduced sFlow support in the Nexus 3000 Series in 2012. Today, there is a range of Cisco products that include sFlow support. The inclusion of sFlow instrumentation in Silicon One is likely to expand support across the range of upcoming products based on these ASICs. The broad support for sFlow by Cisco and other leading vendors (e.g. A10, Arista, Aruba, Cumulus, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, Quanta, and ZTE) makes sFlow an attractive option for multi-vendor network performance monitoring, particularly for those interested in real-time monitoring and automation.

Monday, April 5, 2021


CONTAINERlab is a Docker orchestration tool for creating virtual network topologies. This article describes how to build and monitor the leaf and spine topology shown above.

Note: Docker testbed describes a simple testbed for experimenting with sFlow analytics using Docker Desktop, but it doesn't have the ability to construct complex topologies. 

multipass launch --cpus 2 --mem 4G --name containerlab
multipass shell containerlab

The above commands use the multipass command line tool to create an Ubuntu virtual machine and open shell access.

sudo apt update
sudo apt -y install
bash -c "$(curl -sL"

Type the above commands into the shell to install CONTAINERlab.

Note: Multipass describes how to build a Mininet network emulator to experiment with software defined networking.

name: test
topology:
  nodes:
    leaf1:
      kind: linux
      image: sflow/frr
    leaf2:
      kind: linux
      image: sflow/frr
    spine1:
      kind: linux
      image: sflow/frr
    spine2:
      kind: linux
      image: sflow/frr
    h1:
      kind: linux
      image: alpine:latest
    h2:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["leaf1:eth1","spine1:eth1"]
    - endpoints: ["leaf1:eth2","spine2:eth1"]
    - endpoints: ["leaf2:eth1","spine1:eth2"]
    - endpoints: ["leaf2:eth2","spine2:eth2"]
    - endpoints: ["h1:eth1","leaf1:eth3"]
    - endpoints: ["h2:eth1","leaf2:eth3"]

The test.yml file shown above specifies the topology. In this case we are using FRRouting (FRR) containers for the leaf and spine switches and Alpine Linux containers for the two hosts.

sudo containerlab deploy --topo test.yml

The above command creates the virtual network and starts containers for each of the network nodes.

sudo containerlab inspect --topo test.yml

Type the command above to list the container instances in the topology.

The table shows each of the containers and the assigned IP addresses.

sudo docker exec -it clab-test-leaf1 vtysh

Type the command above to run the FRR VTY shell so that the switch can be configured.

leaf1# show running-config 
Building configuration...

Current configuration:
frr version 7.5_git
frr defaults datacenter
hostname leaf1
log stdout
interface eth3
 ip address
router bgp 65006
 bgp router-id
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 address-family ipv4 unicast
route-map ALLOW-ALL permit 100
ip nht resolve-via-default
line vty

The BGP configuration for leaf1 is shown above.

Note: We are using BGP unnumbered to simplify the configuration so peers are automatically discovered.

The other switches, leaf2, spine1, and spine2, have similar configurations.

Next we need to configure the hosts.

sudo docker exec -it clab-test-h1 sh

Open a shell on h1.

ip addr add dev eth1
ip route add via

Configure networking on h1. The other host, h2, has a similar configuration.

sudo docker exec -it clab-test-h1 ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=61 time=0.928 ms
64 bytes from seq=1 ttl=61 time=0.160 ms
64 bytes from seq=2 ttl=61 time=0.201 ms

Use ping to verify that there is connectivity between h1 and h2.

apk add iperf3

Install iperf3 on h1 and h2.

iperf3 -s --bind

Run an iperf3 server on h2.

iperf3 -c
Connecting to host, port 5201
[  5] local port 52066 connected to port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.36 MBytes       
[  5]   1.00-2.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.55 MBytes       
[  5]   2.00-3.00   sec  1.44 GBytes  12.4 Gbits/sec    0   1.55 MBytes       
[  5]   3.00-4.00   sec  1.44 GBytes  12.3 Gbits/sec    0   2.42 MBytes       
[  5]   4.00-5.00   sec  1.46 GBytes  12.6 Gbits/sec    0   3.28 MBytes       
[  5]   5.00-6.00   sec  1.42 GBytes  12.2 Gbits/sec    0   3.28 MBytes       
[  5]   6.00-7.00   sec  1.44 GBytes  12.4 Gbits/sec    0   3.28 MBytes       
[  5]   7.00-8.00   sec  1.28 GBytes  11.0 Gbits/sec    0   3.28 MBytes       
[  5]   8.00-9.00   sec  1.40 GBytes  12.0 Gbits/sec    0   3.28 MBytes       
[  5]   9.00-10.00  sec  1.25 GBytes  10.7 Gbits/sec    0   3.28 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec                  receiver

Run an iperf3 test on h1.

Now that we have a working test network, it's time to add some monitoring.

We will be installing sFlow agents on the switches and hosts that will stream telemetry to sFlow-RT analytics software, providing a real-time, network-wide view of performance.

sudo docker exec -it clab-test-leaf1 sh

Open a shell on leaf1.

apk --update add libpcap-dev build-base linux-headers gcc git
git clone
cd host-sflow/
make install

Install Host sFlow agent on leaf1.

Note: The steps above could be included in a Dockerfile in order to create an image with built-in instrumentation.

vi /etc/hsflowd.conf

Edit the Host sFlow configuration file.

sflow {
  polling = 30
  sampling = 400
  collector { ip = }
  pcap { dev = eth1 }
  pcap { dev = eth2 }
  pcap { dev = eth3 }
}

The above settings enable packet sampling on interfaces eth1, eth2 and eth3.

sudo docker exec -d clab-test-leaf1 /usr/sbin/hsflowd -d

Start the Host sFlow agent on leaf1.

Install and run Host sFlow agents on the remaining switches and hosts, leaf2, spine1, spine2, h1, and h2.

sudo docker run --rm -d -p 6343:6343/udp -p 8008:8008 --name sflow-rt sflow/prometheus

Use the pre-built sflow/prometheus container to start an instance of sFlow-RT to collect and analyze the telemetry.

multipass list

List the multipass virtual machines.

containerlab            Running      Ubuntu 20.04 LTS

Use a web browser to connect to the sFlow-RT web interface on port 8008.

The sFlow-RT dashboard verifies that telemetry is being received from 6 agents (the four switches and two hosts).

The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. 

The chart shows that the traffic flows via spine2. Repeated tests showed that traffic was never taking the path via spine1, indicating that the ECMP hash function was not taking into account the TCP ports.

sudo docker exec clab-test-leaf1 sysctl -w net.ipv4.fib_multipath_hash_policy=1
sudo docker exec clab-test-leaf2 sysctl -w net.ipv4.fib_multipath_hash_policy=1

We are using a newer Linux kernel, so running the above commands changes the hashing algorithm to include the layer 4 headers, see Celebrating ECMP in Linux — part one and Celebrating ECMP in Linux — part two.
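The effect of the hash policy can be sketched in a few lines of Python (an illustration of the concept, not the kernel's actual hash function; the addresses are made up):

```python
import hashlib

def ecmp_path(paths, src_ip, dst_ip, proto, src_port, dst_port,
              use_l4=False):
    """Pick a next hop by hashing flow fields. use_l4=False mimics
    fib_multipath_hash_policy=0 (L3 fields only); use_l4=True mimics
    policy 1, which adds the L4 ports to the hash."""
    fields = [src_ip, dst_ip, str(proto)]
    if use_l4:
        fields += [str(src_port), str(dst_port)]
    digest = hashlib.md5('|'.join(fields).encode()).digest()
    return paths[digest[0] % len(paths)]

paths = ['spine1', 'spine2']
# With the L3-only policy, every flow between the same pair of hosts
# hashes to the same spine, whatever the TCP ports...
a = ecmp_path(paths, '172.16.1.1', '172.16.2.1', 6, 1001, 5201)
b = ecmp_path(paths, '172.16.1.1', '172.16.2.1', 6, 2002, 5201)
assert a == b
# ...whereas with use_l4=True each flow hashes independently, so
# parallel iperf3 tests can spread across both spines.
```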

Topology describes how knowledge of network topology can be used to enhance the analytics capabilities of sFlow-RT.

{
  "links": {
    "link1": { "node1":"leaf1","port1":"eth1","node2":"spine1","port2":"eth1"},
    "link2": { "node1":"leaf1","port1":"eth2","node2":"spine2","port2":"eth1"},
    "link3": { "node1":"leaf2","port1":"eth1","node2":"spine1","port2":"eth2"},
    "link4": { "node1":"leaf2","port1":"eth2","node2":"spine2","port2":"eth2"}
  }
}

The links specification in the test.yml file can easily be converted into sFlow-RT's JSON format.
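As a sketch, a short Python script can generate this JSON from the endpoint pairs in test.yml (the link list is inlined here to avoid a YAML parser dependency):

```python
import json

# endpoint pairs copied from the links section of test.yml
links = [
    ["leaf1:eth1", "spine1:eth1"],
    ["leaf1:eth2", "spine2:eth1"],
    ["leaf2:eth1", "spine1:eth2"],
    ["leaf2:eth2", "spine2:eth2"],
]

def to_sflow_rt(links):
    """Convert containerlab endpoint pairs into sFlow-RT's links JSON."""
    topology = {"links": {}}
    for i, (a, b) in enumerate(links, start=1):
        node1, port1 = a.split(":")
        node2, port2 = b.split(":")
        topology["links"]["link%d" % i] = {
            "node1": node1, "port1": port1,
            "node2": node2, "port2": port2,
        }
    return topology

print(json.dumps(to_sflow_rt(links), indent=1))
```

The resulting document can then be loaded into sFlow-RT as described in Topology.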

CONTAINERlab is a very promising tool for efficiently emulating complex networks. CONTAINERlab supports Nokia SR-Linux, Juniper vMX, Cisco IOS XRv9k and Arista vEOS, as well as Linux containers. Many of the proprietary network operating systems are only delivered as virtual machines and Vrnetlab integration makes it possible for CONTAINERlab to run these virtual machines. However, virtual machine nodes require considerably more resources than simple containers.

Linux with open source routing software (FRRouting) is an accessible alternative to vendor routing stacks (no registration / license required, no restriction on copying means you can share images on Docker Hub, no need for virtual machines).  FRRouting is popular in production network operating systems (e.g. Cumulus Linux, SONiC, DENT, etc.) and the VTY shell provides an industry standard CLI for configuration, so labs built around FRR allow realistic network configurations to be explored.