Monday, February 28, 2022

Topology aware fabric analytics

Real-time telemetry from a 5 stage Clos fabric describes how to emulate and monitor the topology shown above using Containerlab and sFlow-RT. This article extends the example to demonstrate how topology awareness enhances analytics.
docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Download the Containerlab topology file.
sed -i "s/prometheus/topology/g" clos5.yml
Change the sFlow-RT image from sflow/prometheus to sflow/topology in the Containerlab topology. The sflow/topology image packages sFlow-RT with useful applications that combine topology awareness with analytics.
containerlab deploy -t clos5.yml
Deploy the topology.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.json
Download the sFlow-RT topology file.
curl -X PUT -H "Content-Type: application/json" -d @clos5.json \
http://localhost:8008/topology/json
Post the topology to sFlow-RT.
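The topology can be read back through the same REST endpoint to confirm that it was accepted. The following minimal Python sketch assumes sFlow-RT is listening on the default localhost:8008 address and that the posted clos5.json uses the documented nodes / links structure:
#!/usr/bin/env python3
# Read back the topology from sFlow-RT and report node and link counts.

import json
import urllib.request

with urllib.request.urlopen('http://localhost:8008/topology/json') as resp:
  topology = json.load(resp)

print('nodes: %d' % len(topology.get('nodes', {})))
print('links: %d' % len(topology.get('links', {})))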
Connect to the sFlow-RT Topology application, http://localhost:8008/app/topology/html/. The dashboard confirms that all the links and nodes in the topology are streaming telemetry. There is currently no traffic on the network, so none of the nodes in the topology are sending flow data.
docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2
Generate traffic. You should see the Nodes No Flows count drop as switches in the traffic path start streaming flow data. In a production network, traffic flows through all the switches, so every switch will be streaming flow telemetry.
Click on the Locate tab and enter the address 172.16.4.2. Click submit to find the location of the address. Topology awareness allows sFlow-RT to examine streaming flow data from all the links in the network and determine switch ports and MAC addresses associated with the point of ingress into the fabric.
Connect to the sFlow-RT Fabric Metrics application, http://localhost:8008/app/fabric-metrics/html/. The Traffic chart identifies each iperf3 test as a large (elephant) flow - see SDN and large flows for a discussion of the importance of large flow visibility. The Busy Links chart reports the number of links in the topology with utilization exceeding a threshold (in this case 80% utilization).
The Ports tab contains charts showing switch ports with the highest ingress/egress utilization, packet discards, and packet errors.
Connect to the sFlow-RT Trace Flow application, http://localhost:8008/app/trace-flow/html/.
ipsource=172.16.1.2
Enter a traffic filter and click Submit.
docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2
Run an iperf3 test and the path the traffic takes across the network topology will be immediately displayed.
Connect to the sFlow-RT Flow Browser application, http://localhost:8008/app/browse-flows/html/index.html?keys=ipsource&value=bps. Run a sequence of iperf3 tests and the traffic will be immediately shown. Defining Flows describes flow attributes that can be added to the query to examine inner / outer tunneled traffic, VxLAN, MPLS, GRE, etc.
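Flows can also be defined and queried programmatically through the sFlow-RT REST API. The sketch below is not part of the original walkthrough: the flow name iperf is arbitrary, the keys mirror the Flow Browser query above, and the bytes value is one of the standard flow value options (see the sFlow-RT REST API documentation for the full set of fields).
#!/usr/bin/env python3
# Define a flow in sFlow-RT and list the largest matching active flows.
# The flow name "iperf" is arbitrary; assumes sFlow-RT on localhost:8008.

import json
import urllib.request

RT = 'http://localhost:8008'

flow = {'keys': 'ipsource', 'value': 'bytes'}
req = urllib.request.Request(RT + '/flow/iperf/json',
                             data=json.dumps(flow).encode(),
                             headers={'Content-Type': 'application/json'},
                             method='PUT')
urllib.request.urlopen(req)

with urllib.request.urlopen(RT + '/activeflows/ALL/iperf/json') as resp:
  for entry in json.load(resp):
    print(entry['key'], entry['value'])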

Trace Flow demonstrated that flow telemetry is streamed by each switch port along the path from source to destination host to deliver end-to-end visibility. Simply summing the data would overestimate the size of the flow. Adding topology information allows sFlow-RT to intelligently combine data, eliminating duplicate measurements, to provide accurate measurements of traffic crossing the fabric.
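To see why, consider a single 1 Gbit/s flow observed by every switch on its path. Each switch reports the flow, so a naive sum counts it multiple times. The hypothetical sketch below illustrates one simple deduplication strategy, taking the maximum across observation points; it is meant as an illustration of the problem rather than a description of sFlow-RT's exact algorithm.
#!/usr/bin/env python3
# Illustration only: the same 1 Gbit/s flow reported by each switch on its path.
# Switch names and values are hypothetical.

observations = {
  'leaf1':  1.00e9,
  'spine1': 0.98e9,  # per-switch estimates vary slightly due to sampling
  'leaf4':  1.01e9,
}

naive_sum = sum(observations.values())  # ~3 Gbit/s, a 3x overestimate
dedup     = max(observations.values())  # ~1 Gbit/s, close to the true rate

print('sum: %.2f Gbit/s, max: %.2f Gbit/s' % (naive_sum / 1e9, dedup / 1e9))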

containerlab destroy -t clos5.yml
When you are finished, run the above command to stop the containers and free the resources associated with the emulation.

Moving the monitoring solution from Containerlab to production is straightforward since sFlow is widely implemented in datacenter equipment from vendors including: A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE. In addition, the open source Host sFlow agent makes it easy to extend visibility beyond the physical network into the compute infrastructure.

Monday, February 21, 2022

Real-time telemetry from a 5 stage Clos fabric

Containerlab described how to use FRRouting and Host sFlow in a Docker container to emulate switches in a Clos (leaf/spine) fabric. The recently released open source project, https://github.com/sflow-rt/containerlab, simplifies and automates the steps needed to build and monitor topologies.
docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Run the above command to start Containerlab if you already have Docker installed; the ~/clab directory will be created to persist settings. Otherwise, Installation provides detailed instructions for a variety of platforms.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Next, download the topology file for the 5 stage Clos fabric shown at the top of this article.
containerlab deploy -t clos5.yml
Finally, deploy the topology.
Note: The 3 stage Clos topology, clos3.yml, described in the previous article is also available.
The initial launch may take a couple of minutes as the container images are downloaded for the first time. Once the images are downloaded, the topology deploys in around 10 seconds.
An instance of the sFlow-RT real-time analytics engine receives industry standard sFlow telemetry from all the switches in the network; each switch in the topology is configured to send sFlow to this instance. In this case, Containerlab is running the pre-built sflow/prometheus image, which packages sFlow-RT with useful applications for exploring the data.
Connect to the web interface, http://localhost:8008. The sFlow-RT dashboard verifies that telemetry is being received from 10 agents (the 10 switches in the Clos fabric). See the sFlow-RT Quickstart guide for more information.
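The same check can be scripted against the REST API. A minimal sketch, assuming sFlow-RT is on the default localhost:8008 address:
#!/usr/bin/env python3
# List the sFlow agents known to sFlow-RT; should show the 10 fabric switches.

import json
import urllib.request

with urllib.request.urlopen('http://localhost:8008/agents/json') as resp:
  agents = json.load(resp)

print('%d agents' % len(agents))
for address in sorted(agents):
  print(address)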
The screen capture shows a real-time view of traffic flowing across the network during a series of iperf3 tests. Click on the sFlow-RT Apps menu and select the browse-flows application, or click here for a direct link to a chart with the settings shown above.
docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2
Each of the hosts in the network has an iperf3 server, so running the above command will test bandwidth between h1 and h4.
docker exec -it clab-clos5-leaf1 vtysh
Linux with open source routing software (FRRouting) is an accessible alternative to vendor routing stacks (no registration or license required, no restriction on copying means images can be shared on Docker Hub, no need for virtual machines). FRRouting is popular in production network operating systems (e.g. Cumulus Linux, SONiC, DENT, etc.), and the VTY shell provides an industry standard CLI for configuration, so labs built around FRR allow realistic network configurations to be explored.
containerlab destroy -t clos5.yml
When you are finished, run the above command to stop the containers and free the resources associated with the emulation.

Moving the monitoring solution from Containerlab to production is straightforward since sFlow is widely implemented in datacenter equipment from vendors including: A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE. In addition, the open source Host sFlow agent makes it easy to extend visibility beyond the physical network into the compute infrastructure.

Monday, February 14, 2022

UDP vs TCP for real-time streaming telemetry

This article compares UDP and TCP and their suitability for transporting real-time network telemetry. The results demonstrate that poor throughput and high message latency in the face of packet loss make TCP unsuitable for providing visibility during congestion events. We show that the use of UDP transport by the sFlow telemetry standard overcomes the limitations of TCP, delivering robust real-time visibility during the extreme traffic events when visibility is most needed.
Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region, December 10th, 2021: "This congestion immediately impacted the availability of real-time monitoring data for our internal operations teams, which impaired their ability to find the source of congestion and resolve it."

The data in these charts was created using Mininet to simulate packet loss in a simple network. If you are interested in replicating these results, Multipass describes how to run Mininet on your laptop.

sudo mn --link tc,loss=5

For example, the above command simulates a simple network consisting of two hosts connected by a switch. A packet loss rate of 5% is configured for each link.

Simple Python scripts running on the simulated hosts were used to simulate transfer of network telemetry.

#!/usr/bin/env python3

import socket
import time
import struct

HOST = '10.0.0.1'
PORT = 65432

buf = bytearray(1000)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
  s.connect((HOST, PORT))
  while True:
    now = time.time()
    buf[0:8] = struct.pack('>d',now)
    s.sendall(buf)
    time.sleep(0.01)

The above sender.py script builds a 1000 byte timestamped message, sends it, sleeps for 0.01 seconds, and repeats these steps in a continuous loop. This level of message traffic (100 messages per second, roughly 0.8 Mbit/s) is typical of what you might see from an sFlow agent embedded in a datacenter switch on a moderately busy network.

#!/usr/bin/env python3

import socket
import time
import struct

HOST = '10.0.0.1'
PORT = 65432 

def recv_msg(sock,n):
  data = bytearray()
  while len(data) < n:
    packet = sock.recv(n - len(data))
    if not packet:
      return None
    data.extend(packet)
  return data

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
  s.bind((HOST, PORT))
  s.listen()
  conn, addr = s.accept()
  with conn:
    while True:
      data = recv_msg(conn,1000)
      if not data:
        break
      time_sent = struct.unpack('>d',data[0:8])[0]
      time_now = time.time()
      print("%f %f" % (time_now,time_now - time_sent), flush=True)

The above receiver.py script accepts a connection from the sender, reads messages, and computes latency using the encoded sender timestamp.
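The UDP versions of the scripts are not listed in the article; a minimal sketch of what the UDP equivalents might look like, keeping the same 1000 byte timestamped message format, is shown below.

#!/usr/bin/env python3
# udp_sender.py - UDP counterpart to sender.py (sketch, same message format)

import socket
import time
import struct

HOST = '10.0.0.1'
PORT = 65432

buf = bytearray(1000)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
  while True:
    now = time.time()
    buf[0:8] = struct.pack('>d',now)
    s.sendto(buf, (HOST, PORT))   # fire and forget, no retransmission
    time.sleep(0.01)

The matching receiver reads one message per datagram:

#!/usr/bin/env python3
# udp_receiver.py - UDP counterpart to receiver.py (sketch)

import socket
import time
import struct

HOST = '10.0.0.1'
PORT = 65432

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
  s.bind((HOST, PORT))
  while True:
    data, addr = s.recvfrom(1000)   # each datagram carries one message
    time_sent = struct.unpack('>d',data[0:8])[0]
    time_now = time.time()
    print("%f %f" % (time_now,time_now - time_sent), flush=True)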

The average message delay for UDP is a constant 160 µs. Average TCP message delay exceeds 1 second at 14% packet loss and reaches 30 seconds at 20% packet loss.


The maximum measurement delay chart tells a dire story. While UDP never sees a message delay greater than 8 ms, maximum TCP message delay reaches 14 minutes at 20% packet loss!

Both TCP and UDP achieve 96 messages per second throughput at 0% packet loss. UDP throughput declines linearly with packet loss, as one would expect, dropping to 77 messages per second at 20% packet loss (96 × 0.8 ≈ 77). TCP throughput stays at 96 messages per second until the packet loss rate reaches 10%, where it drops to 90 messages per second. At 12% packet loss, TCP and UDP throughput are equal at 85 messages per second. TCP throughput then drops off rapidly, falling to 10 messages per second at 20% packet loss.

The results demonstrate that TCP based telemetry fails to provide adequate visibility in the face of packet loss, with a steep drop in throughput and a dramatic increase in delay that makes the measurements useless for real-time troubleshooting and automated control. UDP based telemetry on the other hand provides consistently low message delay and maintains high throughput at the cost of moderate message loss.

The sFlow standard is designed for use with UDP, specifying measurements that degrade gracefully in the presence of expected packet loss:

  1. Monotonic counters - For example, the total number of packets and bytes sent and received on an interface are sent, leaving it up to the sFlow analyzer to compute packet rates and interface utilizations. If a message is lost there is no need for it to be retransmitted since the next message will contain the current value of the counter.
  2. Packet samples - sFlow's packet sampling mechanism treats record loss as a decrease in the sampling probability. The sFlow records contain information that allows the traffic analyzer to measure the effective sampling rate, compensate for the packet loss, and generate corrected values. Each sFlow record represents a single packet event, and large flows of traffic generate many sFlow records, so the loss of an individual record does not represent a significant loss of data and doesn't affect the overall accuracy of traffic measurements.
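The sketch below illustrates both mechanisms with simplified, hypothetical numbers: a byte rate computed from successive interface counter values, and a flow estimate scaled by the effective sampling rate derived from the sample_pool field carried in sFlow records.

#!/usr/bin/env python3
# Simplified sketch of loss-tolerant sFlow analysis (hypothetical numbers).

# 1. Monotonic counters: rates come from successive counter values, so a lost
#    counter message simply delays the next update rather than losing data.
octets_prev, t_prev = 1_000_000_000, 100.0
octets_now,  t_now  = 1_600_000_000, 130.0
bits_per_second = (octets_now - octets_prev) * 8 / (t_now - t_prev)

# 2. Packet samples: the sample_pool counter in each record lets the analyzer
#    measure the effective sampling rate and scale sampled traffic, so lost
#    records only appear as a reduced sampling probability.
sample_pool_delta = 2_000_000   # packets subject to sampling at the agent
samples_received  = 90          # sample records that actually arrived
effective_rate    = sample_pool_delta / samples_received

samples_for_flow  = 30          # received samples matching one flow
estimated_frames  = samples_for_flow * effective_rate

print('%.0f bits/s, 1-in-%.0f effective sampling, ~%.0f frames for the flow'
      % (bits_per_second, effective_rate, estimated_frames))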

In addition, recent extensions to the sFlow protocol enhance visibility into sources of delay and packet loss in the network, see: Transit delay and queueing and Real-time trending of dropped packets. As we have demonstrated, services running in the datacenter that rely on TCP will be severely impacted by packet loss and so the ability to monitor and control packet loss is essential for maintaining service levels.

Other articles on this blog discuss how sFlow relates to, and complements, emerging telemetry standards.

Deploying an sFlow monitoring solution is straightforward since sFlow is widely implemented in datacenter equipment from vendors including: A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE. In addition, the open source Host sFlow agent makes it easy to extend visibility beyond the physical network into the compute infrastructure.

The diagram shows the high level architecture for real-time datacenter-wide sFlow analytics. Telemetry continuously streams from all network, host, and application instances to a central analyzer that constructs a real-time view of performance, identifying hot spots and driving automated remediation. In this case the sFlow-RT analytics engine continuously monitors tens of thousands of industry standard sFlow telemetry sources and responds within seconds to performance challenges.

Finally, a couple of popular use cases give an idea of the solutions that can be built on sFlow real-time telemetry.

DDoS protection quickstart guide describes how to deploy sFlow along with BGP RTBH/Flowspec to automatically detect and mitigate DDoS flood attacks. The use of sFlow provides sub-second visibility into network traffic during the periods of high packet loss experienced during a DDoS attack. The result is a system that can reliably detect and respond to attacks in real-time.

Flow metrics with Prometheus and Grafana describes how to import user defined flow metrics into the Prometheus time series database and build real-time dashboards using Grafana. Here again, reliable, low latency measurements during periods of packet loss ensure that operators have the information needed to find the root cause and respond quickly.

Tuesday, February 8, 2022

Cisco NCS 5500 Series Routers

Cisco already supports industry standard sFlow telemetry across a range of products and the recent IOS-XR Release 7.5.1 extends support to Cisco NCS 5500 series routers.

Note: The NCS 5500 series routers also support Cisco Netflow. Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX describes why you should choose sFlow if you are interested in real-time monitoring and control applications.
The following commands configure an NCS 5500 series router to sample packets at 1-in-20,000 and stream telemetry to an sFlow analyzer (192.127.0.1) on UDP port 6343.
flow exporter-map SF-EXP-MAP-1
 version sflow v5
 !
 packet-length 1468
 transport udp 6343
 source GigabitEthernet0/0/0/1
 destination 192.127.0.1
 dfbit set
!

Configure the sFlow analyzer address in an exporter-map.

flow monitor-map SF-MON-MAP
 record sflow
 sflow options
  extended-router
  extended-gateway
  if-counters polling-interval 300
  input ifindex physical
  output ifindex physical
 !
 exporter SF-EXP-MAP-1
!

Configure sFlow options in a monitor-map.

sampler-map SF-SAMP-MAP
 random 1 out-of 20000
!

Define the sampling rate in a sampler-map.

interface GigabitEthernet0/0/0/3
 flow datalinkframesection monitor-map SF-MON-MAP sampler SF-SAMP-MAP ingress

Enable sFlow on each interface for complete visibility into network traffic.

The diagram shows the general architecture of an sFlow monitoring deployment. All the switches stream sFlow telemetry to a central sFlow analyzer for network-wide visibility. Host sFlow agents installed on servers can extend visibility into the compute infrastructure, and provide network visibility from virtual machines in the public cloud. In this instance, the sFlow-RT real-time analyzer provides an up to the second view of performance that is used to drive operational dashboards and network automation. The recommended sFlow configuration settings are optimized for real-time monitoring of the large scale networks targeted by Cisco NCS 5500 series routers.

docker run -p 8008:8008 -p 6343:6343/udp sflow/prometheus

Getting started with sFlow-RT is very simple; for example, the above command uses the pre-built sflow/prometheus Docker image to start analyzing sFlow. Real-time DDoS mitigation using BGP RTBH and FlowSpec, Monitoring leaf and spine fabric performance, and Flow metrics with Prometheus and Grafana describe additional use cases for real-time sFlow analytics.

Note: There is a wide range of options for sFlow analysis. See sFlow Collectors for a list of open source and commercial software.

Cisco first introduced sFlow support in the Nexus 3000 Series in 2012. Today, there is a range of Cisco products that include sFlow support. The broad support for sFlow by Cisco and other leading vendors (e.g. A10, Arista, Aruba, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE) makes sFlow an attractive option for multi-vendor network performance monitoring, particularly for those interested in real-time monitoring and automation.