CONTAINERlab is a Docker orchestration tool for creating virtual network topologies. This article describes how to build and monitor the leaf and spine topology shown above.
Note: Docker testbed describes a simple testbed for experimenting with sFlow analytics using Docker Desktop, but it doesn't have the ability to construct complex topologies.
multipass launch --cpus 2 --mem 4G --name containerlab
multipass shell containerlab
The above commands use the multipass command line tool to create an Ubuntu virtual machine and open shell access.
sudo apt update
sudo apt -y install docker.io
bash -c "$(curl -sL https://get-clab.srlinux.dev)"
Type the above commands into the shell to install CONTAINERlab.
Note: Multipass describes how to build a Mininet network emulator to experiment with software defined networking.
name: test
topology:
  nodes:
    leaf1:
      kind: linux
      image: sflow/frr
    leaf2:
      kind: linux
      image: sflow/frr
    spine1:
      kind: linux
      image: sflow/frr
    spine2:
      kind: linux
      image: sflow/frr
    h1:
      kind: linux
      image: alpine:latest
    h2:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["leaf1:eth1","spine1:eth1"]
    - endpoints: ["leaf1:eth2","spine2:eth1"]
    - endpoints: ["leaf2:eth1","spine1:eth2"]
    - endpoints: ["leaf2:eth2","spine2:eth2"]
    - endpoints: ["h1:eth1","leaf1:eth3"]
    - endpoints: ["h2:eth1","leaf2:eth3"]
The test.yml file shown above specifies the topology. In this case we are using FRRouting (FRR) containers for the leaf and spine switches and Alpine Linux containers for the two hosts.
sudo containerlab deploy --topo test.yml
The above command creates the virtual network and starts containers for each of the network nodes.
sudo containerlab inspect --topo test.yml
Type the command above to list the container instances in the topology.
The table shows each of the containers and the assigned IP addresses.
sudo docker exec -it clab-test-leaf1 vtysh
Type the command above to run the FRR VTY shell so that the switch can be configured.
leaf1# show running-config
Building configuration...

Current configuration:
!
frr version 7.5_git
frr defaults datacenter
hostname leaf1
log stdout
!
interface eth3
 ip address 172.16.1.1/24
!
router bgp 65006
 bgp router-id 172.20.20.6
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 172.16.1.0/24
 exit-address-family
!
route-map ALLOW-ALL permit 100
!
ip nht resolve-via-default
!
line vty
!
end
The BGP configuration for leaf1 is shown above.
Note: We are using BGP unnumbered to simplify the configuration so peers are automatically discovered.
The other switches, leaf2, spine1, and spine2 have similar configurations.
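For reference, a spine configuration follows the same pattern minus the host-facing interface and network statement. The following is a minimal sketch of what spine1 might look like; the AS number (65001) is illustrative rather than taken from the lab, and each device would also have its own router-id.

spine1# show running-config
frr defaults datacenter
hostname spine1
!
router bgp 65001
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
!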
Next we need to configure the hosts.
sudo docker exec -it clab-test-h1 sh
Open a shell on h1.
ip addr add 172.16.1.2/24 dev eth1
ip route add 172.16.2.0/24 via 172.16.1.1
Configure networking on h1. The other host, h2, has a similar configuration.
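For example, assuming leaf2's host-facing interface is addressed 172.16.2.1/24 (mirroring leaf1), the equivalent commands on h2 would be:

ip addr add 172.16.2.2/24 dev eth1
ip route add 172.16.1.0/24 via 172.16.2.1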
sudo docker exec -it clab-test-h1 ping 172.16.2.2
PING 172.16.2.2 (172.16.2.2): 56 data bytes
64 bytes from 172.16.2.2: seq=0 ttl=61 time=0.928 ms
64 bytes from 172.16.2.2: seq=1 ttl=61 time=0.160 ms
64 bytes from 172.16.2.2: seq=2 ttl=61 time=0.201 ms
Use ping to verify that there is connectivity between h1 and h2.
apk add iperf3
Install iperf3 on h1 and h2.
iperf3 -s --bind 172.16.2.2
Run an iperf3 server on h2.
iperf3 -c 172.16.2.2
Connecting to host 172.16.2.2, port 5201
[  5] local 172.16.1.2 port 52066 connected to 172.16.2.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.36 MBytes
[  5]   1.00-2.00   sec  1.41 GBytes  12.1 Gbits/sec    0   1.55 MBytes
[  5]   2.00-3.00   sec  1.44 GBytes  12.4 Gbits/sec    0   1.55 MBytes
[  5]   3.00-4.00   sec  1.44 GBytes  12.3 Gbits/sec    0   2.42 MBytes
[  5]   4.00-5.00   sec  1.46 GBytes  12.6 Gbits/sec    0   3.28 MBytes
[  5]   5.00-6.00   sec  1.42 GBytes  12.2 Gbits/sec    0   3.28 MBytes
[  5]   6.00-7.00   sec  1.44 GBytes  12.4 Gbits/sec    0   3.28 MBytes
[  5]   7.00-8.00   sec  1.28 GBytes  11.0 Gbits/sec    0   3.28 MBytes
[  5]   8.00-9.00   sec  1.40 GBytes  12.0 Gbits/sec    0   3.28 MBytes
[  5]   9.00-10.00  sec  1.25 GBytes  10.7 Gbits/sec    0   3.28 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  13.9 GBytes  12.0 Gbits/sec                  receiver
Run an iperf3 test from h1.
Now that we have a working test network, it's time to add some monitoring.
We will install sFlow agents on the switches and hosts to stream telemetry to the sFlow-RT analytics software, which provides a real-time, network-wide view of performance.
sudo docker exec -it clab-test-leaf1 sh
Open a shell on leaf1.
apk --update add libpcap-dev build-base linux-headers gcc git
git clone https://github.com/sflow/host-sflow.git
cd host-sflow/
make FEATURES="PCAP"
make install
Install Host sFlow agent on leaf1.
Note: The steps above could be included in a Dockerfile in order to create an image with built-in instrumentation.
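For example, a minimal Dockerfile along the following lines (a sketch, not a published image) would bake the agent and its configuration into the FRR image:

FROM sflow/frr
RUN apk --update add libpcap-dev build-base linux-headers gcc git \
 && git clone https://github.com/sflow/host-sflow.git \
 && cd host-sflow && make FEATURES="PCAP" && make install
COPY hsflowd.conf /etc/hsflowd.conf

The hsflowd.conf file copied in the final step is the configuration described next.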
vi /etc/hsflowd.conf
Edit the Host sFlow configuration file.
sflow {
  polling = 30
  sampling = 400
  collector { ip = 172.20.20.1 }
  pcap { dev = eth1 }
  pcap { dev = eth2 }
  pcap { dev = eth3 }
}
The above settings enable 1-in-400 packet sampling and 30 second counter polling on interfaces eth1, eth2 and eth3, sending the resulting sFlow telemetry to the collector at 172.20.20.1.
sudo docker exec -d clab-test-leaf1 /usr/sbin/hsflowd -d
Start the Host sFlow agent on leaf1.
Install and run Host sFlow agents on the remaining switches and hosts: leaf2, spine1, spine2, h1, and h2.
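The hosts only have a single interface to sample, so their hsflowd.conf is simpler. A sketch for h1, following the same pattern as the switch configuration above:

sflow {
  polling = 30
  sampling = 400
  collector { ip = 172.20.20.1 }
  pcap { dev = eth1 }
}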
sudo docker run --rm -d -p 6343:6343/udp -p 8008:8008 --name sflow-rt sflow/prometheus
Use the pre-built sflow/prometheus container to start an instance of sFlow-RT to collect and analyze the telemetry.
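If needed, check the container log to confirm that sFlow-RT started cleanly:

sudo docker logs sflow-rt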
multipass list
List the multipass virtual machines.
Name                    State             IPv4             Image
containerlab            Running           192.168.64.7     Ubuntu 20.04 LTS
                                          172.17.0.1
                                          172.20.20.1
Use a web browser to connect to the sFlow-RT web interface, in this case at http://192.168.64.7:8008
The sFlow-RT dashboard verifies that telemetry is being received from 6 agents (the four switches and two hosts).
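The agent list can also be retrieved programmatically, assuming the standard REST API query, for example:

curl http://192.168.64.7:8008/agents/json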
The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test.
The chart shows that the traffic flows via spine2. Repeated tests showed that the traffic never took the path via spine1, indicating that the ECMP hash function was not taking the TCP ports into account.
sudo docker exec clab-test-leaf1 sysctl -w net.ipv4.fib_multipath_hash_policy=1
sudo docker exec clab-test-leaf2 sysctl -w net.ipv4.fib_multipath_hash_policy=1
We are using a newer Linux kernel, so running the above commands changes the hashing algorithm to include the layer 4 headers, see Celebrating ECMP in Linux — part one and Celebrating ECMP in Linux — part two.
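Reading the setting back verifies the change, for example:

sudo docker exec clab-test-leaf1 sysctl net.ipv4.fib_multipath_hash_policy
net.ipv4.fib_multipath_hash_policy = 1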
Topology describes how knowledge of network topology can be used to enhance the analytics capabilities of sFlow-RT.
{ "links": { "link1": { "node1":"leaf1","port1":"eth1","node2":"spine1","port2":"eth1"}, "link2": { "node1":"leaf1","port1":"eth2","node2":"spine2","port2":"eth1"}, "link3": { "node1":"leaf2","port1":"eth1","node2":"spine1","port2":"eth2"}, "link4": { "node1":"leaf2","port1":"eth2","node2":"spine2","port2":"eth2"} } }
The links specification in the test.yml file can easily be converted into sFlow-RT's JSON format.
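For example, saving the JSON above as topology.json, it could be pushed to sFlow-RT with a REST call along these lines (the /topology/json path is assumed here):

curl -X PUT -H "Content-Type: application/json" --data @topology.json http://192.168.64.7:8008/topology/json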
CONTAINERlab is a very promising tool for efficiently emulating complex networks. CONTAINERlab supports Nokia SR Linux, Juniper vMX, Cisco IOS XRv9k and Arista vEOS, as well as Linux containers. Many of the proprietary network operating systems are only delivered as virtual machines, and Vrnetlab integration makes it possible for CONTAINERlab to run these virtual machines. However, virtual machine nodes require considerably more resources than simple containers.
Linux with open source routing software (FRRouting) is an accessible alternative to vendor routing stacks: no registration or license is required, there are no restrictions on copying so images can be shared on Docker Hub, and no virtual machines are needed. FRRouting is popular in production network operating systems (e.g. Cumulus Linux, SONiC, DENT, etc.), and its VTY shell provides an industry standard CLI for configuration, so labs built around FRR allow realistic network configurations to be explored.