Friday, July 1, 2022

SR Linux in Containerlab

This article uses Containerlab to emulate a simple network and experiment with Nokia SR Linux and sFlow telemetry. Containerlab provides a convenient method of emulating network topologies and configurations before deploying into production on physical switches.
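The examples below assume that Containerlab is installed. One convenient option, used in the other labs on this blog, is to run Containerlab from its container image:

docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash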

curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/srlinux.yml

Download the Containerlab topology file.

containerlab deploy -t srlinux.yml

Deploy the topology.

docker exec -it clab-srlinux-h1 traceroute 172.16.2.2

Run traceroute on h1 to verify path to h2.

traceroute to 172.16.2.2 (172.16.2.2), 30 hops max, 46 byte packets
 1  172.16.1.1 (172.16.1.1)  2.234 ms  *  1.673 ms
 2  172.16.2.2 (172.16.2.2)  0.944 ms  0.253 ms  0.152 ms

Results show path to h2 (172.16.2.2) via router interface (172.16.1.1).

docker exec -it clab-srlinux-switch sr_cli

Access SR Linux command line on switch.

Using configuration file(s): []
Welcome to the srlinux CLI.
Type 'help' (and press <ENTER>) if you need any help using this.
--{ + running }--[  ]--                                                                                                                                                              
A:switch#

The SR Linux CLI banner describes how to get help using the interface.

A:switch# show system sflow status

Get status of sFlow telemetry.

-------------------------------------------------------------------------
Admin State            : enable
Sample Rate            : 10
Sample Size            : 256
Total Samples          : <Unknown>
Total Collector Packets: 0
-------------------------------------------------------------------------
  collector-id     : 1
  collector-address: 172.100.100.5
  network-instance : default
  source-address   : 172.100.100.6
  port             : 6343
  next-hop         : <Unresolved>
-------------------------------------------------------------------------
-------------------------------------------------------------------------
--{ + running }--[  ]--

The output shows settings and operational status for sFlow configured on the switch.
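These settings are pre-configured in the Containerlab topology file. As a minimal sketch, assuming configuration paths that mirror the status fields above (check the SR Linux documentation for the authoritative syntax), sFlow could be adjusted from the SR Linux CLI:

enter candidate
set / system sflow admin-state enable
set / system sflow sample-rate 10
set / system sflow collector 1 collector-address 172.100.100.5 source-address 172.100.100.6 port 6343 network-instance default
commit now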

Connect to the sFlow-RT Metric Browser application, http://localhost:8008/app/browse-metrics/html/index.html?metric=ifinoctets.
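The same interface counters are available programmatically through the sFlow-RT REST API; for example, the following query returns the most recently exported ifinoctets values from all agents:

curl http://localhost:8008/metric/ALL/ifinoctets/json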

docker exec -it clab-srlinux-h1 iperf3 -c 172.16.2.2

Run an iperf3 throughput test between h1 and h2.

Connecting to host 172.16.2.2, port 5201
[  5] local 172.16.1.2 port 38522 connected to 172.16.2.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   675 KBytes  5.52 Mbits/sec   54   22.6 KBytes       
[  5]   1.00-2.00   sec   192 KBytes  1.57 Mbits/sec   18   12.7 KBytes       
[  5]   2.00-3.00   sec   255 KBytes  2.09 Mbits/sec   16   1.41 KBytes       
[  5]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec    9   1.41 KBytes       
[  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec   18   1.41 KBytes       
[  5]   5.00-6.00   sec   255 KBytes  2.08 Mbits/sec   17   1.41 KBytes       
[  5]   6.00-7.00   sec   191 KBytes  1.56 Mbits/sec   10   8.48 KBytes       
[  5]   7.00-8.00   sec   191 KBytes  1.56 Mbits/sec    7   7.07 KBytes       
[  5]   8.00-9.00   sec   191 KBytes  1.56 Mbits/sec    7   8.48 KBytes       
[  5]   9.00-10.00  sec   191 KBytes  1.56 Mbits/sec   10   7.07 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.09 MBytes  1.75 Mbits/sec  166             sender
[  5]   0.00-10.27  sec  1.59 MBytes  1.30 Mbits/sec                  receiver

iperf Done.

The iperf3 test results show that the SR Linux dataplane implementation in the Docker container has severely limited forwarding performance, making the image unsuitable for emulating network traffic. In addition, the emulated forwarding plane does not support the packet sampling mechanism available on hardware switches that allows sFlow to provide real-time visibility into traffic flowing through the switch.

For readers interested in performance monitoring, the https://github.com/sflow-rt/containerlab repository provides examples showing performance monitoring of data center leaf / spine fabrics, EVPN visibility, and DDoS mitigation using Containerlab. These examples use Linux as a network operating system. In this case, the containers running on Containerlab use the Linux dataplane for maximum performance. On physical switch hardware the Linux kernel can offload dataplane forwarding to the switch ASIC for line rate performance.

containerlab destroy -t srlinux.yml

When you are finished, run the above command to stop the containers and free the resources associated with the emulation.

Thursday, June 2, 2022

Using Ixia-c to test RTBH DDoS mitigation

Remote Triggered Black Hole Scenario describes how to use the Ixia-c traffic generator to simulate a DDoS flood attack. Ixia-c supports the Open Traffic Generator API that is used in the article to program two traffic flows: the first representing normal user traffic (shown in blue) and the second representing attack traffic (shown in red).

The article goes on to demonstrate the use of remotely triggered black hole (RTBH) routing to automatically mitigate the simulated attack. The chart above shows traffic levels during two simulated attacks. The DDoS mitigation controller is disabled during the first attack. Enabling the controller for the second attack causes the attack traffic to be dropped the instant it crosses the threshold.

The diagram shows the Containerlab topology used in the Remote Triggered Black Hole Scenario lab (which can run on a laptop). The Ixia traffic generator's eth1 interface represents the Internet and its eth2 interface represents the Customer Network being attacked. Industry standard sFlow telemetry from the customer router, ce-router, streams to the DDoS mitigation controller (running an instance of DDoS Protect). When the controller detects a denial of service attack, it pushes a control via BGP to the ce-router, which in turn propagates the control upstream to the service provider router, pe-router, where the attack traffic is dropped before it can flood the ISP Circuit and disrupt access to the Customer Network.

Arista, Cisco, and Juniper have added sFlow support to their BGP routers, see Real-time flow telemetry for routers, making it straightforward to take this solution from the lab to production. Support for Open Traffic Generator API across a range of platforms makes it possible to develop automated tests in the lab environment and apply them to production hardware.

Tuesday, April 26, 2022

BGP Remotely Triggered Blackhole (RTBH)

DDoS attacks and BGP Flowspec responses describes how to simulate and mitigate common DDoS attacks. This article builds on the previous examples to show how BGP Remotely Triggered Blackhole (RTBH) controls can be applied in situations where BGP Flowspec is not available, or is unsuitable as a mitigation response.
docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
sed -i "s/\\.ip_flood\\.action=filter/\\.ip_flood\\.action=drop/g" ddos.yml
Change the mitigation policy for IP Flood attacks from a Flowspec filter to RTBH.
containerlab deploy -t ddos.yml
Deploy the topology.
Access the DDoS Protect screen at http://localhost:8008/app/ddos-protect/html/
docker exec -it clab-ddos-attacker hping3 \
--flood --rawip -H 47 192.0.2.129
Launch an IP Flood attack. The DDoS Protect dashboard shows that as soon as the ip_flood attack traffic reaches the threshold a control is implemented and the attack traffic is immediately dropped. The entire process of launching, detecting, and mitigating the attack happens within a second, ensuring minimal impact on network capacity and services.
docker exec -it clab-ddos-sp-router vtysh -c "show running-config"
See sp-router configuration.
Building configuration...

Current configuration:
!
frr version 8.2.2_git
frr defaults datacenter
hostname sp-router
no ipv6 forwarding
log stdout
!
ip route 203.0.113.2/32 Null0
!
interface eth2
 ip address 198.51.100.1/24
exit
!
router bgp 64496
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric ebgp-multihop 255
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 no neighbor eth1 capability extended-nexthop
 !
 address-family ipv4 unicast
  redistribute connected route-map HOST_ROUTES
  neighbor fabric route-map RTBH in
 exit-address-family
 !
 address-family ipv4 flowspec
  neighbor fabric activate
 exit-address-family
exit
!
bgp community-list standard BLACKHOLE seq 5 permit blackhole
!
route-map HOST_ROUTES permit 10
 match interface eth2
exit
!
route-map RTBH permit 10
 match community BLACKHOLE
 set ip next-hop 203.0.113.2
exit
!
route-map RTBH permit 20
exit
!
ip nht resolve-via-default
!
end
The configuration creates a null route for 203.0.113.2/32 and rewrites the next-hop address to 203.0.113.2 for routes marked with the BGP blackhole community.
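To examine an individual prefix and confirm the blackhole community and rewritten next-hop, query BGP directly (using the victim address from the attack launched above):
docker exec -it clab-ddos-sp-router vtysh -c "show ip bgp 192.0.2.129/32"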
docker exec -it clab-ddos-sp-router vtysh -c "show ip route"
Show the forwarding state on sp-router.
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/0] via 172.100.100.1, eth0, 12:36:08
C>* 172.100.100.0/24 is directly connected, eth0, 12:36:08
B>* 192.0.2.0/24 [20/0] via fe80::a8c1:abff:fe32:b21e, eth1, weight 1, 12:36:03
B>  192.0.2.129/32 [20/0] via 203.0.113.2 (recursive), weight 1, 00:00:04
  *                         unreachable (blackhole), weight 1, 00:00:04
C>* 198.51.100.0/24 is directly connected, eth2, 12:36:08
S>* 203.0.113.2/32 [1/0] unreachable (blackhole), weight 1, 12:36:08
Traffic to the victim IP address, 192.0.2.129, is directed to the 203.0.113.2 next-hop, where it is discarded before it can saturate the link to the customer router, ce-router.

Monday, April 4, 2022

Real-time flow telemetry for routers

The last few years have seen leading router vendors add support for sFlow, the monitoring technology that has long been the industry standard for switch monitoring. Router implementations of sFlow include:
  • Arista 7020R Series Routers, 7280R Series Routers, 7500R Series Routers, 7800R3 Series Routers
  • Cisco 8000 Series Routers, ASR 9000 Series Routers, NCS 5500 Series Routers
  • Juniper ACX Series Routers, MX Series Routers, PTX Series Routers
  • Huawei NetEngine 8000 Series Routers
Broad support for sFlow in both switching and routing platforms ensures comprehensive end-to-end monitoring of traffic; see sFlow.org Network Equipment for a list of vendors and products. A typical router configuration is sketched below.
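For example, on an Arista router, enabling sFlow takes only a few lines of configuration (the sampling rate and collector address below are illustrative values, not recommendations):
sflow sample 16384
sflow destination 10.0.0.50
sflow source-interface Loopback0
sflow run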
Note: Most routers also support Cisco NetFlow/IPFIX. The article Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX describes why you should choose sFlow if you are interested in real-time monitoring and control applications.
DDoS mitigation is a popular use case for sFlow telemetry in routers. The combination of sFlow for real-time DDoS detection with BGP RTBH / Flowspec mitigation on routing platforms makes for a compelling solution.
DDoS protection quickstart guide describes how to deploy sFlow along with BGP RTBH/Flowspec to automatically detect and mitigate DDoS flood attacks. The use of sFlow provides sub-second visibility into network traffic, even under the high packet loss conditions experienced during a DDoS attack. The result is a system that can reliably detect and respond to attacks in real time.
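A quick way to experiment with the software described in the guide is to run the pre-packaged sFlow-RT image that bundles the DDoS Protect application (the published ports are the image defaults: sFlow on UDP 6343, web UI on 8008, and BGP on 1179):
docker run --rm -p 6343:6343/udp -p 8008:8008 -p 1179:1179 sflow/ddos-protect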

The following links provide detailed configuration examples:

The examples above make use of the sFlow-RT real-time analytics platform, which provides real-time visibility to drive Software Defined Networking (SDN), DevOps and Orchestration tasks.

Tuesday, March 22, 2022

DDoS attacks and BGP Flowspec responses

This article describes how to use the Containerlab DDoS testbed to simulate a variety of flood attacks and observe the automated mitigation actions designed to eliminate the attack traffic.

docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
containerlab deploy -t ddos.yml
Deploy the topology and access the DDoS Protect screen at http://localhost:8008/app/ddos-protect/html/
docker exec -it clab-ddos-sp-router vtysh -c "show bgp ipv4 flowspec detail"

At any time, run the command above to see the BGP Flowspec rules installed on the sp-router. Simulate the volumetric attacks using hping3.

Note: While the hping3 --rand-source option to generate packets with random source addresses would create a more authentic DDoS attack simulation, the option is not used in these examples because the victim's responses to the attack packets (ICMP Port Unreachable) would be sent back to the random addresses and could leak out of the Containerlab test network. Instead, varying source / destination ports are used to create entropy in the attacks.

When you are finished trying the examples below, run the following command to stop the containers and free the resources associated with the emulation.

containerlab destroy -t ddos.yml

Moving the DDoS mitigation solution from Containerlab to production is straightforward since sFlow and BGP Flowspec are widely available in routing platforms. The articles Real-time DDoS mitigation using BGP RTBH and FlowSpec, DDoS Mitigation with Cisco, sFlow, and BGP Flowspec, and DDoS Mitigation with Juniper, sFlow, and BGP Flowspec provide configuration examples for Arista, Cisco, and Juniper routers respectively.

IP Flood

IP packets by target address and protocol. This is a catch-all signature that matches any volumetric attack against a protected address. Setting a high threshold for the generic flood attack allows the other, more targeted, signatures to trigger first and provide a more nuanced response.
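Thresholds and actions for each attack signature are exposed as ddos_protect settings in the ddos.yml topology file, using the naming convention shown in the sed example from the BGP Remotely Triggered Blackhole (RTBH) article above. As a sketch, a similar command could raise the ip_flood threshold before deploying the topology (the property name and value here are assumptions based on that convention, to be checked against the file):

sed -i "s/\\.ip_flood\\.threshold=[0-9]*/\\.ip_flood\\.threshold=1000000/g" ddos.yml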

docker exec -it clab-ddos-attacker hping3 \
--flood --rawip -H 47 192.0.2.129
Launch simulated IP flood attack against 192.0.2.129 using IP protocol 47 (GRE).
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 47 
	FS:rate 0.000000
	received for 00:00:12
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks all IP protocol 47 (GRE) traffic to the targeted address 192.0.2.129.

IP Fragmentation

IP fragments by target address and protocol. There should be very few fragmented packets on a well-configured network and the attack is typically designed to exhaust host resources, so a low threshold can be used to quickly mitigate these attacks.

docker exec -it clab-ddos-attacker hping3 \
--flood -f  -p ++1024 192.0.2.129
Launch simulated IP fragmentation attack against 192.0.2.129.
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 6 
	Packet Fragment = 2
	FS:rate 0.000000
	received for 00:00:10
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks fragmented packets to the targeted address 192.0.2.129.

ICMP Flood

ICMP packets by target address and type. Examples include Ping Flood and Smurf attacks. 

docker exec -it clab-ddos-attacker hping3 \
--flood --icmp -C 0 192.0.2.129
Launch simulated ICMP flood attack using ICMP type 0 (Echo Reply) packets against 192.0.2.129.
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 1 
	ICMP Type = 0 
	FS:rate 0.000000
	received for 00:00:13
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks ICMP type 0 packets to the targeted address 192.0.2.129.

UDP Flood

UDP packets by target address and destination port. The UDP flood attack can be designed to overload the targeted service or exhaust resources on middlebox devices. A UDP flood attack can also trigger a flood of ICMP Destination Port Unreachable responses from the targeted host to the often spoofed UDP packet source addresses.

docker exec -it clab-ddos-attacker hping3 \
--flood --udp -p 53 192.0.2.129
Launch simulated UDP flood attack against port 53 (DNS) on 192.0.2.129.
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 17 
	Destination Port = 53 
	FS:rate 0.000000
	received for 00:00:13
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks UDP packets with destination port 53 to the targeted address 192.0.2.129.

UDP Amplification

UDP packets by target address and source port. Examples include DNS, NTP, SSDP, SNMP, Memcached, and CharGen reflection attacks. 

docker exec -it clab-ddos-attacker hping3 \
--flood --udp -k -s 53 -p ++1024 192.0.2.129

Launch simulated UDP amplification attack using port 53 (DNS) as an amplifier to target 192.0.2.129.

BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 17 
	Source Port = 53 
	FS:rate 0.000000
	received for 00:00:43
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks UDP packets with source port 53 to the targeted address 192.0.2.129.

TCP Flood

TCP packets by target address and destination port. A TCP flood attack can also trigger a flood of ICMP Destination Port Unreachable responses from the targeted host to the often spoofed TCP packet source addresses.

This signature does not look at TCP flags, for example to identify SYN flood attacks, since Flowspec filters are stateless and filtering all packets with the SYN flag set would effectively block all connections to the target host. However, this control can help mitigate large volume TCP flood attacks (through the use of a limit or redirect Flowspec action, as sketched below) so that the traffic doesn't overwhelm layer 4 mitigation running on load balancers or hosts, for example using SYN cookies.
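For example, assuming the topology file exposes a tcp_flood action setting analogous to the ip_flood setting shown in the RTBH article above, and that limit is among the supported actions (both assumptions should be verified against the DDoS Protect documentation), the response could be changed before deploying the topology:

sed -i "s/\\.tcp_flood\\.action=filter/\\.tcp_flood\\.action=limit/g" ddos.yml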

docker exec -it clab-ddos-attacker hping3 \
--flood -p 80 192.0.2.129

Launch simulated TCP flood attack against port 80 (HTTP) on 192.0.2.129.

BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 6 
	Destination Port = 80 
	FS:rate 0.000000
	received for 00:00:17
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks TCP packets with destination port 80 to the targeted address 192.0.2.129.

TCP Amplification

TCP SYN-ACK packets by target address and source port. In this case filtering on the TCP flags is very useful, effectively blocking the reflection attack while allowing connections to the target host. Recent examples target vulnerable middlebox devices to amplify the TCP reflection attack.

docker exec -it clab-ddos-attacker hping3 \
--flood -k -s 80 -p ++1024 -SA 192.0.2.129
Launch simulated TCP amplification attack using port 80 (HTTP) as an amplifier to target 192.0.2.129.
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 6 
	Source Port = 80 
	TCP Flags = 18
	FS:rate 0.000000
	received for 00:00:11
	not installed in PBR

Resulting Flowspec entry in sp-router. The filter blocks TCP packets with source port 80 to the targeted address 192.0.2.129.

Wednesday, March 16, 2022

Containerlab DDoS testbed

Real-time telemetry from a 5 stage Clos fabric describes lightweight emulation of realistic data center switch topologies using Containerlab. This article extends the testbed to experiment with distributed denial of service (DDoS) detection and mitigation techniques described in Real-time DDoS mitigation using BGP RTBH and FlowSpec.
docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
containerlab deploy -t ddos.yml
Finally, deploy the topology.
Connect to the web interface, http://localhost:8008. The sFlow-RT dashboard verifies that telemetry is being received from 1 agent (the Customer Network, ce-router, in the diagram above). See the sFlow-RT Quickstart guide for more information.
Now access the DDoS Protect application at http://localhost:8008/app/ddos-protect/html/. The BGP chart at the bottom right verifies that the BGP connection has been established so that controls can be sent to the Customer Router, ce-router.
docker exec -it clab-ddos-attacker hping3 --flood --udp -k -s 53 192.0.2.129
Start a simulated DNS amplification attack using hping3.
The udp_amplification chart shows that traffic matching the attack signature has crossed the threshold. The Controls chart shows that a control blocking the attack is Active.
Clicking on the Controls tab shows a list of the active rules. In this case the target of the attack, 192.0.2.129, and the UDP source port used in the attack, 53 (DNS), have been identified.
docker exec -it clab-ddos-sp-router vtysh -c "show bgp ipv4 flowspec detail"
The above command inspects the BGP Flowspec rules on the service provider router, sp-router.
BGP flowspec entry: (flags 0x498)
	Destination Address 192.0.2.129/32
	IP Protocol = 17 
	Source Port = 53 
	FS:rate 0.000000
	received for 00:01:41
	not installed in PBR

Displayed  1 flowspec entries
The output verifies that the filtering rule to block the DDoS attack has been received by the transit provider router, sp-router, where it can block the traffic and protect the customer network. However, the not installed in PBR message indicates that the filter hasn't been installed since the FRRouting software used in this demonstration currently lacks the required functionality. Once FRRouting adds support for filtering using Linux tc flower, it will be possible to use BGP Flowspec to block attacks at line rate on commodity white box hardware, see Linux as a network operating system.
containerlab destroy -t ddos.yml
When you are finished, run the above command to stop the containers and free the resources associated with the emulation.

Moving the DDoS mitigation solution from Containerlab to production is straightforward since sFlow and BGP Flowspec are widely available in routing platforms. The articles Real-time DDoS mitigation using BGP RTBH and FlowSpec, DDoS Mitigation with Cisco, sFlow, and BGP Flowspec, and DDoS Mitigation with Juniper, sFlow, and BGP Flowspec provide configuration examples for Arista, Cisco, and Juniper routers respectively.

Monday, March 14, 2022

Real-time EVPN fabric visibility

Real-time telemetry from a 5 stage Clos fabric describes lightweight emulation of realistic data center switch topologies using Containerlab. This article builds on the example to demonstrate visibility into Ethernet Virtual Private Network (EVPN) traffic as it crosses a routed leaf and spine fabric.
docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/evpn3.yml
Download the Containerlab topology file.
containerlab deploy -t evpn3.yml
Finally, deploy the topology.
docker exec -it clab-evpn3-leaf1 vtysh -c "show running-config"
See configuration of leaf1 switch.
Building configuration...

Current configuration:
!
frr version 8.1_git
frr defaults datacenter
hostname leaf1
no ipv6 forwarding
log stdout
!
router bgp 65001
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 192.168.1.1/32
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor fabric activate
  advertise-all-vni
 exit-address-family
exit
!
ip nht resolve-via-default
!
end
The loopback address on the switch, 192.168.1.1/32, is advertised to neighbors so that the VxLAN tunnel endpoint is known to switches in the fabric. The address-family l2vpn evpn setting exchanges bridge tables across the BGP connections so that the leaf switches operate as a single virtual bridge.
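The EVPN routes exchanged over these sessions can be inspected using standard FRRouting show commands, for example:
docker exec -it clab-evpn3-leaf1 vtysh -c "show bgp l2vpn evpn route"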
docker exec -it clab-evpn3-h1 ping -c 3 172.16.10.2
Ping h2 from h1
PING 172.16.10.2 (172.16.10.2): 56 data bytes
64 bytes from 172.16.10.2: seq=0 ttl=64 time=0.346 ms
64 bytes from 172.16.10.2: seq=1 ttl=64 time=0.466 ms
64 bytes from 172.16.10.2: seq=2 ttl=64 time=0.152 ms

--- 172.16.10.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.152/0.321/0.466 ms
The results verify that there is layer 2 connectivity between the two hosts.
docker exec -it clab-evpn3-leaf1 vtysh -c "show evpn vni"
List the Virtual Network Identifiers (VNIs) on leaf1.
VNI        Type VxLAN IF              # MACs   # ARPs   # Remote VTEPs  Tenant VRF                           
10         L2   vxlan10               2        0        1               default
We can see one virtual network, VNI 10.
docker exec -it clab-evpn3-leaf1 vtysh -c "show evpn mac vni 10"
Show the virtual bridge table for VNI 10 on leaf1.
Number of MACs (local and remote) known for this VNI: 2
Flags: N=sync-neighs, I=local-inactive, P=peer-active, X=peer-proxy
MAC               Type   Flags Intf/Remote ES/VTEP            VLAN  Seq #'s
aa:c1:ab:25:7f:a2 remote       192.168.1.2                          0/0
aa:c1:ab:25:76:ee local        eth3                                 0/0
The MAC address, aa:c1:ab:25:76:ee, is reported as locally attached to port eth3. The MAC address, aa:c1:ab:25:7f:a2, is reported as remotely accessible through the VxLAN tunnel to 192.168.1.2, the loopback address of leaf2.
The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. Connect to the sFlow-RT Flow Browser application, http://localhost:8008/app/browse-flows/html/, or click here for a direct link to a chart with the settings shown above.

The chart shows VxLAN encapsulated Ethernet packets routed across the leaf and spine fabric. The inner and outer addresses are shown, allowing the flow to be traced end-to-end. See Defining Flows for more information.
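Flow definitions like the one shown in the chart can also be programmed through the sFlow-RT REST API. As a minimal sketch, the following command creates a flow named vxlan that tracks bytes by inner (encapsulated) source and destination address; the .1 suffix used to select inner packet fields follows sFlow-RT's flow key convention for tunneled traffic:

curl -X PUT -H "Content-Type: application/json" \
  -d '{"keys":"ipsource.1,ipdestination.1","value":"bytes"}' \
  http://localhost:8008/flow/vxlan/json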

docker exec -it clab-evpn3-h1 iperf3 -c 172.16.10.2
Each of the hosts in the network has an iperf3 server, so running the above command will test bandwidth between h1 and h2. The flow should immediately appear in the Flow Browser chart.
containerlab destroy -t evpn3.yml
When you are finished, run the above command to stop the containers and free the resources associated with the emulation.

Moving the monitoring solution from Containerlab to production is straightforward since sFlow is widely implemented in datacenter equipment from vendors including: A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE.