docker run --rm -it --privileged --network host --pid="host" \
  -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
  -v ~/clab:/home/clab -w /home/clab \
  ghcr.io/srl-labs/clab bash

Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/evpn3.yml

Download the Containerlab topology file.
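For readers unfamiliar with Containerlab, a topology file is a YAML document listing the nodes and the links between them. The following is a minimal illustrative sketch, not the contents of evpn3.yml; the node names and image names here are hypothetical:

```yaml
name: example
topology:
  nodes:
    leaf1:
      kind: linux
      image: example/frr-sflow   # hypothetical image with FRR + sFlow agent
    h1:
      kind: linux
      image: example/host        # hypothetical host image
  links:
    # connect leaf1 port eth3 to h1 port eth1
    - endpoints: ["leaf1:eth3", "h1:eth1"]
```

Containerlab pulls (or builds) the referenced container images, creates a container per node, and wires the named interfaces together with virtual links.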
containerlab deploy -t evpn3.yml

Finally, deploy the topology.
docker exec -it clab-evpn3-leaf1 vtysh -c "show running-config"

Display the configuration of the leaf1 switch.
Building configuration...

Current configuration:
!
frr version 8.1_git
frr defaults datacenter
hostname leaf1
no ipv6 forwarding
log stdout
!
router bgp 65001
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 192.168.1.1/32
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor fabric activate
  advertise-all-vni
 exit-address-family
exit
!
ip nht resolve-via-default
!
end

The loopback address on the switch, 192.168.1.1/32, is advertised to BGP neighbors so that the VxLAN tunnel endpoint is known to all the switches in the fabric. The address-family l2vpn evpn settings exchange bridge tables across the BGP connections so that the switches operate as a single virtual bridge.
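Conceptually, the exchanged bridge table maps each MAC address either to a local access port or to the loopback address of the remote VTEP that advertised it. The following Python sketch illustrates the idea (this is an illustration of the mechanism, not FRR code):

```python
# Conceptual sketch of an EVPN virtual bridge table: each MAC maps either
# to a local port (learned from traffic) or to the remote VTEP loopback
# that advertised it via a BGP EVPN type-2 route.

bridge_table = {}

def learn_local(mac, port):
    """Record a MAC learned on a local access port."""
    bridge_table[mac] = ("local", port)

def learn_remote(mac, vtep):
    """Record a MAC advertised by a remote VTEP over BGP EVPN."""
    bridge_table[mac] = ("remote", vtep)

def forward(mac):
    """Return the forwarding decision for a destination MAC.
    Unknown destinations are flooded to all ports and remote VTEPs."""
    return bridge_table.get(mac, ("flood", None))

# Entries matching the example topology: h1 on leaf1 port eth3,
# h2 behind leaf2 (VTEP loopback 192.168.1.2).
learn_local("aa:c1:ab:25:76:ee", "eth3")
learn_remote("aa:c1:ab:25:7f:a2", "192.168.1.2")

print(forward("aa:c1:ab:25:7f:a2"))  # ('remote', '192.168.1.2')
```

A frame destined to the remote MAC is VxLAN-encapsulated and sent to the VTEP address; a frame to the local MAC is simply bridged out the access port.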
docker exec -it clab-evpn3-h1 ping -c 3 172.16.10.2

Ping h2 from h1.
PING 172.16.10.2 (172.16.10.2): 56 data bytes
64 bytes from 172.16.10.2: seq=0 ttl=64 time=0.346 ms
64 bytes from 172.16.10.2: seq=1 ttl=64 time=0.466 ms
64 bytes from 172.16.10.2: seq=2 ttl=64 time=0.152 ms

--- 172.16.10.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.152/0.321/0.466 ms

The results verify that there is layer 2 connectivity between the two hosts.
docker exec -it clab-evpn3-leaf1 vtysh -c "show evpn vni"

List the Virtual Network Identifiers (VNIs) on leaf1.
VNI   Type  VxLAN IF  # MACs  # ARPs  # Remote VTEPs  Tenant VRF
10    L2    vxlan10   2       0       1               default

We can see one virtual network, VNI 10.
docker exec -it clab-evpn3-leaf1 vtysh -c "show evpn mac vni 10"

Show the virtual bridge table for VNI 10 on leaf1.
Number of MACs (local and remote) known for this VNI: 2
Flags: N=sync-neighs, I=local-inactive, P=peer-active, X=peer-proxy
MAC                Type    Flags  Intf/Remote ES/VTEP  VLAN  Seq #'s
aa:c1:ab:25:7f:a2  remote         192.168.1.2                0/0
aa:c1:ab:25:76:ee  local          eth3                       0/0

The MAC address aa:c1:ab:25:76:ee is reported as locally attached to port eth3. The MAC address aa:c1:ab:25:7f:a2 is reported as remotely accessible through the VxLAN tunnel to 192.168.1.2, the loopback address of leaf2.

The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. Connect to the sFlow-RT Flow Browser application, http://localhost:8008/app/browse-flows/html/, to see a chart with the settings shown above.
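If you want to work with the bridge table shown above in a script, the entries can be extracted from the text output. The following Python sketch assumes the whitespace-separated layout shown; for anything robust, FRR's JSON output variants of the show commands are a better choice:

```python
# Sketch: extract (mac, type, destination) tuples from the
# "show evpn mac vni 10" table rows shown in the article.

output = """\
aa:c1:ab:25:7f:a2 remote 192.168.1.2 0/0
aa:c1:ab:25:76:ee local eth3 0/0"""

entries = []
for line in output.splitlines():
    # Columns: MAC, type, interface or remote VTEP, sequence numbers.
    mac, mac_type, dest, _seq = line.split()
    entries.append((mac, mac_type, dest))

print(entries)
```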
The chart shows VxLAN-encapsulated Ethernet packets routed across the leaf and spine fabric. The inner and outer addresses are shown, allowing the flow to be traced end to end. See Defining Flows for more information.
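The inner and outer addresses are visible because the sFlow agent exports packet headers, including the 8-byte VxLAN header that carries the VNI. A short Python sketch of the VxLAN header layout defined in RFC 7348, encoding the VNI 10 used in this example:

```python
# RFC 7348 VXLAN header: 8 bytes. The first 32-bit word carries the
# flags (I bit set to indicate a valid VNI); the second word carries
# the 24-bit VNI in its upper 24 bits, with the low 8 bits reserved.
import struct

def vxlan_header(vni):
    """Build an 8-byte VXLAN header for the given 24-bit VNI."""
    return struct.pack("!II", 0x08000000, vni << 8)

def vxlan_vni(header):
    """Extract the VNI from an 8-byte VXLAN header."""
    _flags, word = struct.unpack("!II", header)
    return word >> 8

hdr = vxlan_header(10)
print(hdr.hex())       # 0800000000000a00
print(vxlan_vni(hdr))  # 10
```

A monitoring tool that parses sampled packet headers past the outer UDP header (destination port 4789) can recover the VNI and the inner Ethernet frame in exactly this way.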
docker exec -it clab-evpn3-h1 iperf3 -c 172.16.10.2

Each host in the network runs an iperf3 server, so the above command tests bandwidth between h1 and h2. The flow should immediately appear in the Flow Browser chart.
containerlab destroy -t evpn3.yml

When you are finished, run the above command to stop the containers and free the resources associated with the emulation.
Moving the monitoring solution from Containerlab to production is straightforward, since sFlow is widely implemented in data center equipment from vendors including A10, Arista, Aruba, Cisco, Edge-Core, Extreme, Huawei, Juniper, NEC, Netgear, Nokia, NVIDIA, Quanta, and ZTE.
Thanks for the nice blog. Could you please provide more details on how the image in the evpn.yml file (e.g. sflow/clab-iperf3) is read by Containerlab, or how the yml is parsed in general? I see it points to a Dockerfile, but are those already defined for, say, sflow/prometheus? I am a bit confused about how to change the image for leaf/spine to some other virtual image, for example Cisco NX-OS virtual. Thanks
The sflow-rt/containerlab project is designed for performance monitoring experiments, generating realistic sFlow telemetry from typical data center topologies. To do this it uses a Linux container type with FRR as the routing engine and Host sFlow as the sFlow agent, relying on kernel forwarding and kernel instrumentation so that the Containerlab topology handles reasonable traffic levels and generates accurate telemetry.
I am not sure that these labs are a useful basis for other network operating systems: they tend to perform poorly under Containerlab because they don't use the native Linux dataplane.