Tuesday, August 23, 2022

NVIDIA ConnectX SmartNICs

NVIDIA ConnectX SmartNICs offer best-in-class network performance, serving low-latency, high-throughput applications with one, two, or four ports at 10, 25, 40, 50, 100, 200, and up to 400 gigabits per second (Gb/s) Ethernet speeds.

This article describes how to use the instrumentation built into ConnectX SmartNICs for data center wide network visibility. Real-time network telemetry for automation provides background, giving an overview of the sFlow industry standard and an example of troubleshooting a high performance GPU compute cluster.

Linux as a network operating system describes how standard Linux APIs are used in NVIDIA Spectrum switches to monitor data center network performance. Linux Kernel Upstream Release Notes v5.19 describes recent driver enhancements for ConnectX SmartNICs that extend the same instrumentation to servers, providing end-to-end visibility into the performance of high performance distributed compute infrastructure.

The open source Host sFlow agent uses standard Linux APIs to configure instrumentation in switches and hosts, streaming the resulting measurements to analytics software in real-time for comprehensive data center wide visibility.
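As a rough sketch, assuming a Debian-based server, the agent can be built and installed from source as shown below (pre-built packages are also available from sflow.net; module support such as psample, dropmon, and dent may need to be enabled when building):

# build and install the Host sFlow agent (hsflowd) from source
git clone https://github.com/sflow/host-sflow.git
cd host-sflow
make deb
dpkg -i hsflowd*.deb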

Packet sampling provides detailed visibility into traffic flowing across the network. Hardware packet sampling makes it possible to monitor 400 gigabits per second interfaces on the server at line rate with minimal CPU/memory overhead.
psample { group=1 egress=on }
dent { sw=off switchport=^eth[0-9]+$ }
The above Host sFlow configuration entries enable packet sampling on the host. Linux 4.11 kernel extends packet sampling support describes the Linux PSAMPLE netlink channel used by the network adapter to send packet samples to the Host sFlow agent. The dent module automatically configures hardware packet sampling on network interfaces matching the switchport pattern (eth0, eth1, etc. in this example) using the Linux tc-sample API, directing the packet samples to the specified psample group.
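For reference, the tc-sample rules that the dent module manages are similar to the following sketch; the interface name and the 1-in-10000 sampling rate are illustrative only, since the agent selects interfaces and rates automatically:

# attach a clsact qdisc so tc filters can be added to the interface
tc qdisc add dev eth0 clsact
# hardware-offloaded 1-in-10000 packet sampling to psample group 1
tc filter add dev eth0 ingress matchall skip_sw action sample rate 10000 group 1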
Visibility into dropped packets offers significant benefits for network troubleshooting, providing real-time network-wide visibility into the specific packets that were dropped as well as the reasons the packets were dropped. This visibility instantly reveals the root cause of drops and the impacted connections.
dropmon { group=1 start=on sw=on hw=on }
The above Host sFlow configuration entry enables dropped packet monitoring, capturing both software (sw=on) and hardware (hw=on) drop notifications from the kernel and the network adapter.
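On drivers that register their packet traps with devlink, the hardware drop channel can be inspected with the devlink trap commands, for example (the exact list of traps depends on the adapter and driver version):

# list the packet traps and their current actions
devlink trap show
# include per-trap packet and byte counters
devlink -s trap show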
sflow {
  collector { ip=10.0.0.1 }
  psample { group=1 egress=on }
  dropmon { group=1 start=on sw=on hw=on }
  dent { sw=off switchport=^eth[0-9]+$ }
}
The above /etc/hsflowd.conf file shows a complete configuration. The centralized collector, 10.0.0.1, receives sFlow from all the servers and switches in the network to provide comprehensive end-to-end visibility.
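After editing the configuration, restart the agent and confirm that sFlow datagrams are being exported (sFlow uses UDP port 6343), for example:

systemctl restart hsflowd
# on the collector, watch for incoming sFlow datagrams
tcpdump -ni any udp port 6343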
The sFlow-RT analytics engine receives and analyzes the sFlow telemetry stream, providing real-time analytics to visibility and automation systems (e.g. Flow metrics with Prometheus and Grafana).
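As a simple sketch, the collector at 10.0.0.1 could run sFlow-RT using the sflow/prometheus Docker image, and the REST API can then be used to define and query flow metrics; the flow name "pair" below is an arbitrary example:

docker run -p 6343:6343/udp -p 8008:8008 sflow/prometheus
# define a flow metric keyed on source and destination address
curl -X PUT -H 'Content-Type: application/json' \
  -d '{"keys":"ipsource,ipdestination","value":"bytes"}' \
  http://10.0.0.1:8008/flow/pair/json
# query the current largest flows across all agents
curl http://10.0.0.1:8008/activeflows/ALL/pair/json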
