Application development is greatly simplified if you can emulate the infrastructure you want to monitor on your development machine. Docker testbed describes a simple way to develop sFlow based visibility solutions. This article describes how to build a Kubernetes testbed to develop and test configurations before deploying solutions into production.
Docker Desktop provides a convenient way to set up a single node Kubernetes cluster, just select the Enable Kubernetes setting and click on Apply & Restart.
Create the following sflow-rt.yml file:
apiVersion: v1
kind: Service
metadata:
  name: sflow-rt-sflow
spec:
  type: NodePort
  selector:
    name: sflow-rt
  ports:
    - protocol: UDP
      port: 6343
---
apiVersion: v1
kind: Service
metadata:
  name: sflow-rt-rest
spec:
  type: LoadBalancer
  selector:
    name: sflow-rt
  ports:
    - protocol: TCP
      port: 8008
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sflow-rt
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sflow-rt
  template:
    metadata:
      labels:
        name: sflow-rt
    spec:
      containers:
        - name: sflow-rt
          image: sflow/prometheus:latest
          ports:
            - name: http
              protocol: TCP
              containerPort: 8008
            - name: sflow
              protocol: UDP
              containerPort: 6343

Run the following command to deploy the service:
kubectl apply -f sflow-rt.yml

Now create the following host-sflow.yml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-sflow
spec:
  selector:
    matchLabels:
      name: host-sflow
  template:
    metadata:
      labels:
        name: host-sflow
    spec:
      restartPolicy: Always
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: host-sflow
          image: sflow/host-sflow:latest
          env:
            - name: COLLECTOR
              value: "sflow-rt-sflow"
            - name: SAMPLING
              value: "10"
            - name: NET
              value: "flannel"
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock
              readOnly: true
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock

Run the following command to deploy the service:
kubectl apply -f host-sflow.yml

In this case, there is only one node, but the command will deploy an instance of Host sFlow on every node in a Kubernetes cluster to provide a comprehensive view of network, server, and application performance.
Note: The single node Kubernetes cluster uses the Flannel plugin for Cluster Networking. Setting the sflow/host-sflow environment variable NET to flannel instruments the cni0 bridge used by Flannel to connect Kubernetes pods. The NET and SAMPLING settings will likely need to be changed when pushing the configuration into a production environment, see sflow/host-sflow for options.
Run the following command to verify that the Host sFlow and sFlow-RT pods are running:
kubectl get pods

The command generates the following output:
NAME                        READY   STATUS    RESTARTS   AGE
host-sflow-lp4db            1/1     Running   0          34s
sflow-rt-544bff645d-kj4km   1/1     Running   0          21h

The following command displays the network services:
kubectl get services

The command generates the following output:
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP          13d
sflow-rt-rest    LoadBalancer   10.110.89.167   localhost     8008:31317/TCP   21h
sflow-rt-sflow   NodePort       10.105.87.169   <none>        6343:31782/UDP   21h

Access to the sFlow-RT REST API is available via localhost port 8008.
The sFlow-RT web interface confirms that telemetry is being received from 1 sFlow agent (the Host sFlow instance monitoring the Kubernetes node).
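The REST API can also be used to confirm that telemetry is arriving. The following is a minimal sketch, assuming the sflow-rt-rest service is reachable on localhost port 8008 as shown above; it queries the sFlow-RT /agents/json endpoint and lists the agents that have been heard from.

#!/usr/bin/env python
# Minimal sketch: list sFlow agents seen by sFlow-RT.
# Assumes the sflow-rt-rest LoadBalancer is reachable on localhost:8008 (see above).
import requests

r = requests.get('http://localhost:8008/agents/json')
r.raise_for_status()
agents = r.json()
print('%d agent(s) sending telemetry:' % len(agents))
for address in agents:
  print(address)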
ab -c 4 -n 10000 -b 500 -l http://10.0.0.73:8008/dump/ALL/ALL/json

The command above uses ab, the Apache HTTP server benchmarking tool, to generate network traffic by repeatedly querying the sFlow-RT instance using the Kubernetes node IP address (10.0.0.73).
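If ab is not available, a simple Python loop can generate similar request traffic, although single-threaded rather than concurrent. This is a rough sketch rather than a benchmark, reusing the node IP address (10.0.0.73) from the example above.

#!/usr/bin/env python
# Rough sketch: generate HTTP traffic against the sFlow-RT REST API as an
# alternative to ab. Single-threaded; reuses the node IP address from the article.
import requests

url = 'http://10.0.0.73:8008/dump/ALL/ALL/json'
for i in range(10000):
  requests.get(url)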
The screen capture above shows the sFlow-RT Flow Browser application reporting traffic in real-time.
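The same traffic can also be examined programmatically. The sketch below is an example under assumptions: it defines a flow named topflows (an arbitrary name) and queries the sFlow-RT /activeflows REST endpoint for the largest currently active flows, using the node IP address from the examples above.

#!/usr/bin/env python
# Sketch: define a flow and query the largest currently active flows.
# The flow name 'topflows' is arbitrary; 10.0.0.73 is the node IP used above.
import requests
import time

rt = 'http://10.0.0.73:8008'
requests.put(rt + '/flow/topflows/json',
  json={'keys':'ipsource,ipdestination', 'value':'bytes'})

time.sleep(2)  # allow a few samples to arrive before querying
r = requests.get(rt + '/activeflows/ALL/topflows/json?maxFlows=5')
for flow in r.json():
  print(flow['key'], flow['value'])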
#!/usr/bin/env python
import requests

# Define a flow keyed on source and destination address, tracking bytes
requests.put(
  'http://10.0.0.73:8008/flow/elephant/json',
  json={'keys':'ipsource,ipdestination', 'value':'bytes'}
)
# Define a threshold that fires when a flow exceeds 10Mbit/s (value is in bytes/s)
requests.put(
  'http://10.0.0.73:8008/threshold/elephant_threshold/json',
  json={'metric':'elephant', 'value': 10000000/8, 'byFlow':True, 'timeout': 1}
)
# Long-poll for threshold events and print the flow key for each event
eventurl = 'http://10.0.0.73:8008/events/json'
eventurl += '?thresholdID=elephant_threshold&maxEvents=10&timeout=60'
eventID = -1
while 1 == 1:
  r = requests.get(eventurl + '&eventID=' + str(eventID))
  if r.status_code != 200: break
  events = r.json()
  if len(events) == 0: continue
  eventID = events[0]['eventID']
  events.reverse()
  for e in events:
    print(e['flowKey'])

The above elephant.py script is modified from the version in Docker testbed to reference the Kubernetes node IP address (10.0.0.73).
./elephant.py
10.1.0.72,192.168.65.3

The output above appears immediately when traffic is generated using the ab command. The IP addresses correspond to those displayed in the Flow Browser chart.
curl http://10.0.0.73:8008/prometheus/metrics/ALL/ALL/txt

Run the above command to retrieve metrics from the Kubernetes cluster in Prometheus export format.
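The same metrics can be retrieved from a script. The following minimal sketch fetches the Prometheus-format text with Python and prints the first few lines; it assumes the same node IP address used above.

#!/usr/bin/env python
# Minimal sketch: fetch sFlow-RT metrics in Prometheus export format
# and print the first few lines. Assumes the node IP address used above.
import requests

r = requests.get('http://10.0.0.73:8008/prometheus/metrics/ALL/ALL/txt')
for line in r.text.splitlines()[:10]:
  print(line)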
This article focused on using Docker Desktop's single node Kubernetes cluster to develop and test sFlow real-time analytics solutions before moving them into a Kubernetes production environment. Docker testbed describes how to use Docker Desktop to create an environment to develop the applications.