Thursday, October 21, 2021

InfluxDB Cloud


InfluxDB Cloud is a cloud-hosted version of InfluxDB. The free tier makes it easy to try out the service and has enough capability to satisfy simple use cases. In this article we will explore how metrics based on sFlow streaming telemetry can be pushed into InfluxDB Cloud.

The diagram shows the elements of the solution. Agents in host and network devices are configured to stream sFlow telemetry to an sFlow-RT real-time analytics engine instance. The Telegraf Agent queries sFlow-RT's REST API for metrics and pushes them to InfluxDB Cloud.

docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus

Use Docker to run the pre-built sflow/prometheus image, which packages sFlow-RT with the sflow-rt/prometheus application. Configure sFlow agents to stream data to this instance.
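
A quick way to verify that telemetry is arriving is to ask the sFlow-RT REST API which agents it has heard from; a minimal check, assuming the instance is reachable on localhost port 8008:

curl http://localhost:8008/agents/json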

Create an InfluxDB Cloud account. Click the Data tab. Click on the Telegraf option and the InfluxDB Output Plugin button to get the URL to post data. Click the API Tokens option and generate a token.
[agent]
  interval = "15s"
  round_interval = true
  metric_batch_size = 5000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "1s"
  hostname = ""
  omit_hostname = true

[[outputs.influxdb_v2]]
  urls = ["INFLUXDB_CLOUD_URL"]
  token = "INFLUXDB_CLOUD_TOKEN"
  organization = "INFLUXDB_CLOUD_USER"
  bucket = "sflow"

[[inputs.prometheus]]
  urls = ["http://host.docker.internal:8008/prometheus/metrics/ALL/ifinutilization,ifoututilization/txt"]
  metric_version = 2

Create a telegraf.conf file containing the settings above, substituting INFLUXDB_CLOUD_URL, INFLUXDB_CLOUD_TOKEN, and INFLUXDB_CLOUD_USER with the values retrieved from the InfluxDB Cloud account.
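
Before starting the agent, the configuration can be sanity checked with Telegraf's --test flag, which gathers metrics once and prints them to stdout without sending anything to the outputs; a quick check, assuming telegraf.conf is in the current directory:

docker run --rm -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
telegraf telegraf --test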

docker run -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
-d --name telegraf telegraf

Use Docker to run the Telegraf agent.
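
If anything goes wrong, the container logs are the first place to look for authentication or connectivity errors, for example:

docker logs telegraf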

Data should start appearing in InfluxDB Cloud. Use the Explore tab to see what data is available and to create charts. In this case we are plotting ingress / egress utilization for each switch port in the network.

Telegraf sFlow input plugin describes why you would normally bypass Telegraf and have InfluxDB directly retrieve metrics from sFlow-RT. However, in the case of InfluxDB Cloud, Telegraf acts as a secure gateway, retrieving metrics locally using the inputs.prometheus module, and forwarding to the InfluxDB Cloud using the outputs.influxdb_v2 module. InfluxDB 2.0 released describes the settings used in the inputs.prometheus module.

Modify the urls setting in the inputs.prometheus section of the telegraf.conf file to add additional metrics and/or define flows.
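
For example, the metrics path accepts a comma-separated list of sFlow-RT metric names, so additional interface counters can be requested by extending the list; the query below can be tried with curl before updating telegraf.conf (the ifinoctets and ifoutoctets names are assumptions based on sFlow-RT's standard interface counter metrics):

curl "http://localhost:8008/prometheus/metrics/ALL/ifinoctets,ifoutoctets/txt"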

There are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of the metrics collection service. For example, in large scale cloud environments the metrics for individual members of a dynamic pool aren't necessarily worth trending since virtual machines / containers are frequently added and removed. Instead, sFlow-RT can be instructed to track all the members of the pool, calculate summary statistics for the pool, and log the summary statistics. This pre-processing can significantly reduce storage requirements, lowering costs and increasing query performance.
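
As a sketch of this approach, sFlow-RT metric names can be prefixed with aggregation operators (for example max:, sum:, avg:) so that a single summary statistic is computed across all agents rather than a separate time series for each pool member; the example below assumes the core /metric REST endpoint and the max: prefix:

curl "http://localhost:8008/metric/ALL/max:ifinutilization/json"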

Host, Docker, Swarm and Kubernetes monitoring describes how to deploy sFlow agents to monitor compute infrastructure.

The sFlow-RT Prometheus Exporter application exposes a REST API that allows metrics to be summarized, filtered, and synthesized. Exposing these capabilities through a REST API allows the Telegraf inputs.prometheus module to control the behavior of the sFlow-RT analytics pipeline and retrieve a small set of high value metrics tailored to your requirements.
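
For instance, the ALL wildcard used in the earlier examples can be replaced with a semicolon-separated list of agent addresses to restrict the query to specific devices; a hypothetical example (the agent addresses are placeholders):

curl "http://localhost:8008/prometheus/metrics/10.0.0.1;10.0.0.2/ifinutilization/txt"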
