This article describes a proof of concept demonstrating how Librato's cloud service can be used to cost-effectively monitor large-scale cloud infrastructure by leveraging standard sFlow instrumentation. Librato offers a free 30-day trial, making it easy to evaluate solutions based on this demonstration.
The diagram shows the measurement pipeline. Standard sFlow measurements from hosts, hypervisors, virtual machines, containers, load balancers, web servers and network switches stream to the sFlow-RT real-time analytics engine. Metrics are pushed from sFlow-RT to Librato using the REST API.
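Librato's metrics API accepts authenticated HTTP POSTs of JSON gauge values, and that is the call sFlow-RT makes in the script shown later in this article. The following curl command is a rough sketch of the request format (placeholder credentials and an illustrative value):

curl -u first.last@mycompany.com:API_TOKEN \
  -H "Content-Type: application/json" \
  -X POST https://metrics-api.librato.com/v1/metrics \
  -d '{"gauges":{"max:load_one":{"value":0.25,"source":"Linux_Pool"}}}'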
Over 40 vendors implement the sFlow standard, and compatible products are listed on sFlow.org. The open source Host sFlow agent exports standard sFlow metrics from hosts. For additional background, the Velocity conference talk provides an introduction to sFlow and a case study from a large social networking site.
Librato's service is priced based on the number of data points that it needs to store. For example, a Host sFlow agent reports approximately 50 measurements per node. Collecting all the measurements from a cluster of 100 servers would generate 5,000 metrics and cost $1,000 per month if metrics are stored at 15-second intervals.
There are important scalability and cost advantages to placing the sFlow-RT analytics engine in front of the metrics collection service. For example, in large-scale cloud environments the metrics for individual members of a dynamic pool aren't necessarily worth trending, since virtual machines are frequently added and removed. Instead, sFlow-RT tracks all the members of the pool, calculates summary statistics for the pool, and logs the summary statistics. This pre-processing can significantly reduce storage requirements, reducing costs and increasing query performance. The sFlow-RT analytics software also calculates traffic flow metrics, hot/missed Memcache keys, and top URLs, exports events via syslog to Splunk, Logstash, etc., and provides access to detailed metrics through its REST API.

The following steps were involved in setting up the proof of concept.
First, register for a free trial at Librato.com.
Find or build a server with Java 1.7+ and install sFlow-RT:
wget http://www.inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
cd sflow-rt

Edit the init.js script and add the following lines (modifying the user and token from your Librato account):
var url = "https://metrics-api.librato.com/v1/metrics";
var user = "first.last@mycompany.com";
var token = "55add91c806fb5f634ad1a334789a32e8d10a597815e6865aa84f0749324450e";

setIntervalHandler(function() {
  // summary statistics to calculate across the pool of Linux hosts
  var metrics = ['min:load_one','q1:load_one','med:load_one',
                 'q3:load_one','max:load_one'];
  var vals = metric('ALL',metrics,{os_name:['linux']});
  // convert the results into Librato gauge format
  var gauges = {};
  for each (var val in vals) {
    gauges[val.metricName] = {
      "value": val.metricValue,
      "source": "Linux_Pool"
    };
  }
  var body = {"gauges":gauges};
  // push the summary metrics to Librato's REST API
  http(url,'post','application/json',JSON.stringify(body),user,token);
}, 15);

Now start sFlow-RT:
./start.sh

Cluster performance metrics describes the summary metrics that sFlow-RT can calculate. In this case, the load average minimum, maximum, and quartiles for the cluster are being calculated and pushed to Librato every 15 seconds.
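The same summary statistics can also be queried directly from sFlow-RT's REST API, which is useful when testing the setup. For example, assuming sFlow-RT is running locally on its default HTTP port (8008):

curl http://localhost:8008/metric/ALL/max:load_one/json

This returns the current maximum one-minute load average across all hosts reporting to sFlow-RT.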
Install Host sFlow agents on the physical or virtual machines in your cluster and direct them to send metrics to the sFlow-RT host. The installation steps can be easily automated using orchestration tools like Puppet, Chef, Ansible, etc.
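For example, a minimal /etc/hsflowd.conf pointing an agent at the sFlow-RT host might look like the following sketch (the 10.0.0.1 address and the polling/sampling values are placeholders, and exact options vary by Host sFlow version):

sflow {
  polling = 20
  sampling = 400
  collector {
    ip = 10.0.0.1
    udpport = 6343
  }
}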
Physical and virtual switches in the cluster can be configured to send sFlow to sFlow-RT in order to add traffic metrics to the mix, exporting metrics that characterize traffic between service tiers, etc. (an Open vSwitch example is sketched below). However, in public cloud environments, traffic flow information is typically not available. The articles Amazon Elastic Compute Cloud (EC2) and Rackspace cloudservers describe how Host sFlow agents can be configured to monitor traffic between virtual machines in the cloud.
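As a sketch of the virtual switch case, Open vSwitch can be pointed at the sFlow-RT host with a command along the following lines (bridge br0, interface eth0, and the sampling/polling values are placeholders for this example):

ovs-vsctl -- --id=@sflow create sflow agent=eth0 \
  target=\"10.0.0.1:6343\" header=128 sampling=512 polling=20 \
  -- set bridge br0 sflow=@sflow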
Metrics should start appearing in Librato as soon as the Host sFlow agents are started.
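If they don't, a quick way to check that sFlow-RT is receiving measurements is to list the sFlow agents it has seen via its REST API (again assuming the default port 8008):

curl http://localhost:8008/agents/json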
In this example, sFlow-RT is exporting 5 metrics to summarize the cluster performance, reducing the total monthly cost of monitoring the cluster from $1,000 to $1. Of course, there are likely to be more metrics that you will want to track, but the ability to selectively log high-value metrics provides a way to control costs and maximize benefits.