Wednesday, May 4, 2011


The tomcat-sflow-valve project is an open source implementation of sFlow monitoring for Apache Tomcat, an open source web server implementing the Java Servlet and JavaServer Pages (JSP) specifications.

The advantage of using sFlow is the scalability it offers for monitoring the performance of large web server clusters or load balancers where request rates are high and conventional logging solutions generate too much data or impose excessive overhead. Real-time monitoring of HTTP provides essential visibility into the performance of large-scale, complex, multi-layer services constructed using Representational State Transfer (REST) architectures. In addition, monitoring HTTP services using sFlow is part of an integrated performance monitoring solution that provides real-time visibility into applications, servers and switches (see sFlow Host Structures).

The tomcat-sflow-valve software is designed to integrate with the Host sFlow agent to provide a complete picture of server performance. Download, install and configure Host sFlow before proceeding to install the tomcat-sflow-valve - see Installing Host sFlow on a Linux Server. There are a number of options for analyzing cluster performance using Host sFlow, including Ganglia and sFlowTrend.

Next, download the tomcat-sflow-valve software from the project site. The following steps install the SFlowValve in a Tomcat 7 server.

tar -xvzf tomcat-sflow-valve-0.5.1.tar.gz
cp sflowvalve.jar $TOMCAT_HOME/lib/

Edit the $TOMCAT_HOME/conf/server.xml file and insert the following Valve statement in the Host section:

  <Valve className="com.sflow.catalina.SFlowValve" />
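For context, the Valve element sits inside the Host element of server.xml. A minimal sketch (the name and appBase values shown are the Tomcat defaults, included for illustration only):

```xml
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
  <!-- Stream HTTP counters and transaction samples via sFlow -->
  <Valve className="com.sflow.catalina.SFlowValve" />
</Host>
```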

Restart Tomcat:

/sbin/service tomcat restart

Once installed, the sFlow Valve will stream measurements to a central sFlow analyzer. Currently, the only software that can decode HTTP sFlow is sflowtool. Download, compile and install the latest sflowtool sources on the system you are using to receive sFlow from the servers in the Tomcat cluster.

Running sflowtool will display output of the form:

[pp@test]$ /usr/local/bin/sflowtool
startDatagram =================================
datagramSize 116
unixSecondsUTC 1294273499
datagramVersion 5
agentSubId 6486
packetSequenceNo 6
sysUpTime 44000
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:2
sampleSequenceNo 6
sourceId 3:65537
counterBlock_tag 0:2201
http_method_option_count 0
http_method_get_count 247
http_method_head_count 0
http_method_post_count 2
http_method_put_count 0
http_method_delete_count 0
http_method_trace_count 0
http_method_connect_count 0
http_method_other_count 0
http_status_1XX_count 0
http_status_2XX_count 214
http_status_3XX_count 35
http_status_4XX_count 0
http_status_5XX_count 0
http_status_other_count 0
endSample   ----------------------
startSample ----------------------
sampleType_tag 0:1
sampleSequenceNo 3434
sourceId 3:65537
meanSkipCount 2
samplePool 7082
dropEvents 0
inputPort 0
outputPort 1073741823
flowBlock_tag 0:2100
extendedType socket4
socket4_ip_protocol 6
socket4_local_port 80
socket4_remote_port 61401
flowBlock_tag 0:2201
flowSampleType http
http_method 2
http_protocol 1001
http_uri /favicon.ico
http_useragent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_5; en-us) AppleW
http_bytes 284
http_duration_uS 335
http_status 404
endSample   ----------------------
endDatagram   =================================
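Standard text tools can post-process this counter output. For example, a small awk sketch (the file name sflow.txt is an assumption) that sums the per-class HTTP status counters from saved sflowtool output:

```shell
# Sum HTTP responses across status classes (1XX-5XX) from saved sflowtool output.
# sflow.txt is an assumed file name holding counter lines like the example above.
awk '/^http_status_[1-5]XX_count/ { total += $2 }
     END { print "total responses:", total }' sflow.txt
```

Against the counter block above, this reports 249 responses (214 2XX plus 35 3XX); the http_status_other_count line is deliberately excluded by the pattern.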

The -H option causes sflowtool to output the HTTP request samples using the combined log format:

[pp@test]$ /usr/local/bin/sflowtool -H
- - [05/Jan/2011:22:39:50 -0800] "GET /membase.php HTTP/1.1" 200 3494 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_5; en-us) AppleW"
- - [05/Jan/2011:22:39:50 -0800] "GET /favicon.ico HTTP/1.1" 404 284 "" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_5; en-us) AppleW"
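Because the status code follows the quoted request string in combined log format, quick ad-hoc checks are possible without a full log analyzer. A grep sketch (the log file name http_log is an assumption) counting 404 responses:

```shell
# Count 404 responses in combined-format log output saved from sflowtool -H.
# http_log is an assumed file name; adjust to wherever the log is written.
grep -c '" 404 ' http_log
```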

Converting sFlow to combined logfile format allows existing log analyzers to be used to analyze the sFlow data. For example, the following commands use sflowtool and webalizer to create reports:

/usr/local/bin/sflowtool -H | rotatelogs log/http_log &
webalizer -o report log/*

The resulting webalizer report shows the top URLs.

Finally, the real potential of HTTP sFlow is as part of a broader performance management system providing real-time visibility into applications, servers, storage and networking across the entire data center.


  1. Hi,
    I have followed all the steps to configure the tomcat-sflow module, but I am still not able to collect the data in sflowtool. I also added sampling.http=2 in /etc/hsflowd.conf.
    Please help out!

  2. What version of Tomcat are you using? The SFlowValve uses the Tomcat valve API - which was changed fairly recently. The version of Tomcat used in the example was 7.0.11.

  3. I am using Tomcat 7.0.16, so I hope it should work on this version.

  4. Do you see the settings passing through to /etc/? The Tomcat valve reads /etc/ for its configuration.

  5. Yes! I can see sampling.http=2 in /etc/, so I suppose the settings are passing through.

  6. Any possibility of supporting Tomcat 6.0? I've tried running the SFlowValve against Tomcat 6.0.29, but it looks like the SFlowValve won't even compile against Tomcat 6.0. I have been able to successfully use the sflowagent, which works well for me. Thanks.

    1. The latest SFlowValve includes the sflowagent functionality - exporting JVM stats in addition to the HTTP stats, so don't run both together or you will end up with two sets of measurements.

      The SFlowValve has been backported to Tomcat 6.0 and it appears to be working. The code is checked into the trunk as sflowvalve.jar.tomcat6; just remove the .tomcat6 extension and give it a try.

    2. Cannot get the sflowvalve to work with Tomcat 6. Tomcat would not even start; I am getting a NullPointerException in catalina.log. Any help?

    3. You need to use the sflowvalve.jar.tomcat6 file, renaming it to sflowvalve.jar:

      cp sflowvalve.jar.tomcat6 $TOMCAT_HOME/lib/sflowvalve.jar

    4. Yeah, I did exactly that... but Tomcat still won't start.

    5. Could you post the lines from the log file relating to the exception and the tomcat configuration file?

    6. The details are at
      Let me know if you need any more details.

    7. Thank you for sending the logs. Are you sure you put the sflowvalve.jar file in the correct location? The error log you sent suggests that the Java class loader couldn't find the SFlowValve class. On my system, the lib directory is:

  7. Hi Peter,
    I executed "cp sflowvalve.jar.tomcat6 $TOMCAT_HOME/lib/sflowvalve.jar", but I still get class load failures when I restart Tomcat. I am confused as to what the problem could be. Your help would really be appreciated, thanks.

    1. It is a typo: classname should be className.
      <-- wrong
      <-- correct

    2. Thanks. I fixed the typo in the instructions.

  8. Is your $TOMCAT_HOME defined? Check with: env | grep TOMCAT
    If that comes back blank then your copy failed. On a RHEL/CentOS machine this directory is $CATALINA_HOME or /usr/share/tomcat6/lib/; on other systems it will be elsewhere. Wherever you are serving your webapps folder from should get you close; then just go up a directory.

  9. Hello,
    Our developers have multiple instances of Tomcat running on the same machine, bound to alias IP addresses but all using the same port number (8080). Is there a way to have the metrics reported under a specific name (such as APP1 or APP2)? We are using hsflowd to push these metrics to Ganglia via gmond.

  10. I added code to allow override of the sFlow data source index number (by default the SFlowValve uses the port Tomcat is listening on, which in your case aliases the different servers together). You can also assign a name to each instance.

    You need to download the new sflowvalve.jar from the trunk:

    You need to create a file with the properties for each of your Tomcat instances, e.g. /usr/share/apache-tomcat-7.0.41/bin/


    export CATALINA_OPTS="$CATALINA_OPTS -Dsflow.dsindex=65536 -Dsflow.hostname=APP1"

    The script for the APP2 instance would have the line:

    export CATALINA_OPTS="$CATALINA_OPTS -Dsflow.dsindex=65537 -Dsflow.hostname=APP2"

    1. Thanks Peter! That's exactly what I was looking for. Is there any way this can be created for Tomcat 6 like the other branches (sflowvalve.jar.tomcat6)? Thanks again for creating this.

    2. I made the changes to the tomcat6 version and released 0.6.2 as a download.

    3. Thank you again Peter, that worked perfectly. All of the JVMs we have running now show up with the hostname that I specify, so the developers know which application is theirs. However, when I look at the HTTP metrics they show up with the -Dsflow.dsindex number that I specified and not the -Dsflow.hostname. The JVM metrics all show up with the -Dsflow.hostname though. Is there a way to get the HTTP metrics to use the hostname as well?

    4. What software are you using to analyze and display the sFlow data? In general, one can't assume a one to one correspondence between application instance and hostname since there can be multiple web server instances per physical / virtual machine. However, the containment hierarchy is explicit in the sFlow data model and it should be possible to query to find the hostname of the containing virtual machine and include this in reports.

    5. Thanks for getting back to me; we're using Ganglia as the reporting software. The way they have implemented the JVM here is that for each application they develop, they create a separate JVM bound to port 8080 and a new virtual IP address assigned to the same physical host OS/adapter. So, as an example, we have one server running App1 with an alias IP of X.X.X.1, App2 with an alias IP of X.X.X.2, App3 with an alias IP of X.X.X.3, and so on. All of these alias IPs are on the same physical host OS/adapter. I'm not sure why they designed the JVM this way.
      Here is what I see when I connect to the Ganglia gmond receiving port for the application Moblie with the following set in the env:

      -Dsflow.dsindex=65539 -Dsflow.hostname=Moblie"

      HTTP metrics show with the number:

      JVM metrics show with the name (in our case the virtual IP hostname):

      Is there any way to get the HTTP metric name to show as the JVM metric name?

    6. I don't believe there is a way to do what you are asking with Ganglia, since Ganglia assumes there may be more than one web server per hostname and it falls back on the index (which would normally be the port the web server is listening on, and a useful way of labeling the instance).

      Have you looked at the HTTP transaction data being exported via sFlow (URL, host, user agent, response time, status code etc)? Ganglia isn't able to report on this data, but you might want to try sFlow-RT or sFlowTrend.

      If you want to run additional tools in parallel with Ganglia, you can configure the Host sFlow agents to send to multiple destinations, or use sflowtool/tcpdump to replicate and forward the sFlow from your gmond collector.

  11. We have hsflowd running in user space, without permission to add or create the /etc/ file,
    so ours is in /home/(user name)/run/. Can we add a -Dhsflowhome= option to use a different location for it?

    1. The Tomcat valve is intended to be used with a Host sFlow agent (which will write the file) - see Host sFlow distributed agent. sFlow measurements from the network, host, virtual switches, VMs and applications are part of an integrated system that links the performance of the layers together - see sFlow Host Structures. An sFlow collector will expect to see host performance statistics from hosts exporting HTTP statistics.

  12. We are running hsflowd, but the file is not in /etc/. Can we tell the SFlowValve where to find it?

  13. We updated Tomcat from 7.0.10 to 7.0.61. In the previous version we collected Tomcat metrics using the sflowvalve.jar module. The new version gives the following errors:
    org.apache.coyote.http11.AbstractHttp11Processor process
    SEVERE: Error processing request
    java.lang.NoSuchMethodError: org.apache.coyote.Request.getBytesRead()I
    at com.sflow.catalina.SFlowValve.xdrFlowSample(
    at com.sflow.catalina.SFlowValve.sampleRequest(
    at com.sflow.catalina.SFlowValve.invoke(
    at org.apache.catalina.valves.AccessLogValve.invoke(
    at org.apache.catalina.core.StandardEngineValve.invoke(
    at org.apache.catalina.connector.CoyoteAdapter.service(
    at org.apache.coyote.http11.AbstractHttp11Processor.process(
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$ Source)
    at org.apache.tomcat.util.threads.TaskThread$
    at Source)

    Any help will be very appreciated.

  14. We updated Tomcat from 7.0.10 to 7.0.61. In the previous version we collected Tomcat metrics using the sflowvalve.jar module. It looks like the Coyote connector can't call the Request.getBytesRead() method.
    Did anybody have this issue?

  15. I still didn't get any answer. I only know that the Coyote HTTP connector was changed in 7.0.60.
    I tried to recompile for Tomcat 7.0.61; it didn't help. Any ideas?

    1. When you say re-compiling didn't help, what do you mean? Are you still getting an exception from the Request.getBytesRead() method?

      Are you running on Windows? The current implementation relies on the Host sFlow configuration file and only works on Linux based systems.

  16. Hi Peter,
    I added logging to
    Now I see that the System.getProperty("sflow.dsindex") call always returns null.
    I work on Linux version 2.6.39-400.214.4.el6uek.x86_64.
    Hsflowd is running normally. Any help?

    1. You don't need to set the sflow.dsindex property.

    2. Do the settings in your /etc/ file look correct (see Host sFlow distributed agent)?

  17. I don't see Request.getBytesRead() exception anymore, but I don't get any data.

    1. Are you also running Host sFlow? The current implementation relies on the Host sFlow configuration file - see Host sFlow distributed agent. Currently only Linux based platforms are supported - there is no integration between Host sFlow and the Tomcat Valve on Windows.

      How are you verifying whether data is being sent? Have you tried sflowtool?

  18. This is a piece of

    public static int dsIndex() {
        int dsIndex = -1;
        String indexStr = System.getProperty("sflow.dsindex");
        if(indexStr != null) {
            try { dsIndex = Integer.parseInt(indexStr); }
            catch(NumberFormatException e) {}
        }
        return dsIndex;
    }
    It's called from here:

    private void pollCounters(long now) {
        if(agentAddress == null) return;
        if(dsIndex == -1) return;
    I see that it always hits (dsIndex == -1) => return at this point.
    I tried commenting this out and now it sends HTTP and JVM metrics.

    Am I using the right source code?

    1. In the case where sflow.dsindex is not set as a system property, it will be set in sampleRequest():

      if(dsIndex == -1) dsIndex = request.getLocalPort();

      Are you generating any http requests to Tomcat? You can adjust the sampling rate by setting the parameter sampling.http in the hsflowd.conf file (you need to restart hsflowd after making the change).
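      As a minimal sketch, the relevant hsflowd.conf fragment might look like the following (the collector address and the 1-in-400 sampling rate are illustrative, and the exact syntax may vary between Host sFlow versions):

      ```
      sflow {
        collector { ip = }
        sampling.http = 400
      }
      ```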

    2. You can set the sflow.dsindex system property in the Tomcat script, add a line like the following:

      export CATALINA_OPTS="$CATALINA_OPTS -Dsflow.dsindex=1234 -Dsflow.hostname=app_1"

      The script is in the Tomcat bin directory.

  19. I'm very limited in setting up Tomcat environments. I can only adjust monitoring parts.

  20. I added it to :
    export CATALINA_OPTS="$CATALINA_OPTS -Dsflow.dsindex=1234 -Dsflow.hostname=app_1"

    Now sflowvalve.jar is working.