The diagrams show two different configurations for sFlow monitoring:
- Without Forwarding: Each sFlow agent is configured to send sFlow to each of the analysis applications. This configuration is appropriate when a small number of applications is being used to continuously monitor performance. However, the overhead on the network and agents increases as additional analyzers are added. Often it is not possible to increase the number of analyzers since many embedded sFlow agents have limited resources and only support a small number of sFlow streams. In addition, the complexity of configuring each agent to add or remove an analysis application can be significant since agents may reside in Ethernet switches, routers, servers, hypervisors and applications on many different platforms from a variety of vendors.
- With Forwarding: All the agents are configured to send sFlow to a forwarding module, which resends the data to the analysis applications. Analyzers can then be added and removed simply by reconfiguring the forwarder, without any changes to the agent configurations (an example agent configuration is sketched below).
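For example, in an Open vSwitch deployment each agent would be pointed at the single forwarder rather than at a list of analyzers. The following is a minimal sketch based on the Open vSwitch sFlow configuration cookbook; the bridge name br0, agent interface eth0, forwarder address 10.0.0.50 and the sampling settings are illustrative values, not part of the original article:
# point the agent at the forwarder instead of listing each analyzer
ovs-vsctl -- --id=@sflow create sflow agent=eth0 \
target=\"10.0.0.50:6343\" header=128 sampling=64 polling=10 \
-- set bridge br0 sflow=@sflow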
A previous posting introduced the sflowtool command line utility. The following examples demonstrate how sflowtool can be used to replicate and forward sFlow streams.
The following command configures sflowtool to listen for sFlow on the well-known port (UDP port 6343) and forward the sFlow to two analyzers: the first running on remote machine 10.0.0.111 and the second listening on port 7343 on the local host.
sflowtool -f 10.0.0.111 -f localhost/7343
If an sFlow analyzer is already running on the server then it will already be bound to the sFlow port and the above command will fail. However, you can still forward the sFlow using the tcpdump command to capture the sFlow datagrams and sflowtool to forward them:
tcpdump -p -s 0 -w - udp port 6343 \
| sflowtool -r - -f 10.0.0.111 -f localhost/7343
It is also possible to filter the sFlow data to pick out a particular agent. The following command selectively forwards sFlow coming from IP address 10.0.0.237:
tcpdump -p -s 0 -w - src host 10.0.0.237 and udp port 6343 \
| sflowtool -r - -f 10.0.0.111 -f localhost/7343
Rather than forwarding the sFlow, this technique can also be used to locally analyze the data. For example, suppose that Ganglia is being used to monitor the performance of a web farm. While Ganglia might show a large spike in HTTP requests, analysis using sflowtool offers additional details:
tcpdump -p -s 0 -w - udp port 6343 | sflowtool -r - -H
10.0.0.70 - - [03/Jan/2012:14:44:29 -0800] "GET http://www.google.com/ HTTP/1.1" 200 21605 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.63 Safari/535.7"
The information about URLs, user agents, response times, status codes and bytes provides the additional detail needed to diagnose the performance problem, for example identifying overloaded web servers, top URLs and the sources of the increased load; a quick way to summarize top URLs is sketched below.
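Since the -H output shown above is in combined log format, standard shell tools can produce a quick summary. The pipeline below is a minimal sketch, not part of the original setup: it captures 1000 sFlow datagrams, decodes the HTTP samples, and counts the most frequently requested URLs; the packet count and the awk field position are assumptions based on the log format shown above.
# $7 is the request URL in the combined log format output
tcpdump -p -c 1000 -s 0 -w - udp port 6343 | sflowtool -r - -H \
| awk '{print $7}' | sort | uniq -c | sort -rn | head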
Note: See the sflowtool article for more examples of analyzing sFlow data using sflowtool.
The command "tcpdump -p -s 0 -w - udp port 6343 \
| sflowtool -r - -f 10.50.2.15/7343 -f localhost/7343" doesn't work.
I use Ganglia listening on port 6343 to parse the host/vhost metrics. When I want to parse sFlow switch packets with sflowtool on the same server, I use the command mentioned but it doesn't work. Neither localhost nor 10.50.2.15 receives any UDP packets on port 7343.
How can I make it work?
Can you verify that the sFlow data is being captured by tcpdump:
tcpdump -p -s 0 udp port 6343
If you print out the contents of the sFlow packets, do you see any flow samples?
tcpdump -p -s 0 -w - udp port 6343 | sflowtool -r -
Thanks for your response, Peter.
I can print out the contents of the sFlow packets correctly with "tcpdump -p -s 0 -w - udp port 6343 | sflowtool -r -", but it just can't forward the packets.
If there is any progress on this bug, please let me know.
It does look like there is a bug in the forwarding option (-f) for sflowtool. I am following up with the developer and will let you know what I find out.
Thanks for pointing this out. There is a new version of sflowtool (version 3.26) that has the fix. You should now be able to read from tcpdump, pull out the sFlow packets, and use the -f option to forward them to one or more targets. Please download, compile, and confirm that it works for you.
http://inmon.com/technology/sflowTools.php
(This version also has a spec file for building an rpm)
Hi Neil,
I have installed a virtual switch with an sFlow agent, and I am monitoring the traffic using sflowtool. It works very well!! Thank you so much for the tool.
But my doubt is that I am only able to see the counter samples and not the flow samples. I am not too sure what I am missing to be able to see the flow samples as well.
My setup is exactly like the one in this link: http://openvswitch.org/support/config-cookbooks/sflow/
Thanking you again,
Keerthana
sflowtool is cool, no question. But one disadvantage is that it is not able to communicate in NetFlow v9/IPFIX with a NetFlow collector, so it will silently ignore sFlow samples containing IPv6 data.
What NetFlow/IPFIX collector are you using? Most collectors natively support sFlow and so there is no need to convert sFlow to NetFlow. The conversion loses much of the useful information contained in the sFlow feed - it is better to use a traffic analyzer with native sFlow support if you want to get the most out of the data. For example, you are interested in IPv6 so you probably would like visibility into IPv6/IPv4 migration.
I have been using nprobe as a NetFlow collector for a few years now. nprobe has an sFlow collector as well, but it's buggy and currently no fun to use. My issue is aggregation of sFlow data, so feeding it into the NetFlow collector makes sense for me before it hits the database. Otherwise I would like to write something simple like 'sflowtool | dbimport'.
NFDUMP or pmacct are open source projects that you might want to look at. I believe they can take sFlow and put records into a database.
Another alternative is sFlow-RT's RESTflow API. You can configure and collect flows using HTTP. The article includes a Python script that you could modify to insert flow records into a database.
Hi Peter
I am using flowd to capture the NetFlow packets but it does not support sFlow, so I have downloaded and compiled sflowtool and it works!
Thank you very much for your great work.
I am running it as below:
/usr/local/bin/sflowtool -v 5 -f localhost/2055
The problem I am having is that when the sFlow to NetFlow conversion is done the agent_addr gets overwritten with 127.0.0.1, which is localhost.
The actual NetFlow source is below:
192.168.1.254 Lab_router
Here is the decoded NetFlow packet:
FLOW: tag (0), recv_sec (1444378066), agent_addr (127.0.0.1), src_addr (192.168.1.70), dst_addr (192.168.1.254), src_port (4066), dst_port (1967), flow_octets (880), flow_packets (11), protocol (17), tcp_flags (16), tos (41), if_index_in (10), if_index_out (0), flow_start (3669321900), flow_finish (3668721888))
As you see agent_addr (127.0.0.1) has been re-written with 127.0.0.1 instead of 192.168.1.254.
I am using agent_addr to uniquely identify the device where sflow is configured.
Is there a way to make original agent_addr to stick?
Thank you
Mike
You need to include the -S option:
-S - spoof source of netflow packets to input agent IP
I believe you need to be running as root to allow spoofing of addresses
Hi Peter
Thanks for your prompt response.
I have included the -S option as below:
/usr/local/bin/sflowtool -S -v 5 -f localhost/2055
But the output still the same:
FLOW: tag (0), recv_sec (1444427489), agent_addr (127.0.0.1), src_addr (192.168.1.100), dst_addr (192.168.1.254), src_port (55675), dst_port (161), flow_octets (12875), flow_packets (163), protocol (17), tcp_flags (16), tos (0), if_index_in (10), if_index_out (0), flow_start (3718145504), flow_finish (3718143660))
If the code requires further development I am happy to fund the effort as I really need this to work.
Kind regards
Mike Chelomanov
NETSAS Australia
http://netsas.com.au
The -f host/port option is for forwarding sFlow only. The command line settings to forward source-spoofed sampled NetFlow are like this:
sflowtool -S -c localhost -d 2055
For full details, see the output of "sflowtool -h".
Neil
Hi Neil
Thanks for your reply.
I do need to forward sFlow only, but I need to preserve agent_addr so that when NetFlow packets (converted from sFlow) are received by the NetFlow collector (flowd) I can differentiate between the various sFlow sources.
Could you please be patient with me and explain how this can be achieved?
If this option is not currently implemented I am happy to fund the dev work.
Appreciate your help.
Kind regards
Mike
Unlike NetFlow, sFlow carries the agent address in the UDP payload - there shouldn't be a need to spoof the IP source address. What are you using to receive the sFlow being forwarded by sflowtool? That tool should be identifying the sFlow source using the embedded sFlow agent address.
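For example, the default sflowtool decode prints an "agent" line for each received datagram, so one quick way to check the embedded agent addresses (a sketch that assumes the sFlow is arriving on the standard port 6343) is:
# each decoded datagram includes an "agent <ip>" line identifying its source
tcpdump -p -s 0 -w - udp port 6343 | sflowtool -r - | grep '^agent '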
Hi Peter
I am using flowd, which listens on port 2055 and dumps all flows into the binary file /var/log/flowd.
Then I am using a Perl parser, which extracts all relevant fields including agent_addr.
When I do forwarding with sflowtool, the extracted agent_addr becomes 127.0.0.1.
Perhaps the sFlow agent_addr goes somewhere else in the NetFlow packet...
I would really appreciate your help with this, as we have promised our client that Enigma NMS will accept sFlow; your tool seems to be perfect for this.
Kind regards
Mike
Can sflowtool accept sFlow packets and dump them somewhere in the file system so we can pick them up?
I still don't understand - what software is converting the sFlow to NetFlow? You said you are using sflowtool to forward sFlow, but not convert sFlow to NetFlow. If you use sflowtool (with the spoof option set) to convert sFlow to NetFlow then you will get the correct agent address.
The source code for sflowtool is on GitHub (https://github.com/sflow/sflowtool) - could Enigma NMS embed the code to natively support sFlow?
Hi Peter
I am very sorry for the confusion; when I said "forward" I meant "convert".
I realize how dumb I look :-(
So I'll start from the beginning:
1. What is the syntax for sflowtool to convert sFlow into NetFlow and preserve the original agent address?
2. What is the syntax for sflowtool to accept sFlow packets and save them to disk?
If sflowtool can accept sFlow packets and save them to disk, we would gladly embed it into Enigma NMS.
I am again sorry for the confusion.
Kind regards
Mike
1. Neil's comment above provided the syntax to convert sFlow to NetFlow and forward the NetFlow to a NetFlow collector (restated below).
2. sflowtool does not save data to disk - the NetFlow collector should be saving the data so that the traffic history can be queried.
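For reference, that is the command Neil gave earlier, run with root privileges so the source address can be spoofed (using sudo here is an assumption about how root is obtained on your system):
sudo sflowtool -S -c localhost -d 2055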
Thank you Neil and Peter!
All works like a charm!!!
Is there a way to donate to your project?
Hi Team,
The sFlow v5 packets are being forwarded from source port 6343 to destination port 2055.
I am receiving sFlow packets on port 2055.
When I run the command below, it throws an error like this:
./sflowtool -4 -p 2055 -l
v4 bind() failed, port = 2055 : Address already in use
unable to open UDP read socket
Does the tool run on the same port number (2055 in my case) that it is listening on?
If so, is there a way I can change the config so that the process can run on one port number and listen on another?
Could you please help fix this issue?
Thanks.
The two sflowtool instances can't open the same port. The article has an example of using tcpdump to grab the sFlow packets and send them to sflowtool without needing to open the port. You would run the command:
tcpdump -p -s 0 -w - udp port 2055 | sflowtool -r -