Wednesday, December 1, 2010

XCP 1.0 beta

This article describes the steps needed to configure sFlow monitoring on the latest Xen Cloud Platform (XCP) 1.0 beta release.

First, download and install the XCP 1.0 beta on a server. The simplest way to build an XCP server is to use the binaries from the XCP 1.0 beta download page.

Next, enable the Open vSwitch by opening a console and typing the following command:

xe-switch-network-backend openvswitch

The article, Configuring Open vSwitch, describes the steps to manually configure sFlow monitoring. However, manual configuration is not recommended since additional configuration is required every time a bridge is added to the vSwitch, or the system is rebooted.
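For reference, the kind of per-bridge command that article walks through looks roughly like this; the collector address 10.0.0.50:6343 is a placeholder, and the bridge name must match your system (repeat the "set bridge" clause for every bridge):

```shell
# Sketch of the manual per-bridge sFlow setup (for reference only).
# 10.0.0.50:6343 is a hypothetical collector address; xenbr0 is one bridge.
CMD='ovs-vsctl -- --id=@s create sflow agent=eth0 target=\"10.0.0.50:6343\" header=128 sampling=512 polling=20 -- set bridge xenbr0 sflow=@s'
echo "$CMD"   # shown as a dry run; execute the command itself on the XCP host
```

Because this OVSDB entry is lost on reboot and is not applied to new bridges, the Host sFlow agent described next is the preferred approach.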

The Host sFlow agent automatically configures the Open vSwitch and provides additional performance information from the physical and virtual machines (see Cloud-scale performance monitoring).

Download the Host sFlow agent and install it on your server using the following commands:

rpm -Uvh hsflowd_XCP_xxx.rpm
service hsflowd start
service sflowovsd start

The following steps configure all the sFlow agents to sample packets at 1-in-512, poll counters every 20 seconds, and send sFlow to an analyzer over UDP using the default sFlow port (6343).

Note: A previous posting discussed the selection of sampling rates.
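As a rough sanity check on the 1-in-512 choice, here is a back-of-envelope estimate of the resulting sample stream; the 100,000 packets/sec link load is an assumed figure, not from the original post:

```shell
PPS=100000   # assumed packet rate on a busy link (hypothetical figure)
RATE=512     # 1-in-512 sampling, as configured in this article
echo "$((PPS / RATE)) samples/sec"   # → 195 samples/sec
```

Even on a heavily loaded link, the sampled stream is a tiny fraction of the traffic, which is what makes sFlow practical at data center scale.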

The default configuration method for sFlow is DNS-SD; enter the following DNS settings in the site DNS server:

analyzer A

_sflow._udp SRV 0 0 6343 analyzer
_sflow._udp TXT (
    "txtvers=1"
    "sampling=512"
    "polling=20"
)

Note: These changes must be made to the DNS zone file corresponding to the search domain in the XCP server's /etc/resolv.conf file. If you need to add a search domain to the DNS settings, do not edit the resolv.conf file directly, since the changes will be lost on a reboot. Instead, either follow the directions in How to Add a Permanent Search Domain Entry in the Resolv.conf File of a XenServer Host, or simply edit the DNSSD_domain setting in hsflowd.conf to specify the domain to use to retrieve the DNS-SD settings.
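For example, to point the agent at a zone other than the host's search domain, the relevant hsflowd.conf lines might look like this (the domain shown is a placeholder):

```
sflow {
  DNSSD = on
  DNSSD_domain = .sflow.example.com
}
```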

Once the sFlow settings are added to the DNS server, they will be automatically picked up by the Host sFlow agents. If you need to change the sFlow settings, simply change them on the DNS server and the change will automatically be applied to all the XCP systems in the data center.

Manual configuration is an option if you do not want to use DNS-SD. Edit the Host sFlow agent configuration file, /etc/hsflowd.conf, on each XCP server:

sflow {
  DNSSD = off
  polling = 20
  sampling = 512
  collector {
    ip =
    udpport = 6343
  }
}

After editing the configuration file you will need to restart the Host sFlow agent:

service hsflowd restart

An sFlow analyzer is needed to receive the sFlow data and report on performance (see Choosing an sFlow analyzer). The free sFlowTrend analyzer is a great way to get started; see sFlowTrend adds server performance monitoring for examples.

March 3, 2011 Update: XCP 1.0 has now been released; download the production version from the XCP 1.0 Download page. The installation procedure hasn't changed - follow these instructions to enable sFlow on XCP 1.0.


  1. I configured sFlow on the XCP server (the latest XCP 1.5 release), and the new agent automatically appears in the list of agents in sFlowTrend with green status. The agent is a host. I need to receive traffic from the Open vSwitch (xenbr0) to monitor each virtual machine, but I cannot edit or check the sFlow agent address box or the Use global SNMP settings box.
    In addition, I don't see the new agent in the Switch selector in the control bar.
    The goal is to control the rate used by each virtual machine.
    ovs-vsctl list sflow
    _uuid : f18ddfd4-3dda-4c34-84df-9e6010eaf93c
    agent : "xenbr0"
    external_ids : {}
    header : 128
    polling : 20
    sampling : 512
    targets : [""]

    Should the agent be eth0, and should eth0 have an IP address?

  2. This output looks fine. If you run "/sbin/ifconfig" you'll see that the IP address is associated with xenbr0, which is why it shows as the agent here.

    The feed that this Open vSwitch sFlow sub-agent sends out will allow you to see the traffic from/to the MAC and IP addresses that belong to each VM, as well as interface counters for all the virtual ports. The other sub-agent is hsflowd itself, which will contribute CPU/Mem/diskIO stats for each VM, tagged with their UUIDs and MAC addresses. So it's the MAC addresses that you can use to tie it all together.

    Dynamically rate-limiting the VMs based on quotas, or to alleviate congestion elsewhere, seems like a great application. Here's an example of a similar system:
    Network Edge