(diagram from Going Superlinear)
The article, Going Superlinear, describes factors that affect the scalability of parallel processing systems. Ideally a parallel system will show a linear improvement in performance as the number of processors increases. However, typical parallel systems show sublinear performance since memory or I/O eventually becomes a bottleneck. Finally, the article states that it is sometimes possible to achieve superlinear scalability where adding processors creates a disproportionate increase in performance.
Applying the concepts from parallel processing to analyze the scalability of network performance monitoring systems is an interesting exercise. The parallel processing that allows a network monitoring system to scale is embedded in the network switches and operates as part of the packet forwarding function in each switch port. Therefore, it makes sense to measure system size in number of switch ports, not number of processors. The switch resources consumed by the monitoring system are a limiting factor. If an increase in network size requires a disproportionate increase in switch resources, then the monitoring system will have limited scalability.
In a NetFlow monitoring system, each switch maintains a flow cache containing active connections. As the network size increases, the amount of memory allocated to each flow cache must be increased to accommodate the larger number of active connections in a larger network. This disproportionate increase in switch resources needed to implement NetFlow results in sublinear scalability.
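To make this concrete, here is a minimal Python sketch of a NetFlow-style flow cache. The FlowKey fields, the max_entries budget, and the behavior when the cache fills are simplifying assumptions rather than a description of any particular switch implementation; the point is that the number of entries the cache must hold tracks the number of active connections, so the memory budget must grow with the network.

```python
from collections import namedtuple

# Simplified 5-tuple identifying a flow (hypothetical field set).
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port protocol")

class FlowCache:
    """Per-switch flow cache: one entry per active connection."""

    def __init__(self, max_entries):
        self.max_entries = max_entries   # fixed on-switch memory budget
        self.entries = {}                # FlowKey -> (packets, bytes)

    def update(self, key, packet_bytes):
        if key not in self.entries and len(self.entries) >= self.max_entries:
            # Cache full: the switch must evict or export a flow early,
            # losing accuracy. Avoiding this means provisioning more
            # memory as the number of active connections grows.
            return False
        pkts, octets = self.entries.get(key, (0, 0))
        self.entries[key] = (pkts + 1, octets + packet_bytes)
        return True
```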
The article states that superlinear scalability is possible if the monitoring system can:
- Do disproportionately less work.
- Harness disproportionately more resources.
- Less work. Packet sampling allows the switch to perform disproportionately less work (see Scalability and accuracy of packet sampling). Packet samples are not stored on the switch, but are immediately sent to a central collector for analysis, as sketched in the example after this list. The elimination of the flow cache further reduces the switch resources consumed by monitoring.
- More resources. Moving analysis to a central collector harnesses the disproportionately larger memory and computational resources available on a server compared to the limited resources available on a switch (see Choosing an sFlow analyzer).
- NetFlow caches must be sized to handle worst case loads. Most switches in the network will have excess flow cache capacity while a few busy switches might have insufficient flow cache memory. Centralizing the cache allows memory to be pooled and flexibly re-assigned from idle to busy switches as traffic patterns change.
- While some NetFlow implementations support packet sampling, sampling does not reduce the memory overhead because on-board flow caches are still sized for worst case (unsampled) loads; the result is poor cache utilization. With a centralized cache, the memory savings associated with sampling increase with network size.
- Traffic paths often traverse multiple switches. With NetFlow, each switch in the path will keep a copy of the flow in its cache. A centralized cache reduces memory requirements by eliminating the redundant copies.
- Centralizing the cache replaces scarce, expensive switch memory with abundant, inexpensive server memory.
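As a rough sketch of the sampling model described in the list above, the following Python fragment separates the per-port work (export an occasional sampled header, keep no flow state) from a central collector that maintains the pooled cache and scales its estimates by the sampling rate. The SAMPLING_RATE value, the dictionary-based packet representation, and the CentralCollector class are illustrative assumptions, not the sFlow protocol encoding.

```python
import random

SAMPLING_RATE = 1000  # 1-in-N packet sampling; hypothetical value

def switch_port_forward(packet, send_to_collector):
    """Per-port work under sampling: no flow cache on the switch,
    just an occasional sample exported to the central collector."""
    if random.randrange(SAMPLING_RATE) == 0:
        # Export only the packet header plus the sampling rate;
        # no per-flow state is kept on the switch.
        send_to_collector({"header": packet["header"], "rate": SAMPLING_RATE})

class CentralCollector:
    """Pooled flow cache on a server: one copy per flow for the
    whole network, with counts scaled up from the samples."""

    def __init__(self):
        self.flows = {}  # flow key -> estimated packet count

    def receive(self, sample):
        key = sample["header"]  # simplified: header stands in for the 5-tuple
        # Each sample represents roughly `rate` packets, so scale the estimate.
        self.flows[key] = self.flows.get(key, 0) + sample["rate"]
```

In this model, adding a switch port only adds the cheap sampling test to the forwarding path, while the per-flow state lives once, in server memory, where it can be pooled across all ports and reassigned as traffic patterns change.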
The differences in scalability between NetFlow and sFlow reflect differences in the size and type of network targeted by the two technologies. Typically, NetFlow monitoring is selectively applied to a relatively small number of routed links, whereas sFlow is used to monitor all the links in large switched networks (see LAN and WAN).
The sFlow standard is widely supported by switch vendors, delivering the scalability needed to manage the performance of large, converged networks. The scalability of sFlow allows performance monitoring to extend beyond the network to include virtualization, server and cloud computing in a single integrated system.