Google's traffic engineering system can insert multiple non-shortest path routes, depending on traffic priorities and measured demand. Using only 3 non-shortest path routes increases overall throughput by around 15%.
However, Amin stated that the big win is being able to run the backbone links at close to 100% utilization 24x7, more than a threefold improvement over traditional WAN designs.
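The placement idea can be illustrated with a minimal sketch (an assumption-laden example, not Google's implementation): measured demand is placed on the shortest path first, and the remainder spills onto a handful of non-shortest alternates. The path names and capacities below are hypothetical.

```python
def place_demand(demand, paths):
    """Split a demand across candidate paths in preference order (shortest first)."""
    placement = []
    remaining = demand
    for path in paths:
        take = min(remaining, path['free'])
        if take > 0:
            placement.append((path['name'], take))
            path['free'] -= take
            remaining -= take
        if remaining == 0:
            break
    return placement, remaining

# Example: a 10G shortest path plus three longer alternates (hypothetical numbers)
paths = [
    {'name': 'shortest', 'free': 10},
    {'name': 'alt-1',    'free': 4},
    {'name': 'alt-2',    'free': 4},
    {'name': 'alt-3',    'free': 4},
]
print(place_demand(18, paths))
# ([('shortest', 10), ('alt-1', 4), ('alt-2', 4)], 0) -- alternates carry the overflow
```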
A key element of the Google architecture is the use of traffic prioritization. Generally, over-provisioning has prevailed as the technique for ensuring wide area network quality of service, see The economics of the Internet: Utility, utilization, pricing, and Quality of Service and, more recently, The Concept of Quality of Service in the Internet. This seems like a contradiction - why does it make sense to use quality of service mechanisms in Google's case?
Actually, there isn't a contradiction. By using SDN to accurately place traffic into just two classes (high priority and low priority), Google is effectively using over-provisioning to ensure high quality of service for the high priority class (which comprises 10-20% of the link traffic). The rest of the bandwidth is filled with low priority traffic that must tolerate packet loss and lower availability, since the low priority traffic may be dropped in the case of link failure.
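The two-class model is simple enough to sketch. The following is an illustrative strict-priority allocation, assuming hypothetical link capacities and demands; it shows how the link can run full while only low priority traffic is dropped when a failure removes capacity.

```python
def allocate(capacity, high_demand, low_demand):
    """Strict-priority allocation of link capacity between two traffic classes."""
    high_carried = min(high_demand, capacity)
    low_carried = min(low_demand, capacity - high_carried)
    low_dropped = low_demand - low_carried
    return high_carried, low_carried, low_dropped

link = 100  # Gbit/s, hypothetical
print(allocate(link, high_demand=15, low_demand=85))
# (15, 85, 0): the link runs at 100% utilization
print(allocate(link * 0.5, high_demand=15, low_demand=85))
# (15.0, 35.0, 50.0): a failure halves capacity, but only low priority traffic is dropped
```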
Figure 1: Cloud operation system (from Pragmatic software defined networking)
SDN and large flows describes how steering large flows can significantly increase available bandwidth. Active flow steering and traffic classification are complementary techniques that could be combined to dramatically increase the usable bandwidth in any given physical network.
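As a rough illustration of how the two techniques might be combined, the sketch below classifies flows by measured rate and service class, then steers large, low priority flows onto a less-loaded non-shortest path. The threshold, flow records, and path structures are assumptions for the example, not a particular controller's API.

```python
LARGE_FLOW_BPS = 1e9  # hypothetical cutoff: flows above ~10% of a 10G link are "large"

def steer(flows, paths):
    """Assign each flow a priority class and a path; paths[0] is the shortest path."""
    decisions = []
    for flow in sorted(flows, key=lambda f: f['bps'], reverse=True):
        priority = 'high' if flow['class'] == 'user-facing' else 'low'
        if flow['bps'] >= LARGE_FLOW_BPS and priority == 'low':
            # steer big background flows onto the least-loaded non-shortest path
            path = min(paths[1:], key=lambda p: p['load'])
        else:
            # small or high priority flows stay on the shortest path
            path = paths[0]
        path['load'] += flow['bps']
        decisions.append((flow['id'], priority, path['name']))
    return decisions

paths = [{'name': 'shortest', 'load': 0.0}, {'name': 'alt-1', 'load': 0.0}]
flows = [
    {'id': 'copy-job', 'class': 'background',  'bps': 4e9},
    {'id': 'web',      'class': 'user-facing', 'bps': 2e8},
]
print(steer(flows, paths))
# [('copy-job', 'low', 'alt-1'), ('web', 'high', 'shortest')]
```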