Consistent Hashing with Bounded Loads

Adam Cassar

Co-Founder

5 min read

A consistent hash with load bounds is an enhancement of the basic consistent hashing technique. In the standard consistent hashing algorithm, keys are distributed across servers in a way that minimizes rehashing when servers are added or removed, but there is no guarantee of an even load distribution. Because the hashed positions of servers and keys on the ring are effectively random, some servers may end up responsible for significantly more keys than others, leading to an imbalance in load.

The concept of consistent hashing with bounded loads, or simply bounded-load hashing, addresses this issue. It provides a way to ensure that the load (i.e., the number of keys assigned) on any given server stays within a small factor (1 + ε) of the average load across all servers.

The approach still uses a hash ring, but with an additional constraint that no server can exceed a certain load limit. When a key is hashed and the corresponding server on the ring is already at its load limit, the key is passed to the next server on the ring, and the next, until one with spare capacity is found.

Here's a simplified Python example of how it might look, assuming a fixed ring of 1000 slots:

import hashlib

RING_SIZE = 1000  # number of slots on the hash ring

def hash_function(key):
    # Map a key to a slot on the ring
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

# Function to add a key with bounded loads
def add_key_bounded(key):
    position = hash_function(key)

    # Walk clockwise from the key's position until a server has spare capacity
    for offset in range(RING_SIZE):
        slot = (position + offset) % RING_SIZE  # wrap back to the start of the ring
        server = hash_ring[slot]
        if server is not None and server_loads[server] < LOAD_LIMIT:
            server_loads[server] += 1  # increment the server's load
            return server  # the server that should handle this key
    raise RuntimeError("every server is at its load limit")

In this example, hash_ring is a list of RING_SIZE slots that holds a server name (or None) at each position, server_loads is a dictionary that keeps track of how many keys each server is responsible for, and LOAD_LIMIT is the maximum number of keys any server is allowed to handle.
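For completeness, a minimal setup for the sketch above might look like this; the server names and the limit of 100 keys are arbitrary placeholders:

# Illustrative setup: place three servers on the ring at their hashed positions
hash_ring = [None] * RING_SIZE
servers = ["server-a", "server-b", "server-c"]
for server in servers:
    hash_ring[hash_function(server)] = server

server_loads = {server: 0 for server in servers}
LOAD_LIMIT = 100  # arbitrary cap for the example

print(add_key_bounded("user:42"))  # prints whichever server picks up the key

In practice, each server is usually placed at many positions on the ring (so-called virtual nodes) to smooth out the distribution.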

By bounding the load on each server, this approach can ensure a more balanced distribution of keys across servers. However, it's worth noting that this algorithm can result in keys being assigned to servers that are not their nearest neighbors in the hash ring, which can potentially increase latency in some cases. As with any algorithm, there's a trade-off between load balancing and other factors such as latency and computational overhead.

The Google Cloud team, in collaboration with visiting researcher Mikkel Thorup from the University of Copenhagen, devised an efficient allocation algorithm to improve load balancing in large-scale web services such as content hosting. The results of their research were detailed in a paper titled “Consistent Hashing with Bounded Loads” that was published in August 2016.

Load balancing, a crucial aspect of managing large-scale web services, involves distributing client requests evenly across multiple servers to prevent any individual server from becoming overloaded. An ideal load-balancing system not only distributes the load uniformly but also minimizes the changes to the system when servers or clients are added or removed. This consistency is crucial in dynamic environments where servers and clients can change over time.

Traditional consistent hashing techniques, although effective for load balancing in dynamic environments, can lead to sub-optimal load distribution across servers in certain scenarios. Additionally, as both servers and clients can be added or removed frequently, the load balancing algorithm needs to be dynamic and adaptive, maintaining an evenly distributed load while minimizing the number of client reassignments whenever a change occurs.

Google's algorithm tackles these challenges head-on. To explain their method, they use an analogy of servers as bins and clients as balls. The goal is for every bin (server) to carry a load (number of clients) close to the average, within a small tolerance factor ε. Concretely, the algorithm caps each bin's capacity at the average load times (1 + ε), which helps achieve both uniformity and consistency in client-server allocations.
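To make the capacity rule concrete, here is a small sketch of the per-bin limit; the numbers are purely illustrative:

from math import ceil

def bin_capacity(num_balls, num_bins, epsilon):
    # Each bin may hold at most ceil((1 + epsilon) * average load) balls
    return ceil((1 + epsilon) * num_balls / num_bins)

print(bin_capacity(num_balls=1000, num_bins=8, epsilon=0.25))  # -> 157 (average load is 125)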

The algorithm employs two separate hash functions to assign positions to balls (clients) and bins (servers) on a circular continuum. The balls are then allocated in a specific order (say, based on their ID), with each ball being assigned to the first bin it encounters with spare capacity as it moves clockwise on the circle.
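Putting those pieces together, here is a simplified sketch of the allocation step. The salted MD5 hash, the fixed processing order, and the default ε are illustrative choices for readability, not the exact construction analyzed in the paper:

import hashlib
from bisect import bisect_left
from math import ceil

def _position(value, salt):
    # Two hash functions are simulated here by salting a single hash differently
    return int(hashlib.md5(f"{salt}:{value}".encode()).hexdigest(), 16) % (2**32)

def allocate(servers, keys, epsilon=0.25):
    # Per-bin capacity: ceil((1 + epsilon) * average load)
    capacity = ceil((1 + epsilon) * len(keys) / len(servers))

    # Place the bins (servers) on the circle and sort them by position
    ring = sorted((_position(s, "bin"), s) for s in servers)
    positions = [p for p, _ in ring]

    loads = {s: 0 for s in servers}
    assignment = {}

    # Place each ball (key) in order and walk clockwise to the first bin with spare capacity
    for key in sorted(keys):
        start = bisect_left(positions, _position(key, "ball")) % len(ring)
        for step in range(len(ring)):
            _, server = ring[(start + step) % len(ring)]
            if loads[server] < capacity:
                loads[server] += 1
                assignment[key] = server
                break
    return assignment, loads

For example, allocate(["s1", "s2", "s3"], [f"key-{i}" for i in range(30)]) caps every server at ceil(1.25 × 10) = 13 keys while otherwise keeping the clockwise placement of ordinary consistent hashing.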

The algorithm is designed to recompute the allocation whenever an update occurs (i.e., the addition or removal of a client or server) to maintain the uniform distribution of load. The paper proves that every ball removal or insertion in the system results in O(1/ε²) movements of other balls. Importantly, this upper bound is independent of the total number of balls or bins in the system, which means the algorithm scales well with increasing size.

The algorithm creates a trade-off between uniformity and consistency. A smaller ε value provides better uniformity but compromises consistency, while a larger ε value improves consistency but reduces uniformity. The algorithm performs well even in worst-case scenarios, making it ideal for dynamic, large-scale environments like content hosting services.

Google's algorithm has already seen real-world success. Andrew Rodland from Vimeo implemented it in HAProxy, the popular open-source load balancer, for their load-balancing project, reducing cache bandwidth by almost a factor of 8. This eliminated a significant scaling bottleneck, demonstrating the algorithm's practical effectiveness.

Overall, the work of the Google Cloud team and Mikkel Thorup represents a significant advancement in the field of load balancing. By addressing the challenges of uniformity and consistency in dynamic environments, their algorithm provides a robust solution for managing large-scale web services efficiently. The research and its open-source availability promise to continue benefiting the broader community.

