In Part 3, we covered the various distributed-systems techniques used in architecting NoSQL data stores, along with a table summarizing these techniques and their advantages. In this article, we go into the details of partitioning and replication.
Partitioning
As mentioned earlier, the key design requirement for DynamoDB is to scale incrementally. To achieve this, there must be a mechanism in place that dynamically partitions the entire data set over a set of storage nodes. DynamoDB employs consistent hashing for this purpose. As per the Wikipedia page, “Consistent hashing is a special kind of hashing such that when a hash table is resized and consistent hashing is used, only K/n keys need to be remapped on average, where K is the number of keys and n is the number of slots. In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped.”
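To see why a traditional modulo-based scheme remaps nearly all keys, here is a minimal Python sketch; the key names and node counts are made up purely for illustration:

```python
import hashlib

def bucket_mod(key, n):
    """Naive placement: hash the key, then take it modulo the node count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n

keys = [f"key-{i}" for i in range(100_000)]

# Count how many keys move when the cluster grows from 10 to 11 nodes.
moved = sum(bucket_mod(k, 10) != bucket_mod(k, 11) for k in keys)
print(f"{moved / len(keys):.0%} of keys remapped")  # ~91%, i.e. nearly all
```

With consistent hashing, the same growth step would move only about 1/11 of the keys, which is exactly the K/n bound quoted above.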
Figure: Consistent hashing ring (image credit in the original post)
Let’s understand the intuition behind it: consistent hashing is based on mapping each object to a point on the edge of a circle. The system maps each available storage node to many pseudo-randomly distributed points on the edge of the same circle. To find where an object O should be placed, the system locates that object’s key (which is MD5-hashed) on the edge of the circle, then walks clockwise around the circle until it encounters the first bucket point. The result is that each bucket contains all the resources located between its point and the previous bucket’s point. If a bucket becomes unavailable (for example, because the machine it resides on is unreachable), the points it maps to are removed, and requests for resources that would have mapped to those points now map to the next point along the circle. Since each bucket is associated with many pseudo-randomly distributed points, the resources that bucket held will now map to many different buckets. The items that mapped to the lost bucket must be redistributed among the remaining ones, but values mapping to other buckets still do and need not be moved. A similar process occurs when a bucket is added. You can clearly see that with this approach, the addition or removal of a storage node affects only its immediate neighbors; all other nodes remain unaffected.
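To make the clockwise walk concrete, here is a minimal Python sketch of the ring lookup. The Ring class, node names, and key format are our own illustrative choices, not DynamoDB’s actual implementation:

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    """Map a string to a point on the circle (the Dynamo paper uses MD5)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Each node occupies one point on the circle; keeping the points
        # sorted lets us "walk clockwise" with a binary search.
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's position to the first node point."""
        i = bisect.bisect(self.points, (ring_hash(key),))
        # Wrap around: past the last point, the first node comes next.
        return self.points[i % len(self.points)][1]

ring = Ring(["node-A", "node-B", "node-C"])
print(ring.node_for("user:42"))  # whichever node sits clockwise of the key
```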
The basic implementation of consistent hashing has its limitations:
- Random position assignment of each node leads to non-uniform data and load distribution.
- Heterogeneity of the nodes is not taken into account.
To overcome these challenges, DynamoDB uses the concept of virtual nodes. A virtual node looks like a single storage node in the system, but each physical node is responsible for more than one virtual node. The number of virtual nodes that a single node is responsible for depends on its processing capacity, as the sketch below illustrates.
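A minimal way to extend the earlier sketch with virtual nodes is to give each physical node several points on the circle, in proportion to its capacity. The VNodeRing class and the capacity counts here are illustrative assumptions:

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class VNodeRing:
    def __init__(self, capacities):
        # `capacities` maps node name -> number of virtual nodes; a beefier
        # machine gets more points on the circle and hence more keys.
        self.points = sorted(
            (ring_hash(f"{node}#{i}"), node)
            for node, count in capacities.items()
            for i in range(count)
        )

    def node_for(self, key: str) -> str:
        i = bisect.bisect(self.points, (ring_hash(key),))
        return self.points[i % len(self.points)][1]

# node-B has twice the capacity, so it ends up owning roughly twice the keys.
ring = VNodeRing({"node-A": 100, "node-B": 200, "node-C": 100})
```

Spreading each node over many pseudo-random points also evens out the load: when a node leaves, its keys scatter across all remaining nodes instead of piling onto one neighbor.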
Replication
Every data item is replicated at N nodes (and NOT virtual nodes). Each key is assigned to a coordinator node, which is responsible for the READ and WRITE operations of that particular key. The job of this coordinator node is to ensure that every data item falling in its range is stored locally and replicated to N - 1 other nodes. The list of nodes responsible for storing a particular key is called its preference list. To account for node failures, the preference list usually contains more than N nodes. In the case of DynamoDB, the default replication factor N is 3.
In the next article, we will look into Data Versioning.
Article authored by Vijay Olety.
X-Post from CloudAcademy.com