Showing 1 changed file in docs/architecture/cloud-native/relational-vs-nosql-data.md, with 4 additions and 2 deletions.

NoSQL databases include several different models for accessing and managing data:

| Model | Characteristics |
| :-- | :-- |
| Wide-Column Store | Related data is stored as a set of nested key/value pairs within a single column. |
| Graph Store | Data is stored in a graph structure as node, edge, and data properties. |
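
To make these two models concrete, here's a minimal sketch, using plain Python data structures rather than any particular database's API, of how the same user data might be laid out in each model; all keys and names are hypothetical:

```python
# Hypothetical illustration only: plain data structures, not a real database API.

# Wide-column store: related data as nested key/value pairs within a single column family.
wide_column_row = {
    "row_key": "user:42",
    "profile": {                 # column family holding profile columns
        "name": "Ada",
        "email": "ada@example.com",
    },
    "orders": {                  # another column family on the same row
        "order:1001": "shipped",
        "order:1002": "pending",
    },
}

# Graph store: the same facts as nodes and edges, each carrying data properties.
nodes = {
    "user:42": {"label": "User", "name": "Ada"},
    "order:1001": {"label": "Order", "status": "shipped"},
}
edges = [
    # (from_node, edge_type, to_node, edge_properties)
    ("user:42", "PLACED", "order:1001", {"placed_on": "2023-05-01"}),
]

print(wide_column_row["profile"]["name"])  # Ada
print(edges[0][1])                         # PLACED
```
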
## CAP and PACELC theorems

As a way to understand the differences between these types of databases, consider the CAP theorem, a set of principles applied to distributed systems that store state. Figure 5-10 shows the three properties of the CAP theorem.

The theorem states that distributed data systems will offer a trade-off between consistency, availability, and partition tolerance, and that any database can only guarantee two of the three properties:

- *Consistency.* Every node returns the most recent data, even if the system must delay a response until all replicas update.
- *Availability.* Every node returns an immediate response, even if that response isn't the most recent data.
- *Partition Tolerance.* Guarantees the system continues to operate even if a replicated data node fails or loses connectivity with other replicated data nodes.

The CAP theorem explains the trade-offs associated with managing consistency and availability during a network partition; however, trade-offs between consistency and performance also exist even in the absence of a network partition.

> [!NOTE]
> Even if you choose availability over consistency, availability will still suffer during a network partition. A CAP-available system is more available to some of its clients, but it isn't necessarily "highly available" to all of them.
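
As a rough illustration of the partition-time trade-off, here's a minimal sketch of a toy two-replica store (the `TinyStore` class and its `partitioned` flag are hypothetical, not any real database's API) that behaves as either CP or AP when replicas can't reach each other:

```python
# Hypothetical two-replica store illustrating CP vs. AP behavior during a partition.

class Replica:
    def __init__(self):
        self.data = {}

class TinyStore:
    def __init__(self, mode):
        self.mode = mode             # "CP" or "AP"
        self.primary = Replica()
        self.secondary = Replica()
        self.partitioned = False     # True when replicas can't reach each other

    def write(self, key, value):
        if self.partitioned:
            if self.mode == "CP":
                # Consistency over availability: refuse the write
                # rather than let replicas diverge.
                raise RuntimeError("unavailable during partition")
            # Availability over consistency: accept the write locally;
            # replicas diverge and must be reconciled later.
            self.primary.data[key] = value
            return
        # No partition: replicate synchronously to both nodes.
        self.primary.data[key] = value
        self.secondary.data[key] = value

store = TinyStore(mode="AP")
store.partitioned = True
store.write("cart:42", ["book"])             # accepted, but the secondary is now stale
print(store.secondary.data.get("cart:42"))   # None: reads here see old state
```
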
The CAP theorem is often extended to [PACELC](http://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf) to explain the trade-offs more comprehensively. The CAP theorem is particularly relevant in intermittently connected environments, such as those related to the Internet of Things (IoT), environmental monitoring, and mobile applications. In these contexts, devices may become partitioned because of challenging physical conditions, such as power outages or entering confined spaces like elevators. For distributed systems, such as cloud applications, the PACELC theorem is more appropriate because it also considers trade-offs between latency and consistency even in the absence of a network partition.
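
To make PACELC's "else" branch concrete (when no partition occurs, the trade-off is latency versus consistency), here's a minimal sketch; the replica delays and the helper function are hypothetical, not any specific database's behavior:

```python
# Hypothetical sketch of PACELC's "else" case: with no partition,
# a system still trades latency against consistency.

REPLICA_DELAYS_MS = [5, 40, 120]  # assumed round-trip times to three replicas

def write_latency_ms(consistency):
    """Return the write latency implied by the chosen consistency level."""
    if consistency == "strong":
        # Strong consistency: wait until every replica acknowledges,
        # so latency is bounded by the slowest replica.
        return max(REPLICA_DELAYS_MS)
    # Eventual consistency: acknowledge after the fastest replica;
    # the others catch up asynchronously, so reads there may be stale.
    return min(REPLICA_DELAYS_MS)

print(write_latency_ms("strong"))    # 120 ms, never stale
print(write_latency_ms("eventual"))  # 5 ms, possibly stale reads
```
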
Relational databases typically provide consistency and availability, but not partition tolerance. They're usually provisioned to a single server and scale vertically by adding more resources to the machine.

Many relational database systems support built-in replication features where copies of the primary database can be made to other secondary server instances. Write operations are made to the primary instance and replicated to each of the secondaries. Upon a failure, the system can fail over to a secondary to provide high availability. Secondaries can also be used to distribute read operations: while write operations always go to the primary replica, read operations can be routed to any of the secondaries to reduce system load.
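
Here's a minimal sketch of that read/write split (the `ReplicatedDatabase` class and its round-robin routing are hypothetical, not a specific database driver's API):

```python
import itertools

class Node:
    """Stand-in for a database connection (hypothetical)."""
    def __init__(self, name):
        self.name = name
    def execute(self, sql):
        return f"{self.name} ran: {sql}"

class ReplicatedDatabase:
    def __init__(self, primary, secondaries):
        self.primary = primary
        self._round_robin = itertools.cycle(secondaries)

    def write(self, sql):
        # Writes always go to the primary, which replicates to the secondaries.
        return self.primary.execute(sql)

    def read(self, sql):
        # Reads are distributed round-robin across secondaries to reduce
        # load on the primary; replication lag may make results stale.
        return next(self._round_robin).execute(sql)

db = ReplicatedDatabase(Node("primary"), [Node("secondary-1"), Node("secondary-2")])
print(db.write("INSERT INTO orders VALUES (1)"))  # primary ran: ...
print(db.read("SELECT * FROM orders"))            # secondary-1 ran: ...
```
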