Distributed Systems for Fun and Profit
Partitioning:
- Divide the data into smaller, independent subsets, thereby reducing the impact of dataset growth.
- Improves performance by limiting the amount of data to be examined and by locating the required data within a smaller subset
- Improves availability as the nodes can fail independently
- Partitioning is very application-specific (a small sketch follows below)
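A minimal sketch of one common approach, hash partitioning (the names `NUM_PARTITIONS`, `partition_for`, `put`, `get` are illustrative, not from the notes): every key maps to exactly one subset, so a lookup only examines the partition that can contain it.

```python
# Minimal hash-partitioning sketch (illustrative names, not from the notes).
NUM_PARTITIONS = 4
partitions = [dict() for _ in range(NUM_PARTITIONS)]  # independent subsets

def partition_for(key: str) -> int:
    """Map a key to exactly one partition (real systems use a stable hash)."""
    return hash(key) % NUM_PARTITIONS

def put(key: str, value) -> None:
    partitions[partition_for(key)][key] = value

def get(key: str):
    # A lookup only examines the one partition that can contain the key.
    return partitions[partition_for(key)].get(key)

put("user:42", {"name": "Ada"})
print(get("user:42"))
```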
Replication:
- Copies of the same data are kept on multiple machines, making more bandwidth and computation available
- Provides more availability as nodes can fail independently
- Since there are multiple copies of the data, a well-defined consistency model is required.
- Strong consistency: lets you program as though no replication is happening
- Weak consistency: allows lower latency and higher availability, at the cost of possibly stale reads (see the sketch below)
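A toy sketch of that trade-off (hypothetical in-memory replicas, not a real replication protocol): a "strong" write returns only after every copy has been updated, a "weak" write returns immediately, so a read from another replica may be stale for a while.

```python
# Toy replication sketch (hypothetical; three in-memory "replicas").
replicas = [dict() for _ in range(3)]

def write_strong(key, value) -> None:
    # Stronger consistency: return only after every replica has the value.
    for replica in replicas:
        replica[key] = value

def write_weak(key, value) -> None:
    # Weak consistency: update one replica and return; the rest catch up later.
    replicas[0][key] = value

def read(replica_index: int, key):
    return replicas[replica_index].get(key)

write_weak("x", 1)
print(read(2, "x"))   # may be None (stale) until the update propagates
write_strong("x", 2)
print(read(2, "x"))   # 2, visible on every replica
```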
Abstractions, fundamentally, are fake. Every situation is unique, as is every node. But abstractions make the world manageable: simpler problem statements - free of reality - are much more analytically tractable and provided that we did not ignore anything essential, the solutions are widely applicable.
A System Model:
Key Properties
- Programs run concurrently on independent nodes
- No shared memory or shared clock
- The network between nodes may introduce message loss or nondeterminism
Implications
- Programs have fast access to their local state, but global state may be outdated
- Messages can be delayed/lost
- Programs run concurrently
A system model is a set of assumptions about the environment and facilities underlying a distributed system. A robust system model is one that makes the weakest assumptions: any algorithm written for such a system is very tolerant of different environments, since it makes very few and very weak assumptions.
Nodes:
A node serves as a host for our program's computation and storage, and has:
- Volatile and non-volatile memory
- a clock (may not be accurate)
- ability to run a program
Nodes use deterministic algorithms: the local computation, the local state after the computation, and the messages sent are determined uniquely by the message received and the local state at the time the message was received.
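A small illustration of what determinism means here (illustrative handler and message format, not from the notes): the new state and the outgoing messages are a pure function of the current state and the received message.

```python
# Illustrative sketch of a deterministic message handler: the new local state
# and the messages to send depend only on (current state, received message).
def handle(state: dict, message: dict):
    if message.get("type") == "increment":
        new_state = {**state, "counter": state.get("counter", 0) + 1}
        outgoing = [{"type": "ack", "counter": new_state["counter"]}]
        return new_state, outgoing
    return state, []

state = {"counter": 0}
state, out = handle(state, {"type": "increment"})
print(state, out)  # identical inputs always produce identical outputs
```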
Most system models assume a crash-recovery failure model, wherein nodes can only fail by crashing and may recover at some later point.
Byzantine Fault tolerance:
- Nodes can misbehave or fail arbitrarily.
- Byzantine fault tolerance is rarely employed in commercial systems due to its high computational and implementation cost.
Communication Links:
These are the links that connect the nodes and allow each node to send and receive messages. Most distributed algorithms books assume that links deliver messages in FIFO order. A network partition occurs when the network between nodes fails while the nodes themselves remain operational. During a partition, messages may be lost or delayed indefinitely, and partitioned nodes must be treated differently from crashed nodes.
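A small simulation of such a link (hypothetical `Link` class, not from any library): delivery is FIFO for the messages that do arrive, but any message may be dropped, as during a partition.

```python
import random
from collections import deque

# Toy link: messages that arrive are delivered in FIFO order, but any message
# may be lost in transit (as during a network partition).
class Link:
    def __init__(self, loss_probability: float = 0.2, seed: int = 0):
        self.queue = deque()
        self.loss_probability = loss_probability
        self.rng = random.Random(seed)

    def send(self, message) -> None:
        # A lost message simply never enters the queue.
        if self.rng.random() >= self.loss_probability:
            self.queue.append(message)

    def receive(self):
        # Delivery preserves the order in which messages were sent.
        return self.queue.popleft() if self.queue else None

link = Link()
for i in range(5):
    link.send(f"msg-{i}")
while (m := link.receive()) is not None:
    print(m)  # FIFO order, possibly with gaps where messages were dropped
```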
Time/Order:
Each node may receive the same message at a different time, since network delays differ between nodes.
- Synchronous System Model:
- Fixed upper bound on message delays
- Accurate clocks
- Asynchronous System Model:
- No reliance on timing
- No clocks (no upper bound on message delay)
Consensus Problem:
- Agreement: every node must agree on the same value
- Integrity: the agreed value must have been proposed by one of the processes
- Termination: all processes must eventually reach a decision
- Validity: if all processes propose the same value V, then all processes must decide V (a toy check follows below)
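A toy checker for these four properties (illustrative only; it merely encodes the definitions and is not a consensus algorithm).

```python
# Illustrative checker for the four consensus properties (not an algorithm).
def check_consensus(proposed: dict, decided: dict) -> dict:
    decisions = set(decided.values())
    return {
        "agreement": len(decisions) <= 1,                    # same value everywhere
        "integrity": decisions <= set(proposed.values()),    # value was proposed
        "termination": set(decided) == set(proposed),        # every process decided
        "validity": len(set(proposed.values())) != 1         # if all proposed V ...
                    or decisions == set(proposed.values()),  # ... all decided V
    }

print(check_consensus(proposed={"p1": 5, "p2": 5, "p3": 5},
                      decided={"p1": 5, "p2": 5, "p3": 5}))
```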
Impossibility Results:
- The FLP impossibility result is important to people who design distributed algorithms
- The CAP theorem is more relevant to practitioners who need to choose a system design
FLP result
- Applies to the asynchronous system model
- Assumes nodes can only fail by crashing, the network is reliable, and there is no upper bound on message delay
- Under these assumptions, no algorithm can be guaranteed to reach consensus, because it cannot distinguish a slow message from a failed node; this imposes restrictions on system design
CAP theorem
- Assumes network failures (partitions) rather than only node failures
- Only two of the following three properties can be satisfied simultaneously:
- Consistency: all nodes see the same data at the same time
- Availability: node failures do not prevent the surviving nodes from continuing to operate
- Partition tolerance: the system continues to operate despite message loss due to network/node failure
- The CA and CP system designs both offer the same consistency model: strong consistency. The only difference is that a CA system cannot tolerate any node failures; a CP system can tolerate up to f faults given 2f+1 nodes in a non-Byzantine failure model (in other words, it can tolerate the failure of a minority f of the nodes as long as majority f+1 stays up).
- First, that many system designs used in early distributed relational database systems did not take into account partition tolerance (e.g. they were CA designs)
- Second, that there is a tension between strong consistency and high availability during network partitions.
- Third, that there is a tension between strong consistency and performance in normal operation.
- Fourth - and somewhat indirectly - that if we do not want to give up availability during a network partition, then we need to explore whether consistency models other than strong consistency are workable for our purposes.
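A tiny worked example of the 2f+1 arithmetic mentioned in the CP case above (hypothetical helper names): with n nodes a majority quorum has n/2 + 1 members, so up to f = (n-1)/2 failures can be tolerated.

```python
# Majority-quorum arithmetic for the CP case above (illustrative helpers).
def quorum_size(n: int) -> int:
    """Smallest majority of n nodes."""
    return n // 2 + 1

def max_tolerated_failures(n: int) -> int:
    """Largest f such that a majority still survives, i.e. n >= 2f + 1."""
    return (n - 1) // 2

for n in (3, 5, 7):
    print(f"{n} nodes: quorum {quorum_size(n)}, "
          f"tolerates {max_tolerated_failures(n)} failure(s)")
# 3 nodes: quorum 2, tolerates 1; 5 nodes: quorum 3, tolerates 2; 7: quorum 4, tolerates 3
```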
Consistency model
A contract between programmer and system, wherein the system guarantees that if the programmer follows some specific rules, the results of operations on the data store will be predictable.
- The “C” in CAP is “strong consistency”
- Linearizable Consistency Model:
- Writes appear to take effect instantaneously; once a write completes, all reads return the latest written value
- Serializable Consistency:
- Operations appear to execute in some serial order that is the same on all nodes, but that order need not match the real-time order in which the operations were issued
- Strict serializability:
- Combines the linearizable and serializable consistency models: a single serial order on all nodes that also respects real-time order
- Other consistency models:
- Client-centric consistency:
- A client never sees an older version of a value than one it has already seen
- Often implemented with client-side caching (e.g. Memcached): if the primary node fails, the cached version keeps being served until the latest version has been written to the new primary
- Eventual consistency:
- Clients will eventually agree on a value after some undefined amount of time, provided the value is not changed in the meantime
- How long is "eventually"?
- If the "latest" value is chosen by timestamp (last write wins), any node with a skewed clock can produce undesired results
- Lamport and Vector Clocks:
- Interesting read at - https://en.wikipedia.org/wiki/Vector_clock
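A minimal vector clock sketch along the lines of the article linked above (illustrative class, not a library API): each node keeps a counter per node, increments its own counter on local events, and merges incoming clocks by taking element-wise maxima.

```python
# Minimal vector clock sketch (illustrative, see the Wikipedia link above).
class VectorClock:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.clock = {}  # node_id -> counter

    def tick(self) -> dict:
        """Local event or send: increment this node's own counter."""
        self.clock[self.node_id] = self.clock.get(self.node_id, 0) + 1
        return dict(self.clock)

    def merge(self, received: dict) -> None:
        """On receive: element-wise max with the sender's clock, then tick."""
        for node, count in received.items():
            self.clock[node] = max(self.clock.get(node, 0), count)
        self.tick()

a, b = VectorClock("A"), VectorClock("B")
msg = a.tick()   # A sends a message carrying {"A": 1}
b.merge(msg)     # B now has {"A": 1, "B": 1}: causally after A's event
print(a.clock, b.clock)
```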
Primary/backup replication:
- Asynchronous
- the primary acknowledges the client immediately; the update is committed to the backup asynchronously
- data loss is possible if the primary fails after acknowledging the client but before the (not yet committed) update has reached the backup
- Synchronous
- the primary waits until an ack is received from the backup before responding to the client
- data may still be lost or left in an uncertain state if the backup acks but the primary fails after the backup's ack and before acknowledging the client (toy sketch below)
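A toy sketch of the two variants (hypothetical functions, failure handling omitted), showing where the problematic failure window sits in each case.

```python
# Toy primary/backup sketch (illustrative; real systems handle failures).
primary, backup = {}, {}

def async_write(key, value) -> None:
    primary[key] = value     # 1. apply on the primary
    # 2. the client is acked here, before the backup has the update;
    #    if the primary dies before step 3, the acked write is lost
    backup[key] = value      # 3. propagate to the backup later

def sync_write(key, value) -> None:
    primary[key] = value     # 1. apply on the primary
    backup[key] = value      # 2. wait for the backup's ack
    # 3. only now is the client acked; a crash between steps 2 and 3
    #    leaves the client unsure whether the write took effect

async_write("x", 1)
sync_write("y", 2)
print(primary, backup)
```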
2PC (two-phase commit) replication:
- Most relational DBs use this form
- This is a CA design: during a network partition the system cannot make progress and has to wait until the partition recovers.
- Assumption: failed nodes always eventually recover
- In the first (voting) phase the primary sends the update to the backups, which store it in a temporary area and vote to commit
- The update is made permanent in the second (commit) phase; because the update is already staged, the backups know how to recover if the primary fails
- Data loss is still possible if the staged data is corrupted during a failover
- Latency-sensitive, since every update requires a round of update/ack messages between the primary and all backups (sketch below)
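A compact 2PC sketch (illustrative classes; real implementations add write-ahead logs and timeouts): phase one stages the update at every backup and collects votes, phase two commits only if all votes were yes.

```python
# Compact two-phase commit sketch (illustrative; no timeouts or recovery log).
class Backup:
    def __init__(self):
        self.data, self.staged = {}, {}

    def prepare(self, key, value) -> bool:
        self.staged[key] = value               # phase 1: store in a temporary area
        return True                            # vote "yes"

    def commit(self, key) -> None:
        self.data[key] = self.staged.pop(key)  # phase 2: make the update permanent

    def abort(self, key) -> None:
        self.staged.pop(key, None)

def two_phase_commit(primary: dict, backups: list, key, value) -> bool:
    votes = [b.prepare(key, value) for b in backups]   # phase 1: voting
    if all(votes):
        primary[key] = value
        for b in backups:                              # phase 2: commit everywhere
            b.commit(key)
        return True
    for b in backups:                                  # any "no" vote aborts the update
        b.abort(key)
    return False

backups = [Backup(), Backup()]
print(two_phase_commit({}, backups, "x", 42), backups[0].data)
```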
Partition tolerant consensus algorithms:
- Paxos
- Raft
Network partition
- A node failure is different from a network partition between nodes
- It is not possible to tell a failed node apart from one that is merely unreachable due to a network partition
- Updates can only be committed when a majority of the nodes votes for them
- Use an odd number of nodes to get a clear majority vote
- The system can still handle updates during a network partition as long as a majority (n/2 + 1) of the nodes is active (sketch below)
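A sketch of that majority rule (hypothetical helper): an update is accepted only when acks from at least n/2 + 1 nodes are collected, so at most one side of a partition can make progress.

```python
# Illustrative majority rule: only the side of a partition that can reach a
# majority of the n nodes is allowed to accept updates.
def can_commit(acks: int, n: int) -> bool:
    return acks >= n // 2 + 1

n = 5
print(can_commit(acks=3, n=n))  # True  - majority side keeps accepting updates
print(can_commit(acks=2, n=n))  # False - minority side must refuse them
```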
Roles
- A system can be designed so that all nodes have the same role, or so that nodes have separate, distinct roles
- Consensus algorithms such as Raft and Paxos use distinct roles (leader and followers/acceptors)
- During normal operation, one node acts as the leader (master) and the rest as acceptors/followers
- A leader is elected at the start and again during a failover
Epochs
- A period of normal operation is called an epoch in Paxos and a term in Raft
- At the start of each epoch an election takes place and a leader is designated
- If no leader is elected, the epoch ends immediately
- Partitioned nodes will have a smaller (older) epoch number than the current one, and their commands are ignored (sketch below)
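A sketch of how stale epochs are handled (Raft-style term check, illustrative names): a node remembers the highest epoch it has seen and ignores commands tagged with a smaller one.

```python
# Illustrative epoch/term check: commands from an older epoch are ignored.
class Node:
    def __init__(self):
        self.current_epoch = 0
        self.log = []

    def on_command(self, epoch: int, command) -> bool:
        if epoch < self.current_epoch:
            return False                 # stale leader from an old epoch: ignore
        self.current_epoch = epoch       # adopt the newer epoch if necessary
        self.log.append(command)
        return True

node = Node()
print(node.on_command(epoch=2, command="set x=1"))  # True  - accepted
print(node.on_command(epoch=1, command="set x=0"))  # False - ignored
```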