Change is inevitable, especially so in applications that ebb and flow with system changes and user requirements. Generally, changes are codified in the form of an event; an event like a DDoS attack, for instance, triggers a reaction or response from one or more nodes, which ultimately mutates the overall state of the system.
Undoubtedly, events rarely happen as one-offs. At any given time, a sequence of events flows through the system.
Distributed nodes share neither memory nor a logical clock. In order to effectively keep a system in running order, nodes need some way to communicate with one another to share data and coordinate complex tasks. Interprocess communication (IPC) plays an integral role in this. IPC is the mechanism that enables a server to query a database instance and resolve a request with the associated data successfully. Without it, nodes remain stranded akin to the people in the Tower of Babel, connected by infrastructure but unable to communicate with one another.
In a distributed system, failures are to be expected. Whether the fault lies with the network or with a node, failure is disruptive to the entire system. Failed processes struggle to recover and receive updates. Healthy nodes that try to connect to a failed node might fail completely or receive stale information, which can jeopardize the integrity and overall consistency of the system. To mitigate this, nodes codify a failure mode in the form of a membership protocol.
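One common building block of a membership protocol is heartbeat-based failure detection, sketched below. The node names and timeout are illustrative assumptions.

```python
# Peers we have not heard from within TIMEOUT seconds are suspected failed.
import time

TIMEOUT = 3.0  # seconds of silence before a peer is suspected

last_heartbeat = {"node-a": time.monotonic(), "node-b": time.monotonic()}

def record_heartbeat(node):
    # Called whenever a heartbeat message arrives from a peer.
    last_heartbeat[node] = time.monotonic()

def suspected(now=None):
    # List every peer whose last heartbeat is older than TIMEOUT.
    now = time.monotonic() if now is None else now
    return [n for n, t in last_heartbeat.items() if now - t > TIMEOUT]

record_heartbeat("node-a")
print(suspected())  # no peer has timed out yet
```

Production protocols like SWIM layer indirect probes and dissemination on top of this basic timeout idea, but the core question stays the same: who have we stopped hearing from?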
In a distributed system, communication is crucial to ensuring no node becomes stale. Updates can happen in 2 main ways: through consensus, or through gossip. In a consensus model, nodes attempt to reach quorum by mutually agreeing on the order of updates or periodically voting on and rotating the leader in charge of distributing updates to fellow nodes. I covered the ins and outs of this in an earlier post, if you're curious to learn more about it.
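The gossip alternative can be sketched in a few lines: each node pushes its newest version of a value to a few random peers per round, and the update spreads epidemically. The node count, fanout, and versioning scheme here are illustrative assumptions.

```python
# A push-style gossip round: nodes forward (version, value) pairs to
# random peers, and peers keep whichever version is higher.
import random

random.seed(0)  # deterministic peer selection for the example

nodes = {f"node-{i}": (0, "old") for i in range(8)}
nodes["node-0"] = (1, "new")  # node-0 starts with the newest update

def gossip_round(fanout=3):
    # Every node pushes its snapshot of state to `fanout` random peers.
    for name, state in list(nodes.items()):
        for peer in random.sample([n for n in nodes if n != name], fanout):
            if state[0] > nodes[peer][0]:
                nodes[peer] = state

for _ in range(4):
    gossip_round()
# After a few rounds the update has almost certainly reached every node.
```

Unlike consensus, no node waits on a quorum; the tradeoff is that convergence is probabilistic and eventual rather than immediate.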
In an earlier post, I glossed over a concept that forms a fundamental tenet of systems design: the CAP theorem. To reiterate, the CAP theorem states that of the 3 desirable traits in a distributed system (Consistency, Availability, and Partition tolerance), only 2 can be prioritized at any given time. It highlights the tradeoffs we make when we design for a distributed setting.
The CAP theorem was birthed from the move away from vertical scaling, where we simply sized up machines for greater processing capacity, to horizontal scaling, where machines work in a distributed and parallel fashion.
What does it mean to be consistent? In an earlier post, we talked about the importance of building consensus when it comes to ensuring consistency in a distributed setting. Loosely, consistency focuses on keeping all nodes in a system up to date on changes as they happen in real time or near real time. This ensures the reliability of the system as a whole, so users are never (fully) let down when parts of the system unexpectedly go down.
In a distributed setup where nodes share the load of processing requests and storing/retrieving data, reaching consensus is crucial to ensuring consistency in an application. To return to the concert ticket sales example from the last post, we wouldn't want 2 users buying the same seat since that violates the condition of assigned seating. Consensus can then be described as ensuring all nodes agree on a specific value or a state in spite of a potential failure or delay; therein lies the rub.
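To ground the seat example, here is a heavily simplified sketch of majority-quorum agreement among replicas. The replica names and vote logic are illustrative assumptions; a real system would use a full consensus protocol like Raft or Paxos to survive failures and delays.

```python
# Three replicas vote on a seat reservation; it commits only if a
# majority agree the seat is still free.
replicas = {
    "node-a": {"seat-12": None},
    "node-b": {"seat-12": None},
    "node-c": {"seat-12": None},
}

def reserve(seat, user):
    # Each replica votes yes only if the seat is free in its local state.
    votes = sum(1 for state in replicas.values() if state[seat] is None)
    if votes > len(replicas) // 2:   # majority quorum reached
        for state in replicas.values():
            state[seat] = user       # commit the reservation everywhere
        return True
    return False                     # quorum failed; reject the buyer

print(reserve("seat-12", "alice"))  # True: quorum agrees, seat assigned
print(reserve("seat-12", "bob"))    # False: seat already taken
```

The hard part that this sketch waves away is exactly the "rub" above: keeping the vote correct when replicas fail mid-commit or messages arrive late.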
Demystifying Distributed Systems: the what and why

If you deploy an app today, dealing with a distributed system is inescapable. Whether it's understanding how to balance requests or synchronizing data replication, at least some aspect of your deployment will require orchestrating and coordinating multiple servers to operate in unison. The goal of "distributedness" here is to have things work across regions and systems in a way that is completely imperceptible to a user.
Today was a really hard day. I lost the man who raised me. The man who was there for all my milestones. We didn’t always agree but he had a heart of gold and was always there for his family, his friends, his neighborhood, his community.
Words can’t begin to express the immense grief I feel. The grief I feel as I process this loss alone from the confines of a hotel room 5 miles away, 3 days shy of being released from quarantine, devoid of the ability to hug and be hugged.
Didn’t make a lot of progress today and decided to play around with shapes instead.
Here’s a heart -> https://codepen.io/shortdiv/pen/zYxGQBm?editors=1010