UNIXy's Failover and Load Balancing Explained

Proper load balancing and failover require that at least one service of the same service category (e.g. HTTPS) be healthy in order for a website to remain online. So if you had two identical web servers serving the same content, it's a no-brainer that at any given moment in time, the website would keep functioning should one of them fail, because you could go in, switch traffic to the healthy web server, and restore service.
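
To make that concrete, here is a minimal sketch of the manual failover idea in Python: probe each replica's health endpoint and send traffic to the first one that answers. The hostnames and the /health path are hypothetical placeholders, not anything specific to our platform.

    import urllib.request

    # Two identical replicas serving the same content (hypothetical hosts)
    REPLICAS = ["https://web1.example.com", "https://web2.example.com"]

    def healthy_replica():
        """Return the first replica that answers its health check."""
        for url in REPLICAS:
            try:
                with urllib.request.urlopen(url + "/health", timeout=2) as resp:
                    if resp.status == 200:
                        return url
            except OSError:
                continue  # down or unreachable; try the next replica
        raise RuntimeError("no healthy replica available")

    print("send traffic to:", healthy_replica())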

Armed with confidence and freedom, you reckon that this service duality is only scratching the surface of high availability. And you would definitely be right. But what if this service failure happened in the wee hours of the morning while you're asleep? How does one ensure automatic failover? How does one replicate files across web servers? What about database replication? What if the datacenter hosting your high-availability setup loses power? What if the datacenter network is the unlucky recipient of a massive DDoS? What if a megathrust earthquake's epicenter happened to be directly underneath your "cloud" server?

[Figure: failover and load balancing with global Anycast]


These are all valid concerns, and there are many more. The deeper you dig, the more complex it gets. And it can get quite sticky! Spoiler alert: we've thought about every little detail, so no worries. We just want you to think through and understand, at a high level, the technology we've developed to address all the pesky corners of the Web.

Cloud Server on a Stick

Let's be clear from the get-go: a "cloud" server does not imply high availability, failover, or load balancing. Nowadays "cloud" simply means pay-as-you-go; the cloud is the commoditization of servers and networks. So for the purpose of this explainer, a cloud server remains vulnerable and exposed to the aforementioned risks: network, hardware, and service failures. It is also subject to the random systemic events insurers have dubbed Acts of God, just like any other server or network.
As you can imagine, there are many things that could go wrong with a single-server setup. In other words, expect the worst: a disk failure, power failure, hardware failure, service failure, or network outage at 3:39AM on a Sunday. You name it!

High-Availability Configuration

Taking it up a notch, a local in-situ failover solution is a step up from a "naked" server-on-a-stick deployment. Now we're starting to address potential uptime killers. This solution protects against a slew of hardware and service failures, but it's nowhere close to what we're aiming for. First, the load balancer itself is susceptible to both hardware and service failures, and such an event takes everything down with it. Second, a power failure at this single data center would wipe everything out. And third, a major event like a hurricane or earthquake is fatal because all the eggs are in one basket: Datacenter One.
A physical load balancer tied to one physical location (Datacenter One) is most certainly a single point of failure. Our solution addresses this by leveraging Anycast over BGP. Anycast enables us to give your website a single IP address backed by several physical load balancers. Should one load balancer fail, the network routes around it to the next one, and the next, until it gets an answer. Someone in England sees the exact same IP as someone in the US, yet each gets routed to the closest load balancer in their region.
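
You can picture the effect with a toy model. This is only an illustration of the routing behavior, not real BGP, and the site names and latencies are made up: every site announces the same IP, and each client lands on the nearest site whose load balancer is still healthy.

    # Toy model of Anycast routing: same IP everywhere, nearest healthy site wins
    SITES = [
        {"name": "us-east", "latency_ms": 12, "healthy": True},
        {"name": "us-west", "latency_ms": 70, "healthy": True},
        {"name": "eu-west", "latency_ms": 95, "healthy": False},  # failed load balancer
    ]

    def route(sites):
        """Pick the nearest healthy site, roughly what BGP path selection achieves."""
        candidates = [s for s in sites if s["healthy"]]
        if not candidates:
            raise RuntimeError("all sites are down")
        return min(candidates, key=lambda s: s["latency_ms"])

    print("client is routed to:", route(SITES)["name"])  # us-east for this client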

Global Anycast Auto-Failover and Load Balancing

Enter our Global Anycast failover with load balancing solution. This solution covers 99% of all possible mishaps and then some. For every component, be it network, data center, hardware, service, power utility, or data carrier, there are at least two exact replicas. We've even picked two data centers that are thousands of miles apart, so there's not an iota of doubt clouding (pun intended!) our availability design. In the very worst of cases, we're up and happily serving pages. Main DB is out? No problem, the replica kicks in and assumes the main DB role; unattended! NTT goes dark on us on New Year's Eve? No problem, we automatically route traffic through Level 3 and several other carriers.
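
As a rough sketch of that unattended database failover, here is the watchdog idea in Python. The hostnames are hypothetical, the liveness check is a crude TCP connect, and promote() is a stand-in for whatever promote command your replication manager actually provides.

    import socket
    import time

    PRIMARY = ("db1.example.com", 3306)   # hypothetical hosts
    REPLICA = ("db2.example.com", 3306)

    def is_alive(host, port, timeout=2):
        """Crude liveness check: can we open a TCP connection to the database?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def promote(host):
        # Stand-in: in practice this runs the replication manager's promote command
        print(f"promoting {host} to primary")

    def watchdog(interval=5):
        primary, replica = PRIMARY, REPLICA
        while True:
            if not is_alive(*primary):
                promote(replica[0])                 # replica assumes the main DB role
                primary, replica = replica, primary
            time.sleep(interval)
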
But replicating data on its own would be a waste of resources if we didn't also use those replicas to serve your traffic; they would be compute resources sitting there doing nothing. This is where our load balancing setup comes in. Our load balancers send traffic to ALL nodes, which means your traffic is distributed across at least two systems for improved performance.
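
The distribution itself can be as simple as round-robin. A bare-bones sketch, with placeholder node names standing in for the real backends:

    import itertools

    NODES = ["node-a.example.com", "node-b.example.com"]
    next_node = itertools.cycle(NODES).__next__  # each call yields the next node in turn

    for request_id in range(4):
        print(f"request {request_id} -> {next_node()}")
        # requests alternate: node-a, node-b, node-a, node-b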




Tags: Nginx, LiteSpeed, Varnish