Does a Load Balancer Avoid a Single Point of Failure?

Objectives

  1. What is Load Balancing
  2. Benefits of Load Balancing
  3. Types of Algorithms for Load Balancing
  4. Types of Load Balancing
  5. Redundant Load Balancing

What is Load Balancing

  • Load balancing (LB) is a crucial part of any distributed system. It spreads incoming traffic across a cluster of servers to improve the response time and availability of applications and databases.
  • While distributing traffic across multiple servers, the load balancer also monitors the status of every resource using various algorithms.
  • By avoiding a single point of failure, it improves overall application performance and keeps the system scalable.

Note: To get the full benefit of scalability and redundancy, we can balance the load at each layer of the system. LBs can be added in three places:

  • Between the user and the Web Server
  • Between web servers and an internal platform layer, such as application servers or cache servers
  • Between internal platform layer and database

Benefits

  • A faster, uninterrupted user experience.
  • Less downtime and higher throughput.
  • “Smart” load balancers provide capabilities such as predictive analytics, giving an organization actionable insights. These insights are key to automation and can help drive business decisions.
  • In the seven-layer Open Systems Interconnection (OSI) model, network firewalls operate at layers one to three (L1 physical, L2 data link, L3 network), while load balancing happens at layers four to seven (L4 transport, L5 session, L6 presentation, L7 application).

Types of Algorithms for Load Balancing

There is a variety of load balancing methods, each using an algorithm best suited to a particular situation. We will discuss these algorithms shortly.

Health Checks :- Load balancers should only forward traffic to healthy backend servers. To monitor the health of a backend server, health checks regularly attempt to connect to it and verify that it is listening. If a server fails a health check, it is automatically removed from the pool, and traffic is not forwarded to it until it responds to the health checks again.
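The health-check idea above can be sketched in a few lines of Python. This is a simplified illustration, not a production implementation: `is_healthy` and `healthy_pool` are hypothetical helper names, and a real load balancer would run these probes on a timer and often use application-level checks (e.g. an HTTP endpoint) rather than a bare TCP connect.

```python
import socket

def is_healthy(host, port, timeout=2.0):
    """Probe a backend with a TCP connect; refusal or timeout means unhealthy."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_pool(servers):
    """One round of health checks: return only the servers fit to receive traffic."""
    return [server for server in servers if is_healthy(*server)]
```

A real balancer would repeat `healthy_pool` every few seconds and re-admit a server once it passes checks again, exactly as the paragraph describes.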


Least Connection Method : This method directs traffic to the server with the fewest active connections. This approach is quite useful when there are a large number of persistent client connections which are unevenly distributed between the servers.
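As a sketch, least-connection selection is just a minimum over the balancer's connection bookkeeping. The `active` mapping is an assumption for illustration: a real balancer maintains these counts as connections open and close.

```python
def least_connections(active):
    """Pick the backend with the fewest active connections.

    `active` maps server name -> current connection count (hypothetical
    bookkeeping the load balancer would keep up to date).
    """
    return min(active, key=active.get)

# "b" has the fewest active connections, so it receives the next request.
pool = {"a": 12, "b": 3, "c": 7}
assert least_connections(pool) == "b"
```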

Least Response Time Method : This algorithm directs traffic to the server with the fewest active connections and the lowest average response time.
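Since this method combines two criteria, one simple way to sketch it is to compare (connections, response time) tuples, so the connection count is compared first and response time breaks ties. The `stats` structure is an assumed illustration of the metrics a balancer would track.

```python
def least_response_time(stats):
    """Pick a backend by fewest connections, then lowest average response time.

    `stats` maps server -> (active_connections, avg_response_seconds).
    Python's tuple comparison checks connections first, response time second.
    """
    return min(stats, key=lambda server: stats[server])

# "a" and "b" tie on connections; "b" wins on response time.
stats = {"a": (5, 0.12), "b": (5, 0.08), "c": (9, 0.02)}
assert least_response_time(stats) == "b"
```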

Least Bandwidth Method : This method selects the server that is currently serving the least amount of traffic measured in megabits per second (Mbps).

Round Robin Method : This method cycles through a list of servers and sends each new request to the next server. When it reaches the end of the list, it starts over at the beginning. It is most useful when the servers are of equal specification and there are not many persistent connections.
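The wrap-around behavior described above maps directly onto `itertools.cycle`; this minimal sketch assumes the server list is fixed for the lifetime of the iterator.

```python
import itertools

def round_robin(servers):
    """Return an iterator that cycles through the servers endlessly;
    each next() call yields the backend for the next incoming request."""
    return itertools.cycle(servers)

rr = round_robin(["s1", "s2", "s3"])
# After reaching the end of the list, selection starts over at the beginning.
assert [next(rr) for _ in range(4)] == ["s1", "s2", "s3", "s1"]
```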

Weighted Round Robin Method : Weighted round-robin scheduling is designed to better handle servers with different processing capacities. Each server is assigned a weight (an integer indicating its processing capacity). Servers with higher weights receive new connections before those with lower weights, and receive proportionally more connections overall.
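A naive sketch of weighted round robin expands each server into a number of slots equal to its weight, then cycles through the slots. The weights here are illustrative; production balancers (e.g. nginx) use a smoother interleaving algorithm, but the proportions are the same.

```python
import itertools

def weighted_rotation(weights):
    """Cycle through servers in proportion to their integer weights.

    A server with weight 3 appears three times per rotation, so it
    receives three times as many connections as a weight-1 server.
    """
    slots = [server for server, w in weights.items() for _ in range(w)]
    return itertools.cycle(slots)

wrr = weighted_rotation({"big": 3, "small": 1})
# Per rotation, "big" gets 3 connections for every 1 that "small" gets.
assert [next(wrr) for _ in range(4)] == ["big", "big", "big", "small"]
```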

IP Hash : Under this method, a hash of the IP address of the client is calculated to redirect the request to a server.
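A minimal sketch of IP hashing follows; hashing the address and taking it modulo the pool size gives every client a stable backend, which is why this method is often used for session affinity. The function name is illustrative.

```python
import hashlib

def pick_by_ip(client_ip, servers):
    """Hash the client's IP address and map it onto the server list, so the
    same client is consistently routed to the same backend."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["s1", "s2", "s3"]
# The same IP always maps to the same server across calls.
assert pick_by_ip("203.0.113.7", servers) == pick_by_ip("203.0.113.7", servers)
```

Note that if the pool size changes, most clients are remapped; consistent hashing is the usual refinement for that problem.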

Types of Load Balancing

  • SDN — Load balancing using SDN (software-defined networking) separates the control plane from the data plane for application delivery. This allows the control of multiple load balancing. It also helps the network to function like the virtualized versions of compute and storage. With the centralized control, networking policies and parameters can be programmed directly for more responsive and efficient application services. This is how networks can become more agile.
  • UDP — A UDP load balancer utilizes User Datagram Protocol (UDP). UDP load balancing is often used for live broadcasts and online games when speed is important and there is little need for error correction. UDP has low latency because it does not provide time-consuming health checks.
  • TCP — A TCP load balancer uses transmission control protocol (TCP). TCP load balancing provides a reliable and error-checked stream of packets to IP addresses, which can otherwise easily be lost or corrupted.
  • SLB — Server Load Balancing (SLB) provides network services and content delivery using a series of load balancing algorithms. It prioritizes responses to the specific requests from clients over the network. Server load balancing distributes client traffic to servers to ensure consistent, high-performance application delivery.
  • Virtual — Virtual load balancing aims to mimic software-driven infrastructure through virtualization. It runs the software of a physical load balancing appliance on a virtual machine. Virtual load balancers, however, do not avoid the architectural challenges of traditional hardware appliances which include limited scalability and automation, and lack of central management.
  • Elastic — Elastic load balancing scales traffic to an application as demand changes over time. It uses system health checks to learn the status of application pool members (application servers) and routes traffic appropriately to available servers, manages fail-over to high availability targets, or automatically spins up additional capacity.
  • Geographic — Geographic load balancing redistributes application traffic across data centers in different locations for maximum efficiency and security. While local load balancing happens within a single data center, geographic load balancing uses multiple data centers in many locations.
  • Multi-site — Multi-site load balancing, also known as global server load balancing (GSLB), distributes traffic across servers located in multiple sites or locations around the world. The servers can be on-premises or hosted in a public or private cloud. Multi-site load balancing is important for quick disaster recovery and business continuity after a disaster in one location renders a server inoperable.
  • Load Balancer as a Service (LBaaS) — Load Balancer as a Service (LBaaS) uses advances in load balancing technology to meet the agility and application traffic demands of organizations implementing private cloud infrastructure. Using an as-a-service model, LBaaS creates a simple model for application teams to spin up load balancers.

Redundant Load Balancers

The load balancer itself can be a single point of failure; to overcome this, a second load balancer can be connected to the first to form a cluster. Each LB monitors the health of the other, and since both are equally capable of serving traffic and detecting failure, the second load balancer takes over whenever the main one fails.
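The failover decision in such a cluster can be sketched as a heartbeat check: the standby promotes itself only when the primary has missed heartbeats for too long. The timeout value and function names are illustrative assumptions; real clusters typically use protocols like VRRP (e.g. via keepalived) with a shared virtual IP.

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before declaring failure

def should_take_over(last_heartbeat, now):
    """Standby LB promotes itself when the primary's last heartbeat is stale.

    `last_heartbeat` and `now` are timestamps in seconds (e.g. time.time()).
    """
    return (now - last_heartbeat) > HEARTBEAT_TIMEOUT

# Primary silent for 4.5 s: standby takes over. Silent for 1 s: it does not.
assert should_take_over(last_heartbeat=100.0, now=104.5) is True
assert should_take_over(last_heartbeat=100.0, now=101.0) is False
```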

Conclusion

In this post, we learned what load balancing means and why it is used, the algorithms load balancers use for specific needs, the types of load balancing, and how to reduce the single point of failure using a load balancer cluster.

Some of this content comes from my work experience, and some from the link I followed for this blog:

https://avinetworks.com/what-is-load-balancing/

Thank you, everyone, for taking the time to read this blog. Feel free to comment; I'm always curious to learn and share new things.

Worked as a DevOps Software Engineer, Ex-Cognitive Innovations; AWS Cloud Engineer; AWS-CLF-C01; GCP Associate; GCP DevOps; Software Engineer