
The Concept of Load Balancing


Load balancing is built on top of the existing network structure. It provides a cost-effective and transparent method to expand network device and server bandwidth, increase throughput, enhance network data processing capabilities, and improve network flexibility and availability.

Load balancing, also known as load sharing, refers to dynamically redistributing work within a system so as to minimize the load imbalance among its nodes.

A simple analogy: suppose a supermarket normally runs two checkout counters, which are usually enough. During major holidays such as the Spring Festival, extra counters must be opened. Customers, however, do not redistribute themselves; without someone directing them, they will not move to the newly opened counters. Load balancing plays the role of that attendant: when heavy traffic arrives over multiple links, it allocates the load across them, preventing one link from being saturated while others sit idle.
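The checkout-counter analogy maps directly to the simplest scheduling policy, round-robin, in which each arriving request is sent to the next server in turn. A minimal sketch (server names here are hypothetical placeholders):

```python
from itertools import cycle

# Hypothetical backends, like checkout counters in the analogy.
servers = ["counter-1", "counter-2", "counter-3"]

def round_robin(servers):
    """Yield servers in turn, like an attendant pointing each
    arriving customer to the next open checkout counter."""
    return cycle(servers)

rr = round_robin(servers)
assignments = [next(rr) for _ in range(6)]
print(assignments)
# ['counter-1', 'counter-2', 'counter-3', 'counter-1', 'counter-2', 'counter-3']
```

Real load balancers refine this with weights, connection counts, or response times, but the goal is the same: spread work evenly instead of letting it pile up in one place.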

Load balancing is generally implemented at two levels: software load balancing, realized in code, and hardware load balancing, realized with dedicated appliances. This post focuses mainly on hardware load balancing (also known as server load balancing).

Server Load Balancing

Server load balancing places dedicated hardware devices, commonly called load balancers, between the servers and the external network. Because they are purpose-built appliances independent of any server operating system, they significantly improve overall performance. With richer balancing strategies and intelligent traffic management, they can satisfy demanding load balancing requirements. In general, hardware load balancing outperforms software load balancing in both functionality and performance, but it is expensive.

In the past, software load balancing was more widely used because hardware was too costly. Instead of adding expensive hardware, it was more economical to have programmers implement load balancing at the software level.

However, the situation has changed. On one hand, the growth of the internet has brought explosive traffic increases, forcing server expansion. On the other hand, hardware load balancing is genuinely more efficient, and as vendors have advanced, its cost-effectiveness has improved greatly. It is no wonder that more and more enterprises and organizations are choosing hardware load balancing devices.

Server Load Balancing Features

  1. Service Consistency (session persistence): A load balancer can keep a client on the same backend by reading information in the client's request, rewriting headers, and sending the request to the appropriate server, which holds that client's state. In plain HTTP this works well, but it is not robust for encrypted traffic: if the message is encrypted, the load balancer cannot read the information it needs.
  2. Failover: If a node in the service cluster cannot process a request, the request is forwarded to another node. Once it has been handed off successfully, the pending request state on the original node is discarded.
  3. Statistical Measurement: Because all client requests pass through the load balancer, it is a natural measurement point. It can accurately count traffic per link and per service, and this data can be used to tune system performance.
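Failover and statistical measurement fit together naturally: the balancer counts every request it dispatches, and retries on another node when one fails. A minimal sketch, assuming each backend exposes a `handle(request)` callable that may raise on failure (all names here are hypothetical):

```python
class LoadBalancer:
    """Sketch of failover plus per-backend request counting."""

    def __init__(self, backends):
        # backends: list of (name, handler) pairs
        self.backends = list(backends)
        self.request_counts = {name: 0 for name, _ in backends}  # statistics

    def dispatch(self, request):
        # Try each backend in turn; on failure, fail over to the next one.
        for name, handler in self.backends:
            self.request_counts[name] += 1
            try:
                return handler(request)
            except ConnectionError:
                continue  # node failed: forward the request to another node
        raise RuntimeError("all backends failed")

def healthy(req):
    return f"ok:{req}"

def broken(req):
    raise ConnectionError("node down")

lb = LoadBalancer([("node-a", broken), ("node-b", healthy)])
print(lb.dispatch("GET /"))   # ok:GET /
print(lb.request_counts)      # {'node-a': 1, 'node-b': 1}
```

The counters show exactly the kind of data a real balancer collects: `node-a` was attempted and failed, `node-b` served the request, and an operator could use those numbers to spot the unhealthy node.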

Choosing Load Balancing

In fact, software and hardware load balancing are not mutually exclusive; they each have their strengths and are suited to different scenarios.

For most system environments, since the load balancer itself does not process application data, the performance bottleneck lies with the backend servers. In such cases a software load balancer is usually sufficient and integrates seamlessly with the existing platform.

Hardware load balancing is better suited to scenarios with many backend servers and heavy data processing and distribution, such as environments handling tens of thousands of concurrent requests per second (hotels, internet cafes, and large enterprises with many users). In such cases, a suitable hardware load balancing device is necessary.

Load Balancing Deployment Methods

There are three deployment methods for load balancing: routing mode, bridging mode, and direct service return mode. About 60% of users adopt routing mode due to its flexibility; about 30% use direct service return mode, which is suitable for high-throughput, content distribution network applications.

Routing Mode (Recommended)

In routing mode, the servers' default gateway must be set to the load balancer's LAN-side address (distinct from its WAN-side address), so all return traffic also passes through the load balancer. This method requires minimal changes to the network and can balance any downstream traffic.

Bridging Mode

Bridging mode is simple to configure and does not change the existing network. The WAN and LAN ports of the load balancer are connected to the upstream device and the downstream servers, respectively. The LAN port needs no IP address (the WAN and LAN ports are bridged), and all servers and the load balancer sit on the same logical network. Because it tolerates faults poorly, offers little network flexibility, and is sensitive to broadcast storms and other spanning-tree loop problems, this architecture is generally not recommended.

Direct Service Return Mode

In this mode, the LAN port of the load balancer is not used; the WAN port sits on the same network as the servers. Internet clients access the virtual IP (VIP) of the load balancer, which corresponds to its WAN port. The load balancer distributes traffic to servers according to policy, and the servers respond to clients directly. From the client's point of view, the responding IP is therefore not the load balancer's VIP but the server's own address, and the return traffic never passes through the load balancer. This method suits services with high traffic and bandwidth requirements.
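The asymmetric path in direct service return can be illustrated with a toy simulation (all names and addresses below are hypothetical): the balancer sees only inbound requests, while replies go from the server straight to the client.

```python
class Client:
    def __init__(self):
        self.replies = []

class Server:
    def __init__(self, ip):
        self.ip = ip
    def handle(self, request, client):
        # In direct service return mode the server answers the client
        # directly, bypassing the load balancer on the return path.
        client.replies.append((self.ip, f"response to {request}"))

class DSRBalancer:
    def __init__(self, vip, servers):
        self.vip = vip
        self.servers = servers
        self.inbound = 0   # the balancer counts only inbound traffic
        self._next = 0
    def forward(self, request, client):
        self.inbound += 1
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        server.handle(request, client)

client = Client()
lb = DSRBalancer("10.0.0.100", [Server("10.0.0.11"), Server("10.0.0.12")])
lb.forward("GET /video", client)
print(client.replies)   # [('10.0.0.11', 'response to GET /video')]
print(lb.inbound)       # 1
```

Note that the reply carries the server's address, not the VIP, and the balancer's counter reflects only the request, which is precisely why this mode scales well for bandwidth-heavy responses.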

Nowadays, standalone load balancers are rare. Most manufacturers integrate load balancing with other functions into a single routing/server device, which both reduces equipment cost and lowers operational complexity.

 
