
Technical Architecture

CLB supports both intranet and internet scenarios, as well as request proxy and message forwarding modes. This article introduces the basic architecture of CLB's request proxy and message forwarding modes in turn.

Terms

UVER: DezaiCloud Virtual Edge Router, DezaiCloud’s public network traffic forwarding center.

Message Forwarding

The message forwarding CLB is self-developed based on DPDK technology. It adopts a cluster deployment, with at least 4 servers in a single cluster (at least 2 servers in an overseas cluster), and achieves high availability through ECMP + BGP.

Intranet

The message forwarding CLB uses a forwarding mode similar to DR. The architecture diagram of the intranet message forwarding is as follows:

The message forwarding CLB cluster announces the same VIP (Virtual IP) to its upstream access switches. The access switches are configured with the ECMP algorithm, which load-balances traffic across the multiple CLB servers that form the cluster.

When a server in the CLB cluster develops a forwarding fault, its BGP announcement also stops, and within three seconds the faulty server is removed from the cluster to ensure high availability. Meanwhile, the cluster's health check module raises an alert to notify engineers to intervene. Servers in the same CLB cluster are also distributed across availability zones to provide cross-availability-zone high availability.

The message forwarding CLB contains a module dedicated to health-checking backend nodes (currently only TCP/UDP port probing is supported) and reporting their status. After a CLB forwarding server receives a business packet from a Client, it selects a backend node in a healthy state, rewrites the destination MAC address, and sends the packet to the backend node via tunneling, leaving the packet's source IP and destination IP unchanged.
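The TCP port probing mentioned above can be sketched as follows. This is a minimal illustration, not the CLB's actual probe implementation: a backend is considered healthy if a TCP connection to its service port succeeds within a timeout.

```python
import socket

def tcp_health_check(host, port, timeout=1.0):
    """Healthy if a TCP connection to the backend's port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for a backend node.
probe_target = socket.socket()
probe_target.bind(("127.0.0.1", 0))        # kernel picks a free port
probe_target.listen(1)
port = probe_target.getsockname()[1]
print(tcp_health_check("127.0.0.1", port))  # True while the listener is up
probe_target.close()
```

A real prober would run this periodically per backend and report state transitions (healthy to unhealthy and back) rather than single results.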

In message forwarding mode, backend nodes must bind the CLB's VIP (Virtual IP) to their loopback (lo) interface and listen on it in order to process packets correctly and unicast response packets directly back to the Client. This is a typical DR (Direct Routing) flow, which allows backends behind the intranet message forwarding CLB to see the Client's source IP directly.
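The backend side of this setup can be sketched as below. The sketch only shows binding the service socket to the VIP address; on a real DR backend the VIP must first be added to the loopback interface, and ARP responses for it suppressed, which is host configuration outside this snippet. `127.0.0.1` stands in for the VIP here so the example runs anywhere.

```python
import socket

# Stand-in for the CLB VIP; on a real backend the VIP is added to the
# loopback interface (with ARP for it suppressed) so that DR-forwarded
# packets whose destination IP is the VIP are accepted locally.
VIP = "127.0.0.1"

backend = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
backend.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
backend.bind((VIP, 0))   # bind the service to the VIP address
backend.listen(5)
print(backend.getsockname())
```

Because the socket is bound to the VIP, replies leave the backend with the VIP as source address, which is what lets responses go straight back to the Client without passing through the CLB.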

Internet

The architecture diagram of the internet message forwarding CLB is as follows:

Unlike the intranet message forwarding CLB, traffic for the internet message forwarding CLB originates from the public network. When a Client accesses the CLB, traffic enters a DezaiCloud POP point and passes through UVER (DezaiCloud Virtual Edge Router), which distributes it to the servers in the CLB cluster using a consistent hashing algorithm. The subsequent steps are similar to those of the intranet message forwarding CLB, except that backend nodes must bind the CLB's EIP (Elastic IP) to their loopback (lo) interface and listen on it. Return traffic is sent directly to UVER and routed back to the Client over the Internet.
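The consistent hashing UVER uses to spread flows over the cluster can be illustrated with a standard hash ring. This is a generic sketch, not UVER's actual algorithm; its key property is that removing one server only remaps the keys that server owned, leaving all other flows on their original servers.

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Textbook consistent hash ring with virtual nodes (replicas)."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []            # sorted list of (hash, node)
        for n in nodes:
            self.add(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def get(self, key):
        idx = bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["clb-1", "clb-2", "clb-3", "clb-4"])
keys = [f"203.0.113.{i}:{40000 + i}" for i in range(100)]
before = {k: ring.get(k) for k in keys}
ring.remove("clb-3")                      # simulate a faulty server
after = {k: ring.get(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(moved)   # only the flows that hashed to clb-3 are remapped
```

This minimal-disruption property is why a consistent hash (rather than a plain modulo hash) matters here: when the health check removes a faulty server, established flows on the surviving servers keep landing on the same server.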

For the internet message forwarding CLB, the cluster health check module periodically probes the liveness of its servers. If a server is found to be faulty, the module notifies UVER to remove it from the cluster, ensuring high availability. The internet message forwarding CLB cluster is likewise deployed across availability zones to guarantee cross-availability-zone high availability.

Request Proxy

The request proxy CLB is developed based on Nginx. It adopts a cluster deployment, with at least 4 servers in a single cluster (at least 2 servers in an overseas cluster).

Intranet

The architecture diagram of the intranet request proxy is as follows:

Different from the DR mode used by the message forwarding CLB, the request proxy CLB adopts Proxy (i.e., Fullnat) mode. After receiving a client's request, the intranet request proxy CLB converts the connection from the client to the CLB IP into a connection from the CLB's proxy IP to the actual IP of the Backend (service node). The Backend therefore cannot obtain the client IP directly and can only retrieve it from the X-Forwarded-For header (in HTTP mode). Additionally, the node health check module is integrated into the CLB process itself, so no separate module is needed.
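Recovering the client IP on a Fullnat backend can be sketched as below. The sketch assumes the CLB appends the real client IP as the last entry of X-Forwarded-For; which entry is trustworthy depends on the proxy chain, so check your CLB's actual behavior before relying on this.

```python
def client_ip_from_xff(headers):
    """Recover the original client IP on a backend behind a Fullnat proxy.
    Assumes the trusted CLB appends the real client IP as the LAST entry
    of X-Forwarded-For; earlier entries may be client-supplied."""
    xff = headers.get("X-Forwarded-For", "")
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    return hops[-1] if hops else None

# A possibly client-spoofed entry followed by the CLB-appended real IP:
headers = {"X-Forwarded-For": "10.0.0.9, 198.51.100.4"}
print(client_ip_from_xff(headers))   # 198.51.100.4
```

Taking the rightmost entry rather than the first is the safer default, since anything left of the trusted proxy's entry can be forged by the client.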

The intranet request proxy CLB achieves high availability through ECMP + BGP. CLB servers establish BGP sessions with upstream switches via Quagga, and multiple servers in the same cluster advertise the same VIP (Virtual IP); the switches use the ECMP algorithm to load-balance traffic across the cluster servers. When a server fails, its BGP session is torn down within three seconds, removing the faulty server from the cluster and ensuring service continuity.
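The ECMP behavior relied on here can be illustrated with a toy per-flow hash. Real switches compute this in hardware with their own hash function; the sketch only shows the property that matters: all packets of one flow deterministically reach the same cluster server.

```python
import zlib

def ecmp_pick(servers, src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Illustrative per-flow ECMP: hash the 5-tuple so every packet of a
    flow lands on the same server. CRC32 stands in for the switch's
    hardware hash."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    return servers[zlib.crc32(key) % len(servers)]

cluster = ["clb-1", "clb-2", "clb-3", "clb-4"]
a = ecmp_pick(cluster, "10.1.2.3", 51000, "10.0.0.100", 80)
b = ecmp_pick(cluster, "10.1.2.3", 51000, "10.0.0.100", 80)
print(a == b)   # same 5-tuple -> same server every time
```

Note the flip side of modulo hashing: when the server list shrinks after a BGP withdrawal, flows can be rehashed to different servers, which is why the sub-three-second removal window matters for connection continuity.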

Internet

The architecture diagram of the internet request proxy is as follows:

Different from the intranet request proxy CLB, traffic for the internet request proxy CLB enters from the public network. When a client accesses the CLB, the traffic first enters a DezaiCloud POP point and then passes through UVER (DezaiCloud Virtual Edge Router), which distributes it to the servers in the CLB cluster using a consistent hashing algorithm. The subsequent process is similar to that of the intranet request proxy CLB.

In the internet request proxy CLB, the cluster health check module periodically probes the liveness of servers. If a server is found to be faulty, the module notifies UVER to remove the abnormal server, ensuring high availability. The internet request proxy CLB cluster is also deployed across availability zones to guarantee cross-availability-zone high availability.

Mode Comparison

Compared with the request proxy CLB, the message forwarding CLB offers higher forwarding performance and suits scenarios that demand it. The request proxy CLB, in turn, can process layer-7 data, perform SSL offloading, and do domain-based and path-based forwarding; in addition, its backend nodes do not need to configure the VIP (Virtual IP).
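The domain and path forwarding that only the request proxy mode can do amounts to layer-7 rule matching. A minimal sketch, with illustrative rule fields (the actual CLB rule schema is not specified here): match the request's Host first, then choose the longest matching path prefix, as Nginx-style location matching does.

```python
def pick_pool(rules, host, path):
    """Layer-7 routing sketch: filter rules by domain, then pick the
    longest matching path prefix. Returns the backend pool name."""
    best = None
    for r in rules:
        if r["domain"] == host and path.startswith(r["path"]):
            if best is None or len(r["path"]) > len(best["path"]):
                best = r
    return best["pool"] if best else None

rules = [
    {"domain": "www.example.com", "path": "/",    "pool": "web-pool"},
    {"domain": "www.example.com", "path": "/api", "pool": "api-pool"},
]
print(pick_pool(rules, "www.example.com", "/api/v1/users"))   # api-pool
```

A message forwarding CLB never sees the Host header or URL path at all, which is exactly why this kind of routing requires the proxy mode.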