Linux has proven itself a rock-solid operating system platform for industry-leading software applications and appliances. One such implementation is application load balancing. As global internet traffic increases, it demands higher throughput from the existing web infrastructure. Delivering content fast is crucial, especially for businesses whose only presence is a web portal.
Load balancers add great value in achieving this, and also provide multiple functionalities beyond handling web traffic load. This article explains new trends in this already famous product category, which IT managers and system administrators have often left unexplored. It is imperative for IT management to know load balancer basics, and also to understand how to leverage product features to their best advantage.
While managing a web services infrastructure, web administrators often find it a challenge to cope with increasing web hits while maintaining high availability of the servers. The situation gets even tougher when a new web application or feature is released, attracting more users per day. Optimizing server performance thus becomes a job of continuous improvement.
Consider a web server hosting a website that runs a few applications. When the website starts attracting more users, each page browsed results in multiple web requests. Serving each request consumes a definite amount of CPU, memory and network resources. Adding more powerful resources solves the problem only to some extent, and introduces another challenge. When the web server hits the ceiling of its resource capacity, it starts dropping web requests, which results in a bad user experience such as a broken or hanging web page. And if that web server goes down for some reason, the entire website becomes non-functional. This can certainly result in reputational loss and, in some cases, a potential monetary loss for the organization. To address such situations effectively and proactively, IT management must deploy load balancing solutions in the datacenter infrastructure. We will soon discuss how a load balancer can not only distribute traffic, but also help ease network operations tasks.
First-generation balancing devices were implemented around BSD Unix versions. The new trend in balancing products is typically an appliance running on a FOSS Linux distribution, or in some cases an enterprise-grade appliance built on Red Hat or a similar Linux flavor. Functionally, a load balancer balances traffic by distributing it among two or more servers. Fig. 1 shows a typical web farm configuration hosting a load balancing device that acts as a front-end server to handle all web requests. Each silo hosts a different set of applications, whereas all servers in a given silo host identical applications. From the configuration point of view, the device is set up with two separate IP ranges: one used to handle incoming traffic, and the other, called virtual servers, used to connect to the nodes under its control. It thus forms an agent service between the requesting client and the responding server. It also acts on requests intelligently, based on the configured rules, choosing the recipient node with the least workload at that particular time. Rules define how a request should be handled, and also instruct the load balancer on special conditions such as node preference, session management, etc. The load balancing device then makes a separate TCP connection to the recipient web server, redirects the request to it, and keeps track of the request processing.
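To make this flow concrete, here is a minimal sketch in Python of the behaviour just described, not any vendor's actual implementation: the balancer accepts a client connection, picks the back-end node with the fewest open connections, makes a separate TCP connection to it, and relays data both ways. The addresses and port numbers are hypothetical placeholders.

import socket
import threading

# Hypothetical back-end nodes in one silo
BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80)]
active = {b: 0 for b in BACKENDS}  # open connections per node

def pick_backend():
    # Choose the node with the least workload right now
    return min(BACKENDS, key=lambda b: active[b])

def pipe(src, dst):
    # Relay bytes one way until the connection closes
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    backend = pick_backend()
    active[backend] += 1
    server = socket.create_connection(backend)  # separate TCP connection
    threading.Thread(target=pipe, args=(client, server)).start()
    pipe(server, client)
    active[backend] -= 1

listener = socket.socket()
listener.bind(("0.0.0.0", 8080))  # the address clients connect to
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,)).start()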
In a strict technical sense, a load balancer balances the underlying TCP connections rather than the actual web requests. It is a common misconception that a load balancer checks resource utilization such as CPU and memory on a controlled server. In reality, it simply checks the network response time of a server, which is a result of the server's overall resource utilization. Since it acts as a catalyst in improving the scalability of a server farm, it maintains data on each node under its control: the number of requests processed in the past, the response time of each host, the fault trend of each host, and so on. In earlier days, load balancing solutions were implemented around simple round robin techniques, which did help in distributing load but did not provide fault tolerance, as they lacked the necessary intelligence.
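For contrast, the simple round robin technique of those early solutions fits in a few lines of Python; note how it blindly cycles through the nodes with no awareness of response times or failures (the node names are purely illustrative):

from itertools import cycle

# Plain round robin: each node gets requests strictly in turn,
# even if it is slow or has failed.
nodes = cycle(["web1", "web2", "web3"])

for _ in range(6):
    print(next(nodes))  # web1, web2, web3, web1, web2, web3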
In today's advanced datacenters, load balancers are used to distribute traffic effectively for web servers, databases, queue managers, DNS servers, email and SMTP traffic, and almost every application that uses IP traffic. Balancing DNS servers helps distribute DNS queries to geographically dispersed servers, which is useful for disaster recovery implementations.
In a server farm, it is common to experience server downtime, caused either by an unforeseen resource failure or scheduled for maintenance purposes. Resource failures, in turn, can occur at the hardware level or simply at the software application level. In a business-critical infrastructure, such situations should be transparent, avoiding user impact altogether. As discussed earlier, since the balancing device maintains separate TCP connectivity with each controlled node, this can be further utilized to achieve fault tolerance.
The balancer maintains a configurable heartbeat with each node, called a monitor, which can be a simple ICMP ping or an FTP/HTTP connection that pulls up a particular page. Upon an appropriate response from the node, the load balancer knows that the node is live and responding, marks it as an active participant, and finds it eligible for the balancing process. If the server or its application resource fails, the balancer waits a certain amount of time for the node to respond to the heartbeat and, upon non-compliance, marks that node as a non-participant and removes it from the silo. Once so marked, the load balancer doesn't send traffic to that node. It does, however, keep polling to see if the node is back alive; if so, it marks the node as an active participant again and starts sending traffic to it. If a fault occurs while a request is being transferred to a node, modern load balancers are capable of detecting that too, and taking configurable corrective actions.
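As an illustration, the following Python sketch approximates an HTTP monitor of the kind described: it polls every node periodically, marks a node as a non-participant after a failed or slow response, and keeps polling so the node can rejoin when it recovers. The hosts, page path and timings here are assumptions, not any vendor's defaults.

import http.client
import time

NODES = {"10.0.0.11": True, "10.0.0.12": True}  # node -> active participant?
MONITOR_PAGE = "/health.html"  # hypothetical page the monitor pulls up
TIMEOUT = 2                    # seconds to wait for the heartbeat response

def probe(host):
    try:
        conn = http.client.HTTPConnection(host, 80, timeout=TIMEOUT)
        conn.request("GET", MONITOR_PAGE)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except OSError:
        return False

while True:
    for host in NODES:
        alive = probe(host)
        if NODES[host] and not alive:
            NODES[host] = False  # removed from the silo: no more traffic
        elif not NODES[host] and alive:
            NODES[host] = True   # node is back: traffic resumes
    time.sleep(5)                # monitor interval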
This feature can be further explored by the operations team for maintenance purposes. A service instance can be configured on a node; for example, a separate web instance running under a separate IP address with a dummy page on it, and a monitor configured to access that page periodically. If the server is to be taken offline for maintenance, the operations person can stop that website, which results in the server being marked as a non-participant; it can then be shut down or worked upon for administration. Once the maintenance chores are completed, the website service instance can be started again to bring the server back into the load-balanced pool. This can be extended further by configuring multiple such monitors at the application level and reporting them on a dashboard via a network monitoring product, for the operations admin's view.
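A minimal sketch of that dummy-page trick, assuming the monitor fetches /health.html from a separate instance: the instance serves the page only while a marker file exists, so the operations person drains the node by removing the file and re-adds it by recreating the file. The file path and port are illustrative.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

FLAG = "/var/run/in_service"  # hypothetical marker file

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        if os.path.exists(FLAG):
            self.send_response(200)  # monitor passes: node stays in the pool
            self.end_headers()
            self.wfile.write(b"OK\n")
        else:
            self.send_response(503)  # monitor fails: balancer drains the node
            self.end_headers()

HTTPServer(("0.0.0.0", 8081), Health).serve_forever()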
Earlier versions of load balancers worked at layer 2 (link aggregation) or layer 4 (IP-based). Since requests flow through the balancing device anyway, it made sense to read into them at layer 7, bringing additional flexibility to the decision-making process of the balancing techniques. Adding such intelligence to a load-balanced server farm brings a great deal of flexibility towards higher scalability, better manageability and high availability. Layer-7 load balancing primarily operates on the following three techniques:
• URL parsing
• HTTP header interception
• Cookie interception
Typically, a layer-7 rule looks somewhat like the one shown below, though the exact syntax varies for each vendor and device model. As seen in the example, a request is matched first on the virtual directory being accessed, then on the content of a particular cookie field; finally, the request is sent to a default pool if the first two conditions do not match.
{
    if (http_qstring, "/") = "mydir"
        sendto server_pool1
    else if cookie("mycookie") contains "logon_cookie"
        sendto server_pool2
    else
        sendto server_pool3
}
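The same decision logic can be sketched in plain Python, to show what the device evaluates for each request; the pool names and request fields are illustrative only.

def choose_pool(path, cookies):
    # First, match on the virtual directory being accessed
    if path.split("/")[1] == "mydir":
        return "server_pool1"
    # Then, match on the content of a particular cookie field
    if "logon_cookie" in cookies.get("mycookie", ""):
        return "server_pool2"
    # Default pool when neither condition matches
    return "server_pool3"

print(choose_pool("/mydir/index.html", {}))                     # server_pool1
print(choose_pool("/shop/cart", {"mycookie": "logon_cookie"}))  # server_pool2
print(choose_pool("/shop/cart", {}))                            # server_pool3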
Since the request is intercepted and interpreted at layer 7, the possibilities for adding intelligence grow enormously. A rule can be configured to distribute traffic based on a field in the HTTP header, the source IP address or custom cookie fields, to name a few. There are endless possibilities for intelligent traffic distribution: if the incoming request comes from a smartphone, it can be sent to the servers hosting mobile applications; if the request is for a URL that hosts a simple HTML-based website, it can be routed to an economical server farm; if a login cookie is not present in the request, it can be sent to a login server, keeping the clutter away from other busy servers; and so on.
As layer-7 rules bring programmability to the balancing techniques, they can be further explored for the benefit of the technology operations staff. When the rollout of a newer version is planned in an existing database server farm, the new set of servers can be configured as a separate pool to perform migration mock-tests, and brought online once the results are satisfactory. If the rollout runs into problems, merely switching the pools back to the original settings in the load balancer configuration achieves a rollback with minimum downtime. As another example, many mission-critical web farms have to retain legacy server operating systems for stability reasons, while new applications demand the latest and greatest platforms. In such cases, separate server pools can be configured for the new applications, and traffic distribution can be achieved by detecting the web request URLs at layer 7.
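In effect, the rollout-and-rollback trick boils down to repointing a named pool at a different set of servers while the layer-7 rule keeps referring to the same pool name. A toy Python sketch, with all server and pool names hypothetical:

# The rule always sends traffic to "db_pool", so a rollout
# or rollback is just a re-mapping of that pool.
pools = {
    "db_pool":     ["db-old-1", "db-old-2"],  # current production
    "db_pool_new": ["db-new-1", "db-new-2"],  # staged for mock tests
}

def cut_over():
    pools["db_pool"], pools["db_pool_new"] = \
        pools["db_pool_new"], pools["db_pool"]

cut_over()               # rollout
print(pools["db_pool"])  # ['db-new-1', 'db-new-2']
cut_over()               # rollback with minimum downtime
print(pools["db_pool"])  # ['db-old-1', 'db-old-2']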
Load balancing at layer 7 also helps improve the return on investment (ROI) of an IT infrastructure. Consider a web portal that caters to a high volume of users with content-rich web pages full of JavaScript, images, and so on. Since the scripts and images don't change often, they can be treated as static content and hosted on a separate set of servers. As a result, the web servers running the important business logic are freed up in terms of resources, which also means we can accommodate more users per server, or host more applications per server, and thus reduce the effective cost of hosting. This also proves that a carefully configured layer-7 load balancer can achieve higher application throughput on a given datacenter infrastructure footprint.
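The static-content offloading just described is, again, a one-rule affair at layer 7. A sketch of such a rule in Python, with illustrative pool names:

STATIC_SUFFIXES = (".js", ".css", ".png", ".jpg", ".gif")

def route(path):
    # Send unchanging assets to the cheap static servers, keeping
    # the business-logic servers free for the real work
    if path.lower().endswith(STATIC_SUFFIXES):
        return "static_pool"
    return "app_pool"

print(route("/img/logo.png"))  # static_pool
print(route("/checkout"))      # app_pool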
Additional features in a load balancer appliance
Besides the core traffic distribution task, most modern industry-grade load balancers also come equipped with features that take up additional tasks from the managed nodes or other infrastructure components. SSL negotiation is one such feature: it handles heavy volumes of SSL handshaking that would otherwise take a performance toll on the web servers. Another great feature is cookie persistence, which helps an application stick to a particular server in order to maintain a stateful session with it.
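Cookie persistence can be pictured with the following sketch: the balancer remembers which server a session was first sent to and keeps routing that session there. The cookie value, server names and the random initial pick are assumptions for illustration.

import random

SERVERS = ["app1", "app2", "app3"]
stickiness = {}  # session cookie value -> server bound to it

def pick_server(session_id):
    # Reuse the server already bound to this session, if any,
    # so the application keeps its stateful session intact
    if session_id not in stickiness:
        stickiness[session_id] = random.choice(SERVERS)
    return stickiness[session_id]

print(pick_server("abc123"))
print(pick_server("abc123"))  # same server both times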
Many newer load balancers provide administrative features such as traffic monitoring and TCP buffering; security features such as content filtering and an intrusion-detection firewall; and performance features such as HTTP caching, HTTP compression, etc. Since a load balancing device sits as a front-end component in a server farm, these devices come equipped with high-speed network ports such as gigabit Ethernet and fiber connections.
Open source load balancing solutions
Multiple vendors provide industry-grade enterprise load balancing solutions, such as F5 Networks (BIG-IP), Citrix NetScaler, Cisco, Coyote Point, etc. These devices are rich in features, provide flexible rule programmability and exhibit high performance throughput, but do come with a price tag and support costs. For those interested in FOSS (Free and Open Source Software), multiple solutions are available on the Linux platform, offering everything from simple load balancing to full-featured, appliance-grade products. Let's look at three popular solutions built on open source platforms.
LVS (Linux Virtual Server) is one famous solution that has proven itself to be industry-grade software; it can be used to build highly scalable, highly available Linux server clusters that cater to high volumes of web requests. It comes with ample documentation to help you build a load-balanced farm step by step, and can be found at http://www.linux-vs.org
Ultra Monkey is another interesting solution, which provides fail-safe features besides basic load balancing: if one load balancer device fails, the other can take over, providing device-level fault tolerance. It supports multiple Linux flavors such as Fedora, Debian, etc, and can be found at http://www.ultramonkey.org
Another powerful but lesser-known implementation is Crossroads for Linux, a TCP-based load balancer that provides a very basic form of traffic distribution. The beauty of this product is that its source code can easily be modified to serve just one task, such as DNS or web balancing, without any bells and whistles, thus achieving very high performance for the purpose served. It can be found at http://crossroads.e-tunity.com
Summary
A global increase in web hits is demanding higher throughput from existing web infrastructures. While the datacenter components can be upgraded, the need goes far beyond that, and the presence of a load balancer appliance has become crucial in a modern IT infrastructure. Adding layer-7 programming capabilities to load balancing devices improves their power in terms of high availability, better manageability and increased scalability. While there are costly vendor products available in the market, multiple open source solutions are available too. Configuring layer-7 rules on a load balancer is an art, and needs a deep understanding of networking protocols and server operations. The features of load balancers can also be used as an aid to operations and maintenance tasks.
Published in Linux For You Magazine
http://www.linuxforu.com/teachme/layer-7-load-balancers/