In the modern enterprise, the lines between business leaders and IT continue to blur. Executives are increasingly making decisions based on technology and data, and will have to play a more active role in the technological direction of their organization.
As such, understanding a firm’s IT operations could help its leaders better understand their business as a whole. This is especially true regarding management of the data center and web traffic.
For the uninitiated, load balancing is the process of distributing incoming traffic across multiple servers. There are many reasons to load balance but, essentially, it provides a more efficient path for your workloads and helps you scale your business while limiting downtime.
“At its core, a load balancer is a network device that routes incoming traffic destined for a single destination (web site, application, or service) and ‘shares’ the incoming connections across multiple destination devices or services,” said Toby Owen, vice president of product management at Cogeco Peer 1.
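To make Owen's description concrete, here is a minimal sketch of that "sharing" in round-robin form. This is illustrative only; the backend addresses are made up, and a real load balancer would also proxy the connection and track server health.

```python
from itertools import cycle

# Hypothetical pool of backend servers sitting behind one public endpoint.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
pool = cycle(BACKENDS)  # yields backends in order, forever: round-robin

def route():
    """Assign the next incoming connection to the next backend in rotation."""
    return next(pool)

# Six incoming requests get spread evenly across the three servers.
assignments = [route() for _ in range(6)]
print(assignments)
```

Each server ends up handling an equal share of the incoming connections, which is the core idea behind every approach described below.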
If you want a more in-depth introduction to load balancing, check out this TechRepublic article written back in August.
Beyond making the choice to pursue load balancing, you have to decide how you’re going to implement it. That starts with choosing what kind of load balancer will work best for your organization’s needs.
Here are five of the most common types of load balancers and how they stack up to each other.
1. Hardware load balancing

As the name implies, this method uses a physical hardware load balancer, which is commonly rack-mounted. The machine is physically connected to both the upstream and downstream segments of your network to perform load balancing based on the parameters established by the data center administrator.
Typically, in deployments using a hardware load balancer, the application is hosted on-premise. Common vendors in this space are companies like F5, Citrix, Cisco, and Barracuda, among others.
Paul Andersen, director of marketing for Array Networks, said hardware load balancers work well in situations “where there is high-volume traffic and/or where there is heavy utilization of compute-intensive functions such as SSL offloading, traffic inspection or complex scripts.”
Many consider dedicated hardware a tried-and-true method. Because of this, Owen said, it can be easier to find network admins to support it. Additionally, being single-purpose means physical load balancers usually require less maintenance and offer high reliability. Still, dedicated hardware is often the most expensive option and can limit your flexibility.
2. Software load balancing

In this scenario, load balancing software runs on a server to handle requests. Faisal Memon of NGINX said that the major upside to this method is that you get the flexibility of a software solution with the performance of running directly on hardware. The hardware itself can also be cheaper if you buy from an OEM vendor.
The load balancing software itself may be licensed or open source. The software method works well in an environment focused on implementing DevOps and a continuous delivery approach. Well-known companies in this space include NGINX, Array Networks, HAProxy, and Zen Load Balancer, among others.
Another benefit of software load balancing is potentially lower cost. Because you aren't tied to any vendor package, you can pick and choose your hardware from an OEM vendor and avoid the markups of some other options, Memon said.
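Part of the flexibility Memon describes is in routing policy. As a sketch, here is a toy weighted round-robin selector, a policy commonly offered by software load balancers such as NGINX and HAProxy; the server names and weights here are made up for illustration:

```python
from itertools import cycle

# Hypothetical backends with weights: "app1" should get twice app2's traffic.
WEIGHTED_BACKENDS = {"app1.internal": 2, "app2.internal": 1}

# Naive weighted round-robin: repeat each server in the rotation
# proportionally to its weight, then cycle through the expanded list.
rotation = [name for name, weight in WEIGHTED_BACKENDS.items() for _ in range(weight)]
pool = cycle(rotation)

picks = [next(pool) for _ in range(6)]
print(picks.count("app1.internal"), picks.count("app2.internal"))
```

Out of every six requests, the heavier-weighted server handles four, which is useful when your backends have unequal capacity.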
3. Virtual load balancing

A virtualized load balancer is one that is running inside a virtual machine, managed through the hypervisor. Depending on how it is hosted, it may exist in a shared environment. As is the case with virtualization in general, virtualized load balancing works especially well for test and development environments, Owen said.
This option tends to be a little cheaper, and it is usually more of an out-of-the-box solution, especially if you already have the proper tools in place. For example, if you’re a VMware shop, Memon said, it would make sense to go virtual with your load balancing.
The big decision to make is how you will approach the cost. If you’re in the public cloud, Andersen said, utility consumption is the way to go, but there are other options to consider.
“If you have built a private cloud, and have invested heavily in virtual servers and infrastructure, virtual load balancers with perpetual licenses will give you maximum agility with the biggest bang for buck. Virtual load balancers with perpetual licenses are also good to have on hand for development and non-production environments,” Andersen said.
The major downside of virtual load balancers is the performance penalty of the hypervisor, Memon said. Depending on how well it’s tuned, he estimates it could be a 10-20% performance drop relative to other methods.
4. Elastic load balancing

Elastic load balancers are the "cloud" load balancers, usually available on a subscription model. They are only available through cloud platforms offered by companies like Amazon Web Services (AWS), Google, and Microsoft. However, they can be used to balance non-cloud-based applications, too.
Think of this as load balancing-as-a-service (LBaaS). You pay for what you use and can cut it off at any time. It is called "elastic" because it scales automatically with your organization's request volume. It is easy to use and quick to set up.
“Often you’ll incur some additional latency, as your incoming request and outgoing traffic will need to traverse a WAN link in transit between the LB service and your application stack (wherever it is located), but in a pinch, or for non-latency sensitive applications, this can be a very cost effective solution, and can literally be set up in seconds,” Owen said.
The main downsides to this approach are the limited features and the potential for vendor lock-in through the initial cloud provider.
“You can’t use your ELB application in Azure, or you can’t use your Azure application in Amazon if you’re tied to their particular load balancer,” Memon said.
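Under the hood, the "elastic" behavior is essentially a control loop: measure traffic, then grow or shrink the backend pool to keep each server within a target load. The following is a toy sketch of that idea; the threshold and rates are illustrative and do not reflect any provider's actual scaling policy:

```python
import math

def desired_pool_size(requests_per_sec, target_rps_per_server=100):
    """Size the pool so each server handles roughly the target request rate."""
    needed = math.ceil(requests_per_sec / target_rps_per_server)
    return max(1, needed)  # never scale in below a single server

# Traffic climbs from 150 to 950 requests/sec, then falls off overnight.
print(desired_pool_size(150))  # 2 servers
print(desired_pool_size(950))  # scale out to 10
print(desired_pool_size(40))   # scale back in to 1
```

The subscription pricing follows directly from this loop: you are billed for however many servers the control loop decides to run.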
5. DNS load balancing

While not its key functionality, DNS can actually be used for load balancing, Owen said. Because it is a free service, it's sometimes referred to as a "poor man's load balancer," he said.
“When an incoming request is received, the DNS service can route between the IP addresses for servers it has on record for providing that service, typically in a round-robin fashion,” Owen said.
It can be good for simple load balancing tasks, but this method doesn't scale well and offers few configuration options.
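Mechanically, DNS round-robin just rotates the order of the address records it returns, so successive clients (which typically connect to the first address) land on different servers. A minimal sketch of that rotation, with made-up addresses:

```python
from collections import deque

# Hypothetical A records a DNS service holds for one hostname.
records = deque(["203.0.113.1", "203.0.113.2", "203.0.113.3"])

def resolve():
    """Return all records, then rotate so the next query sees a new first entry."""
    answer = list(records)
    records.rotate(-1)
    return answer

# Most clients connect to the first address returned, so successive
# queries land on different servers.
first_choices = [resolve()[0] for _ in range(4)]
print(first_choices)
```

Note what's missing here compared to a real load balancer: there is no health checking, so a dead server keeps receiving its share of traffic until its record is removed, which is one reason this method doesn't scale well.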
These methods aren’t explicitly separate, and many organizations use a variety of methods in combination to achieve their goals. Hopefully this has given you a general overview of some ways you can approach load balancing.
About the Author
Conner Forrest is Enterprise Editor for TechRepublic. He covers startups and enterprise technology and is passionate about the convergence of tech and culture.