High availability refers to a system or service that remains operational with almost zero downtime. If you’re planning to take the SY0-501 exam, you should have a basic understanding of how to use different redundancy and fault-tolerance methods.
For example, can you answer this practice test question?
Q. Your company’s web site experiences a large number of client requests during certain times of the year. Which of the following could your company add to ensure the web site’s availability during these times?
A. Fail-open cluster
B. Certificates
C. Web application firewall
D. Load balancing
More importantly, do you know why the correct answer is correct and why the incorrect answers are incorrect? The answer and explanation are available at the end of this post.
It’s worth mentioning that CompTIA has grouped both clustering and load balancing into the same load balancing category in the objectives. Many IT professionals do the same thing, though technically they are different concepts. In general, failover clusters are commonly used for applications such as databases. Load balancers are often used for services, such as web servers in a web farm.
Failover Clusters for High Availability
The primary purpose of a failover cluster is to provide high availability for a service offered by a server. Failover clusters use two or more servers in a cluster configuration, and the servers are referred to as nodes. At least one node is active and at least one is inactive. If an active node fails, an inactive node can take over the load with little or no interruption to clients.
Consider the figure, which shows a two-node active-passive failover cluster. Both nodes are individual servers, and they both have access to external data storage used by the active server. Additionally, the two nodes have a monitoring connection to each other used to check the health or heartbeat of each other.
Failover cluster
Imagine that Node 1 is the active node. When any of the clients connect, the cluster software (installed on both nodes) ensures that the clients connect to the active node. If Node 1 fails, Node 2 senses the failure through the heartbeat connection and configures itself as the active node. Because both nodes have access to the shared storage, there is no loss of data for the client. Clients may notice a momentary hiccup or pause, but the service continues.
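The heartbeat-driven takeover described above can be sketched in a few lines. This is a simplified illustration, not actual cluster software; the `Node` class, the timeout value, and the method names are all invented for this example, and real cluster products add safeguards (such as fencing) that are omitted here.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before declaring the peer failed

class Node:
    """Simplified cluster node that monitors its peer's heartbeat."""

    def __init__(self, name, active):
        self.name = name
        self.active = active
        self.last_peer_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        # Called each time a heartbeat arrives over the monitoring connection.
        self.last_peer_heartbeat = time.monotonic()

    def check_peer(self):
        # A passive node promotes itself to active if the peer goes silent.
        silent_for = time.monotonic() - self.last_peer_heartbeat
        if not self.active and silent_for > HEARTBEAT_TIMEOUT:
            self.active = True  # take over the shared storage and client load
        return self.active
```

In this sketch, Node 2 would run `check_peer` on a timer; once Node 1 stops sending heartbeats for longer than the timeout, Node 2 flips itself to active and clients continue with only that momentary pause.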
You might notice that the shared storage in the figure represents a single point of failure. It’s not uncommon for this to be a robust hardware RAID-10, which ensures that even if a hard drive in the shared storage fails, the service will continue. Additionally, if both nodes are plugged into the same power grid, the power represents a single point of failure. Each node can be protected with a separate UPS and plugged into a separate power grid.
It’s also possible to configure the cluster as an active-active cluster. Instead of one server being passive, the cluster balances the load between both servers.
Cluster configurations can include many more nodes than just two. However, the nodes need close to identical hardware, and clusters are often quite expensive. If a company truly needs to achieve 99.999 percent uptime, though, it’s worth the expense.
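To put that 99.999 percent ("five nines") figure in perspective, you can work out the downtime budget it allows per year. The short calculation below is just arithmetic on the availability percentage; the function name is made up for this example.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_downtime_minutes(availability):
    """Return the yearly downtime budget for an availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

# Five nines allows only about 5.26 minutes of downtime per year,
# while 99.9 percent ("three nines") allows nearly nine hours.
print(round(max_downtime_minutes(0.99999), 2))
print(round(max_downtime_minutes(0.999), 1))
```

That tiny downtime budget is why companies accept the cost of redundant nodes, storage, and power when five nines is a genuine requirement.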
Load Balancers for High Availability
A load balancer can optimize and distribute data loads across multiple computers or multiple networks. For example, if an organization hosts a popular web site, it can use multiple servers hosting the same web site in a web farm. Load-balancing software distributes traffic equally among all the servers in the web farm, typically located in a DMZ.
The term load balancer makes it sound like it’s a piece of hardware, but a load balancer can be hardware or software. A hardware-based load balancer accepts traffic and directs it to servers based on factors such as processor utilization and the number of current connections to the server. A software-based load balancer uses software running on each of the servers in the load-balanced cluster to balance the load.
Load balancing primarily provides scalability, but it also contributes to high availability. Scalability refers to the ability of a service to serve more clients without any decrease in performance. Availability ensures that systems are up and operational when needed. By spreading the load among multiple systems, load balancing ensures that individual systems are not overloaded, increasing overall availability.
An added benefit of many load balancers is that they can detect when a server fails. If a server stops responding, the load-balancing software no longer sends clients to this server. This contributes to overall high availability for the load balancer.
Q. Your company’s web site experiences a large number of client requests during certain times of the year. Which of the following could your company add to ensure the web site’s availability during these times?
A. Fail-open cluster
B. Certificates
C. Web application firewall
D. Load balancing
The answer is D. Load balancing shifts the load among multiple systems and can increase the site’s availability by adding nodes when necessary.
A failover cluster also provides high availability, but there is no such thing as a fail-open cluster.
Certificates help ensure confidentiality and integrity, but do not assist with availability.
A web application firewall helps protect a web server against attacks, but it does not increase availability from normal client requests.
See Chapter 9 of the CompTIA Security+: Get Certified Get Ahead: SY0-501 Study Guide for more information on redundancy and fault tolerance.