If you’re planning to take the SY0-601 version of the Security+ exam, you should understand how to implement secure network designs. This includes adding redundancy and fault tolerance. By adding redundancy to your systems and networks, you can keep services operational even when individual components fail. By increasing reliability, you increase a system’s resiliency and availability.
For example, can you answer this practice test question?
Q. Your organization recently implemented two servers in an active/passive load-balancing configuration. What security goal does this support?
A. Obfuscation
B. Integrity
C. Confidentiality
D. Resilience
More importantly, do you know why the correct answer is correct and the incorrect answers are incorrect? The answer and explanation are available at the end of this post.
High availability refers to a system or service that must remain operational with almost zero downtime. By implementing redundancy and fault tolerance methods, it’s possible to achieve 99.999 percent uptime, commonly called five nines. This equates to less than 6 minutes of downtime a year: 60 minutes × 24 hours × 365 days × .00001 = 5.256 minutes.
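As a quick check of that arithmetic, here’s a short Python sketch (the function name is mine, not exam content) that converts an availability level into allowed downtime per year:

```python
# Minutes in a non-leap year: 60 minutes x 24 hours x 365 days
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Return allowed downtime per year for a given availability (0.0 to 1.0)."""
    return MINUTES_PER_YEAR * (1 - availability)

# Five nines (99.999 percent) allows just over 5 minutes of downtime a year.
print(round(downtime_minutes_per_year(0.99999), 3))  # 5.256
```

The same function shows why each extra nine matters: three nines (99.9 percent) allows about 525 minutes of downtime a year, nearly nine hours.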
Although five nines is achievable, it’s expensive. However, if the potential cost of an outage is high, the high cost of the redundant technologies is justified. For example, some websites generate a significant amount of revenue, and every minute a website is unavailable represents lost money. High-capacity load balancers ensure the service is always available even if a server fails.
Active/Active Load Balancers
An active/active load balancer can optimize and distribute data loads across multiple computers or multiple networks. For example, if an organization hosts a popular website, it can use multiple servers hosting the same website in a web farm. Load-balancing software distributes traffic equally among all the servers in the web farm, typically located in a screened subnet (sometimes called a demilitarized zone, or DMZ).
The term load balancer makes it sound like it’s a piece of hardware, but a load balancer can be hardware or software. A hardware-based load balancer accepts traffic and directs it to servers based on factors such as processor utilization and the number of current connections to the server. A software-based load balancer uses software running on each of the servers to balance the load. Load balancing primarily provides scalability, but it also contributes to high availability. Scalability refers to the ability of a service to serve more clients without any decrease in performance. Availability ensures that systems are up and operational when needed. By spreading the load among multiple systems, a load balancer ensures that individual systems are not overloaded, increasing overall availability.
Consider a web server that can serve 100 clients per minute. If more than 100 clients connect at a time, performance degrades. To serve more clients, you need to either scale up or scale out. You scale the server up by adding resources, such as processors and memory, and you scale out by adding servers behind a load balancer.
The figure shows an example of a load balancer with multiple web servers configured in a web farm. Each web server includes the same web application. A load balancer uses a scheduling technique to determine where to send new requests. Some load balancers simply send new requests to the servers in a round-robin fashion. The load balancer sends the first request to Server 1, the second request to Server 2, and so on. Other load balancers automatically detect the load on individual servers and send new clients to the least used server.
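A round-robin scheduler is simple enough to sketch in a few lines of Python (the server names are placeholders for illustration):

```python
from itertools import cycle

# Three web servers in a web farm (hypothetical names).
servers = ["Server1", "Server2", "Server3"]

# Round-robin: each new request goes to the next server in rotation.
rotation = cycle(servers)

for request in range(1, 6):
    print(f"Request {request} -> {next(rotation)}")
# Request 4 wraps back around to Server1.
```

Note that pure round-robin ignores how busy each server actually is; load balancers that pick the least-used server need health and utilization data from each node.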

Load balancing
Some load balancers use source address affinity to direct the requests. Source affinity sends requests to the same server based on the requestor’s IP address and provides the user with session persistence. As an example, imagine that Homer sends a request to retrieve a webpage. The load balancer records his IP address and sends his request to Server 3. When he interacts with the page and sends another request, the load balancer identifies his IP address and sends his request to Server 3 again. Source affinity effectively sticks users to a specific server, ensuring session persistence.
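One common way to implement source address affinity is to hash the client’s IP address and map the result to a server, so the same address always lands on the same server. A minimal sketch (the server names and the choice of hash are illustrative, not any specific product’s method):

```python
import hashlib

servers = ["Server1", "Server2", "Server3"]

def pick_server(client_ip: str) -> str:
    """Map a client IP to the same back-end server on every request."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

# Repeated requests from the same IP stick to one server.
print(pick_server("203.0.113.10") == pick_server("203.0.113.10"))  # True
```

Because the mapping is a pure function of the IP address, no per-client state needs to be stored on the load balancer to keep the session sticky.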
A software-based load balancer uses a virtual IP. For example, imagine the IP address of the website is 72.52.206.134. This IP address isn’t assigned to a specific server. Instead, clients send requests to this IP address, and the load-balancing software redirects each request to one of the servers in the web farm using its private IP address. Because this public IP address isn’t tied to a single physical server, it is referred to as a virtual IP.
An added benefit of many load balancers is that they can detect when a server fails. If a server stops responding, the load-balancing software no longer sends clients to this server. This contributes to overall high availability.
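Conceptually, the health check just filters failed servers out of the pool before any scheduling happens. A sketch, with made-up health states:

```python
# Last known health-check result for each server (hypothetical states).
health = {"Server1": True, "Server2": False, "Server3": True}

def available_servers(health: dict) -> list:
    """Return only the servers that passed their last health check."""
    return [name for name, is_up in health.items() if is_up]

# Server2 stopped responding, so it no longer receives clients.
print(available_servers(health))  # ['Server1', 'Server3']
```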
Active/Passive Load Balancers
Load balancers can also be configured in an active/passive configuration. In an active/passive configuration, one server is active, and the other server is inactive. If the active server fails, the inactive server takes over.
Consider the figure, which shows a two-node active/passive configuration. (Load balancers can include more than two nodes, but these examples use only two to keep them simple.) Both nodes are individual servers, and they both have access to external data storage used by the active server. Additionally, the two nodes have a monitoring connection to each other used to check each other’s health, or heartbeat.

Active/passive configuration
Imagine that Node 1 is the active node. When any of the clients connect, the load balancer ensures that the clients connect to the active node. If Node 1 fails, Node 2 senses the failure through the heartbeat connection and configures itself as the active node. Because both nodes have access to the shared storage, there is no loss of data for the client. Clients may notice a momentary hiccup or pause, but the service continues.
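The failover decision can be sketched as a simple timeout on the heartbeat (the three-second timeout is an illustrative value, not a standard):

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before failing over (illustrative)

def serving_node(node1_last_heartbeat: float, now: float) -> str:
    """Node 1 serves traffic while its heartbeat is fresh; otherwise Node 2 takes over."""
    if now - node1_last_heartbeat <= HEARTBEAT_TIMEOUT:
        return "Node1"  # heartbeat is fresh: Node 1 remains active
    return "Node2"      # heartbeat is stale: Node 2 becomes active

print(serving_node(node1_last_heartbeat=10.0, now=12.0))  # Node1
print(serving_node(node1_last_heartbeat=10.0, now=20.0))  # Node2
```

Real implementations add safeguards (such as multiple missed heartbeats before failover) to avoid both nodes believing they are active at the same time.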
You might notice that the shared storage in the figure represents a single point of failure. In practice, it’s common to use a robust hardware RAID-10 array here, ensuring the service continues even if a hard drive in the shared storage fails. Additionally, if both nodes are plugged into the same power grid, the power represents a single point of failure. To mitigate this, each node can be protected with a separate UPS and connected to a separate power grid.
Q. Your organization recently implemented two servers in an active/passive load-balancing configuration. What security goal does this support?
A. Obfuscation
B. Integrity
C. Confidentiality
D. Resilience
Answer is D. An active/passive load-balancing configuration supports resilience and high availability. An active/passive load-balancing configuration uses redundant servers to ensure a service continues to operate even if one of the servers fails.
Obfuscation methods attempt to make something unclear or difficult to understand and are not related to load balancing.
Integrity methods ensure that data has not been modified.
Confidentiality methods such as encryption prevent the unauthorized disclosure of data.
See Chapter 9 of the CompTIA Security+: Get Certified Get Ahead: SY0-601 Study Guide for more information on implementing controls to protect assets.