If you’re planning to take the SY0-401 or the SY0-501 Security+ exam, you should have a basic understanding of how resiliency and automation strategies reduce risk. This includes high availability, distributive allocation, redundancy, fault tolerance, and RAID.
For example, can you answer this question?
Q. Your organization is planning to deploy a new e-commerce web site. Management anticipates heavy processing requirements for a back-end application. The current design will use one web server and multiple application servers. Which of the following BEST describes the application servers?
A. Load balancing
B. Clustering
C. RAID
D. Affinity scheduling
More importantly, do you know why the correct answer is correct and the incorrect answers are incorrect? The answer and explanation are available at the end of this post.
Load Balancers for High Availability
A load balancer can optimize and distribute data loads across multiple computers or multiple networks. For example, if an organization hosts a popular web site, it can use multiple servers hosting the same web site in a web farm. Load-balancing software distributes traffic equally among all the servers in the web farm, typically located in a DMZ.
The term load balancer makes it sound like it’s a piece of hardware, but a load balancer can be hardware or software. A hardware-based load balancer accepts traffic and directs it to servers based on factors such as processor utilization and the number of current connections to the server. A software-based load balancer uses software running on each of the servers in the load-balanced cluster to balance the load. Load balancing primarily provides scalability, but it also contributes to high availability. Scalability refers to the ability of a service to serve more clients without any decrease in performance. Availability ensures that systems are up and operational when needed. By spreading the load among multiple systems, load balancing ensures that individual systems are not overloaded, increasing overall availability.
Consider a web server that can serve 100 clients per minute. If more than 100 clients connect at a time, performance degrades, so you need to either scale up or scale out to serve more clients. You scale the server up by adding additional resources, such as processors and memory, and you scale out by adding additional servers behind a load balancer.
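The capacity arithmetic behind scaling out can be sketched as follows. The 100-clients-per-minute figure comes from the example above; the server counts and function name are hypothetical, for illustration only.

```python
# Assumed figure from the example above: each server handles
# about 100 clients per minute before performance degrades.
PER_SERVER_CAPACITY = 100

def farm_capacity(server_count, per_server=PER_SERVER_CAPACITY):
    """Total clients per minute a web farm can serve.

    Scaling out adds servers behind a load balancer, so capacity
    grows roughly linearly with the number of servers.
    """
    return server_count * per_server

print(farm_capacity(1))  # 100 clients/minute with a single server
print(farm_capacity(3))  # 300 clients/minute after scaling out to three servers
```

Scaling up (more CPU and memory per server) raises `PER_SERVER_CAPACITY` instead; the two approaches can be combined.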
The figure shows an example of a load balancer with multiple web servers configured in a web farm. Each web server includes the same web application. A load balancer uses a scheduling technique to determine where to send new requests. Some load balancers simply send new requests to the servers in a round-robin fashion. The load balancer sends the first request to Server 1, the second request to Server 2, and so on. Other load balancers automatically detect the load on individual servers and send new clients to the least used server.
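The round-robin scheduling described above can be sketched in a few lines of Python. The server names are hypothetical stand-ins for the servers in the figure; this is a minimal illustration, not a production load balancer.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Sends each new request to the next server in rotation."""

    def __init__(self, servers):
        self._rotation = cycle(servers)  # endlessly repeats the server list

    def pick_server(self):
        # Request 1 goes to the first server, request 2 to the second,
        # and so on, wrapping back to the first after the last.
        return next(self._rotation)

lb = RoundRobinBalancer(["Server1", "Server2", "Server3"])
print([lb.pick_server() for _ in range(5)])
# → ['Server1', 'Server2', 'Server3', 'Server1', 'Server2']
```

A load balancer that sends clients to the least-used server would instead consult per-server metrics (connection counts or CPU load) before each pick.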
Load balancing
Some load balancers use source address affinity to direct the requests. Source affinity sends requests to the same server based on the requestor’s IP address. As an example, imagine that Homer sends a request to retrieve a web page. The load balancer records his IP address and sends his request to Server 3. When he sends another request, the load balancer identifies his IP address and sends his request to Server 3 again. Source affinity effectively sticks users to a specific server for the duration of their sessions.
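Source address affinity can be sketched as a lookup table keyed by client IP, mirroring the description above: the balancer records the IP on the first request and reuses the mapping afterward. The server names and IP address are hypothetical.

```python
import random

servers = ["Server1", "Server2", "Server3"]
affinity_table = {}  # records which server each client IP was sent to

def pick_server(client_ip):
    """Stick each client to one server for the duration of its session."""
    if client_ip not in affinity_table:
        # First request from this IP: choose a server and record it.
        affinity_table[client_ip] = random.choice(servers)
    # Every later request from the same IP goes to the same server.
    return affinity_table[client_ip]

first = pick_server("203.0.113.10")
second = pick_server("203.0.113.10")
assert first == second  # Homer keeps landing on the same server
```

Real load balancers usually expire these entries after a session timeout; many also use a hash of the source IP instead of a table, which avoids storing state.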
A software-based load balancer uses a virtual IP. For example, imagine the IP address of the web site is 72.52.206.134. This IP address isn’t assigned to a specific server. Instead, clients send requests to this IP address and the load-balancing software redirects the request to one of the three servers in the web farm using their private IP addresses. In this scenario, the actual IP address is referred to as a virtual IP.
An added benefit of many load balancers is that they can detect when a server fails. If a server stops responding, the load-balancing software no longer sends clients to this server. This contributes to overall high availability for the load balancer.
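The failure detection described above can be sketched as a health check that drops unresponsive servers from the rotation. The probe below is a simulated stand-in for a real check, such as an HTTP request to a health endpoint; server names and statuses are hypothetical.

```python
servers = ["Server1", "Server2", "Server3"]

def healthy_servers(servers, is_up):
    """Keep only the servers that respond to a health probe."""
    return [s for s in servers if is_up(s)]

# Simulated probe results: Server2 has stopped responding.
status = {"Server1": True, "Server2": False, "Server3": True}

pool = healthy_servers(servers, lambda s: status[s])
print(pool)  # new client requests go only to the remaining servers
```

When Server2 starts responding again, the next round of probes returns it to the pool, so availability recovers without manual intervention.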
Q. Your organization is planning to deploy a new e-commerce web site. Management anticipates heavy processing requirements for a back-end application. The current design will use one web server and multiple application servers. Which of the following BEST describes the application servers?
A. Load balancing
B. Clustering
C. RAID
D. Affinity scheduling
The answer is A. The design uses load balancing to spread the load across multiple application servers. The scenario indicates the goal is to use multiple servers because of heavy processing requirements, and this is exactly what load balancing does.
Clustering is typically used to provide high availability by failing over to another server if one server fails.
RAID provides fault tolerance for disk drives, not servers.
Affinity scheduling helps ensure clients go to the same server during a session, but this isn’t relevant to this scenario.
See Chapter 9 of the CompTIA Security+: Get Certified Get Ahead: SY0-501 Study Guide
or
Chapter 9 of the CompTIA Security+: Get Certified Get Ahead: SY0-401 Study Guide
for more information on redundancy and fault tolerance.