Applies To:
BIG-IP ASM 11.6.5, 11.6.4, 11.6.3, 11.6.2, 11.6.1
Preventing DoS Attacks on Applications
What is a DoS attack?
A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) makes a victim's resource unavailable to its intended users, or obstructs the communication media between the intended users and the victimized site so that they can no longer communicate adequately. Perpetrators of DoS attacks typically target sites or services, such as banks, credit card payment gateways, and e-commerce web sites.
Application Security Manager™ (ASM) helps protect web applications from DoS attacks aimed at the resources that are used for serving the application: the web server, web framework, and the application logic. Advanced Firewall Manager™ (AFM) helps prevent network, SIP, and DNS DoS and DDoS attacks.
HTTP-GET attacks and page flood attacks are typical examples of application DoS attacks. These attacks are initiated either from a single user (single IP address) or from thousands of computers (distributed DoS attack), which overwhelms the target system. In page flood attacks, the attacker downloads all the resources on the page (images, scripts, and so on) while an HTTP-GET flood repeatedly requests specific URLs regardless of their place in the application.
About recognizing DoS attacks
Application Security Manager™ determines that traffic is a DoS attack based on calculations for transaction rates on the client side (TPS-based) or latency on the server side (latency-based). You can specify the calculations that you want the system to use.
In addition, the system can protect web applications against DoS attacks on heavy URLs. With heavy URL protection, during a DoS attack the system protects the heavy URLs using the methods configured in the DoS profile.
You can view details about DoS attacks that the system detected and logged in the event logs and DoS reports. You can also configure remote logging support for DoS attacks when creating a logging profile.
When to use different DoS protections
Application Security Manager provides several different types of DoS protections that you can configure to protect applications. The following table describes when it is most advantageous to use the different protections. You can use any combination of the protections.
DoS Protection | When to Use
---|---
TPS-based protection | To focus protection on the client side to detect an attack right away. |
Latency-based protection | To focus protection on the server side where attacks are detected when a server slowdown occurs. |
Heavy URLs | If application users can query a database or submit complex queries that may slow the system down. |
Proactive bot defense | To stop DoS attacks before they compromise the system. Affords great protection but impacts performance. |
CAPTCHA challenge | To stop non-human attackers by presenting a character recognition challenge to suspicious users. |
About configuring TPS-based DoS protection
When setting up DoS protection, you can configure the system to prevent DoS attacks based on transaction rates (TPS-based anomaly detection). If you choose TPS-based anomaly protection, the system detects DoS attacks from the client side using the following calculations:
- Transaction rate detection interval: A short-term average of recent requests per second (for a specific URL or from an IP address), updated every 10 seconds.
- Transaction rate history interval: A longer-term average of requests per second (for a specific URL or from an IP address), calculated for the past hour and updated every 10 seconds.
If the ratio of the transaction rate during the detection interval to the transaction rate during the history interval is greater than the percentage indicated in the TPS increased by setting, the system considers the web site to be under attack, or the URL, IP address, or geolocation to be suspicious. In addition, if the transaction rate during the detection interval exceeds the TPS reached setting (regardless of the history interval), the respective URL, IP address, or geolocation is likewise considered suspicious, or the site is considered to be under attack.
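The following sketch illustrates the two checks just described. It is not ASM source code; the function and parameter names are assumptions, and the ratio is expressed as a percentage so it can be compared directly with the TPS increased by setting.

```python
# A minimal sketch (not ASM source code) of the two TPS-based checks described
# above. The parameter names mirror the "TPS increased by" and "TPS reached"
# settings; the 10-second update cycle is assumed to happen elsewhere.

def tps_suspicious(detection_tps: float, history_tps: float,
                   tps_increased_by_pct: float, tps_reached: float) -> bool:
    """Return True if a URL, IP address, or geolocation looks suspicious."""
    # Relative check: recent rate compared with the historical baseline.
    if history_tps > 0 and (detection_tps / history_tps) * 100 > tps_increased_by_pct:
        return True
    # Absolute check: recent rate exceeds the fixed threshold, history ignored.
    return detection_tps > tps_reached


# Example: 900 req/s now against a 100 req/s baseline, "TPS increased by" = 500%.
print(tps_suspicious(900, 100, 500, 1000))  # True (relative check fires)
```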
Note that TPS-based protection might detect a DoS attack simply because many users are trying to access the server all at once, such as during a busy time or when a new product comes out. In this case, the attack might be a false positive because the users are legitimate. But the advantage of TPS-based DoS protection is that attacks can be detected earlier than when using latency-based protection. So it is important to understand the typical maximum peak loads on your system when setting up DoS protection, and use the methods that are best for your application.
About configuring latency-based DoS protection
When setting up DoS protection, you can configure the system to prevent DoS attacks based on the server side (latency-based anomaly detection). In latency-based detection, it takes a latency increase and at least one suspicious IP address, URL, heavy URL, site-wide entry, or geolocation to consider the activity to be an attack.
If the ratio of recent latency to historical latency is greater than the Latency increased by setting, a prerequisite for the presence of an attack is satisfied, but that alone is not sufficient. It also takes at least one suspicious IP address or geolocation, one attacked URL based on TPS criteria, one heavy URL, or one site-wide entry for the system to declare an attack and start mitigation. In addition, if the latency during the detection interval is greater than the Latency reached setting (regardless of the history interval), the respective IP address is likewise considered suspicious, or the URL is considered to be under attack.
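As a rough illustration of the combined conditions, consider the following sketch. It is not ASM source code; the names are assumptions, and suspicious_entities stands in for any suspicious IP address, geolocation, attacked URL, heavy URL, or site-wide entry.

```python
# A minimal sketch (not ASM source code) of the latency-based decision described
# above. Parameter names mirror the "Latency increased by" and "Latency reached"
# settings; suspicious_entities is a hypothetical stand-in for whatever suspicious
# entities the system has already identified.

def latency_attack_detected(detection_latency_ms: float, history_latency_ms: float,
                            latency_increased_by_pct: float, latency_reached_ms: float,
                            suspicious_entities: list) -> bool:
    relative = (history_latency_ms > 0 and
                (detection_latency_ms / history_latency_ms) * 100 > latency_increased_by_pct)
    absolute = detection_latency_ms > latency_reached_ms
    # A latency condition alone is not enough; at least one suspicious entity is required.
    return (relative or absolute) and bool(suspicious_entities)


# Example: latency jumped from 50 ms to 400 ms and one IP address is suspicious.
print(latency_attack_detected(400, 50, 500, 10000, ["203.0.113.9"]))  # True
```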
Latency-based protection is less prone to false positives than TPS-based protection because in a DoS attack the server is reaching capacity and response time slows for all users. Increased latency can be used as a trigger for detecting an L7 attack. After detecting a significant latency increase, it is important to determine whether further action is needed. By examining the increase in requests per second and comparing those numbers with past activity, you can distinguish suspicious latency increases from normal ones.
About DoS prevention policy
When setting up either transaction-based or latency-based DoS protection, you can specify a prevention policy that determines how the system recognizes and mitigates DoS attacks. The prevention policy can use the following methods:
- JavaScript challenges (also called Client-Side Integrity Defense)
- CAPTCHA challenges
- Request blocking (including Rate Limiting or Block All)
Based on the same suspicious criteria, the system can also issue a CAPTCHA (character recognition) challenge to verify that the client is human. Depending on how strictly you want to enforce DoS protection, you can limit the number of requests that are allowed through to the server or block requests that are deemed suspicious.
You can also use request blocking in the prevention policy to specify conditions for when the system blocks requests. Note that the system only blocks requests during a DoS attack when the TPS-based or latency-based anomaly’s Operation Mode is set to Blocking. You can use request blocking to rate limit or block all requests from suspicious IP addresses, suspicious countries, or URLs suspected of being under attack. Site-wide rate limiting also blocks requests to web sites suspected of being under attack. If you block all requests, the system blocks suspicious IP addresses and geolocations except those on the whitelist. If using rate limiting, the system blocks some requests, depending on the threshold detection criteria set for the anomaly.
The mitigation methods that you select are used in the order in which they appear on the screen. The system enforces each subsequent method only if the previous method was unable to stem the attack.
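Conceptually, the ordering behaves like the following sketch. It is not ASM source code; the method labels and helper callables are hypothetical.

```python
# A minimal sketch (not ASM source code) of ordered mitigation escalation.
# The method names reflect the prevention policy options listed above; the
# attack_ongoing and apply callables are hypothetical stand-ins.

MITIGATION_ORDER = ["client_side_integrity", "captcha_challenge", "request_blocking"]

def mitigate(attack_ongoing, apply):
    """Apply each configured method in order, escalating only while the attack persists."""
    for method in MITIGATION_ORDER:
        if not attack_ongoing():
            break          # the previous method stemmed the attack
        apply(method)      # otherwise escalate to the next method


# Example: the attack subsides after the first method is applied.
state = {"rps": 5000}

def still_attacking():
    return state["rps"] > 1000

def apply_method(method):
    print("applied", method)
    state["rps"] //= 10    # pretend the method sharply reduces the attack rate

mitigate(still_attacking, apply_method)   # prints: applied client_side_integrity
```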
About geolocation mitigation
You can mitigate DoS attacks based on geolocation by detecting countries that send suspicious traffic. This is part of the prevention policy in the DoS profile for latency-based and TPS-based anomalies, and can be used to respond to unusual activity as follows:
- Geolocation-based Client Side integrity: If traffic from countries matches the thresholds configured in the DoS profile, the system considers those countries suspicious, and sends a JavaScript challenge to each suspicious country.
- Geolocation-based CAPTCHA challenge: If traffic from countries matches the thresholds configured in the DoS profile, the system considers those countries suspicious, and issues a CAPTCHA challenge to each suspicious country.
- Geolocation-based request dropping: The system drops all, or some, requests from suspicious countries.
In addition, you can add countries to a geolocation whitelist (traffic from these countries is never blocked) and a blacklist (traffic from these countries is always blocked when a DoS attack is detected).
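The following sketch shows one way the whitelist, blacklist, and suspicious-country check could combine during an attack. It is not ASM source code; the country codes, action labels, and function name are illustrative only.

```python
# A minimal sketch (not ASM source code) of how the geolocation whitelist and
# blacklist could combine with the suspicious-country check during an attack.
# Country codes, action labels, and the function name are illustrative only.

def geolocation_action(country: str, is_suspicious: bool, under_attack: bool,
                       whitelist: set, blacklist: set) -> str:
    if country in whitelist:
        return "allow"                    # whitelisted traffic is never blocked
    if under_attack and country in blacklist:
        return "block"                    # blacklisted traffic is always blocked during an attack
    if under_attack and is_suspicious:
        return "challenge_or_rate_limit"  # handled by the configured prevention policy
    return "allow"


print(geolocation_action("ZZ", True, True, whitelist={"US"}, blacklist={"XX"}))
# challenge_or_rate_limit
```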
About heavy URL protection
Heavy URLs are URLs that may consume considerable server resources per request. Heavy URLs respond with low latency most of the time, but can easily reach high latency under specific conditions. Heavy URLs are not necessarily heavy all the time, but tend to become heavy especially during attacks. Therefore, low-rate requests to those URLs can cause significant DoS attacks and are hard to distinguish from legitimate traffic.
Typically, heavy URLs involve complex database queries; for example, retrieving historical stock quotes. In most cases, users request recent quotes with weekly resolution, and those queries quickly yield responses. However, an attack might involve requesting five years of quotes with day-by-day resolution, which requires retrieval of large amounts of data, and consumes considerably more resources.
Application Security Manager™ (ASM) allows you to configure protection from heavy URLs in a DoS profile. You can specify a latency threshold for automatically detecting heavy URLs. If some of the web site's URLs could potentially become heavy URLs, you can add them so the system will keep an eye on them, and you can add URLs that should be ignored and not considered heavy.
ASM measures the tail latency of each URL and of the whole site for 24 hours to get a good sample of request behavior. A URL is considered heavy if its average tail latency is more than twice that of the site latency for the 24-hour period.
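The heuristic amounts to a simple comparison, sketched below with made-up measurements. It is not ASM source code, and the measurement format is an assumption.

```python
# A minimal sketch (not ASM source code) of the heavy-URL heuristic described
# above: a URL is heavy if its 24-hour average tail latency is more than twice
# the site-wide latency. The measurement format is an assumption.

def find_heavy_urls(url_tail_latency_ms: dict, site_latency_ms: float) -> list:
    return [url for url, latency in url_tail_latency_ms.items()
            if latency > 2 * site_latency_ms]


measurements = {"/quotes/history": 950.0, "/index.html": 40.0, "/search": 160.0}
print(find_heavy_urls(measurements, site_latency_ms=70.0))
# ['/quotes/history', '/search']
```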
About proactive bot defense
Application Security Manager™ (ASM) can proactively defend your applications against automated attacks by web robots, called bots for short. This defense method, called proactive bot defense, can prevent layer 7 DoS attacks, web scraping, and brute force attacks from starting. By preventing bots from accessing the web site, these attacks are prevented as well.
Working together with anomaly detection and DoS protection, proactive bot defense helps identify and mitigate attacks before they cause damage to the site. Because this feature generally inspects most traffic, it affects system performance, but requires fewer resources than traditional web scraping and brute force protections. You can use proactive bot defense in addition to the web scraping and brute force protections that are available in ASM security policies. Proactive bot defense is enforced through a DoS profile and does not require a security policy.
When clients access a protected web site for the first time, the system sends a JavaScript challenge to the browser. Therefore, when using this feature, it is important that clients use browsers that allow JavaScript.
If the client successfully evaluates the challenge and resends the request with a valid cookie, the system allows the client to reach the server. Requests that do not answer the challenge remain unanswered and are not sent to the server. Requests sent to non-HTML URLs without the cookie are dropped and considered to be bots.
You can configure lists of URLs to consider safe so that the system does not need to validate them. This speeds up access time to the web site. If your application accesses many cross-domain resources and you have a list of those domains, you may want to select an option that validates cross-domain requests to those domains.
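The request flow can be summarized roughly as follows. This is not ASM source code; the request fields, safe-URL set, and helper function are hypothetical names used only for illustration.

```python
# A minimal sketch (not ASM source code) of the proactive bot defense flow
# described above. The Request fields, safe_urls set, and is_html_url helper
# are hypothetical names used only for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    has_valid_challenge_cookie: bool

def proactive_bot_defense(req: Request, safe_urls: set, is_html_url) -> str:
    if req.path in safe_urls:
        return "forward"            # configured as safe; no validation needed
    if req.has_valid_challenge_cookie:
        return "forward"            # the browser already solved the JavaScript challenge
    if is_html_url(req.path):
        return "send_js_challenge"  # first visit to an HTML page: challenge the browser
    return "drop"                   # non-HTML URL without a cookie: treated as a bot


is_html = lambda p: p.endswith((".html", "/"))
print(proactive_bot_defense(Request("/img/logo.png", False), set(), is_html))  # drop
print(proactive_bot_defense(Request("/index.html", False), set(), is_html))    # send_js_challenge
```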
About cross-domain requests
Proactive bot defense in a DoS profile allows you to specify which cross-domain requests are legal. Cross-domain requests are HTTP requests for resources from a different domain than the domain of the resource making the request.
If your application accesses many cross-domain resources and you have a list of those domains, you can validate cross-domain requests to those domains.
For example, your web site uses two domains, site1.com (the main site) and site2.com (where resources are stored). You can configure this in the DoS profile by enabling proactive bot defense, choosing one of the Allowed configured domains options for the Cross-Domain Requests setting, and specifying both of the web sites in the list of related site domains. When the browser makes a request to site1.com, it gets cookies for both site1.com and site2.com independently and simultaneously, and cross-domain requests from site1.com to site2.com are allowed.
If only site1.com is configured as a related site domain, when the browser makes a request to site1.com, it gets a cookie for site1.com only. If the browser makes a cross-domain request to get an image from site2.com, it gets a cookie and is allowed only if it already has a valid site1.com cookie.
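The two configurations can be contrasted in a small sketch. It is not ASM source code; the cookie sets and function names are illustrative assumptions.

```python
# A minimal sketch (not ASM source code) contrasting the two cases described
# above. The cookie sets and function names are illustrative assumptions.

def cookies_after_main_site_visit(main_domain: str, related_domains: set) -> set:
    """Visiting the main site yields challenge cookies for every configured related domain."""
    return {main_domain} | related_domains

def allow_cross_domain_request(target_domain: str, cookies: set, main_domain: str) -> bool:
    """Allow the request if the browser holds a cookie for the target domain, or can
    obtain one now because it already holds a valid main-site cookie."""
    return target_domain in cookies or main_domain in cookies


# Both domains configured: the site2.com cookie is issued up front.
both = cookies_after_main_site_visit("site1.com", {"site2.com"})
print(allow_cross_domain_request("site2.com", both, "site1.com"))  # True

# Only site1.com configured and no valid site1.com cookie: the request is not allowed.
print(allow_cross_domain_request("site2.com", set(), "site1.com"))  # False
```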
About site-wide DoS mitigation
To mitigate highly distributed DoS attacks, such as those instigated by large-scale botnets attacking multiple URLs, you can include site-wide mitigation in a DoS profile. You can use site-wide mitigation as part of the prevention policy for either TPS-based or latency-based DoS protection. In this case, the whole site can be considered suspicious, as opposed to a particular URL or IP address. Site-wide mitigation goes into effect when the system determines that the whole site is experiencing high-volume traffic but cannot pinpoint and handle the problem.
The system implements the site-wide mitigation method only as a last resort because it may cause the system to drop legitimate requests. However, it maintains at least partial availability of the web site, even when the site is under attack. When the system applies site-wide mitigation, it is because all other active detection methods were unable to stop the attack.
The whole site is considered suspicious when the configured thresholds are crossed; in parallel, specific IP addresses and URLs may also be found to be suspicious. The mitigation continues until the maximum duration elapses or until the whole site stops being suspicious, that is, until there are no suspicious URLs, no suspicious IP addresses, and the whole site is no longer suspicious.
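The stop condition can be written out explicitly as a rough sketch; the parameter names are assumptions, not ASM settings.

```python
# A minimal sketch (not ASM source code) of when site-wide mitigation ends,
# per the conditions described above. Parameter names are assumptions.

def stop_site_wide_mitigation(elapsed_s: float, max_duration_s: float,
                              suspicious_urls: set, suspicious_ips: set,
                              site_suspicious: bool) -> bool:
    timed_out = elapsed_s >= max_duration_s
    all_clear = not suspicious_urls and not suspicious_ips and not site_suspicious
    return timed_out or all_clear


print(stop_site_wide_mitigation(120, 600, set(), set(), False))  # True (all clear)
```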
About DoS protection and HTTP caching
HTTP caching enables the BIG-IP® system to store frequently requested web objects (or static content) in memory to save bandwidth and reduce traffic load on web servers. The Web Acceleration profile has the settings to configure caching.
If you are using HTTP caching along with DoS protection, you need to understand how DoS protection for cached content works. In this case, URLs serving cached content are considered a DoS attack if they exceed the relative TPS increased by percentage (and not the explicit TPS reached number). Requests to static or cacheable URLs are always mitigated by rate limiting. This is true even during periods of mitigation using client-side integrity or CAPTCHA, and when those mitigations are not only URL-based.
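In other words, a cached URL is judged only by the relative TPS check and, when attacked, is always rate limited. A rough sketch of that decision follows; it is not ASM source code, and the names are hypothetical.

```python
# A minimal sketch (not ASM source code) of the behaviour described above for
# cached (static) URLs: detection uses only the relative "TPS increased by"
# check, and mitigation is always rate limiting. Names are hypothetical.

def cached_url_decision(detection_tps: float, history_tps: float,
                        tps_increased_by_pct: float) -> str:
    ratio_pct = (detection_tps / history_tps) * 100 if history_tps else float("inf")
    return "rate_limit" if ratio_pct > tps_increased_by_pct else "allow"


print(cached_url_decision(1200, 100, 500))  # rate_limit
print(cached_url_decision(120, 100, 500))   # allow
```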
Overview: Preventing DoS attacks on applications
You can configure the Application Security Manager™ to protect against DoS attacks on web applications. Depending on your configuration, the system detects DoS attacks based on transactions per second (TPS) on the client side, server latency, heavy URLs, geolocation, and failed CAPTCHA response.
You configure DoS protection for Layer 7 by creating a DoS profile with Application Security enabled. You then associate the DoS profile with one or more virtual servers representing applications that you want to protect. DoS protection is not part of a security policy.
The main factors in establishing the prevention policy are:
- Attackers: The clients that initiate the actual attacks. They are represented by their IP addresses and the geolocations they come from.
- Servers: The web application servers that are under attack. You can view them site-wide as the pairing of the virtual server and the DoS profile, by the URL, or as a pool member.
- BIG-IP system: The middle tier that detects attacks and associated suspicious entities, then mitigates the attacks, or blocks or drops requests depending on the options you configure in the DoS profile.
Task Summary
Configuring DoS protection for applications
Configuring TPS-based DoS protection
Configuring latency-based DoS protection
Configuring heavy URL protection
By reviewing the URL Latencies report and sorting the URLs listed by latency, you can make sure that the URLs that you expect to be heavy are listed in the DoS profile. Also, if the system detects too many (or too few) heavy URLs, you can increase (or decrease) the latency threshold.
Configuring CAPTCHA for DoS protection
Recording traffic during DoS attacks
Configuring proactive bot defense
The system sends a JavaScript challenge to traffic accessing the site for the first time. Legitimate traffic answers the challenge correctly, and resends the request with a valid cookie; then it is allowed to access the server. The system drops requests sent by browsers that do not answer the system’s initial JavaScript challenge (considering those requests to be bots).
If proactive bot detection is always running, ASM™ filters out bots before they manage to build up an attack on the system and cause damage. If using proactive bot defense only during attacks, once ASM detects a DoS attack, the system uses proactive bot defense for the duration of the attack. Proactive bot defense is used together with the active mitigation method. Any request that is not blocked by the active mitigation method still has to pass the proactive bot defense mechanism to be able to reach the server.
Associating a DoS profile with a virtual server
Implementation Result
When you have completed the steps in this implementation, you have configured the Application Security Manager™ (ASM) to protect against L7 DoS attacks. If using proactive bot defense, ASM™ protects against DDoS, web scraping, and brute force attacks (on the virtual servers that use this DoS profile) before the attacks can harm the system. Depending on the configuration, the system may also detect DoS attacks based on transactions per second (TPS) on the client side, server latency, or both.
In TPS-based detection mode, if the ratio of the transaction rate during the detection interval to the transaction rate during the history interval is greater than the TPS increased by percentage, the system considers the URL to be under attack, the IP address or country to be suspicious, or possibly the whole site to be suspicious.
In latency-based detection mode, if there is a latency increase and at least one suspicious IP address, country, URL, or heavy URL, the system considers the URL to be under attack, the IP address or country to be suspicious, or possibly the whole site to be suspicious.
If you enabled heavy URL protection, the system tracks URLs that consume higher than average resources and mitigates traffic that is going to those URLs.
If you chose the blocking operation mode, the system applies the necessary mitigation steps to suspicious IP addresses, URLs, or geolocations, or applies them site-wide. If using the transparent operation mode, the system reports DoS attacks but does not block them.
If using iRules®, when the system detects a DoS attack based on the configured conditions, it triggers an iRule and responds to the attack as specified in the iRule code.
After traffic is flowing to the system, you can check whether DoS attacks are being prevented, and investigate them by viewing DoS event logs and reports.