Applies To: BIG-IP versions 1.x - 4.x
- 3.3.1 PTF-06, 3.3.1 PTF-05, 3.3.1 PTF-04, 3.3.1 PTF-03, 3.3.1 PTF-02, 3.3.1 PTF-01, 3.3.1, 3.3.0

2
Configuring Local Server Acceleration
Introducing local server acceleration
This chapter explains how to set up a local server acceleration configuration, in which a BIG-IP Cache Controller redundant system uses content-aware traffic direction to enhance the efficiency of an array of cache servers that cache content for a local web server. This type of configuration is useful for any enterprise that wants to improve the speed with which it responds to content requests from users on the Internet.
The configuration detailed in this chapter uses the following BIG-IP Cache Controller features:
- Cacheable content determination
Cacheable content determination enables you to determine the type of content you cache on the basis of any combination of elements in the header of an HTTP request.
- Content affinity
Content affinity ensures that a given subset of content remains associated with a given cache to the maximum extent possible, even when cache servers become unavailable, or are added or removed. This feature also maximizes efficient use of cache memory.
- Hot content load balancing
Hot content load balancing identifies hot, or frequently requested, content on the basis of the number of requests in a given time period for a given hot content subset. A hot content subset is different from, and typically smaller than, the content subsets used for content striping. Requests for hot content are redirected to a cache server in the hot pool, a designated group of cache servers. This feature maximizes the use of cache server processing power without significantly affecting the memory efficiency gained by content affinity.
- Intelligent cache population
Intelligent cache population allows caches to retrieve content from other caches in addition to the origin web server. This feature is useful only when working with non-transparent cache servers, which can receive requests that are destined for the cache servers themselves, as opposed to transparent cache servers, which can intercept requests destined for a web server. Intelligent cache population minimizes the load on the origin web server and speeds cache population.
Maximizing memory or processing power
From the time you implement a cache control rule until such time as a hot content subset becomes hot, the content is divided across your cache servers, so that no two cache servers contain the same content. In this way, efficient use of the cache servers' memory is maximized.
After a hot content subset becomes hot, requests for any content contained in that subset are load balanced, so that, ultimately, each cache server contains a copy of the hot content. The BIG-IP Cache Controller distributes requests for the hot content among the cache servers. In this way, efficient use of the cache servers' processing power is maximized.
Thus, for a particular content item, the BIG-IP Cache Controller maximizes either cache server memory (when the content is cool) or cache server processing power (when the content is hot), but not both at the same time. The fact that content is requested with greatly varying frequency enables the cache statement rule to evaluate and select the appropriate attribute to maximize for a given content subset.
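The content striping described above can be sketched in Python. This is a simplified illustration only, not the BIG-IP Cache Controller's actual hashing algorithm; the function name and the modulo-hash scheme are assumptions for the example. The point is that a deterministic hash keeps each cool content item on exactly one cache server, so no memory is wasted on duplicates:

```python
import hashlib

def pick_cache_server(uri, servers):
    """Map a URI deterministically to one cache server, so each piece
    of cool content is stored on exactly one cache (memory-efficient
    content striping). Hypothetical scheme for illustration only."""
    digest = hashlib.md5(uri.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Cache servers from the example configuration in Figure 2.1:
servers = ["10.10.20.4", "10.10.20.5", "10.10.20.6"]

# The same URI always maps to the same server, which is what gives
# content affinity its memory efficiency while the content is cool.
server = pick_cache_server("/index.html", servers)
```

Once a subset becomes hot, this single-server mapping is bypassed and requests are instead load balanced across the hot pool, trading memory efficiency for processing power.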
Using the configuration diagram
Figure 2.1, following, illustrates a local server acceleration configuration, and provides an example configuration for this entire chapter. Remember that this is just a sample: when creating your own configuration, you must use IP addresses, host names, and so on, that are applicable to your own network.
Figure 2.1 Local server acceleration
Configuration tasks
If you want to configure local server acceleration, you need to complete the following tasks in order:
- Create pools
- Create a cache control rule
- Create a virtual server
- Configure for intelligent cache population
Each of the following sections explains one of these tasks, and shows how you would perform the tasks in order to implement the configuration shown in Figure 2.1. Note that in this example, as in all examples in this guide, we use only non-routable IP addresses. In a real topology, the appropriate IP addresses would have to be routable on the Internet.
Creating pools
To use the local server acceleration configuration, you need to create three sets of load balancing pools. You create pools for your origin server (the web server on which all your content resides), for your cache servers, and for your hot, or frequently requested, content servers, which may or may not be cache servers. A pool is a group of devices to which you want the BIG-IP Cache Controller redundant system to direct traffic. For more information about pools, refer to .
You will create these pools:
- Cache server pool
The BIG-IP Cache Controller directs all cacheable requests bound for your web server to this pool, unless a request is for hot content.
- Origin server pool
This pool includes your origin web server. Requests are directed to this pool when:
  - The request is for non-cacheable content; that is, content that is not identified in the cacheable content expression part of a cache rule statement. For more information, see Cacheable content expression, on page 2-9.
  - The request is from a cache server that does not yet contain the requested content, and no other cache server yet contains the requested content.
  - No cache server in the cache pool is available.
- Hot cache servers pool
If a request is for frequently requested content, the BIG-IP Cache Controller redundant system directs the request to this pool.

Note: While the configuration shown in Figure 2.1 implements a hot cache servers pool, this pool is not required if you want to use the content determination and content affinity features. However, you must implement this pool if you want to use the hot content load balancing or intelligent cache population features.
Creating a pool for the cache servers
First, create a pool for the cache servers. Use either the Configuration utility or the command line to create this pool.
To create a pool using the Configuration utility
- In the navigation pane, click Pools.
The Pools screen opens.
- In the toolbar, click the Add Pool button.
The Add Pool screen opens.
- In the Add Pool screen, configure the attributes required for the cache servers you want to add to the pool.
For additional information about configuring a pool, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1:
- Create a pool named cache_servers.
- Add each cache server from the example, 10.10.20.4, 10.10.20.5, and 10.10.20.6, to the pool. For each cache server you add to the pool, specify port 80, which means this cache server accepts traffic for the HTTP service only.
To create a pool from the command line
To define a pool from the command line, use the following syntax:
bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }
For example, to implement the configuration shown in Figure 2.1, you use the command:
bigpipe pool cache_servers { lb_method round_robin member 10.10.20.4:80 member 10.10.20.5:80 member 10.10.20.6:80 }
Creating a pool for the origin server
Next, create a pool for your origin server. Use either the Configuration utility or the bigpipe pool command, as you did to create the pool for the cache servers.
To create a pool using the Configuration utility
- In the navigation pane, click Pools.
The Pools screen opens.
- In the toolbar, click the Add Pool button.
The Add Pool screen opens.
- In the Add Pool screen, configure the attributes required for the origin server you want to add to the pool.
For additional information about configuring a pool, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1:
- Create a pool named origin_server.
- Add the origin server from the example (10.10.20.7) to the pool and specify port 80, which means the server accepts traffic for the HTTP service only.
To create a pool from the command line
To define a pool from the command line, use the following syntax:
bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }
For example, to implement the configuration shown in Figure 2.1, you would use the command:
bigpipe pool origin_server { lb_method round_robin member 10.10.20.7:80 }
Creating a pool for hot content
The last step in creating pools is to create a pool for hot content. Use either the Configuration utility or the command line to create this pool, as in the previous sections.
To create a pool using the Configuration utility
- In the navigation pane, click Pools.
The Pools screen opens.
- In the toolbar, click the Add Pool button.
The Add Pool screen opens.
- In the Add Pool screen, configure the attributes required for the cache servers you want to add to the pool.
For additional information about configuring a pool, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1:
- Create a pool named hot_cache_servers.
- Add each cache server from the example, 10.10.20.4, 10.10.20.5, and 10.10.20.6, to the pool. For each cache server you add to the pool, specify port 80, which means this cache server accepts traffic for the HTTP service only.
To create a pool from the command line
To define a pool from the command line, use the following syntax:
bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }
To implement the configuration shown in Figure 2.1, you would use the command:
bigpipe pool hot_cache_servers { lb_method round_robin member 10.10.20.4:80 member 10.10.20.5:80 member 10.10.20.6:80 }
Note: If the hot content pool and the cache servers pool reference the same nodes, you can use the intelligent cache population feature.
Creating a cache control rule
A cache control rule is a specific type of rule. A rule establishes criteria by which a BIG-IP Cache Controller directs traffic. A cache control rule determines where and how the BIG-IP Cache Controller redundant system directs content requests in order to maximize the efficiency of your cache server array and of your origin web server.
A cache control rule includes a cache statement, which is composed of a cacheable content expression and two required attributes. An attribute is a variable that the cache statement uses to direct requests. A cache statement can also include several optional attributes.
A cache statement may be either the only statement in a rule, or it may be nested in a rule within an if statement.
Cacheable content expression
The cacheable content expression determines whether the BIG-IP Cache Controller redundant system directs a given request to the cache server or to the origin server, based on evaluating variables in the HTTP header of the request.
Any content that does not meet the criteria in the cacheable content expression is deemed non-cacheable.
For example, in the configuration illustrated in this chapter, the cacheable content expression includes content having the file extension .html or .gif. The BIG-IP Cache Controller redundant system considers any request for content having a file extension other than .html or .gif to be non-cacheable, and sends such requests directly to the origin server.
For your configuration, you may want to cache any content that is not dynamically generated.
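The cacheable content expression behaves like a predicate on the request URI. As a rough Python sketch of the .html/.gif rule used in this chapter (the function names here are hypothetical; the real evaluation happens inside the BIG-IP rule engine):

```python
def is_cacheable(uri):
    """Mirror of the example cacheable content expression:
    http_uri ends_with "html" or http_uri ends_with "gif"."""
    return uri.endswith("html") or uri.endswith("gif")

def choose_pool(uri):
    # Non-cacheable requests bypass the caches entirely and are sent
    # directly to the origin server pool.
    return "cache_servers" if is_cacheable(uri) else "origin_server"
```

For example, `choose_pool("/products/index.html")` selects the cache pool, while a request for dynamically generated content such as `/cgi-bin/search` falls through to the origin server.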
Required attributes
The cache control rule must include the following attributes:
- origin_pool
Specifies a pool of servers that contain original copies of all content. Requests are load balanced to this pool when any of the following is true:
- The requested content does not meet the criteria in the cacheable content expression.
- No cache server is available.
- The BIG-IP Cache Controller redundant system is redirecting a request from a cache server that did not have the requested content.
- cache_pool
Specifies a pool of cache servers to which requests are directed in a manner that optimizes cache performance.
Optional attributes
The attributes in this section apply only if you are using the hot content load balancing feature.
Note: In order to use the intelligent cache population feature, the cache_pool and the hot_pool must either be the same pool, or different pools referencing the same nodes.
- hot_pool
Specifies a pool of cache servers to which requests are load balanced when the requested content is hot.
The hot_pool attribute is required if any of the following attributes is specified:
- hot_threshold
Specifies the minimum number of requests for content in a given hot content subset that causes the content subset to change from cool to hot at the end of the hit period.
If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hot threshold of 100 requests.
- cool_threshold
Specifies the maximum number of requests for content in a given hot content subset that causes the content subset to change from hot to cool at the end of the hit period.
If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default cool threshold of 10 requests.
- hit_period
Specifies the period, in seconds, over which to count requests for particular content before determining whether to change the content demand status (hot or cool) of the content.
If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hit period of 60 seconds.
- content_hash_size
Specifies the number of units, or hot content subsets, into which the content is divided when determining whether content demand status is hot or cool. The requests for all content in a given subset are summed, and a content demand status (hot or cool) is assigned to each subset. The content_hash_size should be within the same order of magnitude as the actual number of requests possible. For example, if the entire site is composed of 500,000 pieces of content, a content_hash_size of 100,000 would be typical.
If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hash size of 1028 subsets.
Content demand status
Content demand status is a measure of the frequency with which a given hot content subset is requested. Content demand status, which is either hot or cool, is applicable only when you use the hot content load balancing feature. For a given hot content subset, content demand status is cool from the time the cache control rule is implemented until the number of requests for the subset exceeds the hot_threshold during a hit_period. At this point, content demand status for the subset becomes hot, and requests for any item in the subset are load balanced to the hot_pool. Content demand status remains hot until the number of requests for the subset falls below the cool_threshold during a hit_period, at which point the content demand status becomes cool. The BIG-IP Cache Controller then directs requests for any item in the subset to the appropriate server in the cache_pool until the subset becomes hot again.
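The hot/cool lifecycle can be modeled in Python. This sketch is illustrative only: the class and method names are invented, and the real BIG-IP logic may differ in detail, but the defaults follow the attribute descriptions above (hot_threshold 100, cool_threshold 10, content_hash_size 1028 subsets, re-evaluated at each hit_period boundary):

```python
from collections import defaultdict

class DemandStatusTracker:
    """Hypothetical model of hot/cool content demand status."""

    def __init__(self, hot_threshold=100, cool_threshold=10,
                 content_hash_size=1028):
        self.hot_threshold = hot_threshold
        self.cool_threshold = cool_threshold
        self.content_hash_size = content_hash_size
        self.hits = defaultdict(int)  # requests seen this hit period
        self.hot = set()              # subsets currently hot

    def subset_for(self, uri):
        # Every content item hashes into one of content_hash_size subsets;
        # demand status is tracked per subset, not per item.
        return hash(uri) % self.content_hash_size

    def record_request(self, uri):
        subset = self.subset_for(uri)
        self.hits[subset] += 1
        # Hot subsets are load balanced to the hot pool; cool subsets
        # go to their designated server in the cache pool.
        return "hot_pool" if subset in self.hot else "cache_pool"

    def end_hit_period(self):
        # At each hit_period boundary, re-evaluate demand status.
        for subset, count in self.hits.items():
            if count >= self.hot_threshold:
                self.hot.add(subset)
        for subset in list(self.hot):
            if self.hits.get(subset, 0) <= self.cool_threshold:
                self.hot.discard(subset)
        self.hits.clear()
```

Under these defaults, a subset that receives 150 requests in one hit period turns hot, and turns cool again after a period in which it receives 10 or fewer requests.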
To create a cache statement rule using the Configuration utility
- In the navigation pane, click Rules.
The Rules screen opens.
- In the toolbar, click the Add Rule button.
The Add Rule screen opens.
- In the Add Rule screen, type the cache statement.
For example, given the configuration shown in Figure 2.1, to cache all content having either the file extension .html or .gif, you would type:

rule cache_rule { cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) { origin_pool origin_server cache_pool cache_servers hot_pool hot_cache_servers } }

- Click the Add button.
To create a cache statement rule from the command line
To create a cache statement rule from the command line, use the following syntax:
bigpipe 'rule <rule_name> { cache ( <condition> ) { origin_pool <origin_pool_name> cache_pool <cache_pool_name> hot_pool <hot_pool_name> hot_threshold <hot_threshold_value> cool_threshold <cool_threshold_value> hit_period <hit_period_value> content_hash_size <content_hash_size_value> } }'
For example, given the configuration shown in Figure 2.1, to cache all content having the file extension .html or .gif, you would use the bigpipe command:
bigpipe 'rule cache_rule { cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) { origin_pool origin_server cache_pool cache_servers hot_pool hot_cache_servers } }'
Creating a virtual server
Now that you have created pools and a cache control rule to determine how the BIG-IP Cache Controller redundant system will distribute traffic in the configuration, you need to create a virtual server to use this rule and these pools. For this virtual server, use the host name or IP address that Internet clients use to request content from your site.
To create a virtual server using the Configuration utility
- In the navigation pane, click Virtual Servers.
- On the toolbar, click Add Virtual Server.
The Add Virtual Server screen opens.
- In the Add Virtual Server screen, configure the attributes you want to use with the virtual server.
For additional information about configuring a virtual server, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1:
- Add a virtual server with address 10.10.10.4 and port 80 (this means the virtual server accepts traffic for the HTTP service only).
- Add the rule cache_rule.
To create a virtual server from the command line
Use the bigpipe vip command to configure the virtual server to use the cache control rule:
bigpipe vip <virtual server>:<service> <interface> use rule <rule name>
In the command, replace the parameters with the appropriate information:
- <virtual server> is an IP address appropriate to your network.
- <service> is a service you want to configure, such as HTTP, FTP, or Telnet.
- <interface> is the interface on the BIG-IP on which you want to create this virtual server.
- <rule name> is the name of the rule you want this virtual server to use.
To implement the configuration shown in Figure 2.1, you use the command:
bigpipe vip 10.10.10.4:80 use rule cache_rule
Configuring for intelligent cache population
Your cache control rule routes a request to the appropriate cache server. However, the cache server will not have the requested content if the content has expired, or if the cache server is receiving a request for this content for the first time. If the cache does not have the requested content, it initiates a miss request (a request for content that the cache does not have). The miss request goes either to the origin server specified in the cache's configuration, or to another cache server. To allow intelligent cache population, configure the cache with its origin server set to be the virtual server on the BIG-IP Cache Controller, so that the cache sends miss requests to the internal shared interface of the BIG-IP Cache Controller. The BIG-IP Cache Controller translates the destination of the request, and sends the request either to the origin server or to another cache server that already has the requested content.
To ensure that the origin server or cache server responds to the BIG-IP Cache Controller rather than to the original cache server that generated the miss request, the BIG-IP Cache Controller also translates the source of the miss request to the translated address and port of the associated Secure Network Address Translation (SNAT) connection.
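The two address translations can be illustrated with a small Python sketch. The packet representation and function name are hypothetical (the BIG-IP performs these rewrites internally on real packets); the sketch only shows which fields change and why:

```python
def translate_miss_request(packet, snat_address, new_destination):
    """Sketch of the BIG-IP's handling of a cache server's miss request:
    the destination is rewritten to the origin server (or to a peer
    cache that already holds the content), and the source is rewritten
    to the SNAT translation address so the response returns to the
    BIG-IP rather than directly to the requesting cache."""
    translated = dict(packet)  # leave the original packet untouched
    translated["dst"] = new_destination
    translated["src"] = snat_address
    return translated

# A miss request from cache 10.10.20.4, originally aimed at the
# BIG-IP virtual server (10.10.10.4), redirected to the origin
# server (10.10.20.7) with the SNAT address (10.10.10.5) as source:
miss = {"src": "10.10.20.4", "dst": "10.10.10.4", "uri": "/index.html"}
forwarded = translate_miss_request(miss, "10.10.10.5", "10.10.20.7")
```

Because the origin server sees 10.10.10.5 as the source, its response flows back through the BIG-IP Cache Controller, which can then relay the content to the cache that missed.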
In order to enable this scenario, you must:
- Create a SNAT on the BIG-IP Cache Controller.
- Enable destination processing on the internal interface of the BIG-IP Cache Controller.
Configuring a SNAT
A Secure Network Address Translation (SNAT) translates the address of a packet from the cache server to the address you specify. For more information about SNATs, see Configuring SNAT address mappings, on page 5-27.
To configure a SNAT mapping using the Configuration utility
- In the navigation pane, click Secure NATs.
The Secure Network Address Translations screen opens.
- On the toolbar, click Add SNAT.
The Add SNAT screen opens.
- In the Add SNAT screen, configure the attributes required for the SNAT you want to add.
For additional information about configuring a SNAT, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1, use the translation address 10.10.10.5.
To configure a SNAT mapping on the command line
The bigpipe snat command defines one SNAT for one or more node addresses.
bigpipe snat map <node addr>... <node addr> to <SNAT addr>
For example, to implement the configuration shown in Figure 2.1, you use the command:
bigpipe snat map default to 10.10.10.5
Configuring interfaces
Typically, a BIG-IP Cache Controller has two interfaces:
- An external interface, typically set for destination processing. Destination processing means that the interface can rewrite the destination address of an incoming packet, and allows initiation of virtual server connections.
- An internal interface, typically set for source processing. Source processing means that the interface can rewrite the source of an incoming packet, and allows initiation of SNAT connections.
In this configuration, you must add destination processing to the internal interface. Adding destination processing to the internal interface enables the BIG-IP Cache Controller to direct a request from a cache server to either another cache server or to the origin web server.
To add destination processing to the internal interface using the Configuration utility
- In the navigation pane, click NICs.
The Network Interface Cards screen opens. You can view the current settings for each interface in the Network Interface Card table.
- In the Network Interface Card table, click the name of the interface you want to configure.
The Network Interface Card Properties screen opens.
- In the Network Interface Card Properties screen, configure the attributes required for the interface.
For additional information about configuring an interface, click the Help button.
Configuration notes
To create the configuration shown in Figure 2.1, make sure the Enable Destination Processing check box is checked for exp1.
To add destination processing to the internal interface from the command line
Use the bigpipe interface command with the dest keyword to turn destination processing on or off for an interface:
bigpipe interface <interface> dest [ enable | disable ]
To implement the configuration shown in Figure 2.1 from the command line, you use the command:
bigpipe interface exp1 dest enable