Manual Chapter : BIG-IP Reference guide v3.3: Object Properties

Applies To:


BIG-IP versions 1.x - 4.x

  • 3.3.1 PTF-06, 3.3.1 PTF-05, 3.3.1 PTF-04, 3.3.1 PTF-03, 3.3.1 PTF-02, 3.3.1 PTF-01, 3.3.1, 3.3.0


2

Object Properties



Object properties

This chapter lists the properties of the major objects you can configure on the BIG-IP Controller. The objects described in this chapter include:

  • extended application verification (EAV)
  • extended content verification (ECV)
  • filters
  • IP forwarding
  • interface
  • load balancing
  • network address translation (NAT)
  • node
  • pool
  • port
  • redundant system
  • rule
  • secure network address translation (SNAT)
  • timer settings
  • virtual server

Extended Application Verification (EAV)

Extended Application Verification (EAV) is a sophisticated type of service check typically used to confirm whether an application running on a node is responsive to client requests. To determine whether a node application is responsive, the BIG-IP Controller uses a custom program referred to as an external service checker. An external service checker program essentially provides the option to customize service check functionality for the BIG-IP Controller. It is external to the BIG-IP system itself, and is usually developed by the customer. However, the BIG-IP Controller ships with several external service check programs. These include service check programs for FTP, POP3, SMTP, NNTP, and SQL.

The attributes you can configure for an EAV are listed in Table 2.1.

Table 2.1 The attributes you can configure for EAV.

Custom EAV: You can set up custom EAV service checks on the BIG-IP Controller.
Bundled EAVs: The BIG-IP Controller software includes bundled EAV service check scripts for checking FTP, POP3, SMTP, NNTP, and SQL.

You can use an external service checker to verify Internet or intranet applications, such as a web application that retrieves data from a back-end database and displays the data in an HTML page.

An external service checker program works in conjunction with the bigd daemon, which verifies node status using node pings and service checks. If you configure an external service check on a specific node, the bigd daemon checks the node by executing the external service checker program. Once the external service checker executes, the bigd daemon looks for output written by the external service checker. If the bigd daemon finds output from the external service checker, it marks the node up. If it does not find output from the external service checker, it marks the node down. Note that bigd does not actually interpret output from the external service checker; it simply verifies that the external service checker created output.

Note: Custom external service checker programs are custom programs that are developed either by the customer, or by the customer in conjunction with F5 Networks.

Warning: Active checks that look for a receive string only accept 5000 bytes from the server before assuming that the receive string is not in the content.

Setting up custom EAV service checks

An Extended Application Verification service check is a service check that is performed on an application running on a host on the network connected to the BIG-IP Controller. You can create a custom application for this purpose. Complete the following four tasks to implement a custom EAV service check program on the BIG-IP Controller:

  • If you use a custom EAV service check program, verify that your external service checker program meets certain requirements, such as creating a pid file.
  • Install the external service checker program on the BIG-IP Controller.
  • Allow EAV service checks in the BIG-IP configuration.
  • Configure the specific nodes to use the EAV service check.

Verifying external service checker requirements

Extended Application Verification (EAV) is intended to provide maximum flexibility. The external service checker programs that you create can use any number of methods to determine whether or not a service or an application on a node is responsive. The external service checker must, however, meet the following minimum requirements:

  • The external service checker must use a pid file to hold its process ID, and the pid file must use the following naming scheme: /var/run/pinger.<ip>..<port>.pid.
  • As soon as the external service checker starts, if the pid file already exists, the external service checker should read the file and send a SIGKILL signal to the indicated process.
  • The external service checker must write its process ID to the pid file.
  • If the external service checker verifies that the service is available, it must write to standard output. If the external service checker determines that the service is not available, it must not write to standard output.
  • The external service checker must delete its pid file before it exits.

The BIG-IP Controller includes several sample external service checker scripts for HTTP, NNTP, SMTP, and POP3. These scripts can be found in the following location:

/usr/local/lib/pingers/sample_pinger

The sample service checker, shown in Figure 2.1, is included with the BIG-IP Controller.

Figure 2.1 The HTTP external service checker program

# these arguments supplied automatically for all external pingers:
# $1 = IP (nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (decimal, host byte order)
# $3 and higher = additional arguments
#
# In this sample script, $3 is the regular expression
#

pidfile="/var/run/pinger.$1..$2.pid"

if [ -f $pidfile ]
then
    kill -9 `cat $pidfile` > /dev/null 2>&1
fi

echo "$$" > $pidfile   # record this pinger's process ID in the pid file

echo "GET /" | /usr/local/lib/pingers/nc $1 $2 2> /dev/null | grep -E -i $3 > /dev/null

status=$?
if [ $status -eq 0 ]
then
    echo "up"
fi
rm -f $pidfile

Installing the external service checker on the BIG-IP Controller

To install an EAV service check script, place it in the /usr/local/lib/pingers directory. This is the default location for external service checker applications. You can install external service checker applications to other directory locations if desired.

Allowing EAV service checks

Once you install an external service checker on the BIG-IP Controller, you allow external service checking by adding the following entry to the /etc/bigd.conf file:

external [<node_ip>:]<port> [<path>]<pinger_name> ["<argument_string>"]

The <path> variable can be an absolute or a relative path to the external checker application. Absolute paths should begin with a slash ("/"). If no path is specified, the default path (/usr/local/lib/pingers) is substituted. The <pinger_name> argument is the name of the pinger script to use for service checking.
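The path defaulting described above can be sketched in a few lines of shell. The prepend-the-default behavior is an assumption based on the description, and resolve_pinger is a hypothetical helper, not part of the BIG-IP software:

```shell
#!/bin/sh
# Hypothetical sketch: how a relative pinger name resolves against the
# default directory, per the rules described above.
resolve_pinger() {
    case "$1" in
        /*) printf '%s\n' "$1" ;;                        # absolute path: used as-is
        *)  printf '%s\n' "/usr/local/lib/pingers/$1" ;; # default path substituted
    esac
}

resolve_pinger my_pinger               # -> /usr/local/lib/pingers/my_pinger
resolve_pinger /opt/checks/my_pinger   # -> /opt/checks/my_pinger
```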

The "<argument_string>" variable must consist of exactly one string in quotation marks. The string may include any number of arguments, delimited in the usual way by white space, for example:

external n1:8000 my_pinger "-a 600 -b"

In the above example, the BIG-IP Controller runs the script /usr/local/lib/pingers/my_pinger to check port 8000, with additional arguments.

In the following example, the BIG-IP Controller checks port 8000, running a separate copy of the external service checker named my_pinger for each node:

external n1:8000 my_pinger "-a -b"

external 8000 my_pinger "-b"

In this example, the first entry specifies how to ping port 8000 on node n1. The second entry specifies how to ping port 8000 on any other node.

Command line arguments for EAV service checks

The BIG-IP Controller performs the external service check at the service ping interval, which is set using the bigpipe tping_svc command.

The external service checker runs as root. The BIG-IP Controller starts an external service checker using the following shell command:

[<path>]<pinger_name> <node_ip> <port> [<additional_argument> ...]

For the case of the example shown above, the appropriate command would be:

/usr/local/lib/pingers/my_pinger n1 8000 -a 600 -b

The BIG-IP Controller inserts the node IP and port number before the additional arguments that are specified in the /etc/bigd.conf file.

Note that the standard input and output of an external service checker are connected to bigd. The bigd daemon does not write anything to the external service checker's standard input, but it does read the external service checker's standard output. If bigd is able to read any data from the external service checker program, the particular service is considered up.
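The up/down decision can be sketched as follows. The check_node function is a hypothetical stand-in for the logic bigd applies, not actual bigd code:

```shell
#!/bin/sh
# Hypothetical sketch of bigd's decision: any standard output from the
# external service checker means "up"; no output at all means "down".
check_node() {
    out=$("$@" 2>/dev/null)   # $@ = the pinger command line
    if [ -n "$out" ]; then
        echo "up"
    else
        echo "down"
    fi
}

check_node true          # pinger wrote nothing -> down
check_node echo "ready"  # pinger wrote output  -> up
```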

Using the EAV pingers bundled with the BIG-IP Controller

The BIG-IP Controller includes several sample external service checker scripts for HTTP, NNTP, SMTP, POP3, and SQL. These scripts can be found in this location:

/usr/local/lib/pingers/

The following sections describe how to set up each of these service checkers.

EAV service check for FTP

This section describes how to set up the BIG-IP Controller to perform EAV service checks on FTP services.

The FTP pinger requires three arguments: a full path to the file on any given server, a user name, and a password. Here are example bigd.conf entries:

external 10.0.0.57:21 /usr/local/lib/pingers/FTP_pinger "/pub/demo/file.txt anonymous user@company.com"

external 10.0.0.62:21 /usr/local/lib/pingers/FTP_pinger "/pub/spool/incoming.doc carol carols_password"

The FTP pinger attempts to download the specified file to the /var/tmp directory. A successful retrieval of any file with the name indicated is considered a successful ping.

To configure the FTP EAV check in the Configuration utility

  1. In the navigation pane, click Nodes.
    The Node Properties screen opens.
  2. In the Extended section, click the EAV box to enable EAV service checking. The service check frequency and service check timeout must be set in order to access this option.
  3. In the Type list, select the FTP service checker. The External Program Path is automatically filled in when you select a pinger from the list.
  4. In the External Program Arguments box, type in the arguments required for the FTP service checker: a full path to the file on any given server, a user name, and a password. For example:

    /pub/demo/file.txt anonymous user@company.com

    /pub/spool/incoming.doc carol carols_password

  5. Click the Apply button.

EAV service check for POP3

This section describes how to set up the BIG-IP Controller to perform EAV service checks on POP3 services.

The POP3_pinger for Post Office Protocol requires only two arguments, a user name and a password. This check is considered successful if it successfully connects to the server, logs in as the indicated user, and logs out again. Here are example bigd.conf entries:

external 10.0.0.57:109 /usr/local/lib/pingers/POP3_pinger "alice alices_password"

external 10.0.0.57:109 /usr/local/lib/pingers/POP3_pinger "bob bobs_password"

To configure the POP3 EAV check in the Configuration utility

  1. In the navigation pane, click Nodes.
    The Node Properties screen opens.
  2. In the Extended section, click the EAV box to enable EAV service checking. The service check frequency and service check timeout must be set in order to access this option.
  3. In the Type list, select the POP3 service checker. The External Program Path is automatically filled in when you select a service checker from the list.
  4. In the External Program Arguments box, type in a user name and password. For example:

    alice alices_password

    bob bobs_password

  5. Click the Apply button.

EAV service check for SMTP

This section describes how to set up the BIG-IP Controller to perform EAV service checks on SMTP services.

The SMTP_pinger for mail transport servers requires only one argument, a string identifying the server from which the EAV is originating. This is an extremely simple pinger that checks only that the server is up and responding to commands. It counts a success if the mail server it is connecting to responds to the standard SMTP HELO and QUIT commands. Here is an example bigd.conf entry:

external 10.0.0.57:25 /usr/local/lib/pingers/SMTP_pinger "bigip@internal.net"

To configure the SMTP EAV check in the Configuration utility

  1. In the navigation pane, click Nodes.
    The Node Properties screen opens.
  2. In the Extended section, click the EAV box to enable EAV service checking. The service check frequency and service check timeout must be set in order to access this option.
  3. In the Type list, select the SMTP service checker. The External Program Path is automatically filled in when you select a service checker from the list.
  4. In the External Program Arguments box, type in a string identifying the server from which the EAV is originating. For example:

    bigip@internal.net

  5. Click the Apply button.

EAV service check for NNTP

This section describes how to set up the BIG-IP Controller to perform EAV service checks on NNTP services.

The NNTP_pinger for Usenet News requires only one argument, a newsgroup name to check for presence. If the NNTP server being queried requires authentication, the user name and password can be provided as additional arguments. This pinger counts a success if it successfully retrieves a newsgroup identification line from the server. Here are example bigd.conf entries, the second showing the optional login parameters:

external 10.0.0.57:119 /usr/local/lib/pingers/NNTP_pinger "comp.lang.java"

external 10.0.0.62:119 /usr/local/lib/pingers/NNTP_pinger "local.chat username password"

To configure the NNTP EAV check in the Configuration utility

  1. In the navigation pane, click Nodes.
    The Node Properties screen opens.
  2. In the Extended section, click the EAV box to enable EAV service checking. The service check frequency and service check timeout must be set in order to access this option.
  3. In the Type list, select the NNTP service checker. The External Program Path is automatically filled in when you select a service checker from the list.
  4. In the External Program Arguments box, type in the news group name for which you want to check. If the NNTP server being queried requires authentication, you can provide the user name and password as additional arguments. For example:

    comp.lang.java

    local.chat username password

  5. Click the Apply button.

EAV service check for SQL-based services

This section describes how to set up the BIG-IP Controller to perform EAV service checks on SQL-based services such as Microsoft SQL Server versions 6.5 and 7.0, and also Sybase.

The service checking is accomplished by performing an SQL login to the service. If the login succeeds, the service is considered up, and if it fails, the service is considered down. An executable program, tdslogin, performs the actual login.

  1. Test the login manually:

    cd /usr/local/lib/pingers

    ./tdslogin 192.168.1.1 1433 mydata user1 mypass1

    Replace the IP address, port, database, user, and password in this example with your own information.

    You should receive the message:

    Login succeeded!

    If you receive a connection refused message, verify that the IP and port are correct. See Troubleshooting SQL-based service checks for more tips.

  2. Create an entry in the /etc/bigd.conf file with the following syntax:

    external 192.168.1.1:1433 "/usr/local/lib/pingers/SQL_pinger" "mydata user1 mypass1"

    In this entry, mydata is the name of the database, user1 is the login name, and mypass1 is the password.

  3. Add entries to the /etc/bigip.conf file for service checking:

    tping_svc 1433 5

    timeout_svc 1433 15

  4. Reload the /etc/bigip.conf and restart bigd:

    bigpipe -f /etc/bigip.conf

    bigd

  5. Verify that the service check is being performed correctly: If the service is up, change the password in /etc/bigd.conf to an invalid password and restart bigd. The service should go down after the timeout period elapses.

    Correct the password and restart bigd and the service should go up again.

Troubleshooting SQL-based service checks

If you are having trouble, verify that you can log in using another tool. For example, Microsoft SQL Server version 6.5 includes a client program, ISQL/w, that performs simple logins to SQL servers. Use this program to test whether you can log in before attempting logins from the BIG-IP Controller.

Creating a test account for Microsoft SQL Server

On the SQL Server, you can run the SQL Enterprise Manager to add logins. When first entering the SQL Enterprise Manager, you may be prompted for the SQL server to manage.

You can register servers by entering the machine name, user name, and password. If these are correct, the server is registered and an icon for the server appears. When you expand the subtree for the server, there is an icon for Logins.

Under this subtree are the SQL logins. To change a password or add a new login, right-click the Logins icon and select Add login. Then enter the user name and password for the new login, as well as which databases the login is allowed to access. You must grant the test account access to the database you specify in the EAV configuration.

Extended Content Verification (ECV)

Extended Content Verification service checking is another feature you can configure after you have performed the three basic configuration tasks. ECV service check is a special type of service checking that actually retrieves content from a server. If the content matches the expected result, the BIG-IP Controller marks the node up and uses it for load balancing. If the content does not match, or if the server does not return content, the BIG-IP Controller marks the node down, and does not use it for load balancing.

The attributes you can configure for ECV are listed in Table 2.2.

Table 2.2 The attributes you can configure for ECV.

Normal ECV: A normal ECV checks for content on the specified server. If the expected content is retrieved, the BIG-IP Controller makes the server available for load balancing. If the content is not retrieved, the server is marked down.
SSL ECV: The SSL ECV performs the same service check as a normal ECV; however, it is designed to work with servers that use SSL.
Reverse ECV: A reverse ECV check marks a node unavailable for load balancing when it retrieves the expected content. For example, if the content on your web site is dynamic, you can set up a reverse ECV to check for the string Error. A match for this string indicates that the server is down.
Transparent ECV: You can use a transparent ECV to check content on a node through a transparent device.
Manually creating and testing ECVs: You can manually configure and test ECVs by editing the /etc/bigd.conf file with a text editor.

You can set up ECV service check in the Configuration utility, or you can use a text editor, such as vi or pico, to manually create the /etc/bigd.conf file, which stores ECV information.

ECV service check is most frequently used to verify content on web servers, although you can use it for more advanced applications, such as verifying firewalls or mail servers. This section focuses on setting up ECV for web servers. For details about using advanced ECV service check options, see the BIG-IP Controller Administrator Guide, Working with Advanced Service Check Options.

Note: It is important to note that the intervals and timeouts for service checks apply to EAV and ECV service checks. These timeouts are configured by setting the service check timers. For more information about setting these timers, see timer settings, on page 2-138.

ECV service check properties

ECV service check is a property of both a node port and a node. If you define ECV service check settings for a node port, all nodes that use the port inherit the ECV service check settings. You can override these settings by defining ECV service check settings for the node itself.
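As a sketch of this inheritance, a /etc/bigd.conf file might pair a port-wide entry with a node-specific override, using the active entry syntax described later in this chapter. The addresses and strings here are hypothetical:

```
active 80 "GET /" "welcome"
active 192.168.100.12:80 "GET /status.html" "ok"
```

In this sketch, all nodes using port 80 inherit the first check, while the node 192.168.100.12 uses its own send string and receive expression.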

There are actually three different types of ECV service check settings that you can define:

  • ECV normal
    An ECV normal service check requires that the BIG-IP Controller mark a node up (available for load balancing) when the retrieved content matches the expected result. For example, if the home page for your web site included the words Welcome home, you could set up an ECV service check to look for the string "Welcome home". A match for this string would mean that the web server is up and available.
  • ECV SSL
    An ECV SSL service check performs the same function as an ECV normal service check, but it is designed to work with secure servers that use the SSL protocol, rather than standard servers using HTTP. The BIG-IP Controller uses SSL version 3, as do popular web browsers, but it is backward-compatible for web servers that support only version 2.
  • ECV reverse
    In contrast, an ECV reverse service check requires that the BIG-IP Controller mark a node down (not available for load balancing) when the retrieved content matches the expected result. For example, if the content on your web site home page is dynamic and changes frequently, you may prefer to set up a reverse ECV service check that looks for the string "Error". A match for this string would mean that the web server was down.

Warning: When the BIG-IP Controller checks content looking for a match, it reads through the content until the service check times out, or until the read reaches 5,000 bytes, whichever comes first. When you choose text, an HTML tag, or an image name to search for, be sure to pick one that appears in the first 5,000 bytes of the web page.

Writing regular expressions for ECV service checks

When you set up an ECV service check for a web server, you need to define a send string and a receive expression. A send string is the request that the BIG-IP Controller sends to the web server. Send strings typically request that the server return a specific web page, such as the default page for a web site. For example, the most common send string is "GET /" which simply retrieves the default HTML page for a web site. The receive expression is the text string that the BIG-IP Controller looks for in the returned web page.

Receive expressions use regular expression syntax, and they are not case-sensitive. Although regular expressions can be complex, you will find that simple regular expressions are adequate for most ECV service checks.

The corresponding receive string could be any simple text string included in your home page, such as text, HTML tags, or image names.
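You can experiment with candidate receive expressions using grep -E -i, which approximates the case-insensitive extended-regular-expression matching the service check performs. The sample page content below is hypothetical:

```shell
#!/bin/sh
# Approximate the ECV receive-expression match with grep -E -i:
# receive expressions are extended regular expressions, not case-sensitive.
page='<HTML><HEAD><TITLE>Welcome Home</TITLE></HEAD>'

if printf '%s' "$page" | grep -E -i 'welcome home' > /dev/null; then
    echo "match: node would be marked up"
fi
```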

Sample send strings

The send string below is probably the most common send string, and it retrieves the default HTML page for a web site. Note that all send strings are enclosed by quotation marks (" ") inside the /etc/bigd.conf file.

"GET /"

To retrieve a specific page from a web site, simply enter a fully qualified path name:

"GET /www/support/customer_info_form.html"

Sample receive expressions

The most common receive expressions contain a text string that would be included in a particular HTML page on your site. The text string can be regular text, HTML tags, or image names. Note that all receive expressions are enclosed by quotation marks (" ").

For example, the following receive expression attempts to match the text Welcome, and is useful for ECV normal service checks:

"welcome"

The sample receive expression below searches for a standard HTML tag. Note that even though you are searching for an HTML tag, you still need to enclose the regular expression with quotation marks (" ").

"<HEAD>"

You can also use null receive expressions, formatted as the one shown below. When you use a null receive expression, the BIG-IP Controller considers any content retrieved to be a match.

""

Null receive expressions are suitable only for ECV normal and ECV SSL. Note, however, that if you use them you run the risk of the BIG-IP Controller considering an HTML error page to be a successful service check.
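The risk is easy to demonstrate: an empty pattern matches any content at all, including an error page. The grep call below approximates the match, and the page content is hypothetical:

```shell
#!/bin/sh
# An empty receive expression matches anything, even an error page,
# so the check would still report the node as up.
error_page='<HTML><BODY>404 Not Found</BODY></HTML>'

if printf '%s' "$error_page" | grep -E -i "" > /dev/null; then
    echo "match: error page still counts as up"
fi
```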

Note: The regular expression syntax discussed here is not the same as the wildcard syntax that is commonly used in command shells. For more information about regular expressions, see the man page for re_format. To view the man page for re_format, type man re_format at the command line.

Setting up ECV service checks in the Configuration utility

In the Configuration utility, you can set ECV service check options in the Global Node Port Properties screen, and also in individual Node Properties screens. Regardless of which screen you use to configure the options, the steps are the same.

To set up ECV service check in the Configuration utility

  1. In the navigation pane, click Nodes.
    The Nodes screen opens.
  2. Select a node from the list.
    The Node Properties screen opens.
  3. If you want to configure ECV service check options for the node, stay in this screen. If you want to configure ECV service check options for the port that the node uses, click the port number listed next to the IP address of the node.
  4. Click the ECV button.
  5. In the Type box, choose the type of ECV service check you want to set up: normal, reverse, or SSL.
  6. In the Send String/Destination String box, type the send string that requests the web page. Note that the Configuration utility automatically places quotation marks around the string itself. For example, the following string retrieves the default HTML page for the site.

    GET /

  7. In the Receive Rule box, type the receive expression that the BIG-IP Controller should look for in the returned web page. For example, the following receive expression looks for a text string in a web page:

    Welcome home!

  8. Click the Apply button.

Setting up ECV service checks for transparent nodes

In addition to verifying content on web servers, you can use Extended Content Verification (ECV) service checks to verify connections to mail servers and FTP servers through transparent nodes. If you want to set up ECV service checks through a transparent node to these types of servers, there are certain special issues that you need to address.

Configuring ECV for transparent nodes

You can set up ECV to verify that a transparent node is functioning properly. To check if a transparent node is functioning, you can add an entry to the /etc/bigd.conf file that allows you to retrieve content through the transparent node.

You can use a text editor, such as vi or pico, to manually create the /etc/bigd.conf file, which stores ECV information. To create the entry for checking a transparent node, use the following syntax:

transparent <node ip>:<node port> <url> ["recv_expr"]

You can also use the following syntax for this entry:

transparent <node ip>:<node port> <dest ip>:<dest port>/<path> ["recv_expr"]

For example, if you want to run a service check through the transparent firewall 10.10.10.101:80 to the node 10.10.10.53:80, the entry might look like this:

transparent 10.10.10.101:80 10.10.10.53:80/www/forms/survey.html "Company Survey"

For more information about these configuration entries, please refer to Table 2.3.

Table 2.3 ECV configuration entries.

transparent: The transparent keyword is required at the beginning of the entry.

node ip: The IP address, in dotted decimal notation, of the transparent firewall or proxy. This IP cannot be a wild card IP (0.0.0.0). Note that the node must be defined as a node in a pool definition. Typically this would be a wild card virtual server (0.0.0.0). This entry can also be specified as a fully qualified domain name (FQDN). In order to use an FQDN, the BIG-IP Controller must be configured for name resolution.

node port: The node port to use for the ECV check. This port can be zero. This entry can be numeric or can use a well-known service name, such as http.

dest ip:dest port: The combination of the destination IP address, in dotted decimal notation, and the port number of the destination against which the ECV service check is performed. The IP address cannot be a wild card (0.0.0.0). The port number is optional. The port can be specified as any non-zero numeric port number, or as a well-known port name, such as http.

url: An optional standard HTTP URL. If you do not specify a URL, a default URL is retrieved using the HTTP 1.0 request format. This entry can also be specified using a complete URL with an embedded FQDN. This entry cannot be longer than 4096 bytes. In order to resolve an FQDN, the BIG-IP Controller must be configured for name resolution.

recv_expr: An optional string. If you specify a string, it is used to perform standard ECV verification. This entry must be enclosed in quotation marks, and cannot be longer than 128 bytes.

Note: The /etc/bigd.conf file is read once at startup. If you change the file on the command line, you must reboot or restart bigd for the changes to take effect. To restart bigd from the command line, type bigd. If you make changes in the Configuration utility, clicking the Apply button applies the changes and restarts bigd. For more information, see bigd, on page 7-4.

Setting up ECV through transparent nodes with the Configuration utility

New ECV syntax facilitates using ECV with transparent nodes. With it, you can test whether a transparent node is functioning properly by retrieving content through it. You can enable this feature in the Configuration utility or from the command line. This section describes how to enable this feature from the Configuration utility.

Note: You must have at least one wildcard virtual server configured in order to configure ECV through a transparent node.

To set up ECV through a transparent node using the Configuration utility

There are two procedures required to set up ECV through a transparent node. First, set up the frequency and timeout for the port:

  1. In the navigation pane, click the expand button (+) next to Nodes.
    The navigation tree expands to display Ports.
  2. In the navigation pane, click Ports.
    The Global Node Port properties screen opens.
  3. In the Port list, click the port you want to configure.
    The properties screen for the port opens.
  4. In the Frequency (seconds) box, type in the interval (in seconds) at which the BIG-IP Controller performs a service check on the node.
  5. In the Timeout (seconds) box, type in the time limit (in seconds) that a node has to respond to a service check issued by the BIG-IP Controller.
  6. Click the Apply button.

    After you configure the frequency and timeout settings for the port, set the specific settings for the transparent node:

  7. In the navigation pane, click Nodes.
    The Node Properties screen opens.
  8. In the Node list, click the node you want to configure.
    The Node Properties screen opens.
  9. In the Service Check Extended section, click the ECV button to enable ECV.
  10. In the Type list, select Transparent.
    By default, the list is set to Transparent.
  11. In the Send String/Destination String box, you must type the destination IP address of the node you are checking on the other side of the transparent device. The port number/port name argument is optional. The URL entry is also optional. For more information about what to type in this box, see Table 2.3.
  12. In the Receive Rule box, you can type an ECV check receive string. The receive string is optional.
  13. Click the Apply button.

Manually configuring and testing the /etc/bigd.conf file

You can set up ECV service check on the command line by creating an /etc/bigd.conf file in a text editor such as vi or pico. Each line in the /etc/bigd.conf file defines a send string and a receive expression for one node, or for one port. Remember that when you define an ECV service check for a port, all nodes that use the port inherit the service check settings.

Changes to the /etc/bigd.conf file do not take effect until the system is rebooted, or bigd is restarted. To restart bigd, simply run the command bigd.

Setting up the /etc/bigd.conf file

The /etc/bigd.conf file uses three different types of syntax for lines in the file that correspond to the three different types of service check that you can configure: ECV normal, ECV SSL, and ECV reverse. The following sections describe the syntax for each type, and provide some useful examples.

To set up an ECV normal service check

The line for a normal ECV service check begins with the keyword active. The <node IP> parameter is optional, and you need to include it only if you are defining an ECV service check for a specific node.

active [<node IP>:]<port> "<send_string>" "<recv_expr>"

For example, the following line sets up a normal ECV service check for a node, where the BIG-IP Controller looks for the text welcome in the default page for the site.

active 192.168.100.10:80 "GET /" "welcome"

To set up an ECV SSL service check

The line for an SSL ECV service check begins with the keyword ssl. The <node IP> parameter is optional, and you need to include it only if you are defining an ECV service check for a specific node.

ssl [<node IP>:]<port> "<send_string>" "<recv_expr>"

For example, the following line sets up an SSL ECV service check for a node port. Note that the receive expression is null. When you use a null receive expression, the BIG-IP Controller considers any retrieved content to be a match.

ssl 443 "GET /www/orders/order_form.html" ""

To set up an ECV reverse service check

The line for a reverse ECV service check begins with the keyword reverse. The <node IP> parameter is optional, and you need to include it only if you are defining an ECV service check for a specific node.

reverse [<node IP>:]<port> "<send_string>" "<recv_expr>"

For example, the following line sets up a reverse ECV service check for a node port. Note that the receive expression is null. When you use a null receive expression, the BIG-IP Controller considers any retrieved content to be a match; in a reverse check, a match marks the node down.

reverse 80 "GET /" ""
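Taken together, a small /etc/bigd.conf that combines the three forms might look like the following. The addresses, port numbers, and URIs here are illustrative only; substitute values from your own configuration. The first line is a normal check for one node, the second is an SSL check inherited by all nodes on port 443, and the third is a reverse check inherited by all nodes on port 80.

```
active 192.168.100.10:80 "GET /" "welcome"
ssl 443 "GET /www/orders/order_form.html" ""
reverse 80 "GET /" ""
```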

Testing /etc/bigd.conf syntax

To test /etc/bigd.conf syntax

You can test your ECV syntax in the /etc/bigd.conf file using the following bigd command:

/sbin/bigd -d

This command parses the file, checks ECV syntax, reports any errors, and then exits.

Note: The /etc/bigd.conf file is read once at startup. If you change the file on the command line, you must reboot or restart bigd for the changes to take effect. If you make changes in the Configuration utility, clicking the Apply button makes changes and restarts bigd. For more information about bigd, see the BIG-IP Controller Reference Guide, System Utilities.

Filters

Filters control network traffic by setting whether packets are forwarded or rejected at the external network interface. Filters apply to both incoming and outgoing traffic. When creating a filter, you define criteria which are applied to each packet that is processed by the BIG-IP Controller. You can configure the BIG-IP Controller to forward or block each packet based on whether or not the packet matches the criteria.

The BIG-IP Controller supports two types of filters: IP filters and rate filters.

The attributes you can configure for a filter are in Table 2.4.

The attributes you can configure for a filter.
IP filter: You can configure IP filters to control requests sent to the BIG-IP Controller by other hosts in the network.
Rate filter: You can configure rate filters to control the flow of traffic into the BIG-IP Controller based on rate classes you define. To create a rate filter, you must first define a rate class.
Rate class: You can define a rate class for use with a rate filter. A rate class is a definition used by a rate filter to restrict the flow of traffic into the BIG-IP Controller.

IP filters

Typical criteria that you define in IP filters are packet source IP addresses, packet destination IP addresses, and upper-layer protocol of the packet. However, each protocol has its own specific set of criteria that can be defined.

For a single filter, you can define multiple criteria in multiple, separate statements. Each of these statements should reference the same identifying name or number, to tie the statements to the same filter. You can have as many criteria statements as you want, limited only by the available memory. Of course, the more statements you have, the more difficult it is to understand and maintain your filters.

Configuring IP filters

When you define an IP filter, you can filter traffic in two ways:

  • You can filter traffic going to a specific destination or coming from a specific destination, or both.
  • The filter can allow network traffic through, or it can reject network traffic.

Defining an IP filter in the Configuration utility

  1. In the navigation pane, click IP Filters.
    The IP Filters screen opens.
  2. In the IP Filters screen, click Add Filter.
    The Add IP Filter screen opens.
  3. On the Add IP Filter screen, in the Name box, type a filter name.
  4. From the Type list, choose Accept Packet to allow traffic, or Deny Packet to reject traffic.
  5. In the Source IP Address box, enter the IP address from which you want to filter traffic, only if you want the filter to be applied to network traffic based on its source.
  6. In the Source Port box, enter the port number from which you want to filter traffic, only if you want the filter to be applied to network traffic based on its source.
  7. In the Destination IP Address box, enter the IP address to which you want to filter traffic, only if you want the filter to be applied to network traffic based on its destination.
  8. In the Destination Port box, enter the port number to which you want to filter traffic, only if you want the filter to be applied to network traffic based on its destination.
  9. Click Add to add the IP filter to the system.

    Note: For information on configuring IP filters on the command line, refer to the IPFW man page by typing man ipfw on the command line. You can configure more complex filtering through the IPFW command line interface.
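The matching logic described above can be sketched in a few lines of code. This is an illustrative model only, not the BIG-IP or IPFW implementation, and all names in it are hypothetical: a filter matches a packet only on the criteria that are actually set, and the first matching filter decides whether the packet is forwarded or rejected.

```python
# Illustrative model of IP-filter matching; not the BIG-IP implementation.
# Unset criteria (None) act as wildcards.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IPFilter:
    accept: bool                       # True = Accept Packet, False = Deny Packet
    src_ip: Optional[str] = None
    src_port: Optional[int] = None
    dst_ip: Optional[str] = None
    dst_port: Optional[int] = None

    def matches(self, src_ip, src_port, dst_ip, dst_port):
        return ((self.src_ip is None or self.src_ip == src_ip) and
                (self.src_port is None or self.src_port == src_port) and
                (self.dst_ip is None or self.dst_ip == dst_ip) and
                (self.dst_port is None or self.dst_port == dst_port))

def filter_packet(filters, src_ip, src_port, dst_ip, dst_port):
    """Return True to forward the packet, False to reject it."""
    for f in filters:
        if f.matches(src_ip, src_port, dst_ip, dst_port):
            return f.accept
    return True  # no filter matched: forward by default (an assumption of this sketch)

filters = [IPFilter(accept=False, src_ip="10.1.1.5"),   # deny one host
           IPFilter(accept=True, dst_port=80)]          # allow web traffic

print(filter_packet(filters, "10.1.1.5", 1234, "192.168.1.10", 80))  # False
print(filter_packet(filters, "10.1.1.6", 1234, "192.168.1.10", 80))  # True
```

The default disposition of unmatched packets here is an assumption for the sketch; real IPFW rule sets define their own final rule.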

Rate filters and rate classes

In addition to IP filters, you can also define rates of access by using a rate filter. Rate filters consist of the basic filter and a rate class. Rate classes define how many bits per second are allowed per connection and the number of packets in a queue.
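Conceptually, a rate class pairs a bits-per-second budget with a bounded packet queue. The following sketch is illustrative only, not BIG-IP code, and the class and method names are hypothetical; it shows how the two parameters interact: packets beyond the queue length are dropped, and each tick sends only as many queued packets as the bit budget allows.

```python
# Illustrative model of a rate class: a bits-per-second budget plus a
# bounded packet queue. Not the BIG-IP implementation.

from collections import deque

class RateClass:
    def __init__(self, bits_per_sec, queue_len):
        self.bits_per_sec = bits_per_sec   # maximum throughput allowed
        self.queue_len = queue_len         # maximum packets held back
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet_bits):
        """Queue a packet for later transmission; drop it if the queue is full."""
        if len(self.queue) < self.queue_len:
            self.queue.append(packet_bits)
        else:
            self.dropped += 1

    def tick(self):
        """Send as many queued packets as one second's bit budget allows."""
        budget = self.bits_per_sec
        sent = []
        while self.queue and self.queue[0] <= budget:
            bits = self.queue.popleft()
            budget -= bits
            sent.append(bits)
        return sent

rc = RateClass(bits_per_sec=12000, queue_len=3)
for _ in range(5):                 # five 8000-bit packets arrive at once
    rc.enqueue(8000)
print(len(rc.queue), rc.dropped)   # 3 queued, 2 dropped
print(rc.tick())                   # only one 8000-bit packet fits the budget
```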

Configuring rate filters and rate classes

Rate filters are a type of extended IP filter. They use the same IP filter method, but they apply a rate class, which determines the volume of network traffic allowed through the filter.

Tip: You must define at least one rate class in order to apply a rate filter.

Rate filters are useful for sites that have preferred clients. For example, an e-commerce site may want to set a higher throughput for preferred customers, and a lower throughput for random site traffic.

Configuring rate filters involves both creating a rate filter and a rate class. When you configure rate filters, you can use existing rate classes. However, if you want a new rate filter to use a new rate class, you must configure the new rate class before you configure the new rate filter.

To configure a new rate class in the Configuration utility

  1. In the navigation pane, click Rate Filters.
    The Rate Filters screen opens.
  2. In the Rate Filters screen, click Add Class.
    The Rate Class screen opens.
  3. On the Rate Class screen, in the Name box, type a rate class name.
  4. In the Bits Per Second Allowed box, enter the maximum number of bits per second that you want the class to allow.
  5. In the Minimum Number of Bits Outstanding box, enter the minimum number of bits required to be sent for processing from the queue at one time.
  6. In the Queue Length (in Packets) box, enter the maximum number of packets allowed in the queue. Once the BIG-IP Controller fills the queue, it begins to drop subsequent packets received.
  7. Click Add to add the rate class to the system.

    Note: For information on configuring IP filters on the command line, refer to the IPFW man page.

    After you have added a rate class, you can configure rate filters for your system.

To configure a rate filter in the Configuration utility

  1. In the navigation pane, click Rate Filters.
    The Rate Filters screen opens.
  2. In the Rate Filters screen, click Add Filter.
    The Rate Filter screen opens.
  3. On the Rate Filter screen, in the Name box, type a name for the rate filter.
  4. From the Rate Class list, choose a rate class. Note that you must have a rate class defined before you can proceed.
  5. In the Source IP Address box, enter the IP address from which you want to filter traffic, only if you want the filter to be applied to network traffic based on its source.
  6. In the Source Port box, enter the port number from which you want to filter traffic, only if you want the filter to be applied to network traffic based on its source.
  7. In the Destination IP Address box, enter the IP address to which you want to filter traffic, only if you want the filter to be applied to network traffic based on its destination.
  8. In the Destination Port box, enter the port number to which you want to filter traffic, only if you want the filter to be applied to network traffic based on its destination.
  9. Click the Add button.

    Note: For information on configuring IP filters on the command line, refer to the IPFW man page.

IP forwarding

IP forwarding does not translate node addresses. Instead, it simply exposes the node's IP address to the BIG-IP Controller's external network and clients can use it as a standard routable address. When you turn IP forwarding on, the BIG-IP Controller acts as a router when it receives connection requests for node addresses. IP forwarding does not provide security features, but you can use the IP filter feature to implement a layer of security that can help protect your nodes.

The attributes you can configure for IP forwarding are in Table 2.5.

The attributes you can configure for IP forwarding.
Enable IP forwarding globally: You can turn IP forwarding on for the BIG-IP Controller globally either with the Configuration utility, or by turning on the sysctl variable net.inet.ip.forwarding.
Addressing routing issues: If you turn on IP forwarding, you need to route packets to the node addresses through the BIG-IP Controller.
Enable IP forwarding for a virtual server: Instead of turning IP forwarding on globally, you can create a special virtual server with IP forwarding on.

Note: NATs and SNATs do not support the NT Domain or CORBA protocols. Instead of using NATs or SNATs, you need to configure IP forwarding.

Setting up IP forwarding

If you do not want to translate addresses with a NAT or SNAT, you can use the IP forwarding configuration option. IP forwarding is an alternate way of allowing nodes to initiate or receive direct connections from the BIG-IP Controller's external network. IP forwarding exposes all of the node IP addresses to the external network, making them routable on that network. If your network uses the NT Domain or CORBA protocols, IP forwarding is an option for direct access to nodes.

To set up IP forwarding, you need to complete two tasks:

  • Turn IP forwarding on
    The BIG-IP Controller uses a system control variable to control IP forwarding, and its default setting is off.
  • Verify the routing configuration
    You probably have to change the routing table for the router on the BIG-IP Controller's external network. The router needs to direct packets for nodes to the BIG-IP Controller, which in turn directs the packets to the nodes themselves.

Turning on IP forwarding

IP forwarding is a property of the BIG-IP Controller system, and it is controlled by the system control variable net.inet.ip.forwarding.

To set the IP forwarding system control variable in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. On the toolbar, click Advanced Properties.
    The BIG-IP System Control Variables screen opens.
  3. Check the Allow IP Forwarding box.
  4. Click the Apply button.

To set the IP forwarding system control variable on the command line

Use the standard sysctl command to set the variable. The default setting for the variable is 0, which is off. You want to change the setting to 1, which is on:

sysctl -w net.inet.ip.forwarding=1

To permanently set this value, you can use a text editor, such as vi or pico, to manually edit the /etc/rc.sysctl file. For additional information about editing this file, see Setting BIG-IP system control variables, on page 6-1.

Addressing routing issues for IP forwarding

Once you turn on IP forwarding, you probably need to change the routing table on the default router. Packets for the node addresses need to be routed through the BIG-IP Controller. For details about changing the routing table, refer to your router's documentation.

Configuring forwarding virtual servers

A forwarding virtual server is just like other virtual servers, except that the virtual server has no nodes to load balance. It simply forwards the packet directly to the node. Connections are added, tracked, and reaped just as with other virtual servers. You can also view statistics for forwarding virtual servers.

To configure forwarding virtual servers in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the toolbar, click the Add Virtual Server button.
    The Add Virtual Server screen opens.
  3. Type in the virtual server attributes, including address and port. Use the IP address/port combination guidelines in the following section, To configure a forwarding virtual server from the command line, to determine what these entries should be.
  4. In Resources, click the Forwarding button.
  5. Click the Apply button.

To configure a forwarding virtual server from the command line

Use the following syntax to configure forwarding virtual servers:

bigpipe vip <vip>:<port> [ netmask <netmask> ] forward

For example, to allow only one service in:

bigpipe vip 206.32.11.6:80 forward

Use the following command to allow only one server in:

bigpipe vip 206.32.11.5:0 forward

To forward all traffic:

bigpipe vip 0.0.0.0:0 forward

Currently, there can be only one wildcard virtual server, whether or not it is a forwarding virtual server. In some of the configurations described here, a wildcard virtual server on one side of the BIG-IP Controller load balances connections across transparent devices, while the other side of the BIG-IP Controller must accept connections coming back from the transparent devices and forward them to their destination. Because a second wildcard virtual server is not available for this, when a wildcard virtual server is already defined to handle traffic coming from the other direction, you need to create a forwarding virtual server for each possible destination network or host. You can use another new feature, per-connection routing, with forwarding virtual servers to route connections back through the device from which the connection originated.

Interface

You can use interface attributes to configure how traffic flows through the BIG-IP Controller. Most configurations require you to set the attributes of one or more interfaces on the BIG-IP Controller.

The attributes you can configure for an interface are in Table 2.6.

The attributes you can configure for an interface.
Source processing: You can use this attribute to configure an interface to allow source translation and routing for NATs, virtual server connections, and SNAT connections. This attribute also initiates SNAT connections for packets arriving on this interface. This attribute is a feature of versatile interface configuration.
Destination processing: You can configure this attribute on an interface to allow packets arriving on the interface to be destination translated and routed according to the current state of a NAT, SNAT, or virtual server connection. This attribute also initiates virtual server connections for packets arriving on this interface. This attribute is a feature of versatile interface configuration.
Source translation: You can configure this attribute on an interface to allow packets arriving on the interface to be source translated and routed according to the current state of a SNAT or virtual server connection. SNAT destination and source translation occur when a matching SNAT connection exists. This attribute is a feature of versatile interface configuration.
Interface security: You can configure security at the interface level. This attribute is a feature of versatile interface configuration.
Interface failsafe: Use this attribute for redundant fail-over. When you arm interface fail-safe, the controller automatically fails over if it detects there is no traffic on the specified interface.
MAC masquerade: You can use this attribute to set up a media access control (MAC) address that is shared by redundant controllers. This allows you to use the BIG-IP Controllers in a topology with secure hubs.
VLAN tags: You can assign VLAN tags to an interface. You can use VLAN tags to divide a single physical network into additional virtual networks. The VLAN tag determines which virtual network handles the traffic.

Versatile interface configuration

The versatile interfaces option adds more flexibility for configuring interfaces. You can now change the source address, the destination address, or the route of an IP packet.

In previous versions of the BIG-IP Controller, interfaces were designated as internal or external. With this version of the BIG-IP Controller you can configure specific interface properties based on the properties in Table 2.7.

The properties for internal and external interfaces.
Internal: processes source addresses; administrative ports open.
External: processes destination addresses; administrative ports locked down.

The ability to change the source or destination can be turned on independently. Essentially, this means you can configure an interface so that it handles traffic going to virtual servers and, independently, you can configure the interface to handle traffic coming in from nodes. You can configure virtual servers and nodes on each interface installed on the BIG-IP Controller. This allows for the most flexible processing of packets by the BIG-IP Controller. When either the source or destination processing feature is turned off on an interface, there is a gain in performance.

When you enable destination processing on a BIG-IP Controller interface, the interface functions in the following manner:

  • When the destination address and port on the packet belong to a virtual server connection, the interface routes the packet to the node that is handling that connection, picking one if necessary, and, depending on how the virtual server is configured, translates the destination address to the node address.
  • When the destination address on the packet is the external, or translated, address of a NAT, then the interface translates the destination address to the internal address of the NAT.
  • When the destination address on the packet is the external, or translated, address and port of a SNAT connection, then the interface translates the destination address to the original address from which the SNAT connection originated.

    When you enable source processing on a BIG-IP Controller interface, the interface functions in the following manner:

  • When the source address on the packet is a node, and the packets are destined to a client for whom there is an existing virtual server connection, and depending on how the virtual server is configured, the interface translates the source address to the address and port of the virtual server.
  • When the source address on the packet is the original address of a NAT, then the interface translates the source address to the translated address of the NAT.
  • When the source address on the packet is the original address and port of a SNAT connection, then the interface translates the source address and port to the translated address of the SNAT.

    You can turn on both source and destination processing for an interface. This is possible because their functions do not overlap. For example, a NAT changes the source address on packets coming from clients so that they look like they have a different IP address, and virtual servers change the destination address to load balance the destination. There is no reason why you cannot do both the NAT translation and the virtual server translation. There are some combinations of virtual server and NAT source processing and virtual server and NAT destination processing that do not make sense. For example, if a virtual server processes a packet during source processing, the packet is not handled by virtual server destination processing. Also, if a virtual server processes a packet during destination processing, the packet is not handled by virtual server source processing.
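The destination- and source-processing rules above can be sketched as a pair of rewrite steps. This is a simplified illustration with hypothetical names and data structures, not the controller's actual data path: destination processing rewrites inbound packets addressed to a virtual server or NAT translated address, and source processing rewrites outbound packets coming from a node or NAT original address.

```python
# Simplified illustration of destination and source processing on an
# interface; names and structures are hypothetical, not BIG-IP code.

def destination_process(packet, virtual_servers, nat_table):
    """Rewrite the destination of an inbound packet, if it matches."""
    key = (packet["dst_ip"], packet["dst_port"])
    if key in virtual_servers:                     # virtual server address and port
        packet["dst_ip"] = virtual_servers[key]    # translate to the chosen node
    elif packet["dst_ip"] in nat_table:            # translated NAT address
        packet["dst_ip"] = nat_table[packet["dst_ip"]]
    return packet

def source_process(packet, node_to_vip, nat_originals):
    """Rewrite the source of an outbound packet, if it matches."""
    if packet["src_ip"] in node_to_vip:            # reply from a load-balanced node
        packet["src_ip"], packet["src_port"] = node_to_vip[packet["src_ip"]]
    elif packet["src_ip"] in nat_originals:        # original NAT address
        packet["src_ip"] = nat_originals[packet["src_ip"]]
    return packet

# A client packet to a virtual server is routed to a node...
vs = {("206.32.11.6", 80): "192.168.1.10"}
pkt = {"src_ip": "1.2.3.4", "src_port": 1234, "dst_ip": "206.32.11.6", "dst_port": 80}
print(destination_process(pkt, vs, {})["dst_ip"])        # 192.168.1.10

# ...and the node's reply is rewritten back to the virtual server address.
reply = {"src_ip": "192.168.1.10", "src_port": 80, "dst_ip": "1.2.3.4", "dst_port": 1234}
print(source_process(reply, {"192.168.1.10": ("206.32.11.6", 80)}, {})["src_ip"])  # 206.32.11.6
```

Because the two steps consult different halves of the packet header, enabling both on one interface does not cause the overlap described above.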

Destination route and translation processing

When destination processing is enabled on an interface, the BIG-IP Controller processes packets arriving at the interface when those packets are addressed to a virtual server, SNAT, or NAT translated address.

It is useful to note that there are two independent activities associated with destination processing: routing and translation. For example, wildcard virtual servers load balance connections across transparent network devices (such as a router or firewall), but they do not perform translation. In fact, translation can be turned off for all virtual servers. Also, with the new forwarding virtual servers, neither next hop load balancing nor translation occurs for connections. These virtual servers only forward packets, so connections can pass through the BIG-IP Controller without being manipulated in any way.

When you plan which type of processing to use in the BIG-IP Controller configuration, consider these questions:

  • What traffic is translated?
  • When those packets reach the BIG-IP Controller interface, does the source IP address, or destination IP address need to be translated?
  • Which connections are load balanced across multiple devices?

    These questions help identify what kind of processing is required for the network interfaces on the BIG-IP Controller.

Source translation processing

When source translation processing is enabled on an interface, then the BIG-IP Controller processes packets arriving at the interface when those packets are coming from a node, SNAT, or NAT internal address. In this situation, the interface rewrites the source address of the IP packet, changing it from the real server's IP address, or original NAT address, to the virtual server or translated NAT address, respectively. Also, when the last hop feature is enabled on a virtual server, the packet is routed back to the network device that first transmitted the connection request to the virtual server.

To configure source and destination processing in the Configuration utility

  1. In the navigation pane, click NICs.
    The Network Interface Cards screen opens. You can view the current settings for each interface in the Network Interface Card table.
  2. In the Network Interface Card table, click the name of the interface you want to configure.
    The Network Interface Card Properties screen opens.

    · To enable source processing for this interface, click the Enable Source Processing check box.

    · To enable destination processing for this interface, click the Enable Destination Processing check box.

  3. Click the Apply button.

To configure source and destination processing from the command line

Use the following syntax to configure source and destination processing on the specified interface:

bigpipe interface <interface> dest [ enable | disable ]

bigpipe interface <interface> source [ enable | disable ]

bigpipe interface <interface> source_translation [ enable | disable ]

The following example command enables destination processing on the interface exp0:

bigpipe interface exp0 dest enable

The following example command enables source processing on the interface exp1:

bigpipe interface exp1 source enable

Source translation

In a situation where you have an origin cache server on the network external to the BIG-IP Controller, you must configure a default SNAT, enable source translation on the external interface, and set the origin server node address to remote in addition to creating a cache rule. This section describes how to enable source translation on an interface.

To enable source translation on the external interface, type the following command:

bigpipe interface <ext_interface> source_translation enable

Substitute the name of the external interface for <ext_interface>.

Interface security

You can use the adminport option to control the security on an interface. The lockdown keyword configures the port lockdown used in previous versions of the BIG-IP Controller on the specified interface. If you use this option when you configure an interface, only ports essential to the configuration and operation of the BIG-IP Controller and the 3DNS Controller are opened. The open keyword allows all connections to and from the BIG-IP Controller through the interface you specify.

To configure interface security in the Configuration utility

  1. In the navigation pane, click NICs.
    The Network Interface Cards screen opens. You can view the current settings for each interface in the Network Interface Card table.
  2. In the Network Interface Card table, click the name of the interface you want to configure.
    The Network Interface Card Properties screen opens.
  3. To set the administration properties, click the Enable Admin list. Choose one of the following options:

    · Lockdown
    Choose this option to lock down all ports except the ports used for administrative access on this interface.

    · Open
    Choose this option to allow connections to all ports on this interface.

  4. Click the Apply button.

To configure interface security from the command line

Use the following syntax to configure interface security on the specified interface:

bigpipe interface <interface> adminport lockdown

bigpipe interface <interface> adminport open

Use the following example command to lock down connections to all ports except the administration ports on exp0:

bigpipe interface exp0 adminport lockdown

Use the following example command to allow connections to all ports on exp1:

bigpipe interface exp1 adminport open

Warning: Use caution when redefining interfaces. When you reconfigure interfaces, make sure that you have set up the interfaces you need for operation. It is possible to accidentally take the controller out of network service by redefining interfaces.

Displaying status for interfaces

Use the following syntax to display the current status and the settings for all installed interface cards:

bigpipe interface show

Figure 2.2 is an example of the output you see when you issue this command on an active/standby controller in active mode.

Figure 2.2 The bigpipe interface show command output

exp0 11.11.11.2, dest enable, source disable, disarmed, timeout 30
shared alias 11.11.11.3 netmask 255.0.0.0 broadcast 11.255.255.255 unit 1
exp1 11.12.11.2, dest disable, source enable, disarmed, timeout 30
shared alias 11.12.11.3 netmask 255.0.0.0 broadcast 11.255.255.255 unit 1

Use the following syntax to display the current status and the setting for a specific interface.

bigpipe interface <ifname> show

Arming and disarming the fail-safe mode

Use the following command to activate the BIG-IP Controller interface fail-safe mode.

bigpipe interface <ifname> failsafe arm

When armed, the active controller automatically fails over to the standby controller whenever the active controller detects that there is no activity on the specified interface, and subsequently detects no activity on the interface in response to ARP requests. The default fail-safe mode is set to disarm.

Warning: You should arm the fail-safe mode only after you configure the BIG-IP Controller, and both the active and standby units are ready to be placed into a production environment.

Note that you must specify a default route before using the bigpipe interface failsafe command. You specify the default route in the /etc/hosts and /etc/netstart files.

Use the following command to deactivate the BIG-IP Controller interface fail-safe mode.

bigpipe interface <ifname> failsafe disarm

Setting the fail-safe timeout

Use the following syntax to set the amount of time, in seconds, that an interface will be monitored for activity in response to a BIG-IP Controller ARP request, in order to be designated operational.

bigpipe interface <ifname> timeout <seconds>

If no activity is detected on the interface within the specified time, the BIG-IP Controller assumes that the interface is down. Note that the default setting is 30 seconds.

Warning messages and ARP requests are generated after half of the specified time-out period. In the case of an armed BIG-IP Controller in a BIG-IP redundant system, traffic is switched from the active unit to the standby unit at the end of the time-out period. Note that the fail-safe timeout is used only if the fail-safe option is armed on the interface.

Viewing the timeout setting

Use the following syntax to view the fail-over timeout setting for a specific interface:

bigpipe interface <ifname> timeout show

Setting the MAC masquerade address

Sharing the MAC masquerade address makes it possible to use BIG-IP Controllers in a network topology using secure hubs. You can view the media access control (MAC) address on a given controller using the following command:

/sbin/ifconfig -a

Use the following syntax to set the MAC masquerade address that will be shared by both BIG-IP Controllers in the redundant system.

bigpipe interface <ifname> mac_masq <MAC addr>

Warning: You must specify a default route before using the mac_masq command. You specify the default route in the /etc/hosts and /etc/netstart files.

Find the MAC address on both the active and standby units and choose one that is similar but unique. A safe technique for choosing the shared MAC address follows:

Suppose you want to set up mac_masq on the external interfaces. Using the ifconfig -a command on the active and standby units, you note that their MAC addresses are:

Active: exp0 = 0:0:0:ac:4c:a2

Standby: exp0 = 0:0:0:ad:4d:f3

In order to avoid packet collisions, you now must choose a unique MAC address. The safest way to do this is to select one of the addresses and logically OR the first byte with 0x40. This makes the MAC address a locally administered MAC address.

In this example, either 40:0:0:ac:4c:a2 or 40:0:0:ad:4d:f3 would be a suitable shared MAC address to use on both BIG-IP Controllers in the redundant system.
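The OR operation described above can be checked with a short script. This is a convenience sketch, not part of the BIG-IP tooling; it sets the locally-administered bit (0x40) in the first byte of the address, using the example addresses from this section.

```python
# Derive a locally administered MAC address by logically ORing the
# first byte with 0x40, as described above.

def masquerade_mac(mac: str) -> str:
    parts = mac.split(":")
    parts[0] = format(int(parts[0], 16) | 0x40, "x")   # set the locally-administered bit
    return ":".join(parts)

print(masquerade_mac("0:0:0:ac:4c:a2"))   # 40:0:0:ac:4c:a2
print(masquerade_mac("0:0:0:ad:4d:f3"))   # 40:0:0:ad:4d:f3
```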

The shared MAC address is used only when the BIG-IP Controller is in active mode. When the unit is in standby mode, the original MAC address of the network card is used.

If you do not configure mac_masq, on startup, or when transitioning from standby mode to active mode, the BIG-IP Controller sends gratuitous ARP requests to notify the default router and other machines on the local Ethernet segment that its MAC address has changed. See RFC 826 for more details on ARP.

Note: You can use the same technique to configure a shared MAC address for each interface.

Enabling VLAN tags for an interface

To use IEEE 802.1q VLAN Trunk mode, you must first set up VLAN tags in /etc/netstart and the shared IP in BIG/db. For detailed information about setting up VLAN tags, see the BIG-IP Controller Administrator Guide, Using Advanced Network Configurations.

Use the following syntax to enable, disable, or show the VLAN status of the specified internal interface:

bigpipe interface <ifname> vlans enable | disable | show

Load Balancing

Load balancing is an integral part of the BIG-IP Controller. A load balancing mode defines, in part, the logic that a BIG-IP Controller uses to determine which node should receive a connection hosted by a particular virtual server.

The load balancing attributes you can configure for the BIG-IP Controller are in Table 2.8.

The load balancing attributes
Load Balancing Attributes Description
Load balancing modes You can configure a specific type of load balancing for the BIG-IP Controller.
Changing global load balancing modes You can set a global load balancing mode on the BIG-IP Controller. The global method is used by all pools that do not have a load balancing method defined.
Using load balancing pools You must define a specific load balancing method for a pool. You can have various pools configured with different load balancing methods.

The BIG-IP Controller supports specialized load balancing modes that dynamically distribute the connection load, rather than following a static distribution pattern such as Round Robin. Dynamic distribution of the connection load is based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. The following section describes how each load balancing mode distributes connections, as well as how to set the load balancing mode on the BIG-IP Controller. Note that the global load balancing method is not saved as part of the BIG-IP Controller configuration; when you define a global method, it is set for any pool with an appgen_ name prefix.

The default global load balancing mode on the BIG-IP Controller is Round Robin, and it simply passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin mode works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory. If you want to use the Round Robin load balancing mode, you can skip this section, and begin configuring features that you want to add to the basic configuration.

However, if you are working with servers that differ significantly in processing speed and memory, you may want to switch to Ratio load balancing mode. In Ratio mode, the BIG-IP Controller distributes connections among machines according to ratio weights that you define, where the number of connections that each machine receives over time is proportionate to the ratio weight you define for each machine.

Tip: The default ratio weight for a node is 1. If you keep the default ratio weight for each node in a virtual server mapping, the nodes receive an equal proportion of connections as though you were using Round Robin load balancing.
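The effect of ratio weights can be sketched as a weighted rotation. This is a purely illustrative model (the node addresses and the internal scheduling of the BIG-IP Controller are hypothetical here): a node with weight 2 appears twice per cycle, so over time it receives twice the connections of a weight-1 node.

```python
from itertools import cycle

# Hypothetical nodes and ratio weights.
nodes = {"192.168.10.1": 2, "192.168.10.2": 1, "192.168.10.3": 1}

# Build a weighted rotation: a node with weight 2 appears twice per cycle.
rotation = [ip for ip, weight in nodes.items() for _ in range(weight)]

counts = {ip: 0 for ip in nodes}
picker = cycle(rotation)
for _ in range(400):                 # simulate 400 connection requests
    counts[next(picker)] += 1

print(counts)   # -> {'192.168.10.1': 200, '192.168.10.2': 100, '192.168.10.3': 100}
```

With equal weights this reduces to plain Round Robin, which matches the Tip above.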

Understanding individual load balancing modes

Individual load balancing modes take into account one or more dynamic factors, such as current connection count. Because each application of the BIG-IP Controller is unique, and node performance depends on a number of different factors, we recommend that you experiment with different load balancing modes, and choose the one that offers the best performance in your particular environment.

Round Robin

Round Robin passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin mode works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

Ratio

In Ratio mode, the BIG-IP Controller distributes connections among machines according to ratio weights that you define, where the number of connections that each machine receives over time is proportionate to the ratio weight you define for each machine.

Fastest mode

Fastest mode passes a new connection based on the fastest response of all currently active nodes. Fastest mode may be particularly useful in environments where nodes are distributed across different logical networks.

Least Connections mode

Least Connections mode is relatively simple in that the BIG-IP Controller passes a new connection to the node with the least number of current connections. Least Connections mode works best in environments where the servers or other equipment you are load balancing have similar capabilities.

Observed mode

Observed mode uses a combination of the logic used in the Least Connections and Fastest modes. In Observed mode, nodes are ranked based on a combination of the number of current connections and the response time. Nodes that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. Observed mode works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive mode

Predictive mode also uses the ranking methods used by Observed mode, where nodes are rated according to a combination of the number of current connections and the response time. However, in Predictive mode, the BIG-IP Controller analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The nodes with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. Predictive mode works well in any environment.
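The idea behind the Observed and Predictive rankings can be sketched as follows. This is a purely hypothetical illustration; the guide does not publish the actual ranking formula, and the scoring function and node names below are invented for the example:

```python
def score(connections: int, response_ms: float) -> float:
    # Fewer connections and faster responses yield a higher score.
    return 1.0 / (1 + connections) + 1.0 / (1 + response_ms)

# Hypothetical (connections, response_ms) samples: previous, then current.
history = {
    "node_a": [(10, 40.0), (6, 25.0)],   # performance improving
    "node_b": [(4, 20.0), (9, 50.0)],    # performance declining
}

def predictive_rank(samples):
    prev, curr = (score(*s) for s in samples)
    return curr + (curr - prev)          # reward an improving trend

best = max(history, key=lambda n: predictive_rank(history[n]))
print(best)   # -> node_a
```

Here node_a wins even though node_b had the better earlier sample, because Predictive mode favors nodes whose performance is currently improving rather than declining.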

Priority mode

Priority mode is a special type of round robin load balancing. In Priority mode, you define groups of nodes and assign a priority level to each group. The BIG-IP Controller begins distributing connections in a round robin fashion to all nodes in the highest priority group. If all the nodes in the highest priority group go down or hit a connection limit maximum, the BIG-IP Controller begins to pass connections on to nodes in the next lower priority group.

For example, in a configuration that has three priority groups, connections are first distributed to all nodes set as priority 3. If all priority 3 nodes are down, connections begin to be distributed to priority 2 nodes. If both the priority 3 nodes and the priority 2 nodes are down, connections then begin to be distributed to priority 1 nodes, and so on. Note, however, that the BIG-IP Controller continuously monitors the higher priority nodes, and each time a higher priority node becomes available, the BIG-IP Controller passes the next connection to that node.
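The group-selection logic described above can be sketched as follows. This is an illustrative model only (the node addresses are hypothetical, and round-robin rotation within a group is omitted for brevity): connections go to the highest-priority group that still has an available node.

```python
# Hypothetical priority groups, highest priority first when sorted descending.
groups = {
    3: ["10.0.0.1", "10.0.0.2"],   # highest priority group
    2: ["10.0.0.3"],
    1: ["10.0.0.4"],
}
up = {"10.0.0.3", "10.0.0.4"}      # both priority-3 nodes are currently down

def next_node(groups, up):
    """Pick a node from the highest-priority group with an available node."""
    for priority in sorted(groups, reverse=True):
        available = [n for n in groups[priority] if n in up]
        if available:
            return available[0]     # round-robin within the group omitted
    return None

print(next_node(groups, up))   # -> 10.0.0.3
```

If a priority-3 node comes back up (is added to `up`), the next call returns it, matching the continuous monitoring behavior described above.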

Setting the global load balancing mode

The global load balancing mode is a system property of the BIG-IP Controller, and it applies to any pool with an appgen_ name prefix.

To set the global load balancing mode in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the Node List Load Balance Method box, choose the desired load balancing mode.
  3. Click Apply.

Warning: If you select Ratio mode or Priority mode, be sure to set the ratio weight or priority level for each node address in the configuration.

To set the load balancing mode on the command line

The command syntax for setting the load balancing mode is:

bigpipe lb <mode name>

Table 2.9 describes the valid options for the <mode name> parameter.

Options for the <mode name> parameter.
Mode Name Description
round robin Sets the load balancing mode to Round Robin mode.
ratio Sets the load balancing mode to Ratio mode.
priority Sets load balancing to Priority mode.
least_conn Sets load balancing to Least Connections mode.
fastest Sets load balancing to Fastest mode.
observed Sets load balancing to Observed mode.
predictive Sets load balancing to Predictive mode.

Setting ratio weights and priority levels for node addresses

If you set the load balancing mode to either Ratio mode or Priority mode, you need to set a special property on each node address.

  • Ratio weight
    The ratio weight is the proportion of total connections that the node address should receive. The default ratio weight for a given node address is 1. If all node addresses use this default weight, the connections are distributed equally among the nodes.
  • Priority level
    The priority level assigns the node address to a specific priority group.

To set ratio weights and priority levels in the Configuration utility

  1. In the navigation pane, click Nodes.
  2. In the Nodes list, click the node for which you want to set the ratio weight.
    The Node Properties screen opens.
  3. In the Node Properties screen, click the Address of the node.
    The Global Node Address Properties screen opens.
  4. In the Ratio or Priority box, type the ratio weight or priority level of your choice.
  5. Click the Apply button to save your changes.

To set ratio weights from the command line

The bigpipe ratio command sets the ratio weight for one or more node addresses:

bigpipe ratio <node IP> [<node IP>...] <ratio weight>

The following example defines ratio weights for three node addresses. The first command sets the first node to receive half of the connection load. The second command sets the two remaining node addresses to each receive one quarter of the connection load.

bigpipe ratio 192.168.10.01 2

bigpipe ratio 192.168.10.02 192.168.10.03 1
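The arithmetic behind this example can be checked directly: each node's share of connections is its ratio weight divided by the total weight. A small sketch (illustrative only, using the addresses from the example above):

```python
# Ratio weights from the bigpipe ratio example above.
weights = {"192.168.10.01": 2, "192.168.10.02": 1, "192.168.10.03": 1}

total = sum(weights.values())                      # total weight = 4
shares = {ip: w / total for ip, w in weights.items()}

print(shares)
# -> {'192.168.10.01': 0.5, '192.168.10.02': 0.25, '192.168.10.03': 0.25}
```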

Warning: If you set the load balancing mode to Ratio or Priority, you must define the ratio or priority settings for each node address. The value you define using the bigpipe ratio command is used as the ratio value if Ratio is the currently selected load balancing mode, and the same value is used as the priority level if Priority is the currently selected load balancing mode.

Setting the load balancing method for a pool

This example describes how to change the load balancing method for a pool to use Ratio load balancing. For information about the other load balancing methods you can use to load balance a pool, see Pool, on page 2-62.

If you want to switch the load balancing method used in a pool from Round Robin to Ratio, you must modify the pool specification in the Configuration utility or from the command line. Change the load balancing mode to ratio_member, and assign a ratio weight to each member of the pool.

Switching to Ratio mode

First, set the load balancing mode for the pool to Ratio. The load balancing method is a property of each pool, and it applies to all members of that pool.

To switch the pool to Ratio mode in the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the toolbar, click the Add Pool button.
    The Add Pool screen opens.
  3. In the Pool Name box, type in the name you want to use for the pool.
  4. Click on the load balancing mode list and select Ratio (member).
  5. Use the resources options to set the Ratio value for the members in the pool. In the Current Members list, click the member you want to edit. Click the back button (<<) to pull the member into the resources section. Change the Ratio value for the member.

    Ratio
    Type in a number to assign a ratio to this node within the pool. For example, if you are using the ratio load balancing mode and you type a 1 in this box, the node will receive a smaller proportion of connections than a node marked 2.
  6. Click the add button (>>) to add the member back to the Current Members list.
  7. Repeat steps 5 and 6 until you have set the ratio values for each member to your satisfaction.
  8. Click the Apply button.

To switch the pool to Ratio mode on the command line

To switch the pool, use the modify keyword with the bigpipe pool command. For example, if you want to change the pool my_pool to use the ratio_member load balancing mode, type the following command:

bigpipe pool my_pool modify { lb_mode ratio_member member 11.12.1.101:80 ratio 1 priority 1 member 11.12.1.100:80 ratio 3 priority 1 }

NAT

A network address translation (NAT) provides a routable alias IP address that a node can use as its source IP address when making or receiving connections to clients on the external network. You can configure a unique NAT for each node address included in a virtual server mapping.

Note: NATs do not support port translation, and are not appropriate for FTP. You cannot define a NAT if you configure a default SNAT.

The attributes you can configure for a NAT are in Table 2.10.

The attributes you can configure for a NAT.
NAT Attributes Description
Original address The original address is the node IP address of a host that you want to be able to connect to through the NAT.
Translated address The translated address is an IP address that is routable on the external network of the BIG-IP Controller. This IP address is the NAT address.
Interface name You can specify an interface for the NAT if you have more than one internal interface configured in the BIG-IP Controller.
Unit ID You can specify a unit ID for a NAT if the BIG-IP Controller is configured to run in active-active mode.

The IP addresses that identify nodes on the BIG-IP Controller's internal network need not be routable on the external network. This protects nodes from illegal connection attempts, but it also prevents nodes (and other hosts on the internal network) from receiving direct administrative connections, or from initiating connections to clients, such as mail servers or databases, on the BIG-IP Controller's external interface (destination processing).

Using network address translation resolves this problem. Network address translations (NATs) assign to a particular node a routable IP address that the node can use as its source IP address when connecting to servers on the BIG-IP Controller's external interface. You can use the NAT IP address to connect directly to the node through the BIG-IP Controller, rather than having the BIG-IP Controller send you to a random node according to the load balancing mode. IP forwarding provides functionality similar to a NAT. If your network does not support NATs, you may want to consider using IP forwarding.

Note: In addition to these options, you can set up forwarding virtual servers which allow you to selectively forward traffic to specific addresses. The BIG-IP Controller maintains statistics for forwarding virtual servers.

Warning: NATs do not support the NT Domain or CORBA protocols. Instead of using NATs, you need to configure IP forwarding (see Setting up IP forwarding, on page 2-30).

Defining a network address translation (NAT)

When you define standard network address translations (NATs), you need to create a separate NAT for each node that requires a NAT. You also need to use unique IP addresses for NAT addresses; a NAT IP address cannot match an IP address used by any virtual or physical servers in your network. You can configure a NAT with the Configuration utility or from the command line.

To configure a NAT in the Configuration utility

  1. In the navigation pane, click NATs.
    The Network Address Translations screen opens.
  2. On the toolbar, click Add NAT.
    The Add NAT screen opens.
  3. In the Node Address box, type the IP address of the node.
  4. In the NAT Address box, type the IP address that you want to use as the node's alias IP address.
  5. In the NAT Netmask box, type an optional netmask. If you leave this box blank, the BIG-IP Controller uses a default netmask based on the IP address of the NAT.
  6. In the NAT Broadcast box, type the broadcast address. If you leave this box blank, the BIG-IP Controller generates a default broadcast address based on the IP address and netmask of this NAT.
  7. In the Interface box, you can select an external interface (destination processing) on which the NAT address is to be used. Note that this setting only applies if the BIG-IP Controller has more than one external interface.
  8. Click the Apply button.

To configure a NAT on the command line

A NAT definition maps the IP address of a node <orig_addr> to a routable address on the external interface <trans_addr>, and can include an optional interface and netmask specification. Use the following syntax to define a NAT:

bigpipe nat <orig_addr> to <trans_addr>[/<bitmask>] [<ifname>] [unit <unit ID>]

The <ifname> parameter is the internal interface of the BIG-IP Controller through which packets must pass to get to the destination internal address. The BIG-IP Controller can determine the interface to configure for the NAT in most cases. The <ifname> parameter is useful, for example, where there is more than one internal interface. You can use the unit <unit ID> parameter to specify the controller to which this NAT applies in an active-active redundant system.

The following example shows a NAT definition:

bigpipe nat 10.10.10.10 to 10.12.10.10/24 exp1

Deleting NATs

Use the following syntax to delete one or more NATs from the system:

bigpipe nat <orig_addr> [...<orig_addr>] delete

Displaying status of NATs

Use the following command to display the status of all NATs included in the configuration:

bigpipe nat show

Use the following syntax to display the status of one or more selected NATs (see Figure 2.3):

bigpipe nat <orig_addr> [...<orig_addr>] show

Figure 2.3 Output when you display the status of a NAT.

 NAT { 10.10.10.3 to 9.9.9.9 }    
(pckts,bits) in = (0, 0), out = (0, 0)
NAT { 10.10.10.4 to 12.12.12.12
netmask 255.255.255.0 broadcast 12.12.12.255 }
(pckts,bits) in = (0, 0), out = (0, 0)

Resetting statistics for a NAT

Use the following command to reset the statistics for an individual NAT:

bigpipe nat [<orig_addr>] stats reset

Use the following command to reset the statistics for all NATs:

bigpipe nat stats reset

Additional Restrictions

The nat command has the following additional restrictions:

  • The IP address defined in the <orig_addr> parameter must be routable to a specific server behind the BIG-IP Controller.
  • You must delete a NAT before you can redefine it.
  • The interface for a NAT may only be configured when the NAT is first defined.

Node

Nodes are the network devices to which the BIG-IP Controller passes traffic. A node can be referenced by a load balancing pool. You can display information about nodes and set properties for nodes.

The attributes you can configure for a node are in Table 2.11.

The attributes you can configure for a node.
Node Attributes Description
Enable/Disable nodes You can enable or disable nodes independently from a load balancing pool.
Add a node as a member of a pool You can add a node to a pool as a member. This allows you to use the load balancing and persistence methods defined in the pool to control connections handled by the node.

Enabling and disabling nodes and node addresses

To enable a node address, use the node command with a node address and the enable option:

bigpipe node 192.168.21.1 enable

To disable a node address, use the node command with the disable option:

bigpipe node 192.168.21.1 disable

To enable one or more nodes, use the node command with a node address and port, and the enable option:

bigpipe node 192.168.21.1:80 enable

To disable one or more nodes, use the node command with the disable option:

bigpipe node 192.168.21.1:80 disable

Marking nodes and node ports up or down

To mark a node address down, use the node command with a node address and the down option (Note that marking a node down prevents the node from accepting new connections. Existing connections are allowed to complete):

bigpipe node 192.168.21.1 down

To mark a node address up, use the node command with the up option:

bigpipe node 192.168.21.1 up

To mark a particular port down, use the node command with a node address and port, and the down option (Note that marking a port down prevents the port from accepting new connections. Existing connections are allowed to complete):

bigpipe node 192.168.21.1:80 down

To mark a particular port up, use the node command with up option:

bigpipe node 192.168.21.1:80 up

Setting connection limits for nodes

Use the following command to set the maximum number of concurrent connections allowed on a node:

bigpipe node <node ip>[:<port>][...<node ip>[:<port>]] \
limit <max conn>

Note that to remove a connection limit, you also issue the preceding command, but set the <max conn> variable to 0 (zero). For example:

bigpipe node 192.168.21.1:80 limit 0

Setting connection limits for node addresses

The following example shows how to set the maximum number of concurrent connections to 100 for a list of node addresses:

bigpipe node 192.168.21.1 192.168.21.2 192.168.21.3 limit 100

To remove a connection limit, you also issue this command, but set the <max conn> variable to 0 (zero).

Displaying status of all nodes

When you issue the node show command, the BIG-IP Controller displays the node status (up or down, or unchecked), and a node summary of connection statistics, which is further broken down to show statistics by port.

bigpipe node show

The report shows the following information:

  • current number of connections
  • total number of connections made to the node since last boot
  • maximum number of concurrent connections since the last boot
  • concurrent connection limit on the node
  • total number of inbound and outbound packets and bits

Figure 2.4 shows the output of this command:

Figure 2.4 Node status and statistics

 bigpipe node 192.168.200.50:20    
NODE 192.168.200.50 UP
| (cur, max, limit, tot) = (0, 0, 0, 0)
| (pckts,bits) in = (0, 0), out = (0, 0)
+- PORT 20 UP
(cur, max, limit, tot) = (0, 0, 0, 0)
(pckts,bits) in = (0, 0), out = (0, 0)

Displaying the status of individual nodes and node addresses

Use the following command to display status and statistical information for one or more node addresses:

bigpipe node 192.168.21.1 show

The command displays the status of each node address, the number of current connections, total connections, and connections allowed, and the number of cumulative packets and bits sent and received.

Use the following command to display status and statistical information for one or more specific nodes:

bigpipe node 192.168.21.1:80 show

Resetting statistics for a node

Use the following command to reset the statistics for an individual node address:

bigpipe node [<node ip>:<port>] stats reset

Adding a node as a member to a pool

You can add a node as a member to a load balancing pool. For detailed information about how to do this, see Pool, on page 2-62.

Pool

Use the pool command to create, delete, modify, or display the pool definitions on the BIG-IP Controller. Use pools to group members together with a common load balancing mode and persistence mode.

Table 2.12 contains the attributes you can configure for a pool.

The attributes of a pool.
Pool Attributes Description
Pool name You can define the name of the pool.
Member specification You can define each network device, or node, that is a member of the pool.
Load balancing method You must define a specific load balancing method for a pool. You can have various pools configured with different load balancing methods.
Persistence method You can define a specific persistence method for a pool. You can have various pools configured with different persistence methods.

You can define pools from the command line, or define one in the web-based Configuration utility. This section describes how to define a simple pool using each of these configuration methods.

To create a pool using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the toolbar, click the Add Pool button.
    The Add Pool screen opens.
  3. In the Pool Name box, type in the name you want to use for the pool.
  4. Click the load balancing mode list and select the load balancing mode you want to use for this pool.
  5. Use the resources options to add members to the pool. To add a member to the pool, type the IP address in the Node Address box, type the port number in the Port box, and then type in the ratio or priority for this node. Finally, to add the node to the list, click the add ( >>) button.

    · Node Address
    Type in the IP address of the node you want to add to the pool.

    · Port
    Type in the port number of the port you want to use for this node in the pool.

    · Ratio
    Type in a number to assign a ratio to this node within the pool. For example, if you are using the ratio load balancing mode and you type a 1 in this box, the node will receive a smaller proportion of connections than a node marked 2.

    · Priority
    Type in a number to assign a priority to this node within the pool. For example, if you are using a priority load-balancing mode and you type a 1 in this box, the node will have a lower priority in the load-balancing pool than a node marked 2.

    · Current Members
    This is a list of the nodes that are part of the load balancing pool.

  6. Click the Apply button.

To define a pool from the command line

To define a pool from the command line, use the following syntax:

bigpipe pool <pool_name> {lb_method <lb_method> member <member_definition> ... member <member_definition>}

For example, if you want to create the pool my_pool, with two members using Round Robin (rr) load balancing, from the command line, you would type the following command:

bigpipe pool my_pool { lb_method rr member 11.12.1.101:80 member 11.12.1.100:80 }

Command line options

Use the following elements to construct pools from the command line:

The elements you can use to construct a pool.
Pool Element Description
Pool name A string from 1 to 31 characters, for example: new_pool
Member definition member <ip address>:<port> [ratio <value>] [priority <value>]
lb_method_specification lb_method [ rr | ratio | priority | fastest | least_conn | predictive | observed | ratio_member | priority_member | least_conn_member ]
persist_mode_specification persist_mode [ cookie | simple | ssl | sticky ]

Deleting a pool

To delete a pool use the following syntax:

bigpipe pool <pool_name> delete

All references to a pool must be removed before a pool can be deleted.

Modifying pools

You can use the command line to add or delete members from a pool. You can also modify the load balancing mode for a pool from the command line. To add a new member to a pool use the following syntax:

bigpipe pool <pool_name> add { 1.2.3.2:telnet }

To delete a member from a pool use the following syntax:

bigpipe pool <pool_name> delete { 1.2.3.2:telnet }

Display pools

Use the following syntax to display all pools:

bigpipe pool show

Use the following syntax to display a specific pool:

bigpipe pool <pool_name> show

Setting up persistence for a pool

If you are setting up an e-commerce or other type of dynamic content site, you may need to configure persistence on the BIG-IP Controller. Whether you need to configure persistence or not simply depends on how you store client-specific information, such as items in a shopping cart, or airline ticket reservations. For example, you may store the airline ticket reservation information in a back-end database that all nodes can access; or on the specific node to which the client originally connected; or in a cookie on the client's machine.

If you store client-specific information on specific nodes, you need to configure persistence. When you turn on persistence, returning clients can bypass load balancing and instead can go to the node where they last connected in order to get to their saved information.

The BIG-IP Controller tracks information about individual persistent connections, and keeps the information only for a given period of time. The way in which persistent connections are identified depends on the type of persistence. The BIG-IP Controller supports two basic types of persistence, and six advanced types of persistence. The two basic types of persistence are:

  • SSL persistence
    SSL persistence is a type of persistence that tracks SSL connections using the SSL session ID, and it is a property of each individual pool. Using SSL persistence can be particularly important if your clients typically have translated IP addresses or dynamic IP addresses, such as those that Internet service providers typically assign. Even when the client's IP address changes, the BIG-IP Controller still recognizes the connection as being persistent based on the session ID.
  • Simple persistence
    Simple persistence supports TCP and UDP protocols, and it tracks connections based only on the client IP address. When a client requests a connection to a virtual server that supports simple persistence, the BIG-IP Controller checks to see if that client previously connected, and if so, returns the client to the same node.

    You may want to use SSL persistence and simple persistence together. In situations where an SSL session ID times out, or where a returning client does not provide a session ID, you may want the BIG-IP Controller to direct the client to the original node based on the client's IP address. As long as the client's simple persistence record has not timed out, the BIG-IP Controller can successfully return the client to the appropriate node.

    In addition to the simple persistence and SSL persistence options provided by the BIG-IP Controller, six advanced persistence options are available. The advanced options include:

  • HTTP cookie persistence
  • Destination address affinity (sticky persistence)
  • Persist masking
  • Maintaining persistence across virtual servers with the same address
  • Maintaining persistence across all virtual servers
  • Backward compatibility with node list virtual servers

    Note: All persistence methods are properties of pools.

Setting up SSL persistence

SSL persistence is a property of a pool. You can set up SSL persistence from the command line or from the Configuration utility. To set up SSL persistence, you need to do two things:

  • Turn SSL persistence on.
  • Set the SSL session ID timeout, which determines how long the BIG-IP Controller stores a given SSL session ID before removing it from the system.

To configure SSL persistence using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. Click the appropriate pool in the list.
    The Pool Properties screen opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Click the SSL Persistence button.
  5. In the Timeout box, type the number of seconds that the BIG-IP Controller should store SSL session IDs before removing them from the system.
  6. Click the Apply button.

To activate SSL persistence from the command line

Use the following syntax to activate SSL persistence from the command line:

bigpipe pool <pool_name> modify { persist_mode ssl ssl_timeout <timeout> simple_mask <ip_mask> }

For example, if you want to set SSL persistence on the pool my_pool, type the following command:

bigpipe pool my_pool modify { persist_mode ssl ssl_timeout 3600 simple_mask 255.255.255.0 }

Display persistence information for a pool

To show the persistence configuration for the pool:

bigpipe pool <pool_name> persist show

To display all persistence information for the pool named classc_pool, use the show option:

bigpipe pool classc_pool persist show

Setting up simple persistence

You can set simple persistence properties for both an individual virtual server and for a port. Individual virtual server persistence settings can override those of the port. When you set simple persistence on a port, all virtual servers that use the given port inherit the port's persistence settings.

Setting simple persistence on virtual servers

Persistence settings for pools apply to both TCP and UDP persistence. When the persistence timer is set to a value greater than 0, persistence is on. When the persistence timer is set to 0, persistence is off.

To configure simple persistence for pools using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. Select the pool for which you want to configure simple persistence.
    The Pool Properties screen opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence Properties screen opens.
  4. In the Persistence Type section, click the Simple Persistence button.
    Type the following information:

    · Timeout (seconds)
    Set the number of seconds for persistence on the pool. (This option is not available if you are using rules.)

    · Mask
    Set the persistence mask for the pool. The persistence mask determines persistence based on the portion of the client's IP address that is specified in the mask.

  5. Click the Apply button.

To configure simple persistence for pools from the command line

You can use the bigpipe pool command with the modify keyword to set simple persistence for a pool. Note that a timeout greater than 0 turns persistence on, and a timeout of 0 turns persistence off.

bigpipe pool <pool_name> modify { persist_mode simple simple_timeout <timeout> simple_mask <ip_mask> }

For example, if you want to set simple persistence on the pool my_pool, type the following command:

bigpipe pool my_pool modify { persist_mode simple simple_timeout 3600 simple_mask 255.255.255.0 }

Using HTTP cookie persistence

You can set up the BIG-IP Controller to use HTTP cookie persistence. This method of persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site.

There are four types of cookie persistence available:

  • Insert mode
  • Rewrite mode
  • Passive mode
  • Hash mode

    The mode you choose affects how the cookie is handled by the BIG-IP Controller when it is returned to the client.

Insert mode

If you specify Insert mode, the information about the server to which the client connects is inserted in the header of the HTTP response from the server as a cookie. The cookie is named BIGipServer <pool_name>, and it includes the address and port of the server handling the connection. The expiration date for the cookie is set based on the timeout configured on the BIG-IP Controller.

To activate Insert mode in the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Insert mode.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Click the Active HTTP Cookie button.
  5. Select Insert mode from the Method list.
  6. Type the timeout value in days, hours, minutes, and seconds. This value determines how long the cookie lives on the client computer before it expires.
  7. Click the Apply button.

To activate Insert HTTP cookie persistence from the command line

To activate Insert mode from the command line, use the following syntax:

bigpipe pool <pool_name> { <lb_mode_specification> persist_mode cookie cookie_mode insert cookie_expiration <timeout> <member definition> }

The <timeout> value for the cookie is written using the following format:

<days>d hh:mm:ss
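To make the timeout arithmetic concrete, here is a minimal sketch of parsing this format; the parse_cookie_timeout helper is hypothetical and not part of bigpipe or the BIG-IP software:

```python
import re

def parse_cookie_timeout(value):
    """Parse a '<days>d hh:mm:ss' timeout string into total seconds.

    Hypothetical helper, shown only to illustrate the format; the
    controller performs any such conversion internally.
    """
    match = re.fullmatch(r"(?:(\d+)d\s+)?(\d+):(\d+):(\d+)", value.strip())
    if match is None:
        raise ValueError("expected '<days>d hh:mm:ss'")
    days = int(match.group(1) or 0)
    hours, minutes, seconds = (int(g) for g in match.groups()[1:])
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

# A 10-day expiration, as used in the node list example later in this chapter:
print(parse_cookie_timeout("10d 00:00:00"))  # 864000
```

For instance, a cookie_expiration of 10d corresponds to 864000 seconds.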

Rewrite mode

If you specify Rewrite mode, the BIG-IP Controller intercepts a Set-Cookie header, named BIGipCookie, sent from the server to the client, and overwrites the name and value of the cookie. The new cookie is named BIGipServer <pool_name>, and it includes the address and port of the server handling the connection.

Rewrite mode requires you to set up the cookie created by the server. In order for Rewrite mode to work, there needs to be a blank cookie coming from the web server for the BIG-IP Controller to rewrite. With Apache variants, the cookie can be added to every web page header by adding an entry in the httpd.conf file:

Header add Set-Cookie BIGipCookie=0000000000000000000000000...

(The cookie may contain a total of 120 zeros.)

Warning: For backward compatibility, the blank cookie can contain only 75 zeros. However, cookies of this size do not allow you to use rules and persistence together.

To activate Rewrite mode cookie persistence in the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Rewrite mode.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Click the Active HTTP Cookie button.
  5. Select Rewrite mode from the Method list.
  6. Type the timeout value in days, hours, minutes, and seconds. This value determines how long the cookie lives on the client computer before it expires.
  7. Click the Apply button.

To activate Rewrite mode cookie persistence from the command line

To activate Rewrite mode from the command line, use the following syntax:

bigpipe pool <pool_name> { <lb_mode_specification> persist_mode cookie cookie_mode rewrite cookie_expiration <timeout> <member definition> }

The <timeout> value for the cookie is written using the following format:

<days>d hh:mm:ss

Passive mode

If you specify Passive mode, the BIG-IP Controller does not insert or search for blank Set-Cookies in the response from the server. It does not try to set up the cookie. In this mode, it is assumed that the server provides the cookie formatted with the correct node information and timeout.

In order for Passive mode to work, there needs to be a cookie coming from the web server with the appropriate node information in the cookie. With Apache variants, the cookie can be added to every web page header by adding an entry in the httpd.conf file:

Header add Set-Cookie: "BIGipServer my_pool=184658624.20480.000; expires=Sat, 19-Aug-2000 19:35:45 GMT; path=/"

In this example, my_pool is the name of the pool that contains the server node, 184658624 is the encoded node address and 20480 is the encoded port.

The equation for an address (a.b.c.d) is:

d*256^3 + c*256^2 + b*256 + a

The way to encode the port is to take the two bytes that store the port and reverse them. So, port 80 becomes 80 * 256 + 0 = 20480. Port 1433 (instead of 5 * 256 + 153) becomes 153 * 256 + 5 = 39173.
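The encoding arithmetic above can be checked with a short sketch; the encode_address, encode_port, and decode_address helpers are hypothetical names, written only to mirror the equations in this section:

```python
def encode_address(addr):
    """Encode a dotted-quad address a.b.c.d as d*256^3 + c*256^2 + b*256 + a."""
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return d * 256**3 + c * 256**2 + b * 256 + a

def decode_address(value):
    """Invert encode_address: recover a.b.c.d from the encoded longword."""
    return ".".join(str((value >> (8 * i)) & 0xFF) for i in range(4))

def encode_port(port):
    """Swap the two bytes that store the port number."""
    return (port & 0xFF) * 256 + (port >> 8)

# Reproduce the worked examples above:
print(encode_port(80))    # 20480
print(encode_port(1433))  # 39173
print(decode_address(184658624))
```

Decoding the example cookie value 184658624 recovers the dotted-quad node address it encodes.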

To activate Passive mode cookie persistence in the Configuration utility

After you set up the cookie created by the web server, you must activate Passive mode on the BIG-IP Controller.

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Passive mode.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Select Passive HTTP Cookie mode.
  5. Click the Apply button.

To activate Passive mode cookie persistence from the command line

After you set up the cookie created by the web server, you must activate Passive mode on the BIG-IP Controller. To activate HTTP cookie persistence from the command line, use the following syntax:

bigpipe pool <pool_name> { <lb_mode_specification> persist_mode cookie cookie_mode passive <member definition> }

Note: The <timeout> value is not used in Passive mode.

Hash mode

If you specify Hash mode, the BIG-IP Controller uses a hash of the cookie value to consistently map that value to a specific node. When the client returns to the site, the BIG-IP Controller uses the cookie information to return the client to the same node. With this mode, the web server must generate the cookie; the BIG-IP Controller does not create the cookie automatically as it does with Insert mode.

To configure the cookie persistence hash option in the Configuration utility

Before you follow this procedure, you must configure at least one pool.

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up hash mode persistence.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Click the Cookie Hash button.
    Set the following values (see Table 2.14 for more information):

    · Cookie Name
    Type in the name of an HTTP cookie being set by the Web site. This could be something like Apache or SSLSESSIONID. It depends on the type of web server your site is running.

    · Hash Values
    The Offset is the number of bytes in the cookie to skip before calculating the hash value. The Length is the number of bytes to use when calculating the hash value.

  5. Click the Apply button.

To configure the hash cookie persistence option from the command line

Use the following syntax to configure the hash cookie persistence option:

bigpipe pool <pool_name> { <lb_mode_specification> persist_mode cookie cookie_mode hash cookie_hash_name <cookie_name> cookie_hash_offset <cookie_value_offset> cookie_hash_length <cookie_value_length> <member definition> }

The <cookie_name>, <cookie_value_offset>, and <cookie_value_length> values are described in Table 2.14:

The cookie hash mode values
Hash mode values Description
<cookie_name> The name of an HTTP cookie being set by a Web site.
<cookie_value_offset> The number of bytes in the cookie to skip before calculating the hash value.
<cookie_value_length> The number of bytes to use when calculating the hash value.
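How the offset and length select the bytes to be hashed can be sketched as follows; the helper names and the sum-modulo mapping are hypothetical, since the controller's actual hash function is internal:

```python
def cookie_hash_input(cookie_value, offset, length):
    """Return the slice of the cookie value that the hash sees.

    offset: bytes to skip; length: bytes to use. This only illustrates
    byte selection, not the controller's real hash function.
    """
    data = cookie_value.encode("ascii")
    return data[offset:offset + length]

def node_for_cookie(cookie_value, offset, length, members):
    """Hypothetical stand-in hash: map the selected bytes onto a member list."""
    selected = cookie_hash_input(cookie_value, offset, length)
    return members[sum(selected) % len(members)]

members = ["10.1.1.1:80", "10.2.2.2:80"]
# The same cookie value always maps to the same member:
print(node_for_cookie("SESS-abcdef123456", 5, 6, members))
```

The point of the sketch is the invariant: for a fixed cookie value, offset, and length, the selected bytes, and therefore the chosen node, never change between requests.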

Using destination address affinity (sticky persistence)

You can optimize your proxy server array with destination address affinity (also called sticky persistence). Address affinity directs requests for a certain destination to the same proxy server, regardless of which client the request comes from.

This enhancement provides the most benefits when load balancing caching proxy servers. A caching proxy server intercepts web requests and returns a cached web page if it is available. In order to improve the efficiency of the cache on these proxies, it is necessary to send similar requests to the same proxy server repeatedly. Destination address affinity can be used to cache a given web page on one proxy server instead of on every proxy server in an array. This saves the other proxies from having to duplicate the web page in their caches, which would waste memory.

Warning: In order to prevent sticky entries from clumping on one server, use a static load balancing mode for the members of the pool, such as Round Robin.

To activate destination address affinity in the Configuration utility

You can only activate destination address affinity on pools directly or indirectly referenced by wildcard virtual servers. For information on setting up a wildcard virtual server, see the Administrator Guide, Defining wildcard virtual servers. Follow these steps to configure destination address affinity:

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up destination address affinity.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Click the Destination Address Affinity button to enable destination address affinity.
  5. In the Mask box, type in the mask you want to apply to sticky persistence entries.
  6. Click the Apply button.

To activate sticky persistence from the command line

Use the following command to enable sticky persistence for a pool:

bigpipe pool <pool_name> modify { persist_mode sticky enable sticky_mask <ip address> }

Use the following command to disable sticky persistence for a pool:

bigpipe pool <pool_name> modify { persist_mode sticky disable sticky_mask <ip address> }

Use the following command to delete sticky entries for the specified pool:

bigpipe pool <pool_name> sticky clear

To show the persistence configuration for the pool:

bigpipe pool <pool_name> persist show

Using a simple timeout and a persist mask on a pool

The persist mask feature works only on pools that implement simple persistence. By adding a persist mask, you identify a range of client IP addresses to manage together as a single simple persistent connection when connecting to the pool.
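The effect of a persist mask can be sketched with Python's standard ipaddress module; the persist_key helper is a hypothetical illustration, not BIG-IP code:

```python
import ipaddress

def persist_key(client_ip, simple_mask):
    """Apply a simple persistence mask to a client address.

    Clients whose masked addresses match share a single persistence
    entry. A sketch of the masking described above.
    """
    network = ipaddress.ip_network(f"{client_ip}/{simple_mask}", strict=False)
    return str(network.network_address)

# With a 255.255.255.0 mask, all clients in one class C network
# collapse to the same persistence entry:
print(persist_key("10.5.5.17", "255.255.255.0"))   # 10.5.5.0
print(persist_key("10.5.5.200", "255.255.255.0"))  # 10.5.5.0
```

Because both client addresses mask to the same value, their connections are managed together as a single simple persistent connection.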

To apply a simple timeout and persist mask in the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up simple persistence.
    The properties screen for the pool you clicked opens.
  3. In the toolbar, click the Persistence button.
    The Pool Persistence screen opens.
  4. Select Simple Persistence mode.
  5. In the Timeout box, type the timeout in seconds.
  6. In the Mask box, type the persist mask you want to apply.
  7. Click the Apply button.

To apply a simple timeout and persist mask from the command line

The complete syntax for the command is:

bigpipe pool <pool_name> modify { [<lb_mode_specification>] persist_mode simple simple_timeout <timeout> simple_mask <dot_notation_longword> }

For example, the following command would keep persistence information together for all clients within a C class network that connect to the pool classc_pool:

bigpipe pool classc_pool modify { persist_mode simple simple_timeout 1200 simple_mask 255.255.255.0 }

You can turn off a persist mask for a pool by using the none option in place of the simple_mask mask. To turn off the persist mask that you set in the preceding example, use the following command:

bigpipe pool classc_pool modify { simple_mask none }

To display all persistence information for the pool named classc_pool, use the show option:

bigpipe pool classc_pool persist show

Maintaining persistence across virtual servers that use the same virtual addresses

When this mode is turned on, the BIG-IP Controller attempts to send all persistent connection requests received from the same client, within the persistence time limit, to the same node only when the virtual server hosting the connection has the same virtual address as the virtual server hosting the initial persistent connection. Connection requests from the client that go to other virtual servers with different virtual addresses, or those connection requests that do not use persistence, are load balanced according to the load balancing mode defined for the pool.

Suppose a BIG-IP Controller configuration includes the following virtual server mappings, where the virtual server v1:http references the pool http_pool (containing the nodes n1:http and n2:http), and the virtual servers v1:ssl and v2:ssl reference the pool ssl_pool (containing the nodes n1:ssl and n2:ssl). Each virtual server uses persistence:

bigpipe vip v1:http use pool http_pool

bigpipe vip v1:ssl use pool ssl_pool

bigpipe vip v2:ssl use pool ssl_pool

For example, a client makes an initial connection to v1:http and the load balancing mechanism assigned to the pool http_pool chooses n1:http as the node. If the same client then connects to v2:ssl, the BIG-IP Controller starts tracking a new persistence session, and it uses the load balancing mode to determine which node should receive the connection request because the requested virtual server uses a different virtual address (v2) than the virtual server hosting the first persistent connection request (v1). However, if the client subsequently connects to v1:ssl, the BIG-IP Controller uses the persistence session established with the first connection to determine the node that should receive the connection request, rather than the load balancing mode. The BIG-IP Controller should send the third connection request to n1:ssl, which uses the same node address as the n1:http node that currently hosts the client's first connection with which it shares a persistent session.

Warning: In order for this mode to be effective, virtual servers that use the same virtual address, as well as those that use TCP or SSL persistence, should include the same node addresses in the virtual server mappings.
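The decision described above can be sketched as follows; pick_node, sessions, and load_balance are hypothetical names standing in for the controller's internal persistence table and the pool's load balancing mode:

```python
def pick_node(client, virtual_addr, sessions, load_balance):
    """Sketch of persistence across ports on the same virtual address.

    sessions maps (client, virtual address) -> node from an earlier
    persistent connection; load_balance() stands in for the pool's
    load balancing mode.
    """
    node = sessions.get((client, virtual_addr))
    if node is not None:
        return node                       # same virtual address: reuse the node
    node = load_balance()                 # different address: load balance anew
    sessions[(client, virtual_addr)] = node
    return node

sessions = {("client1", "v1"): "n1"}
# A new connection from client1 to any port on v1 returns to n1;
# a connection to v2 is load balanced and starts its own session.
print(pick_node("client1", "v1", sessions, lambda: "n2"))  # n1
print(pick_node("client1", "v2", sessions, lambda: "n2"))  # n2
```

This mirrors the example above: the v1:ssl connection reuses the session established through v1:http, while the v2:ssl connection is load balanced as a new session.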

The system control variable bigip.persist_on_any_port_same_vip turns this mode on and off. To activate the persistence mode, type:

sysctl -w bigip.persist_on_any_port_same_vip=1

To deactivate the persistence mode, type:

sysctl -w bigip.persist_on_any_port_same_vip=0

To activate persistence for virtual servers that use the same address in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. In the toolbar, click the Advanced Properties button.
    The BIG-IP System Control Variables screen opens.
  3. Click the Allow Persistence Across All Ports for Each Virtual Address checkbox to activate this persistence mode. Clear the checkbox to disable this persistence mode.
  4. Click the Apply button.

Maintaining persistence across all virtual servers

You can set the BIG-IP Controller to maintain persistence for all connections requested by the same client, regardless of which virtual server hosts each individual connection initiated by the client. When this mode is turned on, the BIG-IP Controller attempts to send all persistent connection requests received from the same client, within the persistence time limit, to the same node. Connection requests from the client that do not use persistence are load balanced according to the currently selected load balancing mode.

Suppose a BIG-IP Controller configuration includes the following virtual server mappings, where the virtual servers v1:http and v2:http reference the pools http1_pool and http2_pool (both pools contain the nodes n1:http and n2:http), and the virtual servers v1:ssl and v2:ssl reference the pools ssl1_pool and ssl2_pool (both pools contain the nodes n1:ssl and n2:ssl). Each virtual server uses persistence:

bigpipe vip v1:http use pool http1_pool

bigpipe vip v1:ssl use pool ssl1_pool

bigpipe vip v2:http use pool http2_pool

bigpipe vip v2:ssl use pool ssl2_pool

Say that a client makes an initial connection to v1:http and the BIG-IP Controller's load balancing mechanism chooses n1:http as the node. If the same client subsequently connects to v1:ssl, the BIG-IP Controller sends the client's request to n1:ssl, which uses the same node address as the n1:http node that currently hosts the client's initial connection. What makes this mode different from maintaining persistence across virtual servers that use the same virtual address is that a subsequent connection to v2:ssl, a virtual server with a different virtual address, is also sent to n1:ssl rather than being load balanced as a new persistence session.

Warning: In order for this mode to be effective, virtual servers that use TCP or SSL persistence should include the same member addresses in the virtual server mappings.

The system control variable bigip.persist_on_any_vip turns this mode on and off. To activate the persistence mode, type:

sysctl -w bigip.persist_on_any_vip=1

To deactivate the persistence mode, type:

sysctl -w bigip.persist_on_any_vip=0

To activate persistence across all virtual servers in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. In the toolbar, click the Advanced Properties button.
    The BIG-IP System Control Variables screen opens.
  3. Click the Allow Persistence Across All Virtual Servers checkbox to activate this persistence mode. Clear the checkbox to disable this persistence mode.
  4. Click the Apply button.

Backward compatible persistence for node list virtual servers

It is still possible to configure persistence by virtual server and port. For virtual servers that reference a pool or a rule, you configure persistence by modifying the pool.

Virtual server definitions containing a node list and persistence settings are converted into an independent pool named appgen_<virtual_addr>.<virtual_port> and a virtual server that references the pool. The pool persistence settings are set to mimic the behavior of a virtual server with persistence. For example, consider the following node list virtual server definition:

vip 168.1.1.1:80 { define 10.1.1.1:80 10.2.2.2:80 special cookie rewrite 10d }

This virtual server definition is stored and written in the /etc/bigip.conf file in the following manner:

Figure 2.5 An example of an appgen_pool created from a node list virtual server

pool appgen_168.1.1.1.80 {
    lb_mode round_robin
    persist_mode cookie
    cookie_mode rewrite
    cookie_expiration 10d
    member 10.1.1.1:80
    member 10.2.2.2:80
}
vip 168.1.1.1:80 { use pool appgen_168.1.1.1.80 }

While you can still apply virtual port simple persistence timeouts, they are not saved as part of the BIG-IP Controller configuration. Defining a virtual port timeout affects the persistence configuration of pools that are directly referenced by virtual servers with a matching virtual port. When a virtual port timeout is defined, pools with a persistence mode of none are changed to simple, and the simple persistence timeouts are changed from 0 to the virtual port timeout.
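The conversion described above can be sketched as a configuration transformation; the function name and the data layout are hypothetical, chosen only to restate the rule in code:

```python
def apply_virtual_port_timeout(pools, vips, port, timeout):
    """Sketch: apply a virtual port timeout to directly referenced pools.

    vips is a list of (virtual port, pool name) pairs. Pools referenced
    by a virtual server on the matching port whose persist_mode is
    'none' switch to simple persistence with the given timeout.
    """
    for vip_port, pool_name in vips:
        if vip_port != port:
            continue
        pool = pools[pool_name]
        if pool["persist_mode"] == "none":
            pool["persist_mode"] = "simple"
            pool["simple_timeout"] = timeout
    return pools

pools = {"http_pool": {"persist_mode": "none", "simple_timeout": 0}}
apply_virtual_port_timeout(pools, [(80, "http_pool")], 80, 1200)
print(pools["http_pool"])
```

Pools that already use another persistence mode, or that are referenced only on other ports, are left unchanged.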

The virtual server simple and sticky persistence commands operate on the pool referenced by the virtual server instead of on the virtual server itself. You cannot use commands to display information for a virtual server that does not reference a pool. Virtual server persistence modifications are:

vip <ip>:<port> persist <value>

vip <ip>:<port> persist mask <ip>

vip <ip>:<port> sticky (enable | disable | clear)

vip <ip>:<port> sticky mask <ip>

vip <ip>:<port> mirror persist (enable | disable)

All virtual server persistence queries now return error messages and a suggested pool persistence query. Virtual server persistence queries that now generate errors are:

vip <ip>:<port> persist (show | dump)

vip <ip>:<port> persist mask show

vip <ip>:<port> sticky (show | dump)

vip <ip>:<port> sticky mask show

vip <ip>:<port> mirror persist show

Port

One of the security features of the BIG-IP Controller is that all ports on the controller are locked down and unavailable for service unless you specifically open them to network access. Before clients can use the virtual servers you have defined, you must allow access to each port that the virtual servers use.

Tip: Virtual servers using the same service actually share a port on the BIG-IP Controller. Because port access is global, you only need to open access to a port once; you do not need to open access to a port for each instance of a virtual server that uses it.

A port is any valid port number, between 0 and 65535, inclusive, or any valid service name in the /etc/services file.

The attributes you can configure for a port are in Table 2.15.

The attributes you can configure for a port.
Attributes Description
Allow access to ports As a security measure, all ports are locked down on the BIG-IP Controller. In order for the BIG-IP Controller to load balance traffic, you must enable access to the port on which the BIG-IP Controller will receive traffic.
Connection limits You can define a connection limit for a port so that a flood of connections does not overload the BIG-IP Controller.

To allow access to services in the Configuration utility

Any time you create a virtual server and define a port or service with the Configuration utility, the port or service is automatically enabled.

To allow access to services on the command line

Using the bigpipe port command, you can allow access to one or more ports at a time.

bigpipe port <port> [...<port>] enable

For example, in order to enable HTTP (port 80), Telnet (port 23), and HTTPS (port 443) services, you can enter the following bigpipe port command:

bigpipe port 80 23 443 enable

Warning: In order for FTP to function properly, you must allow both ports 20 and 21 (or ftp-data and ftp).

Allowing and denying virtual ports

You can enable or disable traffic to specific virtual ports. The default setting for all virtual ports is disabled. Use the following syntax to allow one or more virtual ports:

bigpipe port <port> [...<port>] enable

To deny access to one or more virtual ports:

bigpipe port <port> [...<port>] disable

Setting connection limits on ports

Use the following syntax to set the maximum number of concurrent connections allowed on a virtual port. Note that you can configure this setting for one or more virtual ports.

bigpipe port <port> [...<port>] limit <max conn>

To turn off a connection limit for one or more ports, use the preceding command, setting the <max conn> parameter to 0 (zero):

bigpipe port <port> [...<port>] limit 0

Displaying the status of all virtual ports

Use the following syntax to display the status of virtual ports included in the configuration:

bigpipe port show

Displaying the status for specific virtual ports

Use the following syntax to display the status of one or more virtual ports:

bigpipe port <port> [...<port>] show

Figure 2.6 shows a sample of formatted output of the port command.

Figure 2.6 Formatted output of port command showing the Telnet port statistics

bigpipe port telnet show

PORT 23 telnet enable
(cur, max, limit, tot, reaped) = (37,73,100,691,29)
(pckts,bits) in = (2541, 2515600), out = (2331, 2731687)

Redundant System

Redundant BIG-IP Controller systems have special settings that you need to configure, such as interface fail-safe settings. One convenient aspect of configuring a redundant system is that once you have configured one of the controllers, you can simply copy the configuration to the other controller in the system using the configuration synchronization feature in the bigpipe command line tool or in the Configuration utility.

There are two basic aspects of working with redundant systems:

  • Synchronizing configurations between two controllers
  • Configuring fail-safe settings for the interfaces

    In addition to the simple redundant features available on the BIG-IP Controller, several advanced redundant features are available. Advanced redundant system features provide additional assurance that your content is available if a BIG-IP Controller experiences a problem. These advanced redundant system options include:

  • Mirroring connection and persistence information
  • Gateway fail-safe
  • Network-based fail-over
  • Setting a specific BIG-IP Controller to be the active controller
  • Setting up active-active redundant controllers

The attributes you can configure for redundant systems are in Table 2.16.

The attributes you can configure for redundant systems.
Attributes Description
Synchronizing configurations This feature allows you to configure one controller and then synchronize the configuration with the other controller.
Fail-safe for interfaces Fail-safe for interfaces provides the ability to cause a controller to fail over if an interface is no longer generating traffic.
Mirroring connections and persistence information You can mirror connection and/or persistence information between redundant controllers. This enables you to provide seamless fail-over of client connections.
Gateway fail-safe This feature allows you to fail-over between two gateway routers.
Network-based fail-over You can configure the BIG-IP Controller to use the network to determine the status of the active controller.
Setting a dominant controller You can set up one controller in a pair to be the dominant active controller. The controller you set up as the dominant controller will always attempt to be active.
Active-active configuration The default mode for a BIG-IP Controller redundant system is Active/Standby. However, you can configure both controllers to run in active mode.

Preparing to use the synchronization command

Before you can use the bigpipe configsync command or the Configuration utility to synchronize domestic HA redundant BIG-IP Controllers, you must first run the config_failover command. This command performs the following tasks:

  • Checks for a fail-over IP address for the other controller in BIG/db.
  • Verifies that the AllowHosts entry in the /etc/sshd_config file includes the IP address of the other controller in the redundant configuration.
  • Runs the ssh-keygen command, which creates the security keys for the controller.
  • Shares the security keys with the other controller in the redundant system.

    To run the config_failover command, type the following command from the command line:

    config_failover

    The config_failover utility prompts you for the root password of the other controller in the redundant system before it generates the security keys for the BIG-IP Controller.

Synchronizing configurations between controllers

Once you complete the initial configuration on the first controller in the system, you can synchronize the configurations between the active unit and the standby unit. When you synchronize a configuration, the following configuration files are copied to the other BIG-IP Controller:

  • The common keys in BIG/db
  • /etc/bigip.conf
    The /etc/bigip.conf file stores virtual server and node definitions and settings, including node ping settings, the load balancing mode, and NAT and SNAT settings.
  • /etc/bigd.conf
    The /etc/bigd.conf file stores service check settings.
  • /etc/hosts.allow
    The /etc/hosts.allow file stores the IP addresses that are allowed to make administrative shell connections to the BIG-IP Controller.
  • /etc/hosts.deny
    The /etc/hosts.deny file stores the IP addresses that are not allowed to make administrative shell connections to the BIG-IP Controller.
  • User account files
  • /etc/ipfw.conf and /etc/ipfw.filt
    The /etc/ipfw.conf and /etc/ipfw.filt files store IP filter settings.
  • rc.sysctl
    The rc.sysctl file contains system control variable settings.
  • /etc/rateclass.conf
    The /etc/rateclass.conf file stores rate class definitions.
  • /etc/ipfwrate.conf and /etc/ipfwrate.filt
    The /etc/ipfwrate.conf and /etc/ipfwrate.filt files store IP filter settings for filters that also use rate classes.
  • /etc/snmpd.conf
    The /etc/snmpd.conf file stores SNMP configuration settings.

    If you use command line utilities to set configuration options, be sure to save the current configuration to the file before you use the configuration synchronization feature. Use the following bigpipe command to save the current configuration:

    bigpipe -s

Warning: If you are synchronizing with a controller that already has configuration information defined, we recommend that you back up that controller's original configuration file(s).

To synchronize the configuration using the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. On the toolbar, click the Sync Configuration button.
    The Sync Configuration screen opens.
  3. Click the Synchronize button.

To synchronize the configuration from the command line

You use the bigpipe configsync command to synchronize configurations. When you include the all option in the command, all the configuration files are synchronized between machines.

bigpipe configsync all

If you want to synchronize only the /etc/bigip.conf file, you can use the same command without any options:

bigpipe configsync

Configuring fail-safe settings

For maximum reliability, the BIG-IP Controller supports failure detection on both internal and external interface cards. When you arm the fail-safe option on an interface card, the BIG-IP Controller monitors network traffic going through the interface. If the BIG-IP Controller detects a loss of traffic on an interface when half of the fail-safe timeout has elapsed, it attempts to generate traffic. An interface attempts to generate network traffic by issuing ARP requests to nodes accessible through the interface. Also, an ARP request is generated for the default route if the default router is accessible from the interface. Any traffic through the interface, including a response to the ARP requests, averts a fail-over.

If the BIG-IP Controller does not receive traffic on the interface before the timer expires, it initiates a fail-over, switches control to the standby unit, and reboots.

Warning: You should arm the fail-safe option on an interface only after the BIG-IP Controller is in a stable production environment. Otherwise, routine network changes may cause fail-over unnecessarily.

Arming fail-safe on an interface

Each interface card installed on the BIG-IP Controller has a unique name, which you need to know when you set the fail-safe option on a particular interface card. You can view interface card names in the Configuration utility, or you can use the bigpipe interface command to display interface names on the command line.

To arm fail-safe on an interface using the Configuration utility

  1. In the navigation pane, click NICs (network interface cards).
    The Network Interface Cards list opens and displays each installed NIC.
  2. Select an interface name.
    The Network Interface Card Properties screen opens.
  3. Check Arm Failsafe to turn on the fail-safe option for the selected interface.
  4. In the Timeout box, type the maximum time allowed for a loss of network traffic before a fail-over occurs.
  5. Click the Apply button.

To arm fail-safe on an interface from the command line

One of the required parameters for the bigpipe interface command is the name of the interface. If you need to look up the names of the installed interface cards, use the bigpipe interface command with the show keyword:

bigpipe interface show

To arm fail-safe on a particular interface, first set the timeout with the bigpipe interface command, and then use the failsafe arm keyword to arm the interface:

bigpipe interface <ifname> timeout <seconds>

bigpipe interface <ifname> failsafe arm

For example, suppose you have an external interface named exp0 and an internal interface named exp1. To arm the fail-safe option on both cards with a timeout of 30 seconds, issue the following commands:

bigpipe interface exp0 timeout 30

bigpipe interface exp1 timeout 30

bigpipe interface exp0 failsafe arm

bigpipe interface exp1 failsafe arm

Mirroring connection and persistence information

When the fail-over process transfers the active controller duties to a standby controller, your site's connection capability returns so quickly that the outage is barely noticeable. By preparing a redundant system for the possibility of fail-over, you maintain your site's reliability and availability in advance. Fail-over alone, however, is not enough to preserve the connections and transactions on your servers at the moment of fail-over; unless you have enabled mirroring, they are dropped as the active controller goes down.

The mirror feature on BIG-IP Controllers is a specialized, ongoing communication between the active and standby controllers that duplicates the active controller's real-time connection or persistence information state on the standby controller. If mirroring has been enabled, fail-over can be seamless to such an extent that file transfers can proceed uninterrupted, customers making orders can complete transactions without interruption, and your servers can generally continue with whatever they were doing at the time of fail-over.

The mirror feature is intended for use with long-lived connections, such as FTP, Chat, and Telnet sessions. Mirroring is also effective for persistence information.

Warning: If you attempt to mirror all connections, the performance of the BIG-IP Controller may degrade.

Commands for mirroring

Table 2.17 contains the commands that support mirroring capabilities. For complete descriptions, syntax, and usage examples, see the BIG-IP Controller Reference Guide, BIG/pipe Command Reference.

Mirroring commands in BIG/pipe
BIG/pipe command      Options

bigpipe mirror        Options for global mirroring

bigpipe vip mirror    Options for mirroring connection and persistence information on a virtual server

bigpipe snat mirror   Options for mirroring secure NAT connections

Global mirroring on the BIG-IP Controller redundant system

You must enable mirroring on a redundant system at the global level before any specific types of connections or information are mirrored. You can, however, configure the specific types of mirroring first and then enable global mirroring to begin mirroring. The syntax of the command for setting global mirroring is:

bigpipe mirror enable | disable | show

To enable mirroring on a redundant system, use the following command:

bigpipe mirror enable

To disable mirroring on a redundant system, use the following command:

bigpipe mirror disable

To show the current status of mirroring on a redundant system, use the following command:

bigpipe mirror show

Mirroring virtual server state

Mirroring provides seamless recovery for current connections, persistence information, SSL persistence, or sticky persistence when a BIG-IP Controller fails. When you use the mirroring feature, the standby controller maintains the same state information as the active controller. Transactions such as FTP file transfers continue as though uninterrupted.

Since mirroring is not intended to be used for all connections and persistence, it must be specifically enabled for each virtual server.

To control mirroring for a virtual server, use the bigpipe vip mirror command to enable or disable mirroring of persistence information, connection information, or both. The syntax of the command is:

bigpipe vip <virt addr>:<port> mirror [ persist | conn ] \
enable | disable

Use persist to mirror persistence information for the virtual server. Use conn to mirror connection information for the virtual server. To display the current mirroring setting for a virtual server, use the following syntax:

bigpipe vip <virt addr>:<port> mirror [ persist | conn ] show

If you do not specify either persist, for persistence information, or conn, for connection information, the BIG-IP Controller assumes that you want to display both types of information.

Mirroring SNAT connections

SNAT connections are mirrored only if specifically enabled. You can enable SNAT connection mirroring by specific node address, and also by enabling mirroring on the default SNAT address. Use the following syntax to enable SNAT connection mirroring on a specific address:

bigpipe snat <node addr> [...<node addr>] mirror enable | disable

In the following example, the enable option turns on SNAT connection mirroring to the standby controller for SNAT connections originating from 192.168.225.100.

bigpipe snat 192.168.225.100 mirror enable

Use the following syntax to enable SNAT connection mirroring for the default SNAT address:

bigpipe snat default mirror enable | disable
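Taken together, a typical mirroring setup enables mirroring globally and then selects the specific connections to mirror. The following sketch combines the commands above; the virtual server address 192.168.200.10:80 is illustrative, and the SNAT address is taken from the earlier example:

```shell
# Enable mirroring globally on the redundant system
bigpipe mirror enable

# Mirror both connection and persistence information for one virtual server
# (192.168.200.10:80 is an illustrative address)
bigpipe vip 192.168.200.10:80 mirror conn enable
bigpipe vip 192.168.200.10:80 mirror persist enable

# Mirror SNAT connections originating from a specific node address
bigpipe snat 192.168.225.100 mirror enable

# Verify the current mirroring settings
bigpipe mirror show
bigpipe vip 192.168.200.10:80 mirror show
```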

Using gateway fail-safe

Fail-safe features on the BIG-IP Controller provide network failure detection based on network traffic. Gateway fail-safe monitors traffic between the active controller and the gateway router, protecting the system from a loss of the internet connection by triggering a fail-over when the gateway is unreachable for a specified duration.

You can configure gateway fail-safe in the Configuration utility or in BIG/db. If you configure gateway fail-safe in BIG/db, you can toggle it on and off with bigpipe commands.

Adding a gateway fail-safe check

When you set up a gateway fail-safe check using the Configuration utility, you need to provide the following information:

  • Name or IP address of the router (only one gateway can be configured for fail-safe)
  • Time interval (seconds) between pings sent to the router
  • Time-out period (seconds) to wait for replies before proceeding with fail-over

To configure gateway fail-safe in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. In the Gateway Fail-safe section of the screen, make the following entries:

    · Click the Enabled box.

    · In the Router box, type the IP address of the router you want to ping.

    · In the Ping (seconds) box, type the interval, in seconds, you want the BIG-IP Controller to wait before it pings the router.

    · In the Timeout (seconds) box, type the timeout value, in seconds. If the router does not respond to the ping within the number of seconds specified, the gateway is marked down.

  3. Click the Apply button.

To configure gateway fail-safe in BIG/db

To enable gateway fail-safe in BIG/db, you need to change the settings of three specific BIG/db database keys using the bigdba utility. The keys set the following values:

  • The IP address of the router
  • The ping interval
  • The timeout period

    To set these keys, type this command to open the BIG/db database:

    bigdba

    To set the IP address of the router, type the following entry, where <gateway IP> is the IP address, or host name, of the router you want to ping:

    Local.Bigip.GatewayPinger.Ipaddr=<gateway IP>

    To set the ping interval, type the following entry, where <seconds> is the number of seconds you want the BIG-IP Controller to wait before pinging the router:

    Local.Bigip.GatewayPinger.Pinginterval=<seconds>

    To set the timeout, type the following entry, where <seconds> is the number of seconds you want the BIG-IP Controller to wait before marking the router down:

    Local.Bigip.GatewayPinger.Timeout=<seconds>

    To close bigdba and save your changes, type this command and press the Enter key:

    quit

    For more information about BIG/db and using bigdba, see Supported BIG/db configuration keys, on page 8-1.

    Note: After you make these changes, you must restart bigd to activate the gateway pinger.
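Putting the steps above together, a complete bigdba session might look like the following. The gateway address 10.0.0.1, the five-second ping interval, and the 30-second timeout are illustrative values:

```shell
# Open the BIG/db database
bigdba
# At the bigdba prompt, set the three gateway fail-safe keys:
#   Local.Bigip.GatewayPinger.Ipaddr=10.0.0.1
#   Local.Bigip.GatewayPinger.Pinginterval=5
#   Local.Bigip.GatewayPinger.Timeout=30
# Then save the changes and exit:
#   quit

# Restart bigd to activate the gateway pinger
```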

Enabling gateway fail-safe

Gateway fail-safe monitoring can be toggled on or off from the command line using the bigpipe gateway command.

For example, arm the gateway fail-safe using the following command:

bigpipe gateway failsafe arm

To disarm fail-safe on the gateway, enter the following command:

bigpipe gateway failsafe disarm

To see the current fail-safe status for the gateway, enter the following command:

bigpipe gateway failsafe show

Gateway fail-safe messages

The destination for gateway fail-safe messages is set in the standard syslog configuration (/etc/syslog.conf), which directs these messages to the file /var/log/bigd. Each message is also written to the BIG-IP Controller console (/dev/console).

Using network-based fail-over

Network-based fail-over allows you to configure your redundant BIG-IP Controller to use the network to determine the status of the active controller. Network-based fail-over can be used in addition to, or instead of, hard-wired fail-over.

To configure network fail-over in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. In the Redundant Configuration section of the screen,
    click the Network Failover Enabled box.
  3. Click the Apply button.

To configure network-based fail-over in BIG/db

To enable network-based fail-over, you need to change the settings of specific BIG/db database keys using the bigdba utility. To enable network-based fail-over, the Common.Sys.Failover.Network key must be set to one (1). To set this value to one, type this command to open the BIG/db database:

bigdba

At the bigdba prompt, type the following entry:

Common.Sys.Failover.Network=1

To close bigdba and save your changes, type this command and press the Enter key:

quit

Other keys are available to lengthen the delay with which the standby controller detects a fail-over condition, and to lengthen the heartbeat interval from the active unit. To change the time required for the standby unit to notice a failure in the active unit, set the following value using the bigdba utility (the default is three seconds):

Common.Bigip.Cluster.StandbyTimeoutSec=<value>

To change the heartbeat interval from the active BIG-IP Controller, change the following value using bigdba (the default is one second):

Common.Bigip.Cluster.ActiveKeepAliveSec=<value>

For more information about BIG/db and using bigdba, see Supported BIG/db configuration keys, on page 8-1.
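For example, to enable network-based fail-over and lengthen both timers in one bigdba session (the five-second and two-second values are illustrative), the session would be:

```shell
# Open the BIG/db database
bigdba
# At the bigdba prompt, enter the key assignments:
#   Common.Sys.Failover.Network=1
#   Common.Bigip.Cluster.StandbyTimeoutSec=5
#   Common.Bigip.Cluster.ActiveKeepAliveSec=2
# Then save the changes and exit:
#   quit
```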

Setting a specific BIG-IP Controller to be the preferred active unit

Setting a preferred active controller means overlaying the basic behavior of a BIG-IP Controller with a preference toward being active. A controller that is set as the preferred active controller becomes active whenever the two controllers negotiate for active status.

To clarify how this differs from default behavior, contrast the basic behavior of a BIG-IP Controller in the following description. Each of the two BIG-IP Controllers in a redundant system has a built-in tendency to try to become the active controller. Each system attempts to become the active controller at boot time; if you boot two BIG-IP Controllers at the same time, the one that becomes the active controller is the one that boots up first. In a redundant configuration, if the BIG-IP Controllers are not configured with a preference for being the active or standby controller, either controller can become the active controller by becoming active first.

The active or standby preference for the BIG-IP Controller is defined by setting the appropriate startup parameters for sod (the switch over daemon) in /etc/rc.local. For more details on sod startup and functioning, see the BIG-IP Controller Reference Guide, System Utilities.

The following example shows how to set the controller to standby:

echo " sod."; /sbin/sod -force_slave 2> /dev/null

A controller that prefers to be standby can still become the active controller if it does not detect an active controller.

This example shows how to set a controller to active:

echo " sod."; /sbin/sod -force_master 2> /dev/null

A controller that prefers to be active can still serve as the standby controller when it is on a live redundant system that already has an active controller. For example, if a controller that preferred to be active failed over and was taken out of service for repair, it could then go back into service as the standby controller until the next time the redundant system needed an active controller, for example, at reboot.

Setting up active-active redundant controllers

You can use the active-active feature to simultaneously load balance traffic for different virtual addresses on redundant BIG-IP Controllers. Performance improves when both BIG-IP Controllers are in active service at the same time. In active-active mode, you configure virtual servers to be served by one of the two controllers. If one controller fails, the remaining BIG-IP Controller assumes the virtual servers of the failed machine. For this configuration to work, each controller has its own unit ID number. Each virtual server, NAT, or SNAT you create includes a unit number designation that determines which active controller handles its connections.

Note: If you do not want to use this feature, redundant BIG-IP Controllers operate in active/standby mode by default.

Warning: MAC masquerading is not supported in active-active mode.

Configuring an active-active system

The default mode for BIG-IP Controller redundant systems is active/standby. You must take several steps in order to use active-active mode on the redundant BIG-IP Controller system. Details follow this brief list.

  1. Configure an additional shared IP alias on the internal interface for each unit. You must have two shared aliases for the redundant system.
  2. Set the routing configuration on the servers load balanced by the active-active BIG-IP Controller system.
  3. Make sure the BIG/db key Local.Bigip.Failover.UnitId is 1 for one of the controllers, and 2 for the other.
  4. Enable active-active mode by setting the BIG/db key Common.Bigip.Failover.ActiveMode to 1.
  5. Define the virtual servers, NATs, and/or SNATs to run on either unit 2 or on unit 1.
  6. Update the fail-over daemon (/sbin/sod) with the configuration changes made in BIG/db.
  7. Synchronize the configuration.
  8. Transition from active/standby to active-active.

    Note: We recommend making all of these configuration changes on one controller and then synchronizing the configuration.
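The command-line portions of the steps above can be summarized as the following sequence, run on one controller. The IP alias is illustrative, and the BIG/db keys are entered at the bigdba prompt; each step is described in detail in the sections that follow:

```shell
# Step 1: add the second shared IP alias on the internal interface
bigpipe ipalias exp1 172.20.10.2 netmask 255.255.0.0 unit 2

# Steps 3 and 4: check the unit ID and enable active-active in BIG/db
bigdba
#   Local.Bigip.Failover.UnitId=1
#   Common.Bigip.Failover.ActiveMode=1
#   quit

# Step 6: update the fail-over daemon with the BIG/db changes
bigpipe failover init

# Step 7: synchronize the configuration to the peer
bigpipe configsync all
```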

Step 1: Configure an additional shared IP alias

When you configure a redundant system, you enter a shared IP alias. In active/standby mode, this shared IP alias runs on the active controller. You can determine if you already have a shared IP alias by running the bigpipe interface command. If you have one, it is probably configured as belonging to unit one.

In an active-active configuration, each BIG-IP Controller must have a shared IP alias on the internal, source processing, interface. This is the address to which the servers behind the BIG-IP Controller route traffic. Since you already have a shared IP alias for one controller, add a shared IP alias for the other controller by using the bigpipe ipalias command. For example:

bigpipe ipalias exp1 172.20.10.2 netmask 255.255.0.0 unit 2

If you do not have a shared IP alias for unit 1, add one with the same bigpipe ipalias command, specifying unit 1. To view the IP aliases for the controller, type the bigpipe interface command on the command line.

If the BIG-IP Controller fails over, its shared IP address is assumed by the remaining unit and the servers continue routing through the same IP address.

You can configure additional shared IP aliases on an external, destination processing, interface of each BIG-IP Controller, as well. This makes it possible for routers to route to a virtual server using vip noarp mode.

To configure the additional shared IP alias in the Configuration utility

  1. In the navigation pane, click NICs.
    The Network Interface Cards screen opens.
  2. On the Network Interface Cards screen, click the name of the interface you want to configure.
    The Network Interface Card properties screen opens. You must choose an internal (source processing) interface.
  3. In the Redundant Configuration section, check for a Unit 1 Alias and a Unit 2 Alias.
  4. If one of the unit aliases is not present, type in an alias for the unit.
  5. Click the Apply button.

    Repeat this procedure on the other controller or use the Sync Configuration option in the toolbar of the BIG-IP System Properties page. Note that these settings should be identical on both controllers.

Step 2: Configuring servers for active-active

The active-active feature imposes some restrictions on the servers behind the BIG-IP Controllers. The servers must be logically segregated to accept connections from one BIG-IP Controller or the other. To do this, set each server's default route to the shared IP alias of the BIG-IP Controller (see Step 1: Configure an additional shared IP alias) from which it accepts connections. In the case of a fail-over, the surviving BIG-IP Controller assumes the internal IP alias of the failed machine, so each server retains a valid default route.
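On a UNIX-based server, for example, the default route can point at the shared internal IP alias of the controller that should handle its connections. The address below matches the illustrative unit 2 alias from Step 1, and the exact route command syntax varies by server operating system:

```shell
# Servers assigned to unit 2 route through unit 2's shared internal alias
# (syntax shown is typical BSD-style; use your OS's equivalent)
route add default 172.20.10.2
```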

Step 3: Check the BIG-IP Controller unit number

Using the bigdba utility, check the value of the BIG/db key Local.Bigip.Failover.UnitId. This value should be 1 for one of the controllers, and 2 for the other.

Each BIG-IP Controller in an active-active configuration requires a unit number: either a 1 or a 2. The First-Time Boot utility allows a user to specify a unit number for each BIG-IP Controller. In an active-active configuration, specify the unit number when you configure virtual addresses, NATs, and SNATs.

Note: You can only set this value directly in BIG/db. It cannot be set in the Configuration utility.

To check the BIG-IP Controller unit number in the Configuration utility

Follow this procedure on each BIG-IP Controller in a redundant system to check the BIG-IP Controller unit number with the Configuration utility:

  1. Open the Configuration utility.
  2. In the navigation pane, check the description next to the BIG-IP Controller icon.
    The status of the controller is Active and the unit number is either 1 or 2.

Step 4: Active-active BIG/db configuration parameters

To enable active-active, you must set the Common.Bigip.Failover.ActiveMode key to one (1). To set this value to one, follow these steps:

Type the following command to open the BIG/db database:

bigdba

At the bigdba prompt, type the following entry:

Common.Bigip.Failover.ActiveMode=1

Type quit to exit BIG/db and save the configuration.

The default for this entry is off and fail-over runs in active/standby mode.

To enable active-active in the Configuration utility

Perform this procedure on the active controller first. After active-active mode is enabled there, follow the same procedure on the standby controller. After you perform this procedure on the standby controller, wait 30 seconds, and then click the Refresh button (Microsoft Internet Explorer) or Reload button (Netscape Navigator) in the browser for both controllers.

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. Click the Active-Active Mode Enabled check box.
  3. Click the Apply button.

Step 5: Virtual address configuration

Both BIG-IP Controllers must have the exact same configuration file (/etc/bigip.conf). When a virtual server is defined, it must be defined with a unit number that specifies which BIG-IP Controller handles connections for the virtual server. Each BIG-IP Controller has a unit number, 1 or 2, and serves the virtual servers with corresponding unit numbers. If one of the BIG-IP Controllers fails over, the remaining BIG-IP Controller processes the connections for virtual servers for both units.

Defining virtual servers, NATs, and SNATs on active-active controllers

Use the following commands to define virtual servers, NATs, and SNATs on active-active controllers:

bigpipe vip <virt addr>:<port> define [unit <1|2>]
<node addr>:<port>

bigpipe nat <internal_ip> to <external_ip> ... [unit <1|2>]

bigpipe snat map <orig_ip> to <trans_ip> ... [unit <1|2>]

Note: If not specified, the unit number defaults to 1.

Each BIG-IP Controller in an active-active configuration requires a unit number, either 1 or 2, which you can specify with the First-Time Boot utility.

Note: You must specify the unit number when defining virtual servers, NATs, and SNATs. You cannot add the unit number at a later time without redefining the virtual server, NAT, or SNAT.
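For example, the following commands use the syntax above to define a virtual server and a NAT on unit 1, and a SNAT on unit 2. All addresses are illustrative:

```shell
# Virtual server handled by unit 1, load balancing to one node
bigpipe vip 192.168.200.10:80 define unit 1 172.20.10.20:80

# NAT handled by unit 1
bigpipe nat 172.20.10.30 to 192.168.200.30 unit 1

# SNAT handled by unit 2
bigpipe snat map 172.20.10.40 to 192.168.200.40 unit 2
```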

To define virtual servers, NATs, and SNATs on active-active controllers in the Configuration utility

The following example illustrates the unit ID number in a virtual server definition. Although the steps to create a NAT or SNAT are slightly different, the unit ID number serves the same purpose.

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the toolbar, click the Add Virtual Server button.
  3. Type in the address, netmask, and port for the virtual server.
  4. Click the Unit ID list and select the unit number for the virtual server.
    The connections served by this virtual server are managed by the controller assigned this unit ID.
  5. Complete the Resources section of the screen. For more information about individual settings, refer to the online help.
  6. Click the Apply button.

Step 6: Update the fail-over daemon (/sbin/sod) with the configuration changes made in BIG/db

Active-active mode is implemented by the fail-over daemon (/sbin/sod). If you change a BIG/db key that affects the fail-over daemon (keys that contain the word Failover) the fail-over daemon needs to be updated with the change. To update the fail-over daemon, type the following command:

bigpipe failover init

Step 7: Synchronize the configuration

After you complete steps 1 through 6 on each controller in the active-active system, synchronize the configurations on the controllers with the Configuration utility, or from the command line.

To synchronize the configuration in the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP Properties screen opens.
  2. In the toolbar, click the Sync Configuration button.
    The Synchronize Configuration screen opens.
  3. Click the Synchronize button.

To synchronize the configuration from the command line

To synchronize the configuration between two controllers from the command line, use the following command:

bigpipe configsync all

Step 8: Transition from active/standby to active-active

To transition from active/standby to active-active, type the following command on the active BIG-IP Controller:

bigpipe failover standby

This command puts the active BIG-IP Controller into partial active-active mode. To complete the transition, type the following command on the other BIG-IP Controller, which now considers itself the active unit:

bigpipe failover standby

Now both units are in active-active mode.

Note: This step is not required if you enable active-active in the Configuration utility. The transition is made during Step 4: Active-active BIG/db configuration parameters, on page 2-104.

Active-active system fail-over

Before a failure in an active-active installation, one BIG-IP Controller is servicing all requests for virtual servers configured on unit 1, and the other BIG-IP Controller is servicing all requests for virtual servers configured on unit 2. If one of the BIG-IP Controllers fails, the remaining BIG-IP Controller handles all requests for virtual servers configured to run on unit 1 and also those configured to run on unit 2. In other words, the surviving BIG-IP Controller is acting as both units 1 and 2.

If the BIG-IP Controller that failed reboots, it re-assumes connections for the unit number with which it was configured. The BIG-IP Controller that was running as both units stops accepting connections for the unit number that has resumed service. Both machines are now active.

When the unit that was running both unit numbers surrenders a unit number to the rebooted machine, all connections are lost that are now supposed to run on the rebooted machine, unless they were mirrored connections.

Disabling automatic fail back

In some cases, you may not want connections to automatically fail back. The fact that a machine has resumed operation may not be reason enough to disrupt connections that are running on the BIG-IP Controller serving as both units. Note that because of addressing issues, it is not possible to slowly drain away connections from the machine that was running as both units, giving new requests to the recently rebooted machine.

To disable automatic fail back, set the BIG/db key Common.Bigip.Failover.ManFailBack to 1. When you set this key to 1, a BIG-IP Controller running as both units does not surrender a unit number to a rebooted peer until it receives the bigpipe failover failback command. By default, this key is not set.
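For example, to require manual fail back, set the key at the bigdba prompt and refresh the fail-over daemon; later, the failback command lets the rebooted peer reclaim its unit number:

```shell
# Set the key in BIG/db
bigdba
#   Common.Bigip.Failover.ManFailBack=1
#   quit

# Update the fail-over daemon with the BIG/db change
bigpipe failover init

# Later, to allow the rebooted peer to reclaim its unit number:
bigpipe failover failback
```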

Taking an active-active controller out of service

You can use the bigpipe failover standby command to place an active controller in standby mode. In active-active mode, type the following command to place one of the active controllers in standby mode:

bigpipe failover standby

This command causes the BIG-IP Controller to surrender its unit number to its peer. That is, its peer now becomes both units 1 and 2, and the BIG-IP Controller appears out of service from a fail-over perspective because it has no unit numbers. You can make any changes, such as configuration changes, before returning the machine to normal operation.

Placing an active-active controller back in service if automatic failback is disabled

If the Common.Bigip.Failover.ManFailBack key is set to 0 (off), normal operation is restored when you issue a bigpipe failover failback command on the controller with no unit number.

In active-active mode, type the following command to place a standby controller back in service:

bigpipe failover failback

This command causes the BIG-IP Controller to resume its unit number. That is, the peer now relinquishes the unit number of the controller that has resumed service.

However, if the Common.Bigip.Failover.ManFailBack key is set to 1 (on), normal operations are restored when you issue a bigpipe failover failback command on the controller running with both unit numbers.

Additional active-active BIG/db configuration parameters

There are several new BIG/db parameters for active-active mode.

Common.Bigip.Failover.ActiveMode
Set this BIG/db parameter to 1 to enable active-active mode. The default setting is off, and redundant systems run in active/standby mode.

Local.Bigip.Failover.UnitId
This is the default unit number of the BIG-IP Controller. This value is set by the First-Time Boot utility or when you upgrade your controllers to this version of the BIG-IP Controller.

Common.Bigip.Failover.ManFailBack
Set this key to 1 so that manual intervention (issuing the bigpipe failover failback command) is required before a BIG-IP Controller running both unit numbers surrenders a unit number to its peer. This feature is off by default; fail-back is automatic. For more details, see the section Active-active system fail-over, on page 2-108.

Common.Bigip.Failover.NoSyncTime
Set this key to 1 if you do not want to synchronize the time of the two BIG-IP Controllers. Normally, their time is synchronized. In some cases this is not desirable, for example, if you are running ntpd.

Common.Bigip.Failover.AwaitPeerDeadDelay
The BIG-IP Controller checks to see that its peer is still alive at this rate (in seconds). The default value for this parameter is one second.

Common.Bigip.Failover.AwaitPeerAliveDelay
The rate (in seconds) at which the BIG-IP Controller checks the status of its peer while waiting for the peer to come up. The default value of this parameter is three seconds.

Common.Bigip.Failover.DbgFile
If a file name is specified, the fail-over daemon logs state change information in this file. This value is not set by default.

Common.Bigip.Failover.PrintPeerState
Causes the fail-over daemon to periodically write the state of its connections to its peer (hard-wired and/or network) to the log file Common.Bigip.Failover.DbgFile.

Additional commands for displaying active vs. mirrored data

The dump commands explicitly show those connections (and other objects) that are active on the BIG-IP Controller, and those that are standby connections for the peer BIG-IP Controller. In prior versions of the BIG-IP Controller, one controller is the active unit and the other is the standby. When the bigpipe conn dump command is issued on the active unit, each of the connections shown is active. Similarly, when the bigpipe conn dump command is issued on the standby unit, it is clear that each of the connections listed is a standby connection. These standby connections are created by mirroring the active connections on the standby unit.

In an active-active installation, each unit can be considered a standby for its peer BIG-IP Controller. By default, the dump command only shows items that are active on the given unit. To see standby items you must use the mirror qualifier. You can use the following commands with the mirror option:

bigpipe conn dump [mirror]

bigpipe vip persist dump [mirror]

bigpipe sticky dump [mirror]

Also, the bigpipe snat show command output has been modified to show whether a connection listed is an active connection or a mirror connection.

Specific active-active bigpipe commands

Several specific commands are included in bigpipe to reflect new or changed functionality.

bigpipe failover init

This command causes the fail-over daemon (/sbin/sod) to read the BIG/db database and refresh its parameters.

bigpipe failover failback

After a bigpipe failover standby command is issued, issue this command to allow the BIG-IP Controller to resume normal operation. If manual fail back is enabled, this command causes a BIG-IP Controller that is running as both units to release a unit number to its peer unit when the peer becomes active. You can use the following commands to view the unit number on the controller you are logged into:

bigpipe unit [show]

To view the unit number, or numbers, of the peer BIG-IP Controllers in a redundant system, type the following command:

bigpipe unit peer [show]

Running mixed versions of BIG-IP Controller software in active-active mode

The BIG-IP Controller provides the option to install a new version of the BIG-IP Controller software on one BIG-IP Controller, while the other BIG-IP Controller runs a previous production version of the software. This allows you to fail back and forth between the two units, testing the new software yet having the ability to return to the prior installation.

This is possible with the new fail-over software in version 3.1, whether using active-active mode or active/standby mode. However, there are some exceptions:

State mirroring is not compatible between version 3.0 and prior versions of the software. Network fail-over is also not compatible.

If you are running the BIG-IP Controller version 3.1 in active-active mode, you should assign unit two to the BIG-IP Controller running version 3.1.

Returning an active-active installation to active/standby mode

Returning to active/standby mode from active-active mode is relatively simple, in that only a few settings need to be undone.

  1. Enable active/standby mode by setting the BIG/db key Common.Bigip.Failover.ActiveMode to 0.
  2. Update the fail-over daemon with the change by typing bigpipe failover init.
  3. To synchronize the configuration, type the command bigpipe configsync all.
  4. Since each BIG-IP Controller is an active unit, type the command bigpipe failover standby on each controller. This transitions each controller into active/standby mode.

    When in active/standby mode, the active BIG-IP Controller runs all objects (virtual servers, SNATs, and NATs) that are defined to run on unit 1 or unit 2. It is not necessary to redefine virtual servers, SNATs, or NATs when you transition from active-active mode to active/standby mode.

Rule

You can create a rule that references two or more load balancing pools. In other words, a rule selects a pool for a virtual server. A rule is referenced by a 1- to 31-character name. When a packet arrives that is destined for a virtual server that does not match a current connection, the BIG-IP Controller can select a pool by evaluating a virtual server rule to pick a node pool. The rule is configured to ask true or false questions such as:

  • HTTP header load-balancing: Does the packet data contain an HTTP request with a URI ending in cgi?
  • IP header load balancing: Does the source address of the packet begin with the octet 206?

The attributes you can configure for a rule are in Table 2.18.

The attributes you can configure for a rule.
Attributes Description
Pool selection based on HTTP request data This type of rule sends connections to a pool, or pools, based on HTTP header information you specify.
Pool selection based on IP packet header information This type of rule sends connections to a pool, or pools, based on IP header information you specify.
Cache control rule This type of rule is any rule that contains a cache statement. A cache control rule selects a pool based on HTTP header data. You cannot use it with FTP.

Pool selection based on HTTP request data

The rule specifies what action the BIG-IP Controller takes depending on whether a question is answered true or false. The rule may either select a pool or ask another question. For example, you may want a rule that states if the packet data contains an HTTP request with a URI ending in cgi, then load balance using the pool cgi_pool. Otherwise, load balance using the pool default_pool.

Figure 2.7 shows a rule with an HTTP request variable that illustrates this example:

Figure 2.7 A rule based on an HTTP header variable

 rule cgi_rule {    
if (http_uri ends_with "cgi") {
use ( cgi_pool )
}
else {
use ( default_pool )
}
}

Load balancing normally happens right after the BIG-IP Controller receives a packet that does not match a current connection. However, in the case of an HTTP request, the first packet is a TCP SYN packet that does not contain the HTTP request. In this case, the BIG-IP Controller proxies the TCP handshake with the client and resumes evaluating the rule when the packet containing the HTTP request is received. Once a pool and a server node have been selected, the BIG-IP Controller proxies the TCP handshake with the server node and then passes traffic normally.

Pool selection based on IP packet header information

In addition to the HTTP variables, you can also use IP packet header information such as the client_addr or ip_protocol variables to select a pool. For example, if you want to load balance based on part of the client's IP address, you may want a rule that states:

"All client requests with the first byte of their source address equal to 206 will load balance using a pool named clients_from_206 pool. All other requests will load balance using a pool named other_clients_pool."

Figure 2.8 shows a rule based on the client IP address variable that illustrates this example:

Figure 2.8 A rule based on the client address variable

 rule clients_from_206_rule {    
if ( client_addr equals 206.0.0.0 netmask 255.0.0.0 ) {
use ( clients_from_206 )
}
else {
use ( other_clients_pool )
}
}

Statements

A rule consists of statements. Rules support four kinds of statements:

  • An if statement asks a true or false question and, depending on the answer, decides what to do next.
  • A discard statement discards the request. This statement must be conditionally associated with an if statement.
  • A use statement uses a selected pool for load balancing. This statement must be conditionally associated with an if statement.
  • A cache statement uses a selected pool for load balancing. This statement can be conditionally associated with an if statement.

    The four possible statements expressed in command line syntax are:

    if (<question>) {<statement>} [else {<statement>}]

    discard

    use ( <pool_name> )

    cache ( <expressions> )
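As an illustration of how these statements combine, the following sketch discards requests for a restricted path and load balances everything else. The URI string and the pool name public_pool are hypothetical:

rule discard_rule {
if ( http_uri starts_with "/private" ) {
discard
}
else {
use ( public_pool )
}
}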

Questions (expressions)

A question or expression is asked by an if statement and has a true or false answer. A question or expression has two parts: a predicate (operator), and one or two subjects (operands).

There are two types of subjects (operands); some subjects change and some subjects stay the same.

  • Changing subjects are called variable operands.
  • Subjects that stay the same are called constant operands.

    A question, or expression, asks questions about variable operands by comparing their current value to constant operands with relational operators.

Constant operands

Possible constant operands are:

  • IP protocol constants, for example:
    UDP or TCP
  • IP addresses expressed in masked dot notation, for example:
    206.0.0.0 netmask 255.0.0.0
  • Strings of ASCII characters, for example:
    "pictures/bigip.gif"
  • Regular expression strings

Variable operands (variables)

Since variable operands change their value, they need to be referred to by a constant descriptive name. The variables available depend on the context in which the rule containing them is evaluated. Possible variable operands are:

  • IP packet header variables, such as:
    • Client request source IP address with the client_addr variable. The client_addr variable is replaced with an unmasked IP address.
    • IP protocol, UDP or TCP, with the ip_protocol variable. The ip_protocol variable is replaced with either the UDP or TCP protocol value.
  • HTTP request strings (see HTTP request string variables, on page 2-119). All HTTP request string variables are replaced with string literals.

The evaluation of a rule is triggered by the arrival of a packet. Therefore, variables in the rule may refer to features of the triggering packet. In the case of a rule containing questions about an HTTP request, the rule is evaluated in the context of the triggering TCP SYN packet until the first HTTP request question is encountered. After the proxy, the rule continues evaluation in the context of the HTTP request packet, and variables may refer to this packet. Before a variable is compared to the constant in a relational expression, it is replaced with its current value.

In a rule, relational operators compare two operands to form relational expressions. Possible relational operators and expressions are described in Table 2.19:

The relational operators
Expression Relational Operator
Are two IP addresses equal?

<address> equals <address>

Do a string and a regular expression match?

<variable_operand> matches_regex <regular_expression>

Are two strings identical?

<string> equals <string>

Is the second string a suffix of the first string?

<variable_operand> ends_with <string>

Is the second string a prefix of the first string?

<variable_operand> starts_with <string>

Does the first string contain the second string?

<variable_operand> contains <literal_string>

In a rule, logical operators modify an expression or connect two expressions together to form a logical expression. Possible logical operators and expressions are described in Table 2.20:

The logical operators
Expression Logical Operator
Is the expression not true?

not <expression>

Are both expressions true?

<expression> and <expression>

Is either expression true?

<expression> or <expression>
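For example, a rule might combine relational and logical operators as in the following sketch; the pool names and the address are hypothetical:

rule logic_rule {
if ( http_uri ends_with "gif" and not ( client_addr equals 10.0.0.0 netmask 255.0.0.0 ) ) {
use ( image_pool )
}
else {
use ( default_pool )
}
}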

HTTP request string variables

HTTP request variables are referred to in command line syntax by a predefined set of names. Internally, an HTTP request variable points to a method for extracting the desired string from the current HTTP request header data. Before an HTTP request variable is used in a relational expression, it is replaced with the extracted string. The allowed variable names are:

http_method
The http_method is the action of the HTTP request. Common values are GET or POST.

http_uri
The http_uri is the URL, but does not include the protocol and the fully qualified domain name (FQDN). For example, if the URL is "http://www.url.com/buy.asp", then the URI is "/buy.asp".

http_version
The http_version is the HTTP protocol version string. Possible values are HTTP/1.0 or HTTP/1.1.

http_host
The http_host is the value in the Host: header of the HTTP request. It indicates the actual FQDN that the client requested. Possible values are a FQDN or a host IP address in dot notation.

http_cookie <cookie name>
The http_cookie <cookie name> variable is the value in the Cookie: header for the specified cookie name. An HTTP cookie header line can contain one or more cookie name/value pairs. The http_cookie <cookie name> variable evaluates to the value of the cookie with the name <cookie name>.

For example, given a request with the following cookie header line:

Cookie: green-cookie=4; blue-cookie=horses

The variable http_cookie blue-cookie evaluates to the string horses. The variable http_cookie green-cookie evaluates to the string 4.

http_header <header_tag_string>
The variable http_header evaluates to the string following an HTTP header tag that you specify. For example, you can express the http_host variable as http_header "Host". In a rule specification, if you wanted to load balance based on the host name "andrew", the rule might look like this:

if ( http_header "Host" starts_with "andrew" ) { use ( andrew_pool ) } else { use ( main_pool ) }

Configuring rules

You can create rules from the command line or with the Configuration utility. Each of these methods is described in this section.

To add a rule in the Configuration utility

  1. In the navigation pane, click Rules.
    This opens the Rules screen.
  2. In the toolbar, click the Add Rule button.
    The Add Rule screen opens.
  3. In the Rule Name box, type in the name you want to use for the rule.
  4. In the Text box, type in a rule. Note that you should not enclose the rule with curly braces { } as you do when you create a rule directly in the bigip.conf file.
  5. You can type in the rule as an unbroken line, or you can use the Enter key to add line breaks.
  6. Click the Add button to add the rule to the BIG-IP Controller configuration.

To define a rule from the command line

To define a rule from the command line, use the following syntax:

bigpipe rule <rule_name> ' { <if statement> } '

For more information about the elements of a rule, see Table 2.22, on page 2-122.
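For example, the cgi_rule shown in Figure 2.7 could be entered from the command line as a single statement:

bigpipe rule cgi_rule ' { if ( http_uri ends_with "cgi" ) { use ( cgi_pool ) } else { use ( default_pool ) } } '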

Configuring virtual servers that reference rules

Using either the Configuration utility or the command line, you can define a virtual server that references a rule.

To configure a virtual server that references a rule in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. Add the attributes you want for the virtual server such as Address, Port, Unit ID, and Interface.
  3. In the Resources section, click Rule.
  4. In the Rule list, select the rule you want to apply to the virtual server.
  5. Click the Apply button.

To configure a virtual server that references a rule from the command line

There are several elements required for defining a virtual server that references a rule from the command line:

bigpipe vip <virt_serv_key> { <vip_options> <rule_name_reference> }

Each of these elements is described in Table 2.21:

The command line rule elements
Rule element Description
<virt_serv_key> A virtual server key definition:

<virtual_address>:<virt_port> [<interface_name>] [unit <ID>]

<vip_options> Virtual server options such as IP netmask and broadcast address. For more information, see the BIG-IP Controller Reference Guide, bigpipe Command Reference.
<rule_name_reference> A rule name reference. Rule names are strings of 1 to 31 characters.

use rule <rule_name>

Note: You must define a pool before you can define a rule that references the pool.
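For example, the following command defines a virtual server that references the cgi_rule shown earlier; the virtual address is hypothetical:

bigpipe vip 11.11.11.100:80 { use rule cgi_rule }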

Table 2.22 contains descriptions of all the elements you can use to create rules.

The elements you can use to construct rules
Element Description
A rule definition is

rule { <if_statement> }

A statement is

<use_statement>
<if_statement>
discard

<cache_statement>

A use statement

use ( <pool_name> )

An if statement

if ( <expression> ) { <statement> }
[ else { <statement> } ]

An expression

<literal>
<variable>
( <expression> )
exists <variable>
not <expression>
<expression> <binary_operator> <expression>

IP protocol constants

UDP

TCP

literal

<regex_literal>
<string_literal>
<address_literal>

A regular expression literal Is a string of 1 to 63 characters enclosed in quotes that may contain regular expressions
A string literal Is a string of 1 to 63 characters enclosed in quotes
An address literal

<dot_notation_longword> [netmask <dot_notation_longword>]

Dot notation longword

<0-255>.<0-255>.<0-255>.<0-255>

variable

http_method
http_header <header tag>
http_version
http_uri
http_host
http_cookie <cookie_name>
client_addr

ip_protocol

binary operator

or
and
contains
matches
equals
starts_with
ends_with
matches_regex


Cache statement syntax

A cache statement may be either the only statement in a rule or it may be nested within an if statement. The syntax of a cache statement is:

Figure 2.9 An example of cache statement syntax

 cache ( <expression> ) {    
origin_pool <pool_name>
cache_pool <pool_name>
[ hot_pool <pool_name> ]
[ hot_threshold <hit_rate> ]
[ cool_threshold <hit_rate> ]
[ hit_period <seconds> ]
[ content_hash_size <sets_in_content_hash> ]
}

The following table describes the cache rule syntax:

Description of rule syntax
Rule Syntax Description
origin_pool <pool_name> This required attribute specifies a pool of servers with all the content, to which requests are load balanced when the requested content is not cacheable, when all the cache servers are unavailable, or when you use a BIG-IP Controller to redirect a miss request from a cache.
cache_pool <pool_name> This required attribute specifies a pool of cache servers to which requests are directed to optimize cache performance.
hot_pool <pool_name> This optional attribute specifies a pool of servers that contain content to which requests are load balanced when the requested content is frequently requested (hot). If you specify any of the following attributes in this table, the hot_pool attribute is required.
hot_threshold <hit_rate> This optional attribute specifies the minimum number of requests for content that cause the content to change from cool to hot at the end of the period (hit_period).
cool_threshold <hit_rate> This optional attribute specifies the maximum number of requests for specified content that cause the content to change from hot to cool at the end of the period.
hit_period <seconds> This optional attribute specifies the period in seconds over which to count requests for particular content before deciding whether to change the hot or cool state of the content.
content_hash_size <sets_in_content_hash> This optional attribute specifies the number of subsets into which the content is divided when calculating whether content is hot or cool. The requests for all content in the same subset are summed, and a single hot or cool state is assigned to each subset. This attribute should be within the same order of magnitude as the actual number of pieces of content. For example, if the entire site is composed of 500,000 pieces of content, a content_hash_size of 100,000 would be typical.

A cache statement returns either the origin pool, the hot pool, or the cache pool. When the cache pool is selected, it is accompanied by the indicated node address and port. When a rule returns both a pool and a node, the BIG-IP Controller does not do any additional load balancing or persistence processing.

The following is an example of a rule containing a cache rule statement.

Figure 2.10 An example of a cache load balancing rule

 rule my_rule {    
if ( http_host starts_with "dogfood" ) {
cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) {
origin_pool origin_server
cache_pool cache_servers
hot_pool cache_servers
hot_threshold 100
cool_threshold 10
hit_period 60
content_hash_size 1024
}
}
else {
use ( catfood_servers )
}
}

Using a default SNAT if your origin server is external to the BIG-IP Controller

If your origin server is external to the BIG-IP Controller, you must configure a default SNAT, enable source translation on the external interface, and set each origin server node address to remote, in addition to creating a cache rule. This configuration allows source translation of non-cacheable requests directed to the origin server without causing other incoming traffic to be source translated. To create a default SNAT, type the following command:

bigpipe snat map default to <trans_ip_addr>

Substitute the IP address of the default SNAT for <trans_ip_addr>.

To enable source translation on the external interface

To enable source translation on the external interface, type the following command:

bigpipe interface <ext_interface> source_translation enable

Substitute the name of the external interface for <ext_interface>.

To set the remote attribute for each origin server in the BIG-IP Controller configuration

To add the remote origin server to the BIG-IP Controller configuration, use the following command:

bigpipe node <origin_ip> remote

Substitute the IP address of the origin server for <origin_ip>.
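Taken together, the preceding steps might look like the following, where the SNAT address, interface name, and origin server address are all hypothetical:

bigpipe snat map default to 10.1.1.100

bigpipe interface exp0 source_translation enable

bigpipe node 10.1.1.50 remote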

Additional rule examples

This section includes additional examples of rules. The following rule examples are included:

  • Cookie rule
  • Language rule
  • Cacheable contents rule
  • AOL rule
  • Protocol specific rule

Cookie rule

This example is a cookie rule that load balances based on whether the user-id cookie contains the string VIP.

Figure 2.11 An example cookie rule

 if ( exists http_cookie "user-id" and    
http_cookie "user-id" contains "VIP" ) {
use ( vip_pool )
}
else {
use ( other_pool )
}

Language rule

This is an example of a rule that load balances based on the language requested by the browser:

Figure 2.12 An example of a rule that load balances based on the language requested by the browser

 if ( exists http_header "Accept-Language" ) {    
if ( http_header "Accept-Language" equals "fr" ) {
use ( french_pool )
}
else {
if ( http_header "Accept-Language" equals "sp" ) {
use ( spanish_pool )
}
else {
use ( english_pool )
}
}
}
else {
use ( english_pool )
}

Cache content rule

This is an example of a rule that you can use to send cache content, such as gifs, to a specific pool.

Figure 2.13 An example of a cache content rule

 if ( http_uri ends_with "gif" or    
http_uri ends_with "html" ) {
use ( cache_pool )
}
else {
use ( server_pool )
}

AOL rule

This is an example of a rule that you can use to load balance incoming AOL connections.

Figure 2.14 An example of an AOL rule

 port 80 443 enable    

pool aol_pool {
lb_method priority_member
member 12.0.0.31:80 priority 4
member 12.0.0.32:80 priority 3
member 12.0.0.33:80 priority 2
member 12.0.0.3:80 priority 1
}
pool other_pool {
lb_method round_robin
member 12.0.0.31:80
member 12.0.0.32:80
member 12.0.0.33:80
member 12.0.0.3:80
}
pool aol_pool_https {
lb_method priority_member
member 12.0.0.31:443 priority 4
member 12.0.0.32:443 priority 3
member 12.0.0.33:443 priority 2
member 12.0.0.3:443 priority 1
}
pool other_pool_https{
lb_method round_robin
member 12.0.0.31:443
member 12.0.0.32:443
member 12.0.0.33:443
member 12.0.0.3:443
}
rule aol_rule {
if ( client_addr equals 152.163.128.0 netmask 255.255.128.0
or client_addr equals 195.93.0.0 netmask 255.255.254.0
or client_addr equals 205.188.128.0 netmask 255.255.128.0 ) {
use ( aol_pool )
}
else {
use ( other_pool)
}
}
rule aol_rule_https {
if ( client_addr equals 152.163.128.0 netmask 255.255.128.0
or client_addr equals 195.93.0.0 netmask 255.255.254.0
or client_addr equals 205.188.128.0 netmask 255.255.128.0 ) {
use ( aol_pool_https )
}
else {
use ( other_pool_https)
}
}
vip 15.0.140.1:80 { use rule aol_rule }
vip 15.0.140.1:443 { use rule aol_rule_https special ssl 30 }

IP protocol specific rule

This is an example of a rule that you can use to send TCP DNS to the pool tcp_pool and UDP DNS to the pool udp_pool.

Figure 2.15 An example of an IP protocol rule

 rule myrule {     
if ( ip_protocol equals UDP ) {
use ( udp_pool )
}
else {
use ( tcp_pool )
}
}

Comparing load balancing configurations

You can use the method from previous versions of the BIG-IP Controller to define a virtual server with a single node list. However, with this version of the BIG-IP Controller, node list virtual servers are being phased out. Node lists use the global load balancing mode set on the BIG-IP Controller. The global mode cannot be set to the ratio_member, priority_member, least_conn_member, observed_member, or predictive_member load balancing modes. For an example of a node list, see Figure 2.16.

Figure 2.16 The node list method of defining virtual servers

 lb ratio    
vip 15.0.140.1:80 {
define 12.0.0.44:80 12.0.0.45:80
}
ratio {
12.0.0.44
} 1
ratio {
12.0.0.45
} 2

In contrast to a node list virtual server, you can share pools with a number of virtual servers on the BIG-IP Controller. For example, Figure 2.17 shows the gif_pool shared by two virtual servers:

Figure 2.17 An example of a pool shared by two virtual servers

 pool cgi_pool {    
lb_method ratio_member
member 12.0.0.44:80 ratio 1
member 12.0.0.45:80 ratio 2
}
pool gif_pool {
lb_method ratio_member
member 12.0.0.44:80 ratio 1
member 12.0.0.45:80 ratio 3
}
rule http_rule {
if ( http_uri ends_with "gif" ) {
use ( gif_pool )
}
else {
use ( cgi_pool )
}
}

vip 15.0.140.1:80 {
netmask 255.255.0.0 broadcast 15.0.255.255
use rule http_rule
}
vip 15.0.140.2:80 {
netmask 255.255.0.0 broadcast 15.0.255.255
use pool gif_pool
}

SNAT

When you define secure network address translations (SNATs), you can assign a single SNAT address to multiple nodes. Note that a SNAT address does not necessarily have to be unique; for example, it can match the IP address of a virtual server.

SNAT addresses have global properties that apply to all SNATs that you define in the BIG-IP Controller configuration as well as to the SNAT mappings you define. You can configure SNATs in the Configuration utility or from the command line.

The attributes you can configure for a SNAT are in Table 2.24.

The attributes you can configure for a SNAT
Attributes Description
Global SNAT properties Before you can configure a SNAT, you must configure global properties for all SNATs on the BIG-IP Controller.
Default SNAT If you do not wish to configure specific SNATs, you can configure a default SNAT.
Individual SNAT You can configure individual SNATs for specific hosts in the network.

Setting SNAT global properties

The SNAT feature supports three global properties that apply to all SNAT addresses:

  • Connection limits
    The connection limit applies to each node that uses a SNAT, and each individual SNAT can have a maximum of 50,000 simultaneous connections.
  • TCP idle connection timeout
    This timer defines the number of seconds that TCP connections initiated using a SNAT address are allowed to remain idle before being automatically disconnected.
  • UDP idle connection timeout
    This timer defines the number of seconds that UDP connections initiated using a SNAT address are allowed to remain idle before being automatically disconnected. This value should not be set to 0.

To configure SNAT global properties in the Configuration utility

  1. In the navigation pane, click Secure NATs.
    The Secure Network Address Translations screen opens.
  2. In the Connection Limit box, type the maximum number of connections you want to allow to each node using a SNAT. To turn connection limits off, set the limit to 0. If you turn connection limits on, keep in mind that each SNAT can support only 50,000 simultaneous connections.
  3. In the TCP Idle Connections box, type the number of seconds that TCP connections initiated by a node using a SNAT are allowed to remain idle.
  4. In the UDP Idle Connections box, type the number of seconds that UDP connections initiated by a node using a SNAT are allowed to remain idle. This value should not be set to 0.
  5. Click the Apply button.

To configure SNAT global properties on the command line

Configuring global properties for a SNAT requires that you enter three bigpipe commands. The following command sets the maximum number of connections you want to allow for each node using a SNAT.

bigpipe snat limit <value>

The following commands set the TCP and UDP idle connection timeouts:

bigpipe snat timeout tcp <seconds>

bigpipe snat timeout udp <seconds>
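For example, the following commands (the values are hypothetical) limit each node using a SNAT to 2000 connections and set typical idle connection timeouts:

bigpipe snat limit 2000

bigpipe snat timeout tcp 300

bigpipe snat timeout udp 60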

Configuring SNAT address mappings

Once you have configured the SNAT global properties, you can configure SNAT address mappings. The SNAT address mappings define each SNAT address, and also define the node or group of nodes that uses the SNAT address. Note that a SNAT address does not necessarily have to be unique; for example, it can match the IP address of a virtual server. A SNAT address cannot match an address already in use by a NAT, SNAT, or BIG-IP Controller address.

To configure a SNAT mapping in the Configuration utility

  1. In the navigation pane, click Secure NATs.
    The Secure Network Address Translations screen opens.
  2. On the toolbar, click Add SNAT.
    The Add SNAT screen opens.
  3. In the Translation Address box, type the IP address that you want to use as the alias IP address for the node(s).
  4. In the Interface box, you can select the external interface (destination processing) on which the SNAT address is to be used. Note that this setting applies only if your BIG-IP Controller has more than one destination processing interface.
  5. In the Original Address box, type the IP address of the node or nodes that are assigned to the SNAT. Click the add button (>>) to add the address to the Current List.
  6. To remove an address from the Current List, click the remove button (<<).
  7. Click the Apply button.

To configure a SNAT mapping on the command line

The bigpipe snat command defines one SNAT for one or more node addresses.

bigpipe snat map <node addr>... <node addr> to <SNAT_addr>

For example, the command below defines a secure network address translation for two nodes:

bigpipe snat map 192.168.75.50 192.168.75.51 to 192.168.100.10

Defining the default SNAT

Use the following syntax to define the default SNAT. If you use the netmask parameter and it is different from the external interface default netmask, the command sets the netmask and derives the broadcast address.

You can use the unit <unit ID> parameter to specify a unit in an active-active redundant configuration.

bigpipe snat map default to <translated_ip> [<ifname>] [unit <unit ID>] [netmask <ip>]
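For example, in an active-active redundant configuration you might map the default SNAT for unit 2 to a hypothetical translation address:

bigpipe snat map default to 192.168.100.20 unit 2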

Creating individual SNAT addresses

Use the following command syntax to create a SNAT mapping:

bigpipe snat map <orig_ip> [...<orig_ip>] to \
<SNAT ip> [<ifname>] [unit <unit ID>] [netmask <ip>]

If the netmask is different from the external interface default netmask, the command sets the netmask and derives the broadcast address.
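For example, the following command (the addresses are hypothetical) maps two nodes to one SNAT address with a non-default netmask:

bigpipe snat map 192.168.75.50 192.168.75.51 to \
192.168.100.10 netmask 255.255.255.0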

Deleting SNAT Addresses

The following syntax deletes a specific SNAT:

bigpipe snat <SNAT ip> | default delete

Showing SNAT mappings

The following bigpipe command shows mappings:

bigpipe snat [<SNAT ip>] [...<SNAT ip>] show

bigpipe snat default show

The <SNAT ip> can be either the translated or original IP address of the SNAT.

The following command shows the current SNAT connections:

bigpipe snat [<SNAT ip>] [...<SNAT ip>] dump [ verbose ]

bigpipe snat default dump [ verbose ]

The optional verbose keyword provides more detailed output.

The following command prints the global SNAT settings:

bigpipe snat globals show

Enabling mirroring for redundant systems

The following example sets SNAT mirroring for all SNAT connections originating at 192.168.225.100:

bigpipe snat 192.168.225.100 mirror enable

Clearing statistics

You can reset statistics by node or by SNAT address. Use the following syntax to clear all statistics for one or more nodes:

bigpipe snat <node ip> [ ...<node ip> ] stats reset

Use the following syntax to clear all statistics for one or more SNAT addresses:

bigpipe snat <SNAT ip> [ ...<SNAT ip> ] stats reset

Use the following command to reset the statistics to zero for the default:

bigpipe snat default stats reset

Timer settings

There are two essential timer settings that you need to configure:

  • The node ping timer defines how often the BIG-IP Controller will ping node addresses to verify whether a node is up or down. It also defines how long the BIG-IP Controller waits for a response from a node before determining that the node is unresponsive and marking the node down.
  • The idle connection timer defines how long an inactive connection is allowed to remain open before the BIG-IP Controller deletes the record of the connection, closing it and disconnecting the client.

    The service check timer is optional, and you need to set it only if you want the BIG-IP Controller to check if a service, or even specific content, is available on a particular node.

    The attributes you can configure for timer settings are in Table 2.25.

The attributes you can configure for timer settings
Attributes Description
Node ping timer This timer determines how often the BIG-IP Controller checks to see if nodes are up or down.
Timer for reaping idle connections This timer reaps TCP and UDP connections that have been idle for a specified time.
Service check timer This timer determines whether a server is available by verifying that a particular service is running on a node.
Service checking wildcard ports To service check wildcard virtual servers and ports, you must specify a port, other than port 0, to be service checked.

Note: If you plan to use simple service checks, or ECV or EAV service checks, you need to set the service check timer.

Setting the node ping timer

The node ping timer is an essential setting on the BIG-IP Controller that determines how often the BIG-IP Controller checks node addresses to see whether they are up and available or down and unavailable. The node ping timer setting applies to all nodes configured for use by the BIG-IP Controller, and it is part of the BIG-IP Controller system properties.

The node ping timer sets the amount of time that a server has to respond to a BIG-IP Controller ping in order for the server to be marked up. If a server fails to respond within the specified time, the BIG-IP Controller assumes that the server is down, and the BIG-IP Controller no longer sends packets to the services hosted by the server. If the server responds to the next ping, or to subsequent pings, the BIG-IP Controller then marks the server up, and resumes sending packets to those services.

Note: If the Node ping timer (timeout_node) interval is shorter than the Service timer (timeout_svc) setting, a node can be marked down before the services on the node are marked down.

To set the node ping timer using the Configuration utility

  1. In the navigation pane, click the BIG-IP Controller icon.
    The BIG-IP System Properties screen opens.
  2. In the Node Ping section of the table, in the Ping box, type the frequency (in seconds) at which you want the BIG-IP Controller to ping each node address it manages. A setting of 5 seconds is adequate for most configurations.
  3. In the Node Ping section of the table, in the Timeout box, type the number of seconds you want the BIG-IP Controller to wait to receive a response to the ping. If the BIG-IP Controller does not receive a response to the ping before the node ping timeout expires, the BIG-IP Controller marks the node down and does not use it for load balancing. A setting of 16 seconds is adequate for most configurations.

To set the node ping timer from the command line

To define node ping settings, you use two commands. First, you set the node ping frequency using the bigpipe tping_node command, and then you set the node ping timer using the bigpipe timeout_node command.

bigpipe tping_node <seconds>

bigpipe timeout_node <seconds>

For example, the following commands set the ping frequency to 5 seconds and the timer to 16 seconds, which should be adequate for most configurations.

bigpipe tping_node 5

bigpipe timeout_node 16

Displaying the current timeout value

Use the following command to display the current timeout setting for node ping:

bigpipe timeout_node show

Displaying the current node ping setting

Use the following command to display the current node ping setting:

bigpipe tping_node show

Setting a timeout value for node ping

Use the following syntax to set the timeout setting for node ping:

bigpipe timeout_node <seconds>

The sample command below sets the timeout to 33 seconds.

bigpipe timeout_node 33

Disabling node ping

To disable node ping, you simply set the node ping timeout value to 0 (zero):

bigpipe timeout_node 0

To turn node ping off, set the tping_node interval to 0 seconds:

bigpipe tping_node 0

Warning: Node ping is the only form of verification that the BIG-IP Controller uses to determine status of node addresses. If you turn node ping off while one or more node addresses are currently down, the node addresses remain marked down until you turn node ping back on and allow the BIG-IP Controller to verify the node addresses again.

Setting the timer for reaping idle connections

The BIG-IP Controller supports two timers for reaping idle connections, one for TCP traffic and one for UDP traffic. These timers are essential, and if they are set too high, or not at all, the BIG-IP Controller may run out of memory. Each individual port on the BIG-IP Controller has its own idle connection timer settings.

An idle connection is one in which no data has been received or sent for the number of seconds specified for TCP or UDP connections. To reap idle connections effectively, you should set the idle connection timeout values to be greater than the configured timeouts for the service daemons installed on your nodes.

The TCP idle connection timeout clears the connection tables, avoiding memory problems due to the accumulation of dead, but not terminated, connections.
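The reaping behavior described above can be modeled in a few lines. The sketch below is illustrative Python, not BIG-IP source; the function and data names are invented for this example. Each connection records its last-activity time, and the reaper removes any connection that has been idle longer than its port's configured timeout.

```python
# Illustrative model of idle-connection reaping (not actual BIG-IP code).
# Each connection records its last-activity time; the reaper removes any
# connection that has been idle longer than its port's configured timeout.

def reap_idle_connections(connections, timeouts, now):
    """connections: dict of (client, port) -> last activity time (seconds).
    timeouts: dict of port -> idle timeout (seconds).
    Returns the surviving connection table; a port with no timer set
    reaps immediately (compare the UDP warning below)."""
    return {
        key: last_active
        for key, last_active in connections.items()
        if now - last_active <= timeouts.get(key[1], 0)
    }
```

For example, with a 60-second timeout on port 443, a connection idle for 70 seconds is reaped while one idle for 10 seconds survives.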

Warning: The BIG-IP Controller accepts UDP connections only if you set the UDP idle connection timer.

To set the inactive connection timer in the Configuration utility

  1. In the navigation pane, click the expand button (+) next to Virtual Servers.
    The Virtual Server tree opens and displays the Ports option.
  2. Click Ports.
    The Global Virtual Ports screen opens.
  3. In the Port box, click the port number or service name for which you want to configure the idle connections timeouts.
    The Global Virtual Port screen opens.
  4. In the Idle Connection Timeout TCP box, type the number of seconds you want to elapse before the BIG-IP Controller drops an idle TCP connection. For HTTP connections, 60 seconds should be adequate, but for other services such as Telnet, higher settings may be necessary.
  5. In the Idle Connection Timeout UDP box, type the number of seconds you want to elapse before the BIG-IP Controller drops UDP connections.
  6. Click the Apply button.

To set TCP idle connection timers on the command line

Use the bigpipe treaper command to define a TCP idle connection timeout for one or more ports at a time. For HTTP connections, we recommend 60 seconds, but for other services such as Telnet, we recommend higher settings. The default setting for this timer is 1005 seconds (approximately 17 minutes). Use the following syntax for this command.

bigpipe treaper <port>... <port> <seconds>

For example, the following command sets a 120 second time limit for idle connections on port 443:

bigpipe treaper 443 120

To set UDP idle connection timers on the command line

You can define a UDP idle connection timeout for one or more ports at a time using the bigpipe udp command.

bigpipe udp <port>... <port> <seconds>

For example, the following command sets a 120-second time limit for idle connections on port 53:

bigpipe udp 53 120

Setting the service check timer

The service check feature is similar to node ping, but instead of testing the availability of a server, it tests the availability of a particular service running on a server. The service check timer affects the three different types of service checks: simple service check, ECV service check, and EAV service check. To set up simple service check, you need only set the service check timer as described below. To set up ECV service check or EAV service check, however, you need to configure additional settings (see Extended Content Verification (ECV), on page 2-13).

Note that each individual service managed by the BIG-IP Controller has its own service check timer settings.

To set the service check timer in the Configuration utility

  1. In the navigation pane, click the expand button (+) next to Nodes.
    The Nodes tree opens and displays the Ports option.
  2. Click Ports.
    The Global Node Ports screen opens.
  3. Click the port you want to configure.
    The Global Node Port Properties screen opens.
  4. In the Frequency box, type the frequency (in seconds) at which you want the BIG-IP Controller to check the service on the node for all defined nodes using this port. Five seconds is adequate for most configurations.
  5. In the Timeout box, type the number of seconds you want the BIG-IP Controller to wait to receive a response to the service check. If the BIG-IP Controller does not receive a response to the service check before the timeout expires, the BIG-IP Controller marks the service on the node down and does not use it for load balancing. Sixteen (16) seconds is adequate for most configurations.
  6. Click the Apply button.

To set the service check timer on the command line

To define service check settings, you use two commands. First, you set the service check frequency using the bigpipe tping_svc command, and then you set the service check timer using the bigpipe timeout_svc command.

bigpipe tping_svc <port> <seconds>

bigpipe timeout_svc <port> <seconds>

For example, the following commands set the service check frequency to 5 seconds and the timer to 16 seconds, which is adequate for most configurations.

bigpipe tping_svc 80 5

bigpipe timeout_svc 80 16

Service checking for wildcard servers and ports

When you configure a wildcard virtual server with a 0 port using nodes with standard ports, such as 80, with port translation turned off, the BIG-IP Controller uses the standard service check timeout values (port 80, for example) to service check the port. For more information about setting the service check timer, see Setting the service check timer, on page 2-143.

Using the simple keyword

The simple keyword is being phased out in future releases. This information is provided in order to support existing configurations.

The simple keyword is necessary only if you specified a node port of 0. In previous versions of the BIG-IP Controller, this was the only way to set up a wildcard virtual server that handled connections for all services. However, we now recommend that you specify a node port and then turn off port translation for the virtual server.

To set up a simple service check for this type of virtual server, add the following entry to the /etc/bigd.conf file. Use the following syntax to set a check on a node where the check port is not the node port:

simple [<node addr>:]<node port> <check port>

For example, a wildcard server is defined with a wildcard port, like this:

bigpipe vip 0.0.0.0:0 define n1:0

In this case, you must use the simple keyword to designate the wildcard <node addr>:<node port> and the <check port> for the service check:

simple n1:0 80

Virtual server

Virtual servers provide the ability to map a number of network devices to a single virtual address. A virtual server in combination with a load balancing pool, or rule, provides the ability to load balance connections, use persistence, and also provide high availability features on the BIG-IP Controller.

You must configure a pool of servers before you can create a virtual server that references the pool. Before you configure virtual servers, you need to know:

  • If standard virtual servers or wildcard virtual servers meet the needs of your network
  • Whether you need to activate optional virtual server properties

    Once you know which virtual server options are useful in your network, you can:

  • Define standard virtual servers
  • Define wildcard virtual servers

The attributes you can configure for a virtual server are in Table 2.26.

The attributes you can configure for a virtual server
Attributes Description
Standard virtual server A standard virtual server sends connection requests to load balancing pools or rules.
Wildcard virtual server A wildcard virtual server is typically used to make requests to hosts on the internet from a network behind the BIG-IP Controller.
Network virtual server A network virtual server handles a whole range of addresses in a network.
Other virtual server attributes You can set connection limits, translation properties, last hop pools, and mirroring information for virtual servers.

Using standard or wildcard virtual servers

Virtual servers reference a pool you create that contains a group of content servers, firewalls, routers, or cache servers, and they are associated with one or more external interfaces on the BIG-IP Controller.

You can configure two different types of virtual servers:

  • Standard virtual servers
    A standard virtual server represents a site, such as a web site or an FTP site, and it provides load balancing for a pool of content servers or other network devices. The virtual server IP address should be the same IP address that you register with DNS for the site that the virtual server represents.
  • Wildcard virtual servers
    A wildcard virtual server load balances a pool of transparent network devices such as firewalls, routers, or cache servers. Wildcard virtual servers are configured with an IP address of 0.0.0.0, and sometimes with a virtual port of 0.

    Note that both the Configuration utility and the BIG/pipe command line utility accept host names in place of IP addresses, and also accept standard service names in place of port numbers.

Defining virtual servers

A standard virtual server represents a specific site, such as an Internet web site or an FTP site, and it load balances content servers that are members of a pool. The IP address that you use for a standard virtual server should match the IP address that DNS associates with the site's domain name.

Note: If you are using a 3-DNS Controller in conjunction with the BIG-IP Controller, the 3-DNS Controller uses the IP address associated with the registered domain name in its own configuration. For details, refer to the 3-DNS Controller Administrator Guide.

To define a standard virtual server that references a pool in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. On the toolbar, click Add Virtual Server.
    The Add Virtual Server screen opens.
  3. In the Address box, enter the virtual server's IP address or host name.
  4. In the Netmask box, type an optional netmask. If you leave this setting blank, the BIG-IP Controller uses a default netmask based on the IP address you entered for the virtual server. Use the default netmask unless your configuration requires a different netmask.
  5. In the Broadcast box, type the broadcast address for this virtual server. If you leave this box blank, the BIG-IP Controller generates a default broadcast address based on the IP address and netmask of this virtual server.
  6. In the Port box, either type a port number, or select a service name from the drop-down list.
  7. For Interface, select the external (destination processing) interface on which you want to create the virtual server. Select default to allow the Configuration utility to select the interface based on the network address of the virtual server. If no external interface is found for that network, the virtual server is created on the first external interface. If you choose None, the BIG-IP Controller does not create an alias and generates no ARPs for the virtual IP address. In this case, the BIG-IP Controller accepts traffic on all interfaces.
  8. In Resources, click the Pool button.
    If you want to assign a load balancing rule to the virtual server, click Rule and select a rule you have configured.
  9. In the Pool list, select the pool you want to apply to the virtual server.
  10. Click the Apply button.

To define a standard virtual server mapping on the command line

Type the bigpipe vip command as shown below. Note that you can use host names in place of IP addresses, and standard service names in place of port numbers.

bigpipe vip <virt IP>:<port> use pool <pool_name>

bigpipe vip <virt IP>:<port> use rule <rule_name>

For example, the following command defines a virtual server that maps to the pool my_pool:

bigpipe vip 192.200.100.25:80 use pool my_pool

Defining wildcard virtual servers

Wildcard virtual servers are a special type of virtual server designed to manage network traffic for transparent network devices, such as transparent firewalls, routers, proxy servers, or cache servers. A wildcard virtual server manages network traffic that has a destination IP address unknown to the BIG-IP Controller. A standard virtual server typically represents a specific site, such as an Internet web site, and its IP address matches the IP address that DNS associates with the site's domain name. When the BIG-IP Controller receives a connection request for that site, the BIG-IP Controller recognizes that the client's destination IP address matches the IP address of the virtual server, and it subsequently forwards the client to one of the content servers that the virtual server load balances.

However, when you are load balancing transparent nodes, a client's destination IP address appears arbitrary to the BIG-IP Controller, because the client is connecting to an IP address on the other side of the firewall, router, or proxy server. In this situation, the BIG-IP Controller cannot match the client's destination IP address to a virtual server IP address. Wildcard virtual servers resolve this problem by not translating the incoming IP address at the virtual server level on the BIG-IP Controller. When the BIG-IP Controller does not find a specific virtual server match for a client's destination IP address, it matches the request to a wildcard virtual server. The BIG-IP Controller then forwards the client's packet to one of the firewalls or routers that the wildcard virtual server load balances, which in turn forwards the client's packet to the actual destination IP address.

A note about wildcard ports

When you configure wildcard virtual servers and the nodes that they load balance, you can use a wildcard port (port 0) in place of a real port number or service name. A wildcard port handles any and all types of network services.

A wildcard virtual server that uses port 0 is referred to as a default wildcard virtual server, and it handles traffic for all services. A port-specific wildcard virtual server handles traffic only for a particular service, and you define it using a service name or a port number. If you use both a default wildcard virtual server and port-specific wildcard virtual servers, any traffic that does not match either a standard virtual server or one of the port-specific wildcard virtual servers is handled by the default wildcard virtual server.
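The precedence just described can be sketched as a lookup function. This is an illustrative Python model of the matching order, not BIG-IP source; the names are invented for this example. A connection is matched first against standard virtual servers, then against port-specific wildcards, and finally against the default wildcard.

```python
# Illustrative lookup-order model for virtual server matching
# (a sketch of the precedence rules, not actual BIG-IP code).

def match_virtual_server(dest_ip, dest_port, standard, port_wildcards,
                         has_default_wildcard):
    """standard: set of (ip, port) pairs for standard virtual servers.
    port_wildcards: set of ports with a port-specific wildcard (0.0.0.0:port).
    Returns a label describing which virtual server handles the traffic."""
    if (dest_ip, dest_port) in standard:
        return "standard"
    if dest_port in port_wildcards:
        return "port-specific wildcard"
    if has_default_wildcard:
        return "default wildcard"   # 0.0.0.0:0 catches everything else
    return "no match"
```

For example, traffic to a registered site address goes to the standard virtual server, unmatched HTTP traffic to a port-80 wildcard, and everything else to the default wildcard.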

You can use port-specific wildcard virtual servers for tracking statistics for a particular type of network traffic, or for routing outgoing traffic, such as HTTP traffic, directly to a cache server rather than a firewall or router.

We recommend that when you define transparent nodes that need to handle more than one type of service, such as a firewall or a router, you specify an actual port for the node and turn off port translation for the virtual server.

Note: When you define a virtual server with port translation turned off, and you want to perform a service check on that node, you must configure service check intervals and timeouts using the port specified for the node. Then you can configure a service check. See Service checking for wildcard servers and ports, on page 2-144, for more details.

Defining the wildcard virtual server mappings

There are two procedures required to set up a wildcard virtual server. First, you must define the wildcard virtual server. Then you must turn port translation off for the virtual server.

To define a wildcard virtual server mapping using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. On the toolbar, click Add Virtual Server.
    The Add Virtual Server screen opens.
  3. In the Address box, type the wildcard IP address 0.0.0.0.
  4. In the Netmask box, type an optional netmask.
    If you leave this box blank, the BIG-IP Controller generates a default netmask address based on the IP address of this virtual server. Use the default netmask unless your configuration requires a different netmask.
  5. In the Broadcast box, type the broadcast address for this virtual server.
    If you leave this box blank, the BIG-IP Controller generates a default broadcast address based on the IP address and netmask of this virtual server.
  6. In the Port box, type a port number, or select a service name from the drop-down list. Note that port 0 defines a wildcard virtual server that handles all types of services. If you specify a port number, you create a port-specific wildcard virtual server. The wildcard virtual server only handles traffic for the port specified.
  7. For Interface, select the external (destination processing) interface on which you want to create the virtual server.
    If you choose None, the BIG-IP Controller does not create an alias and generates no ARPs for the virtual IP address (see the BIG-IP Controller Administrator Guide, Optimizing large configurations for details).
  8. In Resources, click the Pool button.
  9. In the Pool list, select the pool you want to apply to the virtual server.
  10. Click the Apply button.

To turn off port translation for a wildcard virtual server in the Configuration utility

After you define the wildcard virtual server with a wildcard port, you must disable port translation for the virtual server.

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the virtual server list, click the virtual server for which you want to turn off port translation.
    The Virtual Server Properties screen opens.
  3. In the Enable Translation section, clear the Port box.
  4. Click the Apply button.

To define a wildcard virtual server mapping on the command line

There are three commands required to set up a wildcard virtual server. First, you must define a pool that contains the addresses of the transparent devices. Next, you must define the wildcard virtual server. Then you must turn port translation off for the virtual server. To define the pool of transparent devices, use the bigpipe pool command. For example, you can create a pool of transparent devices called transparent_pool that uses the Round Robin load balancing mode:

bigpipe pool transparent_pool { lb_mode rr member <member_definition>... member <member_definition> }

To define the virtual server, use the bigpipe vip command:

bigpipe vip <virtual IP>:<port> use pool <pool_name>

After you define the virtual server, you can enable or disable port translation using the following command:

bigpipe vip <virtual IP>:<port> translate port enable | disable

For example, you can create a pool of transparent devices called transparent_pool that uses the Round Robin load balancing mode:

bigpipe pool transparent_pool { lb_mode rr member 10.10.10.101:80 member 10.10.10.102:80 member 10.10.10.103:80 }

After you create the pool of transparent nodes, use the following command to create a wildcard virtual server that maps to the pool transparent_pool. Because the members are firewalls and need to handle a variety of services, the virtual server is defined using port 0 (or * or any). You can specify any valid non-zero port for the node port and then turn off port translation for that port. In this example, service checks ping port 80.

bigpipe vip 0.0.0.0:0 use pool transparent_pool

After you define the virtual server, turn off port translation for the port in the virtual server definition. In this example, port 80 is used for service checking. If you do not turn off port translation, all incoming traffic is translated to port 80.

bigpipe vip 0.0.0.0:0 translate port disable

Configuring a network virtual server

You can configure a network virtual server to handle a whole network range, instead of just one IP address (a standard virtual server) or all IP addresses (a wildcard virtual server). For example, the virtual server in Figure 2.18 handles traffic for all addresses in the 192.168.1.0 network:

Figure 2.18 A sample network virtual server

bigpipe vip 192.168.1.0:0 none { 
netmask 255.255.255.0 broadcast 192.168.1.255
use pool ingress_firewalls
}

Note: Network virtual servers should be assigned to interface none.

A network virtual server is a virtual server that has no bits set in the host portion of the IP address. In other words, the host portion is zero. You must specify a network mask to indicate which portion of the address is the network address and which portion is the host address. In the previous example, since the network mask is 255.255.255.0, the network portion of the address is 192.168.1 and the host portion is .0. The previous example would direct all traffic destined to the subnet 192.168.1.0/24 through the BIG-IP Controller to the ingress_firewalls pool.
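The network/host arithmetic above can be checked with Python's standard ipaddress module. This is shown purely to illustrate the addressing math, not as part of the BIG-IP configuration:

```python
import ipaddress

# For a network virtual server, the host portion of the address is zero;
# the netmask marks where the network portion ends.
net = ipaddress.ip_network("192.168.1.0/255.255.255.0")

print(net.network_address)    # 192.168.1.0 (host bits are zero)
print(net.broadcast_address)  # 192.168.1.255
print(net.prefixlen)          # 24
```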

Another way you can use this feature is to create a catch-all webserver for an entire subnet. For example, you could create the following network virtual server (Figure 2.19).

Figure 2.19 A catch-all web server configuration.

bigpipe vip 192.168.1.0:http none { 
netmask 255.255.255.0 broadcast 192.168.1.255
use pool default_webservers
}

This configuration directs a web connection destined to any address within the subnet 192.168.1.0/24 to the default_webservers pool.

Displaying information about virtual servers

Use the following syntax to display information about all virtual servers included in the configuration:

bigpipe vip show

Use the following syntax to display information about one or more virtual servers included in the configuration:

bigpipe vip <virt ip>:<port> [...<virt ip>:<port>] show

The command displays information such as the nodes associated with each virtual server, the nodes' status, and the current, total, and maximum number of connections managed by the virtual server since the BIG-IP Controller was last rebooted.

Defining an interface for a virtual server

If you have multiple external (destination processing) interfaces, you can specify one of them when you define a virtual server. If you specify an interface name, the BIG-IP Controller responds to ARP requests for the virtual address on that interface. If you do not specify an interface name, the BIG-IP Controller responds to ARP requests for the virtual server on the default interface. If you do not want the BIG-IP Controller to respond to ARP requests on any interface, use the option none in place of the <ifname> parameter.

All virtual servers that share a virtual address must use the same external interface. Changing the interface for a virtual server changes the interface for all virtual servers having the same virtual address.

Setting a user-defined netmask and broadcast

The default netmask for a virtual address, and for each virtual server hosted by that virtual address, is determined by the network class of the IP address entered for the virtual server. The default broadcast is automatically determined by the BIG-IP Controller, and it is based on the virtual address and the current netmask. You can override the default netmask and broadcast for any virtual address.
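The classful defaults referred to above follow the pre-CIDR addressing convention (classes A through C). The sketch below is illustrative only, not BIG-IP source:

```python
# Illustrative classful default-netmask rule (pre-CIDR convention):
# class A (first octet 0-127) -> 255.0.0.0, class B (128-191) -> 255.255.0.0,
# class C (192-223) -> 255.255.255.0. First octets above 223 (multicast and
# reserved space) have no classful default and are not handled here.

def default_netmask(ip):
    first_octet = int(ip.split(".")[0])
    if first_octet < 128:
        return "255.0.0.0"        # class A
    if first_octet < 192:
        return "255.255.0.0"      # class B
    return "255.255.255.0"        # class C
```

Under this rule, an address such as 206.168.225.1 receives a default netmask of 255.255.255.0.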

All virtual servers hosted by the virtual address use the netmask and broadcast of the virtual address, whether they are default values or they are user-defined values.

Note that if you want to use a custom netmask and broadcast, you define both when you define the virtual server:

bigpipe vip <virt ip>[:<port>] [<ifname>] [netmask <ip>] \
[broadcast <ip>] use pool <pool_name>

Note: The BIG-IP Controller calculates the broadcast based on the IP address and the netmask. A user-defined broadcast address is not necessary.

Again, even when you define a custom netmask and broadcast in a specific virtual server definition, the settings apply to all virtual servers that use the same virtual address. The following sample command shows a user-defined netmask and broadcast:

bigpipe vip www.SiteOne.com:http netmask 255.255.0.0 \
broadcast 10.0.140.255 use pool my_pool

The /bitmask option shown in the following example applies network and broadcast address masks. In this example, a 24-bit bitmask sets the network mask and broadcast address for the virtual server:

bigpipe vip 206.168.225.1:80/24 use pool my_pool

The effect of the 24-bit bitmask is the same as applying the 255.255.255.0 netmask. The broadcast address derived from the network mask for this virtual server is 206.168.225.255.
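The bitmask/netmask equivalence can be verified with Python's standard ipaddress module (an illustration of the arithmetic only, not part of the BIG-IP configuration):

```python
import ipaddress

# A 24-bit bitmask and the 255.255.255.0 netmask describe the same network,
# so both yield the same derived broadcast address for 206.168.225.1.
by_bitmask = ipaddress.ip_interface("206.168.225.1/24")
by_netmask = ipaddress.ip_interface("206.168.225.1/255.255.255.0")

assert by_bitmask.network == by_netmask.network
print(by_bitmask.network.broadcast_address)  # 206.168.225.255
```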

Setting a connection limit

The default setting is to have no limit to the number of concurrent connections allowed on a virtual server. You can set a concurrent connection limit on one or more virtual servers using the following command:

bigpipe vip <virt ip>[:<port>] [...<virt ip>[:<port>]] limit <max conn>

The following example shows two virtual servers set to have a concurrent connection limit of 5000 each:

bigpipe vip www.SiteOne.com:http www.SiteTwo.com:ssl limit 5000

To turn the limit off, set the <max conn> variable to zero:

bigpipe vip <virt ip>[:<port>] [...<virt ip>[:<port>] ] limit 0

Setting translation properties for virtual addresses and ports

Turning port translation off for a virtual server is useful if you want to use the virtual server to load balance connections to any service. Use the following syntax to enable or disable port translation for a virtual server.

bigpipe vip <virt ip>:<port> translate port enable | disable | show

You can also configure the translation properties for a virtual server address. This option is useful when the BIG-IP Controller is load balancing devices which have the same IP address. This is typical with the nPath routing configuration where duplicate IP addresses are configured on the loopback device of several servers. Use the following syntax to enable or disable address translation for a virtual server.

bigpipe vip <virt ip>:<port> translate addr enable | disable | show

Setting up last hop pools for virtual servers

In cases where you have more than one router sending connections to a BIG-IP redundant system, you may want to route connections back through the same router from which they were received. To configure a last hop pool, you must first create a pool that contains the routers for the BIG-IP redundant system. After you create a router pool, use the following syntax to configure a last hop pool for a virtual server.

bigpipe vip <virt ip>:<port> lasthop pool <pool_name> | none | show

Mirroring connection information

Mirroring provides seamless recovery for current connections when a BIG-IP Controller fails. When you use the mirroring feature, the peer controller maintains the same current connection and persistence information as its partner controller. Transactions such as FTP file transfers continue as though uninterrupted.

To control mirroring for a virtual server, use the mirror command to enable or disable mirroring of connections. The syntax of the command is:

bigpipe vip <virt ip>:<port> mirror conn enable | disable

To print the current mirroring setting for a virtual server:

bigpipe vip <virt ip>:<port> mirror conn show

If you do not specify conn, the BIG-IP Controller displays all mirrored connection information.

Note: If you set up mirroring on a virtual server that supports FTP connections, you need to mirror the control port virtual server, and the data port virtual server.

The following example shows the two commands used to enable mirroring for virtual server v1 on the FTP control and data ports:

bigpipe vip v1:21 mirror conn enable

bigpipe vip v1:20 mirror conn enable

Removing and returning a virtual server to service

You can remove an existing virtual server from network service, or return the virtual server to service, using the disable and enable keywords. When you disable a virtual server, the virtual server no longer accepts new connection requests, but it allows current connections to finish processing before the virtual server goes down.

Use the following syntax to remove a virtual server from network service:

bigpipe vip <virt ip>:<port> [...<virt ip>:<port>] disable

Use the following syntax to return a virtual server to network service:

bigpipe vip <virt ip>:<port> enable

Removing and returning a virtual address to service

You can remove an existing virtual address from network service, or return the virtual address to service, using the disable and enable keywords. Note that when you enable or disable a virtual address, you inherently enable or disable all of the virtual servers that use the virtual address. Use the following syntax to remove a virtual address from network service:

bigpipe vip <virt ip> disable

Use the following syntax to return a virtual address to network service:

bigpipe vip <virt ip> enable

Displaying information about virtual addresses

You can also display information about the virtual addresses that host individual virtual servers. Use the following syntax to display information about one or more virtual addresses included in the configuration:

bigpipe vip <virt ip> [... <virt ip> ] show

The command displays information such as the virtual servers associated with each virtual address, the status, and the current, total, and maximum number of connections managed by the virtual address since the BIG-IP Controller was last rebooted, or since the BIG-IP Controller became the active unit (redundant configurations only).

Deleting a virtual server

Use the following syntax to permanently delete one or more virtual servers from the BIG-IP Controller configuration:

bigpipe vip <virt ip>:<port> [... <virt ip>:<port>] delete
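For example, to permanently delete the illustrative virtual server 10.10.10.50:80 from the configuration, you could type:

bigpipe vip 10.10.10.50:80 delete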

Resetting statistics for a virtual server

Use the following command to reset the statistics for an individual virtual server:

bigpipe vip [<virt ip>:<port>] stats reset
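For example, to reset the statistics for the illustrative virtual server 10.10.10.50:80, you could type:

bigpipe vip 10.10.10.50:80 stats reset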

Turning software acceleration off for virtual servers using IPFW rate filters

This release includes enhancements that speed packet flow for TCP connections when the packets are not fragmented. In most configurations, these software enhancements are turned on automatically and do not require any additional configuration.

However, you may want to turn off these enhancements for individual virtual servers that use IPFW rate filters. With the speed enhancements on, IPFW examines only the first SYN packet in any given connection. If you want to filter all packets, you should turn the speed enhancements off. To do this, you must first turn on the global acceleration control for the system, and then turn the feature off for the individual virtual servers that use IPFW rate filtering. You can change these settings from the command line or in the Configuration utility.

Setting software acceleration controls from the command line

Before you can turn off software acceleration for a virtual server, you must set the sysctl variable bigip.fastpath_active to on (2) with the following command:

sysctl -w bigip.fastpath_active=2

After you set the sysctl variable, use the following bigpipe commands to disable software acceleration for existing virtual servers that use IPFW rate filtering:

bigpipe vip <ip>:<port> accelerate disable

For example, if you want to turn acceleration off for the virtual server 10.10.10.50:80, type the following command:

bigpipe vip 10.10.10.50:80 accelerate disable

You can define a virtual server with acceleration disabled using the following syntax:

bigpipe vip <ip>:<port> use pool <pool name> accelerate disable

For example, if you want to define the virtual server 10.10.10.50:80 with the pool IPFW_pool and acceleration turned off, type the following command:

bigpipe vip 10.10.10.50:80 use pool IPFW_pool accelerate disable

Using additional features with virtual servers

After you create a pool and define a virtual server that references the pool, you can set up additional features, such as network address translation (NAT) or extended content verification (ECV). If you plan to use any of these features, you may want to read the corresponding section before you begin the virtual server configuration process: