Manual Chapter : BIG-IP Administrator guide v2.1: Preparing for the Installation

Applies To:

  • BIG-IP versions 2.1.4 PTF-01, 2.1.4, 2.1.3 PTF-04, 2.1.3 PTF-03, 2.1.3 PTF-02, 2.1.3 PTF-01, 2.1.3, 2.1.2 PTF-02, 2.1.2 PTF-01, 2.1.2, 2.1.1, 2.1.0


2  Preparing for the Installation



Planning the BIG/ip Controller installation

This chapter provides detailed information about configuration planning issues that you need to address before installing the BIG/ip Controller. It also covers other important issues such as how to configure network routing, and how to set up and distribute site content before you actually connect the BIG/ip Controller to the network.

There are essentially two types of installations you can do:

  • Quick setup
    The quick setup installation simply gets the BIG/ip Controller up and running, doing basic round robin load balancing for web servers. It uses default settings and requires you to perform only a few minimal configuration tasks, such as entering the IP addresses for your site and host servers, and opening access to the ports that your site needs.
  • Standard/advanced setup
    A standard/advanced setup takes advantage of additional features, such as service check, that most users want to implement.

    The planning sections for standard and advanced setup provide a list of the main BIG/ip Controller features and explain what configuration issues, if any, you need to address before you implement a particular feature.

Planning for a quick setup installation

The quick setup installation sets up a basic, round robin load balancing configuration. The quick setup installation requires that you do only the following four tasks:

  • Get the hardware connected and run the First-Time Boot utility (a wizard that helps you define the settings necessary for connecting the BIG/ip Controller to the network).
  • Open access to the ports that your clients need to connect to.
  • Define at least one virtual server.
  • Set connection timeouts.

    There are a few things you should probably take into consideration before doing a quick setup installation. First, we recommend that you review the section on Configuring virtual servers and nodes, on page 2-18. The section simply helps you understand how to map the IP address of your web site to the different back-end web servers that host individual client connections. We also recommend that you review the section on Preparing additional network components, on page 2-23, which covers basic issues involved with integrating a BIG/ip Controller into your overall network.

    Once you are ready to do the install, turn to Chapter 3, Unpacking and installing the hardware, which walks you through the process of connecting the hardware and running the First-Time Boot utility. After you complete that process, simply follow the instructions in Chapter 4, Getting Started with a Basic Configuration.

Planning for a standard or advanced installation

When planning a standard/advanced installation, you might want to review the list of main BIG/ip Controller features in this section and choose which features you want to implement in your own configuration. For each main feature, the following sections give you an overview of what the installation and planning issues are, if any.

In addition to reviewing the following features, we recommend that you also review the section on Configuring virtual servers and nodes, on page 2-18, as well as the section on Preparing additional network components, on page 2-23, which covers basic issues involved with inserting a BIG/ip Controller into your overall network.

Once you are ready to begin the install, you can start with Chapter 3, Setting up the Hardware, which walks you through the process of connecting the hardware and running the First-Time Boot utility. After you complete that process, simply follow the instructions in Chapter 4, Getting Started with a Basic Configuration. For information about configuring advanced features, turn to Chapter 5, Working with Special Features.

Choosing a load balancing mode

The BIG/ip platform supports seven different load balancing modes, both static and dynamic. A static load balancing mode distributes connections based solely on user-defined settings, while a dynamic load balancing mode distributes connections based on various aspects of real-time server performance analysis. Note that the load balancing mode you choose applies to all of your virtual servers. Setting the load balancing mode is easy; you can either choose a different mode from a list box in the web-based F5 Configuration utility (see Changing the load balancing mode, on page 4-21), or you can issue a single bigpipe command (see Appendix B, BIG/pipe Command Reference).
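
For example, assuming the bigpipe syntax summarized in Appendix B (the exact command form and mode keyword may differ in your software version), switching the load balancing mode from the command line looks something like the following:

    # Hypothetical sketch; confirm the exact command and mode keyword in Appendix B
    bigpipe lb observed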

Because each application of the BIG/ip Controller is unique, and server performance depends on a number of different factors, we recommend that you experiment with different load balancing modes, and choose the one that offers the best performance in your particular environment. For many sites, a static load balancing mode, such as Round Robin, achieves very acceptable results. Sites that have specific concerns, such as servers that vary significantly in speed and capability, may benefit from using dynamic load balancing modes.

Round Robin mode

Round Robin mode, a static load balancing mode, is the default mode. In Round Robin mode, the BIG/ip Controller distributes connections evenly across the nodes that it manages. Each time a new connection is requested, the BIG/ip Controller passes the connection to the next node in line. Over time, the total number of connections received by each node associated with a specific virtual server is the same.

Ratio mode

Ratio mode, another static load balancing mode, allows you to assign weights to each node. Over time, the total number of connections for each node is in proportion to the weights you specify. For example, in a simple configuration, you might have one new, fast server and two older, slower servers. To get the newer server to host the bulk of the traffic, you could use Ratio mode. You would assign a higher weight to the fast server, such as 2, and lower weights, such as 1, to the two slower servers. Over time, these weight settings result in the faster server receiving 50% of the network traffic, while each of the slower servers receives only 25% of the network traffic.

Warning: The default ratio weight for all nodes is 1. If you use the Ratio load balancing mode, you must change the weight setting for at least one node; otherwise, Ratio mode has the same result as Round Robin mode.
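
To implement the example above, and assuming the bigpipe ratio syntax described in Appendix B (the node addresses are borrowed from the sample configuration used later in this chapter), you might set the weights as follows:

    # Hypothetical sketch; verify the mode and ratio command syntax in Appendix B
    bigpipe lb ratio                  # switch to Ratio mode
    bigpipe ratio 192.168.100.1 2     # newer, faster server receives twice the connections
    bigpipe ratio 192.168.100.2 1     # older, slower server
    bigpipe ratio 192.168.100.3 1     # older, slower server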

Priority mode

Priority mode is also a static load balancing mode. In Priority mode, you create groups of nodes and assign a priority level to each group. The BIG/ip Controller distributes connections in a round robin fashion to all nodes in the highest priority group. If all the nodes in the highest priority group go down, the BIG/ip Controller begins to pass connections on to nodes in the next lower priority group. For example, in a configuration that has three priority groups, connections are first distributed to all nodes set as priority 1. If all priority 1 nodes are down, connections begin to be distributed to priority 2 nodes. If both the priority 1 nodes and the priority 2 nodes are down, connections then begin to be distributed to priority 3 nodes, and so on. Note, however, that the BIG/ip Controller continuously monitors the higher priority nodes, and each time a higher priority node becomes available, the BIG/ip Controller passes the next connection to that node.

Least Connections mode

Least Connections mode is a dynamic load balancing mode that takes into account the number of connections each host server is currently handling. The BIG/ip Controller simply passes a new connection to the node with the fewest current connections.

Fastest mode

Fastest mode, also a dynamic load balancing mode, passes a new connection based on the fastest response of all currently active nodes. Fastest mode works well in any environment, but you may find it particularly useful in environments where the nodes are hosted by servers of varying capabilities, or where nodes are distributed across different logical networks.

Observed mode

Observed mode is a more sophisticated dynamic load balancing mode that uses a combination of the logic used in the Least Connections and Fastest modes. In Observed mode, nodes are ranked based on a combination of the number of current connections and the response time. The node that has the best balance of fewest connections and fastest response time receives the next connection from the BIG/ip Controller.

Predictive mode

Predictive mode is also a sophisticated dynamic load balancing mode, which uses the ranking methods used by Observed mode, where nodes are rated according to a combination of the number of current connections and the response time. However, in Predictive mode, the BIG/ip Controller analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The node with the best performance ranking that is currently improving, rather than declining, receives the next connection from the BIG/ip Controller.

Setting up node ping and service checking

The BIG/ip Controller has four different methods available to determine whether a specific node is available to receive connections. Node ping, the default method, is turned on by default, but you may want to turn on other verification features that allow for more comprehensive checking. If you are managing web site content in particular, you may want to use the Extended Content Verification or Extended Application Verification features.

Node ping

Node ping is the simplest method of availability checking, and it only guarantees that the server hosting the node can respond to a ping. The node ping setting applies to all virtual servers on the system, and it is turned on by default. Normally you do not have to worry about this setting during initial installation.

Simple service check

Simple service check verifies that the service the client needs is available on the server. For example, if the client is looking to connect to a standard web site, a simple service check for the site would verify that a given server currently accepts connections to port 80.

Setting up simple service check is a matter of turning the feature on for a specific node, or for a global node port. Plan on setting up simple service check after you define your virtual servers. (Remember that you cannot define a node separately from a virtual server; therefore, you cannot set any node properties until you have defined the node by way of defining the virtual server.) Depending on the number of nodes that you want to configure for service check, you may want to set the service check on a node port first, because all nodes that use the port inherit the service check settings. If you need to, you can override the port service check settings for an individual node.
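
Independent of the BIG/ip configuration itself, you can reproduce by hand what a simple service check verifies, namely that a node accepts connections on its service port. For example, using the common netcat (nc) utility from an administrative workstation (the node address below is one of the sample addresses used later in this chapter):

    # Manually confirm that the node accepts TCP connections on port 80
    nc -z -w 5 192.168.100.1 80 && echo "port 80 is accepting connections"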

Extended Content Verification (ECV) service check

ECV service check verifies that a given server returns specific content. Similar to simple service check, ECV service check is a property of both individual nodes and global node ports, and you set it up only after you define your virtual servers.

The concept behind ECV service check is actually pretty simple. The BIG/ip Controller tries to retrieve specific content from a server, such as a web site's home page. It searches the content it receives, looking for text that you specify. If it finds a match, the BIG/ip Controller considers the service check to be successful and continues to send clients to the server.

Most users can work with the standard send string "GET /" which simply returns a site's home page. However, ECV service check offers lots of other options that you may want to take advantage of, including ECV for transparent nodes. If you want to use ECV service check, we recommend that you review Configuring Extended Content Verification service checking, on page 4-30, to plan your ECV needs. For advanced configuration, such as Transparent Node mode, refer to Using advanced service check options, on page 5-3.
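
To see the kind of test that ECV service check performs, you can reproduce it by hand from a workstation: send the default "GET /" string and search the response for the text you plan to match. In the sketch below, the node address and the search text are examples only:

    # Manual equivalent of an ECV check: retrieve the home page and look for expected text
    printf 'GET /\r\n\r\n' | nc -w 5 192.168.100.1 80 | grep -q "Welcome" && echo "content check passed"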

Extended Application Verification (EAV) service check

EAV service check performs a custom service check function. The BIG/ip Controller essentially runs a script to do a service check on its behalf. Several EAV service check scripts are bundled with the BIG/ip Controller, including scripts for checking FTP, NNTP, SMTP, SQL, and POP3 services. Some customers write their own custom checker programs, and others prefer to get assistance from the F5 Help Desk. You can review Using advanced service check options, on page 5-3, for further details on this feature.
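
A custom EAV checker is simply an external program that reports whether a service is available. The argument and output conventions that the BIG/ip Controller expects from an external checker are described in Using advanced service check options, on page 5-3; the following is only a generic sketch that assumes the script receives a node address and port and signals the result through its exit status:

    #!/bin/sh
    # Generic service-check sketch (hypothetical interface): $1 = node address, $2 = port
    NODE="$1"
    PORT="$2"
    if nc -z -w 5 "$NODE" "$PORT"; then
        echo "up"      # service responded
        exit 0
    fi
    exit 1             # service did not respond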

Setting up network address translations and IP forwarding

The BIG/ip Controller supports three related features that provide nodes with IP addresses that are routable on the external network. Remember that nodes actually run on the BIG/ip Controller's internal interface, and, by default, their true IP addresses are protected by the BIG/ip Controller. The features are important because they allow you to make direct administrative connections to nodes, or allow nodes to initiate connections to hosts on the external network. Plan on setting up these features only after you have defined the virtual servers on the BIG/ip Controller.

Network Address Translations (NATs)

NATs allow nodes to receive direct incoming connections, and also allow nodes to make connections to hosts on the external network. For example, if you have a node that runs the Sendmail utility, the node may need to connect to a mail server that sits on the BIG/ip Controller's external interface. Also, if your administrative workstation is on the BIG/ip Controller's external interface, you probably need to define a NAT address for each node that you want to be able to administer remotely.

If you plan on using network address translations, keep the following in mind:

  • Certain protocols, such as the NT Domain or CORBA protocols, are not compatible with NAT.
  • You need to provide a unique IP address for each NAT you want to define.

Warning: NAT is not compatible with the NT Domain or CORBA protocols. Instead, you need to use the IP forwarding feature.
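
As a sketch only, and assuming the bigpipe NAT syntax documented in Appendix B (both addresses below are examples), defining a translation for a single node might look like this:

    # Hypothetical sketch; confirm the NAT command syntax in Appendix B
    bigpipe nat 192.168.100.1 to 192.168.200.50   # node address to its external NAT address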

Secure Network Address Translation (SNAT)

The SNAT feature essentially provides additional firewall functionality for the BIG/ip Controller. It translates source IP addresses for nodes that are initiating connections with hosts on the external network. SNATs are more secure because they do not allow clients on the external network to connect to nodes on the internal network.

If you plan on using SNAT, keep the following in mind:

  • You can assign a SNAT address to one or more nodes.
  • A SNAT address can be the same as one of the virtual addresses configured for the BIG/ip Controller.
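
Similarly, as a sketch only and assuming the bigpipe SNAT syntax documented in Appendix B (all addresses below are examples), mapping two nodes to a single translation address might look like the following:

    # Hypothetical sketch; confirm the SNAT command syntax in Appendix B
    bigpipe snat 192.168.100.1 192.168.100.2 to 192.168.200.60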

IP forwarding

If your administrative workstation is on the BIG/ip Controller's external interface, but you cannot use the NAT feature, you need to turn on IP forwarding instead. This feature is somewhat less secure because it exposes the true addresses of your nodes to the external network. However, it does allow you the direct administrative access that you need. IP forwarding is controlled by a system control variable, and you simply have to turn it on.

IP forwarding is also useful if you wish to maintain NT domain authentication between networks.
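
On the BSD-based system that underlies the BIG/ip Controller, a system control variable of this kind is typically toggled with the sysctl command. The variable name below is the standard BSD name and is given here as an assumption; verify it against your version's documentation before relying on it:

    # Assumed variable name; verify against your BIG/ip version's documentation
    sysctl -w net.inet.ip.forwarding=1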

Setting up redundant systems

Before you begin configuring the two units in the redundant system, be sure that you have decided on the IP addresses for both systems. The First-Time Boot utility on each system prompts you for the IP address of the other BIG/ip Controller, so that it can set up synchronization of the configurations between the two units.

Choosing a fail-over setup

You also need to decide whether you are going to use hardware fail-over, network fail-over, or both. Hardware fail-over is the most reliable, because it provides a direct, hardwired connection between the two units. You can actually add a layer of redundancy to the standard hardware fail-over by turning on network fail-over and using it as the secondary means of transferring fail-over data between the two units.

If your BIG/ip Controllers are not physically located near each other, you want to use network fail-over as the primary means of exchanging fail-over data for the redundant system. Network fail-over is actually a feature that you simply turn on or off. The default is off, but you can turn it on any time after you have completed the First-Time Boot utility. Hardware fail-over does not have any additional special settings.

Using connection and persistence mirroring features

If your site handles a lot of FTP or Chat traffic, persistent connections, or other traffic that is highly sensitive to state loss during a fail-over, you probably want to use this feature. In a redundant system, this feature requires that both the active unit and the standby unit maintain the state of each current connection. If a fail-over occurs, no connection or persistence information is lost, and connections continue virtually uninterrupted.

You can turn on connection and persistence mirroring for virtual servers on an individual basis, and you can even specify whether you want all connections mirrored, or just persistent information. To help reduce the amount of overhead that this feature can potentially generate, you should configure connection mirroring only on those individual virtual servers that need it.

Using gateway fail-safe

Gateway fail-safe is an advanced feature that offers you yet another layer of network redundancy. If each BIG/ip Controller in your redundant system can use a separate router on the external interface, you may want to implement this feature. When the connection between a given BIG/ip Controller and its corresponding router fails, the unit can fail-over to the standby unit where the gateway connection is presumably still good. This feature is supported only on BIG/ip Controller HA products.

Setting up persistence features

The BIG/ip platform supports persistence for TCP, UDP, and SSL connections. You want to use persistence only if you have clients that need to bypass load balancing and go to a specific server. For example, if you run an airline reservation site and you allow clients to reserve tickets for 24 hours before purchasing the ticket, you need to use persistence if you store a specific client's reservation only on the server to which the client originally connected. If you store reservation information on a back-end database or file server that all of your web servers have access to, you would not need to implement persistence.

The BIG/ip Controller now offers four basic persistence options:

  • Simple persistence
    Bases persistence on a client's source IP address. The persistence connection information is stored on the BIG/ip Controller, and it applies to both TCP and UDP traffic.
  • Destination address affinity
    Bases persistence on the destination IP address of a connection. This is actually a special type of persistence that you can use for load balancing cache servers.
  • SSL persistence
    Bases persistence on an SSL session ID stored in a table on the BIG/ip Controller.
  • HTTP cookie persistence
    Bases persistence on connection information stored in a cookie on the client.

Important issues for all types of persistence

There are two issues you should consider when using persistence:

  • Timeouts
    Each type of persistence, other than destination address affinity, uses a timeout that determines how long an individual client's persistence information is considered valid. To get the best performance, you should set the timeouts so that they correlate to the amount of time that nodes typically retain the information associated with a connection requiring persistence.
  • Persistence masks and sticky masks
    If you plan to implement persist masks or sticky masks, you should choose a static load balancing mode, such as Round Robin, for the BIG/ip Controller. A dynamic mode, such as Fastest, combined with a persistence mask, could cause persistent connections to clump, or accumulate, on one server.

HTTP cookie persistence

HTTP cookie persistence requires HTTP 1.0 or 1.1 communications, and it does not work when data packets are encrypted. However, there are a couple of significant benefits to using HTTP cookie persistence. For example, unlike other persistence methods, it does not depend on a client's source IP address, which can change if the client is connecting to your site via an ISP or other organization that uses dynamically assigned IP addresses. Also, HTTP cookies store the persistent connection information on the client's hard drive rather than on the BIG/ip Controller, as other persistence methods do.

You set up HTTP cookie persistence for individual virtual servers, and you need to choose a method, as well as a timeout. The timeout simply defines how long the persistent connection information is valid, and the method determines whether the BIG/ip Controller inserts the persistent server information into the header of the HTTP response from the server, or rewrites the cookie as it is passed from the server to the client. For more details on HTTP cookie persistence, refer to Using HTTP cookie persistence, on page 5-12.

SSL persistence

SSL persistence applies only to sites that use the SSL protocol, which is typical of e-commerce sites in particular. You can turn on SSL persistence for individual virtual servers, and you only need to define the timeout value that determines how long a client's SSL session ID is valid. For additional information on SSL persistence, see Setting up SSL persistence, on page 4-38.

Simple persistence

When simple TCP persistence is enabled, the BIG/ip Controller actually records the IP address of the client, and it also records the particular node that received the initial client connection. When a new connection request comes from the same client, the BIG/ip Controller uses a look-up table to determine the appropriate node that should host the connection. The client record is cleared from the look-up table when the persistence timeout expires.

Using SSL persistence with simple persistence

You may want to use SSL persistence and simple persistence together. In situations where the SSL persistence times out and the session information is discarded, or if a returning client does not provide a session ID, it may still be desirable for the BIG/ip Controller to direct the client to the original node using the IP address. The BIG/ip Controller can accomplish this as long as the client's simple persistence record is still in the BIG/ip Controller look-up table.

A note about persistence timeout settings

The BIG/ip platform supports two types of persistence timeout settings:

  • In the standard persistence timeout mode, the timer resets itself upon receipt of each packet. Essentially, this keeps the timer from running as long as there is traffic flow over the connection. Once traffic stops on the connection, the timer runs as normal. Note that the timer is reset if traffic over the current connection resumes, or if the client subsequently reconnects before the timer actually expires.
  • In an alternate persistence timeout mode, the timer starts when a connection is first made and runs until the timeout expires. The BIG/ip Controller sends subsequent connections to the same node until the timeout expires. Once the timeout expires, however, the BIG/ip Controller treats a request for a subsequent connection as if it were new, and starts a new timeout period.

Configuring multiple network interface cards

The BIG/ip Controller supports multiple network interface cards (NICs). In order to configure the BIG/ip Controller for multiple NICs, you need to address the following configuration issues:

  • The First-Time Boot utility
    Use the First-Time Boot utility to detect and configure additional interfaces if there are more than two NICs installed. For details about how to use the First-Time Boot utility to configure multiple interfaces, see Running the First-Time Boot utility, on page 3-9.
  • RDP for more than one internal NIC
    Use Router Discovery Protocol (RDP) for routing if you plan to implement multiple NICs in the BIG/ip Controller.
    By using RDP, a server can have its default route point to the active BIG/ip Controller without using a shared alias. This is useful when the server is multihomed (has more than one NIC and multiple IP addresses) and you do not want to set the default route to a specific IP address. If you do, and then one of your NICs, cables, or ports goes down, there is no alternate route to switch to. RDP allows you to implement default rerouting to any of the BIG/ip Controller interfaces.
  • Editing httpd.conf
    The httpd.conf file defines the virtual web servers for the external and internal interfaces to which IP addresses are mapped. If the BIG/ip Controller contains multiple NICs, you must edit this file, using a text editor such as vi or pico, to change access to specific interfaces.

Using IP filters and rate filters

The BIG/ip Controller supports two different types of filters: IP filters and rate filters.

IP filters

You can use IP filters to control the traffic flowing in and out of the BIG/ip Controller. You can create and apply a single IP filter, or a number of IP filters, on the BIG/ip Controller in the F5 Configuration utility. Once these filters are created, you can apply them in a specified hierarchical order. You can filter network traffic using IP filters in a number of different ways:

  • Source IP address
    Applies the filter to all network traffic coming from the specified IP address, or range of IP addresses.
  • Source port
    Applies the filter to all network traffic coming from the specified port, or range of port numbers.
  • Destination IP address
    Applies the filter to all network traffic going to the specified IP address, or range of IP addresses.
  • Destination ports
    Applies the filter to all network traffic going to the specified port, or range of port numbers.

Note: The BIG/ip Controller only supports pre-input IP filters.

Rate filters

Rate filters allow you to control the amount of bandwidth used by network traffic as it leaves the BIG/ip Controller. The first step in creating a rate filter is to create a rate class. The rate class contains the specific bandwidth limitations you want to apply to a rate filter. After you have created at least one rate class, you can create a rate filter.

You can apply rate filters in a hierarchical order by moving a rate filter up or down in the rate filter table.

You can filter network traffic using rate filters in a number of different ways:

  • Rate Class
    Applies the filter to network traffic based on the bits per second, minimum bits outstanding, and queue length specified in the rate class.
  • Source IP Address
    Applies the filter to all network traffic coming from the specified IP address, or range of IP addresses.
  • Source Port
    Applies the filter to all network traffic coming from the specified port, or range of port numbers.
  • Destination IP Address
    Applies the filter to all network traffic going to the specified IP address, or range of IP addresses.
  • Destination Ports
    Applies the filter to all network traffic going to the specified port, or range of port numbers.

Setting up the SNMP agent

The BIG/ip Controller contains an SNMP agent and MIBs for managing and monitoring the BIG/ip Controller. This SNMP agent works with the F5 Networks management product, see/IT Network Manager, as well as with your standard network management station (NMS).

The BIG/ip SNMP agent supports two MIBs, an F5 vendor-specific MIB and the UC Davis MIB:

  • BIG/ip MIB
    This is a vendor MIB that contains information for properties associated with F5-specific functionality, such as load balancing, NATs, and SNATs.
  • UC Davis MIB
    This MIB provides standard MIB-II (RFC 1213) management information.

    You can configure the BIG/ip SNMP agent to send traps to your management system with the F5 Configuration utility and by editing several configuration files. For more information about configuring the SNMP agent for the BIG/ip Controller, see Chapter 7, Configuring SNMP.
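
Once the agent is configured, you can confirm from your management station that it responds. For example, using the UCD-SNMP command line tools (the community string and administrative address below are hypothetical, and the exact option syntax depends on your tool version):

    # Walk the standard MIB-II system group on the BIG/ip SNMP agent
    snmpwalk -v 1 -c public 192.168.200.5 system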

Setting up large configurations

The BIG/ip Controller supports up to 40,000 virtual servers and nodes combined. Larger configurations on a BIG/ip Controller, such as those that exceed 1,000 virtual servers or 1,000 nodes, introduce special configuration issues. To ensure a high performance level, you need to change certain aspects of the BIG/ip Controller's management of virtual servers and nodes. There are a number of steps you can take to ensure your large configuration is configured for optimum performance:

  • Reduce ARP traffic on the external network.
  • Reduce the number of node pings and service checks issued by the BIG/ip Controller.

    For information about configuring your large installation, refer to Optimizing large configurations, on page 5-46.

Configuring virtual servers and nodes

Virtual servers essentially represent the sites that the BIG/ip Controller manages, and they use the IP address that you register with DNS for your domain. The BIG/ip Controller manages virtual servers on its external interface, the interface that always receives the incoming client connection requests.

A virtual server is actually a specific combination of a virtual address and a port. If you happen to have two related sites that use the same IP address, but support different Internet services such as HTTP and SSL, you would have to create two separate virtual servers, one to manage each service. The port that you use in a virtual server should generally be the same TCP or UDP port number that is known to client programs looking to connect to the site.

For example, our sample domain, www.MySite.com, is a standard HTTP web site, and the related store.MySite.com site is an e-commerce site that sells items to www.MySite.com customers. Both sites use the same IP address, 192.168.200.10, but www.MySite.com requires port 80 for its HTTP traffic, and store.MySite.com requires port 443 for its SSL traffic. If you were to set up virtual servers on the BIG/ip Controller to manage these sites, you would have to define www.MySite.com as 192.168.200.10:80, and store.MySite.com as 192.168.200.10:443.

Mapping virtual servers to nodes

An individual virtual server maps to at least one physical port on a physical server, referred to as a node. Similar to a virtual server, a node definition must contain both an IP address and a port. The BIG/ip Controller manages nodes on its internal interface, the interface through which the BIG/ip Controller always forwards connection requests.

Although the topology shown in Figure 2.1 contains only three physical servers, it actually supports four separate nodes. Server 2 supports two different services, and therefore can be used as two different nodes.

Figure 2.1 Sample web site configuration

The two different virtual server definitions in the example are easy to understand when represented in a simple mapping format:

192.168.200.10:80 to 192.168.100.1:80, 192.168.100.2:80

192.168.200.10:443 to 192.168.100.2:443, 192.168.100.3:443

Note that you can also map the configuration using host and service names in place of IP addresses and port numbers:

www.MySite.com:HTTP to Server 1:HTTP, Server 2:HTTP

store.MySite.com:SSL to Server 2:SSL, Server 3:SSL

Virtual server mappings typically include multiple nodes, and each node included in the mapping is referred to as a member of the virtual server. Depending on your configuration needs, you can use a node as a member of more than one virtual server.
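
Assuming the bigpipe virtual server syntax summarized in Appendix B (the exact keyword and form may differ in your software version), the two mappings shown above might be defined from the command line roughly as follows:

    # Hypothetical sketch; confirm the virtual server command syntax in Appendix B
    bigpipe vip 192.168.200.10:80 define 192.168.100.1:80 192.168.100.2:80
    bigpipe vip 192.168.200.10:443 define 192.168.100.2:443 192.168.100.3:443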

Setting properties for virtual servers and nodes

You can control several attributes of virtual servers and nodes, as well as their individual component IP addresses and ports. If you set a particular property for a virtual server or a node, the setting applies only to that virtual server or node. However, if you set a property for an IP address or a port, the property setting is essentially global because it applies to any virtual server or node that uses the IP address or port. Note that there are certain property settings, such as simple persistence, that you can override at the virtual server or node level.

Say, for example, that you need to configure several virtual servers to handle a group of web sites. If you want all but one of the virtual servers to use persistence, it is easier to turn persistence on for port 80, and then simply disable persistence for the one virtual server that does not need it.

Property settings for virtual addresses

For each virtual address (an IP address used for one or more virtual servers), you can control the following settings:

  • You can enable or disable the virtual address.
  • You can set a maximum number of concurrent connections allowed for the address.
  • You can define a custom netmask and broadcast address.
  • You can associate the IP address with a specific external interface.

    The BIG/ip Controller allows you to configure basic properties for a virtual address including a connection limit, a netmask, and broadcast address. The default netmask is determined by the network class of the IP address you enter, and the default broadcast address is a combination of the virtual address and the netmask. You can override the default netmask and broadcast address if necessary.

    All virtual servers that have the same virtual address inherit the properties of the virtual address.

Property settings for virtual ports

For convenience, the BIG/ip Controller allows you to define default configuration settings for a virtual port number or service name. Each virtual server that uses the port number or service name inherits the default properties for that port number or service. The only default property setting that a specific virtual server can override is whether the port is enabled or disabled for that virtual server.

The configurable settings for a virtual port include:

  • Whether the port is currently enabled or disabled
  • A connection limit
  • A time-out for idle TCP connections
  • A time-out for idle UDP connections
  • Simple persistence for TCP and UDP sessions

Property settings for virtual servers

For each virtual server (a virtual address and port pair), you can control the following settings:

  • You can enable or disable the virtual server.
  • You can set a maximum number of concurrent connections allowed for the virtual server.
  • You can mirror persistence and/or connection information.
  • You can override simple persistence settings and define a persist mask.
  • You can set up destination address affinity (for Transparent Node mode).
  • You can set up SSL or cookie persistence.

    Once you define a virtual server, you can set its properties. For example, you can set a connection limit for the virtual server, and you can configure persistence settings for SSL connections. You can also enable or disable a virtual server. The enable/disable feature allows you to take a virtual server down for maintenance without interrupting any of the virtual server's current connections. When you disable a virtual server, it does not accept new connections, but it allows the current connections to complete.

Property settings for node addresses

Node addresses have property settings that apply to all nodes hosted by the node address. Node address property settings include:

  • Whether the node address is currently enabled or disabled
  • A connection limit
  • A load balancing ratio weight or priority level used when the load balancing mode is set to Ratio or Priority
  • An IP alias that the BIG/ip Controller can ping instead of the true node address

    Aliases for node addresses are useful for BIG/ip Controllers that manage thousands of nodes. For more information about optimizing large configurations, see Chapter 5, Optimizing large configurations.

Property settings for node ports

You can set global properties for port numbers or service names used by nodes. These settings apply to all nodes that include the port number or service name, regardless of which physical server hosts the node. You can override all global node port properties for a specific node except the service check frequency and service check timeout settings. Node port properties include:

  • Whether the node port is currently enabled or disabled
  • Service check settings, including the check frequency and timeout, the port to check, and extended check settings (type, first string, and second string)

Property settings for nodes

Once you define a node, you can set specific properties on the node itself including a connection limit, and special content verification settings. You can enable or disable a node, which makes the node available, or unavailable, to accept new connections. If you disable a node while it is currently hosting connections, the node allows those connections to complete, but does not allow any new connections to start. This is useful when you want to take a node down for maintenance without interrupting network traffic.

Preparing additional network components

Before you install a BIG/ip Controller in your network, you need to make sure that your network meets several requirements. The existing network should be fully functional, and it should support one or more IP services. Several individual network components, including routers, hubs, gateways, and content servers, must also meet specific requirements.

Working with router configurations

The BIG/ip Controller must communicate properly with both the network router and the content servers that the BIG/ip Controller manages. Because there is a multitude of possible router configurations, and because administrators have varying levels of direct control over each router, you need to carefully review the router configurations in your own network and evaluate whether you need to change any existing configuration before you install the BIG/ip Controller.

Each router connected to the BIG/ip Controller must be IP compatible, and the router's interface must be compatible with the external interface on the BIG/ip Controller (either IEEE 802.3z/Ethernet or FDDI, depending on the model of BIG/ip Controller that you purchase).

  • The default route for the BIG/ip Controller must be set to the gateway address of the router connected to the BIG/ip Controller's external interface (the interface from which it receives connection requests). You can set the default route during the First-Time Boot configuration, or you can set the default route by editing the /etc/netstart file (see the example following this list).
  • The routers connected to the BIG/ip Controller's external interface must have appropriate routes to get to all of the virtual addresses hosted by the BIG/ip Controller, and to get to the BIG/ip Controller's administrative address.
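
The default route mentioned in the first item above takes the same form as the route commands shown later in this chapter. For example, with a hypothetical gateway address of 192.168.200.1 on the external network:

    # Example default route for the BIG/ip Controller; the gateway address is hypothetical
    route add default -gateway 192.168.200.1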

Routing between a BIG/ip Controller and a router

Fortunately, you do not have to modify routing tables on a router that routes to a BIG/ip Controller. Instead, the BIG/ip Controller uses Address Resolution Protocol (ARP) to notify a router of the IP addresses of its external interface as well as its virtual servers. The BIG/ip Controller supports static route configurations, dynamic routing (via BGP4, RIP1, RIP2, and OSPF), and subnetting.

You may use dynamic routing with the BIG/ip Controller, but it is not normally required. Refer to Chapter 4, Setting up dynamic routing with GateD, for information about implementing dynamic routing in a BIG/ip Controller environment.

Routing between a BIG/ip Controller and content servers

All network traffic coming into and going out of the content servers in the array must pass through the BIG/ip Controller. In order for routing to these servers to work properly, you need to set each server's default route to be the IP address of the BIG/ip Controller internal interface.

Setting up the servers to be load balanced

All servers managed by the BIG/ip Controller must have TCP/IP-compliant operating systems. For each server that the BIG/ip Controller manages, you should verify the following information and have it available when you begin the installation:

  • Verify that the ports on the content server are properly configured for the Internet services that the content server needs to support.
  • Verify that each server has at least one unique IP address defined. Note that a BIG/ip Controller can use multiple IP aliases defined on a single physical server.
  • Verify that the content server is communicating with other devices on the network.

    Each TCP/IP service supported by the BIG/ip virtual servers must be configured on at least one of the servers in the array. For specific information about configuring TCP/IP servers, and verifying TCP/IP services on specific ports, refer to the documentation provided by the server manufacturer.

Setting up content servers on different logical networks

A content server can be installed on a different logical network than that of the BIG/ip Controller, as long as the path of the content server's default route goes through the BIG/ip Controller. If your network environment includes this type of configuration, you need to modify the /etc/rc.local file on the BIG/ip Controller. The /etc/rc.local file stores the BIG/ip Controller's routing information, and you can edit it in a UNIX editor, such as vi or pico.

Warning: Routing statements must be added to the beginning of the /etc/rc.local file.

With this type of network configuration, you need to resolve one of two different routing issues, depending on whether the logical networks are running on the same LAN.

  • If the logical networks are on the same LAN, they either share media directly, or they have a switch or a hub between them. In this configuration, you need to add an interface route to the BIG/ip Controller's internal interface. For example, if the BIG/ip Controller's internal interface were on logical network 192.168.5/24, and a content server were on logical network 192.168.6/24, you would need to add the following line to the /etc/rc.local file:
route add -net 192.168.6 -interface exp1
  • If the logical networks are on different LANs, they have a router between them. In this environment, you need to do three things:
    • On the BIG/ip Controller, you need to add a static gateway route to the top of the /etc/rc.local file. In the example above, where the BIG/ip Controller is on logical network 192.168.5/24 and the content servers are on logical network 192.168.6/24, you would need to add the following line to the /etc/rc.local file:
      route add -net 192.168.6.0 -gateway 192.168.5.254
    • On each content server, you need to set the default route to point to the router between the LANs. The content server's default route using the above example would be:
      route add default -gateway 192.168.6.254
    • On the router between the LANs, you need to set the default route to the internal interface address on the BIG/ip Controller. The router's default route using the above example would be:
      route add default -gateway 192.168.5.200

Preparing administrative workstations

Before you can do command line administration from your workstation, you may need to install the proper shell software. BIG/ip HA and HA+ Controllers (distributed only in the US) support a secure shell connection using F-Secure SSH. You can actually download the SSH client directly from the BIG/ip Controller's web server once you complete the First-Time Boot utility, which sets up the server for network access.

BIG/ip LB Controllers, as well as all BIG/ip Controllers distributed outside the US, support remote shell administration via a Telnet session. Most PCs have a Telnet client installed, but you may want to verify that yours does.

You also need to review which administrative workstations should be allowed to connect to your BIG/ip Controller and do command line maintenance. When you run the First-Time Boot utility on any BIG/ip Controller, it prompts you to enter the IP address, or range of IP addresses, from which it will accept administrative connections.

Preparing web site content

There are two basic configurations for site content, each with different configuration considerations: static content and dynamic content.

Static web site content

If your web site content is read-only, you probably use a distributed, replicated content scheme. With a replicated content scheme, the content on one server is identical to that of the other servers managing content for the same web site. This ensures that all client requests access the same content, no matter which physical server they are actually connected to.

In this setup, basic load balancing works well. You do not need to address complex issues, such as configuring persistence features.

Dynamic site content

If your web site content is dynamic, such as content created with Active Server Pages, and you store the stateful information, if not all the content, on a single shared file server, you do not have to address persistence issues. However, if you maintain stateful site content on individual servers instead of a shared file server or back-end database, you need to plan on configuring at least some type of persistence on the BIG/ip Controller. See Mirroring connection and persistence information, on page 5-20 for details.