Manual Chapter : About Pools

Applies To:


BIG-IP AAM

  • 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP APM

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP Analytics

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP Link Controller

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP LTM

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP PEM

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP AFM

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP DNS

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

BIG-IP ASM

  • 17.1.2, 17.1.1, 17.1.0, 17.0.0, 16.1.5, 16.1.4, 16.1.3, 16.1.2, 16.1.1, 16.1.0, 16.0.1, 16.0.0, 15.1.9, 15.1.8, 15.1.7, 15.1.6, 15.1.5, 15.1.4, 15.1.3, 15.1.2, 15.1.1, 15.1.0, 15.0.1, 15.0.0, 14.1.5, 14.1.4, 14.1.3, 14.1.2, 14.1.0

About Pools

Introduction to pools

A pool is a logical set of devices, such as web servers, that you group together to receive and process traffic. Instead of sending client traffic to the destination IP address specified in the client request, the BIG-IP® system sends the request to any of the nodes that are members of that pool.
A pool consists of pool members. A pool member is a logical object that represents a physical node on the network. Once you have assigned a pool to a virtual server, the BIG-IP system directs traffic coming into the virtual server to a member of that pool. An individual pool member can belong to one or multiple pools, depending on how you want to manage your network traffic.
You can create three types of pools on the system: server pools, gateway pools, and clone pools.

About server pools

A server pool is a pool containing one or more server nodes that process application traffic. The most common type of server pool contains web servers.
One of the properties of a server pool is a load balancing method. A load balancing method is an algorithm that the BIG-IP® system uses to select a pool member for processing a request. For example, the default load balancing method is Round Robin, which causes the BIG-IP system to send each incoming request to the next available member of the pool, thereby distributing requests evenly across the servers in the pool.
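For reference, the following is a minimal tmsh (TMOS shell) sketch of creating a server pool that uses the default Round Robin method and directing a virtual server to it. The pool name, virtual server name, member addresses, and service ports are illustrative only, and the virtual server is assumed to already exist.

    # Create a pool of two web servers, monitored with the http monitor.
    tmsh create ltm pool web_pool load-balancing-mode round-robin monitor http members add { 192.0.2.10:80 192.0.2.11:80 }
    # Send traffic arriving on an existing virtual server to that pool.
    tmsh modify ltm virtual vs_http pool web_pool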

About gateway pools

One type of pool that you can create is a gateway pool. A gateway pool is a pool of routers.
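As an illustration only, a gateway pool can be created like any other pool and then referenced where a next-hop router is expected, for example by a default route. The names and addresses below are assumptions, not values from this manual.

    # A pool of two upstream routers; port 0 means any service.
    tmsh create ltm pool gateway_pool members add { 203.0.113.1:0 203.0.113.2:0 }
    # Use the gateway pool as the next hop for the default route.
    tmsh create net route default_gw_route network default pool gateway_pool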

About clone pools

You use a clone pool when you want to configure the BIG-IP system to send traffic to a pool of intrusion detection systems (IDSs). An intrusion detection system (IDS) is a device that monitors inbound and outbound network traffic and identifies suspicious patterns that might indicate malicious activities or a network attack. You can use the clone pool feature of a BIG-IP system to copy traffic to a dedicated IDS or a sniffer device.
A clone pool receives all of the same traffic that the server pool receives.
To configure a clone pool, you first create a pool of IDS or sniffer devices and then assign the pool as a clone pool to a virtual server. The clone pool feature is the recommended method for copying production traffic to IDS systems or sniffer devices. Note that when you create the clone pool, the service port that you assign to each node is irrelevant; you can choose any service port. Also, when you add a clone pool to a virtual server, the system copies only new connections; existing connections are not copied.
You can configure a virtual server to copy client-side traffic, server-side traffic, or both:
  • A client-side clone pool causes the virtual server to replicate client-side traffic (prior to address translation) to the specified clone pool.
  • A server-side clone pool causes the virtual server to replicate server-side traffic (after address translation) to the specified clone pool.
You can configure an unlimited number of clone pools on the BIG-IP system.
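A hedged tmsh sketch of this workflow follows, assuming a virtual server named vs_http already exists. The IDS addresses are illustrative, and, as noted above, the service port that you assign to the clone pool members does not matter.

    # Pool of IDS or sniffer devices; port 0 (any service) is sufficient for a clone pool.
    tmsh create ltm pool ids_pool members add { 192.0.2.50:0 192.0.2.51:0 }
    # Replicate client-side traffic (before address translation) to the IDS pool.
    tmsh modify ltm virtual vs_http clone-pools add { ids_pool { context clientside } }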

Creating a server pool

Before starting this task:
  • Decide on the IP addresses or FQDNs for the servers that you want to include in your server pool.
  • If your system is using DHCP, make sure your DNS servers are not configured for round robin DNS resolutions; instead, they should be configured to return all available IP addresses in a resolution.
Use this task to create a pool of servers with pool members. The pool identifies which servers you want the virtual server to send client requests to. As an option, you can identify the servers by their FQDNs instead of their IP addresses. In this way, the system automatically updates pool members whenever you make changes to their corresponding server IP addresses on your network.
  1. On the Main tab, click Local Traffic > Pools.
     The Pool List screen opens.
  2. Click Create.
     The New Pool screen opens.
  3. In the Name field, type a unique name for the pool.
  4. For the Health Monitors setting, from the Available list, select a monitor and move the monitor to the Active list.
     A pool containing nodes represented by FQDNs cannot be monitored by inband or sasp monitors.
  5. From the Load Balancing Method list, select how the system distributes traffic to members of this pool.
     The default is Round Robin.
  6. For the New Members setting, add each server that you want to include in the pool:
     1. Select New Node or New FQDN Node.
     2. (Optional) In the Node Name field, type a name for the node.
     3. If you chose New Node, then in the Address field, type the IP address of the server. If you chose New FQDN Node, then in the FQDN field, type the FQDN of the server.
        If you want to use FQDNs instead of IP addresses, you should still type at least one IP address. Typing one IP address ensures that the system can find a pool member if a DNS server is not available.
     4. For the Service Port option, pick a service from the list.
     5. If you are using FQDNs for the server names, then for Auto Populate, keep the default value of Enabled.
        When you leave Auto Populate turned on, the system creates an ephemeral node for each IP address returned as an answer to a DNS query. Also, when a DNS answer shows that the IP address of an ephemeral node no longer exists, the system deletes the ephemeral node.
     6. Click Add.
     7. Do this step again for each node.
  7. Click Finished.
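The same pool can also be built from the command line. The following is a hedged tmsh sketch that mirrors the steps above, including an FQDN node with Auto Populate enabled; the node name, FQDN, fallback IP address, and pool name are illustrative, and exact syntax can vary by version.

    # FQDN node whose ephemeral members track the IP addresses returned by DNS.
    tmsh create ltm node app_fqdn fqdn { name app.example.com autopopulate enabled }
    # Pool containing the FQDN node plus one static IP member as a fallback.
    tmsh create ltm pool app_pool monitor http load-balancing-mode round-robin members add { app_fqdn:80 192.0.2.10:80 }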

Pool and pool member status

An important part of managing pools and pool members is viewing and understanding the status of a pool or pool member at any given time. The BIG-IP Configuration utility indicates status by displaying one of several icons, distinguished by shape and color, for each pool or pool member:
  • The shape of the icon indicates the status that the monitor has reported for that pool or pool member. For example, a circle-shaped icon indicates that the monitor has reported the pool member as being up, whereas a diamond-shaped icon indicates that the monitor has reported the pool member as being down.
  • The color of the icon indicates the actual status of the node itself. For example, a green shape indicates that the node is up, whereas a red shape indicates that the node is down. A black shape indicates that user intervention is required.
At any time, you can determine the status of a pool. The status of a pool is based solely on the status of its members. Using the BIG-IP Configuration utility, you can find this information by viewing the Availability property of the pool. You can also find this information by displaying the list of pools and checking the Status column.
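From the command line, similar information is available through tmsh show commands. The command forms below reflect my understanding of tmsh, and the pool name is illustrative.

    # Overall pool availability, plus per-member state and statistics.
    tmsh show ltm pool app_pool
    tmsh show ltm pool app_pool members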

Pool features

You can configure the BIG-IP® system to perform a number of different operations for a pool. For example, you can:
  • Associate health monitors with pools and pool members
  • Enable or disable SNAT connections
  • Rebind a connection to a different pool member if the originally-targeted pool member becomes unavailable
  • Specify a load balancing algorithm for a pool
  • Set the Quality of Service or Type of Service level within a packet
  • Assign pool members to priority groups within a pool
You use the BIG-IP Configuration utility to create a load balancing pool, or to modify a pool and its members. When you create a pool, the BIG-IP system automatically assigns a group of default settings to that pool and its members. You can retain these default settings or modify them. Also, you can modify the settings at a later time, after you have created the pool.

Associating a health monitor with a pool

Health monitors are a key feature of the BIG-IP system. Health monitors help to ensure that a server is in an up state and able to receive traffic. When you want to associate a monitor with an entire pool of servers, you do not need to explicitly associate that monitor with each individual server. Instead, you can simply assign the monitor to the pool itself. The BIG-IP system then automatically monitors each member of the pool.
The BIG-IP system contains many different pre-configured monitors that you can associate with pools, depending on the type of traffic you want to monitor. You can also create your own custom monitors and associate them with pools. The only monitor types that are not available for associating with pools are monitors that are specifically designed to monitor nodes and not pools or pool members. That is, the destination address in the monitor specifies an IP address only, rather than an IP address and a service port. These monitor types are:
  • ICMP
  • TCP Echo
  • Real Server
  • SNMP DCA
  • SNMP DCA Base
  • WMI
With the BIG-IP system, you can configure your monitor associations in many useful ways:
  • You can associate a health monitor with an entire pool instead of an individual server. In this case, the BIG-IP system automatically associates that monitor with all pool members, including those that you add later. Similarly, when you remove a member from a pool, the BIG-IP system no longer monitors that server.
  • When a server that is designated as a pool member allows multiple processes to exist on the same IP address and port, you can check the health or status of each process. To do this, you can add the server to multiple pools, and then within each pool, associate a monitor with that server. The monitor you associate with each server checks the health of the process running on that server.
  • When associating a monitor with an entire pool, you can exclude an individual pool member from being associated with that monitor. In this case, you can associate a different monitor for that particular pool member, or you can exclude that pool member from health monitoring altogether. For example, you can associate pool members A, B, and D with the http monitor, while you associate pool member C with the https monitor.
  • You can associate multiple monitors with the same pool. For instance, you can associate both the http and https monitors with the same pool.
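For example, the associations described in this list might be made in tmsh roughly as follows; the pool name, member address, and monitor choices are illustrative.

    # Associate both the http and https monitors with the entire pool.
    tmsh modify ltm pool web_pool monitor "http and https"
    # Give one particular member its own monitor instead of the pool-level monitor.
    tmsh modify ltm pool web_pool members modify { 192.0.2.13:443 { monitor https } }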

Pool member availability

You can specify a minimum number of health monitors. Before Local Traffic Manager can report the pool member as being in an up state, this number of monitors, at a minimum, must report a pool member as being available to receive traffic.
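In tmsh, this availability requirement is expressed as part of the pool's monitor rule. The following is a hedged example with illustrative names.

    # Report a member up when at least one of the two monitors succeeds.
    tmsh modify ltm pool web_pool monitor "min 1 of { http https }"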

SNATs and NATs

When configuring a pool, you can specifically disable any secure network address translations (SNATs) or network address translations (NATs) for any connections that use that pool. By default, these settings are enabled. You can change this setting on an existing pool by displaying the Properties screen for that pool.
One case in which you might want to configure a pool to disable SNAT or NAT connections is when you want the pool to disable SNAT or NAT connections for a specific service. In this case, you could create a separate pool to handle all connections for that service, and then disable the SNAT or NAT for that pool.
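For example, a sketch of disabling both translations on a pool in tmsh (the pool name is illustrative):

    # Connections that use this pool bypass SNAT and NAT translation.
    tmsh modify ltm pool web_pool allow-snat no allow-nat no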

Action when a service becomes unavailable

You can specify the action that you want the BIG-IP system to take when the service on a pool member becomes unavailable.
Possible actions are:
  • None. This is the default action.
  • The BIG-IP® system sends an RST (TCP-only) or ICMP message.
  • The BIG-IP system simply cleans up the connection.
  • The BIG-IP system selects a different node.
You should configure the system to select a different node in certain cases only, such as:
  • When the relevant virtual server is a Performance (Layer 4) virtual server with address translation disabled.
  • When the relevant virtual server’s Protocol setting is set to UDP.
  • When the pool is a gateway pool (that is, a pool of routers).
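As a sketch, this action is a per-pool setting in tmsh; the value names below reflect my understanding of the available options, and the pool name is illustrative.

    # Reselect a different member when the current member's service goes down.
    tmsh modify ltm pool l4_pool service-down-action reselect
    # Other values include none (the default), reset, and drop.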

Slow ramp time

When you take a pool member offline, and then bring it back online, the pool member can become overloaded with connection requests, depending on the load balancing method for the pool. For example, if you use the Least Connections load balancing method, the system sends all new connections to the newly-enabled pool member (because, technically, that member has the least amount of connections).
With the slow ramp time feature, you can specify the number of seconds that the system waits before sending traffic to the newly-enabled pool member. The amount of traffic is based on the ratio of how long the pool member is available compared to the slow ramp time, in seconds. Once the pool member is online for a time greater than the slow ramp time, the pool member receives a full proportion of the incoming traffic.
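For example, a minimal tmsh sketch, with an illustrative pool name and value:

    # Ramp traffic up to a newly enabled member over 60 seconds.
    tmsh modify ltm pool web_pool slow-ramp-time 60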

Type of Service (ToS) level

Another pool feature is the Type of Service (ToS) level. The ToS level is one means by which network equipment can identify and treat traffic differently based on an identifier.
As traffic enters the site, the BIG-IP system can set the ToS level on a packet. Using the IP ToS to Server ToS level that you define for the pool to which the packet is sent, the BIG-IP system can apply an iRule and send the traffic to different pools of servers based on that ToS level.
The BIG-IP system can also tag outbound traffic (that is, the return packets based on an HTTP GET) based on the IP ToS to Client ToS value set in the pool. That value is then inspected by upstream devices and given appropriate priority.
For example, to configure a pool so that a ToS level is set for a packet sent to that pool, you can set both the IP ToS to Client level and the IP ToS to Server level to 16. In this case, the ToS level is set to 16 when sending packets to the client and when sending packets to the server.
If you change the ToS level on a pool for a client or a server, existing connections continue to use the previous setting.
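A hedged tmsh sketch of the example above (the pool name is illustrative):

    # Set the ToS value to 16 toward both the client and the server.
    tmsh modify ltm pool web_pool ip-tos-to-client 16 ip-tos-to-server 16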

Quality of Service (QoS) level

Another setting for a pool is the Quality of Service (QoS) level. In addition to the ToS level, the QoS level is a means by which network equipment can identify and treat traffic differently based on an identifier. Essentially, the QoS level specified in a packet enforces a throughput policy for that packet.
As traffic enters the site, the BIG-IP system can set the QoS level on a packet. Using the Link QoS to Server QoS level that you define for the pool to which the packet is sent, the BIG-IP system can apply an iRule that sends the traffic to different pools of servers based on that QoS level.
The BIG-IP system can also tag outbound traffic (that is, the return packets based on an HTTP GET) based on the Link QoS to Client QoS value set in the pool. That value is then inspected by upstream devices and given appropriate priority.
For example, to configure a pool so that a QoS level is set for a packet sent to that pool, you can set the Link QoS to Client level to 3 and the Link QoS to Server level to 4. In this case, the QoS level is set to 3 when sending packets to the client, and set to 4 when sending packets to the server.
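The same example, sketched in tmsh with an illustrative pool name:

    # QoS level 3 toward the client and 4 toward the server.
    tmsh modify ltm pool web_pool link-qos-to-client 3 link-qos-to-server 4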

Number of reselect tries

You can specify the number of times that the system tries to contact a new pool member after a passive failure. A passive failure consists of a server-connect failure or a failure to receive a data response within a user-specified interval. The default value of 0 indicates no reselects.
This setting is for use primarily with TCP profiles. Using this setting with a Fast L4 profile is not recommended.
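For example, the setting might be changed in tmsh as follows; the pool name and value are illustrative.

    # Try up to three other members after a passive failure; 0 (the default) disables reselects.
    tmsh modify ltm pool tcp_pool reselect-tries 3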

About TCP request queue

TCP request queuing provides the ability to queue connection requests that exceed the capacity of connections for a pool, pool member, or node, as determined by the connection limit. Consequently, instead of dropping connection requests that exceed the capacity of a pool, pool member, or node, TCP request queuing enables those connection requests to reside within a queue in accordance with defined conditions until capacity becomes available.
When using session persistence, a request becomes queued when the pool member connection limit is reached.
Without session persistence, when all pool members have a specified connection limit, a request becomes queued when the total number of connection limits for all pool members is reached.
Conditions for queuing connection requests include:
  • The maximum number of connection requests within the queue, which equates to the maximum number of connections within the pool, pool member, or node. Specifically, the maximum number of connection requests within the queue cannot exceed the cumulative total number of connections for each pool member or node. Any connection requests that exceed the capacity of the request queue are dropped.
  • The availability of server connections for reuse. When a server connection becomes available for reuse, the next available connection request in the queue becomes dequeued, thus allowing additional connection requests to be queued.
  • The expiration rate of connection requests within the queue. As queue entries expire, they are removed from the queue, thus allowing additional connection requests to be queued.
Connection requests within the queue become dequeued when:
  • The connection limit of the pool is increased.
  • A pool member's slow ramp time limit permits a new connection to be made.
  • The number of concurrent connections to the virtual server decreases below the connection limit.
  • The connection request within the queue expires.
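Below is a sketch of enabling TCP request queuing on a pool in tmsh. The property names reflect my reading of the equivalent Request Queuing settings and may differ by version; the pool name, member address, and limits are illustrative. A member (or node) connection limit is what triggers queuing in the first place.

    # Cap the member at 100 concurrent connections so that excess requests queue.
    tmsh modify ltm pool web_pool members modify { 192.0.2.10:80 { connection-limit 100 } }
    # Queue at most 128 requests, each for no longer than 2000 milliseconds.
    tmsh modify ltm pool web_pool queue-on-connection-limit enabled queue-depth-limit 128 queue-time-limit 2000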

About load balancing methods

Load balancing is an integral part of the BIG-IP system. Configuring load balancing on a BIG-IP system means determining your load balancing scenario, that is, which pool member should receive a connection hosted by a particular virtual server. Once you have decided on a load balancing scenario, you can specify the appropriate load balancing method for that scenario.
A load balancing method is an algorithm or formula that the BIG-IP system uses to determine the server to which traffic will be sent. Individual load balancing methods take into account one or more dynamic factors, such as current connection count. Because each application of the BIG-IP system is unique, and server performance depends on a number of different factors, we recommend that you experiment with different load balancing methods, and select the one that offers the best performance in your particular environment.

Default load balancing method

The default load balancing method for the BIG-IP system is Round Robin, which simply passes each new connection request to the next server in line. All other load balancing methods take server capacity and/or status into consideration.
If the equipment that you are load balancing is roughly equal in processing speed and memory, the Round Robin method works well in most configurations. If you want to use the Round Robin method, you can skip the remainder of this section, and begin configuring other pool settings that you want to add to the basic pool configuration.

BIG-IP system load balancing methods

The BIG-IP system provides several load balancing methods for load balancing traffic to pool members.
For each method, the following entries provide a description and guidance on when to use it.

Round Robin
Description: This is the default load balancing method. The Round Robin method passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced.
When to use: The Round Robin method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

Ratio (member), Ratio (node)
Description: The BIG-IP system distributes connections among pool members or nodes in a static rotation according to ratio weights that you define. In this case, the number of connections that each system receives over time is proportionate to the ratio weight you defined for each pool member or node. You set a ratio weight when you create each pool member or node.
When to use: These are static load balancing methods, basing distribution on user-specified ratio weights that are proportional to the capacity of the servers.

Dynamic Ratio (member), Dynamic Ratio (node)
Description: The Dynamic Ratio methods select a server based on various aspects of real-time server performance analysis. These methods are similar to the Ratio methods, except that with Dynamic Ratio methods, the ratio weights are system-generated, and the values of the ratio weights are not static. These methods are based on continuous monitoring of the servers, and the ratio weights are therefore continually changing. To implement Dynamic Ratio load balancing, you must first install and configure the necessary server software for these systems, and then install the appropriate performance monitor.
When to use: The Dynamic Ratio methods are used specifically for load balancing traffic to RealNetworks RealSystem Server platforms, Windows platforms equipped with Windows Management Instrumentation (WMI), or any server equipped with an SNMP agent such as the UC Davis SNMP agent or Windows 2000 Server SNMP agent.

Fastest (node), Fastest (application)
Description: The Fastest methods select a server based on the least number of current sessions. These methods require that you assign both a Layer 7 and a TCP type of profile to the virtual server. If the OneConnect feature is enabled, the Fastest methods do not include idle connections in the calculations when selecting a pool member or node; they use only active connections in their calculations.
When to use: The Fastest methods are useful in environments where nodes are distributed across separate logical networks.

Least Connections (member), Least Connections (node)
Description: The Least Connections methods are relatively simple in that the BIG-IP system passes a new connection to the pool member or node that has the least number of active connections. If the OneConnect feature is enabled, the Least Connections methods do not include idle connections in the calculations when selecting a pool member or node; they use only active connections in their calculations.
When to use: The Least Connections methods function best in environments where the servers have similar capabilities. Otherwise, some amount of latency can occur. For example, consider the case where a pool has two servers of differing capacities, A and B. Server A has 95 active connections with a connection limit of 100, while server B has 96 active connections with a much larger connection limit of 500. In this case, the Least Connections method selects server A, the server with the lowest number of active connections, even though the server is close to reaching capacity. If you have servers with varying capacities, consider using the Weighted Least Connections methods instead.

Weighted Least Connections (member), Weighted Least Connections (node)
Description: Similar to the Least Connections methods, these load balancing methods select pool members or nodes based on the number of active connections. However, the Weighted Least Connections methods also base their selections on server capacity. The Weighted Least Connections (member) method specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified. The Weighted Least Connections (node) method specifies that the system uses the value you specify in the node's Connection Limit setting and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified. If all servers have equal capacity, these load balancing methods behave in the same way as the Least Connections methods. If the OneConnect feature is enabled, the Weighted Least Connections methods do not include idle connections in the calculations when selecting a pool member or node; they use only active connections in their calculations.
When to use: Weighted Least Connections methods work best in environments where the servers have differing capacities. For example, if two servers have the same number of active connections but one server has more capacity than the other, the BIG-IP system calculates the percentage of capacity being used on each server and uses that percentage in its calculations.

Observed (member), Observed (node)
Description: With the Observed methods, nodes are ranked based on the number of connections. The Observed methods track the number of Layer 4 connections to each node over time and create a ratio for load balancing.
When to use: The need for the Observed methods is rare, and they are not recommended for large pools.

Predictive (member), Predictive (node)
Description: The Predictive methods use the ranking methods used by the Observed methods, where servers are rated according to the number of current connections. However, with the Predictive methods, the BIG-IP system analyzes the trend of the ranking over time, determining whether a node’s performance is currently improving or declining. The servers with performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections.
When to use: The need for the Predictive methods is rare, and they are not recommended for large pools.

Least Sessions
Description: The Least Sessions method selects the server that currently has the least number of entries in the persistence table. Use of this load balancing method requires that the virtual server reference a type of profile that tracks persistence connections, such as the Source Address Affinity or Universal profile type. The Least Sessions method is incompatible with cookie persistence.
When to use: The Least Sessions method works best in environments where the servers or other equipment that you are load balancing have similar capabilities.

Ratio Least Connections
Description: The Ratio Least Connections methods cause the system to select the pool member according to the ratio of the number of connections that each pool member has active.

About priority-based member activation

Priority-based member activation is a feature that allows you to categorize pool members into priority groups, so that pool members in higher priority groups accept traffic before pool members in lower priority groups. The priority-based member activation feature has two configuration settings:
Priority group activation
For the priority group activation setting, you specify the minimum number of members that must remain available in each priority group in order for traffic to remain confined to that group. The allowed value for this setting ranges from 0 to 65535. Setting this value to 0 disables the feature (equivalent to using the default value of Disabled).
Priority group
When you enable priority group activation, you also specify a priority group for each member when you add that member to the pool. Retaining the default priority group value of 0 for a pool member means that the pool member is in the lowest priority group and only receives traffic when all pool members in higher priority groups are unavailable.
If the number of available members assigned to the highest priority group drops below the number that you specify, the BIG-IP® system distributes traffic to the next highest priority group, and so on.
For example, this configuration has three priority groups, 3, 2, and 1, with the priority group activation value (shown here as min active members) set to 2.

pool my_pool {
   lb_mode fastest
   min active members 2
   member 10.12.10.7:80 priority 3
   member 10.12.10.8:80 priority 3
   member 10.12.10.9:80 priority 3
   member 10.12.10.4:80 priority 2
   member 10.12.10.5:80 priority 2
   member 10.12.10.6:80 priority 2
   member 10.12.10.1:80 priority 1
   member 10.12.10.2:80 priority 1
   member 10.12.10.3:80 priority 1
}
Connections are first distributed to all pool members with priority 3 (the highest priority group). If fewer than two priority 3 members are available, traffic is directed to the priority 2 members as well. If both the priority 3 group and the priority 2 group have fewer than two members available, traffic is directed to the priority 1 group. The BIG-IP system continuously monitors the priority groups, and whenever a higher priority group once again has the minimum number of available members, the BIG-IP system limits traffic to that group.
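For comparison, the same configuration expressed in current tmsh syntax might look roughly like the following; the load-balancing-mode value shown (fastest-node) is my mapping of the older lb_mode fastest keyword, and the pool name and addresses are reused from the example above.

    tmsh create ltm pool my_pool load-balancing-mode fastest-node min-active-members 2 \
        members add { \
            10.12.10.7:80 { priority-group 3 } 10.12.10.8:80 { priority-group 3 } 10.12.10.9:80 { priority-group 3 } \
            10.12.10.4:80 { priority-group 2 } 10.12.10.5:80 { priority-group 2 } 10.12.10.6:80 { priority-group 2 } \
            10.12.10.1:80 { priority-group 1 } 10.12.10.2:80 { priority-group 1 } 10.12.10.3:80 { priority-group 1 } }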

Pool member features

A pool member consists of a server’s IP address and service port number. An example of a pool member is 10.10.10.1:80. Pool members have a number of features that you can configure when you create the pool.
By design, a pool and its members always reside in the same administrative partition.

About ratio weights

When using a ratio-based load balancing method for distributing traffic to servers within a pool, you can assign a ratio weight to the corresponding pool members. The ratio weight determines the amount of traffic that the pool member receives. The ratio-based load balancing methods are: Ratio (node, member, and sessions), Dynamic Ratio (node and member), and Ratio Least Connections (node and member).
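For example, a hedged tmsh sketch of ratio weights with the Ratio (member) method; the pool name, addresses, and weights are illustrative.

    # Send roughly three times as much traffic to the first member as to the second.
    tmsh modify ltm pool web_pool load-balancing-mode ratio-member
    tmsh modify ltm pool web_pool members modify { 192.0.2.10:80 { ratio 3 } 192.0.2.11:80 { ratio 1 } }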

About priority group numbers

Using the priority group feature, you can assign a priority number to the pool member. The BIG-IP® system then distributes traffic in the pool according to the priority number that you assigned to the pool member.
For example, pool members assigned to priority group 3 normally receive all traffic, rather than pool members in group 2 or group 1. Thus, members that are assigned a high priority receive all traffic until the load reaches a certain level or some number of members in the group become unavailable. If either of these events occurs, some of the traffic goes to members assigned to the next priority group.
This setting is used in tandem with the pool feature known as priority group activation. You use the priority group activation feature to configure the minimum number of members that must be available before the BIG-IP system begins directing traffic to members in a lower priority group.

About connection limits

Connection limits
You can specify the maximum number of concurrent connections allowed for a pool member. Note that the default value of 0 (zero) means that there is no limit to the number of concurrent connections that the pool member can receive.
Connection rate limits
The maximum rate of new connections allowed for the pool member. When you specify a connection rate limit, the system controls the number of allowed new connections per second, thus providing a manageable increase in connections without compromising availability. The default value of 0 specifies that there is no limit on the number of connections allowed per second. The optimal value to specify for a pool member is between 300 and 5000 connections. The maximum value allowed is 100000.
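Both limits are per-member settings; a minimal tmsh sketch with illustrative values:

    # Allow at most 100 concurrent connections and 1000 new connections per second.
    tmsh modify ltm pool web_pool members modify { 192.0.2.10:80 { connection-limit 100 rate-limit 1000 } }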

About health monitors

Explicit monitor associations
Once you have associated a monitor with a pool, the BIG-IP system automatically associates that monitor with every pool member, including those members that you add to the pool later. However, in some cases you might want the monitor for a specific pool member to be different from that assigned to the pool. In this case, you must specify that you want to explicitly associate a specific monitor with the individual pool member. You can also prevent the BIG-IP system from associating any monitor with that pool member.
Explicit monitor association for a pool member
The BIG-IP system contains many different monitors that you can associate with a pool member, depending on the type of traffic you want to monitor. You can also create your own custom monitors and associate them with pool members. The only monitor types that are not available for associating with pool members are monitors that are specifically designed to monitor nodes and not pools or pool members. These monitor types are:
  • ICMP
  • TCP Echo
  • Real Server
  • SNMP DCA
  • SNMP DCA Base
  • WMI
Multiple monitor association for a pool member
The BIG-IP system allows you to associate more than one monitor with the same pool member. You can:
  • Associate more than one monitor with a member of a single pool. For example, you can create monitors http1, http2, and http3, where each monitor is configured differently, and associate all three monitors with the same pool member. In this case, the pool member is marked as down if any of the checks is unsuccessful.
  • Assign one IP address and service to be a member of multiple pools. Then, within each pool, you can associate a different monitor with that pool member. For example, suppose you assign the pool member 10.10.10.20:80 to three separate pools: my_pool1, my_pool2, and my_pool3. You can then associate all three custom HTTP monitors to that same pool member. The result is that the BIG-IP system uses the http1 monitor to check the health of pool member 10.10.10.20:80 in my_pool1, the http2 monitor to check the health of pool member 10.10.10.20:80 in my_pool2, and the http3 monitor to check the health of pool member 10.10.10.20:80 in my_pool3.
You can make multiple-monitor associations either at the time you add the pool member to each pool, or by later modifying a pool member’s properties.
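For example, assuming the custom monitors http1, http2, and http3 already exist, the multiple-pool scenario above might be configured in tmsh roughly as follows.

    # Each pool associates its own monitor with the same member address and port.
    tmsh modify ltm pool my_pool1 members modify { 10.10.10.20:80 { monitor http1 } }
    tmsh modify ltm pool my_pool2 members modify { 10.10.10.20:80 { monitor http2 } }
    tmsh modify ltm pool my_pool3 members modify { 10.10.10.20:80 { monitor http3 } }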
Availability requirement
You can specify a minimum number of health monitors. Before the BIG-IP system can report the pool member as being in an up state, this number of monitors, at a minimum, must report a pool member as being available to receive traffic.

About pool member state

You can enable or disable individual pool members. A pool member is a logical object on the BIG-IP® system that represents a specific server node and service. For example, a node with an IP address of 12.10.10.3 can have a corresponding pool member 12.10.10.3:80.
When you disable a pool member, the node continues to process any active connections or any connections for the current persistence session.
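A hedged tmsh sketch of disabling and re-enabling the pool member from the example above; the pool name is illustrative.

    # Disable the member: it accepts only active and persistent connections.
    tmsh modify ltm pool web_pool members modify { 12.10.10.3:80 { session user-disabled } }
    # Re-enable the member later.
    tmsh modify ltm pool web_pool members modify { 12.10.10.3:80 { session user-enabled } }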