Manual Chapter : High Availability Configuration

Applies To:

BIG-IP LTM

  • 15.0.1, 15.0.0

High Availability Configuration

About DSC configuration on a VIPRION system

The way you configure device service clustering (DSC®) (also known as high availability) on a VIPRION® system varies depending on whether the system is provisioned to run the vCMP® feature.
When configuring high availability, always configure network failover, as opposed to serial failover. Serial failover is not supported for VIPRION® systems.

DSC configuration for non-vCMP VIPRION systems

For a Sync-Failover device group that consists of VIPRION systems that are not licensed and provisioned for vCMP, each VIPRION cluster constitutes an individual device group member. The following table describes the IP addresses that you must specify when configuring high availability.
Required IP addresses for DSC configuration on a non-vCMP system

Device trust
  The primary floating management IP address for the VIPRION cluster.
ConfigSync
  The unicast non-floating self IP address assigned to VLAN internal.
Failover
  • Recommended: The unicast non-floating self IP address that you assigned to an internal VLAN (preferably VLAN HA), as well as a multicast address.
  • Alternative: All unicast management IP addresses that correspond to the slots in the VIPRION cluster.
Connection mirroring
  For the primary address, the non-floating self IP address that you assigned to VLAN HA. The secondary address is not required, but you can specify any non-floating self IP address for an internal VLAN.
  Connection mirroring requires that both devices have identical hardware platforms (chassis and blades). If you plan to enable connection mirroring between two VIPRION chassis, each chassis within the Sync-Failover device group must contain the same number of blades in the same slot numbers. For more information, see the section Configuring connection mirroring between VIPRION clusters.
When configuring high availability, always configure network failover, as opposed to serial failover. Serial failover is not supported for VIPRION systems.
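
For reference, these GUI settings have tmsh equivalents on the cm device object. The following is a minimal sketch only; the device name, self IP addresses, and multicast values are placeholders, and you should confirm the attribute names on your BIG-IP version before applying them.

    # ConfigSync address (self IP on VLAN internal); device name and IPs are placeholders.
    tmsh modify /cm device chassis1.example.com configsync-ip 10.1.10.1

    # Network failover addresses: a self IP on an internal VLAN (preferably VLAN HA)
    # plus a multicast address (the multicast values shown are common VIPRION defaults).
    tmsh modify /cm device chassis1.example.com \
        unicast-address { { ip 10.1.20.1 port 1026 } } \
        multicast-interface eth0 multicast-ip 224.0.0.245 multicast-port 62960

    # Connection mirroring: primary address on VLAN HA; the secondary address is optional.
    tmsh modify /cm device chassis1.example.com mirror-ip 10.1.20.1 mirror-secondary-ip 10.1.10.1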

DSC configuration for vCMP systems

On a vCMP® system, the devices in a device group are virtual devices, known as vCMP guests. You configure device trust, config sync, failover, and mirroring to occur between equivalent vCMP guests in separate chassis.
For example, if you have a pair of VIPRION® systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. This table shows an example.
Sample device groups for two VIPRION systems with vCMP

Device-Group-A
  • Guest1 on chassis1
  • Guest1 on chassis2
Device-Group-B
  • Guest2 on chassis1
  • Guest2 on chassis2
Device-Group-C
  • Guest3 on chassis1
  • Guest3 on chassis2
By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest. The next table describes the IP addresses that you must specify when configuring high availability.
Required IP addresses for DSC configuration on a VIPRION system with vCMP

Device trust
  The cluster management IP address of the guest.
ConfigSync
  The non-floating self IP address on the guest that is associated with VLAN internal on the host.
Failover
  • Recommended: The unicast non-floating self IP address on the guest that is associated with an internal VLAN on the host (preferably VLAN HA), as well as a multicast address.
  • Alternative: The unicast management IP addresses for all slots configured for the guest.
Connection mirroring
  For the primary address, the non-floating self IP address on the guest that is associated with VLAN internal on the host. The secondary address is not required, but you can specify any non-floating self IP address on the guest that is associated with an internal VLAN on the host.
  Connection mirroring requires that both devices have identical hardware platforms (chassis and blades). If you plan to enable connection mirroring between two guests, each guest must reside on a separate chassis, be assigned to the same number of blades in the same slot numbers, and have the same number of cores allocated per slot. For more information, see the section Configuring connection mirroring between VIPRION clusters.
When configuring high availability, always configure network failover, as opposed to serial failover. Serial failover is not supported for VIPRION® systems.
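
As a sketch of the device group layout shown in the sample table, the following tmsh commands create one Sync-Failover device group per pair of equivalent guests, with network failover enabled. The guest device names are hypothetical, and the commands assume that device trust has already been established between the guests.

    # Device-Group-A pairs Guest1 on chassis1 with Guest1 on chassis2 (names are placeholders).
    tmsh create /cm device-group Device-Group-A \
        devices add { guest1-chassis1.example.com guest1-chassis2.example.com } \
        type sync-failover network-failover enabled

    # Repeat for Device-Group-B (Guest2 pair) and Device-Group-C (Guest3 pair).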

About DSC configuration for systems with APM

When you configure a VIPRION® system (or a VIPRION system provisioned for vCMP®) to be a member of a Sync-Failover device group, you can specify the minimum number of cluster members (physical or virtual) that must be available to prevent failover. If the number of available cluster members falls below the specified value, the chassis or vCMP guest fails over to another device group member.
When one of the BIG-IP modules provisioned on your VIPRION® system or guest is Access Policy Manager® (APM®), there is a special consideration. The BIG-IP system automatically mirrors all APM session data to the designated next-active device rather than to an active member of the same VIPRION or vCMP cluster. As a result, unexpected behavior might occur if one or more cluster members become unavailable.
To prevent unexpected behavior, you should always configure the chassis or guest so that the minimum number of available cluster members required to prevent failover equals the total number of defined cluster members. For example, if the cluster is configured to contain a total of four cluster members, you should set the Minimum Up Members value to 4, signifying that if fewer than all four cluster members are available, failover should occur. In this way, if even one cluster member becomes unavailable, the system or guest fails over to the next-active mirrored peer device, which has full cluster member availability.
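
In tmsh, this setting corresponds to the min-up-members attributes of the sys cluster component. The sketch below assumes a four-member cluster; verify the attribute names on your version (for example, with tmsh list sys cluster) before applying them.

    # Require all four cluster members to be available; otherwise fail over.
    tmsh modify /sys cluster default min-up-members 4 \
        min-up-members-enabled yes min-up-members-action failover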

About connection mirroring for VIPRION systems

For VIPRION® systems, each device in a Sync-Failover device group can be either a physical cluster of slots within a chassis, or a virtual cluster for a vCMP® guest. In either case, you can configure a device to mirror an active traffic group's connections to its next-active device.
For mirroring to work, both the active device and its next-active device must have identical chassis platform and blade models.
You enable connection mirroring on the relevant virtual server, and then you configure each VIPRION cluster or vCMP guest to mirror connections by choosing one of these options:
Within a cluster
You can configure the BIG-IP system to mirror connections between blades within a single VIPRION cluster on the same chassis. This option is not available on VIPRION systems provisioned to run vCMP.
With this option, the BIG-IP system mirrors Fast L4 connections only.
Between clusters (recommended)
You can configure the BIG-IP system to mirror connections between two chassis or between two vCMP guests that reside in separate chassis. When you choose this option, the BIG-IP system mirrors a traffic group's connections to the traffic group's next-active device. For VIPRION systems that are not provisioned for vCMP, each chassis must have the same number of blades in the same slot numbers. For VIPRION systems provisioned for vCMP, each guest must be assigned to the same number of blades in the same slot numbers, with the same number of cores allocated per slot.
In addition to enabling connection mirroring on the virtual server, you must also assign the appropriate profiles to the virtual server. For example, if you want the BIG-IP system to mirror SSL connections, you must assign one or more SSL profiles to the virtual server.
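
For example, a minimal tmsh sketch of enabling mirroring on a virtual server and assigning a client SSL profile might look like the following; the virtual server name is a placeholder.

    # Enable connection mirroring on the virtual server.
    tmsh modify /ltm virtual vs_example mirror enabled

    # Assign a client SSL profile so that SSL connections can be mirrored.
    tmsh modify /ltm virtual vs_example profiles add { clientssl }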

Configuring connection mirroring within a VIPRION cluster

Using the BIG-IP Configuration utility, you can configure mirroring among blades of a VIPRION® cluster within the same chassis.
Mirroring among blades within the same chassis applies to Fast L4 connections only. Also, mirroring can only occur between identical blade models.
  1. From a browser window, log in to the BIG-IP Configuration utility, using the cluster IP address.
  2. On the Main tab, click Device Management > Devices.
    The Devices screen opens.
  3. In the Device list, in the Name column, click the name of the device you want to configure.
  4. From the Device Connectivity menu, choose Mirroring.
  5. From the Network Mirroring list, select Within Cluster.
  6. Click Update.

Configuring connection mirroring between VIPRION clusters

Before doing this task, you must enable connection mirroring on the relevant virtual server.
Using the BIG-IP Configuration utility, you can configure connection mirroring between two VIPRION or vCMP clusters as part of your high availability setup:
  • When you configure mirroring on a VIPRION system where vCMP is not provisioned (a bare-metal configuration), an active traffic group on one chassis mirrors its connections to the next-active chassis in the device group.
  • When you configure mirroring on a vCMP guest, an active traffic group mirrors its connections to its next-active guest in another chassis.
Connection mirroring requires that both devices have identical hardware platforms (chassis and blades).
You must perform this task locally on every device (chassis or vCMP guest) in the device group. For VIPRION systems with bare-metal configurations (no vCMP provisioned), each chassis must contain the same number of blades in the same slot numbers. For VIPRION systems provisioned for vCMP, each guest must reside on a separate chassis, be assigned to the same number of blades in the same slot numbers, and have the same number of cores allocated per slot.
  1. From a browser window, log in to the BIG-IP Configuration utility, using the cluster IP address.
  2. On the Main tab, click Device Management > Devices.
    The Devices screen opens.
  3. In the Device list, in the Name column, click the name of the device you want to configure.
  4. From the Device Connectivity menu, choose Mirroring.
  5. From the Network Mirroring list, select Between Clusters.
  6. Click Update.
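
A corresponding tmsh sketch for the between-clusters case follows; run it on every device group member. The device name is a placeholder, the mirroring addresses belong on the cm device object as described earlier, and the database key and value are the same assumptions noted in the previous task.

    # Assumed db key/value for the Between Clusters selection; verify before use.
    tmsh modify /sys db statemirror.clustermirroring value between
    # Confirm the mirroring addresses configured for this device (name is a placeholder).
    tmsh list /cm device chassis1.example.com mirror-ip mirror-secondary-ip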