Applies To:
BIG-IP AAM
- 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
BIG-IP APM
- 12.1.6, 12.1.5, 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
BIG-IP LTM
- 12.1.6, 12.1.5, 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
BIG-IP AFM
- 12.1.6, 12.1.5, 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
BIG-IP DNS
- 12.1.6, 12.1.5, 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
BIG-IP ASM
- 12.1.6, 12.1.5, 12.1.4, 12.1.3, 12.1.2, 12.1.1, 12.1.0
Redundant System Configuration
About DSC configuration on a VIPRION system
The way you configure device service clustering (DSC®) (also known as redundancy) on a VIPRION® system varies depending on whether the system is provisioned to run the vCMP® feature.
DSC configuration for non-vCMP systems
For a device group that consists of VIPRION® systems that are not licensed and provisioned for vCMP®, each VIPRION cluster constitutes an individual device group member. The following table describes the IP addresses that you must specify when configuring redundancy.
Feature | IP addresses required |
---|---|
Device trust | The primary floating management IP address for the VIPRION cluster. |
ConfigSync | The unicast non-floating self IP address assigned to VLAN internal. |
Failover | |
Connection mirroring | For the primary address, the non-floating self IP address that you assigned to VLAN HA. The secondary address is not required, but you can specify any non-floating self IP address for an internal VLAN. |
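As one way to apply these addresses from the command line, the same settings can be made with tmsh. This is a minimal sketch, assuming a hypothetical device named bigip1.example.com and example self IP addresses; failover unicast entries are device-specific, so only a generic one is shown:

```shell
# Hypothetical device name and addresses; substitute your own values.
# Point config sync at the non-floating self IP on VLAN internal:
tmsh modify cm device bigip1.example.com configsync-ip 10.0.10.1

# Define a unicast failover address (1026 is the default failover port):
tmsh modify cm device bigip1.example.com unicast-address { { ip 10.0.10.1 port 1026 } }

# Set the primary (VLAN HA) and optional secondary mirroring addresses:
tmsh modify cm device bigip1.example.com mirror-ip 10.0.30.1 mirror-secondary-ip 10.0.10.1

# Save the running configuration to the stored configuration:
tmsh save sys config
```

The same settings are also available in the Configuration utility under Device Management.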
DSC configuration for vCMP systems
On a vCMP® system, the devices in a device group are virtual devices, known as vCMP guests. You configure device trust, config sync, failover, and mirroring to occur between equivalent vCMP guests in separate chassis.
For example, if you have a pair of VIPRION® systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. This table shows an example.
Device groups for vCMP | Device group members |
---|---|
Device-Group-A | Guest A on chassis 1 and Guest A on chassis 2 |
Device-Group-B | Guest B on chassis 1 and Guest B on chassis 2 |
Device-Group-C | Guest C on chassis 1 and Guest C on chassis 2 |
By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest. The next table describes the IP addresses that you must specify when configuring redundancy.
Feature | IP addresses required |
---|---|
Device trust | The cluster management IP address of the guest. |
ConfigSync | The non-floating self IP address on the guest that is associated with VLAN internal on the host. |
Failover | |
Connection mirroring | For the primary address, the non-floating self IP address on the guest that is associated with VLAN internal on the host. The secondary address is not required, but you can specify any non-floating self IP address on the guest that is associated with an internal VLAN on the host. |
About DSC configuration for systems with APM
When you configure a VIPRION® system (or a VIPRION system provisioned for vCMP®) to be a member of a Sync-Failover device group, you can specify the minimum number of cluster members (physical or virtual) that must be available to prevent failover. If the number of available cluster members falls below the specified value, the chassis or vCMP guest fails over to another device group member.
When one of the BIG-IP® modules provisioned on your VIPRION® system or guest is Access Policy Manager® (APM®), you have a special consideration. The BIG-IP system automatically mirrors all APM session data to the designated next-active device instead of to an active member of the same VIPRION or vCMP cluster. As a result, unexpected behavior might occur if one or more cluster members become unavailable.
To prevent unexpected behavior, you should always configure the chassis or guest so that the minimum number of available cluster members required to prevent failover equals the total number of defined cluster members. For example, if the cluster is configured to contain a total of four cluster members, set the Minimum Up Members value to 4, so that failover occurs whenever fewer than all four cluster members are available. In this way, if even one cluster member becomes unavailable, the system or guest fails over to the next-active mirrored peer device, which has full cluster member availability.
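One way to express this setting with tmsh, assuming a four-member cluster (on VIPRION systems the cluster object is named default):

```shell
# Require all four defined cluster members to be available;
# if fewer are up, trigger failover to the next-active device.
tmsh modify sys cluster default min-up-members 4 \
    min-up-members-enabled yes min-up-members-action failover
tmsh save sys config
```

The equivalent screen in the Configuration utility is the cluster properties page, where Minimum Up Members is set.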
About connection mirroring
Connection mirroring ensures that if a blade, or a cluster within a device service clustering (redundant system) configuration, becomes unavailable, the system can still process existing connections. You can configure one of two types of mirroring on a VIPRION® system:
- Intra-cluster mirroring
- The VIPRION system mirrors the connections and session persistence records within the cluster, that is, between the blades in the cluster. You can configure intra-cluster mirroring on both single devices and redundant configurations. It is important to note that F5 Networks® does not support intra-cluster mirroring for Layer 7 (non-FastL4) virtual servers.
- Inter-cluster mirroring
- The VIPRION system mirrors the connections and session persistence records to another cluster in a redundant configuration. You can configure inter-cluster mirroring on a redundant system configuration only, and only between identical hardware platforms. Moreover, on a VIPRION® system running the vCMP® feature, the two guests acting as mirrored peers must each reside on a separate chassis with the same number of slots, on the same slot numbers, and with the same number of cores allocated per slot.
Note: Inter-cluster connection mirroring for CMP-disabled virtual servers is not supported.
Intra-cluster mirroring and inter-cluster mirroring are mutually exclusive. Note that although connection mirroring enhances the reliability of your system, it might affect system performance.
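The mirroring type is selected through the statemirror.clustermirroring database variable, and mirroring is then enabled per virtual server. As a sketch, assuming a hypothetical virtual server named vs_example:

```shell
# Select the mirroring scope (the two values are mutually exclusive):
tmsh modify sys db statemirror.clustermirroring value within   # intra-cluster
# tmsh modify sys db statemirror.clustermirroring value between  # inter-cluster

# Enable connection mirroring on a virtual server (vs_example is hypothetical):
tmsh modify ltm virtual vs_example mirror enabled
tmsh save sys config
```

Because mirroring consumes CPU and inter-cluster bandwidth, it is typically enabled only on virtual servers whose connections must survive a failover.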