Manual Chapter: Deploying BIG-IP Virtual Edition

Applies To:


BIG-IP AAM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP APM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP GTM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP Analytics

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP LTM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP AFM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP PEM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP ASM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

Host machine requirements and recommendations

To successfully deploy and run the BIG-IP® VE system, the host system must satisfy minimum requirements.

The host system must include:

  • VMware ESX or ESXi. The Virtual Edition and Supported Hypervisors Matrix, published on the AskF5™ web site (http://support.f5.com), identifies the supported versions.
  • For SR-IOV support, you need a network interface card that supports SR-IOV; also, make sure that SR-IOV BIOS support is enabled.
  • For SR-IOV support, load the ixgbe driver and blacklist the ixgbevf driver. (A host-side configuration sketch follows this list.)
  • VMware vSphere client
  • Connection to a common NTP source (this is especially important for each host in a redundant system configuration)
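
If your ESXi host uses an Intel 10GbE NIC with the ixgbe driver, one common way to expose SR-IOV virtual functions is to set the driver's max_vfs module parameter on the host and reboot. The commands below are a minimal sketch only; the parameter string (here, 16 virtual functions on each of two ports) and the driver name are assumptions that depend on your NIC and driver version, so confirm them against your hypervisor documentation.

  esxcli system module parameters set -m ixgbe -p "max_vfs=16,16"
  esxcli system module parameters list -m ixgbe
  reboot

After the host reboots, the virtual functions appear as PCI devices that you can later map to the BIG-IP VE guest (see the SR-IOV step in the deployment procedure).
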
The hypervisor CPU must meet the following requirements:
  • Use 64-bit architecture.
  • Have support for virtualization (AMD-V or Intel VT-x) enabled.
  • Support a one-to-one thread-to-defined virtual CPU ratio, or (on single-threading architectures) support at least one core per defined virtual CPU.
  • If you use an Intel processor, it must be from the Core (or newer) workstation or server family of CPUs.

SSL encryption processing on your VE will be faster if your host CPU supports the Advanced Encryption Standard New Instructions (AES-NI). Contact your CPU vendor for details about which CPUs provide AES-NI support.
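
If you are not sure whether AES-NI is exposed, one quick check, shown here only as an illustrative sketch, is to look for the aes CPU flag from a Linux shell (for example, from the BIG-IP VE guest after it is deployed):

  grep -m1 -o aes /proc/cpuinfo

If the command prints aes, the instruction set is visible to that operating system; if it prints nothing, AES-NI is either not present or not passed through to the guest.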

The hypervisor memory requirement depends on the number of licensed TMM cores. The table describes these requirements.

Number of Cores   Memory Required
1                 2 GB
2                 4 GB
4                 8 GB
8                 16 GB

About BIG-IP VE VMware deployment

To deploy the BIG-IP® Virtual Edition (VE) system on VMware ESXi, you need to perform these tasks:

  • Verify the host machine requirements.
  • Deploy an instance of the BIG-IP system as a virtual machine on a host system.
  • Power on the BIG-IP VE virtual machine.
  • Assign a management IP address to the BIG-IP VE virtual machine.
  • Configure CPU reservation.

After you complete these tasks, you can log in to the BIG-IP VE system and run the Setup utility. Using the Setup utility, you can perform basic network configuration tasks, such as assigning VLANs to interfaces.
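
For reference, the same kind of basic network configuration can also be performed later from the command line with tmsh. The VLAN names, interface numbers, and address below are placeholders rather than values required by BIG-IP VE; adjust them to your own environment:

  tmsh create net vlan internal interfaces add { 1.1 { untagged } }
  tmsh create net vlan external interfaces add { 1.2 { untagged } }
  tmsh create net self 10.10.10.245/24 vlan internal allow-service default
  tmsh save sys config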

Deploying a BIG-IP VE virtual machine

To create an instance of the BIG-IP® system that runs as a virtual machine on the host system, complete the steps in this procedure. (A command-line alternative using ovftool appears after the steps.)

Important: Do not modify the configuration of the VMware guest environment with settings less powerful than the ones recommended in this document. This includes the settings for the CPU, RAM, and network adapters. Doing so might produce unexpected results.
  1. In a browser, open the F5 Downloads page (https://downloads.f5.com).
  2. Download the BIG-IP VE file package ending with scsi.ova.
  3. Start your vSphere Client and log in.
  4. From the vSphere Client File menu, choose Deploy OVF Template.
    The Deploy OVF Template wizard starts.
  5. In the Source pane, click Deploy from file or URL, use the Browse button to locate and open the OVA file, and then click Next.
    For example: \MyDocuments\Work\Virtualization\<BIG-IP_OVA_filename>
    The OVF Template Details pane opens.
  6. Verify that the OVF template details are correct, and click Next.
    This displays the End-User License Agreement (EULA).
  7. Read and accept the license agreement, and click Next.
    The Name and Location pane opens.
  8. In the Name field, type a name for the BIG-IP VE virtual machine, such as: smith_big-ip_ve.
  9. In the Inventory Location area, select a folder name and click Next.
  10. From the Configuration list, select the number of CPUs and disks required for your system, and then click Next.
  11. If the host system is controlled by VMware vCenter, the Host Cluster screen opens. Choose the preferred host and click Next. Otherwise, proceed to the next step.
  12. In the Datastore field, type the name of the datastore your system will use. In the Available space field, type the amount of space your system needs (in gigabytes), and then click Next.
    The Network Mapping dialog box opens.
  13. If SR-IOV support is required, skip this step and perform step 14 instead. Map the Source Networks for Management, External, Internal, and HA to the Destination Networks in your inventory.
    1. Map the source network Management to the name of the appropriate management network in your inventory.
      An example of a destination management network is Management.
    2. Map the source network Internal to the name of a destination non-management network in your inventory.
      An example of a destination internal network is Private Access.
    3. Map the source network External to the name of the appropriate external network in your inventory.
      An example of a destination external network is Public Access.
    4. Map the source network HA to the name of a high-availability network in your inventory.
      An example of a destination HA network is HA.
    5. When you have all four destination networks correctly mapped, click Next and skip the next (SR-IOV only) step.
      The Ready to Complete screen opens.
  14. (Perform this step only if SR-IOV support is required.) Add PCI device NICs.
    1. Delete the existing Source Networks for External, Internal, and HA.
      Important: Be sure to leave the Source Network for the Management NIC.
    2. Edit the settings for the virtual machine to add a PCI device. Map the new device to the name of the device that corresponds to the VLAN associated with your internal subnet.
      Assuming your hypervisor setup was performed correctly, there will be 16 virtual functions on each port (05:10.x and 05:11.x) to which you can choose to map your device.
    3. Edit the settings for the virtual machine to add a PCI device. Map the new device to the name of the device that corresponds to the VLAN associated with your external subnet.
    4. Edit the settings for the virtual machine to add a PCI device. Map the new device to the name of the device that corresponds to the VLAN associated with your HA (high availability) subnet.
    5. When you have the Management network mapped and all three PCI devices added, click Next.
      The Ready to Complete screen opens.
  15. Verify that all deployment settings are correct, and click Finish.
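
As an alternative to the Deploy OVF Template wizard, you can script the same deployment with VMware's ovftool utility. The following is a hedged sketch only: the virtual machine name, datastore, destination networks, deployment option, and vi:// locator are placeholders, and the exact source network labels and valid --deploymentOption values should be confirmed by probing the OVA with ovftool first (for vCenter, the locator may also need the full inventory path).

  ovftool --acceptAllEulas --powerOn \
    --name=smith_big-ip_ve \
    --datastore=datastore1 \
    --deploymentOption=dual-cpu \
    --net:Management=Management \
    --net:Internal="Private Access" \
    --net:External="Public Access" \
    --net:HA=HA \
    <BIG-IP_OVA_filename> \
    vi://administrator@vcenter.example.com/

The --powerOn option starts the virtual machine after deployment, which covers the first task in the next section.
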

Powering on the virtual machine

You power on the virtual machine so that you can begin assigning IP addresses.
  1. In the main vSphere client window, click the Administration menu.
  2. Select the virtual machine that you want to power on.
  3. Click the Summary tab, and in the Commands area, click Power On.
    The status icon changes to indicate that the virtual machine is on. Note that the system will not process traffic until you configure the virtual machine from its command line or through its web interface.

There are two default accounts used for initial configuration and setup:

  • The root account provides access locally, or using SSH, or using the F5 Configuration utility. The root account password is default.
  • The admin account provides access through the web interface. The admin account password is admin.

You should change passwords for both accounts before bringing a system into production.
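
One way to change both passwords from the BIG-IP VE command line is with tmsh; each command prompts for the new password. (This is a sketch; you can also change passwords in the Configuration utility.)

  tmsh modify auth password admin
  tmsh modify auth password root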

Assigning a management IP address to a virtual machine

The virtual machine needs an IP address assigned to its virtual management port.
Tip: The default configuration for new deployments and installations is for DHCP to acquire the management port IP address.
  1. From the main vSphere client screen, click the Administration menu.
  2. At the <username> login prompt, type root.
  3. At the password prompt, type default.
  4. Type config and press Enter.
    The F5 Management Port Setup screen opens.
  5. Click OK.
  6. If you want DHCP to automatically assign an address for the management port, select Yes. Otherwise, select No and follow the instructions for manually assigning an IP address and netmask for the management port.

When assigned, the management IP address appears in the Summary tab of the vSphere client. Alternatively, you can display it from the BIG-IP VE command line with a hypervisor-independent command such as tmsh show sys management-ip.

Tip: F5 Networks highly recommends that you specify a default route for the virtual management port, but it is not required for operating the virtual machine.
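
If you prefer to assign the management address and default route from the BIG-IP VE command line rather than the config utility, a sketch with placeholder addresses looks like this (if DHCP has already assigned an address, disable DHCP for the management port first, for example through the config utility):

  tmsh create sys management-ip 192.0.2.10/24
  tmsh create sys management-route default gateway 192.0.2.1
  tmsh save sys config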

Configuring the CPU reservation

Based on selections you made when you deployed the OVA file, a specific amount of memory is reserved for the BIG-IP VE virtual machine.

CPU is not specifically reserved, so to prevent instability on heavily loaded hosts, you should reserve CPU manually.

  1. In vSphere, edit the properties of the virtual machine.
  2. Click the Resources tab.
  3. In the Settings area, click CPU.
  4. In the Resource Allocation section, use the slider to change the reservation.
    The CPU reservation can be up to 100 percent of the defined virtual machine hardware. For example, if the hypervisor has a 3 GHz core speed, the reservation for a virtual machine with 2 CPUs can be at most 6 GHz (2 x 3 GHz).
  5. Click OK.

Turning off LRO/GRO from the VE guest to optimize PEM performance

Before you can access the VE guest to turn off LRO and GRO, you must have assigned the guest a management IP address.
If you use the virtual machine with the PEM module, you must turn off large receive offload (LRO) and generic receive offload (GRO) for each network interface card (NIC) that passes traffic, and you must also use SR-IOV. Although there are a number of ways to turn off LRO, the most reliable is to connect to the VE guest and use the ethtool utility.
  1. Use an SSH tool to access the management IP address of the BIG-IP® VE system.
  2. From the command line, log in as root.
  3. Use ethtool to turn off rx-checksumming for the NIC.
    ethtool -K eth<X> rx off
    Important: In this example, substitute the NIC number for <X>.
  4. Use ethtool to turn off LRO for the NIC.
    ethtool -K eth<X> lro off
    Important: In this example, substitute the NIC number for <X>.
  5. Use ethtool to turn off GRO for the NIC.
    ethtool -K eth<X> gro off
    Important: In this example, substitute the NIC number for <X>.
  6. Use ethtool to confirm that LRO and GRO are turned off for the NIC.
    ethtool -k eth<X>
    In the system response to your command, you should see these lines:

    generic-receive-offload: off

    large-receive-offload: off

    If either of these is reported as on, your attempt to turn it off was not successful.
    Important: In this example, substitute the NIC number for <X>.
  7. Repeat the previous three steps for each of the NICs that the BIG-IP VE uses to pass traffic.
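
If the guest has several traffic NICs, a small shell loop saves repeating the commands by hand. The interface names here are placeholders; adjust the list to match the NICs your BIG-IP VE actually uses, and note that ethtool settings typically do not persist across a reboot.

  for nic in eth1 eth2 eth3; do
      ethtool -K $nic rx off lro off gro off
      ethtool -k $nic | grep -E 'generic-receive-offload|large-receive-offload'
  done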

With LRO and GRO turned off, the PEM module on the BIG-IP VE system performs with better throughput and stability.

You can achieve optimum performance (throughput and stability) with the PEM module only if you enable SR-IOV.