Manual Chapter: Getting Started with BIG-IP Virtual Edition

Applies To:

BIG-IP AAM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP APM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP GTM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP Analytics

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP LTM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP AFM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP PEM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

BIG-IP ASM

  • 11.5.9, 11.5.8, 11.5.7, 11.5.6, 11.5.5, 11.5.4, 11.5.3, 11.5.2, 11.5.1

What is BIG-IP Virtual Edition?

BIG-IP Virtual Edition (VE) is a version of the BIG-IP system that runs as a virtual machine on specifically supported hypervisors. BIG-IP VE virtualizes a hardware-based BIG-IP system running a VE-compatible version of BIG-IP software.

Note: The BIG-IP VE product license determines the maximum allowed throughput rate. To view this rate limit, you can display the BIG-IP VE licensing page within the BIG-IP Configuration utility. Lab editions have no guarantee of throughput rate and are not supported for production environments.
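
You can also check the license, including any rate limits, from the BIG-IP VE command line; for example (the fields displayed vary by license type and BIG-IP version):

  tmsh show sys license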

About BIG-IP VE compatibility with KVM hypervisor products

Each new release of BIG-IP Virtual Edition (VE) software adds support for additional hypervisor management products. The Virtual Edition and Supported Hypervisors Matrix on the AskF5 website, http://support.f5.com, details which hypervisors are supported for each release.

Important: Hypervisors other than those identified in the matrix are not supported with this BIG-IP version; installation attempts on unsupported platforms might not be successful.

About the hypervisor guest definition requirements

The KVM virtual machine guest environment for the BIG-IP Virtual Edition (VE) must include, at minimum:

  • 2 x virtual CPUs
  • 4 GB RAM
  • 3 x virtual network adapters; more if configured with the high availability option
    Important: The number of virtual network adapters per virtual machine definition is determined by the hypervisor.
  • 1 x 100 GB Virtio disk
  • SCSI disk storage; download the image size that provides sufficient space to meet your requirements. A secondary disk might also be required as a datastore for specific BIG-IP modules. For information about datastore requirements, refer to the BIG-IP module's documentation.

Note: Refer to Increasing the disk space allotted to the BIG-IP virtual machine for details on changing the disk size after initial download.
Important: You must supply at least the minimum virtual configuration limits to avoid unexpected results.
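
As an illustration only, the following virt-install command sketches a KVM guest definition that satisfies these minimums. The VM name, image path, and bridge names (br-mgmt, br-ext, br-int) are placeholders; substitute values appropriate for your environment:

  virt-install \
      --name bigip-ve \
      --vcpus 2 \
      --ram 4096 \
      --import \
      --disk path=/var/lib/libvirt/images/BIGIP-VE.qcow2,bus=virtio \
      --network bridge=br-mgmt,model=virtio \
      --network bridge=br-ext,model=virtio \
      --network bridge=br-int,model=virtio \
      --os-variant generic \
      --noautoconsole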

For production licenses, F5 Networks suggests using the maximum configuration limits for the BIG-IP VE system. For lab editions, required reserves can be lower. For each virtual machine, the KVM virtual machine guest environment permits a maximum of 10 network adapters: you can deploy these as one management port and nine dataplane ports, or as one management port, eight dataplane ports, and one HA port.

There are also some maximum configuration limits to consider for deploying a BIG-IP VE virtual machine, such as:

  • CPU reservation can be up to 100 percent of the defined virtual machine hardware. For example, if the hypervisor has a 3 GHz core speed, the reservation for a virtual machine with 2 CPUs can be at most 6 GHz (2 x 3 GHz).
  • To achieve licensing performance limits, all allocated RAM must be reserved.
  • For production environments, virtual disks should be deployed Thick (allocated up front). Thin deployments are acceptable for lab environments.
Important: There is no longer any limitation on the maximum amount of RAM supported on the hypervisor guest.
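
On a KVM host managed by libvirt, one way to approximate these reservations is to pin the guest's vCPUs to dedicated host cores with virsh; the domain name bigip-ve and the host core numbers below are placeholders:

  # Pin vCPU 0 to host core 2 and vCPU 1 to host core 3
  virsh vcpupin bigip-ve 0 2
  virsh vcpupin bigip-ve 1 3

For the RAM side, libvirt can keep guest memory resident via the memoryBacking locked element in the domain XML; see the libvirt documentation for details.
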
Disk space guidelines

The size of the image that you choose to download determines both the number of slots and the number and type of modules that are supported on the VE instance.

  • 7 GB: Supports LTM only, on a single slot. You cannot install upgrades or hotfixes to this version.
  • 31 GB: Supports LTM, GTM, or LTM + GTM, on two slots. This option can be extended and upgraded with new versions and hotfix updates, but it does not allow installing any modules besides LTM, GTM, or LTM + GTM.
  • 100 GB: Supports all modules, with two slots and potential room to install a third. This option can be extended and upgraded with new versions and hotfix updates, and it allows installing any combination of other modules supported by the current version of BIG-IP VE software.
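
If you later need to grow the virtual disk itself, the hypervisor-side step on KVM is typically done with qemu-img while the guest is shut down. A sketch, assuming a qcow2 image at the path shown:

  qemu-img resize /var/lib/libvirt/images/BIGIP-VE.qcow2 100G

Growing the virtual disk does not by itself extend the space available to BIG-IP VE; refer to Increasing the disk space allotted to the BIG-IP virtual machine for the guest-side steps.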

Guest memory guidelines

The general memory requirement recommendation for BIG-IP Virtual Edition (VE) is 2 GB per virtual CPU. Additionally, the following memory guidelines may be helpful in setting expectations based on which modules are licensed on VE guests.

  • 12 GB or more: All module combinations are fully supported.
  • 8 GB: Provisioning more than three modules together is not supported. GTM and Link Controller do not count toward the module-combination limit.
  • More than 4 GB, but less than 8 GB: Provisioning more than three modules together is not supported. Application Acceleration Manager (AAM) cannot be provisioned with any other module; AAM can only be provisioned as Standalone. GTM and Link Controller do not count toward the module-combination limit.
  • 4 GB or less: Provisioning more than two modules together is not supported. AAM can only be provisioned as Dedicated.
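
To check how these guidelines apply to a running guest, you can inspect memory and module provisioning with tmsh from the BIG-IP VE command line; for example:

  tmsh show sys memory
  tmsh list sys provision

To provision a module (LTM shown here purely as an example):

  tmsh modify sys provision ltm level nominal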

About TCP Segmentation Offloading support

If you want to disable support for TCP Segmentation Offloading (TSO), you must submit a tmsh command, because the TSO feature is enabled by default. Note that enabling TSO support also enables support for large receive offload (LRO) and Jumbo Frames.

Configuring a hypervisor for TSO support

You must have the Admin user role to enable or disable TSO support for a hypervisor.

Using the tmsh command sys db, you can turn TSO support on or off, or check whether support is currently enabled.
  1. To determine whether TSO support is currently enabled, use the tmsh list command:
     list sys db tm.tcpsegmentationoffload
  2. To enable support for TSO, use the tmsh modify command:
     modify sys db tm.tcpsegmentationoffload value enable
  3. To disable support for TSO, use the tmsh modify command:
     modify sys db tm.tcpsegmentationoffload value disable
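
For reference, the list command returns the variable in standard tmsh form, so the output should resemble the following (the value shown here is just an example):

  sys db tm.tcpsegmentationoffload {
      value "enable"
  }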

About SR-IOV support

If you want support for SR-IOV, in addition to using the correct hardware and BIOS settings, you must configure hypervisor settings before you set up the guests.

You must have an SR-IOV-compatible network interface card (NIC) installed, and SR-IOV enabled in the BIOS, before you can configure SR-IOV support.

Refer to the documentation included with your hypervisor operating system for information on support and configuration for SR-IOV.
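
As a general illustration for Linux/KVM hosts (the exact procedure varies by NIC, driver, and distribution), enabling SR-IOV typically involves turning on the IOMMU at boot and then creating virtual functions (VFs) on the physical NIC. The interface name eth2 and the VF count below are placeholders:

  # Kernel boot parameter on Intel platforms: intel_iommu=on
  echo 4 > /sys/class/net/eth2/device/sriov_numvfs

How the resulting VFs are attached to the BIG-IP VE guest depends on your hypervisor management tools; consult that documentation for the supported method.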