What is BIG-IP Virtual Edition?
BIG-IP® Virtual Edition (VE) is a version of the BIG-IP system that runs as a virtual machine on specifically supported hypervisors. BIG-IP VE virtualizes a hardware-based BIG-IP system running a VE-compatible version of BIG-IP® software.
About BIG-IP VE compatibility with Community Xen hypervisor products
Each new release of BIG-IP® Virtual Edition (VE) software adds support for additional hypervisor management products. The Virtual Edition and Supported Hypervisors Matrix on the AskF5™ website, http://support.f5.com, details which hypervisors are supported for each release.
About the hypervisor guest definition requirements
The Community Xen virtual machine guest environment for the BIG-IP® Virtual Edition (VE), at minimum, must include:
- 2 x virtual CPUs
- 4 GB RAM
- 3 x virtual network adapters (minimum); more if configured with the high availability option

  Important: The number of virtual network adapters per virtual machine definition is determined by the hypervisor.
- 1 x 100 GB Virtio disk
For production licenses, F5 Networks suggests using the maximum configuration limits for the BIG-IP VE system. Reservations can be lower for lab editions. For each virtual machine, the Community Xen virtual machine guest environment permits a maximum of 10 network adapters: either a management port and 9 dataplane ports, or a management port, 8 dataplane ports, and an HA port.
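As one illustration, the minimum guest definition above might be expressed in an xl domain configuration file similar to the following sketch. The guest name, bridge names, and image path are placeholders; adjust them for your environment and consult your hypervisor documentation for the exact disk and network syntax it supports.

```
# Hypothetical xl config for a minimum BIG-IP VE guest definition.
name   = "bigip-ve-01"
vcpus  = 2                      # 2 virtual CPUs (minimum)
memory = 4096                   # 4 GB RAM (minimum)

# 3 virtual network adapters (minimum): management plus two dataplane ports.
vif = [ 'bridge=br-mgmt', 'bridge=br-external', 'bridge=br-internal' ]

# 1 x 100 GB disk image (path and device name are placeholders).
disk = [ 'file:/var/lib/xen/images/bigip-ve-01.img,xvda,w' ]
```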
There are also some maximum configuration limits to consider for deploying a BIG-IP VE virtual machine, such as:
- CPU reservation can be up to 100 percent of the defined virtual machine hardware. For example, if the hypervisor has a 3 GHz core speed, the reservation for a virtual machine with 2 CPUs can be at most 6 GHz.
- To achieve licensing performance limits, all allocated RAM must be reserved.
- For production environments, virtual disks should be deployed Thick (allocated up front). Thin deployments are acceptable for lab environments.
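For example, a thick (fully preallocated) disk image could be created on the Xen host with qemu-img before defining the guest; the image path here is a placeholder, and the qcow2 format is one common choice rather than a requirement.

```
# Production: thick-provisioned (fully preallocated) 100 GB image.
qemu-img create -f qcow2 -o preallocation=full \
    /var/lib/xen/images/bigip-ve-01.qcow2 100G

# Lab: thin-provisioned image (space allocated as the guest writes).
qemu-img create -f qcow2 /var/lib/xen/images/bigip-ve-lab.qcow2 100G
```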
About TCP Segmentation Offloading support
The TCP Segmentation Offloading (TSO) feature is enabled by default; if you want to disable it, you must submit a tmsh command. Note that enabling TSO support also enables support for large receive offload (LRO) and Jumbo Frames.
Configuring a hypervisor for TSO support
You must have the Admin user role to enable or disable TSO support for a hypervisor.
- To determine whether TSO support is currently enabled, use the tmsh list command:
  list sys db tm.tcpsegmentationoffload
- To enable support for TSO, use the tmsh modify command:
  modify sys db tm.tcpsegmentationoffload value enable
- To disable support for TSO, use the tmsh modify command:
  modify sys db tm.tcpsegmentationoffload value disable
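Run from the BIG-IP VE host shell rather than the interactive tmsh prompt, the same steps can be combined as a short session (prefix each command with tmsh):

```
# Check the current TSO setting.
tmsh list sys db tm.tcpsegmentationoffload

# Disable TSO support.
tmsh modify sys db tm.tcpsegmentationoffload value disable

# Re-enable TSO support (the default).
tmsh modify sys db tm.tcpsegmentationoffload value enable
```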
About SR-IOV support
If you want support for SR-IOV, in addition to using the correct hardware and BIOS settings, you must configure hypervisor settings before you set up the guests.
You must have an SR-IOV-compatible network interface card (NIC) installed, and SR-IOV enabled in the BIOS, before you can configure SR-IOV support.
Refer to the documentation included with your hypervisor operating system for information on support and configuration for SR-IOV.
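As one illustration (not specific to any particular Xen distribution), on a Linux-based host SR-IOV virtual functions are typically created through sysfs; the interface name and VF count below are placeholders for your environment.

```
# Check how many virtual functions the NIC supports (eth4 is a placeholder).
cat /sys/class/net/eth4/device/sriov_totalvfs

# Create 8 virtual functions on the physical NIC.
echo 8 > /sys/class/net/eth4/device/sriov_numvfs

# Confirm the virtual functions now appear on the PCI bus.
lspci | grep -i "Virtual Function"
```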