Host CPU requirements
The host hardware CPU must meet the following requirements.
- The CPU must have 64-bit architecture.
- The CPU must have virtualization support (AMD-V or Intel VT-x) enabled.
- The CPU must support a one-to-one, thread-to-defined virtual CPU ratio, or on single-threading architectures, support at least one core per defined virtual CPU.
- In VMware ESXi 5.5 and later, do not set the number of virtual sockets to more than 2.
- If your CPU supports the Advanced Encryption Standard New Instructions (AES-NI), SSL encryption processing on BIG-IP® VE will be faster. Contact your CPU vendor for details about which CPUs provide AES-NI support.
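On a Linux host, one quick way to confirm AES-NI support is to inspect the CPU flags reported by the kernel; the sketch below assumes an x86 Linux host, where the `aes` flag indicates AES-NI:

```shell
#!/bin/sh
# Check the host CPU flags for AES-NI support (the "aes" flag on x86 Linux).
# Reads /proc/cpuinfo, so this works only on a Linux host.
if grep -qm1 '\baes\b' /proc/cpuinfo; then
    echo "AES-NI: supported"
else
    echo "AES-NI: not reported by this CPU"
fi
```

On an ESXi host you would instead check the CPU model against your CPU vendor's documentation.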
Host memory requirements
The number of licensed TMM cores determines how much memory the host system requires.
| Number of cores | Memory required |
| --- | --- |
Configuring SR-IOV on the hypervisor
- In vSphere, access the command-line tool, esxcli.
- Check the current ixgbe driver settings:
  esxcli system module parameters list -m ixgbe
- Set the ixgbe driver settings. In this example, "max_vfs=16,16" configures 16 virtual functions on each port of a two-port card:
  esxcli system module parameters set -m ixgbe -p "max_vfs=16,16"
- Reboot the hypervisor so that the changes take effect.
When you next open the vSphere user interface, the SR-IOV NIC appears as a PCI device in the Settings area of the guest.
Using vSphere, add a PCI device, and then add two virtual functions.
05:10.0 | Intel Corporation 82599 Ethernet Controller Virtual Function
05:10.1 | Intel Corporation 82599 Ethernet Controller Virtual Function
Use either the console command line or the user interface to configure the VLANs that will serve as pass-through devices for the virtual function. For each interface and VLAN combination, specify a name and a value.
- Name - pciPassthru0.defaultVlan
- Value - 3001
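In the guest's .vmx file, these name/value pairs appear as advanced configuration entries like the following sketch. The second entry (pciPassthru1) and both VLAN IDs are illustrative, assuming a guest with two virtual functions as added above:

```
pciPassthru0.defaultVlan = "3001"
pciPassthru1.defaultVlan = "3002"
```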
Virtual machine memory requirements
The guest should have a minimum of 4 GB of RAM for the initial 2 virtual CPUs. For each additional virtual CPU, add 2 GB of RAM.
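The sizing rule above can be expressed as a small shell function; `min_ram_gb` is a hypothetical helper name for illustration, not an F5 tool:

```shell
#!/bin/sh
# Minimum guest RAM in GB for a given vCPU count, per the rule above:
# 4 GB covers the first 2 virtual CPUs; add 2 GB for each additional vCPU.
min_ram_gb() {
    vcpus=$1
    if [ "$vcpus" -le 2 ]; then
        echo 4
    else
        echo $(( 4 + 2 * (vcpus - 2) ))
    fi
}

min_ram_gb 4   # a 4-vCPU guest needs at least 8 GB
```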
| Provisioned memory | Supported modules | Details |
| --- | --- | --- |
| 4 GB or fewer | Two modules maximum. | AAM can be provisioned as standalone only. |
| 4-8 GB | Three modules maximum. | BIG-IP DNS does not count toward the module limit. Exception: Application Acceleration Manager™ (AAM™) cannot be provisioned with any other module; AAM is standalone only. |
| 8 GB | Three modules maximum. | BIG-IP® DNS does not count toward the module-combination limit. |
| 12 GB or more | All modules. | N/A |
Virtual machine storage requirements
The BIG-IP® modules you want to use determine how much storage the guest needs.
| Provisioned storage | Supported modules | Details |
| --- | --- | --- |
| 8 GB | Local Traffic Manager™ (LTM®) module only; no space for LTM upgrades. | You can increase storage if you need to upgrade LTM or provision additional modules. |
| 38 GB | LTM module only; space for installing LTM upgrades. | You can increase storage if you decide to provision additional modules. You can also install another instance of LTM on a separate partition. |
| 139 GB | All modules and space for installing upgrades. | The Application Acceleration Manager™ (AAM™) module requires 20 GB of additional storage dedicated to AAM. For information about configuring the datastore volume, see Disk Management for Datastore at http://support.f5.com. |
For production environments, deploy virtual disks as thick-provisioned (allocated up front). Thin provisioning is acceptable for lab environments.
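On an ESXi host, vmkfstools can create a thick disk directly or inflate an existing thin disk; the fragment below is a sketch that runs only in the ESXi shell, and the sizes and datastore paths are illustrative, not F5-prescribed values:

```
# Create a new eager-zeroed thick virtual disk (illustrative size and path).
vmkfstools -c 82G -d eagerzeroedthick /vmfs/volumes/datastore1/bigip-ve/bigip-ve.vmdk

# Or inflate an existing thin-provisioned disk to eager-zeroed thick.
vmkfstools -j /vmfs/volumes/datastore1/bigip-ve/bigip-ve.vmdk
```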
Virtual machine network interfaces
When you deploy BIG-IP VE, a specific number of virtual network interfaces (vNICs) are available.
- For management access, one VMXNET3 vNIC or Flexible vNIC.
- For dataplane access, three VMXNET3 vNICs.
Each virtual machine can have a maximum of 10 virtual NICs.