Manual Chapter: BIG-IQ Application Service Sizing Guidelines

Applies To:


BIG-IQ Centralized Management

  • 8.0.0

BIG-IQ application service sizing overview

This document describes the official, supported scale envelope for BIG-IQ v8.0.x in relation to application services. Because F5 customers' configurations vary widely, it is not practical for F5 to test every permutation that analytics and statistics gathering can introduce; each customer environment is unique. The following content provides guidance on certain maximums as well as on averages, as a baseline to help BIG-IP and BIG-IQ users size their environments correctly. The values provided for BIG-IQ Virtual Edition depend heavily on infrastructure outside F5's control: for example, host CPU speed, host memory, networking, storage for virtual machines, dedicated CPU allocation for virtual machines, the load on the infrastructure, and so on.

BIG-IQ guidance configuration for scaled environments

For large, scaled environments, F5 recommends the following environment/topology:
  • 2 BIG-IQ console machines configured in an HA pair, each with the following resources:
    • a BIG-IQ Virtual Machine with at least 16 CPUs, 64 GB RAM, and 500 GB disk space.
  • 5 or more (up to 20) BIG-IQ Data Collection Devices (DCDs), each with the following resources:
    • a BIG-IQ Virtual Machine with at least 8 CPUs, 32 GB RAM, and 500 GB disk space.
The BIG-IQ data collection devices are configured to record analytics statistics for the BIG-IP devices being discovered.
For optimum performance, F5 makes the following maximum round trip latency recommendations:
  • between any two DCD or BIG-IQ devices in a DCD cluster: 75 ms
  • between the BIG-IQ CM and the BIG-IP devices it manages: 250 ms
  • between the managed BIG-IP devices and the DCDs that collect their data: 250 ms
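As a quick pre-deployment sanity check, the recommendations above can be compared against measured round-trip times. The sketch below (the hostnames are hypothetical placeholders, and the thresholds are the values from the table) shells out to the system `ping` utility and parses its summary line; adapt it to your own hosts and platform.

```python
#!/usr/bin/env python3
"""Rough pre-deployment latency check against the recommended maximums.
The hostnames below are hypothetical placeholders."""
import re
import subprocess

# (host, maximum round-trip latency in ms) -- limits from the table above.
CHECKS = [
    ("dcd-1.example.net", 75.0),    # DCD <-> DCD / BIG-IQ within the DCD cluster
    ("bigip-1.example.net", 250.0), # BIG-IQ CM <-> managed BIG-IP
]

def parse_avg_rtt_ms(ping_output: str) -> float:
    """Extract the average RTT from a Linux `ping` summary line, e.g.
    'rtt min/avg/max/mdev = 0.412/0.598/0.771/0.113 ms'."""
    m = re.search(r"= [\d.]+/([\d.]+)/", ping_output)
    if m is None:
        raise ValueError("could not find an rtt summary in ping output")
    return float(m.group(1))

def avg_rtt_ms(host: str, count: int = 5) -> float:
    """Ping `host` and return the average round-trip time in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    return parse_avg_rtt_ms(out)

if __name__ == "__main__":
    for host, limit in CHECKS:
        rtt = avg_rtt_ms(host)
        verdict = "OK" if rtt <= limit else "EXCEEDS RECOMMENDATION"
        print(f"{host}: avg RTT {rtt:.1f} ms (limit {limit} ms) -> {verdict}")
```

Note that `ping` measures ICMP round-trip time only; sustained latency under load may differ, so treat this as a first-pass check rather than a guarantee.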
For more information about sizing for specific environments, refer to the related links (below). You can also use the F5 Networks BIG-IQ DCD Sizing Tools, available on downloads.f5.com.

Scale guidance configuration for BIG-IQ specific objects

  • Total number of BIG-IP ISO images (full releases and hot fixes): limited only by local storage availability (disk space/shared partition size)
  • Total maximum number of local BIG-IQ users (see table note 1): 100 per Centralized Management cluster
  • Total maximum number of local BIG-IQ user groups: 25 per Centralized Management cluster
  • Total maximum number of BIG-IQ DCDs: 20 per BIG-IQ deployment
Table note 1
Local users are not the same as remote users; for example, a remote user might authenticate through an LDAP and/or RADIUS server.

Scale guidance configuration for BIG-IQ managed applications

It is very challenging for F5 to test every permutation that analytics and statistics gathering can introduce; each customer environment is unique. The following is a sample that was tested in the F5 lab; numbers can vary widely from one environment to another. Because running additional application services requires more resources, make sure to enable application services gradually. These figures do not imply hard limits; they were chosen only as a representative configuration for this particular test. Your scale limits may vary with your own configuration. Review all documentation (including the Legacy Applications Analytics article link at the bottom of this page), and work with your F5 representative before enabling and using any system in a production environment.
The following is a list of some of the more important limiting factors that can appear on any deployment, and what needs to be accounted for:
  • Retaining more than the default 10 hours of raw data specific to analytics may impact performance. (Note that this is the granular drill-down window of 10 hours, not the overall graph availability.) More resources (CPU/memory/disk space) are consumed when you retain more raw data, so any such increase needs to be made gradually. Refer to the Configuring Statistics Collection link at the bottom of this page.
  • The number of data collection devices (DCDs) in a specific environment. (Maximum tested is 20).
  • The amount of traffic coming from the BIG-IP devices to the DCDs.
  • The storage size of each DCD.
  • The type of events that are ingested (for example, WAF events can have a higher impact than analytics events).
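For the first factor above (raw-data retention), the extra disk consumed grows roughly linearly with the extension of the retention window. The back-of-the-envelope sketch below illustrates this; the ingest rate used in the example is an assumption you must replace with a rate measured in your own environment.

```python
"""Back-of-the-envelope estimate of extra DCD disk consumption when the
raw-data retention window is extended beyond the default 10 hours.
The ingest rate is an assumed figure; measure your own before sizing."""

def extra_raw_storage_gb(ingest_mb_per_hour: float,
                         retention_hours: float,
                         default_hours: float = 10.0) -> float:
    """Additional raw-data storage (GB) versus the default retention window."""
    extra_hours = max(retention_hours - default_hours, 0.0)
    return ingest_mb_per_hour * extra_hours / 1024.0

# Example: an assumed 2 GB/hour of raw analytics data, retained for
# 24 hours instead of the default 10, adds roughly 28 GB per DCD.
print(round(extra_raw_storage_gb(2048.0, 24.0), 1))  # -> 28.0
```

This is only a linear approximation of raw-data growth; index overhead, replication, and event type mix will change the real figure, which is why F5 advises increasing retention gradually.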
Type: Application Services (see table note 1)
Number of applications tested: 1000 (see table note 2)
Tested limits (representative configuration):
  • 40 BIG-IP devices/clusters per BIG-IQ device (across all applications)
  • 5,000 ADC virtual servers per BIG-IQ device (across all applications)
  • 5,000 pools per BIG-IQ device (across all applications); this number was reduced due to an automated-script limitation
  • 5,000 pool members per BIG-IQ device (across all applications)
  • 25 applications per single BIG-IP
  • 125 virtual servers per single BIG-IP
  • 125 pools per single BIG-IP
  • 125 pool members per single BIG-IP
  • 5 virtual servers per single application
  • 5 pools per single application
  • 1 pool member per single application
  • Applications should be evenly spread across the number of BIG-IP devices/clusters
Operational (see table note 3):
  • 200 virtual servers in a single refresh operation
  • 13.7 Kbps writes for traffic to ElasticSearch
  • 100 application services created via API in a single call
Table note 4 specifies maximum round trip latency recommendations.
Table note 1
Application Services can be newly created applications (greenfield) or legacy applications.
Table note 2
Tested with 1000 application services for a specific analytics scenario (see the exact numbers in the columns above). The limits listed here reflect the tested system only. That system deployed one BIG-IQ device with 8 CPUs, 32 GB RAM, and 500 GB of disk space. Statistics were gathered by five BIG-IQ DCDs, each with 8 CPUs, 32 GB RAM, and 500 GB of disk space.
Production environments will likely require additional CPU, memory, and disk space resources (for example, 16 CPUs and 64 GB RAM). You will need to perform sizing tests that reflect your environment, amount of traffic, and so on. There can be multiple limiting factors in a production environment; for example, retaining more than the default (granular) 10 hours of raw data specific to analytics. See the list above for additional limiting factors.
Table note 3
Tested average performance numbers are for specific operations at the specified scale only.
Table note 4
For optimum performance, F5 makes the following maximum round trip latency recommendations:
  • between any two DCD or BIG-IQ devices in a DCD cluster: 75 ms
  • between the BIG-IQ CM and the BIG-IP devices it manages: 250 ms
  • between the managed BIG-IP devices and the DCDs that collect their data: 250 ms
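When automating onboarding at the scale described above, the operational figure of 100 application services per API call suggests batching larger creation jobs. The sketch below shows a generic batching pattern only; the actual endpoint URL and payload shape are not specified here, so `post` is a caller-supplied placeholder you would wire to the BIG-IQ REST API per its documentation.

```python
"""Sketch: create a large number of application services in batches that
respect the tested limit of 100 per API call. The `post` callable is a
placeholder for a real HTTP client call -- consult the BIG-IQ API
reference for the actual endpoint and payload."""
from typing import Callable, Iterator, List

def chunked(items: List[dict], size: int = 100) -> Iterator[List[dict]]:
    """Yield successive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def create_in_batches(app_defs: List[dict],
                      post: Callable[[List[dict]], None]) -> int:
    """Send application definitions via `post(batch)`, one API call per
    batch of up to 100 services. Returns the number of calls made."""
    calls = 0
    for batch in chunked(app_defs):
        post(batch)
        calls += 1
    return calls
```

For example, creating 250 application services this way results in three API calls (100 + 100 + 50), staying within the tested per-call limit while also giving the system time to absorb each batch, in line with the guidance to enable application services gradually.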