Manual Chapter: BIG-IQ Application Service Sizing Guidelines
Applies to: BIG-IQ Centralized Management 7.1.0
BIG-IQ application service sizing overview
This document describes the official, supported scale envelope for BIG-IQ v7.1.x with respect to application services. Because customer configurations vary widely, F5 cannot test every permutation that analytics and statistics gathering can produce; each customer environment is unique. The following content provides guidance on certain maximums as well as on averages, and serves as a baseline to help BIG-IP and BIG-IQ users size their environments correctly. The values provided for BIG-IQ Virtual Edition depend heavily on infrastructure beyond F5's control, for example host CPU speed, host memory, networking, storage for virtual machines, dedicated CPU allocation for virtual machines, and the load on the infrastructure.
BIG-IQ configuration guidance for scaled environments
For large, scaled environments, F5 recommends the following environment/topology:
- 2 BIG-IQ console machines configured in an HA pair, each a BIG-IQ virtual machine with at least 16 CPUs, 64 GB RAM, and 500 GB disk space.
- 5 or more (up to 20) BIG-IQ Data Collection Devices (DCDs), each a BIG-IQ virtual machine with at least 8 CPUs, 32 GB RAM, and 500 GB disk space.
The BIG-IQ data collection devices are configured to record analytics statistics for the discovered BIG-IP devices. Network latency between any two BIG-IQ or BIG-IQ DCD nodes must not exceed 75 ms.
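The 75 ms constraint above can be checked mechanically. The sketch below is illustrative only, not an F5 tool; the node names and measured values are hypothetical, and the latencies would come from an external tool such as ping:

```python
# Hypothetical sketch: flag node pairs whose measured round-trip latency
# exceeds the 75 ms guideline from this chapter. Latency measurements are
# passed in directly (e.g. collected with ping beforehand).

MAX_LATENCY_MS = 75.0  # guideline from this chapter

def latency_violations(measured_ms):
    """Return the (node_pair, latency) entries that exceed the guideline.

    measured_ms: dict mapping a (node_a, node_b) pair to observed
    round-trip latency in milliseconds.
    """
    return {pair: ms for pair, ms in measured_ms.items() if ms > MAX_LATENCY_MS}

# Example with made-up measurements:
samples = {
    ("bigiq-1", "dcd-1"): 12.4,
    ("bigiq-1", "dcd-2"): 80.2,  # exceeds the guideline
}
print(latency_violations(samples))
# → {('bigiq-1', 'dcd-2'): 80.2}
```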
For more information about sizing for specific environments, refer to the related links below. You can also use the F5 Networks BIG-IQ DCD Sizing Tools, available on downloads.f5.com.
Scale guidance for BIG-IQ-specific objects
Object | Limit
---|---
Total number of BIG-IP ISO images (full releases and hotfixes) | Limited only by local storage availability (disk space/shared partition size)
Total maximum number of local BIG-IQ users (see table note 1) | 100 per Centralized Management cluster
Total maximum number of local BIG-IQ user groups | 25 per Centralized Management cluster
Total maximum number of BIG-IQ DCDs | 20 per BIG-IQ deployment
- Table note 1
- Local users are not the same as remote users; remote users authenticate through an external service such as an LDAP or RADIUS server.
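As a rough illustration (not an F5 tool), planned object counts could be compared against the per-cluster maximums in the table above. The key names and structure below are hypothetical:

```python
# Hypothetical sketch: check planned object counts against the documented
# per-cluster/per-deployment maximums from the table above.

LIMITS = {
    "local_users": 100,       # per Centralized Management cluster
    "local_user_groups": 25,  # per Centralized Management cluster
    "dcds": 20,               # per BIG-IQ deployment
}

def over_limit(planned):
    """Return the object types whose planned count exceeds the documented maximum."""
    return [name for name, count in planned.items()
            if name in LIMITS and count > LIMITS[name]]

print(over_limit({"local_users": 120, "local_user_groups": 10, "dcds": 20}))
# → ['local_users']
```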
Scale guidance for BIG-IQ managed applications
F5 cannot test every permutation that analytics and statistics gathering can produce; each customer environment is unique. The following is a sample configuration tested in the F5 lab, and numbers can vary widely from one environment to another. Because running additional application services requires more resources, turn on application services gradually. These figures are not hard limits; they were chosen only as a representative configuration for this particular test, and your scale limits may vary with your own configuration. Review all documentation (including the Legacy Applications Analytics article linked at the bottom of this page) and work with your F5 representative before enabling and using any system in a production environment. The most important limiting factors that can appear in any deployment, and that need to be accounted for, are:
- Retaining more than the default 10 hours of raw analytics data may impact performance. (This is the 10-hour granular drill-down window, not the overall graph availability.) More resources (CPU, memory, disk space) are consumed when you retain more raw data, so increase retention gradually. Refer to the Configuring Statistics Collection link at the bottom of this page.
- The number of data collection devices (DCDs) in a specific environment. (Maximum tested is 20).
- The amount of traffic coming from the BIG-IP devices to the DCDs.
- The storage size of each DCD.
- The type of events that are ingested (for example, WAF events can have a higher impact than analytics events).
Type: Application Services (see table note 1)
Number of applications tested: 500 (see table note 2)
Tested limits (representative configuration):
- 40 BIG-IP devices/clusters per BIG-IQ device (across all applications)
- 3,000 ADC virtual servers per BIG-IQ device (across all applications)
- 3,000 ADC pools per BIG-IQ device (across all applications)
- 43,000 pool members per BIG-IQ device (across all applications)
- 12 applications per single BIG-IP
- 72 virtual servers per single BIG-IP
- 72 pools per single BIG-IP
- 1,100 pool members per single BIG-IP
- 6 virtual servers per single application
- 6 pools per single application
- 15 pool members per single application
Operational (see table note 3):
- Applications should be spread evenly across the BIG-IP devices/clusters
- 200 virtual servers in a single refresh operation
- 13.7 Kbps of write traffic to ElasticSearch
- 100 application services created via API in a single call
- 75 ms latency between any one managed device and BIG-IQ
- 75 ms latency between any one managed device and a DCD
- Table note 1
- Application Services can be newly created (greenfield) applications or legacy applications.
- Table note 2
- Tested with 500 application services for a specific analytics scenario (see the exact numbers above). The limits listed here reflect the tested system only: one BIG-IQ device with 8 CPUs, 32 GB RAM, and 500 GB of disk space, with statistics gathered by five BIG-IQ DCDs, each with 8 CPUs, 32 GB RAM, and 500 GB of disk space. Production environments will likely require additional CPU, memory, and disk space (for example, 16 CPUs and 64 GB RAM). You will need to perform sizing tests that reflect your environment, amount of traffic, and so on. There can be multiple limiting factors in a production environment, for example retaining more than the default (granular) 10 hours of raw analytics data. See the list above for additional limiting factors.
- Table note 3
- Tested average performance numbers are for specific operations at the specified scale only.
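Because the per-BIG-IP figures above describe a tested configuration rather than hard limits, a plan that goes beyond them simply calls for independent sizing tests. A hedged, illustrative sketch (key names are hypothetical, values are the tested figures above):

```python
# Hypothetical sketch: compare planned per-BIG-IP object counts against the
# representative configuration tested in the F5 lab. Exceeding these values
# does not violate a hard limit; it signals that sizing tests are advisable.

TESTED_PER_BIGIP = {
    "applications": 12,
    "virtual_servers": 72,
    "pools": 72,
    "pool_members": 1100,
}

def exceeds_tested(planned):
    """Return the object types where the plan goes beyond the tested configuration."""
    return sorted(k for k, v in planned.items()
                  if k in TESTED_PER_BIGIP and v > TESTED_PER_BIGIP[k])

print(exceeds_tested({"applications": 10, "virtual_servers": 90, "pool_members": 1500}))
# → ['pool_members', 'virtual_servers']
```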