Manual Chapter : BIG-IP Solutions Guide v4.5:BIG-IP System Overview

Applies To:


BIG-IP versions 1.x - 4.x

  • 4.6.1, 4.6.0, 4.5 PTF-08, 4.5 PTF-07, 4.5 PTF-06, 4.5 PTF-05, 4.5 PTF-04, 4.5 PTF-03, 4.5 PTF-02, 4.5 PTF-01, 4.5.9, 4.5.0


BIG-IP System Overview


The BIG-IP system is an Internet device used to implement a wide variety of load balancing and other network traffic solutions, including intelligent cache content determination and SSL acceleration. The subsequent chapters in this guide each outline a solution or solutions and provide configuration instructions for those solutions. This overview introduces you to the BIG-IP system, its user interfaces, and the range of solutions possible. This chapter includes these sections and sub-sections:

  • User interface
  • A basic configuration
  • Configuring objects and properties
  • Load balancing modes
  • BIG-IP system and intranets
  • The external VLAN and outbound load balancing
  • Cache control
  • SSL acceleration
  • Content conversion
  • VLANs
  • Link aggregation and failover
  • Configuring BIG-IP redundant pairs
  • Making hidden nodes accessible

User interface

The BIG-IP system user interface consists primarily of the web-based Configuration utility and the bigpipe command line interface. The Configuration utility is served by the BIG-IP unit's internal web server. You can access it through the administrative interface on the BIG-IP system using Netscape Navigator version 4.7, or Microsoft Internet Explorer version 5.0 or 5.5.

Figure 1.1 shows the Configuration utility as it first appears, displaying the top-level (System) screen with your existing load-balancing configuration. The Configuration utility provides an instant overview of your network as it is currently configured.

Figure 1.1 The Configuration utility System screen

The left pane of the screen, referred to as the navigation pane, contains links to screens for the main configuration objects that you create and tailor for your network: Virtual Servers, Nodes, Pools, Rules, NATs, Proxies, Network, Filters, and Monitors. These screens appear in the right pane. The left pane of the screen also contains links to screens for monitoring and system administration (Statistics, Log Files, and System Admin).

A basic configuration

As suggested in the previous section, the System screen shows the objects that are currently configured for the system. These consist of virtual servers, nodes, and a load-balancing pool. What these objects represent is shown in Figure 1.2, a very basic configuration.

Figure 1.2 A basic configuration

In this configuration, the BIG-IP system sits between a router and an array of content servers, and load balances inbound Internet traffic across those servers.

Insertion of the BIG-IP system, with its standard two interfaces, divides the network into an external VLAN and an internal VLAN. (However, both VLANs can be on a single IP network, so inserting the BIG-IP system does not require you to change the IP addressing of the network.) The nodes on the external VLAN are routable. The nodes on the internal VLAN, however, are hidden behind the BIG-IP system. In their place appears a user-defined virtual server. It is this virtual server that receives requests and distributes them among the physical servers, which are now members of a load-balancing pool.

The key to load balancing through a virtual server is address translation, combined with setting the BIG-IP system address as the default route for the servers. The virtual server translates the destination address of each incoming packet to the address of the server it load balances the packet to; that server's address then becomes the source address of the reply packet. Because the BIG-IP system is the servers' default route, the reply packet returns through it, and the BIG-IP system translates the reply's source address back to that of the virtual server.
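The translation sequence described above can be sketched informally as follows. This is a hypothetical illustration only: the addresses and the dictionary-based packet representation are invented for the example and are not BIG-IP internals.

```python
# Illustrative addresses (not from any real configuration).
VIRTUAL_SERVER = "192.168.1.100"
POOL = ["10.1.1.1", "10.1.1.2"]

def translate_inbound(packet, chosen_node):
    """Rewrite the destination from the virtual server to a pool member."""
    assert packet["dst"] == VIRTUAL_SERVER
    return {"src": packet["src"], "dst": chosen_node}

def translate_reply(packet):
    """Rewrite the reply's source address back to the virtual server."""
    return {"src": VIRTUAL_SERVER, "dst": packet["dst"]}

# A client request arrives addressed to the virtual server...
request = {"src": "203.0.113.5", "dst": VIRTUAL_SERVER}
to_node = translate_inbound(request, POOL[0])   # destination becomes 10.1.1.1

# ...and the chosen node's reply passes back through the BIG-IP system,
# which restores the virtual server as the source address.
reply = {"src": POOL[0], "dst": "203.0.113.5"}
to_client = translate_reply(reply)
```

The client only ever sees the virtual server address; the pool member addresses never appear in packets on the external VLAN.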

Configuring objects and object properties

Abstract entities like virtual servers and load balancing pools are called configuration objects, and the options associated with them, like load balancing mode, are called object properties. The basic configuration shown in Figure 1.2 contains three types of objects: node, pool, and virtual server. You can create these objects by clicking the object type in the left pane of the Configuration utility. For example, the pool was created by clicking Pools to open the Pools screen, then clicking the Add (+) button to open the Add Pool screen, shown in Figure 1.3.

Figure 1.3 Add Pool screen

You could configure the same pool from the BIG-IP command line using bigpipe as follows:

b pool my_pool { member <member1> member <member2> member <member3> }

Either configuration method results in the entry shown in Figure 1.4 being placed in the file /config/bigip.conf on the BIG-IP system. You can also edit this file directly using a text editor like vi or pico.

Figure 1.4 Pool definition in bigip.conf

pool my_pool {
   member <member1>
   member <member2>
   member <member3>
}

For a complete description of the configuration objects and properties, refer to the BIG-IP Reference Guide.

Load balancing modes

Load balancing is the distribution of network traffic across servers that are elements in the load balancing pool. The user may select from a range of load balancing methods, or modes. The simplest mode is round robin, in which servers are addressed in a set order, and the next request always goes to the next server in the order. Other load balancing modes include ratio, dynamic ratio, fastest, least connections, observed, and predictive.

  • In ratio mode, connections are distributed based on weight attribute values that represent load capacity.
  • In dynamic ratio mode, ratio weights are not fixed, but are adjusted continuously based on server performance measurements gathered by external monitoring software.
  • In fastest mode, the server with the lowest measured average response time is picked.
  • In least connections mode, the server with the lowest number of existing connections is picked.
  • Observed and predictive modes are combinations of the simpler modes.

For a complete description of the load balancing modes, refer to the BIG-IP Reference Guide, Chapter 4, Pools.

BIG-IP system and intranets

Discussion of previous configurations has been limited to load balancing incoming traffic to the internal VLAN. The BIG-IP system can also load balance outbound traffic across routers or firewalls on the external VLAN. This creates the intranet configuration shown in Figure 1.5, which load balances traffic from intranet clients to local servers, to a local cache, or to the Internet.

Figure 1.5 A basic intranet configuration

This solution utilizes two wildcard virtual servers: Wildcard Virtual Server1, which is HTTP port specific, and Wildcard Virtual Server2, which is not port specific. In this solution, all HTTP requests to addresses not on the intranet are directed to the cache server, which serves the resources if they are cached, and otherwise retrieves them directly from the Internet. All non-HTTP requests to addresses not on the intranet are directed to the Internet.

For detailed information on this solution, refer to Chapter 7, A Simple Intranet Configuration.

Bidirectional load balancing

The intranet configuration shown in Figure 1.5 would typically be a part of a larger configuration supporting inbound and outbound traffic.

Figure 1.6 shows traffic being load balanced bidirectionally across three firewalls.

Figure 1.6 Load balancing firewalls

This configuration requires two BIG-IP units (or BIG-IP redundant pairs), and the creation of three load balancing pools with corresponding virtual servers. A virtual server on the inside BIG-IP system (BIG-IP1 in Figure 1.6) load balances incoming requests across the enterprise servers. Another virtual server on the outside BIG-IP system (BIG-IP2 in Figure 1.6) load balances incoming requests across the external interfaces of the firewalls. A third virtual server on the inside BIG-IP system load balances outbound requests across the internal interfaces of the firewalls.

For detailed information on this solution, refer to Chapter 12, Balancing Two-Way Traffic Across Firewalls.

Cache control

Using cache control features, you can create rules to distribute content among three server pools: an origin server pool, a cache pool for cachable content, and a hot pool for popular cachable content. The origin pool members contain all content. The cache pool members contain content that is considered cachable (for example, all HTTP and GIF content). The hot pool members contain cachable content that is considered hot, that is, frequently accessed, as determined by a threshold you set. Once identified, hot content is distributed and load balanced across the pool to maximize processing power when it is hot, and localized to the individual caches when it is cool (less frequently accessed).
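The three-tier distribution described above can be sketched informally. The cachability test, the threshold value, and the pool names are invented for the example; in practice these decisions are expressed in the cache rule you configure.

```python
from collections import Counter

HOT_THRESHOLD = 3   # illustrative; the real threshold is set in your cache rule

hits = Counter()    # per-URI request counts

def choose_pool(uri):
    """Route a request to the origin, cache, or hot pool: non-cachable
    content goes to origin, cachable content to the cache pool, and
    frequently requested cachable content to the hot pool."""
    cachable = uri.endswith(".gif") or uri.endswith(".html")
    if not cachable:
        return "origin_pool"
    hits[uri] += 1
    return "hot_pool" if hits[uri] >= HOT_THRESHOLD else "cache_pool"

print(choose_pool("/cgi-bin/search"))   # origin_pool (not cachable)
print(choose_pool("/logo.gif"))         # cache_pool (1st request)
choose_pool("/logo.gif")                # 2nd request
print(choose_pool("/logo.gif"))         # hot_pool (3rd request crosses threshold)
```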

A special cache feature is destination address affinity (also called sticky persistence). This feature directs requests for a certain destination to the same proxy server, regardless of which client the request comes from. This saves the other proxies from having to duplicate the web page in their caches, which wastes memory.
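Informally, destination address affinity amounts to a deterministic mapping from destination address to proxy. The hash choice and proxy names below are assumptions made for the illustration, not the actual persistence mechanism.

```python
import hashlib

PROXIES = ["cache1", "cache2", "cache3"]   # hypothetical proxy pool

def pick_proxy(dest_addr):
    # Hash only the destination address, so the chosen proxy is
    # independent of which client sent the request.
    digest = hashlib.md5(dest_addr.encode()).digest()
    return PROXIES[digest[0] % len(PROXIES)]

# Requests from any client for the same destination reach the same proxy,
# so each page is cached in only one place.
assert pick_proxy("198.51.100.7") == pick_proxy("198.51.100.7")
```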

For detailed information about cache rules, refer to the BIG-IP Reference Guide, Chapter 5, iRules, and to Chapters 13, 14, and 15 of this guide.

SSL acceleration

SSL acceleration uses special software with an accelerator card to speed the encryption and decryption of encoded content. This greatly speeds the flow of HTTPS traffic without affecting the flow of non-HTTPS traffic. In addition, using add-on BIG-IP e-Commerce Controllers, it is possible to create a scalable configuration that can grow with your network.

For detailed information about SSL acceleration, refer to Chapter 11, Configuring an SSL Accelerator.

Content conversion

Content conversion is the on-the-fly switching of URLs to ARLs (Akamai Resource Locators) for web resources that are stored geographically nearby on the Akamai FreeFlow Network™. This greatly speeds the download of large, slow-to-load graphics and other types of objects.

For detailed information about content conversion, refer to Chapter 16, Configuring a Content Converter.


VLANs

The internal and external VLANs created on the BIG-IP system are, by default, the separate port-specified VLANs external and internal, with the BIG-IP system functioning as an L2 switch. In conformance with IEEE 802.1Q, the BIG-IP system supports both port-specified VLANs and tagged VLANs. This adds the efficiency and flexibility of VLAN segmentation to traffic handling between the networks. For example, with VLANs it is no longer necessary to change any IP addresses after inserting a BIG-IP system into a single network.

VLAN capability also supports multi-site hosting, and allows the BIG-IP system to fit into and extend a pre-existing VLAN segmentation, or to serve as a VLAN switch in creating a VLAN segmentation for the wider network.

For detailed information on VLANs, refer to VLANs in the BIG-IP Reference Guide, Chapter 2, Using the Setup Utility.

Link aggregation and link failover

You can aggregate links (individual physical interfaces) on the BIG-IP system by software means to form a trunk (an aggregation of links). Link aggregation increases the bandwidth of the individual links additively; thus, four Fast Ethernet links, if aggregated, create a single 400 Mb/s link. Link aggregation is highly useful with asymmetric loads. Another advantage of link aggregation is link failover: if one link in a trunk goes down, traffic is simply redistributed over the remaining links. Link aggregation conforms to the IEEE 802.3ad standard.
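The failover behavior can be sketched as a simple rehash over the surviving links. The flow identifiers, link names, and CRC-based hash below are assumptions made for the illustration, not the actual 802.3ad frame distribution algorithm.

```python
import zlib

def distribute(flows, links):
    # Hash each flow identifier onto one of the available links.
    return {f: links[zlib.crc32(f.encode()) % len(links)] for f in flows}

trunk = ["eth1", "eth2", "eth3", "eth4"]   # 4 x 100 Mb/s = 400 Mb/s trunk
flows = ["flowA", "flowB", "flowC", "flowD"]

before = distribute(flows, trunk)
# If eth2 fails, rehashing over the survivors redistributes its traffic.
after = distribute(flows, [l for l in trunk if l != "eth2"])
assert all(link != "eth2" for link in after.values())
```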

Configuring a BIG-IP redundant system

You can configure two BIG-IP units as a redundant system, with one unit active and the other in standby mode. This is convenient because once one unit has been configured, the configuration can be copied automatically to the other unit, a process called configuration synchronization. Once you synchronize the systems, a failure detection system determines whether the active unit has failed, and automatically redirects traffic to the standby unit. This process is called failover.

A special feature of redundant pairs is optional state mirroring. When you use the state mirroring feature, the standby BIG-IP system maintains the same state information as the active BIG-IP unit. Transactions such as FTP file transfers continue uninterrupted if the standby BIG-IP unit becomes active.

For detailed information about configuring redundant pairs, refer to the BIG-IP Reference Guide, Chapter 13, Configuring a Redundant System.

Making hidden nodes accessible

To perform load balancing, the BIG-IP system hides physical servers behind a virtual server. This prevents them from receiving direct administrative connections or from initiating requests as clients (for example, to download software upgrades). There are two basic methods for making nodes on the internal VLAN routable to the outside world: address translation and forwarding.

Address translation

Address translation consists of providing a routable alias that a node can use as its source address when acting as a client. There are two types of address translation: NAT (Network Address Translation) and SNAT (Secure Network Address Translation). A NAT is assigned to a single node and can be used for both outbound and inbound connections. A SNAT can be shared by multiple nodes, but permits only outbound connections, hence its greater security.
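The NAT/SNAT contrast above can be sketched informally. The addresses and the table layout below are hypothetical, invented for the example, and are not BIG-IP internals.

```python
nat_table = {"10.1.1.1": "192.168.1.51"}   # NAT: one routable alias per node

snat_address = "192.168.1.60"              # SNAT: one alias shared by many nodes
snat_nodes = {"10.1.1.2", "10.1.1.3"}

def outbound_source(node):
    """Both NATs and SNATs translate an internal node's source address
    when the node initiates an outbound connection."""
    if node in nat_table:
        return nat_table[node]
    if node in snat_nodes:
        return snat_address
    return node   # no translation configured

def inbound_allowed(alias):
    """Only a NAT accepts inbound connections; a SNAT is outbound-only,
    which is the source of its greater security."""
    return alias in nat_table.values()

assert outbound_source("10.1.1.2") == outbound_source("10.1.1.3")
assert inbound_allowed("192.168.1.51") and not inbound_allowed("192.168.1.60")
```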

For detailed information about address translation, refer to the BIG-IP Reference Guide, Chapter 10, SNATs, NATs and IP Forwarding.


Forwarding

Forwarding is the simple exposure of a node's IP address to the BIG-IP unit's external VLAN, so that clients can use it as a standard routable address. There are two types of forwarding: IP forwarding and the forwarding virtual server. IP forwarding exposes all nodes and all ports on the internal VLAN. You can use the IP filter feature to implement a layer of security.

A forwarding virtual server is like IP forwarding, but exposes only selected servers and/or ports.