Applies To: BIG-IP LTM versions 11.5.1 through 11.5.10

Virtual Servers
Introduction to virtual servers
Virtual servers and virtual addresses are two of the most important components of any BIG-IP® Local Traffic Manager™ configuration:
- A virtual server is a traffic-management object on the BIG-IP system that is represented by an IP address and a service. Clients on an external network can send application traffic to a virtual server, which then directs the traffic according to your configuration instructions. Virtual servers typically direct traffic to a pool of servers on an internal network, by translating the destination IP address in each packet to a pool member address. Overall, virtual servers increase the availability of resources for processing client requests.
- A virtual address is the IP address component of a virtual server. For example, if a virtual server’s destination IP address and service are 10.10.10.2:80, then the IP address 10.10.10.2 is a virtual address. You do not explicitly create virtual addresses; instead, the BIG-IP system creates a virtual address when you create a virtual server and specify the destination IP address.
You can create a many-to-one relationship between virtual servers and a virtual address. For example, you can create the three virtual servers 10.10.10.2:80, 10.10.10.2:443, and 10.10.10.2:161 for the same virtual address, 10.10.10.2.
You can enable and disable a virtual address. When you disable a virtual address, none of the virtual servers associated with that address can receive incoming network traffic.
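For illustration, the following minimal tmsh sketch shows this many-to-one relationship and the effect of disabling the virtual address. The virtual server names are hypothetical; the address matches the example above.

```
# Two virtual servers that share the virtual address 10.10.10.2
create ltm virtual vs_http_example destination 10.10.10.2:80 ip-protocol tcp
create ltm virtual vs_https_example destination 10.10.10.2:443 ip-protocol tcp

# The system created the virtual address 10.10.10.2 automatically when the
# first virtual server was created; disabling it stops traffic to both
# virtual servers above.
modify ltm virtual-address 10.10.10.2 enabled no
```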
About virtual server settings
A virtual server has several settings that you can configure to affect the way that a virtual server manages traffic. You can also assign certain resources to a virtual server, such as a load balancing pool and various policies. Together, these properties, settings, and resources represent the definition of a virtual server, and most have default values. When you create a virtual server, you can either retain the default values or adjust them to suit your needs.
If you have created a virtual server that is a standard type of virtual server, one of the resources you typically assign to the virtual server is a default pool. A default pool is the server pool to which Local Traffic Manager™ sends traffic if no iRule or policy exists that specifies a different pool. Note that if you plan on using an iRule or policy to direct traffic to a pool, you must assign the iRule or policy as a resource to the virtual server.
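As a hedged sketch of this configuration (the pool name, member addresses, and virtual server name are assumptions, not part of any existing configuration), you might create a pool and assign it as the default pool of a standard virtual server like this:

```
# Create a load balancing pool of two web servers (hypothetical members)
create ltm pool web_pool_example members add { 192.168.1.10:80 192.168.1.11:80 } monitor http

# Create a standard virtual server and assign the pool as its default pool
create ltm virtual vs_web_example destination 10.10.20.2:80 ip-protocol tcp profiles add { tcp http } pool web_pool_example
```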
Types of virtual servers
There are several different types of virtual servers that you can create.
Type | Description |
---|---|
Standard | A Standard virtual server (also known as a load balancing virtual server) directs client traffic to a load balancing pool and is the most basic type of virtual server. When you first create the virtual server, you assign an existing default pool to it. From then on, the virtual server automatically directs traffic to that default pool. |
Forwarding (Layer 2) | You can set up a Forwarding (Layer 2) virtual server to share the same IP address as a node in an associated VLAN. To do this, you must perform some additional configuration tasks. These tasks consist of: creating a VLAN group that includes the VLAN in which the node resides, assigning a self-IP address to the VLAN group, and disabling the virtual server on the relevant VLAN. |
Forwarding (IP) | A Forwarding (IP) virtual server is just like other virtual servers, except that a forwarding virtual server has no pool members to load balance. The virtual server simply forwards the packet directly to the destination IP address specified in the client request. When you use a forwarding virtual server to direct a request to its originally-specified destination IP address, Local Traffic Manager adds, tracks, and reaps these connections just as with other virtual servers. You can also view statistics for forwarding virtual servers. |
Performance (HTTP) | A Performance (HTTP) virtual server is a virtual server with which you associate a Fast HTTP profile. Together, the virtual server and profile increase the speed at which the virtual server processes HTTP requests. |
Performance (Layer 4) | A Performance (Layer 4) virtual server is a virtual server with which you associate a Fast L4 profile. Together, the virtual server and profile increase the speed at which the virtual server processes Layer 4 requests. |
Stateless | A stateless virtual server prevents the BIG-IP system from putting connections into the connection table for wildcard and forwarding destination IP addresses. When creating a stateless virtual server, you cannot configure SNAT automap, iRules, or port translation, and you must configure a default load balancing pool. Note that this type of virtual server applies to UDP traffic only. |
Reject | A Reject virtual server specifies that the BIG-IP system rejects any traffic destined for the virtual server IP address. |
DHCP Relay | A DHCP Relay virtual server relays Dynamic Host Configuration Protocol (DHCP) messages between clients and servers residing on different IP networks. Known as a DHCP relay agent, a BIG-IP system with a DHCP Relay type of virtual server listens for DHCP client messages being broadcast on the subnet and then relays those messages to the DHCP server. The DHCP server then uses the BIG-IP system to send the responses back to the DHCP client. Configuring a DHCP Relay virtual server on the BIG-IP system relieves you of the tasks of installing and running a separate DHCP server on each subnet. |
Internal | An internal virtual server is one that can send traffic to an intermediary server for specialized processing before the standard virtual server sends the traffic to its final destination. For example, if you want the BIG-IP system to perform content adaptation on HTTP requests or responses, you can create an internal virtual server that load balances those requests or responses to a pool of ICAP servers before sending the traffic back to the standard virtual server. |
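As one concrete example from the table, a hedged tmsh sketch of a Forwarding (IP) virtual server might look like the following. The virtual server name and the VLAN name internal are assumptions; adjust them to your own configuration.

```
# Forwarding (IP) virtual server: no pool; packets are forwarded to their
# original destination address (wildcard destination, all services)
create ltm virtual vs_ip_forward_example destination 0.0.0.0:0 mask any ip-forward profiles add { fastL4 } vlans add { internal } vlans-enabled
```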
About source and destination addresses
There are two distinct types of virtual servers that you can create: virtual servers that listen for a host destination address and virtual servers that listen for a network destination address. For both types of virtual servers, you can also specify a source IP address.
About source addresses
When configuring a virtual server, you can specify an IP address or network from which the virtual server will accept traffic. For this setting to function properly, you must specify a value other than 0.0.0.0/0 or ::/0 (that is, any/0, any6/0). To maximize the utility of this setting, specify the most specific address prefix that spans all customer addresses and no others.
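For example, the following hedged tmsh fragment (the virtual server name, addresses, and pool are hypothetical) restricts a virtual server to accepting traffic only from clients in 10.20.0.0/16:

```
# Accept traffic only from source addresses in 10.20.0.0/16
create ltm virtual vs_restricted_example destination 10.10.30.2:80 source 10.20.0.0/16 ip-protocol tcp pool web_pool_example
```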
About host destination addresses
A host virtual server represents a specific site, such as an Internet web site or an FTP site, and the virtual server load balances traffic targeted to content servers that are members of a pool. A host virtual server provides a level of security, similar to an access control list (ACL), because its destination address includes a port specification, causing the virtual server to accept only traffic destined for that port.
The IP address that you assign to a host virtual server should match the IP address that Domain Name System (DNS) associates with the site’s domain name. When the BIG-IP® system receives a connection request for that site, Local Traffic Manager™ recognizes that the client’s destination IP address matches the IP address of the virtual server, and subsequently forwards the client request to one of the content servers that the virtual server load balances.
About network destination addresses
A network virtual server is a virtual server whose IP address has no bits set in the host portion of the IP address (that is, the host portion of its IP address is 0). There are two kinds of network virtual servers: those that direct client traffic based on a range of destination IP addresses, and those that direct client traffic based on specific destination IP addresses that the BIG-IP system does not recognize. A network virtual server provides a level of security because its destination network address includes a port specification, causing the virtual server to accept only traffic destined for that port on the specified network.
When you have a range of destination IP addresses
With an IP address whose host bits are set to 0, a virtual server can direct client connections that are destined for an entire range of IP addresses, rather than for a single destination IP address (as is the case for a host virtual server). Thus, when any client connection targets a destination IP address that is in the network specified by the virtual server IP address, Local Traffic Manager (LTM®) can direct that connection to one or more pools associated with the network virtual server.
For example, the virtual server can direct client traffic that is destined for any of the nodes on the 192.168.1.0 network to a specific load balancing pool such as ingress-firewalls. Or, a virtual server could direct a web connection destined for any address within the subnet 192.168.1.0/24 to the pool default_webservers.
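A hedged tmsh sketch of the first example above might look like this; the virtual server name is hypothetical, and the pool ingress-firewalls is assumed to already exist:

```
# Network virtual server: directs traffic destined for any address in
# 192.168.1.0/24, on any service port, to the pool ingress-firewalls
create ltm virtual vs_network_example destination 192.168.1.0:0 mask 255.255.255.0 profiles add { fastL4 } pool ingress-firewalls
```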
When you have transparent devices (wildcard virtual servers)
Besides directing client connections that are destined for a specific network or subnet, a network virtual server can also direct client connections that have a specific destination IP address that the virtual server does not recognize, such as a transparent device. This type of network virtual server is known as a wildcard virtual server.
Wildcard virtual servers are a special type of network virtual server designed to manage network traffic that is targeted to transparent network devices. Examples of transparent devices are firewalls, routers, proxy servers, and cache servers. A wildcard virtual server manages network traffic that has a destination IP address unknown to the BIG-IP system.
Unrecognized client IP addresses
A host type of virtual server typically manages traffic for a specific site. When receiving a connection request for that site, Local Traffic Manager forwards the client request to one of the content servers that the virtual server load balances.
However, when load balancing transparent nodes, the BIG-IP system might not recognize a client’s destination IP address. The client might be connecting to an IP address on the other side of the firewall, router, or proxy server. In this situation, Local Traffic Manager cannot match the client’s destination IP address to a virtual server IP address.
Wildcard network virtual servers solve this problem by not translating the incoming IP address at the virtual server level on the BIG-IP system. For example, when Local Traffic Manager does not find a specific virtual server match for a client’s destination IP address, LTM matches the client’s destination IP address to a wildcard virtual server, designated by an IP address of 0.0.0.0. Local Traffic Manager then forwards the client’s packet to one of the firewalls or routers that the wildcard virtual server load balances, which in turn forwards the client’s packet to the actual destination IP address.
Default and port-specific wildcard servers
There are two kinds of wildcard virtual servers that you can create:
- Default wildcard virtual servers
- A default wildcard virtual server is a wildcard virtual server that uses port 0 and handles traffic for all services. A wildcard virtual server is enabled for all VLANs by default. However, you can specifically disable any VLANs that you do not want the default wildcard virtual server to support. Disabling VLANs for the default wildcard virtual server is done by creating a VLAN disabled list. Note that a VLAN disabled list applies to default wildcard virtual servers only. You cannot create a VLAN disabled list for a wildcard virtual server that is associated with one VLAN only.
- Port-specific wildcard virtual servers
- A port-specific wildcard virtual server handles traffic only for a particular service, and you define it using a service name or a port number. You can use port-specific wildcard virtual servers for tracking statistics for a particular type of network traffic, or for routing outgoing traffic, such as HTTP traffic, directly to a cache server rather than a firewall or router.
If you use both a default wildcard virtual server and port-specific wildcard virtual servers, any traffic that does not match either a standard virtual server or one of the port-specific wildcard virtual servers is handled by the default wildcard virtual server.
We recommend that when you define transparent nodes that need to handle more than one type of service, such as a firewall or a router, you specify an actual port for the node and turn off port translation for the virtual server.
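For example, a hedged tmsh sketch of a default wildcard virtual server and a port-specific (HTTP) wildcard virtual server might look like the following. The virtual server and pool names are assumptions, and the pools are assumed to already exist; port translation is disabled so the original destination port is preserved.

```
# Default wildcard virtual server: port 0 handles all services
create ltm virtual vs_wildcard_default destination 0.0.0.0:0 mask any profiles add { fastL4 } pool firewall_pool_example translate-address disabled translate-port disabled

# Port-specific wildcard virtual server: handles only HTTP (port 80)
create ltm virtual vs_wildcard_http destination 0.0.0.0:80 mask any profiles add { fastL4 } pool cache_pool_example translate-address disabled translate-port disabled
```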
Multiple wildcard servers
You can define multiple wildcard virtual servers that run simultaneously. Each wildcard virtual server must be assigned to an individual VLAN, and therefore can handle packets for that VLAN only.
In some configurations, you need to set up a wildcard virtual server on one side of the BIG-IP system to load balance connections across transparent devices. You can then create another wildcard virtual server on the other side of the BIG-IP system that receives the connections coming from the transparent devices and forwards the packets to their destination.
About route domain IDs
Whenever you configure the Source and Destination settings on a virtual server, the BIG-IP system requires that the route domain IDs match, if route domain IDs are specified. To ensure that this requirement is met, the BIG-IP system enforces specific rules, which vary depending on whether you are modifying an existing virtual server or creating a new virtual server.
When you modify an existing virtual server, the following rules apply:

User action | Result |
---|---|
In the destination address, you change an existing route domain ID. | The system automatically changes the route domain ID on the source address to match the new destination route domain ID. |
In the source address, you change an existing route domain ID. | If the new route domain ID does not match the route domain ID in the destination address, the system displays an error message stating that the two route domain IDs must match. |
When you create a new virtual server, the following rules apply:

User action | Result |
---|---|
You specify a destination IP address only, with a route domain ID, and do not specify a source IP address. | The source IP address defaults to 0.0.0.0 and inherits the route domain ID from the destination IP address. |
You specify both source and destination addresses but no route domain IDs. | The BIG-IP system uses the default route domain. |
You specify both source and destination addresses and a route domain ID on each of the IP addresses. | The BIG-IP system verifies that both route domain IDs match. Otherwise, the system displays an error message. |
You specify both source and destination addresses and a route domain ID on one of the addresses, but exclude an ID from the other address. | The system verifies that the specified route domain ID matches the ID of the default route domain. Specifically, when one address lacks an ID, the only valid configuration is one in which the ID specified on the other address is the ID of a default route domain. Otherwise, the system displays an error message. |
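In tmsh, a route domain ID is appended to an address with the % character. The following hedged fragment (the virtual server name and addresses are hypothetical, and route domain 1 is assumed to exist) specifies matching route domain IDs on both the source and destination addresses, satisfying the rule described above:

```
# Both addresses carry route domain ID 1, so the IDs match
create ltm virtual vs_rd_example destination 10.10.40.2%1:80 source 10.20.0.0%1/16 ip-protocol tcp
```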
About destination service ports
Status notification to virtual addresses
You can configure a virtual server so that the status of the virtual server contributes to the associated virtual address status. When disabled, the status of the virtual server does not contribute to the associated virtual address status. This status, in turn, affects the behavior of the system when you enable route advertisement of virtual addresses.
About profiles for traffic types
Not only do virtual servers distribute traffic across multiple servers, they also treat varying types of traffic differently, depending on your traffic-management needs. For example, a virtual server can enable compression on HTTP request data as it passes through the BIG-IP system, or decrypt and re-encrypt SSL connections and verify SSL certificates. For each type of traffic destined for a specific virtual server, the virtual server can apply an entire group of settings (known as a profile) to affect the way that the BIG-IP system manages that traffic type.
In addition to compression and SSL profiles, you can configure a virtual server to apply profiles such as TCP, UDP, SPDY, SIP, FTP, and many more.
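For example, the following hedged tmsh fragment attaches protocol, HTTP, compression, and client-side SSL profiles to the example virtual server from earlier in this guide (the virtual server name is an assumption; the profile names shown are the system-supplied defaults):

```
# Attach TCP, HTTP, HTTP compression, and client-side SSL profiles
modify ltm virtual vs_web_example profiles add { tcp http httpcompression clientssl }
```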
About VLAN and tunnel assignment
When you configure a virtual server, you can specify one or more VLANs, tunnels, or both, using the VLAN and Tunnel Traffic and VLANs and Tunnels settings. Configuring this feature specifies the VLANs or tunnels from which the virtual server will accept traffic. In a common configuration, the VLANs and tunnels selected reside on the external network.
About source address translation (SNATs)
When the default route on the servers does not route responses back through the BIG-IP system, you can create a secure network address translation (SNAT). A SNAT ensures that server responses always return through the BIG-IP® system. You can also use a SNAT to hide the source addresses of server-initiated requests from external devices.
For inbound connections from a client, a SNAT translates the source IP address within packets to a BIG-IP system IP address that you or the BIG-IP system defines. The destination node then uses that new source address as its destination address when responding to the request.
For outbound connections, SNATs ensure that the internal IP address of the server node remains hidden to an external host when the server initiates a connection to that host.
If you want the system to choose a SNAT translation address for you, you can select the Auto Map feature. If you prefer to define your own address, you can create a SNAT pool and assign it to the virtual server.
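Both options are shown in the hedged sketch below; the virtual server name, SNAT pool name, and translation address are assumptions.

```
# Option 1: let the system choose a translation address (SNAT automap)
modify ltm virtual vs_web_example source-address-translation { type automap }

# Option 2: define your own translation address in a SNAT pool and assign it
create ltm snatpool snat_pool_example members add { 10.10.10.50 }
modify ltm virtual vs_web_example source-address-translation { type snat pool snat_pool_example }
```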
About bandwidth control
You can specify an existing static bandwidth control policy for the system to use to enforce a throughput policy for incoming network traffic. A static bandwidth control policy controls the aggregate rate for a group of applications or a network path. The bandwidth control policy enforces the total amount of bandwidth that can be used, specified as the maximum rate of the resource you are managing. The rate can be the total bandwidth of the BIG-IP® device, or it might be a group of traffic flows.
About traffic classes
When you create or modify a virtual server, you can assign one or more existing traffic classes to the virtual server. A traffic class allows you to classify traffic according to a set of criteria that you define, such as source and destination IP addresses. Traffic classes define not only classification criteria, but also a classification ID. Once you have defined the traffic class and assigned the class to a virtual server, the BIG-IP system associates the classification ID to each traffic flow. In this way, the BIG-IP system can regulate the flow of traffic based on that classification.
When attempting to match traffic flows to a traffic class, the BIG-IP system uses the most specific match possible.
About connection and rate limits
A virtual server, pool member, or node can prevent an excessive number of connection requests, such as during a Denial of Service (DoS) attack or during a high-demand shopping event. To ensure the availability of a virtual server, pool member, or node, you can use the BIG-IP® Local Traffic Manager™ to manage the total number of connections and the rate at which connections are made.
When you specify a connection limit, the system prevents the total number of concurrent connections to the virtual server, pool member, or node from exceeding the specified number.
When you specify a connection rate limit, the system controls the number of allowed new connections per second, thus providing a manageable increase in connections without compromising availability.
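For example, a hedged tmsh fragment (the virtual server name is an assumption and the limit values are arbitrary) that caps a virtual server at 5000 concurrent connections and 200 new connections per second might look like this:

```
# Cap concurrent connections and the rate of new connections per second
modify ltm virtual vs_web_example connection-limit 5000 rate-limit 200
```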
About connection and persistence mirroring
BIG-IP® system redundancy includes the ability for a device to mirror connection and persistence information to another device, to prevent interruption in service during failover. The BIG-IP system mirrors connection and persistence data over TCP port 1028 with every packet or flow state update.
Connection mirroring operates at the traffic group level. That is, each device in a device group has a specific mirroring peer device for each traffic group. The mirroring peer device is the traffic group's next-active device.
For example, if device Bigip_A is active for traffic group traffic-group-1, and the next-active device for that traffic group is Bigip_C, then the traffic group on the active device mirrors its in-process connections to traffic-group-1 on Bigip_C.
If Bigip_A becomes unavailable and failover occurs, traffic-group-1 on Bigip_C becomes active and continues the processing of any current connections.
About destination address and port translation
When you enable address translation on a virtual server, the BIG-IP system translates the destination address of the virtual server to the node address of a pool member. When you disable address translation, the system uses the virtual server destination address without translation. The default is enabled.
When you enable port translation on a virtual server, the BIG-IP system translates the port of the virtual server. When you disable port translation, the system uses the port without translation. Turning off port translation for a virtual server is useful if you want to use the virtual server to load balance connections to any service. The default is enabled.
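For example, the following hedged fragment disables both destination address and port translation on the wildcard virtual server sketched earlier (the name is an assumption), so that packets retain their original destination address and port:

```
# Pass the original destination address and port through unchanged
modify ltm virtual vs_wildcard_default translate-address disabled translate-port disabled
```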
About source port preservation
On a virtual server, you can specify whether the BIG-IP system preserves the source port of the connection. You can instruct the BIG-IP system to either preserve the source port in certain or all cases, or change the source port for all connections. The default behavior is to attempt to preserve the source port but use a different port if the source port from a particular SNAT is already in use.
Alternatively, you can instruct the system to always preserve the source port. In this case, if the port is in use, the system does not process the connection. F5 Networks recommends that you restrict use of this setting to cases that meet at least one of the following conditions:
- The port is configured for UDP traffic.
- The system is configured for nPath routing or is running in transparent mode (that is, there is no translation of any other Layer 3 or Layer 4 field).
- There is a one-to-one relationship between virtual IP addresses and node addresses, or clustered multi-processing (CMP) is disabled.
Instructing the system to change instead of preserve the source port of the connection is useful for obfuscating internal network addresses.
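In tmsh, this behavior is controlled by the source-port setting, whose values are preserve (the default), preserve-strict, and change. Both virtual server names below are hypothetical.

```
# Always preserve the client source port (use only when the conditions
# listed above are met, for example an nPath/transparent configuration)
modify ltm virtual vs_npath_example source-port preserve-strict

# Always select a new source port, obfuscating internal ports
modify ltm virtual vs_web_example source-port change
```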
About clone pools
You use a clone pool when you want to configure the BIG-IP system to send traffic to a pool of intrusion detection systems (IDSs). An intrusion detection system (IDS) is a device that monitors inbound and outbound network traffic and identifies suspicious patterns that might indicate malicious activities or a network attack. You can use the clone pool feature of a BIG-IP system to copy traffic to a dedicated IDS or a sniffer device.
To configure a clone pool, you first create the clone pool of IDS or sniffer devices and then assign the clone pool to a virtual server. The clone pool feature is the recommended method for copying production traffic to IDS or sniffer devices. Note that when you create the clone pool, the service port that you assign to each node is irrelevant; you can choose any service port. Also, when you add a clone pool to a virtual server, the system copies only new connections; existing connections are not copied.
You can configure a virtual server to copy client-side traffic, server-side traffic, or both:
- A client-side clone pool causes the virtual server to replicate client-side traffic (prior to address translation) to the specified clone pool.
- A server-side clone pool causes the virtual server to replicate server-side traffic (after address translation) to the specified clone pool.
You can configure an unlimited number of clone pools on the BIG-IP system.
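A hedged tmsh sketch of this configuration might look like the following; the pool name, member addresses, and virtual server name are assumptions.

```
# Create a clone pool of IDS devices; the service port chosen is irrelevant
create ltm pool ids_pool_example members add { 172.16.1.21:80 172.16.1.22:80 }

# Replicate client-side traffic (before address translation) to the clone pool
modify ltm virtual vs_web_example clone-pools add { ids_pool_example { context clientside } }
```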
About auto last hop
When you enable the Auto Last Hop setting, the BIG-IP system can send any return traffic to the MAC address that transmitted the request, even if the routing table points to a different network or interface. As a result, the system can send return traffic to clients even when there is no matching route, such as when the system does not have a default route configured and the client is located on a remote network.
This setting is also useful when the system is load balancing transparent devices that do not modify the source IP address of the packet. Without the Auto Last Hop setting enabled, the system could return connections to a different transparent node, resulting in asymmetric routing.
You can configure this setting globally and on an object level. You set the global Auto Last Hop value on the System >> Configuration >> Local Traffic >> General screen. In this case, users typically retain the default setting, Enabled. When you configure Auto Last Hop at the object level with a value other than Default, the value you configure takes precedence over the global setting. This enables you to configure Auto Last Hop on a per-pool member basis. The default value for the virtual server Auto Last Hop setting is Default, which causes the system to use the global Auto Last Hop setting to send back the request.
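For example, the following hedged fragment (the virtual server name is an assumption) overrides the global setting on an individual virtual server:

```
# Override the global Auto Last Hop setting for this virtual server only
modify ltm virtual vs_web_example auto-lasthop enabled
```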
About NAT64
You can instruct the BIG-IP system to allow IPv6 hosts to communicate with IPv4 servers. This setting is disabled by default.
Virtual server resources
When you create a virtual server, one of the resources that you can specify for a virtual server to use is a default server pool that you want to serve as the destination for any traffic coming from that virtual server. The system uses this pool, unless you have specified a different pool in another configuration object such as an iRule.
You can also assign other resources to a virtual server, such as iRules, policies, and persistence profiles.
About virtual address settings
A virtual address has settings that you can configure to affect the way the BIG-IP system manages traffic destined for that virtual address. When the system creates a virtual address, you can either retain the default values or adjust them to suit your needs.
About automatic deletion
You can enable the Auto Delete setting on a virtual address so that the BIG-IP system automatically deletes the virtual address when the last associated virtual server is deleted. If you disable this setting, the system retains the virtual address, even when all associated virtual servers have been deleted. The default value is enabled.
About traffic groups
If you want the virtual address to be a floating IP address, that is, an address shared between two or more BIG-IP devices in a device group, you can assign a floating traffic group to the virtual address. A floating traffic group causes the virtual address to become a floating self IP address. A floating virtual address ensures that application traffic reaches its destination when the relevant BIG-IP device becomes unavailable.
If you want the virtual address to be a static (non-floating) IP address (used mostly for standalone devices), you can assign a non-floating traffic group to the virtual address. A non-floating traffic group causes the virtual address to become a non-floating self IP address.
About route advertisement
You can enable route advertisement for a specific virtual address. When you enable route advertisement, the BIG-IP system advertises routes to the virtual address for the purpose of dynamic routing. The system can advertise a route to the virtual address under any one of these conditions:
- When any virtual server associated with the virtual address is available. In this case, when the ICMP Echo setting is set to Selective, the BIG-IP system sends an ICMP echo response for a request sent to the virtual address if one or more virtual servers associated with the virtual address are in an Up or Unknown state.
- When all virtual servers associated with the virtual address are available. In this case, when the ICMP Echo setting is set to Selective, the BIG-IP system sends an ICMP echo response for a request sent to the virtual address only when all virtual servers are available.
- Always, regardless of the availability of any virtual servers. In this case, when the ICMP Echo setting is set to Selective, the BIG-IP system always sends an ICMP echo response for a request sent to the virtual address, regardless of the state of any virtual servers associated with the virtual address.
About ARP and virtual addresses
Whenever the system creates a virtual address, Local Traffic Manager™ internally associates the virtual address with a MAC address. This in turn causes the BIG-IP® system to respond to Address Resolution Protocol (ARP) requests for the virtual address, and to send gratuitous ARP requests and responses with respect to the virtual address. As an option, you can disable ARP activity for virtual addresses, in the rare case that ARP activity affects system performance. This most likely occurs only when you have a large number of virtual addresses defined on the system.
About ICMP echo responses
You can control whether the BIG-IP system sends responses to Internet Control Message Protocol (ICMP) echo requests, on a per-virtual address basis. Specifically, you can:
- Disable ICMP echo responses. This causes the BIG-IP system to never send an ICMP echo response for ICMP request packets sent to the virtual address, regardless of the state of any virtual servers associated with the virtual address.
- Enable ICMP echo responses. This causes the BIG-IP system to always send an ICMP echo response for ICMP request packets sent to the virtual address, regardless of the state of any virtual servers associated with the virtual address.
- Selectively enable ICMP echo responses. This causes the BIG-IP system to internally enable or disable ICMP responses for the virtual address based on node status for any associated virtual servers. This value affects the behavior of the system in different ways, depending on the value of the Advertise Route setting.
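For example, a hedged tmsh fragment that adjusts several of the virtual address settings discussed in the preceding sections (the address matches the example used earlier in this guide) might look like this:

```
# Disable ARP activity, answer ICMP echo requests selectively, and
# enable route advertisement for the virtual address
modify ltm virtual-address 10.10.10.2 arp disabled icmp-echo selective route-advertisement enabled
```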
Virtual server and virtual address status
At any time, you can determine the status of a virtual server or virtual address, using the BIG-IP Configuration utility. You can find this information by displaying the list of virtual servers or virtual addresses and viewing the Status column, or by viewing the Availability property of the object.
The BIG-IP Configuration utility indicates status by displaying one of several icons, distinguished by shape and color:
- The shape of the icon indicates the status that has been reported for that virtual server or virtual address.
- The color of the icon indicates the actual status of the object.
Clustered multiprocessing
The BIG-IP® system includes a performance feature known as Clustered Multiprocessing™, or CMP®. CMP is a traffic acceleration feature that creates a separate instance of the Traffic Management Microkernel (TMM) service for each central processing unit (CPU) on the system. When CMP is enabled, the workload is shared equally among all CPUs.
Whenever you create a virtual server, the BIG-IP system automatically enables the CMP feature. When CMP is enabled, all instances of the TMM service process application traffic.
When you view standard performance graphs using the BIG-IP Configuration utility, you can see multiple instances of the TMM service (tmm0, tmm1, and so on).
When CMP is enabled, be aware that:
- While displaying some statistics individually for each TMM instance, the BIG-IP system displays other statistics as the combined total of all TMM instances.
- Connection limits for a virtual server with CMP enabled are distributed evenly across all instances of the TMM service.
You can enable or disable CMP for a virtual server, or you can enable CMP for a specific CPU.