Applies To: BIG-IP versions 1.x - 4.x
The A record is the ADDRESS resource record that a Link Controller returns to a local DNS server in response to a name resolution request. The A record contains a variety of information, including one or more IP addresses that resolve to the requested domain name.
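As a purely illustrative sketch, the name-to-address mapping that an A record represents can be modeled as a table of domain names to IP address lists. The zone data and helper below are hypothetical, not the Link Controller's implementation:

```python
# Hypothetical in-memory zone data; a real DNS server resolves names
# against authoritative zone files or upstream servers, not a static table.
ZONE = {
    "www.example.com": ["192.0.2.10", "192.0.2.11"],
}

def resolve_a_record(domain):
    """Return the list of IP addresses in the A record for a domain name."""
    try:
        return ZONE[domain.lower().rstrip(".")]
    except KeyError:
        raise LookupError("NXDOMAIN: " + domain)
```

Note that one A record lookup can return multiple addresses, which is what makes DNS-based load balancing possible.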
Any IP Traffic
Any IP Traffic is a feature that allows the BIG-IP system to load balance protocols other than TCP and UDP.
The alternate method specifies the load balancing mode that the Link Controller uses to pick a virtual server if the preferred method fails. See also preferred method.
ARL (Akamai Resource Locator)
An ARL is a URL that is modified to point to content on the Akamai Freeflow Network™. In content conversion (akamaization), the URL is converted to an ARL, which retrieves the resource from a geographically nearby server on the Akamai Freeflow Network for faster content delivery.
Authentication is the process of verifying a user's identity when the user is attempting to log on to a system.
Authorization is the process of identifying the level of access that a logged-on user has been granted to system resources.
Back Orifice is a Trojan horse that is designed to run on certain common operating systems. This Trojan horse listens on UDP port 31337 by default, and allows the system to be controlled remotely. See also Trojan horse.
The big3d utility is a monitoring utility that collects metrics information about paths between a BIG-IP system and a specific local DNS server. The big3d utility runs on BIG-IP units and it forwards metrics information to 3-DNS Controllers.
BIG-IP active unit
In a redundant system, the active unit is the BIG-IP system that currently load balances connections. If the active unit in the redundant system fails, the standby unit assumes control and begins to load balance connections.
BIG-IP web server
The BIG-IP web server runs on a BIG-IP system and hosts the Configuration utility.
The bigpipe utility provides command line access to the BIG-IP system.
The bigtop utility is a statistical monitoring utility that ships on the BIG-IP system. This utility provides real-time statistical information.
BIND (Berkeley Internet Name Domain)
BIND is the most common implementation of DNS, which provides a system for matching domain names to IP addresses.
cacheable content determination
Cacheable content determination is a process that determines the type of content you cache on the basis of any combination of elements in the HTTP header.
cacheable content expression
The cacheable content expression determines, based on evaluating variables in the HTTP header of the request, whether a BIG-IP Cache Controller directs a given request to a cache server or to an origin server. Any content that does not meet the criteria in the cacheable content expression is deemed non-cacheable.
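Conceptually, a cacheable content expression acts as a predicate over variables in the request. The sketch below uses hypothetical `is_cacheable` and `choose_pool` helpers to show the cache-or-origin decision; it is not the product's actual rule syntax:

```python
def is_cacheable(request, cacheable_extensions=(".html", ".gif", ".jpg")):
    """Hypothetical cacheable content expression: cache GET requests for
    static file types; everything else is deemed non-cacheable."""
    if request.get("method", "GET") != "GET":
        return False
    return request.get("path", "/").endswith(cacheable_extensions)

def choose_pool(request):
    # Requests matching the expression go to the cache pool;
    # non-cacheable content is directed to the origin pool.
    return "cache_pool" if is_cacheable(request) else "origin_pool"
```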
The cache pool specifies a pool of cache servers to which requests are directed in a manner that optimizes cache performance. The BIG-IP Cache Controller directs all requests bound for your origin server to this pool, unless you have configured the hot content load balancing feature, and the request is for hot (frequently requested) content. See also hot and origin server.
Certificate verification is the part of an SSL handshake that verifies that a client's SSL credentials have been signed by a trusted certificate authority.
A chain is a series of filtering criteria used to restrict access to an IP address. The order of the criteria in the chain determines how the filter is applied, from the general criteria first, to the more detailed criteria at the end of the chain.
The clone pool feature causes a pool to replicate all traffic coming into it and send that traffic to a second pool that is a clone of the first.
The completion rate is the percentage of packets that a server successfully returns during a given session.
Completion Rate mode
The Completion Rate mode is a dynamic load balancing mode that distributes connections based on which network path drops the fewest packets, or allows the fewest number of packets to time out.
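The selection logic can be sketched as follows, assuming per-path packet counters are available (the data shapes here are hypothetical, not the product's internals):

```python
def completion_rate_pick(paths):
    """paths: mapping of server name -> (packets_sent, packets_returned).
    Picks the path with the highest completion rate, i.e. the one that
    drops or times out the fewest packets."""
    def rate(item):
        sent, returned = item[1]
        return returned / sent if sent else 0.0
    return max(paths.items(), key=rate)[0]
```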
Content affinity ensures that a given subset of content remains associated with a given cache server to the maximum extent possible, even when cache servers become unavailable, or are added or removed. This feature also maximizes efficient use of cache memory.
content converter gateway
A content converter gateway is a gateway for converting URLs to ARLs. See also ARL.
content demand status
The content demand status is a measure of the frequency with which content in a given hot content subset is requested over a given hit period. Content demand status is either hot, in which case the number of requests for content in the hot content subset during the most recent hit period has exceeded the hot threshold, or cool, in which case the number of requests during the most recent hit period is less than the cool threshold. See also cool, cool threshold, hit period, hot, hot content subset, and hot threshold.
content hash size
The content hash size specifies the number of units, or hot content subsets, into which the content is divided when determining whether content is hot or cool. The requests for all content in a given subset are summed, and a state (hot or cool) is assigned to each subset. The content hash size should be within the same order of magnitude as the actual number of requests possible. For example, if the entire site is composed of 500,000 pieces of content, a content hash size of 100,000 is typical.
If you specify a value for hot pool, but do not specify a value for this variable, the cache statement uses a default hash size of 10 subsets. See also cool, hot, and hot content subset.
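The division of content into subsets and the per-subset request counting can be sketched as follows; CRC32 stands in for whatever hash function the product actually uses:

```python
import zlib

def subset_index(url, content_hash_size=10):
    """Map a piece of content to one of content_hash_size hot content
    subsets. (CRC32 is an illustrative stand-in hash.)"""
    return zlib.crc32(url.encode()) % content_hash_size

def count_subset_requests(urls, content_hash_size=10):
    """Sum the requests per subset over a hit period; each subset's total
    is later compared against the hot and cool thresholds."""
    counts = [0] * content_hash_size
    for url in urls:
        counts[subset_index(url, content_hash_size)] += 1
    return counts
```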
In products that support caching, content stripes are cacheable content subsets distributed among your cache servers.
Content switching is the ability to load balance traffic based on data contained within a packet.
Cookie persistence is a mode of persistence where the BIG-IP system stores persistent connection information in a cookie.
Cool describes content demand status when you are using hot content load balancing. See also content demand status, hot, and hot content load balancing.
The cool threshold specifies the maximum number of requests for given content that will cause that content to change from hot to cool at the end of the hit period.
If you specify a value for hot pool, but do not specify a value for this variable, the cache statement uses a default cool threshold of 10 requests. See also cool, hit period, and hot.
The BIG-IP system is configured with two default VLANs, one for each interface. One default VLAN is named internal and one is named external. See also VLAN.
default wildcard virtual server
A default wildcard virtual server has an IP address and port number of 0.0.0.0:0, *:*, or "any":"any". This virtual server accepts all traffic that does not match any other virtual server defined in the configuration.
denial-of-service attack (DoS)
A denial-of-service attack, also called DoS, is a type of security breach that results in denial of service for users of the targeted system.
A domain name is a unique name that is associated with one or more IP addresses. Domain names are used in URLs to identify particular Web pages. For example, in the URL http://www.f5.com/index.html, the domain name is f5.com.
dynamic load balancing
Dynamic load balancing modes use current performance information from each node to determine which node should receive each new connection. The different dynamic load balancing modes incorporate different performance factors such as current server performance and current connection load.
Dynamic Ratio load balancing mode
Dynamic Ratio mode is like Ratio mode (see Ratio mode), except that ratio weights are based on continuous monitoring of the servers and are therefore continually changing. Dynamic Ratio load balancing may currently be implemented on RealNetworks RealServer platforms, on Windows platforms equipped with Windows Management Instrumentation (WMI), or on a server equipped with either the UC Davis SNMP agent or Windows 2000 Server SNMP agent.
dynamic site content
Dynamic site content is site content that is automatically generated each time a user accesses the site. Examples are current stock quotes or weather satellite images.
EAV (Extended Application Verification)
EAV is a health check that verifies an application on a node by running that application remotely. EAV health check is only one of the three types of health checks available on a BIG-IP system. See also health check, health monitor and external monitor.
ECV (Extended Content Verification)
ECV is a health check that allows you to determine if a node is up or down based on whether the node returns specific content. ECV health check is only one of the three types of health checks available on a BIG-IP system. See also health check.
External authentication refers to the process of using a remote LDAP or RADIUS server to store data for the purpose of authenticating users attempting to log on to the BIG-IP system.
An external monitor is a user-supplied health monitor. See also health check and health monitor.
The external VLAN is a default VLAN on the BIG-IP system. In a basic configuration, this VLAN has the administration ports locked down. In a normal configuration, this is typically a VLAN on which external clients request connections to internal servers.
Fail-over is the process whereby a standby unit in a redundant system takes over when a software failure or a hardware failure is detected on the active unit.
The fail-over cable directly connects the two units together in a redundant system.
Fastest mode is a load balancing method that passes a new connection based on the fastest response of all currently active nodes.
FDDI (Fiber Distributed Data Interface)
FDDI is a multi-mode protocol used for transmitting data on optical-fiber cables at speeds up to 100 Mbps.
floating self IP address
A floating self IP address is an additional self IP address for a VLAN that serves as a shared address by both units of a BIG-IP redundant system.
forward proxy caching
Forward proxy caching is a configuration in which a BIG-IP Cache Controller redundant system uses content-aware traffic direction to enhance the efficiency of an array of cache servers storing Internet content for internal users.
Global Availability mode
Global Availability mode is a static load balancing mode that bases connection distribution on a particular server order, always sending a connection to the first available server in the list. This mode differs from Round Robin mode in that it searches for an available server always starting with the first server in the list, while Round Robin mode searches for an available server starting with the next server in the list (with respect to the server selected for the previous connection request).
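The difference between the two scan orders can be sketched with two hypothetical helpers (illustrative only, not product code):

```python
def global_availability(servers, is_up):
    """Always scan from the first server in the configured order."""
    for server in servers:
        if is_up(server):
            return server
    return None

def round_robin(servers, is_up, last_index):
    """Start scanning at the server after the previously selected one.
    Returns the chosen server and its index (for the next call)."""
    n = len(servers)
    for offset in range(1, n + 1):
        i = (last_index + offset) % n
        if is_up(servers[i]):
            return servers[i], i
    return None, last_index
```

With all servers up, Global Availability keeps returning the first server, while Round Robin advances through the list on each request.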
A health check is a BIG-IP system feature that determines whether a node is up or down. Health checks are implemented through health monitors. See also health monitor, ECV, EAV, and external monitor.
A health monitor checks a node to see if it is up and functioning for a given service. If the node fails the check, it is marked down. Different monitors exist for checking different services. See also health check, EAV, ECV, and external monitor.
The high-water mark threshold determines when unestablished connections through the BIG-IP system will no longer be allowed. It is one of two global settings that provide the ability to reap connections adaptively, used in preventing denial-of-service attacks. See also low-water mark.
The hit period specifies the period, in seconds, over which to count requests for particular content before determining whether to change the state (hot or cool) of the content.
If you specify a value for hot pool, but do not specify a value for this variable, the cache statement uses a default hit period of 10 seconds. See also cool, hot, and hot pool.
A host is a network server that manages one or more virtual servers that the 3-DNS Controller uses for load balancing.
Hot is a term used to define frequently requested content based on the number of requests in a given time period for a given hot content subset. See also hot content subset.
hot content load balancing
Hot content load balancing identifies hot or frequently requested content on the basis of number of requests in a given time period for a given hot content subset. A hot content subset is different from, and typically smaller than, the content subsets used for content striping. Requests for hot content are redirected to a cache server in the hot pool, a designated group of cache servers. This feature maximizes the use of cache server processing power without significantly affecting the memory efficiency gained by cacheable content determination. See also hot, hot content subset, and hot pool.
hot content subset
A hot content subset is different from, and typically smaller than, the content subsets used for cacheable content determination. A hot content subset is created from a content subset once that content has been determined to be hot. See also cacheable content determination.
A hot pool is a designated group of cache servers to which requests are load balanced when the requested content is hot. If a request is for hot content, the BIG-IP Cache Controller redundant system directs the request to this pool.
The hot threshold specifies the minimum number of requests for content in a given hot content subset that will cause that content to change from cool to hot at the end of the period.
If you specify a value for hot pool, but do not specify a value for this variable, the cache statement uses a default hot threshold of 100 requests. See also cool, hot, hot content subset, and hot pool.
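The end-of-hit-period state change described in these entries can be sketched as a small state function, using the default thresholds mentioned above (100 requests to become hot, 10 to become cool):

```python
def update_demand_status(current_state, requests_in_hit_period,
                         hot_threshold=100, cool_threshold=10):
    """Re-evaluate a hot content subset's demand status at the end of a
    hit period. Defaults mirror the documented cache statement defaults."""
    if current_state == "cool" and requests_in_hit_period > hot_threshold:
        return "hot"
    if current_state == "hot" and requests_in_hit_period < cool_threshold:
        return "cool"
    return current_state
```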
An HTTP redirect sends an HTTP 302 Found message to clients. You can configure a pool with an HTTP redirect to send clients to another node or virtual server if the members of the pool are marked down.
ICMP (Internet Control Message Protocol)
ICMP is an Internet communications protocol used to determine information about routes to destination addresses, such as virtual servers managed by BIG-IP units and 3-DNS Controllers.
The ICMP flood, sometimes referred to as a "Smurf" attack, is an attack based on a method of making a remote network send ICMP Echo replies to a single host. In this attack, a single packet from the attacker goes to an unprotected network's broadcast address, which can cause every machine on that network to answer by sending a packet to the target.
intelligent cache population
Intelligent cache population allows caches to retrieve content from other caches in addition to the origin web server. Use this feature when working with non-transparent cache servers that can receive requests destined for the cache servers themselves. Intelligent cache population minimizes the load on the origin web server and speeds cache population. See also non-transparent cache server and transparent cache server.
The physical port on a BIG-IP system is called an interface. See also link.
The internal VLAN is a default VLAN on the BIG-IP system. In a basic configuration, this VLAN has the administration ports open. In a normal configuration, this is a network interface that handles connections from internal servers.
IPSEC (Internet Security Protocol) is a communications protocol that provides security for the network layer of the Internet without imposing requirements on applications running above it.
iQuery is a UDP-based protocol used to exchange information between BIG-IP units and 3-DNS Controllers. The iQuery protocol is officially registered for port 4353.
Key Management System
The Key Management System (KMS) is a set of screens within the Configuration utility that allows you to centrally manage SSL proxy keys and certificates. You can generate certificate requests, install keys, and export and import keys and key archives. You can also associate keys with SSL proxies.
The Kilobytes/Second mode is a dynamic load balancing mode that distributes connections based on which available server currently processes the fewest kilobytes per second.
A Land attack is a SYN packet sent where the source address and port are the same as the destination address and port.
A last hop is the final hop a connection took to get to the BIG-IP system. You can allow the BIG-IP system to determine the last hop automatically to send packets back to the device from which they originated. You can also specify the last hop manually by making it a member of a last hop pool.
Least Connections mode
Least Connections mode is a dynamic load balancing mode that bases connection distribution on which server currently manages the fewest open connections.
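A minimal sketch of the selection rule, assuming a current open-connection count is tracked per server (illustrative only):

```python
def least_connections(open_connections):
    """open_connections: mapping of server -> current open connection count.
    Returns the server currently managing the fewest open connections."""
    return min(open_connections, key=open_connections.get)
```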
A link is a physical interface on the BIG-IP system connected to another physical interface in a network.
The link aggregation feature allows you to combine a number of links together to act as one interface.
load balancing mode
A load balancing mode is a particular method of determining how to distribute connections across an array.
A local DNS is a server that makes name resolution requests on behalf of a client. With respect to the Link Controller, local DNS servers are the source of name resolution requests. Local DNS is also referred to as LDNS.
A loopback adapter is a software interface that is not associated with an actual network card. The nPath routing configuration requires you to configure loopback adapters on servers.
The low-water mark threshold determines at what point adaptive reaping becomes more aggressive. It is one of two global settings that provide the ability to reap connections adaptively, used in preventing denial-of-service attacks. See also high-water mark.
MAC (Media Access Control)
MAC is a protocol that defines the way workstations gain access to transmission media, and is most widely used in reference to LANs. For IEEE LANs, the MAC layer is the lower sublayer of the data link layer protocol.
A MAC address is used to represent hardware devices on an Ethernet network.
Member is a reference to a node when it is included in a particular pool. Pools typically include multiple member nodes.
Metrics information is the data that is typically collected about the paths between Link Controllers and local DNS servers. Metrics information is also collected about the performance and availability of virtual servers. Metrics information is used for load balancing, and it can include statistics such as round trip time, packet rate, and packet loss.
minimum active members
The minimum active members is the number of members that must be active in a priority group for the BIG-IP system to send its requests to only that group. If the number of active members falls below this number, the system also sends requests to the next highest priority group (the priority group with the next lowest priority number).
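The priority-group behavior can be sketched as follows; the data layout is a hypothetical simplification in which higher priority numbers are tried first:

```python
def eligible_members(priority_groups, minimum_active_members):
    """priority_groups: mapping of priority number -> list of currently
    active members. Members from lower-priority groups become eligible
    only while the eligible count is below minimum_active_members."""
    eligible = []
    for priority in sorted(priority_groups, reverse=True):
        if len(eligible) >= minimum_active_members:
            break
        eligible.extend(priority_groups[priority])
    return eligible
```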
When a cache does not have requested content and cannot respond to the request, it is called a miss request.
The BIG-IP system uses monitors to determine whether nodes are up or down. There are several different types of monitors and they use various methods to determine the status of a server or service.
monitor destination IP address or IP address:port
The monitor destination IP address or address:port for a user defined monitor is used mainly for setting up a node alias for the monitor to check. All nodes associated with that monitor will be marked down if the alias node (destination IP address:port) is marked down. See also node alias.
You create a monitor instance when a health monitor is associated with a node, node address, or port. It is the monitor instance that actually performs the health check, not the monitor.
A monitor template is a system-supplied health monitor that is used primarily as a template to create user-defined monitors, but in some cases can be used as is. The BIG-IP system includes a number of monitor templates, each specific to a service type, for example, HTTP and FTP. The template has a template type that corresponds to the service type and is usually the name of the template.
named is the name server utility, which manages domain name server software.
Name resolution is the process by which a name server matches a domain name request to an IP address, and sends the information to the client requesting the resolution.
A name server is a server that maintains a DNS database, and resolves domain name requests to IP addresses using that database.
NAT (Network Address Translation)
A NAT is an alias IP address that identifies a specific node managed by the BIG-IP system to the external network.
Nimda is a computer virus that spreads through multiple methods, including infecting computers running Microsoft's web server, Internet Information Server (IIS), and infecting computer users who open an e-mail attachment, causing traffic slowdowns. Nimda does not appear to destroy files or cause harm other than the denial of service. Its name ("admin" backwards) apparently refers to an admin.DLL file that, when run, continues to propagate the virus. See also denial-of-service attack.
A node is a specific combination of an IP address and port (service) number associated with a server in the array that is managed by the BIG-IP system.
A node address is the IP address associated with one or more nodes. This IP address can be the real IP address of a network server, or it can be an alias IP address on a network server.
A node alias is a node address that the BIG-IP system uses to verify the status of multiple nodes. When the BIG-IP system uses a node alias to check node status, it pings the node alias. If the BIG-IP system receives a response to the ping, it marks all nodes associated with the node alias as up. If the BIG-IP system does not receive a response to the ping, it marks all nodes associated with the node alias as down.
A node port is the port number or service name that is hosted by a specific node.
Node status indicates whether a node is up and available to receive connections, or down and unavailable. The BIG-IP system uses the node ping and health check features to determine node status.
Non-cacheable content is content that is not identified in the cacheable content condition part of a cache rule statement.
non-transparent cache server
Cache servers that can receive requests that are destined for the cache servers themselves are called non-transparent cache servers.
Observed mode is a dynamic load balancing mode that bases connection distribution on a combination of two factors: the server that currently hosts the fewest connections and also has the fastest response time.
The origin pool specifies a pool of servers that contain original copies of all content. Requests are load balanced to this pool when any of the following is true: the requested content is not cacheable, no cache server is available, or the BIG-IP Cache Controller redundant system is redirecting a request from a cache server that did not have the requested content.
An origin server is the web server on which all original copies of your content reside.
A path is a logical network route between a Link Controller and a local DNS server.
Path probing is the collection of metrics data, such as round trip time and packet rate, for a given path between a requesting LDNS server and a Link Controller.
A performance monitor gathers statistics and checks the state of a target device.
Persistence is a series of related connections received from the same client, having the same session ID. When persistence is turned on, a BIG-IP system sends all connections having the same session ID to the same node, instead of load balancing the connections.
Ping of Death attack
The Ping of Death attack is an attack with ICMP echo packets that are larger than 65535 bytes. Because 65535 bytes is the maximum allowed IP packet size, systems that attempt to reassemble the oversized packet can crash. See also denial-of-service attack.
A pool is composed of a group of network devices (called members). The BIG-IP system load balances requests to the nodes within a pool based on the load balancing method and persistence method you choose when you create the pool or edit its properties.
A pool ratio is a ratio weight applied to pools in a wide IP. If the Pool LB mode is set to Ratio, the Link Controller uses each pool for load balancing in proportion to the weight defined for the pool.
A port can be represented by a number that is associated with a specific service supported by a host. Refer to the Services and Port Index for a list of port numbers and corresponding services.
port-specific wildcard virtual server
A port-specific wildcard virtual server is a wildcard virtual server that uses a port number other than 0. See wildcard virtual server.
Port mirroring is a feature that allows you to copy traffic from any port or set of ports to a single, separate port where a sniffing device is attached.
Predictive mode is a dynamic load balancing mode that bases connection distribution on a combination of two factors: the server that currently hosts the fewest connections, and also has the fastest response time. Predictive mode also ranks server performance over time, and passes connections to servers which exhibit an improvement in performance rather than a decline.
The preferred method specifies the first load balancing mode that the Link Controller uses to load balance a resolution request. See also alternate method.
The QOS equation is the equation on which the Quality of Service load balancing mode is based. The equation calculates a score for a given path between a link and a local DNS server. The Quality of Service mode distributes connections based on the best path score for an available link. You can apply weights to the factors in the equation, such as round trip time and completion rate.
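The general shape of the QOS equation can be sketched as a weighted sum; the factor set, inverse terms, and default weights below are illustrative assumptions, not the product's actual coefficients:

```python
def qos_score(metrics, weights=None):
    """Compute a hypothetical path score. Higher is better, so factors
    where smaller is better (round trip time, hops) contribute inversely."""
    weights = weights or {"rtt": 50, "completion_rate": 5,
                          "packet_rate": 1, "hops": 0}
    score = 0.0
    score += weights["rtt"] / metrics["rtt"] if metrics["rtt"] else 0.0
    score += weights["completion_rate"] * metrics["completion_rate"]
    score += weights["packet_rate"] * metrics["packet_rate"]
    score += weights["hops"] / metrics["hops"] if metrics["hops"] else 0.0
    return score

def best_path(paths, weights=None):
    """Select the path (link) with the best score for a local DNS server."""
    return max(paths, key=lambda p: qos_score(paths[p], weights))
```

Setting a factor's weight to zero, as with `hops` above, removes that factor from the decision, which mirrors how the equation's weights let you emphasize or ignore individual metrics.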
Quality of Service load balancing mode
The Quality of Service load balancing mode is a dynamic inbound load balancing mode that bases connection distribution on a configurable combination of the packet rate, completion rate, round trip time, hops, virtual server capacity, kilobytes per second, and topology information.
A rate class determines the volume of traffic allowed through a rate filter. You create a rate class from the Configuration utility or the command line utility, and then assign it to a rate filter. See also rate filter.
Rate filters consist of a basic filter with a rate class. Rate filters are a type of extended IP filter. They use the same IP filter method, but they apply a rate class, which determines the volume of network traffic allowed through the filter. See also rate class.
A ratio is a parameter that assigns a weight to a virtual server for load balancing purposes.
The Ratio load balancing mode distributes connections across an array of virtual servers in proportion to the ratio weights assigned to each individual virtual server.
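A simple sketch of proportional selection under ratio weights (a deliberately naive scheduler, not the product's algorithm):

```python
import itertools

def ratio_schedule(ratios, n):
    """Yield the first n picks of a naive ratio scheduler: each virtual
    server is selected in proportion to its ratio weight."""
    cycle = itertools.cycle(
        server for server, weight in ratios.items() for _ in range(weight)
    )
    return [next(cycle) for _ in range(n)]
```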
A receive expression is the text string that the BIG-IP system looks for in the web page returned by a web server during an extended content verification (ECV) health check.
Redundant system refers to a pair of BIG-IP units that are configured for fail-over. In a redundant system, there are two units, one running as the active unit and one running as the standby unit. If the active unit fails, the standby unit takes over and manages connection requests.
remote administrative IP address
A remote administrative IP address is an IP address from which a BIG-IP system allows shell connections, such as Telnet or SSH.
remote server acceleration
A remote server acceleration configuration is a configuration in which a BIG-IP Cache Controller redundant system uses content-aware traffic direction to enhance the efficiency of an array of cache servers that cache content for a remote web server.
A resource record is a record in a DNS database that stores data associated with domain names. A resource record typically includes a domain name, a TTL, a record type, and data specific to that record type. See also A record.
RFC 1918 addresses
An RFC 1918 address is an address that is within the range of non-routable addresses described in the IETF RFC 1918.
Round Robin mode
Round Robin mode is a static load balancing mode that bases connection distribution on a set server order. Round Robin mode sends a connection request to the next available server in the order.
round trip time (RTT)
Round trip time is the calculation of the time (in microseconds) that a local DNS server takes to respond to a ping issued by the big3d agent running on a link. The Link Controller takes RTT values into account when it uses dynamic load balancing modes.
Round Trip Time mode
Round Trip Time mode is a dynamic load balancing mode that bases connection distribution on which virtual server has the fastest measured round trip time between the link and the local DNS server.
self IP address
Self IP addresses are the IP addresses owned by the BIG-IP system that you use to access the internal and external VLANs.
A send string is the request that the BIG-IP system sends to the web server during an extended content verification (ECV) health check.
Service refers to services such as TCP, UDP, HTTP, and FTP.
The Setup utility walks you through the initial system configuration process. You can run the Setup utility from either the command line or the Configuration utility start page.
SNAT (Secure Network Address Translation)
A SNAT is a feature you can configure on the BIG-IP system. A SNAT defines a routable alias IP address that one or more nodes can use as a source IP address when making connections to hosts on the external network.
The SNAT automap feature allows the BIG-IP system to perform a SNAT automatically on any connection that is coming from the unit's internal VLAN. It is easier to use than a traditional SNAT, and solves certain problems associated with traditional SNATs.
SNMP (Simple Network Management Protocol)
SNMP is the Internet standard protocol, defined in STD 15, RFC 1157, developed to manage nodes on an IP network.
Source processing means that the interface rewrites the source of an incoming packet.
spanning tree protocol (STP)
Spanning tree protocol is a protocol that provides loop resolution in configurations where one or more external switches is connected in parallel with the BIG-IP system.
An SSL proxy is a gateway for decrypting HTTP requests to an HTTP server and encrypting the reply.
SSL-to-Server is an SSL proxy feature that provides secure communication between the BIG-IP system and a target content server.
A standby unit in a redundant system is a unit that is always prepared to become the active unit if the active unit fails.
stateful site content
Content that maintains dynamic information for clients on an individual basis and is commonly found on e-commerce sites is called stateful site content. For example, a site that allows a user to fill a shopping cart, leave the site, and then return and purchase the items in the shopping cart at a later time has stateful site content which retains the information for that client's particular shopping cart.
State mirroring is a feature on the BIG-IP system that preserves connection and persistence information in a BIG-IP redundant system.
static load balancing modes
Static load balancing modes base connection distribution on a pre-defined list of criteria; they do not take current server performance or current connection load into account.
static site content
Static site content is a type of site content that is stored in HTML pages, and changes only when an administrator edits the HTML document itself.
A sticky mask is a special IP mask that you can configure on the BIG-IP system. This mask optimizes sticky persistence entries by grouping more of them together.
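The grouping effect of a sticky mask can be sketched with the standard `ipaddress` module (an illustration of the masking, not the persistence table itself):

```python
import ipaddress

def sticky_key(client_ip, sticky_mask="255.255.255.0"):
    """Apply the sticky mask to a client address so that clients in the
    same masked group share a single persistence entry."""
    ip = int(ipaddress.IPv4Address(client_ip))
    mask = int(ipaddress.IPv4Address(sticky_mask))
    return str(ipaddress.IPv4Address(ip & mask))
```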
Sub 7 attack
A Sub 7 attack is a Trojan horse that is designed to run on certain common operating systems. This Trojan horse allows the system to be controlled remotely. See also Trojan horse.
A feature designed to alleviate SYN flooding, SYN Check sends information about the flow, in the form of cookies, to the requesting client. Thus the system does not need to keep the SYN-RECEIVED state that is normally stored in the connection table for the initiated session.
A SYN flood is an attack against a system for the purpose of exhausting that system's resources. The intent is to occupy all available resources used to establish TCP connections by sending multiple SYN segments with spoofed source IP addresses.
A SYN queue is a set of connections stored in the connection table in the SYN-RECEIVED state, as part of the standard three-way TCP handshake. A SYN queue can hold a specified maximum number of connections in the SYN-RECEIVED state.
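The cookie technique behind SYN Check can be illustrated with a minimal sketch: flow identity is encoded into the initial sequence number, so no SYN-RECEIVED entry is needed until the client proves itself by echoing the cookie. The secret, function names, and hashing scheme here are illustrative assumptions, not the BIG-IP implementation:

```python
import hashlib

SECRET = b"rotated-server-secret"  # hypothetical per-server secret

def syn_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Derive a 32-bit cookie from the flow's 4-tuple; it is sent to the
    client as the SYN-ACK sequence number instead of storing state."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + flow).digest()
    return int.from_bytes(digest[:4], "big")

def validate_ack(src_ip: str, src_port: int,
                 dst_ip: str, dst_port: int, ack_number: int) -> bool:
    """A legitimate client echoes cookie + 1 in the final ACK; only then
    does the server allocate a connection-table entry."""
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) & 0xFFFFFFFF
    return ack_number == expected
```

A spoofed source never sees the SYN-ACK, so it cannot return a valid acknowledgment number, and the flood consumes no SYN queue slots.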
You can define any interface as a member of a tagged VLAN. You can create a list of VLAN tags or names for each tagged interface.
A Teardrop attack is carried out by a program that sends IP fragments to a machine connected to the Internet or a network. The Teardrop attack exploits an overlapping IP fragment problem present in some common operating systems that causes the TCP/IP fragmentation re-assembly code to improperly handle overlapping IP fragments.
transparent cache server
A transparent cache server can intercept requests destined for a web server, but cannot receive requests that are addressed to it directly.
A transparent node appears as a router to other network devices, including the BIG-IP system.
A Trojan horse is a harmful program disguised as a benign application. The term comes from the Greek legend of the Trojan War, in which the Greeks gave a giant wooden horse to their foes, the Trojans.
A trunk is a combination of two or more interfaces and cables configured as one link. See also link aggregation.
UDP flood attack
The UDP flood attack is most commonly a distributed denial-of-service (DDoS) attack, in which multiple remote systems send a large flood of UDP packets to the target. See also denial-of-service attack.
UDP fragment attack
The UDP fragment attack is based on forcing the system to reassemble huge amounts of UDP data sent as fragmented packets. The goal of this attack is to consume system resources to the point where the system fails. See also denial-of-service attack.
The unavailable status is used for links and virtual servers. When a link or virtual server is unavailable, the Link Controller does not use it for load balancing.
The unknown status is used for links and virtual servers. When a link or virtual server is new to the Link Controller and does not yet have metrics information, the Link Controller marks its status as unknown. The Link Controller can use unknown servers for load balancing, but if the load balancing mode is dynamic, the Link Controller uses default metrics information for the unknown server until it receives live metrics data.
The up status is used for links and virtual servers. When a link or virtual server is up, the link or virtual server is available to process connections.
Universal Inspection Engine
The Universal Inspection Engine (UIE) is a feature that offers universal persistence and universal content switching, to enhance your load balancing capabilities. The UIE contains a set of rule variables and functions for building expressions that you can specify in pool definitions and rules.
Universal persistence gives you the ability to persist on any string found within a packet. Also, you can directly select the pool member to which you want to persist.
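How persisting on an arbitrary string works can be sketched as a deterministic mapping from that string to a pool member. The pool addresses and hashing choice below are illustrative assumptions, not the UIE's actual mechanism:

```python
import hashlib

# Hypothetical pool members
POOL = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]

def persist_on_string(key: str, pool=POOL) -> str:
    """Map any string found in a packet (for example, a session ID) to a
    pool member, so the same string always selects the same member."""
    index = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big") % len(pool)
    return pool[index]

# Repeated requests carrying the same session ID reach the same member:
member = persist_on_string("JSESSIONID=abc123")
assert member == persist_on_string("JSESSIONID=abc123")
```

Because the mapping depends only on the string, persistence survives even when the client's IP address changes between requests.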
A user-defined monitor is a custom monitor that a user configures, based on a system-supplied monitor template. For some monitor types, you must create a user-defined monitor in order to use them. For all monitor types, you must create a user-defined monitor to change the default values of a system-supplied monitor.
A virtual address is an IP address associated with one or more virtual servers managed by the BIG-IP system.
A virtual port is the port number or service name associated with one or more virtual servers managed by the BIG-IP system. A virtual port number should be the same TCP or UDP port number to which client programs expect to connect.
A virtual server is a specific combination of virtual address and virtual port, associated with a content site that is managed by a BIG-IP system or other type of host server.
VLAN stands for virtual local area network. A VLAN is a logical grouping of network devices. You can use a VLAN to logically group devices that are on different network segments.
A VLAN name is the symbolic name used to identify a VLAN. For example, you might configure a VLAN named marketing, or a VLAN named development. See also VLAN.
watchdog timer card
A watchdog timer card is a hardware device that monitors the BIG-IP system for hardware failure.
A wide IP is a collection of one or more domain names that maps to one or more groups of virtual servers managed by Link Controllers. The Link Controller load balances name resolution requests across the virtual servers that are defined in the wide IP that is associated with the requested domain name.
wildcard virtual server
A wildcard virtual server is a virtual server that uses an IP address of 0.0.0.0, *, or "any". A wildcard virtual server accepts connection requests for destinations outside of the local network. Wildcard virtual servers are included only in Transparent Node Mode configurations.
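The relationship between ordinary and wildcard virtual servers can be sketched as a two-step lookup: match the exact (address, port) combination first, then fall back to the 0.0.0.0 wildcard entry. The table and names here are hypothetical illustrations:

```python
def match_virtual_server(dest_ip: str, dest_port: int, virtual_servers: dict):
    """Select a virtual server for a connection: prefer an exact
    (address, port) match, else fall back to the wildcard entry."""
    return (virtual_servers.get((dest_ip, dest_port))
            or virtual_servers.get(("0.0.0.0", dest_port)))

vs_table = {
    ("10.0.0.5", 80): "web_pool",        # ordinary virtual server
    ("0.0.0.0", 80): "wildcard_pool",    # wildcard virtual server
}

print(match_virtual_server("10.0.0.5", 80, vs_table))     # web_pool
print(match_virtual_server("203.0.113.9", 80, vs_table))  # wildcard_pool
```

Traffic to a destination outside the local network has no exact entry, so the wildcard virtual server handles it.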
A WinNuke attack exploits the way certain common operating systems handle out-of-band data sent to the NetBIOS ports. (The NetBIOS ports are 137, 138, and 139, using TCP or UDP.) See also denial-of-service attack.