Some of the BIG-IP system profiles that you can configure are known as protocol profiles. The protocol profile types are:
Fast L4
Fast HTTP
TCP
UDP
SCTP
Any IP
For each protocol profile type, the BIG-IP system provides a pre-configured profile with default settings. In most cases, you can use these default profiles as is. If you want to change these settings, you can configure protocol profile settings when you create a profile, or after profile creation by modifying the profile’s settings.
To configure and manage protocol profiles, log in to the BIG-IP Configuration utility, and on the Main tab, expand Local Traffic, and click Profiles.
The purpose of a Fast L4 profile is to help you manage Layer 4 traffic more efficiently. When you assign a Fast L4 profile to a virtual server, the Packet Velocity® ASIC (PVA) hardware acceleration within the BIG-IP system (if supported) can process some or all of the Layer 4 traffic passing through the system. By offloading Layer 4 processing to the PVA hardware acceleration, the BIG-IP system can increase performance and throughput for basic routing functions (Layer 4) and application switching (Layer 7).
You can use a Fast L4 profile with these types of virtual servers: Performance (Layer 4), Forwarding (Layer 2), and Forwarding (IP).
When you implement a Fast L4 profile, you can instruct the system to dynamically offload flows in a connection to ePVA hardware, if your BIG-IP system supports such hardware. When you enable the PVA Offload Dynamic setting in a Fast L4 profile, you can then configure these values:
The number of client packets before dynamic ePVA hardware re-offloading occurs. The valid range is from 0 (zero) through 10. The default is 1.
The number of server packets before dynamic ePVA hardware re-offloading occurs. The valid range is from 0 (zero) through 10. The default is 0.
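As a sketch, these values could be set from the command line when creating a custom Fast L4 profile. The profile name my_fastl4 is hypothetical, and the attribute names are assumptions based on common tmsh fastl4 profile options; verify them on your TMOS version.

```
# Create a custom Fast L4 profile with dynamic ePVA offloading enabled.
# Attribute names are assumptions; confirm with "tmsh help ltm profile fastl4".
tmsh create ltm profile fastl4 my_fastl4 \
    pva-offload-dynamic enabled \
    pva-dynamic-client-packets 1 \
    pva-dynamic-server-packets 0
```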
The Fast L4 profile type includes two settings you can configure for prioritizing traffic flows when ePVA flow acceleration is being used:
PVA Offload Initial Priority
Specifies the initial priority level for traffic flows that you want to be inserted into the flow accelerator. Supported initial priority levels are high, medium, and low. Setting an initial priority enables the BIG-IP system to observe flows and adjust the priority as needed. If both directions are being accelerated, the initial priority level applies to both directions of the packets on a flow. The default value is Medium.
PVA Offload Dynamic Priority
When this setting is enabled, the BIG-IP system can adjust the priority of flows in the flow accelerator based on its observation of those flows. You can enable this setting on the Fast L4 profile. The default value is Disabled.
Note that prioritizing flow insertion into the flow accelerator:
Applies to UDP and TCP traffic only.
Functions on a per-Fast L4 profile and per-virtual server basis.
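A hedged tmsh sketch of the two priority settings follows; both the profile name and the attribute names are assumptions, so confirm them against your TMOS version before use.

```
# Set the initial offload priority and enable dynamic priority adjustment.
# Attribute names are assumptions; confirm with "tmsh help ltm profile fastl4".
tmsh modify ltm profile fastl4 my_fastl4 \
    pva-offload-initial-priority medium \
    pva-offload-dynamic-priority enabled
```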
The Fast HTTP profile is a configuration tool designed to speed up certain types of HTTP connections. This profile combines selected features from the TCP Express, HTTP, and OneConnect™ profiles into a single profile that is optimized for the best possible network performance. When you associate this profile with a virtual server, the virtual server processes traffic packet-by-packet, and at a significantly higher speed.
You might consider using a Fast HTTP profile when:
You do not need features such as remote server authentication, SSL traffic management, or TCP optimizations, or HTTP features such as data compression, pipelining, and RAM Cache.
You do not need to maintain source IP addresses.
You want to reduce the number of connections that are opened to the destination servers.
The destination servers support connection persistence, that is, HTTP/1.1, or HTTP/1.0 with Keep-Alive headers. Note that IIS servers support connection persistence by default.
You need basic iRule support only (such as limited Layer 4 support and limited HTTP header operations). For example, you can use the iRule events CLIENT_ACCEPTED, SERVER_CONNECTED, and HTTP_REQUEST.
A significant benefit of using a Fast HTTP profile is the way in which the profile supports connection persistence. Using a Fast HTTP profile ensures that for client requests, the BIG-IP system can transform or add an HTTP Connection header to keep connections open. Using the profile also ensures that the BIG-IP system pools any open server-side connections. This support for connection persistence can greatly reduce the load on destination servers by removing much of the overhead caused by the opening and closing of connections.
Note: The Fast HTTP profile is incompatible with all other profile types. Also, you cannot use this profile type in conjunction with VLAN groups, or with the IPv6 address format.
When writing iRules®, you can specify a number of events and commands that the Fast HTTP profile supports.
You can use the default fasthttp profile as is, or create a custom Fast HTTP profile.
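For example, a custom Fast HTTP profile can be created from the default and assigned to a virtual server. The profile and virtual server names here are hypothetical.

```
# Create a custom Fast HTTP profile based on the default fasthttp profile,
# then assign it to an existing virtual server. Names are hypothetical.
tmsh create ltm profile fasthttp my_fasthttp defaults-from fasthttp
tmsh modify ltm virtual my_http_vs profiles replace-all-with { my_fasthttp }
```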
TCP profiles are configuration tools that help you to manage TCP network traffic. Many of the configuration settings of TCP profiles are standard SYSCTL types of settings, while others are unique to the BIG-IP system.
TCP profiles are important because they are required for implementing certain types of other profiles. For example, by implementing TCP, HTTP, Rewrite, HTML, and OneConnect™ profiles, along with a persistence profile, you can take advantage of various traffic management features, such as:
Content spooling, to reduce server load
OneConnect, to pool idle server-side connections
Layer 7 session persistence, such as hash or cookie persistence
iRules® for managing HTTP traffic
HTTP data compression
HTTP pipelining
URI translation
HTML content modification
Rewriting of HTTP redirections
The BIG-IP system includes several pre-configured TCP profiles that you can use as is. In addition to the default tcp profile, the system includes TCP profiles that are pre-configured to optimize LAN and WAN traffic, as well as traffic for mobile users. You can use the pre-configured profiles as is, or you can create a custom profile based on a pre-configured profile and then adjust the values of the settings in the profiles to best suit your particular network environment.
The tcp-lan-optimized and f5-tcp-lan profiles are pre-configured profiles that can be associated with a virtual server. In cases where the BIG-IP virtual server is load balancing LAN-based or interactive traffic, you can enhance the performance of your local-area TCP traffic by using the tcp-lan-optimized or the f5-tcp-lan profiles.
If the traffic profile is strictly LAN-based, or highly interactive, and a standard virtual server with a TCP profile is required, you can configure your virtual server to use the tcp-lan-optimized or f5-tcp-lan profiles to enhance LAN-based or interactive traffic. For example, applications producing an interactive TCP data flow, such as SSH and TELNET, normally generate a TCP packet for each keystroke. A TCP profile setting such as Slow Start can introduce latency when this type of traffic is being processed.
You can use the tcp-lan-optimized or f5-tcp-lan profile as is, or you can create another custom profile, specifying the tcp-lan-optimized or f5-tcp-lan profile as the parent profile.
The tcp-wan-optimized and f5-tcp-wan profiles are pre-configured profile types. In cases where the BIG-IP system is load balancing traffic over a WAN link, you can enhance the performance of your wide-area TCP traffic by using the tcp-wan-optimized or f5-tcp-wan profiles.
If the traffic profile is strictly WAN-based, and a standard virtual server with a TCP profile is required, you can configure your virtual server to use a tcp-wan-optimized or f5-tcp-wan profile to enhance WAN-based traffic. For example, in many cases, the client connects to the BIG-IP virtual server over a WAN link, which is generally slower than the connection between the BIG-IP system and the pool member servers. If you configure your virtual server to use the tcp-wan-optimized or f5-tcp-wan profile, the BIG-IP system can accept the data more quickly, allowing resources on the pool member servers to remain available. Also, use of this profile can increase the amount of data that the BIG-IP system buffers while waiting for a remote client to accept that data. Finally, you can increase network throughput by reducing the number of short TCP segments that the BIG-IP system sends on the network.
You can use the tcp-wan-optimized or f5-tcp-wan profiles as is, or you can create another custom profile, specifying the tcp-wan-optimized or f5-tcp-wan profile as the parent profile.
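Creating custom profiles from these pre-configured parents is a one-line operation in tmsh. The profile names my_lan_tcp and my_wan_tcp are hypothetical.

```
# Custom TCP profiles inheriting from the pre-configured LAN- and
# WAN-optimized parent profiles. Profile names are hypothetical.
tmsh create ltm profile tcp my_lan_tcp defaults-from tcp-lan-optimized
tmsh create ltm profile tcp my_wan_tcp defaults-from tcp-wan-optimized
```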
The tcp-mobile-optimized profile is a pre-configured profile type, for which the default values are set to give better performance to service providers’ 3G and 4G customers. Specific options in the pre-configured profile are set to optimize traffic for most mobile users, and you can tune these settings to fit your network. For files that are smaller than 1 MB, this profile is generally better than the mptcp-mobile-optimized profile. For a more conservative profile, you can start with the tcp-mobile-optimized profile, and adjust from there.
Note: Although the pre-configured settings produced the best results in the test lab, network conditions are extremely variable. For the best results, start with the default settings and then experiment to find out what works best in your network.
This list provides guidance for the relevant settings:
Set the Proxy Buffer Low to the Proxy Buffer High value minus 64 KB. If the Proxy Buffer High is set to less than 64 KB, set this value to 32 KB.
The size of the Send Buffer ranges from 64 KB to 350 KB, depending on network characteristics. If you enable the Rate Pace setting, the send buffer can handle over 128 KB, because rate pacing eliminates some of the burstiness that would otherwise exist. On a network with higher packet loss, smaller buffer sizes perform better than larger ones. The number of loss recoveries indicates whether this setting should be tuned higher or lower; more loss recoveries reduce the goodput.
Setting the Keep Alive Interval depends on your fast dormancy goals. The default setting of 1800 seconds allows the phone to enter low power mode while keeping the flow alive on intermediary devices. To prevent the device from entering an idle state, lower this value to under 30 seconds.
The Congestion Control setting includes delay-based and hybrid algorithms, which might address TCP performance issues better than fully loss-based congestion control algorithms in mobile environments. The Illinois algorithm is more aggressive, and can perform better in some situations, particularly when object sizes are small. When objects are larger than 1 MB, goodput might decrease with Illinois. In a high-loss network, Illinois produces lower goodput and higher retransmissions.
For 4G LTE networks, specify the Packet Loss Ignore Rate as 0. For 3G networks, specify 2500. When the Packet Loss Ignore Rate is set to more than 0, the number of retransmitted bytes and received SACKs might increase dramatically.
For the Packet Loss Ignore Burst setting, specify a value in the range of 6 to 12 if the Packet Loss Ignore Rate is set to a value greater than 0. A higher Packet Loss Ignore Burst value increases the chance of unnecessary retransmissions.
For the Initial Congestion Window Size setting, you can reduce round trips by increasing the initial congestion window from 0 to 10 or 16.
Enabling the Rate Pace setting can result in improved goodput. It reduces loss recovery across all congestion algorithms, except Illinois. The aggressive nature of Illinois results in multiple loss recoveries, even with rate pacing enabled.
A tcp-mobile-optimized profile is similar to a TCP profile, except that the default values of certain settings vary, in order to optimize the system for mobile traffic.
You can use the tcp-mobile-optimized profile as is, or you can create another custom profile, specifying the tcp-mobile-optimized profile as the parent profile.
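The tuning guidance above might be captured in a custom profile like the following sketch. The values are illustrative, and the attribute names are assumptions based on common tmsh tcp profile options; verify them on your system.

```
# Custom profile derived from tcp-mobile-optimized, applying the tuning
# guidance discussed above. Attribute names and values are assumptions --
# confirm with "tmsh help ltm profile tcp" on your TMOS version.
tmsh create ltm profile tcp my_mobile_tcp \
    defaults-from tcp-mobile-optimized \
    init-cwnd 16 \
    pkt-loss-ignore-rate 0 \
    pkt-loss-ignore-burst 0 \
    rate-pace enabled \
    keep-alive-interval 1800
```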
The mptcp-mobile-optimized profile is a pre-configured profile type for use in reverse proxy and enterprise environments for mobile applications that are front-ended by a BIG-IP system. This profile provides a more aggressive starting point than the tcp-mobile-optimized profile. It uses newer congestion control algorithms and a newer TCP stack, and is generally better for files that are larger than 1 MB. Specific options in the pre-configured profile are set to optimize traffic for most mobile users in this environment, and you can tune these settings to accommodate your network.
Note: Although the pre-configured settings produced the best results in the test lab, network conditions are extremely variable. For the best results, start with the default settings and then experiment to find out what works best in your network.
When enabled, the Multipath TCP (MPTCP) option allows multiple client-side flows to connect to a single server-side flow in a forward proxy scenario. MPTCP automatically and quickly adjusts to congestion in the network, moving traffic away from congested paths and toward uncongested paths.
The Congestion Control setting includes delay-based and hybrid algorithms, which can address TCP performance issues better than fully loss-based congestion control algorithms in mobile environments. Refer to the online help descriptions for assistance in selecting the setting that corresponds to your network conditions.
When enabled, the Rate Pace option mitigates bursty behavior in mobile networks and other configurations. It can be useful on high-latency or high-BDP (bandwidth-delay product) links, where packet drop is likely to be a result of buffer overflow rather than congestion.
An mptcp-mobile-optimized profile is similar to a TCP profile, except that the default values of certain settings vary, in order to optimize the system for mobile traffic.
You can use the mptcp-mobile-optimized profile as is, or you can create another custom profile, specifying the mptcp-mobile-optimized profile as the parent profile.
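A custom profile derived from this parent might look like the following sketch; the profile name is hypothetical, and the attribute value should be confirmed against your TMOS version.

```
# Custom profile based on mptcp-mobile-optimized with MPTCP enabled.
# The profile name is hypothetical; verify the mptcp attribute with
# "tmsh help ltm profile tcp".
tmsh create ltm profile tcp my_mptcp_tcp \
    defaults-from mptcp-mobile-optimized \
    mptcp enabled
```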
The s3-tcp profile is a pre-configured profile specifically designed to optimize TCP settings for S3 workloads and can be associated with a virtual server. This profile is tuned to handle the unique demands of S3 traffic, such as high-throughput data transfers and mixed operations, including small metadata requests (e.g., HEAD or LIST requests) and large object transfers (e.g., GET and PUT operations).
When the BIG-IP virtual server is load balancing S3 traffic, assigning the s3-tcp profile can significantly enhance the performance and reliability of the transport layer. The profile optimizes key TCP parameters, such as connection handling, congestion control, and buffer management, ensuring efficient and consistent performance for S3 object storage workflows.
You can use the s3-tcp profile as is or create a custom profile by specifying s3-tcp as the parent profile. This allows for additional customization to meet specific workload requirements while maintaining optimal performance for S3 traffic.
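For example, a custom profile inheriting the S3-tuned defaults could be created like this (the profile name my_s3_tcp is hypothetical):

```
# Custom profile inheriting from the pre-configured s3-tcp profile.
# The profile name is hypothetical.
tmsh create ltm profile tcp my_s3_tcp defaults-from s3-tcp
```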
Before you begin: Ensure that you have set up the required VLANs and Self IPs.
Note:
If you want to view S3 statistics on your network, you need to create a Statistics profile and set up an iRule (Step 1 and Step 2), and then create a virtual server with the s3-tcp profile, the Statistics profile, an http profile, and the iRule.
If you do not need S3 statistics, you can create the virtual server with only the s3-tcp and http profiles.
Creating a profile for the statistics
Note: This profile is required only when you want to have S3 statistics in TMSH.
Create the Statistics profile and then load the configuration.
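The source does not include the command itself; one possible form is sketched below, assuming the custom field names must match the counters incremented by the iRule's STATS::incr calls. The field-to-counter assignments are illustrative.

```
# Create a Statistics profile named S3_stats whose custom fields match the
# counters used by the iRule. A statistics profile supports up to 32 named
# fields (field1..field32); the assignments below are a sketch.
tmsh create ltm profile statistics S3_stats defaults-from stats \
    field1 s3_total_request_count \
    field2 s3_total_put_count \
    field3 s3_total_get_count \
    field4 s3_total_delete_count \
    field5 create_bucket_count \
    field6 create_object_count \
    field7 create_path_count \
    field8 delete_bucket_count \
    field9 delete_object_count \
    field10 delete_path_count \
    field11 list_bucket_count \
    field12 list_path_count \
    field13 s3_total_payload_size \
    field14 s3_large_payload_count
```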
The integration of an iRule plays a critical role in identifying and collecting metrics for S3-related traffic passing through BIG-IP. This ensures enhanced visibility and optimized handling of S3 operations.
Note: Modify the IP address 11.11.1.125 with your Virtual Server IP address.
```
when HTTP_REQUEST {
    # Capture basic HTTP request details
    set s3_host [HTTP::host]
    set s3_uri [HTTP::uri]
    set s3_method [HTTP::method]
    set s3_host_lower [string tolower $s3_host]
    set client_ip [IP::client_addr]

    # Check for S3-related indicators
    if {
        ([HTTP::header exists "Authorization"] && [HTTP::header "Authorization"] starts_with "AWS4-HMAC-SHA256") ||
        ([HTTP::header exists "x-amz-date"]) ||
        ([HTTP::header exists "x-amz-content-sha256"]) ||
        ($s3_host_lower contains "11.11.1.125") ||
        ($s3_host_lower contains ".s3.") ||
        ([regexp {^/s3/} $s3_uri]) ||
        ([regexp {^/(bucket|object)/} $s3_uri])
    } then {
        log local0. "S3 Traffic Detected: Method=$s3_method, Host=$s3_host, URI=$s3_uri"

        # Increment total S3 request count
        STATS::incr S3_stats s3_total_request_count 1

        # Increment method-specific counters
        switch $s3_method {
            "PUT" { STATS::incr S3_stats s3_total_put_count 1 }
            "GET" { STATS::incr S3_stats s3_total_get_count 1 }
            "DELETE" { STATS::incr S3_stats s3_total_delete_count 1 }
        }

        # Extract bucket and path
        set bucket_name ""
        set path_after_bucket ""
        if {[regexp {^/([^/]+)(/.*)?} $s3_uri -> bucket_name path_after_bucket]} {
            if {$path_after_bucket ne "" && [string index $path_after_bucket 0] eq "/"} {
                set path_after_bucket [string range $path_after_bucket 1 end]
            }

            # Define object extensions
            set object_extensions [list ".txt" ".png" ".img" ".csv" ".xls" ".xlsx"]
            set is_object 0
            foreach ext $object_extensions {
                if {[string match "*$ext" $path_after_bucket]} {
                    set is_object 1
                    break
                }
            }

            # Determine counters based on method and URI type
            switch $s3_method {
                "PUT" {
                    if {$path_after_bucket eq ""} {
                        STATS::incr S3_stats create_bucket_count 1
                    } elseif {$is_object} {
                        STATS::incr S3_stats create_object_count 1
                    } else {
                        STATS::incr S3_stats create_path_count 1
                    }
                }
                "DELETE" {
                    if {$path_after_bucket eq ""} {
                        STATS::incr S3_stats delete_bucket_count 1
                    } elseif {$is_object} {
                        STATS::incr S3_stats delete_object_count 1
                    } else {
                        STATS::incr S3_stats delete_path_count 1
                    }
                }
                "GET" {
                    if {$path_after_bucket eq ""} {
                        STATS::incr S3_stats list_bucket_count 1
                    } else {
                        STATS::incr S3_stats list_path_count 1
                    }
                }
            }
        }

        # Collect payload for PUT requests
        if {$s3_method eq "PUT"} {
            set content_length [HTTP::header "Content-Length"]
            if {$content_length ne "" && $content_length < 10485760} {
                HTTP::collect $content_length
            }
        }
    }
}

when HTTP_REQUEST_DATA {
    set payload_length [string length [HTTP::payload]]

    # Log the payload size
    log local0. "S3 Payload Length: $payload_length bytes"

    # Increment total payload size
    STATS::incr S3_stats s3_total_payload_size $payload_length

    # Count large payloads (>1MB)
    if {$payload_length > 1048576} {
        STATS::incr S3_stats s3_large_payload_count 1
    }
}
```
Creating a Virtual Server

Create a Virtual Server:

```
tmsh create ltm virtual s3_virtual_server destination 1.1.1.1:0 ip-protocol tcp profiles add { tcp http S3_stats } rules { s3_irule_query } source-address-translation { type automap } pool s3_pool_server
```

### About MPTCP settings

The TCP profile provides you with multipath TCP (MPTCP) functionality, which eliminates the need to reestablish connections when moving between 3G/4G and WiFi networks. For example, when using MPTCP functionality, if a WiFi connection is dropped, a 4G network can immediately provide the data while the device attempts to resume a WiFi connection, thus preventing a loss of streaming. The TCP profile provides three MPTCP settings: **Enabled**, **Passthrough**, and **Disabled**.

You can use the MPTCP **Enabled** setting when you know all of the available MPTCP flows related to a specific session. The BIG-IP system manages each flow as an individual TCP flow, while splitting and rejoining flows for the MPTCP session. Note, however, that only the optimization of individual flows is guaranteed, not overall flow optimization.

The MPTCP **Passthrough** setting enables MPTCP header options to pass through. This functionality is especially beneficial when you want to respect the MPTCP header options while recognizing that not all corresponding flows for the session will be going through the BIG-IP system. In Passthrough mode, the BIG-IP system allows MPTCP options to pass through, while managing the flow as a Fast L4 flow. The **Passthrough** setting redirects flows that come into a Layer 7 virtual server to a Fast L4 proxy server. This configuration enables flows to be added or dropped, as necessary, as the user's coverage changes, without interrupting the TCP connection.
If a Fast L4 proxy server fails to match, then the flow is blocked.

When you do not need to support MPTCP header options, you can select the MPTCP **Disabled** setting, so that the BIG-IP system ignores all MPTCP options and simply manages all flows as TCP flows.

### About the PUSH flag in the TCP header

By default, the BIG-IP system receives a TCP acknowledgement (ACK) whenever the system sends a segment with the PUSH (PSH) bit set in the Code bits field of the TCP header. This frequent receipt of ACKs can affect BIG-IP system performance. To mitigate this issue, you can configure a TCP profile setting called **PUSH Flag** to control the number of ACKs that the system receives as a result of setting the PSH bit in a TCP header. You can choose from these **PUSH Flag** values:

Default: The BIG-IP system retains its current behavior, receiving an ACK whenever the system sends a segment with the PSH bit set.

None: The BIG-IP system never sets the PSH flag when sending a TCP segment, so that the system will not receive an ACK in response.

One: The BIG-IP system sets the PSH flag once per connection, when the FIN flag is set.

Auto: The BIG-IP system sets the PSH flag in these cases:
- When the receiver's Receive Window size is close to 0.
- Once per round-trip time (RTT), that is, the length of time between when the BIG-IP system sends a signal and when it receives an acknowledgement (ACK).
- When the BIG-IP system receives the event HUDCTL_RESPONSE_DONE.

### TCP Auto Settings

Auto settings in TCP use network measurements to set the optimal size for the proxy buffer, receive window, and send buffer. Each TCP flow estimates the send/receive side bandwidth and sets the send/receive buffer size dynamically. Auto settings help to optimize performance and avoid excessive memory consumption. These features are disabled by default.

| Setting | Description |
|---------|-------------|
| Auto Proxy Buffer | TCP sets the proxy buffer high based on MAX. |
| Auto Receive Window | The TCP receiver infers the bandwidth and continuously sets the receive window size. |
| Auto Send Buffer | The TCP sender infers the bandwidth and continuously sets the send buffer size. |

## The UDP profile type

The UDP profile is a configuration tool for managing UDP network traffic. Because the BIG-IP system supports the OpenSSL implementation of datagram Transport Layer Security (TLS), you can optionally assign both a UDP and a Client SSL profile to certain types of virtual servers.

### Manage UDP traffic

One of the tasks for configuring the BIG-IP system to manage UDP protocol traffic is to create a UDP profile. A UDP profile contains properties that you can set to affect the way that the BIG-IP system manages the traffic.

1. On the Main tab of the BIG-IP Configuration utility, click **Local Traffic** > **Profiles** > **Protocols** > **UDP**. The UDP profile list screen opens.
2. Click **Create**.
3. In the **Profile Name** field, type a name, such as my_udp_profile.
4. Configure all other settings as needed.
5. Click **Finished**.

After you complete this task, the BIG-IP system configuration contains a UDP profile that you can assign to a BIG-IP virtual server.

### About rate limits for egress UDP traffic

You can create an iRule to enable rate limiting for egress UDP traffic flows, on a per-flow basis. Such an iRule includes the command `UDP::max_rate` or the performance method `UDP_METHOD_MAX_RATE`. With this command or method, you can specify an upper limit, in bytes per second, for the rate of a UDP flow. By default, UDP rate limiting is disabled.

When the packet flow rate exceeds the configured value, the BIG-IP system begins to queue the packets in a buffer with an upper threshold, in bytes, that you define. If you do not configure a maximum rate limit, then no memory is allocated for a UDP send buffer.

### About UDP packet buffering

You can configure a UDP send buffer in a UDP profile. A UDP send buffer is a means of holding unsent packets in a queue, up to a configured maximum buffer size, in bytes. Queueing begins when the ingress packet rate starts to exceed the egress rate limit specified in the iRule. Once the ingress rate falls below the egress rate limit, the system tries to retransmit the queued packets.

The maximum UDP send buffer size has a small default value of 65535 bytes. If the send buffer gets close to filling up, the BIG-IP system begins dropping some of the packets according to a queue dropping strategy. This system behavior of dropping only a few packets instead of all packets causes the UDP sender's congestion control to adapt, resulting in an improved user experience. If the number of packets in the send buffer reaches the configured maximum buffer size, all other incoming UDP packets are dropped.

**Important:** The BIG-IP system only uses the configured UDP send buffer when the UDP maximum egress rate limit is enabled. If the egress rate limit is disabled, the system refrains from allocating memory for the send buffer.

### Optimize congestion control for UDP traffic

Before doing this task to specify a send buffer threshold, confirm that you have created an iRule to impose a rate limit on UDP packet flows.

When you configure a maximum rate limit for a UDP packet flow, you can also set a threshold, in bytes, for a UDP send buffer. A UDP send buffer is a mechanism that the BIG-IP system creates to store any UDP packets that cause the egress packet flow to exceed the configured rate limit. When you set a byte threshold for a send buffer, the BIG-IP system can queue packets up to the threshold value instead of dropping them, thereby enabling the BIG-IP system to retransmit the queued packets later when the egress packet flow rate drops below the configured rate limit.

1. Using the BIG-IP system's management IP address, log in to the BIG-IP Configuration utility.
2. On the Main tab, click **Local Traffic** > **Profiles** > **Protocol** > **UDP**. The BIG-IP system displays the list of existing UDP profiles.
3. In the Name column, click the name of the profile for which you want to configure a UDP send buffer. The BIG-IP system displays the profile properties.
4. For the **Send Buffer** setting, retain or change the default value, in bytes. Note that the default value is relatively small, 655350.
5. Click **Update**.

After you perform this task, the BIG-IP system can store and retransmit packets that would normally be dropped because the maximum rate limit was exceeded. To ensure a complete configuration, make sure that you have assigned this UDP profile, as well as the iRule specifying the egress rate limit, to a virtual server.

## The SCTP profile type

The BIG-IP system includes a profile type that you can use to manage Stream Control Transmission Protocol (SCTP) traffic. SCTP is a general-purpose, industry-standard transport protocol, designed for message-oriented applications that transport signalling data. The design of SCTP includes appropriate congestion-avoidance behavior, as well as resistance to flooding and masquerade attacks.

Unlike TCP, SCTP includes the ability to support multistreaming functionality, which permits several streams within an SCTP connection. While a TCP stream refers to a sequence of bytes, an SCTP stream represents a sequence of data messages. Each data message (or chunk) contains an integer ID that identifies a stream, an application-defined Payload Protocol Identifier (PPI), a Stream sequence number, and a Transmit Serial Number (TSN) that uniquely identifies the chunk within the SCTP connection. Chunk delivery is acknowledged using TSNs sent in selective acknowledgements (ACKs) so that every chunk can be independently acknowledged. This capability demonstrates a significant benefit of streams, because it eliminates head-of-line blocking within the connection. A lost chunk of data on one stream does not prevent other streams from progressing while that lost chunk is retransmitted.

SCTP also includes the ability to support multihoming functionality, which provides path redundancy for an SCTP connection by enabling SCTP to send packets between multiple addresses owned by each endpoint. SCTP endpoints typically configure different IP addresses on different network interfaces to provide redundant physical paths between the peers. For example, a client and server might be attached to separate VLANs. The client and server can each advertise two IP addresses (one per VLAN) to the other peer. If either VLAN is available, then SCTP can transport packets between the peers.

You can use SCTP as the transport protocol for applications that require monitoring and detection of session loss. For such applications, the SCTP mechanisms to detect session failure actively monitor the connectivity of a session.

## The Any IP profile type

With the Any IP profile, you can enforce an idle timeout value on IP traffic other than TCP and UDP traffic. You can use the BIG-IP Configuration utility to create, view details for, or delete Any IP profiles.

When you configure an idle timeout value, you specify the number of seconds for which a connection is idle before the connection is eligible for deletion. The default value is 60 seconds. Possible values that you can configure are:

Specify: Specifies the number of seconds that the Any IP connection is to remain idle before it can be deleted. When you select **Specify**, you must also type a number in the box.

Immediate: Specifies that you do not want the connection to remain idle, and that it is therefore immediately eligible for deletion.

Indefinite: Specifies that Any IP connections can remain idle indefinitely.
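A hedged tmsh sketch of an Any IP profile with a custom idle timeout follows. The assumption here is that tmsh exposes the Any IP profile type under the name ipother; the profile name and timeout value are illustrative.

```
# Create an Any IP profile with a 30-second idle timeout.
# Assumes the tmsh profile type is "ipother"; the name my_anyip is
# hypothetical. Verify with "tmsh list ltm profile ipother".
tmsh create ltm profile ipother my_anyip idle-timeout 30
```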