Network Working Group                                          J. Border
Request for Comments: 3135                        Hughes Network Systems
Category: Informational                                          M. Kojo
                                                  University of Helsinki
                                                               J. Griner
                                              NASA Glenn Research Center
                                                           G. Montenegro
                                                  Sun Microsystems, Inc.
                                                               Z. Shelby
                                                      University of Oulu
                                                               June 2001


    Performance Enhancing Proxies Intended to Mitigate Link-Related
                              Degradations

Status of this Memo

   This memo provides information for the Internet community.  It does
   not specify an Internet standard of any kind.  Distribution of this
   memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2001).  All Rights Reserved.

Abstract

This document is a survey of Performance Enhancing Proxies (PEPs) often employed to improve degraded TCP performance caused by characteristics of specific link environments, for example, in satellite, wireless WAN, and wireless LAN environments. Different types of Performance Enhancing Proxies are described as well as the mechanisms used to improve performance. Emphasis is put on proxies operating with TCP. In addition, motivations for their development and use are described along with some of the consequences of using them, especially in the context of the Internet.

Table of Contents

   1. Introduction  . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2. Types of Performance Enhancing Proxies . . . . . . . . . . . .  4
   2.1 Layering . . . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.1.1 Transport Layer PEPs . . . . . . . . . . . . . . . . . . . .  5
   2.1.2 Application Layer PEPs . . . . . . . . . . . . . . . . . . .  5
   2.2 Distribution . . . . . . . . . . . . . . . . . . . . . . . . .  6
   2.3 Implementation Symmetry  . . . . . . . . . . . . . . . . . . .  6
   2.4 Split Connections  . . . . . . . . . . . . . . . . . . . . . .  7
   2.5 Transparency . . . . . . . . . . . . . . . . . . . . . . . . .  8
   3. PEP Mechanisms  . . . . . . . . . . . . . . . . . . . . . . . .  9
   3.1 TCP ACK Handling . . . . . . . . . . . . . . . . . . . . . . .  9
   3.1.1 TCP ACK Spacing  . . . . . . . . . . . . . . . . . . . . . .  9
   3.1.2 Local TCP Acknowledgements . . . . . . . . . . . . . . . . .  9
   3.1.3 Local TCP Retransmissions  . . . . . . . . . . . . . . . . .  9
   3.1.4 TCP ACK Filtering and Reconstruction . . . . . . . . . . . . 10
   3.2 Tunneling  . . . . . . . . . . . . . . . . . . . . . . . . . . 10
   3.3 Compression  . . . . . . . . . . . . . . . . . . . . . . . . . 10
   3.4 Handling Periods of Link Disconnection with TCP  . . . . . . . 11
   3.5 Priority-based Multiplexing  . . . . . . . . . . . . . . . . . 12
   3.6 Protocol Booster Mechanisms  . . . . . . . . . . . . . . . . . 13
   4. Implications of Using PEPs  . . . . . . . . . . . . . . . . . . 14
   4.1 The End-to-end Argument  . . . . . . . . . . . . . . . . . . . 14
   4.1.1 Security . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   4.1.1.1 Security Implications  . . . . . . . . . . . . . . . . . . 15
   4.1.1.2 Security Implication Mitigations . . . . . . . . . . . . . 16
   4.1.1.3 Security Research Related to PEPs  . . . . . . . . . . . . 16
   4.1.2 Fate Sharing . . . . . . . . . . . . . . . . . . . . . . . . 16
   4.1.3 End-to-end Reliability . . . . . . . . . . . . . . . . . . . 17
   4.1.4 End-to-end Failure Diagnostics . . . . . . . . . . . . . . . 19
   4.2 Asymmetric Routing . . . . . . . . . . . . . . . . . . . . . . 19
   4.3 Mobile Hosts . . . . . . . . . . . . . . . . . . . . . . . . . 20
   4.4 Scalability  . . . . . . . . . . . . . . . . . . . . . . . . . 20
   4.5 Other Implications of Using PEPs . . . . . . . . . . . . . . . 21
   5. PEP Environment Examples  . . . . . . . . . . . . . . . . . . . 21
   5.1 VSAT Environments  . . . . . . . . . . . . . . . . . . . . . . 21
   5.1.1 VSAT Network Characteristics . . . . . . . . . . . . . . . . 22
   5.1.2 VSAT Network PEP Implementations . . . . . . . . . . . . . . 23
   5.1.3 VSAT Network PEP Motivation  . . . . . . . . . . . . . . . . 24
   5.2 W-WAN Environments . . . . . . . . . . . . . . . . . . . . . . 25
   5.2.1 W-WAN Network Characteristics  . . . . . . . . . . . . . . . 25
   5.2.2 W-WAN PEP Implementations  . . . . . . . . . . . . . . . . . 26
   5.2.2.1 Mowgli System  . . . . . . . . . . . . . . . . . . . . . . 26
   5.2.2.2 Wireless Application Protocol (WAP)  . . . . . . . . . . . 28
   5.2.3 W-WAN PEP Motivation . . . . . . . . . . . . . . . . . . . . 29
   5.3 W-LAN Environments . . . . . . . . . . . . . . . . . . . . . . 30
   5.3.1 W-LAN Network Characteristics  . . . . . . . . . . . . . . . 30
   5.3.2 W-LAN PEP Implementations: Snoop . . . . . . . . . . . . . . 31
   5.3.3 W-LAN PEP Motivation . . . . . . . . . . . . . . . . . . . . 33
   6. Security Considerations . . . . . . . . . . . . . . . . . . . . 34
   7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 34
   8. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . 34
   9. References  . . . . . . . . . . . . . . . . . . . . . . . . . . 35
   10. Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . 39
   Appendix A - PEP Terminology Summary . . . . . . . . . . . . . . . 41
   Full Copyright Statement . . . . . . . . . . . . . . . . . . . . . 45

1. Introduction

The Transmission Control Protocol [RFC0793] (TCP) is used as the transport layer protocol by many Internet and intranet applications. However, in certain environments, TCP and other higher layer protocol performance is limited by the link characteristics of the environment.

This document is a survey of Performance Enhancing Proxy (PEP) performance mitigation techniques. A PEP is used to improve the performance of the Internet protocols on network paths where native performance suffers due to characteristics of a link or subnetwork on the path. This document is informational and does not make recommendations about using PEPs or not using them. Distinct standards track recommendations for the performance mitigation of TCP over links with high error rates, links with low bandwidth, and so on, have been developed or are in development by the Performance Implications of Link Characteristics WG (PILC) [PILCWEB].

Link design choices may have a significant influence on the performance and efficiency of the Internet. However, not all link characteristics, for example, high latency, can be compensated for by choices in the link layer design. And, the cost of compensating for some link characteristics may be prohibitive for some technologies. The techniques surveyed here are applied to existing link technologies. When new link technologies are designed, they should be designed so that these techniques are not required, if at all possible.

This document does not advocate the use of PEPs in any general case. On the contrary, we believe that the end-to-end principle in designing Internet protocols should be retained as the prevailing approach and PEPs should be used only in specific environments and circumstances where end-to-end mechanisms providing similar performance enhancements are not available. In any environment where one might consider employing a PEP for improved performance, an end user (or, in some cases, the responsible network administrator) should be aware of the PEP and the choice of employing PEP functionality should be under the control of the end user, especially if employing the PEP would interfere with end-to-end usage of IP layer security mechanisms or otherwise have undesirable implications in some circumstances. This would allow the user to choose end-to-end IP at all times but, of course, without the performance enhancements that employing the PEP may yield.

This survey does not make recommendations, for or against, with respect to using PEPs. Standards track recommendations have been or are being developed within the IETF for individual link
   characteristics, e.g., links with high error rates, links with low
   bandwidth, links with asymmetric bandwidth, etc., by the Performance
   Implications of Link Characteristics WG (PILC) [PILCWEB].

   The remainder of this document is organized as follows.  Section 2
   provides an overview of different kinds of PEP implementations.

   Section 3 discusses some of the mechanisms which PEPs may employ in
   order to improve performance.  Section 4 discusses some of the
   implications with respect to using PEPs, especially in the context of
   the global Internet.  Finally, Section 5 discusses some example
   environments where PEPs are used: satellite very small aperture
   terminal (VSAT) environments, mobile wireless WAN (W-WAN)
   environments and wireless LAN (W-LAN) environments.  A summary of PEP
   terminology is included in an appendix (Appendix A).

2. Types of Performance Enhancing Proxies

There are many types of Performance Enhancing Proxies. Different types of PEPs are used in different environments to overcome different link characteristics which affect protocol performance.

Note that enhancing performance is not necessarily limited in scope to throughput. Other performance related aspects, like usability of a link, may also be addressed. For example, [M-TCP] addresses the issue of keeping TCP connections alive during periods of disconnection in wireless networks.

The following sections describe some of the key characteristics which differentiate different types of PEPs.

2.1 Layering

In principle, a PEP implementation may function at any protocol layer, but typically it functions at one or two layers only. In this document we focus on PEP implementations that function at the transport layer or at the application layer, as such PEPs are most commonly used to enhance performance over links with problematic characteristics.

A PEP implementation may also operate below the network layer, that is, at the link layer, but this document pays little attention to such PEPs because link layer mechanisms can be, and typically are, implemented transparently to the network and higher layers, requiring no modifications to protocol operation above the link layer.

It should also be noted that some PEP implementations operate across several protocol layers by exploiting the protocol information and possibly modifying the protocol operation at more than one layer. For such a PEP it may be difficult to define exactly at which layer(s) it operates.

2.1.1 Transport Layer PEPs

Transport layer PEPs operate at the transport level. They may be aware of the type of application being carried by the transport layer but, at most, only use this information to influence their behavior with respect to the transport protocol; they do not modify the application protocol in any way, but let the application protocol operate end-to-end.

Most transport layer PEP implementations interact with TCP. Such an implementation is called a TCP Performance Enhancing Proxy (TCP PEP). For example, in an environment where ACKs may bunch together causing undesirable data segment bursts, a TCP PEP may be used to simply modify the ACK spacing in order to improve performance. On the other hand, in an environment with a large bandwidth*delay product, a TCP PEP may be used to alter the behavior of the TCP connection by generating local acknowledgments to TCP data segments in order to improve the connection's throughput.

The term TCP spoofing is sometimes used synonymously for TCP PEP functionality. However, the term TCP spoofing more accurately describes the characteristic of intercepting a TCP connection in the middle and terminating the connection as if the interceptor is the intended destination. While this is a characteristic of many TCP PEP implementations, it is not a characteristic of all TCP PEP implementations.

2.1.2 Application Layer PEPs

Application layer PEPs operate above the transport layer. Today, different kinds of application layer proxies are widely used in the Internet. Such proxies include Web caches and relay Mail Transfer Agents (MTAs). They typically try to improve performance, service availability, and reliability in general and in a way that is applicable in any environment, but they do not necessarily include any optimizations that are specific to certain link characteristics.

Application layer PEPs, on the other hand, can be implemented to improve application protocol as well as transport layer performance with respect to a particular application being used with a particular type of link. An application layer PEP may have the same functionality as the corresponding regular proxy for the same application (e.g., relay MTA or Web caching proxy), but extended with link-specific optimizations of the application protocol operation.

Some application protocols employ extraneous round trips, overly verbose headers and/or inefficient header encoding which may have a significant impact on performance, in particular, with long delay and slow links. This unnecessary overhead can be reduced, in general or
   for a particular type of link, by using an application layer PEP in
   an intermediate node.  Some examples of application layer PEPs which
   have been shown to improve performance on slow wireless WAN links are
   described in [LHKR96] and [CTC+97].

2.2 Distribution

A PEP implementation may be integrated, i.e., it comprises a single PEP component implemented within a single node, or distributed, i.e., it comprises two or more PEP components, typically implemented in multiple nodes.

An integrated PEP implementation represents a single point at which performance enhancement is applied. For example, a single PEP component might be implemented to provide impedance matching at the point where wired and wireless links meet.

A distributed PEP implementation is generally used to surround a particular link for which performance enhancement is desired. For example, a PEP implementation for a satellite connection may be distributed between two PEPs located at each end of the satellite link.

2.3 Implementation Symmetry

A PEP implementation may be symmetric or asymmetric. Symmetric PEPs use identical behavior in both directions, i.e., the actions taken by the PEP are independent of the interface on which a packet is received. Asymmetric PEPs operate differently in each direction. The direction can be defined in terms of the link (e.g., from a central site to a remote site) or in terms of protocol traffic (e.g., the direction of TCP data flow, often called the TCP data channel, or the direction of TCP ACK flow, often called the TCP ACK channel).

An asymmetric PEP implementation is generally used at a point where the characteristics of the links on each side of the PEP differ or with asymmetric protocol traffic. For example, an asymmetric PEP might be placed at the intersection of wired and wireless networks, or an asymmetric application layer PEP might be used for the request-reply type of HTTP traffic. A PEP implementation may also be both symmetric and asymmetric at the same time with regard to different mechanisms it employs. (PEP mechanisms are described in Section 3.)

Whether a PEP implementation is symmetric or asymmetric is independent of whether the PEP implementation is integrated or distributed. In other words, a distributed PEP implementation might operate symmetrically at each end of a link (i.e., the two PEPs function identically). On the other hand, a distributed PEP implementation might operate asymmetrically, with a different PEP implementation at each end of the link. Again, this usually is used with asymmetric links. For example, for a link with an asymmetric
   amount of bandwidth available in each direction, the PEP on the end
   of the link forwarding traffic in the direction with a large amount
   of bandwidth might focus on locally acknowledging TCP traffic in
   order to use the available bandwidth.  At the same time, the PEP on
   the end of the link forwarding traffic in the direction with very
   little bandwidth might focus on reducing the amount of TCP
   acknowledgement traffic being forwarded across the link (to keep the
   link from congesting).

2.4 Split Connections

A split connection TCP implementation terminates the TCP connection received from an end system and establishes a corresponding TCP connection to the other end system. In a distributed PEP implementation, this is typically done to allow the use of a third connection between two PEPs optimized for the link. This might be a TCP connection optimized for the link or it might be another protocol, for example, a proprietary protocol running on top of UDP. Also, the distributed implementation might use a separate connection between the proxies for each TCP connection or it might multiplex the data from multiple TCP connections across a single connection between the PEPs.

In an integrated PEP split connection TCP implementation, the PEP again terminates the connection from one end system and originates a separate connection to the other end system. [I-TCP] documents an example of a single PEP split connection implementation.

Many integrated PEPs use a split connection implementation in order to address a mismatch in TCP capabilities between two end systems. For example, the TCP window scaling option [RFC1323] can be used to extend the maximum amount of TCP data which can be "in flight" (i.e., sent and awaiting acknowledgement). This is useful for filling a link which has a high bandwidth*delay product. If one end system is capable of using scaled TCP windows but the other is not, the end system which is not capable can set up its connection with a PEP on its side of the high bandwidth*delay link. The split connection PEP then sets up a TCP connection with window scaling over the link to the other end system.

Split connection TCP implementations can effectively leverage TCP performance enhancements optimal for a particular link but which cannot necessarily be employed safely over the global Internet.

Note that using split connection PEPs does not necessarily exclude simultaneous use of IP for end-to-end connectivity. If a split connection is managed per application or per connection and is under the control of the end user, the user can decide whether a particular
   TCP connection or application makes use of the split connection PEP
   or whether it operates end-to-end.  When a PEP is employed on a last
   hop link, the end user control is relatively easy to implement.
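
As a concrete illustration of the split connection idea (a sketch only, not taken from this survey or any particular product), the following Python fragment relays each accepted TCP connection onto a second, separately established TCP connection. The listen and peer addresses, port numbers and buffer size are arbitrary assumptions; a real PEP would additionally tune the second connection (window scaling, an alternate protocol, or multiplexing) for the link it crosses.

    # Minimal split-connection TCP relay sketch (hypothetical addresses/ports).
    # Each client connection is terminated locally and a second, separate TCP
    # connection is opened toward the far end system; data is copied between
    # the two connections in both directions.
    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 9000)       # assumed: where end systems connect
    REMOTE_ADDR = ("203.0.113.10", 9000)  # assumed: PEP peer or far end system

    def pump(src, dst):
        """Copy bytes from one socket to the other until EOF or error."""
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            for sock in (src, dst):
                try:
                    sock.close()
                except OSError:
                    pass

    def handle(client):
        remote = socket.create_connection(REMOTE_ADDR)
        # Two one-way pumps make the relay bidirectional.
        threading.Thread(target=pump, args=(client, remote), daemon=True).start()
        threading.Thread(target=pump, args=(remote, client), daemon=True).start()

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen()
        while True:
            conn, _ = srv.accept()
            handle(conn)

    if __name__ == "__main__":
        main()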

   In effect, application layer proxies for TCP-based applications are
   split connection TCP implementations with end systems using PEPs as a
   service related to a particular application.  Therefore, all
   transport (TCP) layer enhancements that are available with split
   connection TCP implementations can also be employed with application
   layer PEPs in conjunction with application layer enhancements.

2.5 Transparency

Another key characteristic of a PEP is its degree of transparency. PEPs may operate totally transparently to the end systems, transport endpoints, and/or applications involved (in a connection), requiring no modifications to the end systems, transport endpoints, or applications. On the other hand, a PEP implementation may require modifications to both ends in order to be used. In between, a PEP implementation may require modifications to only one of the ends involved. Either of these kinds of PEP implementations is non-transparent, at least to the layer requiring modification.

It is sometimes useful to think of the degree of transparency of a PEP implementation at four levels: transparency with respect to the end systems (network-layer transparent PEP), transparency with respect to the transport endpoints (transport-layer transparent PEP), transparency with respect to the applications (application-layer transparent PEP) and transparency with respect to the users. For example, a user who subscribes to a satellite Internet access service may be aware that the satellite terminal is providing a performance enhancing service even though the TCP/IP stack and the applications in the user's PC are not aware of the PEP which implements it.

Note that the issue of transparency is not the same as the issue of maintaining end-to-end semantics. For example, a PEP implementation which simply uses a TCP ACK spacing mechanism maintains the end-to-end semantics of the TCP connection while a split connection TCP PEP implementation may not. Yet, both can be implemented transparently to the transport endpoints at both ends. The implications of not maintaining the end-to-end semantics, in particular the end-to-end semantics of TCP connections, are discussed in Section 4.

3. PEP Mechanisms

An obvious key characteristic of a PEP implementation is the mechanism(s) it uses to improve performance. Some examples of PEP mechanisms are described in the following subsections. A PEP implementation might implement more than one of these mechanisms.

3.1 TCP ACK Handling

Many TCP PEP implementations are based on TCP ACK manipulation. The handling of TCP acknowledgments can differ significantly between different TCP PEP implementations. The following subsections describe various TCP ACK handling mechanisms. Many implementations combine some of these mechanisms and possibly employ some additional mechanisms as well.

3.1.1 TCP ACK Spacing

In environments where ACKs tend to bunch together, ACK spacing is used to smooth out the flow of TCP acknowledgments traversing a link. This improves performance by eliminating bursts of TCP data segments that the TCP sender would send due to back-to-back arriving TCP acknowledgments [BPK97].
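
The following toy sketch (an illustration only, not part of the survey) shows the essence of ACK spacing: a burst of ACKs is released at a fixed interval rather than back-to-back. The 5 ms interval and the forward() callback are hypothetical placeholders for a real pacing policy and a real packet path.

    # Illustrative ACK spacing sketch: instead of forwarding a burst of
    # back-to-back ACKs immediately, the PEP releases them at a fixed
    # interval so the TCP sender clocks out data smoothly.  forward() is a
    # placeholder for whatever actually puts the ACK back on the wire.
    import time

    SPACING_SECONDS = 0.005  # assumed pacing interval

    def space_acks(ack_burst, forward):
        """Forward a burst of ACK segments with a fixed gap between them."""
        for ack in ack_burst:
            forward(ack)
            time.sleep(SPACING_SECONDS)

    # Usage sketch: a burst of three cumulative ACK numbers arriving together.
    if __name__ == "__main__":
        space_acks([1000, 2000, 3000],
                   forward=lambda ack: print("forwarded ACK", ack))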

3.1.2 Local TCP Acknowledgements

In some PEP implementations, TCP data segments received by the PEP are locally acknowledged by the PEP. This is very useful over network paths with a large bandwidth*delay product as it speeds up TCP slow start and allows the sending TCP to quickly open up its congestion window. Local (negative) acknowledgments are often also employed to trigger local (and faster) error recovery on links with significant error rates. (See Section 3.1.3.)

Local acknowledgments are automatically employed with split connection TCP implementations. When local acknowledgments are used, the burden falls upon the TCP PEP to recover any data which is dropped after the PEP acknowledges it.
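
A minimal sketch of the local acknowledgement idea follows (illustrative only; the callbacks and the sequence-number arithmetic are simplified assumptions). The key point is that once the PEP has acknowledged a segment locally, it must buffer that segment until the real receiver acknowledges it, since the recovery burden has shifted to the PEP.

    # Conceptual sketch of local acknowledgements.  The PEP acknowledges each
    # data segment toward the sender as soon as it is received and buffered,
    # and keeps the segment until the real receiver has acknowledged it.
    class LocalAckPep:
        def __init__(self, send_ack_to_sender, forward_to_receiver):
            self.send_ack_to_sender = send_ack_to_sender
            self.forward_to_receiver = forward_to_receiver
            self.unacked = {}  # seq -> segment, awaiting the receiver's ACK

        def on_data_from_sender(self, seq, segment):
            self.unacked[seq] = segment                   # keep until receiver ACKs
            self.send_ack_to_sender(seq + len(segment))   # local (premature) ACK
            self.forward_to_receiver(seq, segment)

        def on_ack_from_receiver(self, ack):
            # Drop buffered segments the receiver has now acknowledged.
            done = [s for s in self.unacked if s + len(self.unacked[s]) <= ack]
            for seq in done:
                del self.unacked[seq]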

3.1.3 Local TCP Retransmissions

A TCP PEP may locally retransmit data segments lost on the path between the TCP PEP and the receiving end system, thus aiming at faster recovery from lost data. In order to achieve this, the TCP PEP may use acknowledgments arriving from the end system that receives the TCP data segments, along with appropriate timeouts, to determine
   when to locally retransmit lost data.  TCP PEPs sending local
   acknowledgments to the sending end system are required to employ
   local retransmissions towards the receiving end system.

   Some PEP implementations perform local retransmissions even though
   they do not use local acknowledgments to alter TCP connection
   performance.  Basic Snoop [SNOOP] is a well known example of such a
   PEP implementation.  Snoop caches TCP data segments it receives and
   forwards and then monitors the end-to-end acknowledgments coming from
   the receiving TCP end system for duplicate acknowledgments (DUPACKs).
   When DUPACKs are received, Snoop locally retransmits the lost TCP
   data segments from its cache, suppressing the DUPACKs flowing to the
   sending TCP end system until acknowledgments for new data are
   received.  The Snoop system also implements an option to employ local
   negative acknowledgments to trigger local TCP retransmissions.  This
   can be achieved, for example, by applying TCP selective
   acknowledgments locally on the error-prone link.  (See Section 5.3
   for details.)
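
The following rough sketch models the Snoop-style behavior described above (it is not the actual [SNOOP] code, and the three-DUPACK threshold and callback names are assumptions): forwarded segments are cached, duplicate acknowledgments from the receiver trigger a local retransmission, and the duplicates are suppressed rather than passed to the sender.

    # Rough sketch of Snoop-style local retransmission (illustrative only).
    class SnoopLikeAgent:
        def __init__(self, send_over_lossy_link, forward_ack_to_sender):
            self.send_over_lossy_link = send_over_lossy_link
            self.forward_ack_to_sender = forward_ack_to_sender
            self.cache = {}          # seq -> segment forwarded over the link
            self.last_ack = None
            self.dup_count = 0

        def on_data_from_sender(self, seq, segment):
            self.cache[seq] = segment
            self.send_over_lossy_link(seq, segment)

        def on_ack_from_receiver(self, ack):
            if ack == self.last_ack:
                self.dup_count += 1
                if self.dup_count >= 3 and ack in self.cache:
                    # The lost segment starts at the duplicate ACK number:
                    # retransmit it locally from the cache.
                    self.send_over_lossy_link(ack, self.cache[ack])
                return                      # suppress duplicate ACKs
            # New cumulative ACK: clean the cache and pass the ACK upstream.
            self.last_ack, self.dup_count = ack, 0
            done = [s for s in self.cache if s + len(self.cache[s]) <= ack]
            for seq in done:
                del self.cache[seq]
            self.forward_ack_to_sender(ack)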

3.1.4 TCP ACK Filtering and Reconstruction

On paths with highly asymmetric bandwidth the TCP ACKs flowing in the low-speed direction may get congested if the asymmetry ratio is high enough. The ACK filtering and reconstruction mechanism addresses this by filtering the ACKs on one side of the link and reconstructing the deleted ACKs on the other side of the link. The mechanism and the issue of dealing with TCP ACK congestion with highly asymmetric links are discussed in detail in [RFC2760] and in [BPK97].
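
As a toy model of the filtering and reconstruction steps (the real mechanism is discussed in [RFC2760] and [BPK97]; the 1460-byte step here is just an assumed segment size), the sketch below collapses queued cumulative ACKs to the most recent one on the constrained side and regenerates the intermediate ACKs on the other side.

    # Illustrative ACK filtering and reconstruction sketch.
    def filter_acks(queued_acks):
        """Keep only the highest cumulative ACK waiting for the slow link."""
        return [max(queued_acks)] if queued_acks else []

    def reconstruct_acks(last_seen, new_ack, step=1460):
        """Regenerate the intermediate cumulative ACKs that were filtered out."""
        return list(range(last_seen + step, new_ack, step)) + [new_ack]

    # Usage sketch: three ACKs queue up behind the slow link; only 4380
    # crosses it, and the far side regenerates 1460 and 2920 before 4380.
    if __name__ == "__main__":
        crossing = filter_acks([1460, 2920, 4380])
        print(crossing)                              # [4380]
        print(reconstruct_acks(0, crossing[0]))      # [1460, 2920, 4380]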

3.2 Tunneling

A Performance Enhancing Proxy may encapsulate messages to carry the messages across a particular link or to force messages to traverse a particular path. A PEP at the other end of the encapsulation tunnel removes the tunnel wrappers before final delivery to the receiving end system.

A tunnel might be used by a distributed split connection TCP implementation as the means for carrying the connection between the distributed PEPs. A tunnel might also be used to support forcing TCP connections which use asymmetric routing to go through the end points of a distributed PEP implementation.
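
A minimal encapsulation sketch follows (illustrative only; the tunnel header layout, magic value, peer address and the use of UDP as the tunnel transport are all assumptions, not anything specified for PEPs): one PEP wraps the original packet bytes in a small header and sends them to its peer, which strips the wrapper before delivery.

    # Toy encapsulation sketch: wrap packet bytes in a small tunnel header,
    # carry them over UDP to the peer PEP, and strip the wrapper there.
    import socket
    import struct

    TUNNEL_PEER = ("198.51.100.5", 4754)   # assumed address of the peer PEP
    MAGIC = 0x5045                          # assumed tunnel identifier

    def encapsulate(packet_bytes):
        return struct.pack("!HH", MAGIC, len(packet_bytes)) + packet_bytes

    def decapsulate(tunnel_bytes):
        magic, length = struct.unpack("!HH", tunnel_bytes[:4])
        assert magic == MAGIC
        return tunnel_bytes[4:4 + length]

    if __name__ == "__main__":
        original = b"stand-in for a captured packet"
        assert decapsulate(encapsulate(original)) == original
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(encapsulate(original), TUNNEL_PEER)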

3.3 Compression

Many PEP implementations include support for one or more forms of compression. In some PEP implementations, compression may even be the only mechanism used for performance improvement. Compression reduces the number of bytes which need to be sent across a link. This is useful in general and can be very important for bandwidth
   limited links.  Benefits of using compression include improved link
   efficiency and higher effective link utilization, reduced latency and
   improved interactive response time, decreased overhead and reduced
   packet loss rate over lossy links.

   Where appropriate, link layer compression is used.  TCP and IP header
   compression are also frequently used with PEP implementations.
   [RFC1144] describes a widely deployed method for compressing TCP
   headers.  Other header compression algorithms are described in
   [RFC2507], [RFC2508] and [RFC2509].

   Payload compression is also desirable and is increasing in importance
   with today's increased emphasis on Internet security.  Network (IP)
   layer (and above) security mechanisms convert IP payloads into random
   bit streams which defeat applicable link layer compression mechanisms
   by removing or hiding redundant "information."  Therefore,
   compression of the payload needs to be applied before security
   mechanisms are applied.  [RFC2393] defines a framework where common
   compression algorithms can be applied to arbitrary IP segment
   payloads.  However, [RFC2393] compression is not always applicable.
   Many types of IP payloads (e.g., images, audio, video and "zipped"
   files being transferred) are already compressed.  And, when security
   mechanisms such as TLS [RFC2246] are applied above the network (IP)
   layer, the data is already encrypted (and possibly also compressed),
   again removing or hiding any redundancy in the payload.  The
   resulting additional transport or network layer compression will
   compact only headers, which are small, and possibly already covered
   by separate compression algorithms of their own.

   With application layer PEPs one can employ application-specific
   compression.  Typically an application-specific (or content-specific)
   compression mechanism is much more efficient than any generic
   compression mechanism.  For example, a distributed Web PEP
   implementation may implement more efficient binary encoding of HTTP
   headers, or a PEP can employ lossy compression that reduces the image
   quality of online-images on Web pages according to end user
   instructions, thus reducing the number of bytes transferred over a
   slow link and consequently the response time perceived by the user
   [LHKR96].
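
The sketch below shows the payload-compression decision in miniature (an illustration under assumed content types; it is not a recommendation of a particular algorithm): compress with a generic algorithm before the slow link, but skip payloads that are already compressed, for the reasons given above.

    # Minimal payload-compression sketch for a PEP in front of a slow link.
    import zlib

    ALREADY_COMPRESSED = {"image/jpeg", "image/png", "video/mp4",
                          "application/zip"}   # assumed, not exhaustive

    def compress_for_link(payload: bytes, content_type: str) -> bytes:
        if content_type in ALREADY_COMPRESSED:
            return payload                     # recompressing would gain little
        compressed = zlib.compress(payload, 6)
        # Only use the compressed form if it is actually smaller.
        return compressed if len(compressed) < len(payload) else payload

    if __name__ == "__main__":
        text = b"<html>" + b"the same header line repeated " * 100 + b"</html>"
        print(len(text), "->", len(compress_for_link(text, "text/html")))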

3.4 Handling Periods of Link Disconnection with TCP

Periods of link disconnection or link outages are very common with some wireless links. During these periods, a TCP sender does not receive the expected acknowledgments and, upon expiration of the retransmit timer, closes its congestion window with all of the related drawbacks. A TCP PEP may monitor the traffic coming from the TCP sender towards the TCP receiver behind the
   disconnected link.  The TCP PEP retains the last ACK, so that it can
   shut down the TCP sender's window by sending the last ACK with a
   window set to zero.  Thus, the TCP sender will go into persist mode.
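
A conceptual sketch of this "freeze" behavior is given below (illustrative only; send_ack() stands in for real TCP segment generation, which the sketch does not attempt): the PEP remembers the last ACK it saw and replays it with a zero window when the link drops, then re-opens the window when the link returns.

    # Conceptual sketch of freezing a TCP sender during a link outage.
    class DisconnectionHandler:
        def __init__(self, send_ack):
            self.send_ack = send_ack
            self.last_ack = None
            self.last_window = 0

        def on_ack_seen(self, ack_number, window):
            self.last_ack, self.last_window = ack_number, window

        def on_link_down(self):
            if self.last_ack is not None:
                # Zero window pushes the sender into persist mode.
                self.send_ack(self.last_ack, window=0)

        def on_link_up(self):
            if self.last_ack is not None:
                # Re-open the window so the sender resumes transmission.
                self.send_ack(self.last_ack, window=self.last_window)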

   To make this work in both directions with an integrated TCP PEP
   implementation, the TCP receiver behind the disconnected link must be
   aware of the current state of the connection and, in the event of a
   disconnection, it must be capable of freezing all timers.  [M-TCP]
   implements such operation.  Another possibility is that the
   disconnected link is surrounded by a distributed PEP pair.

   In split connection TCP implementations, a period of link
   disconnection can easily be hidden from the end host on the other
   side of the PEP thus precluding the TCP connection from breaking even
   if the period of link disconnection lasts a very long time; if the
   TCP PEP cannot forward data due to link disconnection, it stops
   receiving data.  Normal TCP flow control then prevents the TCP sender
   from sending more than the TCP advertised window allowed by the PEP.
   Consequently, the PEP and its counterpart behind the disconnected
   link can employ a modified TCP version which retains the state and
   all unacknowledged data segments across the period of disconnection
   and then performs local recovery as the link is reconnected.  The
   period of link disconnection may or may not be hidden from the
   application and user, depending upon what application the user is
   using the TCP connection for.

3.5 Priority-based Multiplexing

Implementing priority-based multiplexing of data over a slow and expensive link may significantly improve the performance and usability of the link for selected applications or connections.

A user behind a slow link would find the link much more usable during simultaneous data transfers if urgent data transfers (e.g., interactive connections) could have a shorter response time (better performance) than less urgent background transfers. If the interactive connections transmit enough data to keep the slow link fully utilized, it might be necessary to fully suspend the background transfers for a while to ensure timely delivery for the interactive connections.

In-flight TCP segments of an end-to-end TCP connection (with low priority) cannot be delayed for a long time; otherwise, the TCP timer at the sending end would expire, resulting in suboptimal performance. However, this kind of operation can be controlled in conjunction with a split connection TCP PEP by assigning different priorities to different connections (or applications). A split connection PEP implementation allows the PEP in an intermediate node
   to delay the data delivery of a lower-priority TCP flow for an
   unlimited period of time by simply rescheduling the order in which it
   forwards data of different flows to the destination host behind the
   slow link.  This does not have a negative impact on the delayed TCP
   flow as normal TCP flow control takes care of suspending the flow
   between the TCP sender and the PEP, when the PEP is not forwarding
   data for the flow, and resumes it once the PEP decides to continue
   forwarding data for the flow.  This can further be assisted, if the
   protocol stacks on both sides of the slow link implement priority
   based scheduling of connections.
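
The following sketch reduces the scheduling idea to a toy priority queue (the priority values, flow identifiers and callbacks are assumptions): buffered data is forwarded strictly in priority order, so lower-priority flows simply wait, and normal TCP flow control toward their senders keeps the PEP's buffers bounded.

    # Illustrative priority-based forwarding sketch for a split-connection PEP.
    import heapq
    import itertools

    class PriorityForwarder:
        def __init__(self, send_over_slow_link):
            self.send = send_over_slow_link
            self.queue = []                 # (priority, arrival order, flow, data)
            self.counter = itertools.count()

        def enqueue(self, flow_id, data, priority):
            # Lower number == more urgent (e.g., 0 interactive, 9 bulk).
            heapq.heappush(self.queue,
                           (priority, next(self.counter), flow_id, data))

        def forward_one(self):
            if self.queue:
                _, _, flow_id, data = heapq.heappop(self.queue)
                self.send(flow_id, data)

    if __name__ == "__main__":
        fwd = PriorityForwarder(lambda flow, data: print("sent", flow, data))
        fwd.enqueue("bulk-download", b"chunk-1", priority=9)
        fwd.enqueue("ssh-session", b"keystroke", priority=0)
        fwd.forward_one()   # interactive data goes first despite arriving later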

   With such a PEP implementation, along with user-controlled
   priorities, the user can assign higher priority for selected
   interactive connection(s) and have much shorter response time for the
   selected connection(s), even if there are simultaneous low priority
   bulk data transfers which in regular end-to-end operation would
   otherwise eat the available bandwidth of the slow link almost
   completely.  These low priority bulk data transfers would then
   proceed nicely during the idle periods of interactive connections,
   allowing the user to keep the slow and expensive link (e.g., wireless
   WAN) fully utilized.

   Other priority-based mechanisms may be applied on shared wireless
   links with more than two terminals.  With shared wireless mediums
   becoming a weak link in Internet QoS architectures, many may turn to
   PEPs to provide extra priority levels across a shared wireless medium
   [SHEL00].  These PEPs are distributed on all nodes of the shared
   wireless medium.  For example, in an 802.11 WLAN this PEP is
   implemented in the access point (base station) and each mobile host.
   One PEP then uses distributed queuing techniques to coordinate
   traffic classes of all nodes.  This is also sometimes called subnet
   bandwidth management.  See [BBKT97] for an example of queuing
   techniques which can be used to achieve this.  This technique can be
   implemented either above or below the IP layer.  Priority treatment
   can typically be specified either by the user or by marking the
   (IPv4) ToS or (IPv6) Traffic Class IP header field.

3.6 Protocol Booster Mechanisms

Work in [FMSBMR98] shows a range of other possible PEP mechanisms called protocol boosters. Some of these mechanisms are specific to UDP flows.

For example, a PEP may apply asymmetrical methods such as extra UDP error detection. Since the 16 bit UDP checksum is optional, it is typically not computed. However, for links with errors, the checksum could be beneficial. This checksum can be added to outgoing UDP packets by a PEP.

   Symmetrical mechanisms have also been developed.  A Forward Erasure
   Correction (FZC) mechanism can be used with real-time and multicast
   traffic.  The encoding PEP adds a parity packet over a block of
   packets.  Upon reception, the parity is removed and missing data is
   regenerated.  A jitter control mechanism can be implemented at the
   expense of extra latency.  A sending PEP can add a timestamp to
   outgoing packets.  The receiving PEP then delays packets in order to
   reproduce the correct interval.
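
As a toy model of the FZC-style parity idea (illustrative only; real erasure codes are more general than the single-parity XOR shown here, and the equal-length-packet assumption is mine), the encoding side XORs a block of packets into one parity packet, and the decoding side can rebuild any single lost packet from the survivors.

    # Toy sketch of a single-parity erasure code over a block of packets.
    from functools import reduce

    def make_parity(block):
        """XOR a block of equal-length packets into one parity packet."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), block)

    def recover_missing(received, parity):
        """Rebuild the single missing packet from the survivors and the parity."""
        return make_parity(received + [parity])

    if __name__ == "__main__":
        block = [b"AAAA", b"BBBB", b"CCCC"]
        parity = make_parity(block)
        # Suppose the second packet is lost in transit:
        print(recover_missing([block[0], block[2]], parity))   # b'BBBB'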

4. Implications of Using PEPs

The following sections describe some of the implications of using Performance Enhancing Proxies.

4.1 The End-to-end Argument

As indicated in [RFC1958], the end-to-end argument [SRC84] is one of the architectural principles of the Internet. The basic argument is that, as a first principle, certain required end-to-end functions can only be correctly performed by the end systems themselves.

Most of the potential negative implications associated with using PEPs are related to the possibility of breaking the end-to-end semantics of connections. This is one of the main reasons why PEPs are not recommended for general use. As indicated in Section 2.5, not all PEP implementations break the end-to-end semantics of connections.

Correctly designed PEPs do not attempt to replace any application level end-to-end function, but only attempt to add performance optimizations to a subpath of the end-to-end path between the application endpoints. Doing this can be consistent with the end-to-end argument. However, a user or network administrator adding a PEP to his network configuration should be aware of the potential end-to-end implications related to the mechanisms being used by the particular PEP implementation.

4.1.1 Security

In most cases, security applied above the transport layer can be used with PEPs, especially transport layer PEPs. However, today, only a limited number of applications include support for the use of transport (or higher) layer security. Network (IP) layer security (IPsec) [RFC2401], on the other hand, can generally be used by any application, transparently to the application.

4.1.1.1 Security Implications

The most detrimental negative implication of breaking the end-to-end semantics of a connection is that it disables end-to-end use of IPsec. In general, a user or network administrator must choose between using PEPs and using IPsec. If IPsec is employed end-to-end, PEPs that are implemented on intermediate nodes in the network cannot examine the transport or application headers of IP packets because encryption of IP packets via IPsec's ESP header (in either transport or tunnel mode) renders the TCP header and payload unintelligible to the PEPs. Without being able to examine the transport or application headers, a PEP may not function optimally or at all.

If a PEP implementation is non-transparent to the users and the users trust the PEP in the middle, IPsec can be used separately between each end system and PEP. However, in most cases this is an undesirable or unacceptable alternative as the end systems cannot trust PEPs in general. In addition, this is not as secure as end-to-end security. (For example, the traffic is exposed in the PEP when it is decrypted to be processed.) And, it can lead to potentially misleading security level assumptions by the end systems. If the two end systems negotiate different levels of security with the PEP, the end system which negotiated the stronger level of security may not be aware that a lower level of security is being provided for part of the connection. The PEP could be implemented to prevent this from happening by being smart enough to force the same level of security to each end system but this increases the complexity of the PEP implementation (and still is not as secure as end-to-end security).

With a transparent PEP implementation, it is difficult for the end systems to trust the PEP because they may not be aware of its existence. Even if the user is aware of the PEP, setting up acceptable security associations with the PEP while maintaining the PEP's transparent nature is problematic (if not impossible).

Note that even when a PEP implementation does not break the end-to-end semantics of a connection, the PEP implementation may not be able to function in the presence of IPsec. For example, it is difficult to do ACK spacing if the PEP cannot reliably determine which IP packets contain ACKs of interest. In any case, the authors are currently not aware of any PEP implementations, transparent or non-transparent, which provide support for end-to-end IPsec, except in a case where the PEPs are implemented on the end hosts.

4.1.1.2 Security Implication Mitigations

There are some steps which can be taken to allow the use of IPsec and PEPs to coexist. If an end user can select the use of IPsec for some traffic and not for other traffic, PEP processing can be applied to the traffic sent without IPsec. Of course, the user must then do without security for this traffic or provide security for the traffic via other means (for example, by using transport layer security). However, even when this is possible, significant complexity may need to be added to the configuration of the end system.

Another alternative is to implement IPsec between the two PEPs of a distributed PEP implementation. This at least protects the traffic between the two PEPs. (The issue of trusting the PEPs does not change.) In the case where the PEP implementation is not transparent to the user (assuming that the user trusts the PEPs), the user can configure his end system to use the PEPs as the end points of an IPsec tunnel. And, an IPsec tunnel could even potentially be used between the end system and a PEP to protect traffic on this part of the path. But, all of this adds complexity. And, it still does not eliminate the risk of the traffic being exposed in the PEP itself as the traffic is received from one IPsec tunnel, processed and then forwarded (even if forwarded through another IPsec tunnel).

4.1.1.3 Security Research Related to PEPs

There is research underway investigating the possibility of changing the implementation of IPsec to be more friendly to the use of PEPs. One approach being actively looked at is the use of multi-layer IP security. [Zhang00] describes a method which allows TCP headers to be encrypted as one layer (with the PEPs in the path of the TCP connections included in the security associations used to encrypt the TCP headers) while the TCP payload is encrypted end-to-end as a separate layer. This still involves trusting the PEP, but to a much lesser extent.

However, a drawback to this approach is that it adds a significant amount of complexity to the IP security implementation. Given the existing complexity of IPsec, this drawback is a serious impediment to the standardization of the multi-layer IP security idea and it is very unlikely that this approach will be adopted as a standard any time soon. Therefore, relying on this type of approach will likely involve the use of non-standard protocols (and the associated risk of doing so).

4.1.2 Fate Sharing

Another important aspect of the end-to-end argument is fate sharing. If a failure occurs in the network, the ability of the connection to survive the failure depends upon how much state is being maintained
   on behalf of the connection in the network and whether the state is
   self-healing.  If no connection specific state resides in the network
   or such state is self-healing as in case of regular end-to-end
   operation, then a failure in the network will break the connection
   only if there is no alternate path through the network between the
   end systems.  And, if there is no path, both end systems can detect
   this.  However, if the connection depends upon some state being
   stored in the network (e.g., in a PEP), then a failure in the network
   (e.g., the node containing a PEP crashes) causes this state to be
   lost, forcing the connection to terminate even if an alternate path
   through the network exists.

   The importance of this aspect of the end-to-end argument with respect
   to PEPs is dependent upon both the PEP implementation and upon the
   types of applications being used.  Sometimes coincidentally but more
   often by design, PEPs are used in environments where there is no
   alternate path between the end systems and, therefore, a failure of
   the intermediate node containing a PEP would result in the
   termination of the connection in any case.  And, even when this is
   not the case, the risk of losing the connection in the case of
   regular end-to-end operation may exist as the connection could break
   for some other reason, for example, a long enough link outage of a
   last-hop wireless link to the end host.  Therefore, users may choose
   to accept the risk of a PEP crashing in order to take advantage of
   the performance gains offered by the PEP implementation.  The
   important thing is that accepting the risk should be under the
   control of the user (i.e., the user should always have the option to
   choose end-to-end operation) and, if the user chooses to use the PEP,
   the user should be aware of the implications that a PEP failure has
   with respect to the applications being used.

4.1.3 End-to-end Reliability

Another aspect of the end-to-end argument is that of acknowledging the receipt of data end-to-end in order to achieve reliable end-to-end delivery of data. An application aiming at reliable end-to-end delivery must implement an end-to-end check and recovery at the application level. According to the end-to-end argument, this is the only way to correctly implement reliable end-to-end operation; otherwise, the application violates the end-to-end argument.

This also means that a correctly designed application can never fully rely on the transport layer (e.g., TCP) or any other communication subsystem to provide reliable end-to-end delivery. First, a TCP connection may break down for some reason and result in lost data that must be recovered at the application level. Second, the checksum provided by TCP may be considered inadequate, resulting in undetected (by TCP) data corruption [Pax99] and requiring an
   application level check for data corruption.  Third, a TCP
   acknowledgement only indicates that data was delivered to the TCP
   implementation on the other end system.  It does not guarantee that
   the data was delivered to the application layer on the other end
   system.  Therefore, a well designed application must use an
   application layer acknowledgement to ensure end-to-end delivery of
   application layer data.  Note that this does not diminish the value
   of a reliable transport protocol (i.e., TCP) as such a protocol
   allows efficient implementation of several essential functions (e.g.,
   congestion control) for an application.

   If a PEP implementation acknowledges application data prematurely
   (before the PEP receives an application ACK from the other endpoint),
   end-to-end reliability cannot be guaranteed.  Typically, application
   layer PEPs do not acknowledge data prematurely, i.e., the PEP does
   not send an application ACK to the sender until it receives an
   application ACK from the receiver.  And, transport layer PEP
   implementations, including TCP PEPs, generally do not interfere with
   end-to-end application layer acknowledgments as they let applications
   operate end-to-end.  However, the user and/or network administrator
   employing the PEP must understand how it operates in order to
   understand the risks related to end-to-end reliability.

   Some Internet applications do not necessarily operate end-to-end in
   their regular operation, thus abandoning any end-to-end reliability
   guarantee.  For example, Internet email delivery often operates via
   relay Mail Transfer Agents, that is, relay Simple Mail Transfer
   Protocol (SMTP) servers.  An originating MTA (SMTP server) sends the
   mail message to a relay MTA that receives the mail message, stores it
   in non-volatile storage (e.g., on disk) and then sends an application
   level acknowledgement.  The relay MTA then takes "full
   responsibility" for delivering the mail message to the destination
   SMTP server (maybe via another relay MTA); it tries to forward the
   message for a relatively long time (typically around 5 days).  This
   scheme does not give a 100% guarantee of email delivery, but
   reliability is considered "good enough".

   An application layer PEP for this kind of an application may
   acknowledge application data (e.g., mail message) without essentially
   decreasing reliability, as long as the PEP operates according to the
   same procedure as the regular proxy (e.g., relay MTA).  Again, as
   indicated above, the user and/or network administrator employing such
   a PEP needs to understand how it operates in order to understand the
   reliability risks associated with doing so.

4.1.4 End-to-end Failure Diagnostics

Another aspect of the end-to-end argument is the ability to support end-to-end failure diagnostics when problems are encountered. If a network problem occurs which breaks a connection, the end points of the connection will detect the failure via timeouts. However, the existence of a PEP in between the two end points could delay (sometimes significantly) the detection of the failure by one or both of the end points. (Of course, some PEPs are intentionally designed to hide these types of failures as described in Section 3.4.) The implications of delayed detection of a failed connection depend on the applications being used. Possibilities range from no impact at all (or just minor annoyance to the end user) all the way up to impacting mission critical business functions by delaying switchovers to alternate communications paths.

In addition, tools used to debug connection failures may be affected by the use of a PEP. For example, PING (described in [RFC792] and [RFC2151]) is often used to test for connectivity. But, because PING is based on ICMP instead of TCP (i.e., it is implemented using ICMP Echo and Reply commands at the network layer), it is possible that the configuration of the network might route PING traffic around the PEP. Thus, PING could indicate that an end-to-end path exists between two hosts when it does not actually exist for TCP traffic.

Even when the PING traffic does go through the PEP, the diagnostics indications provided by the PING traffic are altered. For example, if the PING traffic goes transparently through the PEP, PING does not provide any indication that the PEP exists and, since the PING traffic is not being subjected to the same processing as TCP traffic, it may not necessarily provide an accurate indication of the network delay being experienced by TCP traffic. On the other hand, if the PEP terminates the PING and responds to it on behalf of the end host, then the PING provides information only on the connectivity to the PEP. Traceroute (also described in [RFC2151]) is similarly affected by the presence of the PEP.

4.2 Asymmetric Routing

Deploying a PEP implementation usually requires that traffic to and from the end hosts is routed through the intermediate node(s) where PEPs reside. With some networks, this cannot be accomplished, or it might require that the intermediate node be located several hops away from the target link edge, which in turn is impractical in many cases and may result in non-optimal routing.

   Note that this restriction does not apply to all PEP implementations.
   For example, a PEP which is simply doing ACK spacing only needs to
   see one direction of the traffic flow (the direction in which the
   ACKs are flowing).  ACK spacing can be done without seeing the actual
   flow of data.

4.3 Mobile Hosts

In environments where a PEP implementation is used to serve mobile hosts, additional problems may be encountered because PEP related state information may need to be transferred to a new PEP node during a handoff. When a mobile host moves, it is subject to handovers. If the intermediate node that is home to the serving PEP changes due to a handover, any state information that the PEP maintains and that is required for continuous operation must be transferred to the new intermediate node to ensure continued operation of the connection. This requires extra work and overhead and may not be possible to perform fast enough, especially if the host moves frequently over cell boundaries of a wireless network. If the mobile host moves to another IP network, routing to and from the mobile host may need to be changed to traverse a new PEP node.

Today, mobility implications with respect to using PEPs are more significant for W-LAN networks than for W-WAN networks. Currently, a W-WAN base station typically does not provide the mobile host with the connection point to the wireline Internet. (A W-WAN base station may not even have an IP stack.) Instead, the W-WAN network takes care of mobility, with the connection point to the wireline Internet remaining unchanged while the mobile host moves. Thus, PEP state handover is not currently required in most W-WAN networks when the host moves. However, this is generally not true in W-LAN networks and, even in the case of W-WAN networks, the user and/or network administrator using a PEP needs to be cognizant of how the W-WAN base stations and the PEP work in case W-WAN PEP state handoff becomes necessary in the future.

4.4 Scalability

Because a PEP typically processes packet information above the IP layer, a PEP requires more processing power per packet than a router. Therefore, PEPs will always be (at least) one step behind routers in terms of the total throughput they can support. (Processing above the IP layer is also more difficult to implement in hardware.)

In addition, since most PEP implementations require per connection state, PEP memory requirements are generally significantly higher
   than with a router.  Therefore, a PEP implementation may have a limit
   on the number of connections which it can support whereas a router
   has no such limitation.

   Increased processing power and memory requirements introduce
   scalability issues with respect to the use of PEPs.  Placement of a
   PEP on a high speed link or a link which supports a large number of
   connections may require network topology changes beyond just
   inserting the PEP into the path of the traffic.  For example, if a
   PEP can only handle half of the traffic on a link, multiple PEPs may
   need to be used in parallel, adding complexity to the network
   configuration to divide the traffic between the PEPs.

4.5 Other Implications of Using PEPs

This document describes some significant implications with respect to using Performance Enhancing Proxies. However, the list of implications provided in this document is not necessarily exhaustive. Some examples of other potential implications related to using PEPs include the use of PEPs in multi-homing environments and the use of PEPs with respect to Quality of Service (QoS) transparency. For example, there may be potential interaction between the priority-based multiplexing mechanism described in Section 3.5 and the use of differentiated services [RFC2475].

Therefore, users and network administrators who wish to deploy a PEP should look not only at the implications described in this document but also at the overall impact (positive and negative) that the PEP will have on their applications and network infrastructure, both initially and in the future when new applications are added and/or changes in the network infrastructure are required.

