2.6.
Name of Problem
   Extra additive constant in congestion avoidance

Classification
   Congestion control / performance

Description
   RFC 1122 section 4.2.2.15 states that TCP MUST implement
   Jacobson's "congestion avoidance" algorithm [Jacobson88], which
   calls for increasing the congestion window, cwnd, by:

      MSS * MSS / cwnd

   for each ACK received for new data [RFC2001].  This has the effect
   of increasing cwnd by approximately one segment in each round trip
   time.

   Some TCP implementations add an additional fraction of a segment
   (typically MSS/8) to cwnd for each ACK received for new data
   [Stevens94, Wright95]:

      (MSS * MSS / cwnd) + MSS/8

   These implementations exhibit "Extra additive constant in
   congestion avoidance".

Significance
   May be detrimental to performance even in completely uncongested
   environments (see Implications).

   In congested environments, may also be detrimental to the
   performance of other connections.

Implications
   The extra additive term allows a TCP to more aggressively open its
   congestion window (quadratic rather than linear increase).  For
   congested networks, this can increase the loss rate experienced by
   all connections sharing a bottleneck with the aggressive TCP.

   However, even for completely uncongested networks, the extra
   additive term can lead to diminished performance, as follows.  In
   congestion avoidance, a TCP sender probes the network path to
   determine its available capacity, which often equates to the
   number of buffers available at a bottleneck link.  With linear
   congestion avoidance, the TCP only probes for sufficient capacity
   (buffer) to hold one extra packet per RTT.  Thus, when it exceeds
   the available capacity, generally only one packet will be lost
   (since on the previous RTT it already found that the path could
   sustain a window with one less packet in flight).  If the
   congestion window is sufficiently large, then the TCP will recover
   from this single loss using fast retransmission and avoid an
   expensive (in terms of performance) retransmission timeout.

   However, when the additional additive term is used, then cwnd can
   increase by more than one packet per RTT, in which case the TCP
   probes more aggressively.  If in the previous RTT it had reached
   the available capacity of the path, then the excess due to the
   extra increase will again be lost, but now this will result in
   multiple losses from the flight instead of a single loss.  TCPs
   that do not utilize SACK [RFC2018] generally will not recover from
   multiple losses without incurring a retransmission timeout
   [Fall96,Hoe96], significantly diminishing performance.

Relevant RFCs
   RFC 1122 requires use of the "congestion avoidance" algorithm.
   RFC 2001 outlines the fast retransmit/fast recovery algorithms.
   RFC 2018 discusses the SACK option.

Trace file demonstrating it
   Recorded using tcpdump running on the same FDDI LAN as host A.
   Host A is the sender and host B is the receiver.  The connection
   establishment specified an MSS of 4,312 bytes and a window scale
   factor of 4.  We omit the establishment and the first 2.5 MB of
   data transfer, as the problem is best demonstrated when the window
   has grown to a large value.  At the beginning of the trace
   excerpt, the congestion window is 31 packets.  The connection is
   never receiver-window limited, so we omit window advertisements
   from the trace for clarity.

   11:42:07.697951 B > A: . ack 2383006
   11:42:07.699388 A > B: . 2508054:2512366(4312)
   11:42:07.699962 A > B: . 2512366:2516678(4312)
   11:42:07.700012 B > A: . ack 2391630
   11:42:07.701081 A > B: . 2516678:2520990(4312)
   11:42:07.701656 A > B: . 2520990:2525302(4312)
   11:42:07.701739 B > A: . ack 2400254
   11:42:07.702685 A > B: . 2525302:2529614(4312)
   11:42:07.703257 A > B: . 2529614:2533926(4312)
   11:42:07.703295 B > A: . ack 2408878
   11:42:07.704414 A > B: . 2533926:2538238(4312)
   11:42:07.704989 A > B: . 2538238:2542550(4312)
   11:42:07.705040 B > A: . ack 2417502
   11:42:07.705935 A > B: . 2542550:2546862(4312)
   11:42:07.706506 A > B: . 2546862:2551174(4312)
   11:42:07.706544 B > A: . ack 2426126
   11:42:07.707480 A > B: . 2551174:2555486(4312)
   11:42:07.708051 A > B: . 2555486:2559798(4312)
   11:42:07.708088 B > A: . ack 2434750
   11:42:07.709030 A > B: . 2559798:2564110(4312)
   11:42:07.709604 A > B: . 2564110:2568422(4312)
   11:42:07.710175 A > B: . 2568422:2572734(4312) *
   11:42:07.710215 B > A: . ack 2443374
   11:42:07.710799 A > B: . 2572734:2577046(4312)
   11:42:07.711368 A > B: . 2577046:2581358(4312)
   11:42:07.711405 B > A: . ack 2451998
   11:42:07.712323 A > B: . 2581358:2585670(4312)
   11:42:07.712898 A > B: . 2585670:2589982(4312)
   11:42:07.712938 B > A: . ack 2460622
   11:42:07.713926 A > B: . 2589982:2594294(4312)
   11:42:07.714501 A > B: . 2594294:2598606(4312)
   11:42:07.714547 B > A: . ack 2469246
   11:42:07.715747 A > B: . 2598606:2602918(4312)
   11:42:07.716287 A > B: . 2602918:2607230(4312)
   11:42:07.716328 B > A: . ack 2477870
   11:42:07.717146 A > B: . 2607230:2611542(4312)
   11:42:07.717717 A > B: . 2611542:2615854(4312)
   11:42:07.717762 B > A: . ack 2486494
   11:42:07.718754 A > B: . 2615854:2620166(4312)
   11:42:07.719331 A > B: . 2620166:2624478(4312)
   11:42:07.719906 A > B: . 2624478:2628790(4312) **
   11:42:07.719958 B > A: . ack 2495118
   11:42:07.720500 A > B: . 2628790:2633102(4312)
   11:42:07.721080 A > B: . 2633102:2637414(4312)
   11:42:07.721739 B > A: . ack 2503742
   11:42:07.722348 A > B: . 2637414:2641726(4312)
   11:42:07.722918 A > B: . 2641726:2646038(4312)
   11:42:07.769248 B > A: . ack 2512366

   The receiver's acknowledgment policy is one ACK per two packets
   received.  Thus, for each ACK arriving at host A, two new packets
   are sent, except when cwnd increases due to congestion avoidance,
   in which case three new packets are sent.

   With an ack-every-two-packets policy, cwnd should only increase by
   one MSS every two RTTs.  However, at the point marked "*" the
   window increases after 7 ACKs have arrived, and then again at "**"
   after 6 more ACKs.  While we do not have space to show the effect,
   this trace suffered from repeated timeout retransmissions due to
   multiple packet losses during a single RTT.

Trace file demonstrating correct behavior
   Made using the same host and tracing setup as above, except that
   A's TCP has now been modified to remove the MSS/8 additive
   constant.  Tcpdump reported 77 packet drops; the excerpt below is
   fully self-consistent, so it is unlikely that any of these
   occurred during the excerpt.

   We again begin when cwnd is 31 packets (this occurs significantly
   later in the trace, because congestion avoidance is now less
   aggressive in opening the window).

   14:22:21.236757 B > A: . ack 5194679
   14:22:21.238192 A > B: . 5319727:5324039(4312)
   14:22:21.238770 A > B: . 5324039:5328351(4312)
   14:22:21.238821 B > A: . ack 5203303
   14:22:21.240158 A > B: . 5328351:5332663(4312)
   14:22:21.240738 A > B: . 5332663:5336975(4312)
   14:22:21.270422 B > A: . ack 5211927
   14:22:21.271883 A > B: . 5336975:5341287(4312)
   14:22:21.272458 A > B: . 5341287:5345599(4312)
   14:22:21.279099 B > A: . ack 5220551
   14:22:21.280539 A > B: . 5345599:5349911(4312)
   14:22:21.281118 A > B: . 5349911:5354223(4312)
   14:22:21.281183 B > A: . ack 5229175
   14:22:21.282348 A > B: . 5354223:5358535(4312)
   14:22:21.283029 A > B: . 5358535:5362847(4312)
   14:22:21.283089 B > A: . ack 5237799
   14:22:21.284213 A > B: . 5362847:5367159(4312)
   14:22:21.284779 A > B: . 5367159:5371471(4312)
   14:22:21.285976 B > A: . ack 5246423
   14:22:21.287465 A > B: . 5371471:5375783(4312)
   14:22:21.288036 A > B: . 5375783:5380095(4312)
   14:22:21.288073 B > A: . ack 5255047
   14:22:21.289155 A > B: . 5380095:5384407(4312)
   14:22:21.289725 A > B: . 5384407:5388719(4312)
   14:22:21.289762 B > A: . ack 5263671
   14:22:21.291090 A > B: . 5388719:5393031(4312)
   14:22:21.291662 A > B: . 5393031:5397343(4312)
   14:22:21.291701 B > A: . ack 5272295
   14:22:21.292870 A > B: . 5397343:5401655(4312)
   14:22:21.293441 A > B: . 5401655:5405967(4312)
   14:22:21.293481 B > A: . ack 5280919
   14:22:21.294476 A > B: . 5405967:5410279(4312)
   14:22:21.295053 A > B: . 5410279:5414591(4312)
   14:22:21.295106 B > A: . ack 5289543
   14:22:21.296306 A > B: . 5414591:5418903(4312)
   14:22:21.296878 A > B: . 5418903:5423215(4312)
   14:22:21.296917 B > A: . ack 5298167
   14:22:21.297716 A > B: . 5423215:5427527(4312)
   14:22:21.298285 A > B: . 5427527:5431839(4312)
   14:22:21.298324 B > A: . ack 5306791
   14:22:21.299413 A > B: . 5431839:5436151(4312)
   14:22:21.299986 A > B: . 5436151:5440463(4312)
   14:22:21.303696 B > A: . ack 5315415
   14:22:21.305177 A > B: . 5440463:5444775(4312)
   14:22:21.305755 A > B: . 5444775:5449087(4312)
   14:22:21.308032 B > A: . ack 5324039
   14:22:21.309525 A > B: . 5449087:5453399(4312)
   14:22:21.310101 A > B: . 5453399:5457711(4312)
   14:22:21.310144 B > A: . ack 5332663 ***
   14:22:21.311615 A > B: . 5457711:5462023(4312)
   14:22:21.312198 A > B: . 5462023:5466335(4312)
   14:22:21.341876 B > A: . ack 5341287
   14:22:21.343451 A > B: . 5466335:5470647(4312)
   14:22:21.343985 A > B: . 5470647:5474959(4312)
   14:22:21.350304 B > A: . ack 5349911
   14:22:21.351852 A > B: . 5474959:5479271(4312)
   14:22:21.352430 A > B: . 5479271:5483583(4312)
   14:22:21.352484 B > A: . ack 5358535
   14:22:21.353574 A > B: . 5483583:5487895(4312)
   14:22:21.354149 A > B: . 5487895:5492207(4312)
   14:22:21.354205 B > A: . ack 5367159
   14:22:21.355467 A > B: . 5492207:5496519(4312)
   14:22:21.356039 A > B: . 5496519:5500831(4312)
   14:22:21.357361 B > A: . ack 5375783
   14:22:21.358855 A > B: . 5500831:5505143(4312)
   14:22:21.359424 A > B: . 5505143:5509455(4312)
   14:22:21.359465 B > A: . ack 5384407
   14:22:21.360605 A > B: . 5509455:5513767(4312)
   14:22:21.361181 A > B: . 5513767:5518079(4312)
   14:22:21.361225 B > A: . ack 5393031
   14:22:21.362485 A > B: . 5518079:5522391(4312)
   14:22:21.363057 A > B: . 5522391:5526703(4312)
   14:22:21.363096 B > A: . ack 5401655
   14:22:21.364236 A > B: . 5526703:5531015(4312)
   14:22:21.364810 A > B: . 5531015:5535327(4312)
   14:22:21.364867 B > A: . ack 5410279
   14:22:21.365819 A > B: . 5535327:5539639(4312)
   14:22:21.366386 A > B: . 5539639:5543951(4312)
   14:22:21.366427 B > A: . ack 5418903
   14:22:21.367586 A > B: . 5543951:5548263(4312)
   14:22:21.368158 A > B: . 5548263:5552575(4312)
   14:22:21.368199 B > A: . ack 5427527
   14:22:21.369189 A > B: . 5552575:5556887(4312)
   14:22:21.369758 A > B: . 5556887:5561199(4312)
   14:22:21.369803 B > A: . ack 5436151
   14:22:21.370814 A > B: . 5561199:5565511(4312)
   14:22:21.371398 A > B: . 5565511:5569823(4312)
   14:22:21.375159 B > A: . ack 5444775
   14:22:21.376658 A > B: . 5569823:5574135(4312)
   14:22:21.377235 A > B: . 5574135:5578447(4312)
   14:22:21.379303 B > A: . ack 5453399
   14:22:21.380802 A > B: . 5578447:5582759(4312)
   14:22:21.381377 A > B: . 5582759:5587071(4312)
   14:22:21.381947 A > B: . 5587071:5591383(4312) ****

   "***" marks the end of the first round trip.  Note that cwnd did
   not increase (as evidenced by each ACK eliciting two new data
   packets).  Only at "****", which comes near the end of the second
   round trip, does cwnd increase by one packet.

   This trace did not suffer any timeout retransmissions.  It
   transferred the same amount of data as the first trace in about
   half as much time.  This difference is repeatable between hosts A
   and B.

References
   [Stevens94] and [Wright95] discuss this problem.  The problem of
   Reno TCP failing to recover from multiple losses except via a
   retransmission timeout is discussed in [Fall96,Hoe96].

How to detect
   If source code is available, that is generally the easiest way to
   detect this problem.  Search for each modification to the cwnd
   variable; (at least) one of these will be for congestion
   avoidance, and inspection of the related code should immediately
   identify the problem if present.

   The problem can also be detected by closely examining packet
   traces taken near the sender.  During congestion avoidance, cwnd
   will increase by an additional segment upon the receipt of
   (typically) eight acknowledgements without a loss.  This increase
   is in addition to the one segment increase per round trip time (or
   two round trip times if the receiver is using delayed ACKs).

   Furthermore, graphs of the sequence number vs. time, taken from
   packet traces, are normally linear during congestion avoidance.
   When viewing packet traces of transfers from senders exhibiting
   this problem, the graphs appear quadratic instead of linear.

   Finally, the traces will show that, with sufficiently large
   windows, nearly every loss event results in a timeout.

How to fix
   This problem may be corrected by removing the "+ MSS/8" term from
   the congestion avoidance code that increases cwnd each time an ACK
   of new data is received.
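
   The following standalone sketch (illustrative only; the function
   and variable names are not drawn from any implementation studied)
   contrasts the two increase rules.  The MSS and the starting window
   of 31 packets match the traces above, as does the
   ack-every-two-packets receiver policy:

      #include <stdio.h>

      /* Sketch only: per-ACK congestion avoidance increase, with
       * and without the erroneous "+ MSS/8" term.  All quantities
       * are in bytes. */
      #define MSS 4312UL

      static unsigned long correct_cwnd(unsigned long cwnd)
      {
          return cwnd + MSS * MSS / cwnd;            /* ~1 MSS/RTT */
      }

      static unsigned long buggy_cwnd(unsigned long cwnd)
      {
          return cwnd + MSS * MSS / cwnd + MSS / 8;  /* extra term */
      }

      int main(void)
      {
          unsigned long good = 31 * MSS, bad = 31 * MSS;
          int acks = 31 / 2;   /* one ACK per two packets: one RTT */

          while (acks-- > 0) {
              good = correct_cwnd(good);
              bad  = buggy_cwnd(bad);
          }
          printf("after one RTT of ACKs: correct cwnd = %lu bytes, "
                 "buggy cwnd = %lu bytes (%lu extra)\n",
                 good, bad, bad - good);
          return 0;
      }

   At this window size, the buggy variant gains roughly two extra
   segments per RTT, which is the accelerating growth visible at the
   points marked "*" and "**" above.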

2.7.

Name of Problem
   Initial RTO too low

Classification
   Performance

Description
   When a TCP first begins transmitting data, it lacks the RTT
   measurements necessary to have computed an adaptive retransmission
   timeout (RTO).  RFC 1122, 4.2.3.1, states that a TCP SHOULD
   initialize RTO to 3 seconds.  A TCP that uses a lower value
   exhibits "Initial RTO too low".

Significance
   In environments with large RTTs (where "large" means any value
   larger than the initial RTO), TCPs will experience very poor
   performance.

Implications
   Whenever RTO < RTT, very poor performance can result as packets
   are unnecessarily retransmitted (because the RTO will expire
   before an ACK for the packet can arrive) and the connection enters
   slow start and congestion avoidance.  Generally, the algorithms
   for computing RTO avoid this problem by adding a positive term to
   the estimated RTT.  However, when a connection first begins, it
   must use some estimate for RTO, and if it picks a value less than
   RTT, the above problems will arise.

   Furthermore, when the initial RTO < RTT, it can take a long time
   for the TCP to correct the problem by adapting the RTT estimate,
   because the use of Karn's algorithm (mandated by RFC 1122,
   4.2.3.1) will discard many of the candidate RTT measurements made
   after the first timeout, since they will be measurements of
   retransmitted segments.

Relevant RFCs
   RFC 1122 states that TCPs SHOULD initialize RTO to 3 seconds and
   MUST implement Karn's algorithm.

Trace file demonstrating it
   The following trace file was taken using tcpdump at host A, the
   data sender.  The advertised window and SYN options have been
   omitted for clarity.

   07:52:39.870301 A > B: S 2786333696:2786333696(0)
   07:52:40.548170 B > A: S 130240000:130240000(0) ack 2786333697
   07:52:40.561287 A > B: P 1:513(512) ack 1
   07:52:40.753466 A > B: . 1:513(512) ack 1
   07:52:41.133687 A > B: . 1:513(512) ack 1
   07:52:41.458529 B > A: . ack 513
   07:52:41.458686 A > B: . 513:1025(512) ack 1
   07:52:41.458797 A > B: P 1025:1537(512) ack 1
   07:52:41.541633 B > A: . ack 513
   07:52:41.703732 A > B: . 513:1025(512) ack 1
   07:52:42.044875 B > A: . ack 513
   07:52:42.173728 A > B: . 513:1025(512) ack 1
   07:52:42.330861 B > A: . ack 1537
   07:52:42.331129 A > B: . 1537:2049(512) ack 1
   07:52:42.331262 A > B: P 2049:2561(512) ack 1
   07:52:42.623673 A > B: . 1537:2049(512) ack 1
   07:52:42.683203 B > A: . ack 1537
   07:52:43.044029 B > A: . ack 1537
   07:52:43.193812 A > B: . 1537:2049(512) ack 1

   Note from the SYN/SYN-ACK exchange that the RTT is over 600 msec.
   However, from the elapsed time between the third and fourth lines
   (the first packet being sent and then retransmitted), it is
   apparent that the RTO was initialized to under 200 msec.  The next
   line shows that this value has doubled to 400 msec (correct
   exponential backoff of the RTO), but that still does not suffice
   to avoid an unnecessary retransmission.

   Finally, an ACK from B arrives for the first segment.  Later, two
   more duplicate ACKs for 513 arrive, indicating that both the
   original transmission and the two retransmissions arrived at B.
   (Indeed, a concurrent trace at B showed that no packets were lost
   during the entire connection.)  This ACK opens the congestion
   window to two packets, which are sent back-to-back, but at
   07:52:41.703732 the RTO again expires after a little over 200
   msec, leading to an unnecessary retransmission, and the pattern
   repeats.  By the end of the trace excerpt above, 1536 bytes have
   been successfully transmitted from A to B, over an interval of
   more than 2 seconds, reflecting terrible performance.

Trace file demonstrating correct behavior
   The following trace file was taken using tcpdump at host C, the
   data sender.  The advertised window and SYN options have been
   omitted for clarity.

   17:30:32.090299 C > D: S 2031744000:2031744000(0)
   17:30:32.900325 D > C: S 262737964:262737964(0) ack 2031744001
   17:30:32.900326 C > D: . ack 1
   17:30:32.910326 C > D: . 1:513(512) ack 1
   17:30:34.150355 D > C: . ack 513
   17:30:34.150356 C > D: . 513:1025(512) ack 1
   17:30:34.150357 C > D: . 1025:1537(512) ack 1
   17:30:35.170384 D > C: . ack 1025
   17:30:35.170385 C > D: . 1537:2049(512) ack 1
   17:30:35.170386 C > D: . 2049:2561(512) ack 1
   17:30:35.320385 D > C: . ack 1537
   17:30:35.320386 C > D: . 2561:3073(512) ack 1
   17:30:35.320387 C > D: . 3073:3585(512) ack 1
   17:30:35.730384 D > C: . ack 2049

   The initial SYN/SYN-ACK exchange shows that the RTT is more than
   800 msec, and for some subsequent packets it rises above 1 second,
   but C's retransmit timer never expires.

References
   This problem is documented in [Paxson97].

How to detect
   This problem is readily detected by inspecting a packet trace of
   the startup of a TCP connection made over a long-delay path.  It
   can be diagnosed from either a sender-side or receiver-side trace.
   Long-delay paths can often be found by locating remote sites on
   other continents.

How to fix
   As this problem arises from a faulty initialization, one hopes
   fixing it requires a one-line change to the TCP source code.
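
   As a sketch of what the corrected logic might look like (the names
   and the estimator shape below are hypothetical, not taken from any
   particular stack), the essential property is that the estimator
   falls back to the RFC 1122 value of 3 seconds until a first RTT
   sample exists:

      #include <stdio.h>

      /* Hypothetical sketch: before any RTT sample exists, RTO must
       * fall back to a conservative default.  RFC 1122, 4.2.3.1
       * says this SHOULD be 3 seconds; the buggy trace above
       * behaves as though it were under 200 msec. */
      #define INITIAL_RTO_MS 3000

      static int rto_ms(int srtt_ms, int rttvar_ms, int have_sample)
      {
          if (!have_sample)
              return INITIAL_RTO_MS;
          return srtt_ms + 4 * rttvar_ms;   /* Jacobson-style RTO */
      }

      int main(void)
      {
          printf("no samples yet: RTO = %d ms\n", rto_ms(0, 0, 0));
          printf("srtt=800 ms, rttvar=100 ms: RTO = %d ms\n",
                 rto_ms(800, 100, 1));
          return 0;
      }

   Either way, the substantive change is a single constant,
   consistent with the one-line fix suggested above.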

2.8.

Name of Problem
   Failure of window deflation after loss recovery

Classification
   Congestion control / performance

Description
   The fast recovery algorithm allows TCP senders to continue to
   transmit new segments during loss recovery.  First, fast
   retransmission is initiated after a TCP sender receives three
   duplicate ACKs.  At this point, a retransmission is sent and cwnd
   is halved.  The fast recovery algorithm then allows additional
   segments to be sent when sufficient additional duplicate ACKs
   arrive.  Some implementations of fast recovery compute when to
   send additional segments by artificially incrementing cwnd, first
   by three segments to account for the three duplicate ACKs that
   triggered fast retransmission, and subsequently by 1 MSS for each
   new duplicate ACK that arrives.  When cwnd allows, the sender
   transmits new data segments.

   When an ACK arrives that covers new data, cwnd is to be reduced by
   the amount by which it was artificially increased.  However, some
   TCP implementations fail to "deflate" the window, causing an
   inappropriate amount of data to be sent into the network after
   recovery.  One cause of this problem is the "header prediction"
   code, which is used to handle incoming segments that require
   little work.  In some implementations of TCP, the header
   prediction code does not check to make sure cwnd has not been
   artificially inflated, and therefore does not reduce the
   artificially increased cwnd when appropriate.

Significance
   TCP senders that exhibit this problem will transmit a burst of
   data immediately after recovery, which can degrade performance,
   as well as network stability.  Effectively, the sender does not
   reduce the size of cwnd as much as it should (to half its value
   when loss was detected), if at all.  This can harm the performance
   of the TCP connection itself, as well as that of competing TCP
   flows.

Implications
   A TCP sender exhibiting this problem does not reduce cwnd
   appropriately in times of congestion, and therefore may contribute
   to congestive collapse.

Relevant RFCs
   RFC 2001 outlines the fast retransmit/fast recovery algorithms.
   [Brakmo95] outlines this implementation problem and offers a fix.

Trace file demonstrating it
   The following trace file was taken using tcpdump at host A, the
   data sender.  The advertised window (which never changed) has been
   omitted for clarity, except for the first packet sent by each
   host.

   08:22:56.825635 A.7505 > B.7505: . 29697:30209(512) ack 1 win 4608
   08:22:57.038794 B.7505 > A.7505: . ack 27649 win 4096
   08:22:57.039279 A.7505 > B.7505: . 30209:30721(512) ack 1
   08:22:57.321876 B.7505 > A.7505: . ack 28161
   08:22:57.322356 A.7505 > B.7505: . 30721:31233(512) ack 1
   08:22:57.347128 B.7505 > A.7505: . ack 28673
   08:22:57.347572 A.7505 > B.7505: . 31233:31745(512) ack 1
   08:22:57.347782 A.7505 > B.7505: . 31745:32257(512) ack 1
   08:22:57.936393 B.7505 > A.7505: . ack 29185
   08:22:57.936864 A.7505 > B.7505: . 32257:32769(512) ack 1
   08:22:57.950802 B.7505 > A.7505: . ack 29697 win 4096
   08:22:57.951246 A.7505 > B.7505: . 32769:33281(512) ack 1
   08:22:58.169422 B.7505 > A.7505: . ack 29697
   08:22:58.638222 B.7505 > A.7505: . ack 29697
   08:22:58.643312 B.7505 > A.7505: . ack 29697
   08:22:58.643669 A.7505 > B.7505: . 29697:30209(512) ack 1
   08:22:58.936436 B.7505 > A.7505: . ack 29697
   08:22:59.002614 B.7505 > A.7505: . ack 29697
   08:22:59.003026 A.7505 > B.7505: . 33281:33793(512) ack 1
   08:22:59.682902 B.7505 > A.7505: . ack 33281
   08:22:59.683391 A.7505 > B.7505: P 33793:34305(512) ack 1
   08:22:59.683748 A.7505 > B.7505: P 34305:34817(512) ack 1 ***
   08:22:59.684043 A.7505 > B.7505: P 34817:35329(512) ack 1
   08:22:59.684266 A.7505 > B.7505: P 35329:35841(512) ack 1
   08:22:59.684567 A.7505 > B.7505: P 35841:36353(512) ack 1
   08:22:59.684810 A.7505 > B.7505: P 36353:36865(512) ack 1
   08:22:59.685094 A.7505 > B.7505: P 36865:37377(512) ack 1

   The first 12 lines of the trace show incoming ACKs clocking out a
   window of data segments.  At this point in the transfer, cwnd is 7
   segments.  The next 4 lines of the trace show 3 duplicate ACKs
   arriving from the receiver, followed by a retransmission from the
   sender.  At this point, cwnd is halved (to 3 segments) and
   artificially incremented by the three duplicate ACKs that have
   arrived, making cwnd 6 segments.  The next two lines show 2 more
   duplicate ACKs arriving, each of which increases cwnd by 1
   segment.  So, after these two duplicate ACKs arrive, cwnd is 8
   segments and the sender has permission to send 1 new segment
   (since there are 7 segments outstanding).  The next line in the
   trace shows this new segment being transmitted.

   The next packet shown in the trace is an ACK from host B that
   covers the first 7 outstanding segments (all but the new segment
   sent during recovery).  This should cause cwnd to be reduced to 3
   segments and 2 segments to be transmitted (since there is already
   1 outstanding segment in the network).  However, as shown by the
   last 7 lines of the trace, cwnd is not reduced, causing a
   line-rate burst of 7 new segments.

Trace file demonstrating correct behavior
   The trace would appear identical to the one above, except that it
   would stop after the line marked "***", because at this point host
   A would correctly reduce cwnd after recovery, allowing only 2
   segments to be transmitted rather than producing a burst of 7
   segments.

References
   This problem is documented, and its performance implications
   analyzed, in [Brakmo95].

How to detect
   Failure of window deflation after loss recovery can be found by
   examining sender-side packet traces recorded during periods of
   moderate loss (so that cwnd can grow large enough to allow for
   fast recovery when loss occurs).

How to fix
   When this bug is caused by incorrect header prediction, the fix is
   to add a predicate to the header prediction test that checks
   whether cwnd is inflated; if so, the header prediction test fails
   and the usual ACK processing occurs, which (in this case) takes
   care to deflate the window.  See [Brakmo95] for details.
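
   To make the bookkeeping concrete, the following standalone sketch
   (hypothetical state variables, not the BSD code itself) replays
   the window arithmetic from the trace above: a cwnd of 7 segments,
   three duplicate ACKs, two further duplicates, and then the
   deflation step that the buggy header-prediction path skips:

      #include <stdio.h>

      #define MSS 512U        /* segment size from the trace above */

      static unsigned cwnd = 7 * MSS;  /* window when loss detected */
      static unsigned ssthresh;
      static unsigned inflation;       /* artificial part of cwnd */

      static void third_dup_ack(void)  /* enter fast retransmit */
      {
          /* Halve the window in whole segments: 7 -> 3. */
          ssthresh  = (cwnd / (2 * MSS)) * MSS;
          inflation = 3 * MSS;  /* the 3 dup ACKs that triggered it */
          cwnd      = ssthresh + inflation;
      }

      static void dup_ack(void)        /* each further dup ACK */
      {
          cwnd      += MSS;
          inflation += MSS;
      }

      static void new_ack(void)        /* ACK covering new data */
      {
          cwnd -= inflation;    /* the "deflation" the bug skips */
          inflation = 0;
      }

      int main(void)
      {
          third_dup_ack();
          dup_ack();
          dup_ack();
          printf("during recovery: cwnd = %u segments\n", cwnd / MSS);
          new_ack();
          printf("after recovery:  cwnd = %u segments\n", cwnd / MSS);
          return 0;
      }

   The sketch prints 8 segments during recovery and 3 afterwards,
   matching the narrative above; an implementation with the bug
   effectively never performs new_ack()'s reduction.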

2.9.

Name of Problem
   Excessively short keepalive connection timeout

Classification
   Reliability

Description
   Keep-alive is a mechanism for checking whether an idle connection
   is still alive.  According to RFC 1122, keepalive should only be
   invoked in server applications that might otherwise hang
   indefinitely and consume resources unnecessarily if a client
   crashes or aborts a connection during a network failure.

   RFC 1122 also specifies that if a keep-alive mechanism is
   implemented, it MUST NOT interpret failure to respond to any
   specific probe as a dead connection.  The RFC does not specify a
   particular mechanism for timing out a connection when no response
   is received for keepalive probes.  However, if the mechanism does
   not allow ample time for recovery from network congestion or
   delay, connections may be timed out unnecessarily.

Significance
   In congested networks, can lead to unwarranted termination of
   connections.

Implications
   It is possible for the network connection between two peer
   machines to become congested or to exhibit packet loss at the time
   that a keep-alive probe is sent on a connection.  If the
   keep-alive mechanism does not allow sufficient time before
   dropping connections in the face of unacknowledged probes,
   connections may be dropped even when both peers of a connection
   are still alive.

Relevant RFCs
   RFC 1122 specifies that the keep-alive mechanism may be provided.
   It does not specify a mechanism for determining dead connections
   when keepalive probes are not acknowledged.

Trace file demonstrating it
   Made using the Orchestra tool at the peer of the machine using
   keep-alive.  After connection establishment, incoming keep-alives
   were dropped by Orchestra to simulate a dead connection.

   22:11:12.040000 A > B: 22666019:0 win 8192 datasz 4 SYN
   22:11:12.060000 B > A: 2496001:22666020 win 4096 datasz 4 SYN ACK
   22:11:12.130000 A > B: 22666020:2496002 win 8760 datasz 0 ACK

   (more than two hours elapse)

   00:23:00.680000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:01.770000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:02.870000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:03.970000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:05.070000 A > B: 22666019:2496002 win 8760 datasz 1 ACK

   The initial three packets are the SYN exchange for connection
   setup.  About two hours later, the keepalive timer fires because
   the connection has been idle.  Keepalive probes are transmitted a
   total of 5 times, with a 1 second spacing between probes, after
   which the connection is dropped.  This is problematic because a 5
   second network outage at the time of the first probe results in
   the connection being killed.

Trace file demonstrating correct behavior
   Made using the Orchestra tool at the peer of the machine using
   keep-alive.  After connection establishment, incoming keep-alives
   were dropped by Orchestra to simulate a dead connection.

   16:01:52.130000 A > B: 1804412929:0 win 4096 datasz 4 SYN
   16:01:52.360000 B > A: 16512001:1804412930 win 4096 datasz 4 SYN ACK
   16:01:52.410000 A > B: 1804412930:16512002 win 4096 datasz 0 ACK

   (two hours elapse)

   18:01:57.170000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:03:12.220000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:04:27.270000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:05:42.320000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:06:57.370000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:08:12.420000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:09:27.480000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:10:43.290000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:11:57.580000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:13:12.630000 A > B: 1804412929:16512002 win 4096 datasz 0 RST ACK

   In this trace, when the keepalive timer expires, 9 keepalive
   probes are sent at 75 second intervals.  75 seconds after the last
   probe is sent, a final RST segment is sent, indicating that the
   connection has been closed.  This implementation waits about 11
   minutes before timing out the connection, while the first
   implementation shown allows only 5 seconds.

References
   This problem is documented in [Dawson97].

How to detect
   For implementations manifesting this problem, it shows up on a
   packet trace after the keepalive timer fires, if the peer machine
   receiving the keepalive does not respond.  Usually the keepalive
   timer will fire at least two hours after keepalive is turned on,
   but it may be sooner if the timer value has been configured lower,
   or if the keepalive mechanism violates the specification (see the
   "Insufficient interval between keepalives" problem below).  In this
   example, suppressing the response of the peer to keepalive probes
   was accomplished using the Orchestra toolkit, which can be
   configured to drop packets.  It could also have been done by
   creating a connection, turning on keepalive, and disconnecting the
   network connection at the receiver machine.

How to fix
   This problem can be fixed by using a different method for timing
   out keepalives that allows a longer period of time to elapse
   before dropping the connection.  For example, the algorithm for
   timing out on dropped data could be used.  Another possibility is
   an algorithm such as the one shown in the trace above, which sends
   9 probes at 75 second intervals and then waits an additional 75
   seconds for a response before closing the connection.
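
   A sketch of the latter schedule (the constants come from the trace
   above, not from any standard):

      #include <stdio.h>

      /* Sketch: the conservative drop policy from the second trace:
       * 9 probes at 75 second intervals, then a reset 75 seconds
       * after the last unanswered probe. */
      #define KEEPALIVE_PROBES      9
      #define KEEPALIVE_INTERVAL_S 75

      int main(void)
      {
          int t = 0;

          for (int probe = 1; probe <= KEEPALIVE_PROBES; probe++) {
              printf("probe %d sent at +%d s\n", probe, t);
              t += KEEPALIVE_INTERVAL_S;
          }
          printf("RST sent at +%d s (about %d minutes of outage "
                 "tolerated)\n", t, t / 60);
          return 0;
      }

   This tolerates roughly an 11 minute outage, versus the 5 seconds
   allowed by the problematic implementation.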

2.10.

Name of Problem
   Failure to back off retransmission timeout

Classification
   Congestion control / reliability

Description
   The retransmission timeout is used to determine when a packet has
   been dropped in the network.  When this timeout has expired
   without the arrival of an ACK, the segment is retransmitted.  Each
   time a segment is retransmitted, the timeout is adjusted according
   to an exponential backoff algorithm, doubling each time.  If a TCP
   fails to receive an ACK after numerous attempts at retransmitting
   the same segment, it terminates the connection.  A TCP that fails
   to double its retransmission timeout upon repeated timeouts is
   said to exhibit "Failure to back off retransmission timeout".

Significance
   Backing off the retransmission timer is a cornerstone of network
   stability in the presence of congestion.  Consequently, this bug
   can have severe adverse effects in congested networks.  It also
   affects TCP reliability in congested networks, as discussed in the
   next section.

Implications
   It is possible for the network connection between two TCP peers to
   become congested or to exhibit packet loss at the time that a
   retransmission is sent on a connection.  If the retransmission
   mechanism does not allow sufficient time before dropping
   connections in the face of unacknowledged segments, connections
   may be dropped even when, by waiting longer, the connection could
   have continued.

Relevant RFCs
   RFC 1122 specifies mandatory exponential backoff of the
   retransmission timeout, and the termination of connections after
   some period of time (at least 100 seconds).

Trace file demonstrating it
   Made using tcpdump on an intermediate host:

   16:51:12.671727 A > B: S 510878852:510878852(0) win 16384
   16:51:12.672479 B > A: S 2392143687:2392143687(0) ack 510878853 win 16384
   16:51:12.672581 A > B: . ack 1 win 16384

   16:51:15.244171 A > B: P 1:3(2) ack 1 win 16384
   16:51:15.244933 B > A: . ack 3 win 17518 (DF)

   <receiving host disconnected>

   16:51:19.381176 A > B: P 3:5(2) ack 1 win 16384
   16:51:20.162016 A > B: P 3:5(2) ack 1 win 16384
   16:51:21.161936 A > B: P 3:5(2) ack 1 win 16384
   16:51:22.161914 A > B: P 3:5(2) ack 1 win 16384
   16:51:23.161914 A > B: P 3:5(2) ack 1 win 16384
   16:51:24.161879 A > B: P 3:5(2) ack 1 win 16384
   16:51:25.161857 A > B: P 3:5(2) ack 1 win 16384
   16:51:26.161836 A > B: P 3:5(2) ack 1 win 16384
   16:51:27.161814 A > B: P 3:5(2) ack 1 win 16384
   16:51:28.161791 A > B: P 3:5(2) ack 1 win 16384
   16:51:29.161769 A > B: P 3:5(2) ack 1 win 16384
   16:51:30.161750 A > B: P 3:5(2) ack 1 win 16384
   16:51:31.161727 A > B: P 3:5(2) ack 1 win 16384
   16:51:32.161701 A > B: R 5:5(0) ack 1 win 16384

   The initial three packets are the SYN exchange for connection
   setup, then a single data packet, to verify that data can be
   transferred.  Then the connection to the destination host was
   disconnected, and more data sent.  Retransmissions occur every
   second for 12 seconds, and then the connection is terminated with
   a RST.  This is problematic because a 12 second pause in
   connectivity could result in the termination of a connection.

Trace file demonstrating correct behavior
   Again, a tcpdump taken from a third host:

   16:59:05.398301 A > B: S 2503324757:2503324757(0) win 16384
   16:59:05.399673 B > A: S 2492674648:2492674648(0) ack 2503324758 win 16384
   16:59:05.399866 A > B: . ack 1 win 17520

   16:59:06.538107 A > B: P 1:3(2) ack 1 win 17520
   16:59:06.540977 B > A: . ack 3 win 17518 (DF)

   <receiving host disconnected>

   16:59:13.121542 A > B: P 3:5(2) ack 1 win 17520
   16:59:14.010928 A > B: P 3:5(2) ack 1 win 17520
   16:59:16.010979 A > B: P 3:5(2) ack 1 win 17520
   16:59:20.011229 A > B: P 3:5(2) ack 1 win 17520
   16:59:28.011896 A > B: P 3:5(2) ack 1 win 17520
   16:59:44.013200 A > B: P 3:5(2) ack 1 win 17520
   17:00:16.015766 A > B: P 3:5(2) ack 1 win 17520
   17:01:20.021308 A > B: P 3:5(2) ack 1 win 17520
   17:02:24.027752 A > B: P 3:5(2) ack 1 win 17520
   17:03:28.034569 A > B: P 3:5(2) ack 1 win 17520
   17:04:32.041567 A > B: P 3:5(2) ack 1 win 17520
   17:05:36.048264 A > B: P 3:5(2) ack 1 win 17520
   17:06:40.054900 A > B: P 3:5(2) ack 1 win 17520
   17:07:44.061306 A > B: R 5:5(0) ack 1 win 17520

   In this trace, when the retransmission timer expires, 12
   retransmissions are sent at exponentially increasing intervals,
   until the interval reaches 64 seconds, at which time it stops
   growing.  64 seconds after the last retransmission, a final RST
   segment is sent indicating that the connection has been closed.
   This implementation waits about 9 minutes before timing out the
   connection, while the first implementation shown allows only 12
   seconds.

References
   None known.

How to detect
   A simple transfer can be easily interrupted by disconnecting the
   receiving host from the network.  tcpdump or another appropriate
   tool should show the retransmissions being sent.  Several trials
   in a low-RTT environment may be required to demonstrate the bug.

How to fix
   For one of the implementations studied, this problem seemed to be
   the result of an error introduced with the addition of the
   Brakmo-Peterson RTO algorithm [Brakmo95], which can return a value
   of zero where the older Jacobson algorithm always returns a
   positive value.  Brakmo and Peterson specified an additional step
   of min(rtt + 2, RTO) to avoid problems with this.  Unfortunately,
   in the implementation this step was omitted when calculating the
   exponential backoff for the RTO.  This results in an RTO of 0
   seconds being multiplied by the backoff, yielding again zero, and
   then being subjected to a later MAX operation that increases it to
   1 second, regardless of the backoff factor.  A similar TCP persist
   failure has the same cause.
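
   The failure mode is easy to reproduce in miniature.  The sketch
   below (a hypothetical illustration in whole seconds, not the code
   of any implementation studied) shows why a zero base RTO defeats
   the backoff entirely:

      #include <stdio.h>

      /* Sketch: exponential backoff of the RTO with a 64 second
       * cap, as in the correct trace above.  If the base RTO can
       * be zero, every backed-off value is zero too, and the late
       * max-with-1-second step described above erases the backoff. */
      static unsigned backed_off_rto(unsigned base, unsigned nretrans)
      {
          unsigned rto = base;

          while (nretrans-- > 0 && rto < 64)
              rto *= 2;
          if (rto < 1)
              rto = 1;   /* the later MAX operation in the buggy stack */
          return rto;
      }

      int main(void)
      {
          for (unsigned n = 0; n <= 12; n++)
              printf("retransmission %2u: base 1 -> %2u s, "
                     "base 0 -> %u s\n",
                     n, backed_off_rto(1, n), backed_off_rto(0, n));
          return 0;
      }

   With a base of 1 second the intervals double up to the 64 second
   cap, as in the correct trace; with a base of 0 every retry fires
   after 1 second, matching the problem trace.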

2.11.

Name of Problem
   Insufficient interval between keepalives

Classification
   Reliability

Description
   Keep-alive is a mechanism for checking whether an idle connection
   is still alive.  According to RFC 1122, keep-alive may be included
   in an implementation.  If it is included, the interval between
   keep-alive packets MUST be configurable, and MUST default to no
   less than two hours.

Significance
   In congested networks, can lead to unwarranted termination of
   connections.

Implications
   According to RFC 1122, keep-alive is not required of
   implementations because it could: (1) cause perfectly good
   connections to break during transient Internet failures; (2)
   consume unnecessary bandwidth ("if no one is using the connection,
   who cares if it is still good?"); and (3) cost money for an
   Internet path that charges for packets.  Regarding this last
   point, we note in addition that the presence of dial-on-demand
   links in the route can greatly magnify the cost penalty of excess
   keepalives, potentially forcing a full-time connection on a link
   that would otherwise only be connected a few minutes a day.

   If keepalive is provided, the RFC states that the default
   inter-keepalive interval MUST be no less than two hours.  If it is
   not, the probability of connections breaking increases, the
   bandwidth consumed by keepalives increases, and cost increases
   over paths that charge per packet.

Relevant RFCs
   RFC 1122 specifies that the keep-alive mechanism may be provided.
   It also specifies the two hour minimum for the default interval
   between keepalive probes.

Trace file demonstrating it
   Made using the Orchestra tool at the peer of the machine using
   keep-alive.  Machine A was configured to use default settings for
   the keepalive timer.

   11:36:32.910000 A > B: 3288354305:0 win 28672 datasz 4 SYN
   11:36:32.930000 B > A: 896001:3288354306 win 4096 datasz 4 SYN ACK
   11:36:32.950000 A > B: 3288354306:896002 win 28672 datasz 0 ACK
   11:50:01.190000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   11:50:01.210000 B > A: 896002:3288354306 win 4096 datasz 0 ACK
   12:03:29.410000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:03:29.430000 B > A: 896002:3288354306 win 4096 datasz 0 ACK
   12:16:57.630000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:16:57.650000 B > A: 896002:3288354306 win 4096 datasz 0 ACK
   12:30:25.850000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:30:25.870000 B > A: 896002:3288354306 win 4096 datasz 0 ACK
   12:43:54.070000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:43:54.090000 B > A: 896002:3288354306 win 4096 datasz 0 ACK

   The initial three packets are the SYN exchange for connection
   setup.  About 13 minutes later, the keepalive timer fires because
   the connection is idle.  The keepalive is acknowledged, and the
   timer fires again in about 13 more minutes.  This behavior
   continues indefinitely until the connection is closed, and is a
   violation of the specification.

Trace file demonstrating correct behavior
   Made using the Orchestra tool at the peer of the machine using
   keep-alive.  Machine A was configured to use default settings for
   the keepalive timer.

   17:37:20.500000 A > B: 34155521:0 win 4096 datasz 4 SYN
   17:37:20.520000 B > A: 6272001:34155522 win 4096 datasz 4 SYN ACK
   17:37:20.540000 A > B: 34155522:6272002 win 4096 datasz 0 ACK
   19:37:25.430000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   19:37:25.450000 B > A: 6272002:34155522 win 4096 datasz 0 ACK
   21:37:30.560000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   21:37:30.570000 B > A: 6272002:34155522 win 4096 datasz 0 ACK
   23:37:35.580000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   23:37:35.600000 B > A: 6272002:34155522 win 4096 datasz 0 ACK
   01:37:40.620000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   01:37:40.640000 B > A: 6272002:34155522 win 4096 datasz 0 ACK
   03:37:45.590000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   03:37:45.610000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

   The initial three packets are the SYN exchange for connection
   setup.  Just over two hours later, the keepalive timer fires
   because the connection is idle.  The keepalive is acknowledged,
   and the timer fires again just over two hours later.  This
   behavior continues indefinitely until the connection is closed.

References
   This problem is documented in [Dawson97].

How to detect
   For implementations manifesting this problem, it shows up on a
   packet trace.  If the connection is left idle, the keepalive
   probes will arrive closer together than the two hour minimum.

2.12.

Name of Problem
   Window probe deadlock

Classification
   Reliability

Description
   When an application reads a single byte from a full window, the
   window should not be updated, in order to avoid Silly Window
   Syndrome (SWS; see [RFC813]).  If the remote peer uses a single
   byte of data to probe the window, that byte can be accepted into
   the buffer.  In some implementations, at this point a negative
   argument to a signed comparison causes all further new data to be
   considered outside the window; consequently, it is discarded
   (after sending an ACK to resynchronize).  These discards include
   the ACKs for the data packets sent by the local TCP, so the TCP
   will consider the data unacknowledged.

   Consequently, the application may be unable to complete sending
   new data to the remote peer, because it has exhausted the transmit
   buffer available to its local TCP, and buffer space is never
   freed, because the incoming ACKs that would free it are being
   discarded.  If the application does not read any more data, which
   may happen due to its failure to complete such sends, then
   deadlock results.

Significance
   It's relatively rare for applications to use TCP in a manner that
   can exercise this problem.  Most applications only transmit bulk
   data if they know the other end is prepared to receive the data.
   However, if a client fails to consume data, putting the server in
   persist mode, and then consumes a small amount of data, it can
   mistakenly compute a negative window.  At this point the client
   will discard all further packets from the server, including ACKs
   of the client's own data, since they are not inside the
   (impossibly-sized) window.

   If the client subsequently consumes enough data to send a window
   update to the server, the situation will be rectified.  That is,
   this situation can only happen if the client consumes 1 < N < MSS
   bytes, so as not to cause a window update, and then starts its own
   transmission towards the server of more than a window's worth of
   data.

Implications
   TCP connections will hang and eventually time out.

Relevant RFCs
   RFC 793 describes zero window probing.  RFC 813 describes Silly
   Window Syndrome.

Trace file demonstrating it
   Trace made from a version of tcpdump modified to print out the
   sequence number attached to an ACK even if it's dataless.  An
   unmodified tcpdump would not print seq:seq(0); however, for this
   bug, the sequence number in the ACK is important for unambiguously
   determining how the TCP is behaving.

   [ Normal connection startup and data transmission from B to A.
     Options, including an MSS of 16344 in both directions, omitted
     for clarity. ]

   16:07:32.327616 A > B: S 65360807:65360807(0) win 8192
   16:07:32.327304 B > A: S 65488807:65488807(0) ack 65360808 win 57344
   16:07:32.327425 A > B: . 1:1(0) ack 1 win 57344
   16:07:32.345732 B > A: P 1:2049(2048) ack 1 win 57344
   16:07:32.347013 B > A: P 2049:16385(14336) ack 1 win 57344
   16:07:32.347550 B > A: P 16385:30721(14336) ack 1 win 57344
   16:07:32.348683 B > A: P 30721:45057(14336) ack 1 win 57344
   16:07:32.467286 A > B: . 1:1(0) ack 45057 win 12288
   16:07:32.467854 B > A: P 45057:57345(12288) ack 1 win 57344

   [ B fills up A's offered window ]

   16:07:32.667276 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window with a single byte ]

   16:07:37.467438 B > A: . 57345:57346(1) ack 1 win 57344

   [ A resynchronizes without accepting the byte ]

   16:07:37.467678 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window again ]

   16:07:45.467438 B > A: . 57345:57346(1) ack 1 win 57344

   [ A resynchronizes and accepts the byte (per the ack field) ]

   16:07:45.667250 A > B: . 1:1(0) ack 57346 win 0

   [ The application on A has started generating data.  The first
     packet A sends is small due to a memory allocation bug. ]

   16:07:51.358459 A > B: P 1:2049(2048) ack 57346 win 0

   [ B acks A's first packet ]

   16:07:51.467239 B > A: . 57346:57346(0) ack 2049 win 57344

   [ This looks as though A accepted B's ACK and is sending another
     packet in response to it.  In fact, A is trying to resynchronize
     with B, and happens to have data to send and can send it because
     the first small packet didn't use up cwnd. ]

   16:07:51.467698 A > B: . 2049:14337(12288) ack 57346 win 0

   [ B acks all of the data that A has sent ]

   16:07:51.667283 B > A: . 57346:57346(0) ack 14337 win 57344

   [ A tries to resynchronize.  Notice that, judging by the packets
     seen on the network, A and B *are* in fact synchronized; A only
     thinks that they aren't. ]

   16:07:51.667477 A > B: . 14337:14337(0) ack 57346 win 0

   [ A's retransmit timer fires, and B acks all of the data.  A once
     again tries to resynchronize. ]

   16:07:52.467682 A > B: . 1:14337(14336) ack 57346 win 0
   16:07:52.468166 B > A: . 57346:57346(0) ack 14337 win 57344
   16:07:52.468248 A > B: . 14337:14337(0) ack 57346 win 0

   [ A's retransmit timer fires again, and B acks all of the data.
     A once again tries to resynchronize. ]

   16:07:55.467684 A > B: . 1:14337(14336) ack 57346 win 0
16:07:55.468172 B > A: . 57346:57346(0) ack 14337 win 57344 16:07:55.468254 A > B: . 14337:14337(0) ack 57346 win 0 Trace file demonstrating correct behavior Made between the same two hosts after applying the bug fix mentioned below (and using the same modified tcpdump). [ Connection starts up with data transmission from B to A. Note that due to a separate bug (the fact that A and B are communicating over a loopback driver), B erroneously skips slow start. ] 17:38:09.510854 A > B: S 3110066585:3110066585(0) win 16384 17:38:09.510926 B > A: S 3110174850:3110174850(0) ack 3110066586 win 57344 17:38:09.510953 A > B: . 1:1(0) ack 1 win 57344 17:38:09.512956 B > A: P 1:2049(2048) ack 1 win 57344 17:38:09.513222 B > A: P 2049:16385(14336) ack 1 win 57344 17:38:09.513428 B > A: P 16385:30721(14336) ack 1 win 57344 17:38:09.513638 B > A: P 30721:45057(14336) ack 1 win 57344 17:38:09.519531 A > B: . 1:1(0) ack 45057 win 12288 17:38:09.519638 B > A: P 45057:57345(12288) ack 1 win 57344 [ B fills up A's offered window ] 17:38:09.719526 A > B: . 1:1(0) ack 57345 win 0 [ B probes A's window with a single byte. A resynchronizes without accepting the byte ] 17:38:14.499661 B > A: . 57345:57346(1) ack 1 win 57344 17:38:14.499724 A > B: . 1:1(0) ack 57345 win 0 [ B probes A's window again. A resynchronizes and accepts the byte, as indicated by the ack field ] 17:38:19.499764 B > A: . 57345:57346(1) ack 1 win 57344 17:38:19.519731 A > B: . 1:1(0) ack 57346 win 0 [ B probes A's window with a single byte. A resynchronizes without accepting the byte ] 17:38:24.499865 B > A: . 57346:57347(1) ack 1 win 57344 17:38:24.499934 A > B: . 1:1(0) ack 57346 win 0 [ The application on A has started generating data. B acks A's data and A accepts the ACKs and the data transfer continues ] 17:38:28.530265 A > B: P 1:2049(2048) ack 57346 win 0 17:38:28.719914 B > A: . 57346:57346(0) ack 2049 win 57344 17:38:28.720023 A > B: . 2049:16385(14336) ack 57346 win 0 17:38:28.720089 A > B: . 16385:30721(14336) ack 57346 win 0
   17:38:28.720370 B > A: . 57346:57346(0) ack 30721 win 57344
   17:38:28.720462 A > B: . 30721:45057(14336) ack 57346 win 0
   17:38:28.720526 A > B: P 45057:59393(14336) ack 57346 win 0
   17:38:28.720824 A > B: P 59393:73729(14336) ack 57346 win 0
   17:38:28.721124 B > A: . 57346:57346(0) ack 73729 win 47104
   17:38:28.721198 A > B: P 73729:88065(14336) ack 57346 win 0
   17:38:28.721379 A > B: P 88065:102401(14336) ack 57346 win 0
   17:38:28.721557 A > B: P 102401:116737(14336) ack 57346 win 0
   17:38:28.721863 B > A: . 57346:57346(0) ack 116737 win 36864

References
   None known.

How to detect
   Initiate a connection from a client to a server.  Have the server
   continuously send data until its buffers have been full for long
   enough to exhaust the window.  Next, have the client read 1 byte
   and then delay for long enough that the server TCP sends a window
   probe.  Now have the client start sending data.  At this point, if
   it ignores the server's ACKs, then the client's TCP suffers from
   the problem.

How to fix
   In one implementation known to exhibit the problem (derived from
   4.3-Reno), the problem was introduced when the macro MAX() was
   replaced by the function call max() for computing the amount of
   space in the receive window:

      tp->rcv_wnd = max(win, (int)(tp->rcv_adv - tp->rcv_nxt));

   When data has been received into a window beyond what has been
   advertised to the other side, rcv_nxt > rcv_adv, making this
   difference negative.  It's clear from the (int) cast that this is
   intended, but when the negative value is passed to the unsigned
   max() it is converted to a very large unsigned value, making it
   "larger".  The fix is to change max() to the signed imax():

      tp->rcv_wnd = imax(win, (int)(tp->rcv_adv - tp->rcv_nxt));

   4.3-Tahoe and before did not have this bug, since it used the
   macro MAX() for this calculation.
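
   The pitfall can be reproduced in isolation.  In the sketch below
   (illustrative values and locally defined umax()/imax(), not the
   BSD code), the negative difference is implicitly converted to a
   huge unsigned value when passed to an unsigned maximum function,
   while a signed maximum behaves as the (int) cast intends:

      #include <stdio.h>

      /* Demonstration of the conversion bug behind the broken
       * receive-window computation.  Values are illustrative. */
      static unsigned long umax(unsigned long a, unsigned long b)
      {
          return a > b ? a : b;
      }

      static long imax(long a, long b)
      {
          return a > b ? a : b;
      }

      int main(void)
      {
          unsigned long rcv_adv = 57345; /* highest seq advertised */
          unsigned long rcv_nxt = 57346; /* probe byte accepted
                                          * beyond the advertised
                                          * window */
          long win = 0;                  /* no buffer space free */
          long diff = (long)(rcv_adv - rcv_nxt);  /* intended: -1 */

          /* Passing diff to umax() converts -1 to the largest
           * unsigned value, which umax() then returns; the window
           * computation thus goes wrong exactly as described
           * above.  imax() returns the intended 0. */
          printf("umax: %lu\n", umax(win, diff));
          printf("imax: %ld\n", imax(win, diff));
          return 0;
      }

   On a two's-complement machine this prints an enormous value for
   umax() and 0 for imax(), mirroring the broken and fixed window
   computations respectively.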