3.6. Datagram Congestion Control Protocol (DCCP)
The Datagram Congestion Control Protocol (DCCP) [RFC4340] is an IETF Standards Track bidirectional transport protocol that provides unicast connections of congestion-controlled messages without providing reliability. The DCCP Problem Statement [RFC4336] describes the goals that DCCP sought to address. It is suitable for applications that transfer fairly large amounts of data and that can benefit from control over the trade-off between timeliness and reliability [RFC4336]. DCCP offers low overhead, and many characteristics common to UDP, but can avoid "re-inventing the wheel" each time a new multimedia application emerges. Specifically, it includes core transport functions (feature negotiation, path state management, RTT calculation, PMTUD, etc.): DCCP applications select how they send packets and, where suitable, choose common algorithms to manage their functions. Examples of applications that can benefit from such transport services include interactive applications, streaming media, or on-line games [RFC4336].
3.6.1. Protocol Description
DCCP is a connection-oriented datagram protocol that provides a three-way handshake to allow a client and server to set up a connection and provides mechanisms for orderly completion and immediate teardown of a connection.

A DCCP protocol instance can be extended [RFC4340] and tuned using additional features. Some features are sender-side only, requiring no negotiation with the receiver; some are receiver-side only; and some are explicitly negotiated during connection setup.

DCCP uses a Connect packet to initiate a session and permits each endpoint to choose the features it wishes to support. Simultaneous open [RFC5596], as in TCP, can enable interoperability in the presence of middleboxes. The Connect packet includes a Service Code [RFC5595] that identifies the application or protocol using DCCP, providing middleboxes with information about the intended use of a connection.

The DCCP service is unicast-only. It provides multiplexing to multiple sockets at each endpoint using port numbers. An active DCCP session is identified by its four-tuple of local and remote IP addresses and local and remote port numbers.

The protocol segments data into messages that are typically sized to fit in IP packets but may be fragmented if they are smaller than the maximum packet size. A DCCP interface allows applications to request fragmentation for packets larger than PMTU, but not larger than the maximum packet size allowed by the current congestion control mechanism (Congestion Control Maximum Packet Size (CCMPS)) [RFC4340].

Each message is identified by a sequence number. The sequence number is used to identify segments in acknowledgments, to detect unacknowledged segments, to measure RTT, etc. The protocol may support unordered delivery of data and does not itself provide retransmission. DCCP supports reduced checksum coverage, a partial payload protection mechanism similar to UDP-Lite. There is also a Data Checksum option, which, when enabled, contains a strong Cyclic Redundancy Check (CRC), to enable endpoints to detect application data corruption.

Receiver flow control is supported, which limits the amount of unacknowledged data that can be outstanding at a given time. A DCCP Reset packet may be used to force a DCCP endpoint to close a session [RFC4340], aborting the connection.
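The following sketch illustrates how a sender might request the reduced checksum coverage described above on a platform (such as Linux) that exposes DCCP through the sockets API. The SOCK_DCCP, SOL_DCCP, and DCCP_SOCKOPT_SEND_CSCOV identifiers are platform assumptions rather than part of an IETF-specified interface, and DCCP support varies between operating systems.

   /* Sketch: request partial checksum coverage on a DCCP socket,
    * assuming the Linux DCCP sockets extension (<linux/dccp.h>). */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <linux/dccp.h>
   #include <stdio.h>

   #ifndef SOL_DCCP
   #define SOL_DCCP 269        /* Linux socket-level value for DCCP */
   #endif

   int dccp_open_partial_coverage(void)
   {
       int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
       if (fd < 0) {
           perror("socket");
           return -1;
       }

       /* Checksum coverage as in RFC 4340, Section 9.2:
        * 0 = full coverage, 1 = header only,
        * 2..15 = header plus (value - 1) * 4 bytes of payload. */
       int cscov = 4;                  /* header + 12 payload bytes */
       if (setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SEND_CSCOV,
                      &cscov, sizeof(cscov)) < 0)
           perror("setsockopt(DCCP_SOCKOPT_SEND_CSCOV)");

       return fd;
   }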
DCCP supports negotiation of the congestion control profile between endpoints, to provide plug-and-play congestion control mechanisms. Examples of specified profiles include "TCP-like" [RFC4341], "TCP-friendly" [RFC4342], and "TCP-friendly for small packets" [RFC5622]. Additional mechanisms are recorded in an IANA registry (see <http://www.iana.org/assignments/dccp-parameters>).

A lightweight UDP-based encapsulation (DCCP-UDP) has been defined [RFC6773] that permits DCCP to be used over paths where DCCP is not natively supported. Support for DCCP in NAPT/NATs is defined in [RFC4340] and [RFC5595].

Upper-layer protocols specified on top of DCCP include DTLS [RFC5238], RTP [RFC5762], and Interactive Connectivity Establishment / Session Description Protocol (ICE/SDP) [RFC6773].

3.6.2. Interface Description
Functions expected for a DCCP API include: Open, Close, and Management of the progress of a DCCP connection. The Open function provides feature negotiation, selection of an appropriate Congestion Control Identifier (CCID) for congestion control, and other parameters associated with the DCCP connection. A function allows an application to send DCCP datagrams, including setting the required checksum coverage and any required options. (DCCP permits sending datagrams with a zero-length payload.) A function allows reception of data, including indicating if the data was used or dropped. Functions can also make the status of a connection visible to an application, including detection of the maximum packet size and the ability to perform flow control by detecting a slow receiver at the sender.

There is no API currently specified in the RFC Series.
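To make the "Open" step concrete, the sketch below opens a DCCP client socket, sets a Service Code and a preferred CCID, and connects, again assuming the Linux DCCP sockets extension. The constant names, the Service Code value, and the availability of CCID 3 are assumptions made for illustration and are not an IETF-specified API.

   /* Sketch: open a DCCP client connection with a Service Code and a
    * requested CCID, assuming the Linux DCCP sockets extension. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>
   #include <linux/dccp.h>
   #include <stdint.h>
   #include <stdio.h>

   #ifndef SOL_DCCP
   #define SOL_DCCP 269
   #endif

   int dccp_client_open(const char *ip, uint16_t port)
   {
       int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
       if (fd < 0) { perror("socket"); return -1; }

       /* Service Code [RFC5595]: identifies the using application;
        * must be set before connect(). The value is a placeholder. */
       uint32_t service_code = htonl(0x666F6F62);  /* hypothetical */
       setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE,
                  &service_code, sizeof(service_code));

       /* Ask for CCID 3 ("TCP-friendly" [RFC4342]); the outcome still
        * depends on DCCP feature negotiation with the peer. */
       uint8_t ccid = 3;
       setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_CCID, &ccid, sizeof(ccid));

       struct sockaddr_in dst = { .sin_family = AF_INET,
                                  .sin_port   = htons(port) };
       inet_pton(AF_INET, ip, &dst.sin_addr);
       if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
           perror("connect");       /* starts the DCCP handshake */
           return -1;
       }
       return fd;                   /* send()/recv() carry datagrams */
   }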
3.6.3. Transport Features

The transport features provided by DCCP are:

o  unicast transport,

o  connection-oriented communication with feature negotiation and application-to-port mapping,

o  signaling of application class for middlebox support (implemented using Service Codes),

o  port multiplexing,

o  unidirectional or bidirectional communication,
o  message-oriented delivery,

o  unreliable delivery with drop notification,

o  unordered delivery,

o  flow control (implemented using the slow receiver function), and

o  partial and full payload error detection (with optional strong integrity check).

3.7. Transport Layer Security (TLS) and Datagram TLS (DTLS) as a Pseudotransport
Transport Layer Security (TLS) [RFC5246] and Datagram TLS (DTLS) [RFC6347] are IETF protocols that provide several security-related features to applications. TLS is designed to run on top of a reliable streaming transport protocol (usually TCP), while DTLS is designed to run on top of a best-effort datagram protocol (UDP or DCCP [RFC5238]). At the time of writing, the current version of TLS is 1.2, defined in [RFC5246]; work on TLS version 1.3 [TLS-1.3] is nearing completion. DTLS provides nearly identical functionality to applications; it is defined in [RFC6347], and its current version is also 1.2. The TLS protocol evolved from the Secure Sockets Layer (SSL) [RFC6101] protocols developed in the mid-1990s to support protection of HTTP traffic.

While older versions of TLS and DTLS are still in use, they provide weaker security guarantees. [RFC7457] outlines important attacks on TLS and DTLS. [RFC7525] is a Best Current Practices (BCP) document that describes secure configurations for TLS and DTLS to counter these attacks. The recommendations are applicable for the vast majority of use cases.

3.7.1. Protocol Description
Both TLS and DTLS provide the same security features and can thus be discussed together. The features they provide are:

o  Confidentiality

o  Data integrity

o  Peer authentication (optional)

o  Perfect forward secrecy (optional)
The authentication of the peer entity can be omitted; a common web use case is where the server is authenticated and the client is not. TLS also provides a completely anonymous operation mode in which neither peer's identity is authenticated. It is important to note that TLS itself does not specify how a peering entity's identity should be interpreted. For example, in the common use case of authentication by means of an X.509 certificate, it is the application's decision whether the certificate of the peering entity is acceptable for authorization decisions.

Perfect forward secrecy, if enabled and supported by the selected algorithms, ensures that traffic encrypted and captured during a session at time t0 cannot be later decrypted at time t1 (t1 > t0), even if the long-term secrets of the communicating peers are later compromised.

As DTLS is generally used over an unreliable datagram transport such as UDP, applications will need to tolerate lost, reordered, or duplicated datagrams. Like TLS, DTLS conveys application data in a sequence of independent records. However, because records are mapped to unreliable datagrams, there are several features unique to DTLS that are not applicable to TLS:

o  Record replay detection (optional).

o  Record size negotiation (estimates of PMTU and record size expansion factor).

o  Conveyance of IP don't fragment (DF) bit settings by application.

o  An anti-DoS stateless cookie mechanism (optional).

Generally, DTLS follows the TLS design as closely as possible. To operate over datagrams, DTLS includes a sequence number and limited forms of retransmission and fragmentation for its internal operations. The sequence number may be used for detecting replayed information, according to the windowing procedure described in Section 4.1.2.6 of [RFC6347]. DTLS forbids the use of stream ciphers, which are essentially incompatible when operating on independent encrypted records.

3.7.2. Interface Description
TLS is commonly invoked using an API provided by packages such as OpenSSL, wolfSSL, or GnuTLS. Using such APIs entails the manipulation of several important abstractions, which fall into the following categories: long-term keys and algorithms, session state, and communications/connections.
Considerable care is required in the use of TLS APIs to ensure creation of a secure application. The programmer should have at least a basic understanding of encryption and digital signature algorithms and their strengths, public key infrastructure (including X.509 certificates and certificate revocation), and the Sockets API. See [RFC7525] and [RFC7457], as mentioned above.

As an example, in the case of OpenSSL, the primary abstractions are the library itself, method (protocol), session, context, cipher, and connection. After initializing the library and setting the method, a cipher suite is chosen and used to configure a context object. Session objects may then be minted according to the parameters present in a context object and associated with individual connections. Depending on how precisely the programmer wishes to select different algorithmic or protocol options, various levels of details may be required.
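A minimal sketch of those abstractions using the OpenSSL API (assuming version 1.1.0 or later) is shown below: a context is created for the TLS client method, a minimum protocol version and peer verification are configured in line with [RFC7525], and a connection object is attached to an already-connected TCP socket. Error handling and context lifetime management are abbreviated; this is an illustration, not a complete secure client.

   /* Sketch: minimal TLS client setup with OpenSSL 1.1+.
    * 'tcp_fd' is an already-connected TCP socket; 'hostname' is the
    * name the server certificate is expected to match. */
   #include <openssl/ssl.h>
   #include <openssl/err.h>
   #include <stdio.h>

   SSL *tls_client_start(int tcp_fd, const char *hostname)
   {
       SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
       if (ctx == NULL)
           return NULL;

       /* Require TLS 1.2+ and verify the server certificate against
        * the system trust store ([RFC7525] guidance). */
       SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
       SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
       SSL_CTX_set_default_verify_paths(ctx);

       SSL *ssl = SSL_new(ctx);                 /* per-connection state */
       SSL_set_fd(ssl, tcp_fd);
       SSL_set_tlsext_host_name(ssl, hostname); /* SNI */
       SSL_set1_host(ssl, hostname);            /* hostname check */

       if (SSL_connect(ssl) != 1) {             /* TLS handshake */
           ERR_print_errors_fp(stderr);
           SSL_free(ssl);
           SSL_CTX_free(ctx);
           return NULL;
       }
       /* A real application retains 'ctx' and frees it when done;
        * application data then flows via SSL_write()/SSL_read(). */
       return ssl;
   }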
3.7.3. Transport Features

Both TLS and DTLS employ a layered architecture. The lower layer is commonly called the "record protocol". It is responsible for:

o  message fragmentation,

o  authentication and integrity via message authentication codes (MACs),

o  data encryption, and

o  scheduling transmission using the underlying transport protocol.

DTLS augments the TLS record protocol with:

o  ordering and replay protection, implemented using sequence numbers.

Several protocols are layered on top of the record protocol. These include the handshake, alert, and change cipher spec protocols. There is also the data protocol, used to carry application traffic. The handshake protocol is used to establish cryptographic and compression parameters when a connection is first set up. In DTLS, this protocol also has a basic fragmentation and retransmission capability and a cookie-like mechanism to resist DoS attacks. (TLS compression is not recommended at present). The alert protocol is used to inform the peer of various conditions, most of which are terminal for the connection. The change cipher spec protocol is used to synchronize changes in cryptographic parameters for each peer.
The data protocol, when used with an appropriate cipher, provides:

o  authentication of one end or both ends of a connection,

o  confidentiality, and

o  cryptographic integrity protection.

Both TLS and DTLS are unicast-only.

3.8. Real-Time Transport Protocol (RTP)
RTP provides an end-to-end network transport service, suitable for applications transmitting real-time data, such as audio, video or data, over multicast or unicast transport services, including TCP, UDP, UDP-Lite, DCCP, TLS, and DTLS.

3.8.1. Protocol Description
The RTP standard [RFC3550] defines a pair of protocols: RTP and the RTP Control Protocol (RTCP). The transport does not provide connection setup, instead relying on out-of-band techniques or associated control protocols to setup, negotiate parameters, or tear down a session. An RTP sender encapsulates audio/video data into RTP packets to transport media streams. The RFC Series specifies RTP payload formats that allow packets to carry a wide range of media and specifies a wide range of multiplexing, error control, and other support mechanisms. If a frame of media data is large, it will be fragmented into several RTP packets. Likewise, several small frames may be bundled into a single RTP packet. An RTP receiver collects RTP packets from the network, validates them for correctness, and sends them to the media decoder input queue. Missing packet detection is performed by the channel decoder. The playout buffer is ordered by time stamp and is used to reorder packets. Damaged frames may be repaired before the media payloads are decompressed to display or store the data. Some uses of RTP are able to exploit the partial payload protection features offered by DCCP and UDP-Lite. RTCP is a control protocol that works alongside an RTP flow. Both the RTP sender and receiver will send RTCP report packets. This is used to periodically send control information and report performance.
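To make the packet structure concrete, the sketch below serializes the 12-byte fixed RTP header defined in [RFC3550] (version, marker, payload type, sequence number, timestamp, and SSRC) ahead of the media payload. The field values are supplied by the caller; in practice the payload type comes from the negotiated payload format rather than being chosen freely.

   /* Sketch: write the fixed RTP header ([RFC3550], Section 5.1) into
    * 'buf', which must have room for 12 bytes plus the payload. */
   #include <stdint.h>
   #include <string.h>
   #include <arpa/inet.h>   /* htons(), htonl() */

   size_t rtp_write_header(uint8_t *buf, uint8_t payload_type,
                           int marker, uint16_t seq,
                           uint32_t timestamp, uint32_t ssrc)
   {
       buf[0] = 2u << 6;    /* V=2, no padding/extension, CC=0        */
       buf[1] = (uint8_t)((marker ? 0x80 : 0) | (payload_type & 0x7F));

       uint16_t nseq  = htons(seq);        /* loss/reorder detection  */
       uint32_t nts   = htonl(timestamp);  /* drives playout ordering */
       uint32_t nssrc = htonl(ssrc);       /* identifies the source   */
       memcpy(buf + 2, &nseq, sizeof(nseq));
       memcpy(buf + 4, &nts, sizeof(nts));
       memcpy(buf + 8, &nssrc, sizeof(nssrc));
       return 12;                          /* payload follows here    */
   }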
Based on received RTCP feedback, an RTP sender can adjust the transmission, e.g., perform rate adaptation at the application layer in the case of congestion. An RTCP receiver report (RTCP RR) is returned to the sender periodically to report key parameters (e.g., the fraction of packets lost in the last reporting interval, the cumulative number of packets lost, the highest sequence number received, and the inter-arrival jitter). The RTCP RR packets also contain timing information that allows the sender to estimate the network round-trip time (RTT) to the receivers. The interval between reports sent from each receiver tends to be on the order of a few seconds on average, although this varies with the session rate, and sub-second reporting intervals are possible for high rate sessions. The interval is randomized to avoid synchronization of reports from multiple receivers.
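The RTT estimate mentioned above is derived from two fields carried in each RR report block: the middle 32 bits of the NTP timestamp of the last sender report received (LSR) and the delay since receiving that report (DLSR), both in units of 1/65536 seconds. A small sketch of the calculation from Section 6.4.1 of [RFC3550] follows; the caller supplies the arrival time of the RR in the same 32-bit format.

   /* Sketch: sender-side RTT estimate from an RTCP RR, following
    * [RFC3550], Section 6.4.1. All inputs are 32-bit values in units
    * of 1/65536 seconds (the middle 32 bits of an NTP timestamp). */
   #include <stdint.h>

   double rtcp_rtt_seconds(uint32_t rr_arrival, uint32_t lsr,
                           uint32_t dlsr)
   {
       uint32_t rtt = rr_arrival - lsr - dlsr;  /* modulo-2^32 safe   */
       return rtt / 65536.0;                    /* convert to seconds */
   }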
3.8.2. Interface Description

There is no standard API defined for RTP or RTCP. Implementations are typically tightly integrated with a particular application and closely follow the principles of application-level framing and integrated layer processing [ClarkArch] in media processing [RFC2736], error recovery and concealment, rate adaptation, and security [RFC7202]. Accordingly, RTP implementations tend to be targeted at particular application domains (e.g., voice-over-IP, IPTV, or video conferencing), with a feature set optimized for that domain, rather than being general purpose implementations of the protocol.

3.8.3. Transport Features
The transport features provided by RTP are:

o  unicast, multicast, or IPv4 broadcast (provided by lower-layer protocol),

o  port multiplexing (provided by lower-layer protocol),

o  unidirectional or bidirectional communication (provided by lower-layer protocol),

o  message-oriented delivery with support for media types and other extensions,

o  reliable delivery when using erasure coding or unreliable delivery with drop notification (if supported by lower-layer protocol),
o  connection setup with feature negotiation (using associated protocols) and application-to-port mapping (provided by lower-layer protocol),

o  segmentation, and

o  performance metric reporting (using associated protocols).

3.9. Hypertext Transport Protocol (HTTP) over TCP as a Pseudotransport
The Hypertext Transfer Protocol (HTTP) is an application-level protocol widely used on the Internet. It provides object-oriented delivery of discrete data or files. Version 1.1 of the protocol is specified in [RFC7230] [RFC7231] [RFC7232] [RFC7233] [RFC7234] [RFC7235], and version 2 is specified in [RFC7540]. HTTP is usually transported over TCP using ports 80 and 443, although it can be used with other transports. When used over TCP, it inherits TCP's properties.

Application-layer protocols may use HTTP as a substrate with an existing method and data formats, or specify new methods and data formats. There are various reasons for this practice listed in [RFC3205]; these include being a well-known and well-understood protocol, reusability of existing servers and client libraries, easy use of existing security mechanisms such as HTTP digest authentication [RFC7235] and TLS [RFC5246], and the ability of HTTP to traverse firewalls, which allows it to work over many types of infrastructure and in cases where an application server often needs to support HTTP anyway.

Depending on application need, the use of HTTP as a substrate protocol may add complexity and overhead in comparison to a special-purpose protocol (e.g., HTTP headers, suitability of the HTTP security model, etc.). [RFC3205] addresses this issue, provides some guidelines, and identifies concerns about the use of HTTP standard ports 80 and 443, the use of the HTTP URL scheme, and interaction with existing firewalls, proxies, and NATs.

Representational State Transfer (REST) [REST] is another example of how applications can use HTTP as a transport protocol. REST is an architecture style that may be used to build applications using HTTP as a communication protocol.

3.9.1. Protocol Description
The Hypertext Transfer Protocol (HTTP) is a request/response protocol. A client sends a request containing a request method, URI, and protocol version, followed by a message whose design is inspired by
MIME (see [RFC7231] for the differences between an HTTP object and a MIME message), containing information about the client and request modifiers. The message can also contain a message body carrying application data. The server responds with a status or error code followed by a message containing information about the server and information about the data. This may include a message body. It is possible to specify a data format for the message body using MIME media types [RFC2045]. The protocol has additional features; some relevant to pseudotransport are described below.

Content negotiation, specified in [RFC7231], is a mechanism provided by HTTP to allow selection of a representation for a requested resource. The client and server negotiate acceptable data formats, character sets, and data encoding (e.g., data can be transferred compressed using gzip). HTTP can accommodate exchange of messages as well as data streaming (using chunked transfer encoding [RFC7230]). It is also possible to request a part of a resource using an object range request [RFC7233]. The protocol provides powerful cache control signaling defined in [RFC7234].

The persistent connections of HTTP 1.1 and HTTP 2.0 allow multiple request/response transactions (streams) during the lifetime of a single HTTP connection. This reduces overhead during connection establishment and mitigates transport-layer slow-start that would have otherwise been incurred for each transaction. HTTP 2.0 connections can multiplex many request/response pairs in parallel on a single transport connection. Both are important to reduce latency for HTTP's primary use case.

HTTP can be combined with security mechanisms, such as TLS (denoted by HTTPS). This adds protocol properties provided by such a mechanism (e.g., authentication and encryption). The TLS Application-Layer Protocol Negotiation (ALPN) extension [RFC7301] can be used to negotiate the HTTP version within the TLS handshake, eliminating the latency incurred by additional round-trip exchanges. Arbitrary cookie strings, included as part of the request headers, are often used as bearer tokens in HTTP.
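As a concrete illustration of the request/response exchange and the negotiation features described above, the sketch below writes a hand-built HTTP/1.1 GET request carrying content negotiation (Accept, Accept-Encoding) and a byte-range request over an already-connected TCP socket, then reads the beginning of the response. The host name and resource path are placeholders, and a real client would normally use an HTTP library and parse the response properly.

   /* Sketch: issue an HTTP/1.1 GET with negotiation and range headers
    * over an existing TCP connection 'fd'; host and path are
    * illustrative placeholders. */
   #include <stdio.h>
   #include <string.h>
   #include <unistd.h>

   int http_get_range(int fd)
   {
       const char *req =
           "GET /example/object HTTP/1.1\r\n"
           "Host: example.com\r\n"
           "Accept: application/json\r\n"     /* content negotiation */
           "Accept-Encoding: gzip\r\n"        /* compressed transfer */
           "Range: bytes=0-1023\r\n"          /* partial object      */
           "Connection: keep-alive\r\n"       /* persistent conn.    */
           "\r\n";
       if (write(fd, req, strlen(req)) < 0)
           return -1;

       /* Read (part of) the response; a real client parses the status
        * line, headers, and chunked or length-delimited body. */
       char buf[4096];
       ssize_t n = read(fd, buf, sizeof(buf) - 1);
       if (n > 0) {
           buf[n] = '\0';
           fputs(buf, stdout);
       }
       return 0;
   }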
3.9.2. Interface Description

There are many HTTP libraries available exposing different APIs. The APIs provide a way to specify a request by providing a URI, a method, request modifiers, and, optionally, a request body. For the response, callbacks can be registered that will be invoked when the response is received. If HTTPS is used, the API also exposes registration of callbacks that are invoked when a server requests client authentication and when certificate verification is needed.
The World Wide Web Consortium (W3C) has standardized the XMLHttpRequest API [XHR]. This API can be used for sending HTTP/HTTPS requests and receiving server responses. Besides the XML data format, the request and response data format can also be JSON, HTML, and plain text. JavaScript and XMLHttpRequest are ubiquitous programming models for websites and more general applications where native code is less attractive.

3.9.3. Transport Features
The transport features provided by HTTP, when used as a pseudotransport, are:

o  unicast transport (provided by the lower-layer protocol, usually TCP),

o  unidirectional or bidirectional communication,

o  transfer of objects in multiple streams with object content type negotiation, supporting partial transmission of object ranges,

o  ordered delivery (provided by the lower-layer protocol, usually TCP),

o  fully reliable delivery (provided by the lower-layer protocol, usually TCP),

o  flow control (provided by the lower-layer protocol, usually TCP), and

o  congestion control (provided by the lower-layer protocol, usually TCP).

HTTPS (HTTP over TLS) additionally provides the following features (as provided by TLS):

o  authentication (of one or both ends of a connection),

o  confidentiality, and

o  integrity protection.
3.10. File Delivery over Unidirectional Transport / Asynchronous Layered Coding (FLUTE/ALC) for Reliable Multicast
FLUTE/ALC is an IETF Standards Track protocol specified in [RFC6726] and [RFC5775]. It provides object-oriented delivery of discrete data or files. Asynchronous Layered Coding (ALC) provides an underlying reliable transport service, and FLUTE provides a file-oriented specialization of the ALC service (e.g., to carry associated metadata). [RFC6726] and [RFC5775] are non-backward-compatible updates of [RFC3926] and [RFC3450], which are Experimental protocols; these Experimental protocols are currently largely deployed in the 3GPP Multimedia Broadcast / Multicast Service (MBMS) (see [MBMS], Section 7) and similar contexts (e.g., the Japanese ISDB-Tmm standard).

The FLUTE/ALC protocol has been designed to support massively scalable reliable bulk data dissemination to receiver groups of arbitrary size using IP Multicast over any type of delivery network, including unidirectional networks (e.g., broadcast wireless channels). However, the FLUTE/ALC protocol also supports point-to-point unicast transmissions.

FLUTE/ALC bulk data dissemination has been designed for discrete file or memory-based "objects". Although FLUTE/ALC is not well adapted to byte and message streaming, there is an exception: FLUTE/ALC is used to carry 3GPP Dynamic Adaptive Streaming over HTTP (DASH) when scalability is a requirement (see [MBMS], Section 5.6).

FLUTE/ALC's reliability, delivery mode, congestion control, and flow/rate control mechanisms can be separately controlled to meet different application needs. Section 4.1 of [RFC8085] describes multicast congestion control requirements for UDP.

3.10.1. Protocol Description
The FLUTE/ALC protocol works on top of UDP (though it could work on top of any datagram delivery transport protocol), without requiring any connectivity from receivers to the sender. Purely unidirectional networks are therefore supported by FLUTE/ALC. This guarantees scalability to an unlimited number of receivers in a session, since the sender behaves exactly the same regardless of the number of receivers. FLUTE/ALC supports the transfer of bulk objects such as file or in-memory content, using either a push or an on-demand mode. In push mode, content is sent once to the receivers, while in on-demand mode, content is sent continuously during periods of time that can greatly exceed the average time required to download the session objects (see [RFC5651], Section 4.2).
This enables receivers to join a session asynchronously, at their own discretion, receive the content, and leave the session. In this case, data content is typically sent continuously, in loops (also known as "carousels"). FLUTE/ALC also supports the transfer of an object stream, with loose real-time constraints. This is particularly useful to carry 3GPP DASH when scalability is a requirement and unicast transmissions over HTTP cannot be used ([MBMS], Section 5.6). In this case, packets are sent in sequence using push mode. FLUTE/ALC is not well adapted to byte and message streaming, and other solutions could be preferred (e.g., FECFRAME [RFC6363] with real-time flows).

The FLUTE file delivery instantiation of ALC provides a metadata delivery service. Each object of the FLUTE/ALC session is described in a dedicated entry of a File Delivery Table (FDT), using an XML format (see [RFC6726], Section 3.2). This metadata can include, but is not restricted to, a URI attribute (to identify and locate the object), a media type attribute, a size attribute, an encoding attribute, or a message digest attribute. Since the set of objects sent within a session can be dynamic, with new objects being added and old ones removed, several instances of the FDT can be sent, and a mechanism is provided to identify a new FDT instance. Error detection and verification of the protocol control information relies on the underlying transport (e.g., UDP checksum).

To provide robustness against packet loss and improve the efficiency of the on-demand mode, FLUTE/ALC relies on packet erasure coding (Application-Layer Forward Error Correction (AL-FEC)). AL-FEC encoding is proactive (since there is no feedback and therefore no (N)ACK-based retransmission), and ALC packets containing repair data are sent along with ALC packets containing source data. Several FEC Schemes have been standardized; FLUTE/ALC does not mandate the use of any particular one. Several strategies concerning the transmission order of ALC source and repair packets are possible, in particular, in on-demand mode where it can deeply impact the service provided (e.g., to favor the recovery of objects in sequence or, at the other extreme, to favor the recovery of all objects in parallel), and FLUTE/ALC does not mandate nor recommend the use of any particular one.

A FLUTE/ALC session is composed of one or more channels, associated to different destination unicast and/or multicast IP addresses. ALC packets are sent in those channels at a certain transmission rate, with a rate that often differs depending on the channel. FLUTE/ALC does not mandate nor recommend any strategy to select which ALC packet to send on which channel. FLUTE/ALC can use a multiple rate congestion control building block (e.g., Wave and Equation Based Rate
Control (WEBRC)) to provide congestion control that is feedback free, where receivers adjust their reception rates individually by joining and leaving channels associated with the session. To that purpose, the ALC header provides a specific field to carry congestion-control-specific information. However, FLUTE/ALC does not mandate the use of a particular congestion control mechanism although WEBRC is mandatory to support for the Internet ([RFC6726], Section 1.1.4). FLUTE/ALC is often used over a network path with pre-provisioned capacity [RFC8085] where there are no flows competing for capacity. In this case, a sender-based rate control mechanism and a single channel are sufficient.

[RFC6584] provides per-packet authentication, integrity, and anti-replay protection in the context of the ALC and NORM protocols. Several mechanisms are proposed that seamlessly integrate into these protocols using the ALC and NORM header extension mechanisms.

3.10.2. Interface Description
The FLUTE/ALC specification does not describe a specific API to control protocol operation. Although open source and commercial implementations have specified APIs, there is no IETF-specified API for FLUTE/ALC.
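Because a FLUTE/ALC session is typically announced out of band as a set of (sender, destination address, UDP port) channels, a receiver built on a standard sockets API usually starts by binding a UDP socket and joining the announced group; the implementation-specific ALC and FDT processing then consumes the received packets. The sketch below shows only that first step for an Any-Source Multicast IPv4 channel; the group address and port are placeholders, and source-specific multicast or IPv6 would use different options.

   /* Sketch: join an announced FLUTE/ALC IPv4 multicast channel so
    * that ALC/FDT processing (not shown) can receive its UDP packets.
    * Group and port come from a hypothetical session announcement. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>
   #include <stdint.h>
   #include <string.h>
   #include <stdio.h>

   int flute_join_channel(const char *group, uint16_t port)
   {
       int fd = socket(AF_INET, SOCK_DGRAM, 0);
       if (fd < 0) { perror("socket"); return -1; }

       struct sockaddr_in local;
       memset(&local, 0, sizeof(local));
       local.sin_family = AF_INET;
       local.sin_addr.s_addr = htonl(INADDR_ANY);
       local.sin_port = htons(port);
       if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
           perror("bind");
           return -1;
       }

       /* Any-Source Multicast join; nothing is sent toward the FLUTE
        * sender, preserving the purely unidirectional model. */
       struct ip_mreq mreq;
       inet_pton(AF_INET, group, &mreq.imr_multiaddr);
       mreq.imr_interface.s_addr = htonl(INADDR_ANY);
       if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                      &mreq, sizeof(mreq)) < 0) {
           perror("setsockopt(IP_ADD_MEMBERSHIP)");
           return -1;
       }
       return fd;   /* recvfrom() now yields the channel's ALC packets */
   }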
3.10.3. Transport Features

The transport features provided by FLUTE/ALC are:

o  unicast, multicast, anycast, or IPv4 broadcast transmission,

o  object-oriented delivery of discrete data or files and associated metadata,

o  fully reliable or partially reliable delivery (of file or in-memory objects), using proactive packet erasure coding (AL-FEC) to recover from packet erasures,

o  ordered or unordered delivery (of file or in-memory objects),

o  error detection (based on the UDP checksum),

o  per-packet authentication,

o  per-packet integrity,

o  per-packet replay protection, and

o  congestion control for layered flows (e.g., with WEBRC).
3.11. NACK-Oriented Reliable Multicast (NORM)
NORM is an IETF Standards Track protocol specified in [RFC5740]. It provides object-oriented delivery of discrete data or files. The protocol was designed to support reliable bulk data dissemination to receiver groups using IP Multicast but also provides for point-to-point unicast operation. Support for bulk data dissemination includes discrete file or computer memory-based "objects" as well as byte and message streaming.

NORM can incorporate packet erasure coding as a part of its selective Automatic Repeat reQuest (ARQ) in response to negative acknowledgments from the receiver. The packet erasure coding can also be proactively applied for forward protection from packet loss. NORM transmissions are governed by TCP-Friendly Multicast Congestion Control (TFMCC) [RFC4654]. The reliability, congestion control, and flow control mechanisms can be separately controlled to meet different application needs.

3.11.1. Protocol Description
The NORM protocol is encapsulated in UDP datagrams and thus provides multiplexing for multiple sockets on hosts using port numbers. For loosely coordinated IP Multicast, NORM is not strictly connection-oriented although per-sender state is maintained by receivers for protocol operation. [RFC5740] does not specify a handshake protocol for connection establishment. Separate session initiation can be used to coordinate port numbers. However, in-band "client-server" style connection establishment can be accomplished with the NORM congestion control signaling messages using port binding techniques like those for TCP client-server connections.

NORM supports bulk "objects" such as file or in-memory content but also can treat a stream of data as a logical bulk object for purposes of packet erasure coding. In the case of stream transport, NORM can support either byte streams or message streams where application-defined message boundary information is carried in the NORM protocol messages. This allows the receiver(s) to join/rejoin and recover message boundaries mid-stream as needed. Application content is carried and identified by the NORM protocol with encoding symbol identifiers depending upon the Forward Error Correction (FEC) Scheme [RFC5052] configured.

NORM uses NACK-based selective ARQ to reliably deliver the application content to the receiver(s). NORM proactively measures round-trip timing information to scale ARQ timers appropriately and to support congestion control. For multicast
operation, timer-based feedback suppression is used to achieve group size scaling with low feedback traffic levels. The feedback suppression is not applied for unicast operation.

NORM uses rate-based congestion control based upon the TCP-Friendly Rate Control (TFRC) [RFC5348] principles that are also used in DCCP [RFC4340]. NORM uses control messages to measure RTT and collect congestion event information (e.g., reflecting a loss event or ECN event) from the receiver(s) to support dynamic adjustment of the rate. TCP-Friendly Multicast Congestion Control (TFMCC) [RFC4654] provides extra features to support multicast but is functionally equivalent to TFRC for unicast. Error detection and verification of the protocol control information relies on the underlying transport (e.g., UDP checksum).

The reliability mechanism is decoupled from congestion control. This allows invocation of alternative arrangements of transport services, for example, to support fixed-rate reliable delivery or unreliable delivery (that may optionally be "better than best effort" via packet erasure coding) using TFRC. Alternative congestion control techniques may be applied, for example, TFRC with congestion event detection based on ECN.

While NORM provides NACK-based reliability, it also supports a positive acknowledgment (ACK) mechanism that can be used for receiver flow control. This mechanism is decoupled from the reliability and congestion control, supporting applications with different needs. One example is use of NORM for quasi-reliable delivery, where timely delivery of newer content may be favored over completely reliable delivery of older content within buffering and RTT constraints.

3.11.2. Interface Description
The NORM specification does not describe a specific API to control protocol operation. A freely available, open-source reference implementation of NORM is available at <https://www.nrl.navy.mil/itd/ncs/products/norm>, and a documented API is provided for this implementation. While a sockets-like API is not currently documented, the existing API supports the necessary functions for that to be implemented.
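As a rough illustration of how that reference implementation's documented API is driven, the sketch below creates a NORM instance and session and enqueues a file for transmission. Function names and argument lists are based on the NRL developer documentation but should be treated as assumptions and checked against the implementation; the multicast address, port, buffer sizes, and FEC parameters are placeholders.

   /* Illustrative NORM file sender modeled on the NRL reference API
    * ("normApi.h"); names, signatures, and values are assumptions to
    * be verified against the implementation's documentation. */
   #include <string.h>
   #include "normApi.h"

   int norm_send_file(const char *path)
   {
       NormInstanceHandle instance = NormCreateInstance();
       NormSessionHandle  session  =
           NormCreateSession(instance, "224.1.2.3", 6003, NORM_NODE_ANY);

       /* sessionId, bufferSpace, segmentSize, blockSize, numParity:
        * placeholder values controlling buffering and FEC repair. */
       NormStartSender(session, 1, 1024 * 1024, 1400, 64, 16);

       if (NormFileEnqueue(session, path, path, strlen(path))
               == NORM_OBJECT_INVALID)
           return -1;

       /* A real application then runs an event loop around
        * NormGetNextEvent() to track transmission and purge events. */
       return 0;
   }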
3.11.3. Transport Features
The transport features provided by NORM are:

o  unicast or multicast transport,

o  unidirectional communication,

o  stream-oriented delivery in a single stream or object-oriented delivery of in-memory data or file bulk content objects,

o  fully reliable (NACK-based) or partially reliable (using erasure coding both proactively and as part of ARQ) delivery,

o  unordered delivery,

o  error detection (relies on UDP checksum),

o  segmentation,

o  data bundling (using Nagle's algorithm),

o  flow control (timer-based and/or ACK-based), and

o  congestion control (also supporting fixed-rate reliable or unreliable delivery).

3.12. Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) [RFC792] for IPv4 and ICMP for IPv6 [RFC4443] are IETF Standards Track protocols. It is a connectionless unidirectional protocol that delivers individual messages, without error correction, congestion control, or flow control. Messages may be sent as unicast, IPv4 broadcast, or multicast datagrams (IPv4 and IPv6), in addition to anycast datagrams.

While ICMP is not typically described as a transport protocol, it does position itself above the network layer, and the operation of other transport protocols can be closely linked to the functions provided by ICMP. Transport protocols and upper-layer protocols can use received ICMP messages to help them make appropriate decisions when network or endpoint errors are reported, for example, to implement ICMP-based Path MTU Discovery (PMTUD) [RFC1191] [RFC1981] or assist in Packetization Layer PMTUD (PLPMTUD) [RFC4821]. Such reactions to received messages need to protect from off-path data injection
[RFC8085] to avoid an application receiving packets created by an unauthorized third party. An application therefore needs to ensure that all messages are appropriately validated by checking the payload of the messages to ensure they are received in response to actually transmitted traffic (e.g., a reported error condition that corresponds to a UDP datagram or TCP segment was actually sent by the application). This requires context [RFC6056], such as local state about communication instances to each destination (e.g., in TCP, DCCP, or SCTP). This state is not always maintained by UDP-based applications [RFC8085].

3.12.1. Protocol Description
ICMP is a connectionless unidirectional protocol. It delivers independent messages, called "datagrams". Each message is required to carry a checksum as an integrity check and to protect from misdelivery to an unintended endpoint.

ICMP messages typically relay diagnostic information from an endpoint [RFC1122] or network device [RFC1812] addressed to the sender of a flow. This usually contains the network protocol header of a packet that encountered a reported issue. Some formats of messages can also carry other payload data. Each message carries an integrity check calculated in the same way as for UDP; this checksum is not optional.

The RFC Series defines additional IPv6 message formats to support a range of uses. In the case of IPv6, the protocol incorporates neighbor discovery [RFC4861] [RFC3971] (provided by ARP for IPv4) and Multicast Listener Discovery (MLD) [RFC2710] group management functions (provided by IGMP for IPv4).

Reliable transmission cannot be assumed. A receiving application that is unable to run sufficiently fast, or frequently, may miss messages since there is no flow or congestion control. In addition, some network devices rate-limit ICMP messages.
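For reference, the sketch below shows the 4-byte common header carried by every ICMPv4 message (type, code, checksum, plus a type-dependent field) and the standard Internet checksum computation used to fill in and verify the checksum; ICMPv6 uses the same algorithm but also covers an IPv6 pseudo-header.

   /* Sketch: common ICMPv4 header layout [RFC792] and the Internet
    * checksum used for its 'checksum' field (computed with that field
    * set to zero; the result is placed in the header in network byte
    * order). */
   #include <stdint.h>
   #include <stddef.h>

   struct icmp_hdr {
       uint8_t  type;       /* e.g., 3 = Destination Unreachable       */
       uint8_t  code;       /* subtype, e.g., 4 = Frag. Needed (PMTUD) */
       uint16_t checksum;   /* Internet checksum over the message      */
       uint32_t rest;       /* meaning depends on type/code            */
   };

   uint16_t inet_checksum(const void *data, size_t len)
   {
       const uint8_t *p = data;
       uint32_t sum = 0;

       while (len > 1) {              /* sum 16-bit big-endian words   */
           sum += ((uint32_t)p[0] << 8) | p[1];
           p += 2;
           len -= 2;
       }
       if (len)                       /* odd trailing byte, zero-padded */
           sum += (uint32_t)p[0] << 8;
       while (sum >> 16)              /* fold carries (one's complement) */
           sum = (sum & 0xFFFF) + (sum >> 16);
       return (uint16_t)~sum;         /* store as htons(result)        */
   }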
3.12.2. Interface Description

ICMP processing is integrated in many connection-oriented transports but, like other functions, needs to be provided by an upper-layer protocol when using UDP and UDP-Lite. On some stacks, a bound socket also allows a UDP application to be notified when ICMP error messages are received for its transmissions [RFC8085].
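As an example of such a notification interface, Linux provides the platform-specific IP_RECVERR socket option: once enabled on a connected UDP socket, ICMP-derived errors can be read from the socket's error queue together with (part of) the datagram that triggered them, which supports the validation described earlier. The option, the MSG_ERRQUEUE flag, and the sock_extended_err structure are Linux assumptions, not a portable or IETF-specified API.

   /* Sketch: read ICMP-derived errors for a connected UDP socket on
    * Linux via IP_RECVERR and the socket error queue (MSG_ERRQUEUE).
    * In practice, the option is enabled right after connect(). */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <linux/errqueue.h>
   #include <string.h>
   #include <stdio.h>

   void read_icmp_errors(int udp_fd)
   {
       int on = 1;
       setsockopt(udp_fd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on));

       char cbuf[512], dbuf[1024];
       struct iovec iov = { .iov_base = dbuf, .iov_len = sizeof(dbuf) };
       struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                             .msg_control = cbuf,
                             .msg_controllen = sizeof(cbuf) };

       /* 'dbuf' receives (part of) the previously sent datagram, so
        * the application can confirm the error refers to traffic it
        * actually transmitted. */
       if (recvmsg(udp_fd, &msg, MSG_ERRQUEUE) < 0)
           return;

       for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
            c = CMSG_NXTHDR(&msg, c)) {
           if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_RECVERR) {
               struct sock_extended_err err;
               memcpy(&err, CMSG_DATA(c), sizeof(err));
               if (err.ee_origin == SO_EE_ORIGIN_ICMP)
                   fprintf(stderr, "ICMP type %u, code %u reported\n",
                           err.ee_type, err.ee_code);
           }
       }
   }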
Any response to ICMP error messages ought to be robust to temporary routing failures (sometimes called "soft errors"), e.g., transient ICMP "unreachable" messages ought to not normally cause a communication abort [RFC5461] [RFC8085].

3.12.3. Transport Features
ICMP does not provide any transport service directly to applications. Used together with other transport protocols, it provides transmission of control, error, and measurement data between endpoints or from devices along the path to one endpoint.