Internet Engineering Task Force (IETF)                     C. Lever, Ed.
Request for Comments: 8166                                        Oracle
Obsoletes: 5666                                               W. Simpson
Category: Standards Track                                        Red Hat
ISSN: 2070-1721                                                T. Talpey
                                                               Microsoft
                                                               June 2017

    Remote Direct Memory Access Transport for Remote Procedure Call
                               Version 1

Abstract
This document specifies a protocol for conveying Remote Procedure Call (RPC) messages on physical transports capable of Remote Direct Memory Access (RDMA). This protocol is referred to as the RPC-over-RDMA version 1 protocol in this document. It requires no revision to application RPC protocols or the RPC protocol itself. This document obsoletes RFC 5666.

Status of This Memo

This is an Internet Standards Track document.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc8166.
Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.
Table of Contents
1. Introduction
   1.1. RPCs on RDMA Transports
2. Terminology
   2.1. Requirements Language
   2.2. RPCs
   2.3. RDMA
3. RPC-over-RDMA Protocol Framework
   3.1. Transfer Models
   3.2. Message Framing
   3.3. Managing Receiver Resources
   3.4. XDR Encoding with Chunks
   3.5. Message Size
4. RPC-over-RDMA in Operation
   4.1. XDR Protocol Definition
   4.2. Fixed Header Fields
   4.3. Chunk Lists
   4.4. Memory Registration
   4.5. Error Handling
   4.6. Protocol Elements No Longer Supported
   4.7. XDR Examples
5. RPC Bind Parameters
6. ULB Specifications
   6.1. DDP-Eligibility
   6.2. Maximum Reply Size
   6.3. Additional Considerations
   6.4. ULP Extensions
7. Protocol Extensibility
   7.1. Conventional Extensions
8. Security Considerations
   8.1. Memory Protection
   8.2. RPC Message Security
9. IANA Considerations
10. References
   10.1. Normative References
   10.2. Informative References
Appendix A. Changes from RFC 5666
   A.1. Changes to the Specification
   A.2. Changes to the Protocol
Acknowledgments
Authors' Addresses
1. Introduction
This document specifies the RPC-over-RDMA version 1 protocol, based on existing implementations of RFC 5666 and experience gained through deployment. This document obsoletes RFC 5666.

This specification clarifies text that was subject to multiple interpretations and removes support for unimplemented RPC-over-RDMA version 1 protocol elements. It clarifies the role of Upper-Layer Bindings (ULBs) and describes what they are to contain.

In addition, this document describes current practice using RPCSEC_GSS [RFC7861] on RDMA transports.

The protocol version number has not been changed because the protocol specified in this document fully interoperates with implementations of the RPC-over-RDMA version 1 protocol specified in [RFC5666].

1.1. RPCs on RDMA Transports
RDMA [RFC5040] [RFC5041] [IBARCH] is a technique for moving data efficiently between end nodes. By directing data into destination buffers as it is sent on a network, and placing it via direct memory access by hardware, the benefits of faster transfers and reduced host overhead are obtained.

Open Network Computing Remote Procedure Call (ONC RPC, often shortened in NFSv4 documents to RPC) [RFC5531] is a remote procedure call protocol that runs over a variety of transports. Most RPC implementations today use UDP [RFC768] or TCP [RFC793]. On UDP, RPC messages are encapsulated inside datagrams, while on a TCP byte stream, RPC messages are delineated by a record marking protocol. An RDMA transport also conveys RPC messages in a specific fashion that must be fully described if RPC implementations are to interoperate.

RDMA transports present semantics that differ from either UDP or TCP. They retain message delineations like UDP but provide reliable and sequenced data transfer like TCP. They also provide an offloaded bulk transfer service not provided by UDP or TCP. RDMA transports are therefore appropriately viewed as a new transport type by RPC.

In this context, the Network File System (NFS) protocols, as described in [RFC1094], [RFC1813], [RFC7530], [RFC5661], and future NFSv4 minor versions, are all obvious beneficiaries of RDMA transports. A complete problem statement is presented in [RFC5532]. Many other RPC-based protocols can also benefit.
Although the RDMA transport described herein can provide relatively transparent support for any RPC application, this document also describes mechanisms that can optimize data transfer even further, when RPC applications are willing to exploit awareness of RDMA as the transport.

2. Terminology
2.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

2.2. RPCs
This section highlights key elements of the RPC [RFC5531] and External Data Representation (XDR) [RFC4506] protocols, upon which RPC-over-RDMA version 1 is constructed. A strong grounding in these protocols is recommended before reading this document.

2.2.1. Upper-Layer Protocols
RPCs are an abstraction used to implement the operations of an Upper-Layer Protocol (ULP). "ULP" refers to an RPC Program and Version tuple, which is a versioned set of procedure calls that comprise a single well-defined API. One example of a ULP is the Network File System Version 4.0 [RFC7530].

In this document, the term "RPC consumer" refers to an implementation of a ULP running on an RPC client endpoint.

2.2.2. Requesters and Responders
Like a local procedure call, every RPC procedure has a set of "arguments" and a set of "results". A calling context invokes a procedure, passing arguments to it, and the procedure subsequently returns a set of results. Unlike a local procedure call, the called procedure is executed remotely rather than in the local application's execution context. The RPC protocol as described in [RFC5531] is fundamentally a message-passing protocol between one or more clients (where RPC consumers are running) and a server (where a remote execution context is available to process RPC transactions on behalf of those consumers).
ONC RPC transactions are made up of two types of messages:

CALL
   An "RPC Call message" requests that work be done. This type of
   message is designated by the value zero (0) in the message's
   msg_type field. An arbitrary unique value is placed in the
   message's XID field in order to match this RPC Call message to a
   corresponding RPC Reply message.

REPLY
   An "RPC Reply message" reports the results of work requested by an
   RPC Call message. An RPC Reply message is designated by the value
   one (1) in the message's msg_type field. The value contained in an
   RPC Reply message's XID field is copied from the RPC Call message
   whose results are being reported.

The RPC client endpoint acts as a "Requester". It serializes the procedure's arguments and conveys them to a server endpoint via an RPC Call message. This message contains an RPC protocol header, a header describing the requested upper-layer operation, and all arguments.

The RPC server endpoint acts as a "Responder". It deserializes the arguments and processes the requested operation. It then serializes the operation's results into another byte stream. This byte stream is conveyed back to the Requester via an RPC Reply message. This message contains an RPC protocol header, a header describing the upper-layer reply, and all results.

The Requester deserializes the results and allows the original caller to proceed. At this point, the RPC transaction designated by the XID in the RPC Call message is complete, and the XID is retired.

In summary, RPC Call messages are sent by Requesters to Responders to initiate RPC transactions. RPC Reply messages are sent by Responders to Requesters to complete the processing on an RPC transaction.
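The following C fragment is an illustrative sketch only (the type and function names are hypothetical and not drawn from this specification), showing how a Requester might match an RPC Reply message to the RPC Call message that initiated the transaction, using the msg_type and XID fields described above.

   /* Illustrative sketch only; names are hypothetical.  RPC message
    * encoding is defined by the RPC protocol [RFC5531]. */
   #include <stdint.h>

   #define RPC_CALL  0           /* msg_type of an RPC Call message  */
   #define RPC_REPLY 1           /* msg_type of an RPC Reply message */

   struct rpc_msg_prefix {
           uint32_t xid;         /* chosen by the Requester          */
           uint32_t msg_type;    /* RPC_CALL or RPC_REPLY            */
   };

   /* Returns nonzero when "reply" completes the transaction started
    * by "call": a reply carries a copy of the call's XID. */
   static int
   rpc_reply_matches_call(const struct rpc_msg_prefix *call,
                          const struct rpc_msg_prefix *reply)
   {
           return call->msg_type == RPC_CALL &&
                  reply->msg_type == RPC_REPLY &&
                  reply->xid == call->xid;
   }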
2.2.3. RPC Transports

The role of an "RPC transport" is to mediate the exchange of RPC messages between Requesters and Responders. An RPC transport bridges the gap between the RPC message abstraction and the native operations of a particular network transport.

RPC-over-RDMA is a connection-oriented RPC transport. When a connection-oriented transport is used, clients initiate transport connections, while servers wait passively for incoming connection requests.
2.2.4. External Data Representation
One cannot assume that all Requesters and Responders represent data objects the same way internally. RPC uses External Data Representation (XDR) to translate native data types and serialize arguments and results [RFC4506].

The XDR protocol encodes data independently of the endianness or size of host-native data types, allowing unambiguous decoding of data on the receiving end. RPC Programs are specified by writing an XDR definition of their procedures, argument data types, and result data types.

XDR assumes that the number of bits in a byte (octet) and their order are the same on both endpoints and on the physical network. The smallest indivisible unit of XDR encoding is a group of four octets. XDR also flattens lists, arrays, and other complex data types so they can be conveyed as a stream of bytes.

A serialized stream of bytes that is the result of XDR encoding is referred to as an "XDR stream". A sending endpoint encodes native data into an XDR stream and then transmits that stream to a receiver. A receiving endpoint decodes incoming XDR byte streams into its native data representation format.

2.2.4.1. XDR Opaque Data
Sometimes, a data item must be transferred as is: without encoding or decoding. The contents of such a data item are referred to as "opaque data". XDR encoding places the content of opaque data items directly into an XDR stream without altering it in any way. ULPs or applications perform any needed data translation in this case. Examples of opaque data items include the content of files or generic byte strings.

2.2.4.2. XDR Roundup
The number of octets in a variable-length data item precedes that item in an XDR stream. If the size of an encoded data item is not a multiple of four octets, octets containing zero are added after the end of the item so that the next encoded data item in the XDR stream starts on a four-octet boundary. The encoded size of the item is not changed by the addition of the extra octets. These extra octets are never exposed to ULPs.

This technique is referred to as "XDR roundup", and the extra octets are referred to as "XDR roundup padding".
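As a minimal illustration (not part of this specification; the helper name is hypothetical), the following C function computes the number of XDR roundup padding octets that follow a variable-length data item.

   #include <stdint.h>

   /* Zero octets appended after an encoded item of "len" octets so
    * that the next item starts on a four-octet boundary.  The encoded
    * size of the item itself is unchanged. */
   static inline uint32_t
   xdr_roundup_padding(uint32_t len)
   {
           return (4 - (len & 3)) & 3;
   }

   /* Example: a 5-octet opaque item is followed by 3 padding octets
    * and therefore occupies 8 octets in the XDR stream. */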
2.3. RDMA
RPC Requesters and Responders can be made more efficient if large RPC messages are transferred by a third party, such as intelligent network-interface hardware (data movement offload), and placed in the receiver's memory so that no additional adjustment of data alignment has to be made (direct data placement or "DDP"). RDMA transports enable both optimizations.

2.3.1. DDP
Typically, RPC implementations copy the contents of RPC messages into a buffer before sending them. An efficient RPC implementation sends bulk data without copying it into a separate send buffer first. However, socket-based RPC implementations are often unable to receive data directly into its final place in memory. Receivers often need to copy incoming data to finish an RPC operation: sometimes, only to adjust data alignment.

In this document, "RDMA" refers to the physical mechanism an RDMA transport utilizes when moving data. Although this may not be efficient, before an RDMA transfer, a sender may copy data into an intermediate buffer. After an RDMA transfer, a receiver may copy that data again to its final destination.

In this document, the term "DDP" refers to any optimized data transfer where it is unnecessary for a receiving host's CPU to copy transferred data to another location after it has been received.

Just as [RFC5666] did, this document focuses on the use of RDMA Read and Write operations to achieve both data movement offload and DDP. However, not all RDMA-based data transfer qualifies as DDP, and DDP can be achieved using non-RDMA mechanisms.

2.3.2. RDMA Transport Requirements
To achieve good performance during receive operations, RDMA transports require that RDMA consumers provision resources in advance to receive incoming messages.

An RDMA consumer might provide Receive buffers in advance by posting an RDMA Receive Work Request for every expected RDMA Send from a remote peer. These buffers are provided before the remote peer posts RDMA Send Work Requests; thus, this is often referred to as "pre-posting" buffers.
An RDMA Receive Work Request remains outstanding until hardware matches it to an inbound Send operation. The resources associated with that Receive must be retained in host memory, or "pinned", until the Receive completes.

Given these basic tenets of RDMA transport operation, the RPC-over-RDMA version 1 protocol assumes each transport provides the following abstract operations. A more complete discussion of these operations is found in [RFC5040].

Registered Memory
   Registered memory is a region of memory that is assigned a
   steering tag that temporarily permits access by the RDMA provider
   to perform data-transfer operations. The RPC-over-RDMA version 1
   protocol assumes that each region of registered memory MUST be
   identified with a steering tag of no more than 32 bits and memory
   addresses of up to 64 bits in length.

RDMA Send
   The RDMA provider supports an RDMA Send operation, with completion
   signaled on the receiving peer after data has been placed in a
   pre-posted buffer. Sends complete at the receiver in the order
   they were issued at the sender. The amount of data transferred by
   a single RDMA Send operation is limited by the size of the remote
   peer's pre-posted buffers.

RDMA Receive
   The RDMA provider supports an RDMA Receive operation to receive
   data conveyed by incoming RDMA Send operations. To reduce the
   amount of memory that must remain pinned awaiting incoming Sends,
   the amount of pre-posted memory is limited. Flow control to
   prevent overrunning receiver resources is provided by the RDMA
   consumer (in this case, the RPC-over-RDMA version 1 protocol).

RDMA Write
   The RDMA provider supports an RDMA Write operation to place data
   directly into a remote memory region. The local host initiates an
   RDMA Write, and completion is signaled there. No completion is
   signaled on the remote peer. The local host provides a steering
   tag, memory address, and length of the remote peer's memory
   region. RDMA Writes are not ordered with respect to one another,
   but are ordered with respect to RDMA Sends. A subsequent RDMA Send
   completion obtained at the write initiator guarantees that prior
   RDMA Write data has been successfully placed in the remote peer's
   memory.
RDMA Read
   The RDMA provider supports an RDMA Read operation to place peer
   source data directly into the read initiator's memory. The local
   host initiates an RDMA Read, and completion is signaled there. No
   completion is signaled on the remote peer. The local host provides
   steering tags, memory addresses, and a length for the remote
   source and local destination memory regions. The local host
   signals Read completion to the remote peer as part of a subsequent
   RDMA Send message. The remote peer can then release steering tags
   and subsequently free associated source memory regions.

The RPC-over-RDMA version 1 protocol is designed to be carried over RDMA transports that support the above abstract operations. This protocol conveys information sufficient for an RPC peer to direct an RDMA provider to perform transfers containing RPC data and to communicate their result(s).
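For illustration only, the following sketch shows how the abstract operations above might map onto one widely used RDMA provider interface, the libibverbs API: registering a memory region yields a steering tag, and an RDMA Write places local data directly into a remote peer's registered memory. This specification does not require any particular API; error handling is omitted, and all names other than the ibv_* types and calls are hypothetical.

   #include <stdint.h>
   #include <stddef.h>
   #include <string.h>
   #include <infiniband/verbs.h>

   /* Register a local buffer, producing a steering tag (lkey/rkey). */
   static struct ibv_mr *
   register_region(struct ibv_pd *pd, void *buf, size_t len)
   {
           return ibv_reg_mr(pd, buf, len,
                             IBV_ACCESS_LOCAL_WRITE |
                             IBV_ACCESS_REMOTE_READ |
                             IBV_ACCESS_REMOTE_WRITE);
   }

   /* Post an RDMA Write that places the registered region "mr" into
    * the remote peer's memory identified by (remote_addr, rkey).
    * Completion is signaled only at the initiating host; the remote
    * peer sees no completion. */
   static int
   post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                   uint64_t remote_addr, uint32_t rkey)
   {
           struct ibv_sge sge = {
                   .addr   = (uintptr_t)mr->addr,
                   .length = (uint32_t)mr->length,
                   .lkey   = mr->lkey,
           };
           struct ibv_send_wr wr, *bad_wr = NULL;

           memset(&wr, 0, sizeof(wr));
           wr.opcode              = IBV_WR_RDMA_WRITE;
           wr.sg_list             = &sge;
           wr.num_sge             = 1;
           wr.send_flags          = IBV_SEND_SIGNALED;
           wr.wr.rdma.remote_addr = remote_addr;
           wr.wr.rdma.rkey        = rkey;

           return ibv_post_send(qp, &wr, &bad_wr);
   }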
3. RPC-over-RDMA Protocol Framework

3.1. Transfer Models
A "transfer model" designates which endpoint exposes its memory and which is responsible for initiating the transfer of data. To enable RDMA Read and Write operations, for example, an endpoint first exposes regions of its memory to a remote endpoint, which initiates these operations against the exposed memory.

Read-Read
   Requesters expose their memory to the Responder, and the Responder
   exposes its memory to Requesters. The Responder reads, or pulls,
   RPC arguments or whole RPC calls from each Requester. Requesters
   pull RPC results or whole RPC replies from the Responder.

Write-Write
   Requesters expose their memory to the Responder, and the Responder
   exposes its memory to Requesters. Requesters write, or push, RPC
   arguments or whole RPC calls to the Responder. The Responder
   pushes RPC results or whole RPC replies to each Requester.

Read-Write
   Requesters expose their memory to the Responder, but the Responder
   does not expose its memory. The Responder pulls RPC arguments or
   whole RPC calls from each Requester. The Responder pushes RPC
   results or whole RPC replies to each Requester.
Write-Read
   The Responder exposes its memory to Requesters, but Requesters do
   not expose their memory. Requesters push RPC arguments or whole
   RPC calls to the Responder. Requesters pull RPC results or whole
   RPC replies from the Responder.

3.2. Message Framing
On an RPC-over-RDMA transport, each RPC message is encapsulated by an RPC-over-RDMA message. An RPC-over-RDMA message consists of two XDR streams.

RPC Payload Stream
   The "Payload stream" contains the encapsulated RPC message being
   transferred by this RPC-over-RDMA message. This stream always
   begins with the Transaction ID (XID) field of the encapsulated RPC
   message.

Transport Stream
   The "Transport stream" contains a header that describes and
   controls the transfer of the Payload stream in this RPC-over-RDMA
   message. This header is analogous to the record marking used for
   RPC on TCP sockets but is more extensive, since RDMA transports
   support several modes of data transfer.

In its simplest form, an RPC-over-RDMA message consists of a Transport stream followed immediately by a Payload stream conveyed together in a single RDMA Send. To transmit large RPC messages, a combination of one RDMA Send operation and one or more other RDMA operations is employed.

RPC-over-RDMA framing replaces all other RPC framing (such as TCP record marking) when used atop an RPC-over-RDMA association, even when the underlying RDMA protocol may itself be layered atop a transport with a defined RPC framing (such as TCP).

However, it is possible for RPC-over-RDMA to be dynamically enabled in the course of negotiating the use of RDMA via a ULP exchange. Because RPC framing delimits an entire RPC request or reply, the resulting shift in framing must occur between distinct RPC messages, and in concert with the underlying transport.
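As one possible realization of the simplest framing described above (illustrative only; all names other than the ibv_* API are hypothetical), a sender could convey the Transport stream followed immediately by the Payload stream in a single RDMA Send by using a two-element gather list.

   #include <stdint.h>
   #include <string.h>
   #include <infiniband/verbs.h>

   /* Convey a Transport stream immediately followed by a Payload
    * stream in one RDMA Send, using a two-element gather list.  Both
    * buffers are assumed to be registered already; the combined
    * length must not exceed the receiver's inline threshold
    * (Section 3.3.2). */
   static int
   send_rpc_over_rdma_msg(struct ibv_qp *qp,
                          void *hdr, uint32_t hdr_len, uint32_t hdr_lkey,
                          void *pay, uint32_t pay_len, uint32_t pay_lkey)
   {
           struct ibv_sge sge[2] = {
                   { .addr = (uintptr_t)hdr, .length = hdr_len,
                     .lkey = hdr_lkey },
                   { .addr = (uintptr_t)pay, .length = pay_len,
                     .lkey = pay_lkey },
           };
           struct ibv_send_wr wr, *bad_wr = NULL;

           memset(&wr, 0, sizeof(wr));
           wr.opcode     = IBV_WR_SEND;
           wr.sg_list    = sge;
           wr.num_sge    = 2;
           wr.send_flags = IBV_SEND_SIGNALED;

           return ibv_post_send(qp, &wr, &bad_wr);
   }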
3.3. Managing Receiver Resources

It is critical to provide RDMA Send flow control for an RDMA connection. If any pre-posted Receive buffer on the connection is not large enough to accept an incoming RDMA Send, or if a pre-posted Receive buffer is not available to accept an incoming RDMA Send, the RDMA connection can be terminated. This is different from conventional TCP/IP networking, in which buffers are allocated dynamically as messages are received.

The longevity of an RDMA connection mandates that sending endpoints respect the resource limits of peer receivers. To ensure messages can be sent and received reliably, there are two operational parameters for each connection.

3.3.1. RPC-over-RDMA Credits
Flow control for RDMA Send operations directed to the Responder is implemented as a simple request/grant protocol in the RPC-over-RDMA header associated with each RPC message.

An RPC-over-RDMA version 1 credit is the capability to handle one RPC-over-RDMA transaction. Each RPC-over-RDMA message sent from Requester to Responder requests a number of credits from the Responder. Each RPC-over-RDMA message sent from Responder to Requester informs the Requester how many credits the Responder has granted. The requested and granted values are carried in each RPC-over-RDMA message's rdma_credit field (see Section 4.2.3).

Practically speaking, the critical value is the granted value. A Requester MUST NOT send unacknowledged requests in excess of the Responder's granted credit limit. If the granted value is exceeded, the RDMA layer may signal an error, possibly terminating the connection. The granted value MUST NOT be zero, since such a value would result in deadlock.

RPC calls complete in any order, but the current granted credit limit at the Responder is known to the Requester from RDMA Send ordering properties. The number of allowed new requests the Requester may send is then the lower of the current requested and granted credit values, minus the number of requests in flight. Advertised credit values are not altered when individual RPCs are started or completed.

The requested and granted credit values MAY be adjusted to match the needs or policies in effect on either peer. For instance, a Responder may reduce the granted credit value to accommodate the available resources in a Shared Receive Queue. The Responder MUST ensure that an increase in receive resources is effected before the next RPC Reply message is sent.

A Requester MUST maintain enough receive resources to accommodate expected replies. Responders have to be prepared for there to be no receive resources available on Requesters with no pending RPC transactions.
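The credit accounting described above can be summarized by a small helper; this is an illustrative sketch, and the names are not part of the protocol.

   #include <stdint.h>

   /* Number of new RPC Call messages a Requester may send now: the
    * lower of the current requested and granted credit values, minus
    * the number of requests already in flight (Section 3.3.1). */
   static uint32_t
   credits_available(uint32_t requested, uint32_t granted,
                     uint32_t in_flight)
   {
           uint32_t limit = requested < granted ? requested : granted;

           return in_flight >= limit ? 0 : limit - in_flight;
   }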
Certain RDMA implementations may impose additional flow-control restrictions, such as limits on RDMA Read operations in progress at the Responder. Accommodation of such restrictions is considered the responsibility of each RPC-over-RDMA version 1 implementation.

3.3.2. Inline Threshold
An "inline threshold" value is the largest message size (in octets) that can be conveyed in one direction between peer implementations using RDMA Send and Receive. The inline threshold value is the smaller of the largest number of bytes the sender can post via a single RDMA Send operation and the largest number of bytes the receiver can accept via a single RDMA Receive operation.

Each connection has two inline threshold values: one for messages flowing from Requester-to-Responder (referred to as the "call inline threshold") and one for messages flowing from Responder-to-Requester (referred to as the "reply inline threshold").

Unlike credit limits, inline threshold values are not advertised to peers via the RPC-over-RDMA version 1 protocol, and there is no provision for inline threshold values to change during the lifetime of an RPC-over-RDMA version 1 connection.

3.3.3. Initial Connection State
When a connection is first established, peers might not know how many receive resources the other has, nor how large the other peer's inline thresholds are.

As a basis for an initial exchange of RPC requests, each RPC-over-RDMA version 1 connection provides the ability to exchange at least one RPC message at a time, whose RPC Call and Reply messages are no more than 1024 bytes in size. A Responder MAY exceed this basic level of configuration, but a Requester MUST NOT assume more than one credit is available and MUST receive a valid reply from the Responder carrying the actual number of available credits, prior to sending its next request.

Receiver implementations MUST support inline thresholds of 1024 bytes but MAY support larger inline threshold values. An independent mechanism for discovering a peer's inline thresholds before a connection is established may be used to optimize the use of RDMA Send and Receive operations. In the absence of such a mechanism, senders and receivers MUST assume the inline thresholds are 1024 bytes.
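An implementation might record these pre-negotiation defaults as constants, as in the following sketch; the names are hypothetical and carry no protocol significance.

   /* Values assumed at connection establishment, before any RPC Reply
    * has carried an actual credit grant (Section 3.3.3). */
   #define RPCRDMA_V1_INITIAL_CREDITS     1    /* one RPC at a time   */
   #define RPCRDMA_V1_MIN_INLINE_SIZE  1024    /* octets, either way  */

   struct rpcrdma_conn_params {
           unsigned int send_credits;      /* starts at 1             */
           unsigned int call_inline_max;   /* starts at 1024 octets   */
           unsigned int reply_inline_max;  /* starts at 1024 octets   */
   };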
3.4. XDR Encoding with Chunks
When a DDP capability is available, the transport places the contents of one or more XDR data items directly into the receiver's memory, separately from the transfer of other parts of the containing XDR stream.

3.4.1. Reducing an XDR Stream
RPC-over-RDMA version 1 provides a mechanism for moving part of an RPC message via a data transfer distinct from an RDMA Send/Receive pair. The sender removes one or more XDR data items from the Payload stream. They are conveyed via other mechanisms, such as one or more RDMA Read or Write operations. As the receiver decodes an incoming message, it skips over directly placed data items.

The portion of an XDR stream that is split out and moved separately is referred to as a "chunk". In some contexts, data in an RPC-over-RDMA header that describes these split out regions of memory may also be referred to as a "chunk".

A Payload stream after chunks have been removed is referred to as a "reduced" Payload stream. Likewise, a data item that has been removed from a Payload stream to be transferred separately is referred to as a "reduced" data item.

3.4.2. DDP-Eligibility
Not all XDR data items benefit from DDP. For example, small data items or data items that require XDR unmarshaling by the receiver do not benefit from DDP. In addition, it is impractical for receivers to prepare for every possible XDR data item in a protocol to be transferred in a chunk. To maintain interoperability on an RPC-over-RDMA transport, a determination must be made of which few XDR data items in each ULP are allowed to use DDP. This is done by additional specifications that describe how ULPs employ DDP. A "ULB specification" identifies which specific individual XDR data items in a ULP MAY be transferred via DDP. Such data items are referred to as "DDP-eligible". All other XDR data items MUST NOT be reduced. Detailed requirements for ULBs are provided in Section 6.
3.4.3. RDMA Segments
When encoding a Payload stream that contains a DDP-eligible data item, a sender may choose to reduce that data item. When it chooses to do so, the sender does not place the item into the Payload stream. Instead, the sender records in the RPC-over-RDMA header the location and size of the memory region containing that data item.

The Requester provides location information for DDP-eligible data items in both RPC Call and Reply messages. The Responder uses this information to retrieve arguments contained in the specified region of the Requester's memory or place results in that memory region.

An "RDMA segment", or "plain segment", is an RPC-over-RDMA Transport header data object that contains the precise coordinates of a contiguous memory region that is to be conveyed separately from the Payload stream. Plain segments contain the following information:

Handle
   Steering tag (STag) or R_key generated by registering this memory
   with the RDMA provider.

Length
   The length of the RDMA segment's memory region, in octets. An
   "empty segment" is an RDMA segment with the value zero (0) in its
   length field.

Offset
   The offset or beginning memory address of the RDMA segment's
   memory region.

See [RFC5040] for further discussion.
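For illustration, a plain segment can be represented in C as follows. The authoritative on-the-wire encoding is the XDR definition in Section 4.1; this structure and its name are hypothetical.

   #include <stdint.h>

   /* A plain RDMA segment: the coordinates of one contiguous
    * registered memory region.  Steering tags are no more than
    * 32 bits and memory addresses up to 64 bits (Section 2.3.2). */
   struct rdma_plain_segment {
           uint32_t handle;   /* STag or R_key from registration      */
           uint32_t length;   /* length in octets; zero marks an
                               * "empty segment"                      */
           uint64_t offset;   /* beginning memory address             */
   };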
3.4.4. Chunks

In RPC-over-RDMA version 1, a "chunk" refers to a portion of the Payload stream that is moved independently of the RPC-over-RDMA Transport header and Payload stream. Chunk data is removed from the sender's Payload stream, transferred via separate operations, and then reinserted into the receiver's Payload stream to form a complete RPC message.

Each chunk is comprised of RDMA segments. Each RDMA segment represents a single contiguous piece of that chunk. A Requester MAY divide a chunk into RDMA segments using any boundaries that are convenient. The length of a chunk is the sum of the lengths of the RDMA segments that comprise it.
The RPC-over-RDMA version 1 transport protocol does not place a limit on chunk size. However, each ULP may cap the amount of data that can be transferred by a single RPC (for example, NFS has "rsize" and "wsize", which restrict the payload size of NFS READ and WRITE operations). The Responder can use such limits to sanity check chunk sizes before using them in RDMA operations.
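Building on the hypothetical segment structure above, the following sketch computes a chunk's length and applies a ULP-imposed size cap; the function names and the specific check are illustrative only.

   #include <stddef.h>
   #include <stdint.h>

   /* The length of a chunk is the sum of the lengths of the RDMA
    * segments that comprise it (Section 3.4.4). */
   static uint64_t
   chunk_length(const struct rdma_plain_segment *segs, size_t count)
   {
           uint64_t total = 0;
           size_t i;

           for (i = 0; i < count; i++)
                   total += segs[i].length;
           return total;
   }

   /* A Responder can sanity check a chunk against a ULP limit (for
    * example, NFS rsize or wsize) before using it in RDMA
    * operations. */
   static int
   chunk_within_ulp_limit(const struct rdma_plain_segment *segs,
                          size_t count, uint64_t ulp_max)
   {
           return chunk_length(segs, count) <= ulp_max;
   }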
3.4.4.1. Counted Arrays

If a chunk contains a counted array data type, the count of array elements MUST remain in the Payload stream, while the array elements MUST be moved to the chunk. For example, when encoding an opaque byte array as a chunk, the count of bytes stays in the Payload stream, while the bytes in the array are removed from the Payload stream and transferred within the chunk.

Individual array elements appear in a chunk in their entirety. For example, when encoding an array of arrays as a chunk, the count of items in the enclosing array stays in the Payload stream, but each enclosed array, including its item count, is transferred as part of the chunk.

3.4.4.2. Optional-Data
If a chunk contains an optional-data data type, the "is present" field MUST remain in the Payload stream, while the data, if present, MUST be moved to the chunk.

3.4.4.3. XDR Unions
A union data type MUST NOT be made DDP-eligible, but one or more of its arms MAY be DDP-eligible, subject to the other requirements in this section.

3.4.4.4. Chunk Roundup
Except in special cases (covered in Section 3.5.3), a chunk MUST contain exactly one XDR data item. This makes it straightforward to reduce variable-length data items without affecting the XDR alignment of data items in the Payload stream. When a variable-length XDR data item is reduced, the sender MUST remove XDR roundup padding for that data item from the Payload stream so that data items remaining in the Payload stream begin on four-byte alignment.
3.4.5. Read Chunks
A "Read chunk" represents an XDR data item that is to be pulled from the Requester to the Responder. A Read chunk is a list of one or more RDMA read segments. An RDMA read segment consists of a Position field followed by a plain segment. See Section 4.1.2 for details.

Position
   The byte offset in the unreduced Payload stream where the receiver
   reinserts the data item conveyed in a chunk. The Position value
   MUST be computed from the beginning of the unreduced Payload
   stream, which begins at Position zero. All RDMA read segments
   belonging to the same Read chunk have the same value in their
   Position field.

While constructing an RPC Call message, a Requester registers memory regions that contain data to be transferred via RDMA Read operations. It advertises the coordinates of these regions in the RPC-over-RDMA Transport header of the RPC Call message.

After receiving an RPC Call message sent via an RDMA Send operation, a Responder transfers the chunk data from the Requester using RDMA Read operations. The Responder reconstructs the transferred chunk data by concatenating the contents of each RDMA segment, in list order, into the received Payload stream at the Position value recorded in that RDMA segment.

Put another way, the Responder inserts the first RDMA segment in a Read chunk into the Payload stream at the byte offset indicated by its Position field. RDMA segments whose Position field values match this offset are concatenated afterwards, until there are no more RDMA segments at that Position value.

The Position field in a read segment indicates where the containing Read chunk starts in the Payload stream. The value in this field MUST be a multiple of four. All segments in the same Read chunk share the same Position value, even if one or more of the RDMA segments have a non-four-byte-aligned length.
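As an illustration (reusing the hypothetical plain segment structure from the sketch in Section 3.4.3), an RDMA read segment pairs a Position field with a plain segment; the authoritative encoding is given in Section 4.1.2.

   #include <stdint.h>

   /* An RDMA read segment: a Position field followed by a plain
    * segment.  All read segments in the same Read chunk carry the
    * same Position value, which MUST be a multiple of four. */
   struct rdma_read_segment {
           uint32_t                  position; /* byte offset in the
                                                * unreduced Payload
                                                * stream              */
           struct rdma_plain_segment target;
   };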
3.4.5.1. Decoding Read Chunks
While decoding a received Payload stream, whenever the XDR offset in the Payload stream matches that of a Read chunk, the Responder initiates an RDMA Read to pull the chunk's data content into registered local memory.

The Responder acknowledges its completion of use of Read chunk source buffers when it sends an RPC Reply message to the Requester. The Requester may then release Read chunks advertised in the request.

3.4.5.2. Read Chunk Roundup
When reducing a variable-length argument data item, the Requester SHOULD NOT include the data item's XDR roundup padding in the chunk. The length of a Read chunk is determined as follows:

o  If the Requester chooses to include roundup padding in a Read
   chunk, the chunk's total length MUST be the sum of the encoded
   length of the data item and the length of the roundup padding.
   The length of the data item that was encoded into the Payload
   stream remains unchanged. The sender can increase the length of
   the chunk by adding another RDMA segment containing only the
   roundup padding, or it can do so by extending the final RDMA
   segment in the chunk.

o  If the sender chooses not to include roundup padding in the chunk,
   the chunk's total length MUST be the same as the encoded length of
   the data item.

3.4.6. Write Chunks
While constructing an RPC Call message, a Requester prepares memory regions in which to receive DDP-eligible result data items. A "Write chunk" represents an XDR data item that is to be pushed from a Responder to a Requester. It is made up of an array of zero or more plain segments. Write chunks are provisioned by a Requester long before the Responder has prepared the reply Payload stream. A Requester often does not know the actual length of the result data items to be returned, since the result does not yet exist. Thus, it MUST register Write chunks long enough to accommodate the maximum possible size of each returned data item.
In addition, the XDR position of DDP-eligible data items in the reply's Payload stream is not predictable when a Requester constructs an RPC Call message. Therefore, RDMA segments in a Write chunk do not have a Position field.

For each Write chunk provided by a Requester, the Responder pushes one data item to the Requester, filling the chunk contiguously and in segment array order until that data item has been completely written to the Requester.

The Responder MUST copy the segment count and all segments from the Requester-provided Write chunk into the RPC Reply message's Transport header. As it does so, the Responder updates each segment length field to reflect the actual amount of data that is being returned in that segment. The Responder then sends the RPC Reply message via an RDMA Send operation.

An "empty Write chunk" is a Write chunk with a zero segment count. By definition, the length of an empty Write chunk is zero. An "unused Write chunk" has a non-zero segment count, but all of its segments are empty segments.
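The Responder's handling of a Requester-provided Write chunk might be sketched as follows; this is illustrative only, the names are hypothetical, and the RDMA Write operations themselves are not shown.

   #include <stddef.h>
   #include <stdint.h>

   /* Fill a Requester-provided Write chunk contiguously, in segment
    * array order, with one reduced result data item of "result_len"
    * octets, recording in each returned segment the actual number of
    * octets placed there (Section 3.4.6).  Returns the total number
    * of octets placed. */
   static uint64_t
   fill_write_chunk(const struct rdma_plain_segment *provided,
                    struct rdma_plain_segment *returned,
                    size_t seg_count, uint64_t result_len)
   {
           uint64_t remaining = result_len;
           size_t i;

           for (i = 0; i < seg_count; i++) {
                   uint64_t n = remaining < provided[i].length ?
                                remaining : provided[i].length;

                   returned[i]        = provided[i];
                   returned[i].length = (uint32_t)n;
                   remaining         -= n;
           }

           /* A nonzero "remaining" here would mean the provided chunk
            * was too small for the result data item. */
           return result_len - remaining;
   }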
3.4.6.1. Decoding Write Chunks

After receiving the RPC Reply message, the Requester reconstructs the transferred data by concatenating the contents of each segment, in array order, into the RPC Reply message's XDR stream at the known XDR position of the associated DDP-eligible result data item.

3.4.6.2. Write Chunk Roundup
When provisioning a Write chunk for a variable-length result data item, the Requester SHOULD NOT include additional space for XDR roundup padding. A Responder MUST NOT write XDR roundup padding into a Write chunk, even if the Requester made space available for it. Therefore, when returning a single variable-length result data item, a returned Write chunk's total length MUST be the same as the encoded length of the result data item.

3.5. Message Size
A receiver of RDMA Send operations is required by RDMA to have previously posted one or more adequately sized buffers. Memory savings are achieved on both Requesters and Responders by posting small Receive buffers. However, not all RPC messages are small. RPC-over-RDMA version 1 provides several mechanisms that allow messages of any size to be conveyed efficiently.
3.5.1. Short Messages
RPC messages are frequently smaller than typical inline thresholds. For example, the NFS version 3 GETATTR operation is only 56 bytes: 20 bytes of RPC header, a 32-byte file handle argument, and 4 bytes for its length. The reply to this common request is about 100 bytes.

Since all RPC messages conveyed via RPC-over-RDMA require an RDMA Send operation, the most efficient way to send an RPC message that is smaller than the inline threshold is to append the Payload stream directly to the Transport stream. An RPC-over-RDMA header with a small RPC Call or Reply message immediately following is transferred using a single RDMA Send operation. No other operations are needed.

An RPC-over-RDMA transaction using Short Messages:

     Requester                             Responder
        |               RDMA Send (RDMA_MSG)    |
   Call |   ------------------------------>     |
        |                                       |
        |                                       | Processing
        |                                       |
        |               RDMA Send (RDMA_MSG)    |
        |   <------------------------------     |   Reply

3.5.2. Chunked Messages
If DDP-eligible data items are present in a Payload stream, a sender MAY reduce some or all of these items by removing them from the Payload stream. The sender uses a separate mechanism to transfer the reduced data items. The Transport stream with the reduced Payload stream immediately following is then transferred using a single RDMA Send operation. After receiving the Transport and Payload streams of an RPC Call message accompanied by Read chunks, the Responder uses RDMA Read operations to move reduced data items in Read chunks. Before sending the Transport and Payload streams of an RPC Reply message containing Write chunks, the Responder uses RDMA Write operations to move reduced data items in Write and Reply chunks.
An RPC-over-RDMA transaction with a Read chunk:

     Requester                             Responder
        |               RDMA Send (RDMA_MSG)    |
   Call |   ------------------------------>     |
        |               RDMA Read               |
        |   <------------------------------     |
        |               RDMA Response (arg data)|
        |   ------------------------------>     |
        |                                       |
        |                                       | Processing
        |                                       |
        |               RDMA Send (RDMA_MSG)    |
        |   <------------------------------     |   Reply

An RPC-over-RDMA transaction with a Write chunk:

     Requester                             Responder
        |               RDMA Send (RDMA_MSG)    |
   Call |   ------------------------------>     |
        |                                       |
        |                                       | Processing
        |                                       |
        |               RDMA Write (result data)|
        |   <------------------------------     |
        |               RDMA Send (RDMA_MSG)    |
        |   <------------------------------     |   Reply
When a Payload stream is larger than the receiver's inline threshold, the Payload stream is reduced by removing DDP-eligible data items and placing them in chunks to be moved separately. If there are no DDP-eligible data items in the Payload stream, or the Payload stream is still too large after it has been reduced, the RDMA transport MUST use RDMA Read or Write operations to convey the Payload stream itself. This mechanism is referred to as a "Long Message".

To transmit a Long Message, the sender conveys only the Transport stream with an RDMA Send operation. The Payload stream is not included in the Send buffer in this instance. Instead, the Requester provides chunks that the Responder uses to move the Payload stream.

Long Call
   To send a Long Call message, the Requester provides a special Read
   chunk that contains the RPC Call message's Payload stream. Every
   RDMA read segment in this chunk MUST contain zero in its Position
   field. Thus, this chunk is known as a "Position Zero Read chunk".
Long Reply
   To send a Long Reply, the Requester provides a single special
   Write chunk in advance, known as the "Reply chunk", that will
   contain the RPC Reply message's Payload stream. The Requester
   sizes the Reply chunk to accommodate the maximum expected reply
   size for that upper-layer operation.

Though the purpose of a Long Message is to handle large RPC messages, Requesters MAY use a Long Message at any time to convey an RPC Call message.

A Responder chooses which form of reply to use based on the chunks provided by the Requester. If Write chunks were provided and the Responder has a DDP-eligible result, it first reduces the reply Payload stream. If a Reply chunk was provided and the reduced Payload stream is larger than the reply inline threshold, the Responder MUST use the Requester-provided Reply chunk for the reply.

XDR data items may appear in these special chunks without regard to their DDP-eligibility. As these chunks contain a Payload stream, such chunks MUST include appropriate XDR roundup padding to maintain proper XDR alignment of their contents.

An RPC-over-RDMA transaction using a Long Call:

     Requester                             Responder
        |               RDMA Send (RDMA_NOMSG)  |
   Call |   ------------------------------>     |
        |               RDMA Read               |
        |   <------------------------------     |
        |               RDMA Response (RPC call)|
        |   ------------------------------>     |
        |                                       |
        |                                       | Processing
        |                                       |
        |               RDMA Send (RDMA_MSG)    |
        |   <------------------------------     |   Reply
An RPC-over-RDMA transaction using a Long Reply:

     Requester                             Responder
        |               RDMA Send (RDMA_MSG)    |
   Call |   ------------------------------>     |
        |                                       |
        |                                       | Processing
        |                                       |
        |               RDMA Write (RPC reply)  |
        |   <------------------------------     |
        |               RDMA Send (RDMA_NOMSG)  |
        |   <------------------------------     |   Reply
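As an illustrative summary of Sections 3.5.1 through 3.5.3 (not a normative algorithm; the enum, names, and policy are hypothetical), a sender's choice among Short, Chunked, and Long Messages might look like the following sketch, where both lengths include the Transport stream.

   #include <stdint.h>

   enum rpcrdma_msg_form {
           FORM_SHORT,    /* RDMA_MSG: whole Payload stream inline    */
           FORM_CHUNKED,  /* RDMA_MSG: reduced Payload stream inline  */
           FORM_LONG      /* RDMA_NOMSG: Payload stream moved via a
                           * Position Zero Read chunk or Reply chunk  */
   };

   /* One possible policy for choosing how to convey a message, given
    * the peer's inline threshold and the message size before and
    * after reduction of DDP-eligible data items. */
   static enum rpcrdma_msg_form
   choose_msg_form(uint32_t inline_threshold,
                   uint32_t msg_len, uint32_t reduced_msg_len)
   {
           if (msg_len <= inline_threshold)
                   return FORM_SHORT;
           if (reduced_msg_len < msg_len &&
               reduced_msg_len <= inline_threshold)
                   return FORM_CHUNKED;
           return FORM_LONG;
   }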