Internet Engineering Task Force (IETF)                         B. Halevy
Request for Comments: 8435
Category: Standards Track                                      T. Haynes
ISSN: 2070-1721                                              Hammerspace
                                                             August 2018

                Parallel NFS (pNFS) Flexible File Layout

Abstract
Parallel NFS (pNFS) allows a separation between the metadata (onto a metadata server) and data (onto a storage device) for a file. The flexible file layout type is defined in this document as an extension to pNFS that allows the use of storage devices that require only a limited degree of interaction with the metadata server and use already-existing protocols. Client-side mirroring is also added to provide replication of files.

Status of This Memo

This is an Internet Standards Track document.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8435.

Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction
   1.1. Definitions
   1.2. Requirements Language
2. Coupling of Storage Devices
   2.1. LAYOUTCOMMIT
   2.2. Fencing Clients from the Storage Device
        2.2.1. Implementation Notes for Synthetic uids/gids
        2.2.2. Example of Using Synthetic uids/gids
   2.3. State and Locking Models
        2.3.1. Loosely Coupled Locking Model
        2.3.2. Tightly Coupled Locking Model
3. XDR Description of the Flexible File Layout Type
   3.1. Code Components Licensing Notice
4. Device Addressing and Discovery
   4.1. ff_device_addr4
   4.2. Storage Device Multipathing
5. Flexible File Layout Type
   5.1. ff_layout4
        5.1.1. Error Codes from LAYOUTGET
        5.1.2. Client Interactions with FF_FLAGS_NO_IO_THRU_MDS
   5.2. LAYOUTCOMMIT
   5.3. Interactions between Devices and Layouts
   5.4. Handling Version Errors
6. Striping via Sparse Mapping
7. Recovering from Client I/O Errors
8. Mirroring
   8.1. Selecting a Mirror
   8.2. Writing to Mirrors
        8.2.1. Single Storage Device Updates Mirrors
        8.2.2. Client Updates All Mirrors
        8.2.3. Handling Write Errors
        8.2.4. Handling Write COMMITs
   8.3. Metadata Server Resilvering of the File
9. Flexible File Layout Type Return
   9.1. I/O Error Reporting
        9.1.1. ff_ioerr4
   9.2. Layout Usage Statistics
        9.2.1. ff_io_latency4
        9.2.2. ff_layoutupdate4
        9.2.3. ff_iostats4
   9.3. ff_layoutreturn4
10. Flexible File Layout Type LAYOUTERROR
11. Flexible File Layout Type LAYOUTSTATS
12. Flexible File Layout Type Creation Hint
    12.1. ff_layouthint4
13. Recalling a Layout
    13.1. CB_RECALL_ANY
14. Client Fencing
15. Security Considerations
    15.1. RPCSEC_GSS and Security Services
         15.1.1. Loosely Coupled
         15.1.2. Tightly Coupled
16. IANA Considerations
17. References
    17.1. Normative References
    17.2. Informative References
Acknowledgments
Authors' Addresses

1. Introduction
In Parallel NFS (pNFS), the metadata server returns layout type structures that describe where file data is located. There are different layout types for different storage systems and methods of arranging data on storage devices. This document defines the flexible file layout type used with file-based data servers that are accessed using the NFS protocols: NFSv3 [RFC1813], NFSv4.0 [RFC7530], NFSv4.1 [RFC5661], and NFSv4.2 [RFC7862].

To provide a global state model equivalent to that of the files layout type, a back-end control protocol might be implemented between the metadata server and NFSv4.1+ storage devices. An implementation can either define its own proprietary mechanism or it could define a control protocol in a Standards Track document. The requirements for a control protocol are specified in [RFC5661] and clarified in [RFC8434].

The control protocol described in this document is based on NFS. It does not provide for knowledge of stateids to be passed between the metadata server and the storage devices. Instead, the storage devices are configured such that the metadata server has full access rights to the data file system and then the metadata server uses synthetic ids to control client access to individual files.

In traditional mirroring of data, the server is responsible for replicating, validating, and repairing copies of the data file. With client-side mirroring, the metadata server provides a layout that presents the available mirrors to the client. The client then picks a mirror to read from and ensures that all writes go to all mirrors. The client only considers the write transaction to have succeeded if all mirrors are successfully updated. In case of error, the client can use the LAYOUTERROR operation to inform the metadata server, which is then responsible for the repairing of the mirrored copies of the file.
1.1. Definitions
control communication requirements: the specification for information on layouts, stateids, file metadata, and file data that must be communicated between the metadata server and the storage devices. There is a separate set of requirements for each layout type.

control protocol: the particular mechanism that an implementation of a layout type would use to meet the control communication requirement for that layout type. This need not be a protocol as normally understood. In some cases, the same protocol may be used as a control protocol and storage protocol.

client-side mirroring: a feature in which the client, not the server, is responsible for updating all of the mirrored copies of a layout segment.

(file) data: that part of the file system object that contains the data to be read or written. It is the contents of the object rather than the attributes of the object.

data server (DS): a pNFS server that provides the file's data when the file system object is accessed over a file-based protocol.

fencing: the process by which the metadata server prevents the storage devices from processing I/O from a specific client to a specific file.

file layout type: a layout type in which the storage devices are accessed via the NFS protocol (see Section 13 of [RFC5661]).

gid: the group id, a numeric value that identifies to which group a file belongs.

layout: the information a client uses to access file data on a storage device. This information includes specification of the protocol (layout type) and the identity of the storage devices to be used.

layout iomode: a grant of either read-only or read/write I/O to the client.

layout segment: a sub-division of a layout. That sub-division might be by the layout iomode (see Sections 3.3.20 and 12.2.9 of [RFC5661]), a striping pattern (see Section 13.3 of [RFC5661]), or requested byte range.
layout stateid: a 128-bit quantity returned by a server that uniquely defines the layout state provided by the server for a specific layout that describes a layout type and file (see Section 12.5.2 of [RFC5661]). Further, Section 12.5.3 of [RFC5661] describes differences in handling between layout stateids and other stateid types.

layout type: a specification of both the storage protocol used to access the data and the aggregation scheme used to lay out the file data on the underlying storage devices.

loose coupling: when the control protocol is a storage protocol.

(file) metadata: the part of the file system object that contains various descriptive data relevant to the file object, as opposed to the file data itself. This could include the time of last modification, access time, EOF position, etc.

metadata server (MDS): the pNFS server that provides metadata information for a file system object. It is also responsible for generating, recalling, and revoking layouts for file system objects, for performing directory operations, and for performing I/O operations to regular files when the clients direct these to the metadata server itself.

mirror: a copy of a layout segment. Note that if one copy of the mirror is updated, then all copies must be updated.

recalling a layout: a graceful recall, via a callback, of a specific layout by the metadata server to the client. Graceful here means that the client would have the opportunity to flush any WRITEs, etc., before returning the layout to the metadata server.

revoking a layout: an invalidation of a specific layout by the metadata server. Once revocation occurs, the metadata server will not accept as valid any reference to the revoked layout, and a storage device will not accept any client access based on the layout.

resilvering: the act of rebuilding a mirrored copy of a layout segment from a known good copy of the layout segment. Note that this can also be done to create a new mirrored copy of the layout segment.

rsize: the data transfer buffer size used for READs.
stateid: a 128-bit quantity returned by a server that uniquely defines the set of locking-related state provided by the server. Stateids may designate state related to open files, byte-range locks, delegations, or layouts.

storage device: the target to which clients may direct I/O requests when they hold an appropriate layout. See Section 2.1 of [RFC8434] for further discussion of the difference between a data server and a storage device.

storage protocol: the protocol used by clients to do I/O operations to the storage device. Each layout type specifies the set of storage protocols.

tight coupling: an arrangement in which the control protocol is one designed specifically for control communication. It may be either a proprietary protocol adapted specifically to a particular metadata server or a protocol based on a Standards Track document.

uid: the user id, a numeric value that identifies which user owns a file.

wsize: the data transfer buffer size used for WRITEs.

1.2. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

2. Coupling of Storage Devices
A server implementation may choose either a loosely coupled model or a tightly coupled model between the metadata server and the storage devices. [RFC8434] describes the general problems facing pNFS implementations. This document details how the new flexible file layout type addresses these issues.

To implement the tightly coupled model, a control protocol has to be defined. As the flexible file layout imposes no special requirements on the client, the control protocol will need to provide: (1) management of both security and LAYOUTCOMMITs and (2) a global stateid model and management of these stateids.
When implementing the loosely coupled model, the only control protocol will be a version of NFS, with no ability to provide a global stateid model or to prevent clients from using layouts inappropriately. To enable client use in that environment, this document will specify how security, state, and locking are to be managed.

2.1. LAYOUTCOMMIT
Regardless of the coupling model, the metadata server has the responsibility, upon receiving a LAYOUTCOMMIT (see Section 18.42 of [RFC5661]), to ensure that the semantics of pNFS are respected (see Section 3.1 of [RFC8434]). This includes the requirement that data written to a data storage device be stable before the LAYOUTCOMMIT occurs.

It is the responsibility of the client to make sure the data file is stable before the metadata server begins to query the storage devices about the changes to the file. If any WRITE to a storage device did not result in stable_how equal to FILE_SYNC, a LAYOUTCOMMIT to the metadata server MUST be preceded by a COMMIT to the storage devices written to. Note that if the client has not done a COMMIT to the storage device, then the LAYOUTCOMMIT might not be synchronized to the last WRITE operation to the storage device.
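The following non-normative sketch illustrates this client-side ordering rule. The helpers nfs_write(), nfs_commit(), and layoutcommit() are hypothetical stand-ins for the client's actual WRITE, COMMIT, and LAYOUTCOMMIT machinery; only the ordering logic is of interest here.

   # Non-normative sketch: nfs_write(), nfs_commit(), and layoutcommit()
   # are hypothetical helpers, not operations defined by this document.
   FILE_SYNC = 2        # stable_how value meaning the WRITE is already stable

   def flush_before_layoutcommit(metadata_server, writes):
       """Make file data stable on the storage devices before LAYOUTCOMMIT."""
       needs_commit = set()
       for device, offset, data, stable_how in writes:
           committed = nfs_write(device, offset, data, stable_how)
           if committed != FILE_SYNC:
               # The storage device did not commit this WRITE to stable
               # storage, so a COMMIT is required before LAYOUTCOMMIT.
               needs_commit.add(device)
       for device in needs_commit:
           nfs_commit(device)
       # Only now may the metadata server be told to query the storage
       # devices about changes to the file.
       layoutcommit(metadata_server)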
2.2. Fencing Clients from the Storage Device

With loosely coupled storage devices, the metadata server uses synthetic uids (user ids) and gids (group ids) for the data file, where the uid owner of the data file is allowed read/write access and the gid owner is allowed read-only access. As part of the layout (see ffds_user and ffds_group in Section 5.1), the client is provided with the user and group to be used in the Remote Procedure Call (RPC) [RFC5531] credentials needed to access the data file. Fencing off of clients is achieved by the metadata server changing the synthetic uid and/or gid owners of the data file on the storage device to implicitly revoke the outstanding RPC credentials. A client presenting the wrong credential for the desired access will get an NFS4ERR_ACCESS error.

With this loosely coupled model, the metadata server is not able to fence off a single client; it is forced to fence off all clients. However, as the other clients react to the fencing, returning their layouts and trying to get new ones, the metadata server can hand out a new uid and gid to allow access.
It is RECOMMENDED to implement common access control methods at the storage device file system to allow only the metadata server root (super user) access to the storage device and to set the owner of all directories holding data files to the root user. This approach provides a practical model to enforce access control and fence off cooperative clients, but it cannot protect against malicious clients; hence, it provides a level of security equivalent to AUTH_SYS.

It is RECOMMENDED that the communication between the metadata server and storage device be secure from eavesdroppers and man-in-the-middle protocol tampering. The security measure could be physical security (e.g., the servers are co-located in a physically secure area), encrypted communications, or some other technique.

With tightly coupled storage devices, the metadata server sets the user and group owners, mode bits, and Access Control List (ACL) of the data file to be the same as the metadata file. The client must also authenticate with the storage device and go through the same authorization process it would go through via the metadata server. In the case of tight coupling, fencing is the responsibility of the control protocol and is not described in detail in this document. However, implementations of the tightly coupled locking model (see Section 2.3) will need a way to prevent access by certain clients to specific files by invalidating the corresponding stateids on the storage device. In such a scenario, the client will be given an error of NFS4ERR_BAD_STATEID.

The client need not know the model used between the metadata server and the storage device. It need only react consistently to any errors in interacting with the storage device. It should both return the layout and error to the metadata server and ask for a new layout. At that point, the metadata server can either hand out a new layout, hand out no layout (forcing the I/O through it), or deny the client further access to the file.

2.2.1. Implementation Notes for Synthetic uids/gids
The selection method for the synthetic uids and gids to be used for fencing in loosely coupled storage devices is strictly an implementation issue. That is, an administrator might restrict a range of such ids available to the Lightweight Directory Access Protocol (LDAP) 'uid' field [RFC4519]. The administrator might also be able to choose an id that would never be used to grant access. Then, when the metadata server had a request to access a file, a SETATTR would be sent to the storage device to set the owner and group of the data file. The user and group might be selected in a round-robin fashion from the range of available ids.
Those ids would be sent back as ffds_user and ffds_group to the client, who would present them as the RPC credentials to the storage device. When the client is done accessing the file and the metadata server knows that no other client is accessing the file, it can reset the owner and group to restrict access to the data file.

When the metadata server wants to fence off a client, it changes the synthetic uid and/or gid to the restricted ids. Note that using a restricted id ensures that there is a change of owner and at least one id available that never gets allowed access.

Under an AUTH_SYS security model, synthetic uids and gids of 0 SHOULD be avoided. These typically either grant super access to files on a storage device or are mapped to an anonymous id. In the first case, even if the data file is fenced, the client might still be able to access the file. In the second case, multiple ids might be mapped to the anonymous ids.
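Putting the pieces of this section together, the fragment below sketches one way a metadata server might implement the selection and fencing scheme described above. The id range, the restricted id, and the setattr_owner() helper are assumptions made for the example and are not defined by this document.

   import itertools

   # Hypothetical administrative range of synthetic ids, configured so that
   # no id in the range maps to a real user or group in any name service.
   AVAILABLE_IDS = range(19000, 20000)
   RESTRICTED_ID = 65533          # an id that is never granted access
   _id_cursor = itertools.cycle(AVAILABLE_IDS)

   def grant_access(storage_device, data_file):
       """Select synthetic owners and set them on the data file via SETATTR."""
       uid = next(_id_cursor)
       gid = next(_id_cursor)
       setattr_owner(storage_device, data_file, uid, gid)   # hypothetical helper
       # These values are handed to the client as ffds_user and ffds_group.
       return uid, gid

   def fence(storage_device, data_file):
       """Implicitly revoke outstanding RPC credentials for the data file."""
       setattr_owner(storage_device, data_file, RESTRICTED_ID, RESTRICTED_ID)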
2.2.2. Example of Using Synthetic uids/gids

The user loghyr creates a file "ompha.c" on the metadata server, which then creates a corresponding data file on the storage device.

The metadata server entry may look like:

   -rw-r--r--  1 loghyr  staff  1697 Dec  4 11:31 ompha.c

On the storage device, the file may be assigned some unpredictable synthetic uid/gid to deny access:

   -rw-r-----  1 19452  28418  1697 Dec  4 11:31 data_ompha.c

When the file is opened on a client and accessed, the user will try to get a layout for the data file. Since the layout knows nothing about the user (and does not care), it does not matter whether the user loghyr or garbo opens the file. The client has to present a uid of 19452 to get write permission. If it presents any other value for the uid, then it must give a gid of 28418 to get read access.

Further, if the metadata server decides to fence the file, it should change the uid and/or gid such that these values neither match earlier values for that file nor match a predictable change based on an earlier fencing.

   -rw-r-----  1 19453  28419  1697 Dec  4 11:31 data_ompha.c
The set of synthetic gids on the storage device should be selected such that there is no mapping in any of the name services used by the storage device, i.e., each group should have no members.

If the layout segment has an iomode of LAYOUTIOMODE4_READ, then the metadata server should return a synthetic uid that is not set on the storage device. Only the synthetic gid would be valid.

The client is thus solely responsible for enforcing file permissions in a loosely coupled model. To allow loghyr write access, it will send an RPC to the storage device with a credential of 1066:1067. To allow garbo read access, it will send an RPC to the storage device with a credential of 1067:1067. The value of the uid does not matter as long as it is not the synthetic uid granted when getting the layout.

While pushing the enforcement of permission checking onto the client may seem to weaken security, the client may already be responsible for enforcing permissions before modifications are sent to a server. With cached writes, the client is always responsible for tracking who is modifying a file and making sure to not coalesce requests from multiple users into one request.
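For illustration, a client in the loosely coupled model might derive the RPC credential for a layout segment as sketched below. The iomode constants match [RFC5661], while the local permission check mentioned in the comment is an assumption about the client's own enforcement and not something defined by this layout type.

   LAYOUTIOMODE4_READ = 1
   LAYOUTIOMODE4_RW   = 2

   def credential_for_segment(iomode, ffds_user, ffds_group, caller_uid):
       """Pick the AUTH_SYS credential to present to the storage device."""
       # The client has already checked the caller's permissions against the
       # metadata file; the synthetic ids below only gate the data file.
       if iomode == LAYOUTIOMODE4_RW:
           # Writes require the synthetic uid owner of the data file.
           return (ffds_user, ffds_group)
       # Reads need only the synthetic gid; any uid other than the synthetic
       # write uid will do, so the caller's own uid is used here.
       return (caller_uid, ffds_group)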
2.3. State and Locking Models

An implementation can always be deployed as a loosely coupled model. There is, however, no way for a storage device to indicate over an NFS protocol that it can definitively participate in a tightly coupled model:

o Storage devices implementing the NFSv3 and NFSv4.0 protocols are always treated as loosely coupled.

o NFSv4.1+ storage devices that do not return the EXCHGID4_FLAG_USE_PNFS_DS flag set in response to EXCHANGE_ID are indicating that they are to be treated as loosely coupled. From the locking viewpoint, they are treated in the same way as NFSv4.0 storage devices.

o NFSv4.1+ storage devices that do identify themselves with the EXCHGID4_FLAG_USE_PNFS_DS flag set in response to EXCHANGE_ID can potentially be tightly coupled. They would use a back-end control protocol to implement the global stateid model as described in [RFC5661].

A storage device would have to be either discovered or advertised over the control protocol to enable a tightly coupled model.
2.3.1. Loosely Coupled Locking Model
When locking-related operations are requested, they are primarily dealt with by the metadata server, which generates the appropriate stateids. When an NFSv4 version is used as the data access protocol, the metadata server may make stateid-related requests of the storage devices. However, it is not required to do so, and the resulting stateids are known only to the metadata server and the storage device.

Given this basic structure, locking-related operations are handled as follows:

o OPENs are dealt with by the metadata server. Stateids are selected by the metadata server and associated with the client ID describing the client's connection to the metadata server. The metadata server may need to interact with the storage device to locate the file to be opened, but no locking-related functionality need be used on the storage device. OPEN_DOWNGRADE and CLOSE only require local execution on the metadata server.

o Advisory byte-range locks can be implemented locally on the metadata server. As in the case of OPENs, the stateids associated with byte-range locks are assigned by the metadata server and only used on the metadata server.

o Delegations are assigned by the metadata server that initiates recalls when conflicting OPENs are processed. No storage device involvement is required.

o TEST_STATEID and FREE_STATEID are processed locally on the metadata server, without storage device involvement.

All I/O operations to the storage device are done using the anonymous stateid. Thus, the storage device has no information about the openowner and lockowner responsible for issuing a particular I/O operation. As a result:

o Mandatory byte-range locking cannot be supported because the storage device has no way of distinguishing I/O done on behalf of the lock owner from those done by others.

o Enforcement of share reservations is the responsibility of the client. Even though I/O is done using the anonymous stateid, the client must ensure that it has a valid stateid associated with the openowner.
In the event that a stateid is revoked, the metadata server is responsible for preventing client access, since it has no way of being sure that the client is aware that the stateid in question has been revoked.

As the client never receives a stateid generated by a storage device, there is no client lease on the storage device and no prospect of lease expiration, even when access is via NFSv4 protocols. Clients will have leases on the metadata server. In dealing with lease expiration, the metadata server may need to use fencing to prevent revoked stateids from being relied upon by a client unaware of the fact that they have been revoked.

2.3.2. Tightly Coupled Locking Model
When locking-related operations are requested, they are primarily dealt with by the metadata server, which generates the appropriate stateids. These stateids must be made known to the storage device using control protocol facilities, the details of which are not discussed in this document.

Given this basic structure, locking-related operations are handled as follows:

o OPENs are dealt with primarily on the metadata server. Stateids are selected by the metadata server and associated with the client ID describing the client's connection to the metadata server. The metadata server needs to interact with the storage device to locate the file to be opened and to make the storage device aware of the association between the metadata-server-chosen stateid and the client and openowner that it represents. OPEN_DOWNGRADE and CLOSE are executed initially on the metadata server, but the state change made must be propagated to the storage device.

o Advisory byte-range locks can be implemented locally on the metadata server. As in the case of OPENs, the stateids associated with byte-range locks are assigned by the metadata server and are available for use on the metadata server. Because I/O operations are allowed to present lock stateids, the metadata server needs the ability to make the storage device aware of the association between the metadata-server-chosen stateid and the corresponding open stateid it is associated with.

o Mandatory byte-range locks can be supported when both the metadata server and the storage devices have the appropriate support. As in the case of advisory byte-range locks, these are assigned by the metadata server and are available for use on the metadata server. To enable mandatory lock enforcement on the storage device, the metadata server needs the ability to make the storage device aware of the association between the metadata-server-chosen stateid and the client, openowner, and lock (i.e., lockowner, byte-range, and lock-type) that it represents. Because I/O operations are allowed to present lock stateids, this information needs to be propagated to all storage devices to which I/O might be directed rather than only to the storage devices that contain the locked region.

o Delegations are assigned by the metadata server that initiates recalls when conflicting OPENs are processed. Because I/O operations are allowed to present delegation stateids, the metadata server requires the ability (1) to make the storage device aware of the association between the metadata-server-chosen stateid and the filehandle and delegation type it represents and (2) to break such an association.

o TEST_STATEID is processed locally on the metadata server, without storage device involvement.

o FREE_STATEID is processed on the metadata server, but the metadata server requires the ability to propagate the request to the corresponding storage devices.

Because the client will possess and use stateids valid on the storage device, there will be a client lease on the storage device, and the possibility of lease expiration does exist. The best approach for the storage device is to retain these locks as a courtesy. However, if it does not do so, control protocol facilities need to provide the means to synchronize lock state between the metadata server and storage device.

Clients will also have leases on the metadata server that are subject to expiration. In dealing with lease expiration, the metadata server would be expected to use control protocol facilities enabling it to invalidate revoked stateids on the storage device. In the event the client is not responsive, the metadata server may need to use fencing to prevent revoked stateids from being acted upon by the storage device.

3. XDR Description of the Flexible File Layout Type
This document contains the External Data Representation (XDR) [RFC4506] description of the flexible file layout type. The XDR description is embedded in this document in a way that makes it simple for the reader to extract into a ready-to-compile form. The
reader can feed this document into the following shell script to produce the machine-readable XDR description of the flexible file layout type:

<CODE BEGINS>

#!/bin/sh
grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'

<CODE ENDS>

That is, if the above script is stored in a file called "extract.sh" and this document is in a file called "spec.txt", then the reader can do:

sh extract.sh < spec.txt > flex_files_prot.x

The effect of the script is to remove leading white space from each line, plus a sentinel sequence of "///". The embedded XDR file header follows. Subsequent XDR descriptions with the sentinel sequence are embedded throughout the document.

Note that the XDR code contained in this document depends on types from the NFSv4.1 nfs4_prot.x file [RFC5662]. This includes both nfs types that end with a 4, such as offset4, length4, etc., as well as more generic types such as uint32_t and uint64_t.

3.1. Code Components Licensing Notice
Both the XDR description and the scripts used for extracting the XDR description are Code Components as described in Section 4 of "Trust Legal Provisions (TLP)" [LEGAL]. These Code Components are licensed according to the terms of that document.

<CODE BEGINS>

/// /*
///  * Copyright (c) 2018 IETF Trust and the persons identified
///  * as authors of the code.  All rights reserved.
///  *
///  * Redistribution and use in source and binary forms, with
///  * or without modification, are permitted provided that the
///  * following conditions are met:
///  *
///  * - Redistributions of source code must retain the above
///  *   copyright notice, this list of conditions and the
///  *   following disclaimer.
///  *
///  * - Redistributions in binary form must reproduce the above
///  *   copyright notice, this list of conditions and the
///  *   following disclaimer in the documentation and/or other
///  *   materials provided with the distribution.
///  *
///  * - Neither the name of Internet Society, IETF or IETF
///  *   Trust, nor the names of specific contributors, may be
///  *   used to endorse or promote products derived from this
///  *   software without specific prior written permission.
///  *
///  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS
///  * AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
///  * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
///  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
///  * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
///  * EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
///  * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
///  * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
///  * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
///  * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
///  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
///  * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
///  * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
///  * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
///  * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
///  *
///  * This code was derived from RFC 8435.
///  * Please reproduce this note if possible.
///  */
///
/// /*
///  * flex_files_prot.x
///  */
///
/// /*
///  * The following include statements are for example only.
///  * The actual XDR definition files are generated separately
///  * and independently and are likely to have a different name.
///  * %#include <nfsv42.x>
///  * %#include <rpc_prot.x>
///  */
///

<CODE ENDS>
4. Device Addressing and Discovery
Data operations to a storage device require the client to know the network address of the storage device. The NFSv4.1+ GETDEVICEINFO operation (Section 18.40 of [RFC5661]) is used by the client to retrieve that information.

4.1. ff_device_addr4
The ff_device_addr4 data structure is returned by the server as the layout-type-specific opaque field da_addr_body in the device_addr4 structure by a successful GETDEVICEINFO operation.

<CODE BEGINS>

/// struct ff_device_versions4 {
///         uint32_t        ffdv_version;
///         uint32_t        ffdv_minorversion;
///         uint32_t        ffdv_rsize;
///         uint32_t        ffdv_wsize;
///         bool            ffdv_tightly_coupled;
/// };
///
/// struct ff_device_addr4 {
///         multipath_list4     ffda_netaddrs;
///         ff_device_versions4 ffda_versions<>;
/// };
///

<CODE ENDS>

The ffda_netaddrs field is used to locate the storage device. It MUST be set by the server to a list holding one or more of the device network addresses.

The ffda_versions array allows the metadata server to present choices as to NFS version, minor version, and coupling strength to the client. The ffdv_version and ffdv_minorversion represent the NFS protocol to be used to access the storage device. This layout specification defines the semantics for ffdv_versions 3 and 4. If ffdv_version equals 3, then the server MUST set ffdv_minorversion to 0 and ffdv_tightly_coupled to false. The client MUST then access the storage device using the NFSv3 protocol [RFC1813]. If ffdv_version equals 4, then the server MUST set ffdv_minorversion to one of the NFSv4 minor version numbers, and the client MUST access the storage device using NFSv4 with the specified minor version.
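The sketch below, which is not part of the protocol, shows how a client might scan ffda_versions for a combination it implements; the set of versions supported by the client is an assumption made for the example.

   from collections import namedtuple

   # Mirror of the ff_device_versions4 fields, for illustration only.
   FFDeviceVersions = namedtuple(
       "FFDeviceVersions",
       ["ffdv_version", "ffdv_minorversion", "ffdv_rsize", "ffdv_wsize",
        "ffdv_tightly_coupled"])

   # (version, minorversion) pairs this hypothetical client can speak.
   CLIENT_SUPPORTED = {(3, 0), (4, 1), (4, 2)}

   def choose_version(ffda_versions):
       """Return the index of a usable ffda_versions entry, or None."""
       for i, v in enumerate(ffda_versions):
           if (v.ffdv_version, v.ffdv_minorversion) in CLIENT_SUPPORTED:
               return i
       # No usable combination: the mismatch can only be reported once a
       # layout identifies the device (see Section 5.4), or the client can
       # fall back to I/O through the metadata server.
       return None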
Note that while the client might determine, when it gets the device list from the metadata server, that it cannot use any of the configured combinations of ffdv_version, ffdv_minorversion, and ffdv_tightly_coupled, there is no way for it to indicate to the metadata server which device it is version incompatible with. However, if the client waits until it retrieves the layout from the metadata server, it can at that time clearly identify the storage device in question (see Section 5.4).

The ffdv_rsize and ffdv_wsize are used to communicate the maximum rsize and wsize supported by the storage device. As the storage device can have a different rsize or wsize than the metadata server, the ffdv_rsize and ffdv_wsize allow the metadata server to communicate that information on behalf of the storage device.

ffdv_tightly_coupled informs the client as to whether or not the metadata server is tightly coupled with the storage devices. Note that even if the data protocol is at least NFSv4.1, it may still be the case that there is loose coupling in effect. If ffdv_tightly_coupled is not set, then the client MUST commit writes to the storage devices for the file before sending a LAYOUTCOMMIT to the metadata server. That is, the writes MUST be committed by the client to stable storage via issuing WRITEs with stable_how == FILE_SYNC or by issuing a COMMIT after WRITEs with stable_how != FILE_SYNC (see Section 3.3.7 of [RFC1813]).

4.2. Storage Device Multipathing
The flexible file layout type supports multipathing to multiple storage device addresses. Storage-device-level multipathing is used for bandwidth scaling via trunking and for higher availability of use in the event of a storage device failure. Multipathing allows the client to switch to another storage device address that may be that of another storage device that is exporting the same data stripe unit, without having to contact the metadata server for a new layout.

To support storage device multipathing, ffda_netaddrs contains an array of one or more storage device network addresses. This array (data type multipath_list4) represents a list of storage devices (each identified by a network address), with the possibility that some storage device will appear in the list multiple times.

The client is free to use any of the network addresses as a destination to send storage device requests. If some network addresses are less desirable paths to the data than others, then the metadata server SHOULD NOT include those network addresses in ffda_netaddrs. If less desirable network addresses exist to provide failover, the RECOMMENDED method to offer the addresses is to provide
them in a replacement device-ID-to-device-address mapping or a replacement device ID.

When a client finds no response from the storage device using all addresses available in ffda_netaddrs, it SHOULD send a GETDEVICEINFO to attempt to replace the existing device-ID-to-device-address mappings. If the metadata server detects that all network paths represented by ffda_netaddrs are unavailable, the metadata server SHOULD send a CB_NOTIFY_DEVICEID (if the client has indicated it wants device ID notifications for changed device IDs) to change the device-ID-to-device-address mappings to the available addresses. If the device ID itself will be replaced, the metadata server SHOULD recall all layouts with the device ID and thus force the client to get new layouts and device ID mappings via LAYOUTGET and GETDEVICEINFO.

Generally, if two network addresses appear in ffda_netaddrs, they will designate the same storage device. When the storage device is accessed over NFSv4.1 or a higher minor version, the two storage device addresses will support the implementation of client ID or session trunking (the latter is RECOMMENDED) as defined in [RFC5661]. The two storage device addresses will share the same server owner or major ID of the server owner. It is not always necessary for the two storage device addresses to designate the same storage device with trunking being used. For example, the data could be read-only, and the data consist of exact replicas.
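A non-normative outline of this failover behavior follows; send_request() and getdeviceinfo() are placeholders for the client's RPC paths rather than operations defined here.

   def io_with_failover(client, deviceid, ffda_netaddrs, request):
       """Try every address for the device; refresh the mapping if all fail."""
       for addr in ffda_netaddrs:
           try:
               return send_request(addr, request)      # placeholder helper
           except ConnectionError:
               continue                                # try the next address
       # No address responded: ask for a replacement
       # device-ID-to-device-address mapping before giving up.
       refreshed = getdeviceinfo(client, deviceid)     # placeholder helper
       for addr in refreshed.ffda_netaddrs:
           try:
               return send_request(addr, request)
           except ConnectionError:
               continue
       raise IOError("storage device unreachable on all known addresses")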
5. Flexible File Layout Type

The original layouttype4 introduced in [RFC5662] is modified to be:

<CODE BEGINS>

enum layouttype4 {
    LAYOUT4_NFSV4_1_FILES   = 1,
    LAYOUT4_OSD2_OBJECTS    = 2,
    LAYOUT4_BLOCK_VOLUME    = 3,
    LAYOUT4_FLEX_FILES      = 4
};

struct layout_content4 {
    layouttype4             loc_type;
    opaque                  loc_body<>;
};
struct layout4 {
    offset4                 lo_offset;
    length4                 lo_length;
    layoutiomode4           lo_iomode;
    layout_content4         lo_content;
};

<CODE ENDS>

This document defines structures associated with the layouttype4 value LAYOUT4_FLEX_FILES.

[RFC5661] specifies the loc_body structure as an XDR type "opaque". The opaque layout is uninterpreted by the generic pNFS client layers but is interpreted by the flexible file layout type implementation. This section defines the structure of this otherwise opaque value, ff_layout4.

5.1. ff_layout4
<CODE BEGINS>

/// const FF_FLAGS_NO_LAYOUTCOMMIT   = 0x00000001;
/// const FF_FLAGS_NO_IO_THRU_MDS    = 0x00000002;
/// const FF_FLAGS_NO_READ_IO        = 0x00000004;
/// const FF_FLAGS_WRITE_ONE_MIRROR  = 0x00000008;
/// typedef uint32_t ff_flags4;
///
/// struct ff_data_server4 {
///     deviceid4               ffds_deviceid;
///     uint32_t                ffds_efficiency;
///     stateid4                ffds_stateid;
///     nfs_fh4                 ffds_fh_vers<>;
///     fattr4_owner            ffds_user;
///     fattr4_owner_group      ffds_group;
/// };
///
/// struct ff_mirror4 {
///     ff_data_server4         ffm_data_servers<>;
/// };
///
/// struct ff_layout4 {
///     length4                 ffl_stripe_unit;
///     ff_mirror4              ffl_mirrors<>;
///     ff_flags4               ffl_flags;
///     uint32_t                ffl_stats_collect_hint;
/// };
///

<CODE ENDS>

The ff_layout4 structure specifies a layout in that portion of the data file described in the current layout segment. It is either a single instance or a set of mirrored copies of that portion of the data file. When mirroring is in effect, it protects against loss of data in layout segments.

While not explicitly shown in the above XDR, each layout4 element returned in the logr_layout array of LAYOUTGET4res (see Section 18.43.2 of [RFC5661]) describes a layout segment. Hence, each ff_layout4 also describes a layout segment. It is possible that the file is concatenated from more than one layout segment. Each layout segment MAY represent different striping parameters.

The ffl_stripe_unit field is the stripe unit size in use for the current layout segment. The number of stripes is given inside each mirror by the number of elements in ffm_data_servers. If the number of stripes is one, then the value for ffl_stripe_unit MUST default to zero. The only supported mapping scheme is sparse and is detailed in Section 6. Note that there is an assumption here that both the stripe unit size and the number of stripes are the same across all mirrors.

The ffl_mirrors field is the array of mirrored storage devices that provide the storage for the current stripe; see Figure 1.

The ffl_stats_collect_hint field provides a hint to the client on how often the server wants it to report LAYOUTSTATS for a file. The time is in seconds.
                +-----------+
                |           |
                |           |
                |   File    |
                |           |
                |           |
                +-----+-----+
                      |
         +------------+------------+
         |                         |
    +----+-----+             +-----+----+
    | Mirror 1 |             | Mirror 2 |
    +----+-----+             +-----+----+
         |                         |
   +-----------+             +-----------+
   |+-----------+            |+-----------+
   ||+-----------+           ||+-----------+
   +|| Storage  |            +|| Storage  |
    +| Devices  |             +| Devices  |
     +-----------+             +-----------+

                   Figure 1

The ffl_mirrors field represents an array of state information for each mirrored copy of the current layout segment. Each element is described by a ff_mirror4 type.

ffds_deviceid provides the deviceid of the storage device holding the data file.

ffds_fh_vers is an array of filehandles of the data file matching the available NFS versions on the given storage device. There MUST be exactly as many elements in ffds_fh_vers as there are in ffda_versions. Each element of the array corresponds to a particular combination of ffdv_version, ffdv_minorversion, and ffdv_tightly_coupled provided for the device. The array allows for server implementations that have different filehandles for different combinations of version, minor version, and coupling strength. See Section 5.4 for how to handle versioning issues between the client and storage devices.

For tight coupling, ffds_stateid provides the stateid to be used by the client to access the file. For loose coupling and an NFSv4 storage device, the client will have to use an anonymous stateid to perform I/O on the storage device. With no control protocol, the metadata server stateid cannot be used to provide a global stateid model. Thus, the server MUST set the ffds_stateid to be the anonymous stateid.
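Tying this to the version selection sketched in Section 4.1, a client might pick the filehandle and stateid for its I/O as outlined below; ANONYMOUS_STATEID stands for the NFSv4 special anonymous stateid, and the structures are simplified for the example.

   ANONYMOUS_STATEID = b"\x00" * 16   # the NFSv4 special anonymous stateid

   def io_parameters(ff_data_server, version_index, ffdv_tightly_coupled):
       """Select the filehandle and stateid matching the chosen version entry."""
       # ffds_fh_vers has exactly one element per ffda_versions entry, so the
       # index chosen during version negotiation also selects the filehandle.
       fh = ff_data_server.ffds_fh_vers[version_index]
       if ffdv_tightly_coupled:
           stateid = ff_data_server.ffds_stateid    # global stateid from the MDS
       else:
           stateid = ANONYMOUS_STATEID              # loose coupling
       return fh, stateid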
This specification of the ffds_stateid restricts both models for NFSv4.x storage protocols:

loosely coupled model: the stateid has to be an anonymous stateid

tightly coupled model: the stateid has to be a global stateid

A number of issues stem from a mismatch between the fact that ffds_stateid is defined as a single item while ffds_fh_vers is defined as an array. It is possible for each open file on the storage device to require its own open stateid. Because there are established loosely coupled implementations of the version of the protocol described in this document, such potential issues have not been addressed here. It is possible for future layout types to be defined that address these issues, should it become important to provide multiple stateids for the same underlying file.

For loosely coupled storage devices, ffds_user and ffds_group provide the synthetic user and group to be used in the RPC credentials that the client presents to the storage device to access the data files. For tightly coupled storage devices, the user and group on the storage device will be the same as on the metadata server; that is, if ffdv_tightly_coupled (see Section 4.1) is set, then the client MUST ignore both ffds_user and ffds_group.

The allowed values for both ffds_user and ffds_group are specified as owner and owner_group, respectively, in Section 5.9 of [RFC5661]. For NFSv3 compatibility, user and group strings that consist of decimal numeric values with no leading zeros can be given a special interpretation by clients and servers that choose to provide such support. The receiver may treat such a user or group string as representing the same user as would be represented by an NFSv3 uid or gid having the corresponding numeric value. Note that if using Kerberos for security, the expectation is that these values will be a name@domain string.

ffds_efficiency describes the metadata server's evaluation as to the effectiveness of each mirror. Note that this is per layout and not per device as the metric may change due to perceived load, availability to the metadata server, etc. Higher values denote higher perceived utility. The way the client can select the best mirror to access is discussed in Section 8.1.

ffl_flags is a bitmap that allows the metadata server to inform the client of particular conditions that may result from more or less tight coupling of the storage devices.
FF_FLAGS_NO_LAYOUTCOMMIT: can be set to indicate that the client is not required to send LAYOUTCOMMIT to the metadata server.

FF_FLAGS_NO_IO_THRU_MDS: can be set to indicate that the client should not send I/O operations to the metadata server. That is, even if the client could determine that there was a network disconnect to a storage device, the client should not try to proxy the I/O through the metadata server.

FF_FLAGS_NO_READ_IO: can be set to indicate that the client should not send READ requests with the layouts of iomode LAYOUTIOMODE4_RW. Instead, it should request a layout of iomode LAYOUTIOMODE4_READ from the metadata server.

FF_FLAGS_WRITE_ONE_MIRROR: can be set to indicate that the client only needs to update one of the mirrors (see Section 8.2).
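For illustration only, the tests below show how a client could consult these bits when deciding how to behave; the constants repeat the XDR values above, and the surrounding policy is an assumption for the example.

   FF_FLAGS_NO_LAYOUTCOMMIT  = 0x00000001
   FF_FLAGS_NO_IO_THRU_MDS   = 0x00000002
   FF_FLAGS_NO_READ_IO       = 0x00000004
   FF_FLAGS_WRITE_ONE_MIRROR = 0x00000008

   def layoutcommit_needed(ffl_flags):
       return not (ffl_flags & FF_FLAGS_NO_LAYOUTCOMMIT)

   def prefer_proxy_through_mds(ffl_flags):
       # FF_FLAGS_NO_IO_THRU_MDS is only a hint; the client may still fall
       # back to the metadata server if it has no other choice.
       return not (ffl_flags & FF_FLAGS_NO_IO_THRU_MDS)

   def read_with_rw_layout_allowed(ffl_flags):
       return not (ffl_flags & FF_FLAGS_NO_READ_IO)

   def must_update_all_mirrors(ffl_flags):
       return not (ffl_flags & FF_FLAGS_WRITE_ONE_MIRROR)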
5.1.1. Error Codes from LAYOUTGET

[RFC5661] provides little guidance as to how the client is to proceed with a LAYOUTGET that returns an error of NFS4ERR_LAYOUTTRYLATER, NFS4ERR_LAYOUTUNAVAILABLE, or NFS4ERR_DELAY. Within the context of this document:

NFS4ERR_LAYOUTUNAVAILABLE: there is no layout available and the I/O is to go to the metadata server. Note that it is possible to have had a layout before a recall and not after.

NFS4ERR_LAYOUTTRYLATER: there is some issue preventing the layout from being granted. If the client already has an appropriate layout, it should continue with I/O to the storage devices.

NFS4ERR_DELAY: there is some issue preventing the layout from being granted. If the client already has an appropriate layout, it should not continue with I/O to the storage devices.

5.1.2. Client Interactions with FF_FLAGS_NO_IO_THRU_MDS
Even if the metadata server provides the FF_FLAGS_NO_IO_THRU_MDS flag, the client can still perform I/O to the metadata server. The flag functions as a hint. The flag indicates to the client that the metadata server prefers to separate the metadata I/O from the data I/O, most likely for performance reasons.
5.2. LAYOUTCOMMIT
The flexible file layout does not use lou_body inside the loca_layoutupdate argument to LAYOUTCOMMIT. If lou_type is LAYOUT4_FLEX_FILES, the lou_body field MUST have a zero length (see Section 18.42.1 of [RFC5661]).

5.3. Interactions between Devices and Layouts
In [RFC5661], the file layout type is defined such that the relationship between multipathing and filehandles can result in either 0, 1, or N filehandles (see Section 13.3). Some rationales for this are clustered servers that share the same filehandle or allow for multiple read-only copies of the file on the same storage device.

In the flexible file layout type, while there is an array of filehandles, they are independent of the multipathing being used. If the metadata server wants to provide multiple read-only copies of the same file on the same storage device, then it should provide multiple mirrored instances, each with a different ff_device_addr4. The client can then determine that, since each of the ffds_fh_vers is different, there are multiple copies of the file for the current layout segment available.

5.4. Handling Version Errors
When the metadata server provides the ffda_versions array in the ff_device_addr4 (see Section 4.1), the client is able to determine whether or not it can access a storage device with any of the supplied combinations of ffdv_version, ffdv_minorversion, and ffdv_tightly_coupled. However, due to the limitations of reporting errors in GETDEVICEINFO (see Section 18.40 in [RFC5661]), the client is not able to specify which specific device it cannot communicate with over one of the provided ffdv_version and ffdv_minorversion combinations.

Using ff_ioerr4 (see Section 9.1.1) inside either the LAYOUTRETURN (see Section 18.44 of [RFC5661]) or the LAYOUTERROR (see Section 15.6 of [RFC7862] and Section 10 of this document), the client can isolate the problematic storage device.

The error code to return for LAYOUTRETURN and/or LAYOUTERROR is NFS4ERR_MINOR_VERS_MISMATCH. It does not matter whether the mismatch is a major version (e.g., the client can use NFSv3 but not NFSv4) or a minor version (e.g., the client can use NFSv4.1 but not NFSv4.2); the error indicates that, for all the supplied combinations of ffdv_version and ffdv_minorversion, the client cannot communicate with the storage device. The client can retry the GETDEVICEINFO to see if the metadata server can provide a different combination, or it can fall back to doing the I/O through the metadata server.
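As a rough, non-normative outline of this recovery path (layouterror(), getdeviceinfo(), and usable_version() are placeholders introduced for the example, not protocol elements):

   NFS4ERR_MINOR_VERS_MISMATCH = 10021

   def handle_version_mismatch(client, layout, deviceid):
       """Report an unusable storage device and attempt to recover."""
       # Identify the problematic device to the metadata server via
       # LAYOUTERROR (or LAYOUTRETURN) carrying NFS4ERR_MINOR_VERS_MISMATCH.
       layouterror(client, layout, deviceid, NFS4ERR_MINOR_VERS_MISMATCH)
       # Retry GETDEVICEINFO in case the metadata server can offer another
       # ffdv_version/ffdv_minorversion combination ...
       info = getdeviceinfo(client, deviceid)
       if usable_version(info.ffda_versions) is not None:
           return info
       # ... otherwise fall back to doing the I/O through the metadata server.
       return None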