DTLS 1.3 reuses the TLS 1.3 handshake messages and flows, with the following changes:
-  To handle message loss, reordering, and fragmentation, modifications to the handshake header are necessary.
-  Retransmission timers are introduced to handle message loss.
-  A new ACK content type has been added for reliable message delivery of handshake messages.
In addition, DTLS reuses TLS 1.3's "cookie" extension to provide a return-routability check as part of connection establishment. This is an important DoS prevention mechanism for UDP-based protocols; it is unnecessary for TCP-based protocols, because TCP establishes return-routability as part of connection establishment.
DTLS implementations do not use the TLS 1.3 "compatibility mode" described in Appendix D.4 of [TLS13]. DTLS servers MUST NOT echo the "legacy_session_id" value from the client, and endpoints MUST NOT send ChangeCipherSpec messages.
With these exceptions, the DTLS message formats, flows, and logic are the same as those of TLS 1.3.
Datagram security protocols are extremely susceptible to a variety of DoS attacks. Two attacks are of particular concern:
-  An attacker can consume excessive resources on the server by transmitting a series of handshake initiation requests, causing the server to allocate state and potentially to perform expensive cryptographic operations.
-  An attacker can use the server as an amplifier by sending connection initiation messages with a forged source address that belongs to a victim. The server then sends its response to the victim machine, thus flooding it. Depending on the selected parameters, this response message can be quite large, as is the case for a Certificate message.
In order to counter both of these attacks, DTLS borrows the stateless cookie technique used by Photuris [RFC 2522] and IKE [RFC 7296]. When the client sends its ClientHello message to the server, the server MAY respond with a HelloRetryRequest message. The HelloRetryRequest message, as well as the "cookie" extension, is defined in TLS 1.3. The HelloRetryRequest message contains a stateless cookie (see Section 4.2.2 of [TLS13]). The client MUST send a new ClientHello with the cookie added as an extension. The server then verifies the cookie and proceeds with the handshake only if it is valid. This mechanism forces the attacker/client to be able to receive the cookie, which makes DoS attacks with spoofed IP addresses difficult. This mechanism does not provide any defense against DoS attacks mounted from valid IP addresses.
The DTLS 1.3 specification changes how cookies are exchanged compared to DTLS 1.2. DTLS 1.3 reuses the HelloRetryRequest message and conveys the cookie to the client via an extension. The client receiving the cookie uses the same extension to place the cookie subsequently into a ClientHello message. DTLS 1.2, on the other hand, used a separate message, namely the HelloVerifyRequest, to pass a cookie to the client and did not utilize the extension mechanism. For backwards compatibility reasons, the cookie field in the ClientHello is present in DTLS 1.3 but is ignored by a DTLS 1.3-compliant server implementation.
The exchange is shown in Figure 6. Note that the figure focuses on the cookie exchange; all other extensions are omitted.
Client                                            Server
------                                            ------

 ClientHello            -------->

                        <-------- HelloRetryRequest
                                   + cookie

 ClientHello            -------->
  + cookie

 [Rest of handshake]
The "cookie" extension is defined in Section 4.2.2 of [TLS13]. When sending the initial ClientHello, the client does not have a cookie yet. In this case, the "cookie" extension is omitted and the legacy_cookie field in the ClientHello message MUST be set to a zero-length vector (i.e., a zero-valued single byte length field).
When responding to a HelloRetryRequest, the client MUST create a new ClientHello message following the description in Section 4.1.2 of [TLS13].
If the HelloRetryRequest message is used, the initial ClientHello and the HelloRetryRequest are included in the calculation of the transcript hash. The computation of the message hash for the HelloRetryRequest is done according to the description in Section 4.4.1 of [TLS13].
The handshake transcript is not reset with the second ClientHello, so a stateless server-cookie implementation requires the content or hash of the initial ClientHello (and HelloRetryRequest) to be stored in the cookie. The initial ClientHello is included in the handshake transcript as a synthetic "message_hash" message, so only its hash value is needed for the handshake to complete; the complete HelloRetryRequest contents, however, are needed.
When the second ClientHello is received, the server can verify that the cookie is valid and that the client can receive packets at the given IP address. If the client's apparent IP address is embedded in the cookie, this prevents an attacker from generating an acceptable ClientHello apparently from another user.
One potential attack on this scheme is for the attacker to collect a number of cookies from different addresses where it controls endpoints and then reuse them to attack the server. The server can defend against this attack by changing the secret value frequently, thus invalidating those cookies. If the server wishes to allow legitimate clients to handshake through the transition (e.g., a client received a cookie with Secret 1 and then sent the second ClientHello after the server has changed to Secret 2), the server can have a limited window during which it accepts both secrets. [RFC 7296] suggests adding a key identifier to cookies to detect this case. An alternative approach is simply to try verifying with both secrets. It is RECOMMENDED that servers implement a key rotation scheme that allows the server to manage keys with overlapping lifetimes. Alternatively, the server can store timestamps in the cookie and reject cookies that were generated outside a certain interval of time.
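The rotation and timestamp defenses described above can be sketched as follows. This is a hypothetical illustration, not a mandated construction: the HMAC layout, the 60-second lifetime, and all names are assumptions.

```python
# Sketch of a stateless DTLS cookie: HMAC over the client address, a
# timestamp, and a hash of the ClientHello, with two-secret rotation.
# All names and parameters here are illustrative assumptions.
import hashlib
import hmac
import os
import time

COOKIE_LIFETIME = 60  # seconds a cookie stays valid (illustrative choice)

class CookieMinter:
    """Keeps the current and previous secret so cookies survive rotation."""

    def __init__(self):
        self.current = os.urandom(32)
        self.previous = self.current

    def rotate(self):
        """Invalidate old cookies gradually: keep one prior secret."""
        self.previous, self.current = self.current, os.urandom(32)

    def _mac(self, secret, client_addr, timestamp, hello_hash):
        msg = client_addr.encode() + timestamp.to_bytes(8, "big") + hello_hash
        return hmac.new(secret, msg, hashlib.sha256).digest()

    def mint(self, client_addr, hello_hash):
        ts = int(time.time())
        return ts.to_bytes(8, "big") + self._mac(self.current, client_addr, ts, hello_hash)

    def verify(self, cookie, client_addr, hello_hash):
        ts = int.from_bytes(cookie[:8], "big")
        if abs(time.time() - ts) > COOKIE_LIFETIME:
            return False  # reject cookies generated outside the time window
        # Try both secrets so legitimate clients can handshake through a rotation.
        return any(
            hmac.compare_digest(cookie[8:], self._mac(s, client_addr, ts, hello_hash))
            for s in (self.current, self.previous)
        )
```

Embedding the client address in the MAC input is what prevents a cookie collected at one address from being replayed from another.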
DTLS servers SHOULD perform a cookie exchange whenever a new handshake is being performed. If the server is being operated in an environment where amplification is not a problem, e.g., where ICE [RFC 8445] has been used to establish bidirectional connectivity, the server MAY be configured not to perform a cookie exchange. The default SHOULD be that the exchange is performed, however. In addition, the server MAY choose not to do a cookie exchange when a session is resumed or, more generically, when the DTLS handshake uses a PSK-based key exchange and the IP address matches one associated with the PSK. Servers which process 0-RTT requests and send 0.5-RTT responses without a cookie exchange risk being used in an amplification attack if the size of outgoing messages greatly exceeds the size of those that are received. A server SHOULD limit the amount of data it sends toward a client address to three times the amount of data sent by the client before it verifies that the client is able to receive data at that address. A client address is valid after a cookie exchange or handshake completion. Clients MUST be prepared to do a cookie exchange with every handshake. Note that cookies are only valid for the existing handshake and cannot be stored for future handshakes.
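The three-times limit can be enforced with a simple per-address byte budget. A minimal sketch, with invented class and method names:

```python
class AddressValidator:
    """Tracks the 3x amplification byte budget toward one client address.
    Names and structure are illustrative, not from the specification."""

    def __init__(self):
        self.received = 0      # bytes received from this address
        self.validated = False  # set after cookie exchange or handshake completion

    def on_receive(self, n):
        self.received += n

    def may_send(self, already_sent, n):
        # Until the address is validated, total bytes sent must stay
        # within three times the bytes received from that address.
        return self.validated or (already_sent + n) <= 3 * self.received
```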
If a server receives a ClientHello with an invalid cookie, it MUST terminate the handshake with an "illegal_parameter" alert. This allows the client to restart the connection from scratch without a cookie.
As described in Section 4.1.4 of [TLS13], clients MUST abort the handshake with an "unexpected_message" alert in response to any second HelloRetryRequest which was sent in the same connection (i.e., where the ClientHello was itself in response to a HelloRetryRequest).
DTLS clients which do not want to receive a Connection ID (CID) SHOULD still offer the "connection_id" extension [RFC 9146] unless there is an application profile to the contrary. This permits a server which wants to receive a CID to negotiate one.
DTLS uses the same Handshake messages as TLS 1.3. However, prior to transmission they are converted to DTLSHandshake messages, which contain extra data needed to support message loss, reordering, and message fragmentation.
enum {
    client_hello(1),
    server_hello(2),
    new_session_ticket(4),
    end_of_early_data(5),
    encrypted_extensions(8),
    request_connection_id(9),  /* New */
    new_connection_id(10),     /* New */
    certificate(11),
    certificate_request(13),
    certificate_verify(15),
    finished(20),
    key_update(24),
    message_hash(254),
    (255)
} HandshakeType;
struct {
    HandshakeType msg_type;    /* handshake type */
    uint24 length;             /* bytes in message */
    uint16 message_seq;        /* DTLS-required field */
    uint24 fragment_offset;    /* DTLS-required field */
    uint24 fragment_length;    /* DTLS-required field */
    select (msg_type) {
        case client_hello:          ClientHello;
        case server_hello:          ServerHello;
        case end_of_early_data:     EndOfEarlyData;
        case encrypted_extensions:  EncryptedExtensions;
        case certificate_request:   CertificateRequest;
        case certificate:           Certificate;
        case certificate_verify:    CertificateVerify;
        case finished:              Finished;
        case new_session_ticket:    NewSessionTicket;
        case key_update:            KeyUpdate;
        case request_connection_id: RequestConnectionId;
        case new_connection_id:     NewConnectionId;
    } body;
} DTLSHandshake;
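The fixed portion of the DTLSHandshake structure is a 12-byte header: a one-byte msg_type, a uint24 length, a uint16 message_seq, a uint24 fragment_offset, and a uint24 fragment_length. A sketch of serializing it (function names are illustrative):

```python
# Sketch of packing/unpacking the 12-byte DTLSHandshake header that
# precedes the (possibly partial) handshake body. Names are illustrative.
import struct

def pack_u24(v):
    """TLS/DTLS uint24: 3 bytes, network byte order."""
    return v.to_bytes(3, "big")

def pack_dtls_handshake_header(msg_type, length, message_seq,
                               fragment_offset, fragment_length):
    return (bytes([msg_type]) + pack_u24(length) +
            struct.pack("!H", message_seq) +
            pack_u24(fragment_offset) + pack_u24(fragment_length))

def unpack_dtls_handshake_header(data):
    msg_type = data[0]
    length = int.from_bytes(data[1:4], "big")
    (message_seq,) = struct.unpack("!H", data[4:6])
    fragment_offset = int.from_bytes(data[6:9], "big")
    fragment_length = int.from_bytes(data[9:12], "big")
    return msg_type, length, message_seq, fragment_offset, fragment_length
```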
In DTLS 1.3, the message transcript is computed over the original TLS 1.3-style Handshake messages without the message_seq, fragment_offset, and fragment_length values. Note that this is a change from DTLS 1.2 where those values were included in the transcript.
The first message each side transmits in each association always has message_seq = 0. Whenever a new message is generated, the message_seq value is incremented by one. When a message is retransmitted, the old message_seq value is reused, i.e., not incremented. From the perspective of the DTLS record layer, the retransmission is a new record. This record will have a new DTLSPlaintext.sequence_number value.
Note: In DTLS 1.2, the message_seq was reset to zero in case of a rehandshake (i.e., renegotiation). On the surface, a rehandshake in DTLS 1.2 shares similarities with a post-handshake message exchange in DTLS 1.3. However, in DTLS 1.3 the message_seq is not reset, to allow distinguishing a retransmission from a previously sent post-handshake message from a newly sent post-handshake message.
DTLS implementations maintain (at least notionally) a next_receive_seq counter. This counter is initially set to zero. When a handshake message is received, if its message_seq value matches next_receive_seq, next_receive_seq is incremented and the message is processed. If the sequence number is less than next_receive_seq, the message MUST be discarded. If the sequence number is greater than next_receive_seq, the implementation SHOULD queue the message but MAY discard it. (This is a simple space/bandwidth trade-off.)
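The next_receive_seq logic above can be sketched as follows; the queue-or-discard choice is exposed as a flag to reflect the SHOULD/MAY trade-off (all names are invented for illustration):

```python
class HandshakeReceiver:
    """Notional next_receive_seq handling, as described in the text.
    Illustrative sketch; names are not from the specification."""

    def __init__(self, queue_out_of_order=True):
        self.next_receive_seq = 0
        self.queued = {}
        self.queue_out_of_order = queue_out_of_order

    def receive(self, message_seq, message, process):
        if message_seq < self.next_receive_seq:
            return  # old message: MUST be discarded
        if message_seq > self.next_receive_seq:
            if self.queue_out_of_order:        # SHOULD queue the message ...
                self.queued[message_seq] = message
            return                             # ... but MAY discard it
        process(message)
        self.next_receive_seq += 1
        # Drain any queued messages that are now in order.
        while self.next_receive_seq in self.queued:
            process(self.queued.pop(self.next_receive_seq))
            self.next_receive_seq += 1
```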
In addition to the handshake messages that are deprecated by the TLS 1.3 specification, DTLS 1.3 furthermore deprecates the HelloVerifyRequest message originally defined in DTLS 1.0. DTLS 1.3-compliant implementations MUST NOT use the HelloVerifyRequest to execute a return-routability check. A dual-stack DTLS 1.2 / DTLS 1.3 client MUST, however, be prepared to interact with a DTLS 1.2 server.
The format of the ClientHello used by a DTLS 1.3 client differs from the TLS 1.3 ClientHello format, as shown below.
uint16 ProtocolVersion;
opaque Random[32];
uint8 CipherSuite[2];    /* Cryptographic suite selector */

struct {
    ProtocolVersion legacy_version = { 254,253 }; // DTLSv1.2
    Random random;
    opaque legacy_session_id<0..32>;
    opaque legacy_cookie<0..2^8-1>;               // DTLS
    CipherSuite cipher_suites<2..2^16-2>;
    opaque legacy_compression_methods<1..2^8-1>;
    Extension extensions<8..2^16-1>;
} ClientHello;
legacy_version:
    In previous versions of DTLS, this field was used for version negotiation and represented the highest version number supported by the client. Experience has shown that many servers do not properly implement version negotiation, leading to "version intolerance" in which the server rejects an otherwise acceptable ClientHello with a version number higher than it supports. In DTLS 1.3, the client indicates its version preferences in the "supported_versions" extension (see Section 4.2.1 of [TLS13]) and the legacy_version field MUST be set to {254, 253}, which was the version number for DTLS 1.2. The supported_versions entries for DTLS 1.0 and DTLS 1.2 are 0xfeff and 0xfefd (to match the wire versions). The value 0xfefc is used to indicate DTLS 1.3.

random:
    Same as for TLS 1.3, except that the downgrade sentinels described in Section 4.1.3 of [TLS13] when TLS 1.2 and TLS 1.1 and below are negotiated apply to DTLS 1.2 and DTLS 1.0, respectively.

legacy_session_id:
    Versions of TLS and DTLS before version 1.3 supported a "session resumption" feature, which has been merged with pre-shared keys (PSK) in version 1.3. A client which has a cached session ID set by a pre-DTLS 1.3 server SHOULD set this field to that value. Otherwise, it MUST be set as a zero-length vector (i.e., a zero-valued single byte length field).

legacy_cookie:
    A DTLS 1.3-only client MUST set the legacy_cookie field to zero length. If a DTLS 1.3 ClientHello is received with any other value in this field, the server MUST abort the handshake with an "illegal_parameter" alert.

cipher_suites:
    Same as for TLS 1.3; only suites with DTLS-OK=Y may be used.

legacy_compression_methods:
    Same as for TLS 1.3.

extensions:
    Same as for TLS 1.3.
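The version codepoints above can be illustrated by encoding a client "supported_versions" extension body; the `ProtocolVersion versions<2..254>` vector carries a single length byte followed by two-byte versions. The helper name is invented:

```python
# Illustrative sketch of the DTLS wire version codepoints and the
# "supported_versions" extension body (helper name is an assumption).
DTLS10, DTLS12, DTLS13 = 0xfeff, 0xfefd, 0xfefc

def supported_versions_body(versions):
    """ProtocolVersion versions<2..254>: one length byte, then 2-byte versions."""
    payload = b"".join(v.to_bytes(2, "big") for v in versions)
    return bytes([len(payload)]) + payload
```

A DTLS 1.3-capable client offering 1.3 and 1.2 would list 0xfefc before 0xfefd, while still setting legacy_version to {254, 253}.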
The DTLS 1.3 ServerHello message is the same as the TLS 1.3 ServerHello message, except that the legacy_version field is set to 0xfefd, indicating DTLS 1.2.
As described in Section 4.3, one or more handshake messages may be carried in a single datagram. However, handshake messages are potentially bigger than the size allowed by the underlying datagram transport. DTLS provides a mechanism for fragmenting a handshake message over a number of records, each of which can be transmitted in separate datagrams, thus avoiding IP fragmentation.
When transmitting the handshake message, the sender divides the message into a series of N contiguous data ranges. The ranges MUST NOT overlap. The sender then creates N DTLSHandshake messages, all with the same message_seq value as the original DTLSHandshake message. Each new message is labeled with the fragment_offset (the number of bytes contained in previous fragments) and the fragment_length (the length of this fragment). The length field in all messages is the same as the length field of the original message. An unfragmented message is a degenerate case with fragment_offset=0 and fragment_length=length. Each handshake message fragment that is placed into a record MUST be delivered in a single UDP datagram.
When a DTLS implementation receives a handshake message fragment corresponding to the next expected handshake message sequence number, it MUST process it, either by buffering it until it has the entire handshake message or by processing any in-order portions of the message. The transcript consists of complete TLS Handshake messages (reassembled as necessary). Note that this requires removing the message_seq, fragment_offset, and fragment_length fields to create the Handshake structure.
DTLS implementations MUST be able to handle overlapping fragment ranges. This allows senders to retransmit handshake messages with smaller fragment sizes if the PMTU estimate changes. Senders MUST NOT change handshake message bytes upon retransmission. Receivers MAY check that retransmitted bytes are identical and SHOULD abort the handshake with an "illegal_parameter" alert if the value of a byte changes.
Note that as with TLS, multiple handshake messages may be placed in the same DTLS record, provided that there is room and that they are part of the same flight. Thus, there are two acceptable ways to pack two DTLS handshake messages into the same datagram: in the same record or in separate records.
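The fragmentation rules above can be sketched as follows. This illustration uses Python dictionaries in place of the wire encoding; the reassembler tolerates overlapping and duplicate ranges, as required. All names are invented:

```python
# Sketch of DTLS handshake fragmentation and reassembly (illustrative).
def fragment_handshake(msg_type, message_seq, body, max_fragment):
    """Split one handshake body into fragments. Every fragment repeats
    msg_type, message_seq, and the original total length."""
    fragments = []
    offset = 0
    while offset < len(body) or not fragments:  # emit one fragment even if empty
        chunk = body[offset:offset + max_fragment]
        fragments.append({
            "msg_type": msg_type,
            "length": len(body),          # total length, same in all fragments
            "message_seq": message_seq,   # same in all fragments
            "fragment_offset": offset,
            "fragment_length": len(chunk),
            "body": chunk,
        })
        offset += len(chunk)
    return fragments

def reassemble(fragments):
    """Rebuild the original body, tolerating overlaps and duplicates.
    Returns None until every byte of the message has been received."""
    total = fragments[0]["length"]
    buf = bytearray(total)
    have = [False] * total
    for f in fragments:
        start = f["fragment_offset"]
        for i, b in enumerate(f["body"]):
            buf[start + i] = b
            have[start + i] = True
    return bytes(buf) if all(have) else None
```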
The DTLS 1.3 handshake has one important difference from the TLS 1.3 handshake: the EndOfEarlyData message is omitted both from the wire and the handshake transcript. Because DTLS records have epochs, EndOfEarlyData is not necessary to determine when the early data is complete, and because DTLS is lossy, attackers can trivially mount the deletion attacks that EndOfEarlyData prevents in TLS. Servers SHOULD NOT accept records from epoch 1 indefinitely once they are able to process records from epoch 3. Though reordering of IP packets can result in records from epoch 1 arriving after records from epoch 3, this is not likely to persist for very long relative to the round trip time. Servers could discard epoch 1 keys after the first epoch 3 data arrives, or retain keys for processing epoch 1 data for a short period. (See Section 6.1 for the definitions of each epoch.)
DTLS handshake messages are grouped into a series of message flights. A flight starts with the handshake message transmission of one peer and ends with the expected response from the other peer. Table 1 contains a complete list of message combinations that constitute flights.
+------+--------+--------+-------------------------------------------+
| Note | Client | Server | Handshake Messages                        |
+======+========+========+===========================================+
|      |   x    |        | ClientHello                               |
+------+--------+--------+-------------------------------------------+
|      |        |   x    | HelloRetryRequest                         |
+------+--------+--------+-------------------------------------------+
|      |        |   x    | ServerHello, EncryptedExtensions,         |
|      |        |        | CertificateRequest, Certificate,          |
|      |        |        | CertificateVerify, Finished               |
+------+--------+--------+-------------------------------------------+
|  1   |   x    |        | Certificate, CertificateVerify, Finished  |
+------+--------+--------+-------------------------------------------+
|  1   |        |   x    | NewSessionTicket                          |
+------+--------+--------+-------------------------------------------+

Table 1: Flight Handshake Message Combinations
Remarks:
-  Table 1 does not highlight any of the optional messages.
-  Regarding note (1): When a handshake flight is sent without any expected response, as is the case with the client's final flight or with the NewSessionTicket message, the flight must be acknowledged with an ACK message.
Below are several example message exchanges illustrating the flight concept. The notational conventions from [TLS13] are used.
Client                                            Server

 ClientHello                                      +--------+
                        -------->                 | Flight |
                                                  +--------+

                                                  +--------+
                        <-------- HelloRetryRequest | Flight |
                                   + cookie       +--------+

 ClientHello                                      +--------+
  + cookie              -------->                 | Flight |
                                                  +--------+

                                  ServerHello
                                  {EncryptedExtensions}
                                  {CertificateRequest*}  +--------+
                                  {Certificate*}         | Flight |
                                  {CertificateVerify*}   +--------+
                                  {Finished}
                        <-------- [Application Data*]

 {Certificate*}                                   +--------+
 {CertificateVerify*}                             | Flight |
 {Finished}             -------->                 +--------+
 [Application Data]

                                                  +--------+
                        <-------- [ACK]           | Flight |
                                  [Application Data*] +--------+

 [Application Data]     <-------> [Application Data]
 ClientHello                                      +--------+
  + pre_shared_key                                | Flight |
  + psk_key_exchange_modes                        +--------+
  + key_share*          -------->

                                  ServerHello
                                   + pre_shared_key      +--------+
                                   + key_share*          | Flight |
                                  {EncryptedExtensions}  +--------+
                        <-------- {Finished}
                                  [Application Data*]

                                                  +--------+
 {Finished}             -------->                 | Flight |
 [Application Data*]                              +--------+

                                                  +--------+
                        <-------- [ACK]           | Flight |
                                  [Application Data*] +--------+

 [Application Data]     <-------> [Application Data]
Client                                            Server

 ClientHello
  + early_data
  + psk_key_exchange_modes                        +--------+
  + key_share*                                    | Flight |
  + pre_shared_key                                +--------+
 (Application Data*)    -------->

                                  ServerHello
                                   + pre_shared_key
                                   + key_share*          +--------+
                                  {EncryptedExtensions}  | Flight |
                                  {Finished}             +--------+
                        <-------- [Application Data*]

                                                  +--------+
 {Finished}             -------->                 | Flight |
 [Application Data*]                              +--------+

                                                  +--------+
                        <-------- [ACK]           | Flight |
                                  [Application Data*] +--------+

 [Application Data]     <-------> [Application Data]
Client                                            Server

                                                  +--------+
                        <-------- [NewSessionTicket] | Flight |
                                                  +--------+

                                                  +--------+
 [ACK]                  -------->                 | Flight |
                                                  +--------+
KeyUpdate, NewConnectionId, and RequestConnectionId follow a similar pattern to NewSessionTicket: a single message sent by one side followed by an ACK by the other.
DTLS uses a simple timeout and retransmission scheme with the state machine shown in Figure 11.
                             +-----------+
                             | PREPARING |
                +----------> |           |
                |            |           |
                |            +-----------+
                |                  |
                |                  | Buffer next flight
                |                  |
                |                 \|/
                |            +-----------+
                |            |           |
                |            |  SENDING  |<------------------+
                |            |           |                   |
                |            +-----------+                   |
        Receive |                  |                         |
           next |                  | Send flight or partial  |
         flight |                  | flight                  |
                |                  |                         |
                |                  | Set retransmit timer    |
                |                 \|/                        |
                |            +-----------+                   |
                |            |           |                   |
                +------------|  WAITING  |-------------------+
                |     +----->|           |   Timer expires   |
                |     |      +-----------+                   |
                |     |        |  |   |                      |
                |     |        |  |   |                      |
                |     +--------+  |   +----------------------+
                |  Receive record |    Read retransmit or ACK
        Receive | (Maybe Send ACK)|
           last |                 |
         flight |                 | Receive ACK
                |                 | for last flight
               \|/                |
                                  |
            +-----------+         |
            |           | <-------+
            | FINISHED  |
            |           |
            +-----------+
                 |  /|\
                 |   |
                 |   |
                 +---+
              Server read retransmit
                 Retransmit ACK
The state machine has four basic states: PREPARING, SENDING, WAITING, and FINISHED.
In the PREPARING state, the implementation does whatever computations are necessary to prepare the next flight of messages. It then buffers them up for transmission (emptying the transmission buffer first) and enters the SENDING state.
In the SENDING state, the implementation transmits the buffered flight of messages. If the implementation has received one or more ACKs (see Section 7) from the peer, then it SHOULD omit any messages or message fragments which have already been acknowledged. Once the messages have been sent, the implementation then sets a retransmit timer and enters the WAITING state.
There are four ways to exit the WAITING state:
-  The retransmit timer expires: the implementation transitions to the SENDING state, where it retransmits the flight, adjusts and re-arms the retransmit timer (see Section 5.8.2), and returns to the WAITING state.
-  The implementation reads an ACK from the peer: upon receiving an ACK for a partial flight (as mentioned in Section 7.1), the implementation transitions to the SENDING state, where it retransmits the unacknowledged portion of the flight, adjusts and re-arms the retransmit timer, and returns to the WAITING state. Upon receiving an ACK for a complete flight, the implementation cancels all retransmissions and either remains in WAITING, or, if the ACK was for the final flight, transitions to FINISHED.
-  The implementation reads a retransmitted flight from the peer when none of the messages that it sent in response to that flight have been acknowledged: the implementation transitions to the SENDING state, where it retransmits the flight, adjusts and re-arms the retransmit timer, and returns to the WAITING state. The rationale here is that the receipt of a duplicate message is the likely result of timer expiry on the peer and therefore suggests that part of one's previous flight was lost.
-  The implementation receives some or all of the next flight of messages: if this is the final flight of messages, the implementation transitions to FINISHED. If the implementation needs to send a new flight, it transitions to the PREPARING state. Partial reads (whether partial messages or only some of the messages in the flight) may also trigger the implementation to send an ACK, as described in Section 7.1.
Because DTLS clients send the first message (ClientHello), they start in the PREPARING state. DTLS servers start in the WAITING state, but with empty buffers and no retransmit timer.
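A greatly simplified sketch of the four-state machine, covering only a few of the transitions described above; all names are invented, and a real implementation must also handle ACK-driven partial retransmission:

```python
# Illustrative sketch of the DTLS retransmission state machine.
from enum import Enum, auto

class State(Enum):
    PREPARING = auto()
    SENDING = auto()
    WAITING = auto()
    FINISHED = auto()

class FlightMachine:
    """Simplified flight state machine (names are assumptions)."""

    def __init__(self, is_client):
        # Clients start in PREPARING; servers start in WAITING with
        # empty buffers and no retransmit timer.
        self.state = State.PREPARING if is_client else State.WAITING
        self.flight = []

    def buffer_flight(self, messages):
        assert self.state == State.PREPARING
        self.flight = list(messages)  # empty the buffer, then fill it
        self.state = State.SENDING

    def send(self, transmit, arm_timer):
        assert self.state == State.SENDING
        transmit(self.flight)
        arm_timer()
        self.state = State.WAITING

    def on_timer_expired(self):
        assert self.state == State.WAITING
        self.state = State.SENDING  # retransmit the flight

    def on_ack_complete_flight(self, was_final_flight):
        assert self.state == State.WAITING
        if was_final_flight:
            self.state = State.FINISHED

    def on_next_flight_received(self, was_final_flight):
        assert self.state == State.WAITING
        self.state = State.FINISHED if was_final_flight else State.PREPARING
```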
In addition, for at least twice the default MSL defined for [RFC 0793], when in the FINISHED state, the server MUST respond to retransmission of the client's final flight with a retransmit of its ACK.
Note that because of packet loss, it is possible for one side to be sending application data even though the other side has not received the first side's Finished message. Implementations MUST either discard or buffer all application data records for epoch 3 and above until they have received the Finished message from the peer. Implementations MAY treat receipt of application data with a new epoch prior to receipt of the corresponding Finished message as evidence of reordering or packet loss and retransmit their final flight immediately, shortcutting the retransmission timer.
The configuration of timer settings varies with implementations, and certain deployment environments require timer value adjustments. Mishandling of the timer can lead to serious congestion problems -- for example, if many DTLS instances time out early and retransmit too quickly on a congested link.
Unless implementations have deployment-specific and/or external information about the round trip time, implementations SHOULD use an initial timer value of 1000 ms and double the value at each retransmission, up to no less than 60 seconds (the maximum as specified in [RFC 6298]). Application-specific profiles MAY recommend shorter or longer timer values. For instance:
-  Profiles for specific deployment environments, such as in low-power, multi-hop mesh scenarios as used in some Internet of Things (IoT) networks, MAY specify longer timeouts. See [IOT-PROFILE] for more information about one such DTLS 1.3 IoT profile.
-  Real-time protocols MAY specify shorter timeouts. It is RECOMMENDED that for DTLS-SRTP [RFC 5764], a default timeout of 400 ms be used; because customer experience degrades with one-way latencies of greater than 200 ms, real-time deployments are less likely to have long latencies.
In settings where there is external information (for instance, from an ICE [RFC 8445] handshake, or from previous connections to the same server) about the RTT, implementations SHOULD use 1.5 times that RTT estimate as the retransmit timer.
Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value SHOULD be adjusted to 1.5 times the measured round trip time for that message. After a long period of idleness, no less than 10 times the current timer value, implementations MAY reset the timer to the initial value.
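The timer rules above (1000 ms initial value, doubling at each retransmission up to a 60-second cap, 1.5 times the RTT when an estimate exists) can be sketched as follows; the class and method names are illustrative:

```python
# Sketch of the DTLS retransmit timer policy described in the text.
INITIAL_TIMEOUT = 1.0   # seconds (1000 ms initial value)
MAX_TIMEOUT = 60.0      # back off up to 60 seconds

class RetransmitTimer:
    def __init__(self, rtt_estimate=None):
        # With external RTT information, start at 1.5 x the RTT estimate.
        self.value = 1.5 * rtt_estimate if rtt_estimate else INITIAL_TIMEOUT

    def on_timeout(self):
        """Double the timer value at each retransmission, capped."""
        self.value = min(self.value * 2, MAX_TIMEOUT)

    def on_clean_ack(self, measured_rtt):
        """A message was acknowledged without retransmission: adopt
        1.5 x the measured round trip time for that message."""
        self.value = 1.5 * measured_rtt

    def on_long_idle(self):
        """After long idleness, the timer MAY be reset to the initial value."""
        self.value = INITIAL_TIMEOUT
```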
Note that because retransmission is for the handshake and not dataflow, the effect on congestion of shorter timeouts is smaller than in generic protocols such as TCP or QUIC. Experience with DTLS 1.2, which uses a simpler "retransmit everything on timeout" approach, has not shown serious congestion problems in practice.
DTLS does not have any built-in congestion control or rate control; in general, this is not an issue because messages tend to be small. However, in principle, some messages -- especially Certificate -- can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. Several extensions have been standardized to reduce the size of the Certificate message -- for example, the "cached_info" extension [RFC 7924]; certificate compression [RFC 8879]; and [RFC 6066], which defines the "client_certificate_url" extension allowing DTLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate.
DTLS stacks SHOULD NOT send more than 10 records in a single transmission.
DTLS 1.3 makes use of the following categories of post-handshake messages:
-  NewSessionTicket
-  KeyUpdate
-  NewConnectionId
-  RequestConnectionId
-  Post-handshake client authentication
Messages of each category can be sent independently, and reliability is established via independent state machines, each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created.
Sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket messages first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT send KeyUpdate, NewConnectionId, or RequestConnectionId messages if an earlier message of the same type has not yet been acknowledged.
Note: Except for post-handshake client authentication, which involves handshake messages in both directions, post-handshake messages are single-flight, and their respective state machines on the sender side reduce to waiting for an ACK and retransmitting the original message. In particular, note that a RequestConnectionId message does not force the receiver to send a NewConnectionId message in reply, and both messages are therefore treated independently.
Creating and correctly updating multiple state machines requires feedback from the handshake logic to the state machine layer, indicating which message belongs to which state machine. For example, if a server sends multiple CertificateRequest messages and receives a Certificate message in response, the corresponding state machine can only be determined after inspecting the certificate_request_context field. Similarly, a server sending a single CertificateRequest and receiving a NewConnectionId message in response can only decide that the NewConnectionId message should be treated through an independent state machine after inspecting the handshake message type.
Section 7.1 of [TLS13] specifies that HKDF-Expand-Label uses a label prefix of "tls13 ". For DTLS 1.3, that label SHALL be "dtls13". This ensures key separation between DTLS 1.3 and TLS 1.3. Note that there is no trailing space; this is necessary in order to keep the overall label size inside of one hash iteration because "DTLS" is one letter longer than "TLS".
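As a sketch of the label change, the following implements HKDF-Expand (per [RFC 5869]) and an HKDF-Expand-Label variant using the "dtls13" prefix. Function names are illustrative, and a real stack would take the hash function from the negotiated cipher suite:

```python
# Sketch of HKDF-Expand-Label with the DTLS 1.3 "dtls13" label prefix.
# SHA-256 is assumed here for illustration.
import hashlib
import hmac

def hkdf_expand(prk, info, length, hashname="sha256"):
    """HKDF-Expand as defined in RFC 5869."""
    hash_len = hashlib.new(hashname).digest_size
    okm, block = b"", b""
    for i in range((length + hash_len - 1) // hash_len):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashname).digest()
        okm += block
    return okm[:length]

def hkdf_expand_label(secret, label, context, length):
    """HKDF-Expand-Label with the "dtls13" prefix (no trailing space)."""
    full_label = b"dtls13" + label
    hkdf_label = (length.to_bytes(2, "big") +
                  bytes([len(full_label)]) + full_label +
                  bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)
```

Because the prefix differs from TLS 1.3's "tls13 ", identical secrets and labels yield different keys in the two protocols, which is the key-separation property the text describes.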
Note that alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementations SHOULD NOT depend on receiving alerts in order to signal errors or connection closure.
Any data received with an epoch/sequence number pair after that of a valid received closure alert MUST be ignored. Note: this is a change from TLS 1.3, which depends on the order of receipt rather than the epoch and sequence number.
If a DTLS client-server pair is configured in such a way that repeated connections happen on the same host/port quartet, then it is possible that a client will silently abandon one connection and then initiate another with the same parameters (e.g., after a reboot). This will appear to the server as a new handshake with epoch=0. In cases where a server believes it has an existing association on a given host/port quartet and it receives an epoch=0 ClientHello, it SHOULD proceed with a new handshake but MUST NOT destroy the existing association until the client has demonstrated reachability either by completing a cookie exchange or by completing a complete handshake including delivering a verifiable Finished message. After a correct Finished message is received, the server MUST abandon the previous association to avoid confusion between two valid associations with overlapping epochs. The reachability requirement prevents off-path/blind attackers from destroying associations merely by sending forged ClientHellos.
Note: It is not always possible to distinguish which association a given record is from. For instance, if the client performs a handshake, abandons the connection, and then immediately starts a new handshake, it may not be possible to tell which connection a given protected record is for. In these cases, trial decryption may be necessary, though implementations could use CIDs to avoid the 5-tuple-based ambiguity.