5. Operational Experience with the Yeti DNS Testbed
The following sections provide commentary on the operation and impact analyses of the Yeti DNS testbed described in Section 4. More detailed descriptions of observed phenomena are available in the Yeti DNS mailing list archives <http://lists.yeti-dns.org/pipermail/discuss/> and on the Yeti DNS blog <https://yeti-dns.org/blog.html>.

5.1. Viability of IPv6-Only Operation
All Yeti-Root servers were deployed with IPv6 connectivity, and no IPv4 addresses for any Yeti-Root server were made available (e.g., in the Yeti hints file or in the DNS itself). This implementation decision constrained the Yeti-Root system to be IPv6 only.

DNS implementations are generally adept at using both IPv4 and IPv6 when both are available. Servers that cannot be reliably reached over one protocol might be better queried over the other, to the benefit of end-users in the common case where DNS resolution lies on the critical path for their perception of performance. However, this optimization also means that systemic problems with one protocol can be masked by the other. By forcing all traffic to be carried over IPv6, the Yeti DNS testbed aimed to expose any such problems and make them easier to identify and understand. Several examples of IPv6-specific phenomena observed during the operation of the testbed are described in the sections that follow.

Although the Yeti-Root servers themselves were only reachable using IPv6, real-world end-users often have no IPv6 connectivity. The testbed was therefore also able to explore the degree to which IPv6-only Yeti-Root servers could serve single-stack, IPv4-only end-user populations through the use of dual-stack Yeti resolvers.
5.1.1. IPv6 Fragmentation
In the Root Server system, structural changes with the potential to increase response sizes (and hence fragmentation, fallback to TCP transport, or both) have been exercised with great care, since the impact on clients has been difficult to predict or measure. The Yeti DNS testbed is experimental and has the luxury of a known client base, making it far easier to make such changes and measure their impact.

Many of the experimental design choices described in this document were expected to trigger larger responses. For example, the choice of naming scheme for Yeti-Root servers described in Section 4.5 defeats label compression and results in a large priming response (up to 1754 octets with 25 NS records and their corresponding glue records), and the Yeti-Root zone transformation approach described in Section 4.2.2 greatly enlarges the apex DNSKEY RRset, especially during a KSK rollover (up to 1975 octets with 3 ZSKs and 2 KSKs). An increased incidence of fragmentation was therefore expected.

The Yeti DNS testbed provides service over IPv6 only. However, middleboxes (such as firewalls and some routers) often handle IPv6 fragments poorly, and there are reports of notable packet drop rates caused by the mistreatment of IPv6 fragments by middleboxes [FRAGDROP] [RFC7872]. One APNIC study [IPv6-frag-DNS] reported that 37% of endpoints using IPv6-capable DNS resolvers cannot receive a fragmented IPv6 response over UDP.

To study the impact, RIPE Atlas probes were used. For each Yeti-Root server, an Atlas measurement was set up using 100 IPv6-enabled probes from five regions, sending a DNS query for "./IN/DNSKEY" using UDP transport with DO=1. This measurement, carried out concurrently with a Yeti KSK rollover (which further exacerbated the potential for fragmentation), identified a 7% failure rate compared with a non-fragmented control. A failure rate of 2% was observed with response sizes of 1414 octets, which was surprising given the expected prevalence of 1500-octet (Ethernet-framed) MTUs.

The consequences of fragmentation were not limited to failures in delivering DNS responses over UDP transport. There were two cases where a Yeti-Root server failed when using TCP to transfer the Yeti-Root zone from a DM. DM log files revealed "socket is not connected" errors corresponding to zone transfer requests. Further experimentation revealed that combinations of NetBSD 6.1, NetBSD 7.0RC1, FreeBSD 10.0, Debian 3.2, and VMWare ESXI 5.5 resulted in a high TCP Maximum Segment Size (MSS) value of 1440 octets being negotiated between client and server despite the presence of the IPV6_USE_MIN_MTU socket option, as described in [USE_MIN_MTU]. The mismatch appears to cause outbound segments larger than 1280 octets to be dropped before sending. Setting the local TCP MSS to 1220 octets (chosen as 1280 minus 60, the combined size of the IPv6 and TCP headers with no extension headers) was observed to be a pragmatic mitigation.
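The mitigation just described can be approximated at the socket level. The following minimal sketch (Python, standard library only; the address shown is a placeholder, and the availability of the TCP_MAXSEG socket option is platform-dependent) clamps the advertised TCP MSS to 1220 octets before connecting, so that outbound segments fit within the 1280-octet IPv6 minimum MTU:

   import socket

   YETI_ROOT_SERVER = "2001:db8::53"   # placeholder, not a real Yeti-Root address

   def open_tcp_with_small_mss(host, port=53, mss=1220):
       # 1220 = 1280 (IPv6 minimum MTU) - 40 (IPv6 header) - 20 (TCP header).
       # Where TCP_MAXSEG is unavailable, the IPV6_USE_MIN_MTU option
       # described in [USE_MIN_MTU] is an alternative on platforms that
       # support it.
       sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
       if hasattr(socket, "TCP_MAXSEG"):
           sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, mss)
       sock.connect((host, port))
       return sock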
5.1.2. Serving IPv4-Only End-Users

Yeti resolvers have been successfully used by real-world end-users for general name resolution within a number of participant organizations, including resolution of names to IPv4 addresses and resolution by IPv4-only end-user devices.

Some participants, recognizing the operational importance of reliability in resolver infrastructure and concerned about the stability of their IPv6 connectivity, chose to deploy Yeti resolvers in parallel to conventional resolvers, making both available to end-users. While the viability of this approach provides a useful data point, end-users using Yeti resolvers exclusively provided a better opportunity to identify and understand any failures in the Yeti DNS testbed infrastructure.

Resolvers deployed in IPv4-only environments were able to join the Yeti DNS testbed by way of upstream, dual-stack Yeti resolvers. In one case (CERNET2), this was done by assigning IPv4 addresses to Yeti-Root servers and mapping them in dual-stack IVI translation devices [RFC6219].

5.2. Zone Distribution
The Yeti DNS testbed makes use of multiple DMs to distribute the Yeti-Root zone, an approach that allows the number of Yeti-Root servers to scale beyond what a single distribution source could support and that provides redundancy. The use of multiple DMs introduced some operational challenges, however, which are described in the following sections.

5.2.1. Zone Transfers
Yeti-Root servers were configured to serve the Yeti-Root zone as slaves. Each slave had all DMs configured as masters, providing redundancy in zone synchronization.

Each DM in the Yeti testbed served a Yeti-Root zone that was functionally equivalent but not congruent to that served by every other DM (see Section 4.3). The differences included variations in the SOA.MNAME field and, more critically, in the RRSIGs for everything other than the apex DNSKEY RRset, since signatures for all other RRsets are generated using a private key that is only available to the DM serving its particular variant of the zone (see Sections 4.2.1 and 4.2.2).

Incremental Zone Transfer (IXFR), as described in [RFC1995], is a viable mechanism for zone synchronization between any Yeti-Root server and a consistent, single DM. However, if that Yeti-Root server ever selected a different DM, IXFR would no longer be a safe mechanism; structural changes between the incongruent zones on different DMs would not be included in any transferred delta, and the result would be a zone that was not internally self-consistent. For this reason, the first transfer after a change of DM would require AXFR, not IXFR.

None of the DNS software in use on Yeti-Root servers supports this mixture of IXFR and AXFR according to the master server in use. This is unsurprising, given that the environment described above in the Yeti-Root system is idiosyncratic; conventional zone transfer graphs involve zones that are congruent between all nodes. For this reason, all Yeti-Root servers are configured to use AXFR at all times, and never IXFR, to ensure that the zones being served are internally self-consistent.
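As an illustration of the AXFR-only arrangement, the sketch below (Python with the dnspython library; the DM address is a placeholder, and any TSIG credentials required by a real DM are omitted) transfers the full Yeti-Root zone from a single DM rather than applying IXFR deltas that might have been computed against a different DM's variant of the zone:

   import dns.query
   import dns.rdatatype
   import dns.zone

   DM_ADDRESS = "2001:db8::1"   # placeholder for a Yeti DM address

   def fetch_yeti_root_zone(dm_address):
       # Always request AXFR: an IXFR delta computed against a zone
       # obtained from a different DM would not be internally
       # self-consistent.
       xfr = dns.query.xfr(dm_address, ".", rdtype=dns.rdatatype.AXFR)
       return dns.zone.from_xfr(xfr)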
5.2.2. Delays in Yeti-Root Zone Distribution

Each Yeti DM polled the Root Server system for a new revision of the root zone on an interleaved schedule, as described in Section 4.1. Consequently, different DMs were expected to retrieve each revision of the root zone, and make a corresponding revision of the Yeti-Root zone available, at different times. The availability of a new revision of the Yeti-Root zone on the first DM would typically precede its availability on the last by 40 minutes.

Given this distribution mechanism, it might be expected that the maximum latency between the publication of a new revision of the root zone and the availability of the corresponding Yeti-Root zone on any Yeti-Root server would be 20 minutes, since in normal operation at least one DM should serve that Yeti-Root zone within 20 minutes of root zone publication. In practice, this was not observed.

In one case, a Yeti-Root server running Bundy 1.2.0 on FreeBSD 10.2-RELEASE was found to lag root zone publication by as much as ten hours. Upon investigation, this was found to be due to software defects that were subsequently corrected.

More generally, Yeti-Root servers were observed routinely to lag root zone publication by more than 20 minutes, and relatively often by more than 40 minutes. Whilst in some cases this might be assumed to be a result of connectivity problems, perhaps suppressing the delivery of NOTIFY messages, it was also observed that Yeti-Root servers receiving a NOTIFY from one DM would often send SOA queries and AXFR requests to a different DM. If that DM were not yet serving the new revision of the Yeti-Root zone, a delay in updating the Yeti-Root server would naturally result.
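One way to observe this kind of propagation lag is to compare SOA serial numbers across the DMs and the Yeti-Root servers. The sketch below (Python with dnspython; the addresses are placeholders) queries each listed server for the root SOA over UDP and prints the serials, from which lag relative to the most recently published revision can be inferred:

   import dns.message
   import dns.query
   import dns.rdatatype

   # Placeholder addresses; substitute real DM and Yeti-Root server addresses.
   SERVERS = {
       "dm-a": "2001:db8::1",
       "root-1": "2001:db8::53",
   }

   def soa_serial(address, timeout=5.0):
       # Return the SOA serial of the root zone as served by one server.
       query = dns.message.make_query(".", dns.rdatatype.SOA)
       response = dns.query.udp(query, address, timeout=timeout)
       for rrset in response.answer:
           if rrset.rdtype == dns.rdatatype.SOA:
               return rrset[0].serial
       return None

   def report_serials():
       for name, address in SERVERS.items():
           print(name, soa_serial(address))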
5.2.3. Mixed RRSIGs from Different DM ZSKs

The second approach to transforming the Root Zone into the Yeti-Root zone (Section 4.2.2) introduces a situation in which RRSIGs generated by different DM ZSKs are mixed in a single resolver cache.

The Yeti-Root zone served by any particular Yeti-Root server includes signatures generated using the ZSK of the DM that served the Yeti-Root zone to that Yeti-Root server. Signatures cached at resolvers might be retrieved from any Yeti-Root server and hence are expected to be a mixture of signatures generated by different ZSKs. Since all ZSKs can be trusted through the signature by the Yeti KSK over the DNSKEY RRset, which includes all ZSKs, the mixture of signatures was predicted not to be a threat to reliable validation.

This was first tested in BII's lab environment as a proof of concept; the resolver's DNSSEC log showed that RDATA sets were verified successfully using a key (identified by key ID) found in the DNSKEY RRset. The configuration was later deployed on all three DMs, carefully coordinated and announced to all Yeti resolver operators and participants on the Yeti mailing list. At least 45 Yeti resolvers (deployed by Yeti operators) were monitored, with a reporting trigger set to fire if anything went wrong; in addition, the Yeti mailing list remained open for error reports from other participants. So far, the Yeti testbed has been operated in this configuration (with multiple ZSKs) for two years. The configuration has proven workable and reliable, even when rollovers of individual ZSKs follow different schedules.

Another consequence of this approach is that the apex DNSKEY RRset in the Yeti-Root zone is much larger than the corresponding DNSKEY RRset in the Root Zone. This requires more space and produces a larger response to queries for the DNSKEY RRset, especially during a KSK rollover.
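This behaviour can be checked directly with a validating routine: an RRSIG generated by any one ZSK verifies, provided the DNSKEY RRset containing all ZSKs is available and trusted. The sketch below (Python with dnspython; the server addresses are placeholders, and for brevity the DNSKEY RRset is taken at face value rather than being validated against the Yeti KSK trust anchor as a real resolver would do) fetches the apex SOA and its RRSIG from one Yeti-Root server, the DNSKEY RRset from another, and verifies the signature:

   import dns.dnssec
   import dns.message
   import dns.name
   import dns.query
   import dns.rdataclass
   import dns.rdatatype

   SERVER_A = "2001:db8::53"   # placeholder Yeti-Root server addresses
   SERVER_B = "2001:db8::54"

   def fetch(rdtype, address):
       query = dns.message.make_query(".", rdtype, use_edns=0,
                                      want_dnssec=True, payload=4096)
       return dns.query.udp(query, address, timeout=5.0)

   def check_mixed_zsk_validation():
       root = dns.name.root
       soa_resp = fetch(dns.rdatatype.SOA, SERVER_A)
       soa = soa_resp.get_rrset(soa_resp.answer, root, dns.rdataclass.IN,
                                dns.rdatatype.SOA)
       rrsig = soa_resp.get_rrset(soa_resp.answer, root, dns.rdataclass.IN,
                                  dns.rdatatype.RRSIG,
                                  covers=dns.rdatatype.SOA)
       key_resp = fetch(dns.rdatatype.DNSKEY, SERVER_B)
       dnskey = key_resp.get_rrset(key_resp.answer, root, dns.rdataclass.IN,
                                   dns.rdatatype.DNSKEY)
       # Raises dns.dnssec.ValidationFailure if no key in the RRset matches.
       dns.dnssec.validate(soa, rrsig, {root: dnskey})
       print("SOA RRSIG verified against the combined DNSKEY RRset")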
5.3. DNSSEC KSK Rollover
At the time of writing, the Root Zone KSK is expected to undergo a carefully orchestrated rollover as described in [ICANN2016]. ICANN has commissioned various tests and has published an external test plan [ICANN2017].

Three related DNSSEC KSK rollover exercises were carried out on the Yeti DNS testbed, somewhat concurrent with the planning and execution of the rollover in the root zone. Brief descriptions of these exercises are included below.

5.3.1. Failure-Case KSK Rollover
The first KSK rollover that was executed on the Yeti DNS testbed deliberately ignored the 30-day hold-down timer specified in [RFC5011] before retiring the outgoing KSK. It was confirmed that clients of some (but not all) validating Yeti resolvers experienced resolution failures (received SERVFAIL responses) following this change. Those resolvers required administrator intervention to install a functional trust anchor before resolution was restored.

5.3.2. KSK Rollover vs. BIND9 Views
The second Yeti KSK rollover was designed with phases similar to those of ICANN's KSK rollover, although with modified timings to reduce the time required to complete the process. The "slot" used in this rollover was ten days long, as follows:

            +-----------------+----------------+----------+
            |                 | Old Key: 19444 | New Key  |
            +-----------------+----------------+----------+
            | slot 1          | pub+sign       |          |
            | slot 2, 3, 4, 5 | pub+sign       | pub      |
            | slot 6, 7       | pub            | pub+sign |
            | slot 8          | revoke         | pub+sign |
            | slot 9          |                | pub+sign |
            +-----------------+----------------+----------+

During this rollover exercise, a problem was observed on one Yeti resolver that was running BIND 9.10.4-p2 [KROLL-ISSUE]. That resolver was configured with multiple views serving clients in different subnets at the time that the KSK rollover began. DNSSEC validation failures were observed following the completion of the KSK rollover, triggered by the addition of a new view that was intended to serve clients from a new subnet.
BIND 9.10 requires the "managed-keys" configuration to be specified in every view, a detail that was apparently not obvious to the operator in this case and that was subsequently highlighted by the Internet Systems Consortium (ISC) in their general advice to BIND 9 users relating to the KSK rollover in the root zone [ISC-BIND]. When the "managed-keys" configuration is present in every view that is configured to perform validation, trust anchors for all views are updated during a KSK rollover.

5.3.3. Large Responses during KSK Rollover
Since a KSK rollover necessarily involves the publication of outgoing and incoming public keys simultaneously, an increase in the size of DNSKEY responses is expected. The third KSK rollover carried out on the Yeti DNS testbed was accompanied by a concerted effort to observe response sizes and their impact on end-users.

As described in Section 4.2.2, in the Yeti DNS testbed each DM can maintain control of its own set of ZSKs, which can undergo rollover independently. During a KSK rollover where concurrent ZSK rollovers are executed by each of three DMs, the maximum number of apex DNSKEY RRs present is eight (incoming and outgoing KSK, plus incoming and outgoing of each of three ZSKs). In practice, however, such concurrency did not occur; only the BII ZSK was rolled during the KSK rollover, and hence only three DNSKEY RRset configurations were observed:

o  3 ZSKs and 2 KSKs, DNSKEY response of 1975 octets;

o  3 ZSKs and 1 KSK, DNSKEY response of 1414 octets; and

o  2 ZSKs and 1 KSK, DNSKEY response of 1139 octets.

RIPE Atlas probes were used as described in Section 5.1.1 to send DNSKEY queries directly to Yeti-Root servers. The numbers of queries and failures were recorded and categorized according to the response sizes at the time the queries were sent. A summary of the results ([YetiLR]) is as follows:

          +---------------+----------+---------------+--------------+
          | Response Size | Failures | Total Queries | Failure Rate |
          +---------------+----------+---------------+--------------+
          | 1139          | 274      | 64252         | 0.0042       |
          | 1414          | 3141     | 126951        | 0.0247       |
          | 1975          | 2920     | 42529         | 0.0687       |
          +---------------+----------+---------------+--------------+
The general approach illustrated briefly here provides a useful example of how the design of the Yeti DNS testbed, separate from the Root Server system but constructed as a live testbed on the Internet, facilitates the use of general-purpose active measurement facilities (such as RIPE Atlas probes) as well as internal passive measurement (such as packet capture).
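The measurement described above can also be approximated from a single vantage point without RIPE Atlas. The sketch below (Python with dnspython; the server address is a placeholder) sends the same "./IN/DNSKEY" query with DO=1 over UDP, reports the approximate response size, and falls back to TCP when the UDP response is truncated or lost, which covers the failure cases of interest:

   import dns.exception
   import dns.flags
   import dns.message
   import dns.query
   import dns.rdatatype

   YETI_ROOT_SERVER = "2001:db8::53"   # placeholder address

   def probe_dnskey_size(address, timeout=5.0):
       # Measure the apex DNSKEY response over UDP, falling back to TCP.
       # Re-encoding with to_wire() approximates the received size.
       query = dns.message.make_query(".", dns.rdatatype.DNSKEY, use_edns=0,
                                      want_dnssec=True, payload=4096)
       try:
           response = dns.query.udp(query, address, timeout=timeout)
           if response.flags & dns.flags.TC:
               tcp = dns.query.tcp(query, address, timeout=timeout)
               return ("udp-truncated", len(tcp.to_wire()))
           return ("udp", len(response.to_wire()))
       except dns.exception.Timeout:
           # A lost (possibly fragmented) UDP response ends up here.
           tcp = dns.query.tcp(query, address, timeout=timeout)
           return ("udp-timeout", len(tcp.to_wire()))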
5.4. Capture of Large DNS Response

Packet capture is a common approach in production DNS systems where operators require fine-grained insight into query and response traffic. For authoritative servers, capture of inbound query traffic is often sufficient, since responses can be synthesized with knowledge of the zones being served at the time each query was received. Queries are generally small enough not to be fragmented, and even with TCP transport they are generally packed within a single segment.

The Yeti DNS testbed has different requirements; in particular, there is a desire to compare responses obtained from the Yeti infrastructure with those received from the Root Server system in response to a single query stream (e.g., using the "Yeti Many Mirror Verifier" (YmmV) as described in Appendix D). Some Yeti-Root servers were capable of recovering complete DNS messages from within nameservers, e.g., using dnstap; however, not all servers provided that functionality, and a consistent approach was desirable.

The requirement to perform passive capture of responses from the wire, together with experiments that were expected (and in some cases designed) to trigger fragmentation and the use of TCP transport, led to the development of a new tool, PcapParser, which performs fragment and TCP stream reassembly from raw packet capture data. A brief description of PcapParser is included in Appendix D.

5.5. Automated Maintenance of the Hints File
Renumbering events in the Root Server system are relatively rare. Although each such event is accompanied by the publication of an updated hints file in standard locations, the task of updating local copies of that file used by DNS resolvers is manual, and the process has an observably long tail. For example, in 2015 J-Root was still receiving traffic at its old address some thirteen years after renumbering [Wessels2015].

The observed impact of these old, deployed hints files is minimal, likely due to the very low frequency of such renumbering events. Even the oldest of hints files would still contain some accurate root server addresses from which priming responses could be obtained.
By contrast, due to the experimental nature of the system and the fact that it is operated mainly by volunteers, Yeti-Root servers are added, removed, and renumbered with much greater frequency. A tool to facilitate automatic maintenance of hints files was therefore created: [hintUpdate].

The automated procedure followed by the hintUpdate tool is as follows; a minimal sketch is shown at the end of this section.

1.  Use the local resolver to obtain a response to the query "./IN/NS".

2.  Use the local resolver to obtain a set of IPv4 and IPv6 addresses for each name server.

3.  Validate all signatures obtained from the local resolvers and confirm that all data is signed.

4.  Compare the data obtained to that contained within the currently active hints file; if there are differences, rotate the old file away and replace it with a new one.

This tool would not function unmodified when used in the Root Server system, since the names of individual Root Servers (e.g., A.ROOT-SERVERS.NET) are not DNSSEC signed. All Yeti-Root server names are DNSSEC signed, however, and hence the tool functions as expected in that environment.
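A minimal version of this procedure can be written against a stub resolver library. The sketch below (Python with dnspython 2.x; it assumes a local validating resolver, uses the AD bit as a stand-in for the explicit signature validation performed by the real tool, and uses a placeholder hints file path) rebuilds a hints file and rotates the old one away only when the contents change:

   import os
   import time

   import dns.flags
   import dns.resolver

   HINTS_PATH = "/etc/yeti/yeti-hints"   # placeholder path

   def validated_answer(resolver, qname, rdtype):
       # Resolve via the local validating resolver and insist on the AD bit.
       answer = resolver.resolve(qname, rdtype)
       if not answer.response.flags & dns.flags.AD:
           raise RuntimeError("unvalidated answer for %s/%s" % (qname, rdtype))
       return answer

   def build_hints():
       resolver = dns.resolver.Resolver()
       resolver.use_edns(0, dns.flags.DO, 4096)
       lines = []
       ns_names = [rr.target for rr in validated_answer(resolver, ".", "NS")]
       for name in sorted(ns_names):
           lines.append(".\t3600000\tNS\t%s" % name)
           for rdtype in ("AAAA", "A"):
               try:
                   for rr in validated_answer(resolver, name, rdtype):
                       lines.append("%s\t3600000\t%s\t%s" %
                                    (name, rdtype, rr.address))
               except dns.resolver.NoAnswer:
                   continue
       return "\n".join(lines) + "\n"

   def update_hints_file():
       new_hints = build_hints()
       old_hints = open(HINTS_PATH).read() if os.path.exists(HINTS_PATH) else ""
       if new_hints != old_hints:
           if os.path.exists(HINTS_PATH):
               os.rename(HINTS_PATH, HINTS_PATH + "." + str(int(time.time())))
           with open(HINTS_PATH, "w") as f:
               f.write(new_hints)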
5.6. Root Label Compression in Knot DNS Server

[RFC1035] specifies that domain names can be compressed when encoded in DNS messages and can be represented as one of the following:

1.  a sequence of labels ending in a zero octet;

2.  a pointer; or

3.  a sequence of labels ending with a pointer.

The purpose of this flexibility is to reduce the size of domain names encoded in DNS messages.

It was observed that Yeti-Root servers running Knot 2.0 would compress the zero-length label (the root domain, often represented as ".") using a pointer to an earlier occurrence. Although legal, this encoding increases the encoded size of the root label from one octet to two; it was also found to break some client software -- in particular, the Go DNS library. Bug reports were filed against both Knot and the Go DNS library, and both were resolved in subsequent releases.
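The size difference is easy to see at the wire level. The following sketch (Python, standard library only) decodes a name at a given offset within raw DNS message bytes and handles both legal encodings of the root: a single zero octet, and the two-octet compression pointer that Knot 2.0 emitted:

   def decode_name(message, offset):
       # Decode a (possibly compressed) domain name from wire-format
       # message bytes.  Returns (labels, octets_consumed_at_offset).
       labels = []
       consumed = 0
       followed_pointer = False
       while True:
           length = message[offset]
           if length & 0xC0 == 0xC0:              # compression pointer
               if not followed_pointer:
                   consumed += 2
                   followed_pointer = True
               offset = ((length & 0x3F) << 8) | message[offset + 1]
               continue
           if not followed_pointer:
               consumed += 1 + length
           if length == 0:                        # root label: end of name
               return labels, consumed
           labels.append(message[offset + 1:offset + 1 + length])
           offset += 1 + length

Decoding a root name encoded as a bare zero octet consumes one octet at the point of reference; decoding the pointer form consumes two, which is the one-octet increase observed with Knot 2.0.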
6. Conclusions

Yeti DNS was designed and implemented as a live DNS root system testbed. It serves a root zone ("Yeti-Root" in this document) derived from the root zone published by the IANA, with only those structural modifications necessary to ensure its function in the testbed system.

The Yeti DNS testbed has proven to be a useful platform to address many questions that would be challenging to answer using the production Root Server system, such as those included in Section 3. Indicative findings following from the construction and operation of the Yeti DNS testbed include:

o  Operation in a pure IPv6-only environment; confirmation of a significant failure rate in the transmission of large responses (~7%), but no other persistent failures observed. Two cases in which Yeti-Root servers failed to retrieve the Yeti-Root zone due to fragmentation of TCP segments; mitigated by setting a TCP MSS of 1220 octets (see Section 5.1.1).

o  Successful operation with three autonomous Yeti-Root zone signers and 25 Yeti-Root servers, and confirmation that IXFR is not an appropriate transfer mechanism for zones that are structurally incongruent across different transfer paths (see Section 5.2).

o  ZSK size increased to 2048 bits and multiple KSK rollovers executed to exercise support of RFC 5011 in validating resolvers; identification of pitfalls relating to views in BIND9 when configured with "managed-keys" (see Section 5.3).

o  Use of natural (non-normalized) names for Yeti-Root servers exposed some differences between implementations in the inclusion of additional-section glue in responses to priming queries; however, despite this inefficiency, Yeti resolvers were observed to function adequately (see Section 4.5).

o  It was observed that Knot 2.0 performed label compression on the root (empty) label. This resulted in an increased encoding size for references to the root label, since a pointer is encoded as two octets whilst the root label itself only requires one (see Section 5.6).
o  Some tools were developed in response to the operational experience of running and using the Yeti DNS testbed: DNS fragment and DNS Additional Truncated Response (ATR) for large DNS responses, a BIND9 patch for additional-section glue, YmmV, and IPv6 defrag for capturing and mirroring traffic. In addition, a tool to facilitate automatic maintenance of hints files was created (see Appendix D).

The Yeti DNS testbed was used only by end-users whose local infrastructure providers had made the conscious decision to do so, as is appropriate for an experimental, non-production system. So far, no serious user complaints have reached the Yeti mailing list during normal Yeti operation. Adding more instances to the Yeti root system might help to enhance the quality of service, but it is generally accepted that Yeti DNS performance is good enough to serve the purposes of a DNS root testbed.

The experience gained during the operation of the Yeti DNS testbed suggested several topics worthy of further study:

o  Priming truncation and TCP-only Yeti-Root servers: observe and measure the worst-possible case for priming truncation by responding with TC=1 to all priming queries received over UDP transport, forcing clients to retry using TCP. This should also give some insight into the usefulness of TCP-only DNS in general.

o  KSK ECDSA rollover: one possible way to reduce DNSKEY response sizes is to change to an elliptic curve signing algorithm. While in principle this can be done separately for the KSK and the ZSK, the RIPE NCC has done research recently and discovered that some resolvers require that both the KSK and the ZSK use the same algorithm. This means that an algorithm roll also involves a KSK roll. Performing an algorithm roll at the root would be an interesting challenge.

o  Sticky NOTIFY for zone transfer: the non-applicability of IXFR as a zone transfer mechanism in the Yeti DNS testbed could be mitigated by the implementation of a sticky preference for a master server at each slave, so that an initial AXFR response could be followed up with IXFR requests without compromising zone integrity in the case (as with Yeti) that equivalent but incongruent versions of a zone are served by different masters.
o  Key distribution for zone transfer credentials: the use of a shared secret between slave and master requires key distribution and management whose scaling properties are not ideally suited to systems with large numbers of transfer clients. Other approaches for key distribution and authentication could be considered.

o  DNS is a hierarchical, tree-based database: it has a root node, and child nodes depend on their parents. Failures or instability in a parent node (the root, in Yeti's case) may therefore affect its child nodes, whether the cause is human error, a malicious attack, or even an earthquake. It is proposed to define technology and practices that allow any organization, from the smallest company to a nation, to be self-sufficient in its DNS.

o  In Section 3.12 of [RFC8324], a "Centrally Controlled Root" is viewed as an issue for the DNS. In future work, it would be interesting to test technical tools such as blockchain [BC] to either remove the technical requirement for a central authority over the root or enhance the security and stability of the existing root.

7. Security Considerations
As introduced in Section 4.4, service metadata is synchronized among the three DMs using Git. Any security issue with Git may therefore affect Yeti DM operation; for example, an attacker who compromised one DM's Git repository could push unwanted changes to the Yeti DM system, introducing a bad root server or a bad key for a period of time.

A Yeti resolver needs bootstrapping files, such as the hints file and the Yeti trust anchor, to join the testbed. All required information is published on <yeti-dns.org> and <github.com>. If an attacker tampered with those websites by creating a fake page, a new resolver could be misdirected and configured with a bad root.

DNSSEC is an important research goal in the Yeti DNS testbed. To reduce the central function of DNSSEC for the root zone, we sign the Yeti-Root zone using multiple, independently operated DNSSEC signers and multiple corresponding ZSKs (see Section 4.2). To verify ICANN's KSK rollover, we rolled the Yeti KSK three times according to RFC 5011, and we recorded some observations (see Section 5.3). In addition, larger RSA key sizes were used in the testbed before 2048-bit keys were used in the ZSK signing process for the IANA Root Zone.

8. IANA Considerations
This document has no IANA actions.
9. References
9.1. Normative References
[RFC1034]  Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987, <https://www.rfc-editor.org/info/rfc1034>.

[RFC1035]  Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, November 1987, <https://www.rfc-editor.org/info/rfc1035>.

[RFC1995]  Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995, DOI 10.17487/RFC1995, August 1996, <https://www.rfc-editor.org/info/rfc1995>.

[RFC1996]  Vixie, P., "A Mechanism for Prompt Notification of Zone Changes (DNS NOTIFY)", RFC 1996, DOI 10.17487/RFC1996, August 1996, <https://www.rfc-editor.org/info/rfc1996>.

[RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC) Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011, September 2007, <https://www.rfc-editor.org/info/rfc5011>.

[RFC5890]  Klensin, J., "Internationalized Domain Names for Applications (IDNA): Definitions and Document Framework", RFC 5890, DOI 10.17487/RFC5890, August 2010, <https://www.rfc-editor.org/info/rfc5890>.

9.2. Informative References
[ATR]  Song, L., "ATR: Additional Truncation Response for Large DNS Response", Work in Progress, draft-song-atr-large-resp-02, August 2018.

[BC]  Wikipedia, "Blockchain", September 2018, <https://en.wikipedia.org/w/index.php?title=Blockchain&oldid=861681529>.

[FRAGDROP]  Jaeggli, J., Colitti, L., Kumari, W., Vyncke, E., Kaeo, M., and T. Taylor, "Why Operators Filter Fragments and What It Implies", Work in Progress, draft-taylor-v6ops-fragdrop-02, December 2013.

[FRAGMENTS]  Sivaraman, M., Kerr, S., and D. Song, "DNS message fragments", Work in Progress, draft-muks-dns-message-fragments-00, July 2015.
[hintUpdate]  "Hintfile Auto Update", commit de428c0, October 2015, <https://github.com/BII-Lab/Hintfile-Auto-Update>.

[HOW_ATR_WORKS]  Huston, G., "How well does ATR actually work?", APNIC blog, April 2018, <https://blog.apnic.net/2018/04/16/how-well-does-atr-actually-work/>.

[ICANN2010]  Schlyter, J., Lamb, R., and R. Balasubramanian, "DNSSEC Key Management Implementation for the Root Zone (DRAFT)", May 2010, <http://www.root-dnssec.org/wp-content/uploads/2010/05/draft-icann-dnssec-keymgmt-01.txt>.

[ICANN2016]  Design Team, "Root Zone KSK Rollover Plan", March 2016, <https://www.iana.org/reports/2016/root-ksk-rollover-design-20160307.pdf>.

[ICANN2017]  ICANN, "2017 KSK Rollover External Test Plan", July 2016, <https://www.icann.org/en/system/files/files/ksk-rollover-external-test-plan-22jul16-en.pdf>.

[IPv6-frag-DNS]  Huston, G., "Dealing with IPv6 fragmentation in the DNS", APNIC blog, August 2017, <https://blog.apnic.net/2017/08/22/dealing-ipv6-fragmentation-dns>.

[ISC-BIND]  Risk, V., "2017 Root Key Rollover - What Does it Mean for BIND Users?", Internet Systems Consortium, December 2016, <https://www.isc.org/blogs/2017-root-key-rollover-what-does-it-mean-for-bind-users/>.

[ISC-TN-2003-1]  Abley, J., "Hierarchical Anycast for Global Service Distribution", March 2003, <http://ftp.isc.org/isc/pubs/tn/isc-tn-2003-1.txt>.

[ITI2014]  ICANN, "Identifier Technology Innovation Report", May 2014, <https://www.icann.org/en/system/files/files/iti-report-15may14-en.pdf>.
[KROLL-ISSUE]  Song, D., "A DNSSEC issue during Yeti KSK rollover", Yeti DNS blog, October 2016, <http://yeti-dns.org/yeti/blog/2016/10/26/A-DNSSEC-issue-during-Yeti-KSK-rollover.html>.

[PINZ]  Song, D., "Yeti experiment plan for PINZ", Yeti DNS blog, May 2018, <http://yeti-dns.org/yeti/blog/2018/05/01/Experiment-plan-for-PINZ.html>.

[RFC2826]  Internet Architecture Board, "IAB Technical Comment on the Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May 2000, <https://www.rfc-editor.org/info/rfc2826>.

[RFC2845]  Vixie, P., Gudmundsson, O., Eastlake 3rd, D., and B. Wellington, "Secret Key Transaction Authentication for DNS (TSIG)", RFC 2845, DOI 10.17487/RFC2845, May 2000, <https://www.rfc-editor.org/info/rfc2845>.

[RFC6219]  Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The China Education and Research Network (CERNET) IVI Translation Design and Deployment for the IPv4/IPv6 Coexistence and Transition", RFC 6219, DOI 10.17487/RFC6219, May 2011, <https://www.rfc-editor.org/info/rfc6219>.

[RFC6891]  Damas, J., Graff, M., and P. Vixie, "Extension Mechanisms for DNS (EDNS(0))", STD 75, RFC 6891, DOI 10.17487/RFC6891, April 2013, <https://www.rfc-editor.org/info/rfc6891>.

[RFC7720]  Blanchet, M. and L-J. Liman, "DNS Root Name Service Protocol and Deployment Requirements", BCP 40, RFC 7720, DOI 10.17487/RFC7720, December 2015, <https://www.rfc-editor.org/info/rfc7720>.

[RFC7872]  Gont, F., Linkova, J., Chown, T., and W. Liu, "Observations on the Dropping of Packets with IPv6 Extension Headers in the Real World", RFC 7872, DOI 10.17487/RFC7872, June 2016, <https://www.rfc-editor.org/info/rfc7872>.

[RFC8109]  Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS Resolver with Priming Queries", BCP 209, RFC 8109, DOI 10.17487/RFC8109, March 2017, <https://www.rfc-editor.org/info/rfc8109>.
[RFC8324]  Klensin, J., "DNS Privacy, Authorization, Special Uses, Encoding, Characters, Matching, and Root Structure: Time for Another Look?", RFC 8324, DOI 10.17487/RFC8324, February 2018, <https://www.rfc-editor.org/info/rfc8324>.

[RRL]  Vixie, P. and V. Schryver, "Response Rate Limiting in the Domain Name System (DNS RRL)", June 2012, <http://www.redbarn.org/dns/ratelimits>.

[RSSAC001]  Root Server System Advisory Committee (RSSAC), "Service Expectations of Root Servers", RSSAC001 Version 1, December 2015, <https://www.icann.org/en/system/files/files/rssac-001-root-service-expectations-04dec15-en.pdf>.

[RSSAC023]  Root Server System Advisory Committee (RSSAC), "History of the Root Server System", November 2016, <https://www.icann.org/en/system/files/files/rssac-023-04nov16-en.pdf>.

[SUNSET4]  IETF, "Sunsetting IPv4 (sunset4) Concluded WG", <https://datatracker.ietf.org/wg/sunset4/about/>.

[TNO2009]  Gijsen, B., Jamakovic, A., and F. Roijers, "Root Scaling Study: Description of the DNS Root Scaling Model", TNO report, September 2009, <https://www.icann.org/en/system/files/files/root-scaling-model-description-29sep09-en.pdf>.

[USE_MIN_MTU]  Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU", Work in Progress, draft-andrews-tcp-and-ipv6-use-minmtu-04, October 2015.

[Wessels2015]  Wessels, D., Castonguay, J., and P. Barber, "Thirteen Years of 'Old J-Root'", DNS-OARC Fall 2015 Workshop, October 2015, <https://indico.dns-oarc.net/event/24/session/10/contribution/10/material/slides/0.pdf>.

[YetiLR]  "Observation on Large response issue during Yeti KSK rollover", Yeti DNS blog, August 2017, <https://yeti-dns.org/yeti/blog/2017/08/02/large-packet-impact-during-yeti-ksk-rollover.html>.