The workshop investigated deployment cases from certificate authorities for web connections (WebPKI) to DNS Security (DNSSEC), from the Border Gateway Protocol (BGP) to Network Address Translators (NATs), from DNS resolvers to CDNs, and from Internet of Things (IoT) systems to instant messaging and social media applications.
In many cases, (1) there was a surprise in how the technology was deployed, (2) there was a lack of sufficient adoption, or (3) the business models associated with the chosen technologies did not favor broader interoperability.
In general, protocol designers cannot affect market forces but must work within them. There are, however, often competing technical approaches or features that are tailored for a particular deployment pattern. In some cases, it is possible to choose whether to support, for instance, a clear need for an established business, a feature designed to support collaboration among smaller players, or some kind of disruption through a more speculative new feature or technology.
Lessons learned include the following:
- Feedback from those who deploy often comes too late.
- Building blocks get repurposed in unexpected ways.
- User communities come in too late.
- The Web is getting more centralized, and counteracting this trend is difficult. It is not necessarily clear what technical path leads to distributed markets and decentralized architectures, for instance.
- There are also many forces that make it easier to pursue centralized models than other models. For instance, deployment is often easier in a centralized model. And various business and regulatory processes work best within a small, well-defined set of entities that can interact with each other. This can lead to, for instance, regulators preferring a situation with a small number of entities that they can talk to, rather than a diverse set of providers.
- It is important but hard to determine how useful new protocols are.
- It is difficult for the IETF community to interact with other communities, e.g., specific business sectors that need new technology (such as aviation or healthcare) or regulators.
Several underlying principles can be observed in the example cases that were discussed. Deployment failures tend to be associated with cases where interdependencies make progress difficult and there is no major advantage for early deployment. Despite persistent problems in the currently used technology, it becomes difficult for the ecosystem to switch to better technology. For instance, there are a number of areas where the Internet routing protocol BGP [RFC 4271] is lacking, but there has been only limited success in deploying significant improvements -- for instance, in the area of security.
Another principle appears to be first-mover advantage. Several equally interesting technologies have fared in very different ways, depending on whether there was an earlier system that provided most of the benefits of the new system. Again, despite potential problems in an already-deployed technology, it becomes difficult to deploy improvements due to a lack of immediate incentives and due to the competing and already-deployed alternative that is proceeding forward in the ecosystem. For instance, WebPKI is very widely deployed and used, but DNSSEC [RFC 4033] is not. Is this because of the earlier commercial adoption of WebPKI, the more complex interdependencies between systems that wished to deploy DNSSEC, or some other reason?
The definition of "success" in [
RFC 5218] appears to be part of the problem. The only way to control deployments up front is to prevent wild success, but wild successes are actually what we want. And it seems very difficult to predict these successes.
The workshop also discussed the extent to which protocol work should even be controlled by the IETF or the IESG. It seems unproductive to attempt to constrain deployment models, as one can only offer possibilities but not force anyone to use a particular one.
The workshop also discussed different types of deployment patterns on the Internet:
- Delivering functionality over the Internet as a web service. The Internet is an open and standardized system, but the service on top may be closed, essentially running two components of the same service provider's software against each other over the browser and Internet infrastructure. Several large application systems have grown in the Internet in this manner, encompassing large amounts of functionality and a large fraction of Internet users. This makes it easier for web applications to grow by themselves without cross-fertilization or interoperability.
- Delivering concentrated network services that offer the standard capabilities of the Internet. Examples in this category include the provisioning of some mail services, DNS resolution, and so on.
The second case is more interesting for an Internet architecture discussion. There can, however, be different underlying situations even in that case. The service may be simply a concentrated way to provide a commodity service. The market should find a natural equilibrium for such situations. This may be fine, particularly where the service does not provide any new underlying advantage to whoever is providing it (in the form of user data that can be commercialized, for instance, or as training data for an important Machine Learning service).
Secondly, the service may be an extension beyond standard protocols, leading to some questions about how well standards and user expectations match. But those questions could be addressed by better or newer standards. Thirdly, and potentially most disturbingly, the service may be provided in this concentrated manner due to business patterns that make it easier for particular entities to deploy such services.
The group also discussed monocultures and their negative effect on the Internet, including on its stability and resistance to various problems and attacks.
Regulation may affect Internet businesses as well. Regulation can take multiple forms, based on economic rationale (e.g., competition law) or other factors. For instance, user privacy is a common regulatory topic.
Many of the participants have struggled with these trends and their effect on desirable characteristics of Internet systems, such as distributed, end-to-end architecture or privacy. Yet, there are many business and technical drivers causing the Internet architecture to become further and further centralized.
Some observations that were made:
- When standardizing new technology, the parties involved in the effort may think they agree on what the goals are but in reality are often surprised in the end. For instance, with DNS (queries) over HTTPS (DoH) [RFC 8484], there were very different aspirations: some around improvements in the confidentiality of queries, some around operational and latency improvements to DNS operations, and some about shifting business and deployment models. The full picture was not clear before the work was completed. (A minimal query sketch follows this list.)
- In DNS, DDoS is a practical reality, and only a handful of providers can handle the traffic load in these attacks.
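As a concrete illustration of the first observation above, the sketch below shows roughly what a DoH query amounts to on the wire: an ordinary HTTPS GET carrying a base64url-encoded DNS message, following the RFC 8484 GET variant. This is a minimal, illustrative sketch only; the resolver URL is a made-up placeholder, and a real client would choose a resolver, handle errors, and parse the response.

```python
# Minimal sketch of a DoH (RFC 8484) query: an ordinary HTTPS GET carrying
# a DNS query for "example.com" (type A) as a base64url-encoded DNS message.
# The resolver URL below is a placeholder, not a real endpoint.
import base64
import struct
import urllib.request

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (ID=0, RD=1) for the given name."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode()

# Hypothetical RFC 8484 resolver endpoint; any compliant resolver would do.
url = f"https://doh.example/dns-query?dns={encoded}"
req = urllib.request.Request(url, headers={"Accept": "application/dns-message"})
with urllib.request.urlopen(req) as resp:
    answer = resp.read()  # raw DNS response message, to be parsed by the client
print(len(answer), "bytes of DNS response")
```

The point is simply that DNS resolution becomes regular HTTPS traffic to whichever resolver the client (often the browser or application) is configured to use, which is why the confidentiality, latency, and deployment-model discussions all attach to the same mechanism.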
The hopeful side of this issue is that there are some potential answers:
- DDoS defenses do not have to come through large entities; layered defenses and federation can also help.
- Surveillance state data capture can be fought with data object encryption and by not storing all of the data in one place (see the sketch after this list).
- Web tracking can be combatted by browsers choosing to avoid techniques that are sensitive to tracking. Competition in the browser market may help drive some of these changes.
- Open interfaces help guard against the bundling of services in one large entity; as long as there are open, well-defined interfaces to specific functions, these functions can also be performed by other parties.
- Commercial surveillance does not seem to be curbed by current means. But there are still possibilities, such as stronger regulation, data minimization, or browsers acting on behalf of users. There are hopeful signs that at least some browsers are becoming more aggressive in this regard. But more is needed.
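On the second point in the list above (data object encryption), the following is a minimal sketch of the idea: the data owner encrypts an object before handing it to any storage or messaging provider, so a tap on that single provider yields only ciphertext. It assumes the third-party Python "cryptography" package and a made-up stored object; key management and sharing, which are the hard parts in practice, are out of scope here.

```python
# Minimal sketch of data object encryption before storage: the provider
# only ever sees ciphertext, so collecting data at that one place yields
# little. Assumes the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept by the data owner, never the provider
f = Fernet(key)

plaintext = b'{"user": "alice", "note": "a private data object"}'
ciphertext = f.encrypt(plaintext)    # this is what actually gets uploaded/stored

# Later, only a holder of the key can recover the object.
assert f.decrypt(ciphertext) == plaintext
```

Splitting objects and keys across independent parties, rather than storing everything in one place, follows the same logic.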
One comment made in the workshop was that the Internet community needs to curb the architectural trend of centralization. Another comment was that discussing this in the abstract is not as useful as more concrete, practical actions. For instance, one might imagine different DoH deployments with widely different implications for privacy or tolerance of failures. Getting to the specifics of how a particular service can be made better is important.
This part of the discussion focused on whether in the current state of the Internet we actually need a new threat model.
Many of the security concerns regarding communications have been addressed in the past few years, with increasing encryption. However, issues with trusting endpoints on the other side of the communication have not been addressed and are becoming more urgent with the advent of centralized service architectures.
Further effort may be needed to minimize centralization, as having only a few places to tap increases the likelihood of surveillance.
There may be a need to update [RFC 3552] and [RFC 7258].
The participants in the workshop agreed that a new threat model is needed and that non-communications-security issues need to be handled.
Other security discussions were focused on IoT systems, algorithm agility issues, experiences from difficult security upgrades such as DNSSEC key rollovers, and routing security.
The participants cautioned against relying too much on device manufacturers for security and stressed the need to be clear about security models and assumptions. Security is often poorly understood, and the assumptions about whom the system defends against, and whom it does not, are often left unclear.
The workshop turned into a discussion of what actions we can take:
- Documenting our experiences?
- Providing advice (to the IETF or to others)?
- Waiting for the catastrophe that will make people agree to changes? (The participants, of course, did not wish for this.)
- Work at the IETF?
- Technical solutions/choices?
The best way for the IETF to do things is through standards; convincing people through other requests is difficult. The IETF needs to:
- Pick pieces that it is responsible for.
- Be reactive for the rest, be available as an expert in other discussions, provide Internet technology knowledge where needed, etc.
One key question is what other parties need to be involved in any discussions. Platform developers (mobile platforms, cloud systems, etc.) are one such group. Specific technology or business groups (such as email provider or certificate authority forums) are another.
The workshop also discussed specific technology issues -- for instance, around IoT systems. One observation in those systems is that there is no single model for applications; they vary. There are a lot of different constraints in different systems and different control points. What is perhaps most needed today is user control and transparency (for instance, via Manufacturer Usage Descriptions (MUDs) [RFC 8520]). Another issue is management, particularly for devices that could be operational for decades. Given the diversity of IoT systems, it may also make more sense to build support systems for broader solutions than for specific solutions or specific protocols.
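To make the user control and transparency point a bit more concrete, the sketch below builds a skeletal MUD-style description in which a device declares, in machine-readable form, the network behavior it needs. The field names follow the general shape of the examples in RFC 8520, but this is an abbreviated, unvalidated illustration; the URLs, names, and values are made up.

```python
# Rough sketch of a MUD-style (RFC 8520) description: the manufacturer
# publishes a machine-readable statement of what network access the device
# needs, which the local network can enforce and show to the user.
# Abbreviated and unvalidated; URLs, names, and values are placeholders.
import json

mud_description = {
    "ietf-mud:mud": {
        "mud-version": 1,
        "mud-url": "https://manufacturer.example/lightbulb.json",
        "last-update": "2019-01-01T00:00:00+00:00",
        "cache-validity": 48,
        "is-supported": True,
        "systeminfo": "Example connected light bulb",
        "from-device-policy": {
            "access-lists": {"access-list": [{"name": "from-bulb"}]}
        },
        "to-device-policy": {
            "access-lists": {"access-list": [{"name": "to-bulb"}]}
        },
    }
    # A complete MUD file would also carry the referenced ACLs (under
    # "ietf-access-control-list:acls"), e.g., limiting the device to its
    # manufacturer's update service.
}

print(json.dumps(mud_description, indent=2))
```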
There are also many security issues. While some of them are trivial (such as default passwords), one should also look forward and be prepared to have solutions for, say, trust management for long time scales, or be able to provide data minimization to cut down on the potential for leakages. And the difficulty of establishing peer-to-peer security strengthens the need for a central point, which may also be harmful from a long-term privacy perspective.