
RFC 8404

Effects of Pervasive Encryption on Operators

Pages: 53
Informational


3. Encryption in Hosting and Application SP Environments

Hosted environments have had varied requirements in the past for encryption, with many businesses choosing to use these services primarily for data and applications that are not business or privacy sensitive. A shift began prior to the revelations on surveillance/passive monitoring, with businesses asking hosted environments to provide higher levels of security so that additional applications and services could be hosted externally. Businesses' growing understanding of the threats of monitoring in hosted environments increased that pressure to provide more secure access and session encryption to protect the management of hosted environments as well as the data and applications.

3.1. Management-Access Security

Hosted environments may have multiple levels of management access, where some may be strictly for the Hosting service provider (infrastructure that may be shared among customers), and some may be accessed by a specific customer for application management. In some cases, there are multiple levels of hosting service providers, further complicating the security of management infrastructure and the associated requirements. Hosting service provider management access is typically segregated from other traffic with a control channel and may or may not be encrypted, depending upon the isolation characteristics of the management session. Customer access may be through a dedicated connection, but discussion of that connection method is out of scope for this document. In overlay networks (e.g., Virtual eXtensible Local Area Network (VXLAN), Geneve, etc.) that are used to provide hosted services, management access for a customer to support application management may depend upon the security mechanisms available as part of that overlay network. While overlay-network data encapsulations may be used to indicate the desired isolation, this is not sufficient to prevent deliberate attacks that are aware of the use of the overlay network. [GENEVE-REQS] describes requirements to handle such attacks. It is possible to use an overlay header in combination with IPsec or other encrypted traffic sessions, but this adds the requirement for authentication infrastructure and may reduce packet transfer performance. An overlay header may also be deployed by network service providers as a mechanism to manage encrypted traffic streams on the network. Additional extension mechanisms to provide integrity and/or privacy protections are being investigated for overlay encapsulations. Section 7 of [RFC7348] describes some of
   the security issues possible when deploying VXLAN on Layer 2
   networks.  Rogue endpoints can join the multicast groups that carry
   broadcast traffic, for example.
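
As a rough illustration of why an overlay encapsulation alone provides isolation but not confidentiality, the sketch below parses the VNI out of a cleartext VXLAN-over-UDP payload following the header layout in [RFC7348]; it is a minimal example, not part of any cited implementation, and any on-path observer of UDP port 4789 traffic could do the same unless the outer packets are protected with IPsec or a similar mechanism.

   import struct

   VXLAN_UDP_PORT = 4789   # IANA-assigned VXLAN destination port

   def parse_vxlan_header(udp_payload: bytes):
       """Return the 24-bit VNI from a cleartext VXLAN header, or None.

       VXLAN header ([RFC7348], Section 5): 1 byte of flags (0x08 set
       when the VNI is valid), 3 reserved bytes, 3 bytes of VNI, and
       1 final reserved byte.
       """
       if len(udp_payload) < 8:
           return None
       flags, vni_and_rsvd = struct.unpack("!B3xI", udp_payload[:8])
       if not flags & 0x08:        # "I" flag: VNI field is valid
           return None
       return vni_and_rsvd >> 8    # drop the trailing reserved byte

   # Example: a captured header carrying VNI 0x123456
   sample = bytes([0x08, 0, 0, 0, 0x12, 0x34, 0x56, 0x00])
   print(parse_vxlan_header(sample))   # -> 1193046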

3.1.1. Monitoring Customer Access

Hosted applications that allow some level of customer-management access may also require monitoring by the hosting service provider. Monitoring could include verifying that access-control restrictions, such as authentication, authorization, and accounting, as well as filtering and firewall rules, are continuously met. Customer access may occur on multiple levels, including user-level and administrative access. The hosting service provider may need to monitor access through either session monitoring or log evaluation to ensure security SLAs for access management are met. The use of session encryption to access hosted environments limits access restrictions to the metadata described below. Monitoring and filtering may occur at a:

   2-tuple:  the IP level, with source and destination IP addresses
             alone, or

   5-tuple:  the IP and protocol level, with a source IP address,
             destination IP address, protocol number, source port
             number, and destination port number.

Session encryption at the application level, for example, TLS, currently allows access to the 5-tuple. IP-level encryption, such as IPsec in tunnel mode, prevents access to the original 5-tuple and may limit the ability to restrict traffic via filtering techniques. This shift may not impact all hosting service provider solutions, as alternate controls may be used to authenticate sessions, or access may require that clients first connect to the organization before accessing the hosted application. Shifts in access may be required to maintain equivalent access-control management. Logs may also be used to monitor that access-control restrictions are met but would be limited to the data observable at the point of log generation given the encryption in place. Log analysis is out of scope for this document.
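
A minimal sketch of the difference between the two levels of filtering, assuming hypothetical flow records that carry whatever fields remain visible: with TLS the full 5-tuple can be matched, while with IPsec in tunnel mode only the outer 2-tuple is available to the filter.

   from typing import NamedTuple, Optional

   class Flow(NamedTuple):
       src_ip: str
       dst_ip: str
       protocol: Optional[int] = None   # hidden by IPsec tunnel mode
       src_port: Optional[int] = None
       dst_port: Optional[int] = None

   def allowed(flow: Flow, rules: list) -> bool:
       """Return True if any rule matches on the fields it specifies.

       A rule is a dict of field name -> required value; fields the
       rule does not name are wildcards.  Rules that reference ports
       or protocol cannot match flows where those fields are not
       visible (e.g., IPsec tunnel mode exposes only the 2-tuple).
       """
       for rule in rules:
           if all(getattr(flow, f) == v for f, v in rule.items()):
               return True
       return False

   rules = [
       # 5-tuple rule: only HTTPS to the hosted application front end
       {"src_ip": "192.0.2.10", "dst_ip": "203.0.113.5",
        "protocol": 6, "dst_port": 443},
   ]

   tls_flow   = Flow("192.0.2.10", "203.0.113.5", 6, 52044, 443)
   ipsec_flow = Flow("192.0.2.10", "203.0.113.5")   # 2-tuple only

   print(allowed(tls_flow, rules))    # True
   print(allowed(ipsec_flow, rules))  # False: port/protocol not visible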

3.1.2. SP Content Monitoring of Applications

The following observations apply to any IT organization that is responsible for delivering services, whether to third parties, for example, as a web-based service, or to internal customers in an enterprise, e.g., a data-processing system that forms a part of the enterprise's business.
   Organizations responsible for the operation of a data center have
   many processes that access the contents of IP packets (passive
   methods of measurement, as defined in [RFC7799]).  These processes
   are typically for service assurance or security purposes as part of
   their data-center operations.

   Examples include:

      - Network-Performance Monitoring / Application-Performance
        Monitoring

      - Intrusion defense/prevention systems

      - Malware detection

      - Fraud monitoring

      - Application DDoS protection

      - Cyber-attack investigation

      - Proof of regulatory compliance

      - Data leakage prevention

   Many application service providers simply terminate sessions to/from
   the Internet at the edge of the data center in the form of SSL/TLS
   offload in the load balancer.  Not only does this reduce the load on
   application servers, it simplifies the processes to enable monitoring
   of the session content.

   However, in some situations, encryption deeper in the data center may
   be necessary to protect personal information or in order to meet
   industry regulations, e.g., those set out by the Payment Card
   Industry (PCI).  In such situations, various methods have been used
   to allow service assurance and security processes to access
   unencrypted data.  These include SSL/TLS decryption in dedicated
   units, which then forward packets to SP-controlled tools, or real-
   time or post-capture decryption in the tools themselves.  A number of
   these tools provide passive decryption by providing the monitoring
   device with the server's private key.  The move to increased use of
   the forward-secret key exchange mechanism impacts the use of these
   techniques.
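
The reason forward secrecy undermines these techniques can be observed directly from the negotiated cipher suite. The sketch below, a simple illustration using Python's standard ssl module against a placeholder host name, flags sessions for which handing a static server private key to a monitoring device would no longer allow passive decryption.

   import socket
   import ssl

   def negotiated_cipher(host: str, port: int = 443):
       ctx = ssl.create_default_context()
       with socket.create_connection((host, port), timeout=5) as sock:
           with ctx.wrap_socket(sock, server_hostname=host) as tls:
               return tls.version(), tls.cipher()[0]

   def forward_secret(version: str, cipher: str) -> bool:
       # All TLS 1.3 suites use an ephemeral (EC)DHE key exchange;
       # for earlier versions, look for (EC)DHE in the suite name.
       return version == "TLSv1.3" or "DHE" in cipher

   version, cipher = negotiated_cipher("example.com")
   if forward_secret(version, cipher):
       print(f"{cipher} ({version}): static-key passive decryption "
             "is not possible for this session")
   else:
       print(f"{cipher} ({version}): static-key passive decryption "
             "may still be possible")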

   Operators of data centers may also maintain packet recordings in
   order to be able to investigate attacks, breaches of internal
   processes, etc.  In some industries, organizations may be legally
   required to maintain such information for compliance purposes.
   Investigations of this nature have used access to the unencrypted
   contents of the packet.  Alternate methods to investigate attacks or
   breaches of process will rely on endpoint information, such as logs.
   As previously noted, logs often lack complete information, and this
   is seen as a concern, resulting in some operators relying on session
   access for additional information.

   Application service providers may offer content-level monitoring
   options to detect intellectual property leakage or other attacks.  In
   service provider environments where Data Loss Prevention (DLP) has
   been implemented on the basis of the service provider having
   cleartext access to session streams, the use of encrypted streams
   prevents these implementations from conducting content searches for
   the keywords or phrases configured in the DLP system.  DLP is often
   used to prevent the leakage of Personally Identifiable Information
   (PII) as well as financial account information, Personal Health
   Information (PHI), and PCI.  If session encryption is terminated at a
   gateway prior to accessing these services, DLP on session data can
   still be performed.  The decision of where to terminate encryption to
   hosted environments will be a risk decision made between the
   application service provider and customer organization according to
   their priorities.  DLP can be performed at the server for the hosted
   application and on an end user's system in an organization as
   alternate or additional monitoring points of content; however, this
   is not frequently done in a service provider environment.
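
As a hedged illustration of what such content searches involve, the sketch below scans cleartext payloads for a configured keyword list and for candidate payment card numbers (validated with the Luhn check commonly used for PCI-related DLP); the keywords and patterns are placeholders, and none of this matching is possible once the stream is encrypted before the inspection point.

   import re

   KEYWORDS = ("confidential", "trade secret")        # assumed policy
   PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

   def luhn_ok(digits: str) -> bool:
       total, parity = 0, len(digits) % 2
       for i, ch in enumerate(digits):
           d = int(ch)
           if i % 2 == parity:
               d *= 2
               if d > 9:
                   d -= 9
           total += d
       return total % 10 == 0

   def dlp_findings(payload: str):
       findings = [kw for kw in KEYWORDS if kw in payload.lower()]
       for match in PAN_CANDIDATE.finditer(payload):
           digits = re.sub(r"[ -]", "", match.group())
           if luhn_ok(digits):
               findings.append("possible payment card number")
       return findings

   print(dlp_findings("Attached is a CONFIDENTIAL quote; "
                      "card 4111 1111 1111 1111 on file."))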

   Application service providers, by their very nature, control the
   application endpoint.  As such, much of the information gleaned from
   sessions is still available on that endpoint.  However, gaps in the
   application's logging and debugging capabilities have led application
   service providers to access data in transport for monitoring and
   debugging.

3.2. Hosted Applications

Organizations are increasingly using hosted applications rather than in-house solutions that require maintenance of equipment and software. Examples include Enterprise Resource Planning (ERP) solutions, payroll services, time and attendance, and travel and expense reporting, among others. Organizations may require some level of management access to these hosted applications and will typically require session encryption or a dedicated channel for this activity. In other cases, hosted applications may be fully managed by a hosting service provider with SLA expectations for availability and performance as well as for security functions including malware detection. Due to the sensitive nature of these hosted environments, the use of encryption is already prevalent. Any impact may be
   similar to an enterprise with tools being used inside of the hosted
   environment to monitor traffic.  Additional concerns were not
   reported in the call for contributions.

3.2.1. Monitoring Managed Applications

Performance, availability, and other aspects of an SLA are often collected through passive monitoring. For example:

   o  Availability: the ability to establish connections with hosts to
      access applications and to discern the difference between
      network- or host-related causes of unavailability.

   o  Performance: the ability to complete transactions within a target
      response time and to discern the difference between network- or
      host-related causes of excess response time.

Here, as with all passive monitoring, the accuracy of inferences depends on the cleartext information available, and encryption tends to reduce that information and, therefore, the accuracy of each inference. Passive measurement of some metrics will be impossible with encryption that prevents inferring packet correspondence across multiple observation points, such as packet-loss metrics. Application logging currently lacks detail sufficient to make accurate inferences in an environment with increased encryption, so this constitutes a gap for passive performance monitoring (which could be closed if log details are enhanced in the future).
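
A simple sketch of the kind of multipoint passive measurement that becomes impossible when per-packet correspondence can no longer be inferred: given hypothetical sets of packet identifiers observed at an ingress and an egress point over the same interval, loss is just the identifiers seen at one point but not the other.

   def passive_loss(ingress_ids: set, egress_ids: set) -> float:
       """Loss ratio between two observation points.

       Works only while some per-packet identifier (e.g., a cleartext
       sequence number or header hash) lets the same packet be
       recognized at both points; encryption that hides or randomizes
       those fields breaks the correspondence this relies on.
       """
       if not ingress_ids:
           return 0.0
       lost = ingress_ids - egress_ids
       return len(lost) / len(ingress_ids)

   ingress = {101, 102, 103, 104, 105}
   egress  = {101, 102, 104, 105}
   print(f"loss = {passive_loss(ingress, egress):.0%}")   # 20%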

3.2.2. Mail Service Providers

Mail (application) service providers vary in what services they offer. Options may include a fully hosted solution where mail is stored external to an organization's environment on mail service provider equipment, or the service offering may be limited to monitoring incoming mail to remove spam (Section 5.1), phishing attacks (Section 5.3), and malware (Section 5.6) before mail is directed to the organization's equipment. In both of these cases, the content of the messages and headers is monitored to detect and remove messages that are undesirable or that may be considered an attack. STARTTLS should have zero effect on anti-spam efforts for SMTP traffic. Anti-spam services could easily be performed on an SMTP gateway, eliminating the need for TLS decryption services. The impact to anti-spam service providers should be limited to a change in tools, where middleboxes were deployed to perform these functions.
   Many efforts are emerging to improve user-to-user encryption,
   including promotion of PGP and newer efforts such as Dark Mail
   [DarkMail].  Of course, content-based spam filtering will not be
   possible on encrypted content.

3.3. Data Storage

Numerous service offerings exist that provide hosted storage solutions. This section describes the various offerings and details the monitoring for each type of service and how encryption may impact the operational and security monitoring performed. Trends in data storage encryption for hosted environments include a range of options. The following list is intentionally high-level to describe the types of encryption used in coordination with data storage that may be hosted remotely, meaning the storage is physically located in an external data center requiring transport over the Internet. Options for monitoring will vary with each encryption approach described below. In most cases, solutions have been identified to provide encryption while ensuring that management capabilities are maintained through logging or other means.

3.3.1. Object-Level Encryption

For higher security and/or privacy of data and applications, options that provide end-to-end encryption of the data from the user's desktop or server to the storage platform may be preferred. This description includes any solution that encrypts data at the object level rather than the transport level. Encryption of data may be performed with libraries on the system or at the application level, which includes file-encryption services via a file manager. Object-level encryption is useful when data storage is hosted or in scenarios where the storage location is determined based on capacity or on a set of parameters that automate decisions. This could mean that large datasets accessed infrequently could be sent to an off-site storage platform at an external hosting service, data accessed frequently may be stored locally, or the decision of where to store datasets could be based on the transaction type. Object-level encryption is grouped separately for the purpose of this document since data may be stored in multiple locations, including off-site remote storage platforms. If session encryption is also used, the protocol is likely to be TLS. Impacts to monitoring may include access to content inspection for data-leakage prevention and similar technologies, depending on their placement in the network.
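
A minimal sketch of object-level (as opposed to transport-level) encryption, using the third-party Python "cryptography" package's Fernet construction purely as a stand-in for whatever library or file-manager service an application actually uses: the object is encrypted before it ever leaves the desktop or server, so the hosted storage platform and any on-path inspection see only ciphertext.

   # pip install cryptography  (third-party library, used here only
   # as an illustrative stand-in for an application's own crypto)
   from cryptography.fernet import Fernet

   key = Fernet.generate_key()        # kept by the data owner, not
   f = Fernet(key)                    # by the storage provider

   plaintext = b"quarterly-results.xlsx contents"
   ciphertext = f.encrypt(plaintext)  # what is uploaded / transported

   # Only the key holder can recover the object later:
   assert f.decrypt(ciphertext) == plaintext
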
3.3.1.1. Monitoring for Hosted Storage
Monitoring of hosted storage solutions that use host-level (object) encryption is described in this subsection. Host-level encryption can be employed for backup services and occasionally for external storage services (operated by a third party) when internal storage limits are exceeded. Monitoring of data flows to hosted storage solutions is performed for security and operational purposes. The security monitoring may be to detect anomalies in the data flows that could include changes to destination, the amount of data transferred, or alterations in the size and frequency of flows. Operational considerations include capacity and availability monitoring.
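
A hedged sketch of the kind of anomaly detection that remains possible even when the stored objects are encrypted, because it relies only on flow metadata (destination, bytes transferred, frequency); the record format and thresholds here are assumptions, not an operational detector.

   from statistics import mean, pstdev

   def storage_flow_anomalies(flows, history, z_threshold=3.0):
       """Flag flows to new destinations or with unusual volumes.

       'flows' is an iterable of (dst_ip, bytes_sent) for the current
       interval; 'history' maps dst_ip -> list of byte counts from
       previous intervals.  Content is never inspected.
       """
       alerts = []
       for dst, sent in flows:
           past = history.get(dst)
           if not past:
               alerts.append((dst, "new storage destination"))
               continue
           mu, sigma = mean(past), pstdev(past)
           if sigma and (sent - mu) / sigma > z_threshold:
               alerts.append((dst, f"volume spike: {sent} bytes"))
       return alerts

   history = {"198.51.100.20": [1_000_000, 1_200_000, 950_000]}
   flows = [("198.51.100.20", 9_800_000), ("203.0.113.77", 50_000)]
   print(storage_flow_anomalies(flows, history))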

3.3.2. Disk Encryption, Data at Rest (DAR)

There are multiple ways to achieve full disk encryption for stored data. Encryption of data to be stored may be performed while in transit close to the storage media, with solutions like Controller Based Encryption (CBE), or in the drive system, with Self-Encrypting Drives (SEDs). Session encryption is typically coupled with these data-at-rest (DAR) solutions to also protect the data in transit. Transport encryption is likely via TLS.
3.3.2.1. Monitoring Session Flows for DAR Solutions
Monitoring for transport of data-to-storage platforms, where object-level encryption is performed close to or on the storage platform, is similar to that described in Section 3.3.1.1. The primary difference for these solutions is the possible exposure of sensitive information, which could include privacy-related data, financial information, or intellectual property, if session encryption via TLS is not deployed. Session encryption is typically used with these solutions, but that decision would be based on a risk assessment. There are use cases where DAR or disk-level encryption is required; examples include preventing exposure of data if physical disks are stolen or lost. In the case where TLS is in use, monitoring and the exposure of data are limited to a 5-tuple.

3.3.3. Cross-Data-Center Replication Services

Storage services also include data replication, which may occur between data centers and may leverage Internet connections to tunnel traffic. The traffic may use an Internet Small Computer System Interface (iSCSI) [RFC7143] or Fibre Channel over TCP/IP (FCIP) [RFC7146] encapsulated in IPsec. Either transport or tunnel mode may be used for IPsec depending upon the termination points of the IPsec
   session, if it is from the storage platform itself or from a gateway
   device at the edge of the data center, respectively.

3.3.3.1. Monitoring IPsec for Data Replication Services
The monitoring of data flows between data centers (for data replication) may be performed for security and operational purposes and would typically concentrate more on operational aspects, since these flows are essentially virtual private networks (VPNs) between data centers. Operational considerations include capacity and availability monitoring. The security monitoring may be to detect anomalies in the data flows, similar to what was described in Section 3.3.1.1. If IPsec tunnel mode is in use, monitoring is limited to a 2-tuple; with transport mode, it is limited to a 5-tuple.
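
A small sketch of 2-tuple capacity monitoring for such replication VPNs, assuming simplified packet records: with ESP (IP protocol 50) in tunnel mode, the outer source and destination addresses and packet lengths are all that remain visible, which is still enough to track utilization and availability per tunnel.

   from collections import defaultdict

   def per_tunnel_usage(packets):
       """Aggregate bytes per (src, dst) 2-tuple for ESP traffic.

       'packets' is an iterable of (src_ip, dst_ip, ip_protocol,
       length) records.  Nothing inside the encrypted payload is
       needed for capacity or availability monitoring.
       """
       usage = defaultdict(int)
       for src, dst, proto, length in packets:
           if proto == 50:                      # ESP
               usage[(src, dst)] += length
       return dict(usage)

   pkts = [("192.0.2.1", "198.51.100.1", 50, 1400),
           ("192.0.2.1", "198.51.100.1", 50, 1400),
           ("192.0.2.9", "198.51.100.9", 6, 60)]    # not IPsec
   print(per_tunnel_usage(pkts))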

4. Encryption for Enterprises

Encryption of network traffic within the private enterprise is a growing trend, particularly in industries with audit and regulatory requirements. Some enterprise-internal networks are almost completely TLS and/or IPsec encrypted. For each type of monitoring, different techniques and access to parts of the data stream are part of current practice. As we transition to an increased use of encryption, alternate methods of monitoring for operational purposes may be necessary to reduce the practice of breaking encryption (other policies may apply in some enterprise settings).

4.1. Monitoring Practices of the Enterprise

Large corporate enterprises are the owners of the platforms, data, and network infrastructure that provide critical business services to their user communities. As such, these enterprises are responsible for all aspects of the performance, availability, security, and quality of experience for all user sessions. In many such enterprises, users are required to consent to the enterprise monitoring all their activities as a condition of employment. Subsections of Section 4 discuss techniques that access data beyond the data-link, network, and transport-level headers typically used in service provider networks, since the corporate enterprise owns the data. These responsibilities break down into three basic areas:

   1.  Security Monitoring and Control

   2.  Application-Performance Monitoring and Reporting

   3.  Network Diagnostics and Troubleshooting
   In each of the above areas, technical support teams utilize
   collection, monitoring, and diagnostic systems.  Some organizations
   currently use attack methods such as replicated TLS server RSA
   private keys to decrypt passively monitored copies of encrypted TLS
   packet streams.

   For an enterprise to avoid costly application down time and deliver
   expected levels of performance, protection, and availability, some
   forms of traffic analysis, sometimes including examination of packet
   payloads, are currently used.

4.1.1. Security Monitoring in the Enterprise

Enterprise users are subject to the policies of their organization and the jurisdictions in which the enterprise operates. As such, proxies may be in use to:

   1.  intercept outbound session traffic to monitor for intellectual
       property leakage (by users, malware, and trojans),

   2.  detect viruses/malware entering the network via email or web
       traffic,

   3.  detect malware/trojans in action, possibly connecting to remote
       hosts,

   4.  detect attacks (cross-site scripting and other common
       web-related attacks),

   5.  track misuse and abuse by employees,

   6.  restrict the types of protocols permitted to/from the entire
       corporate environment, and

   7.  detect and defend against Internet DDoS attacks, including both
       volumetric and Layer 7 attacks.

A significant portion of malware hides its activity within TLS or other encryption protocols. This includes lateral movement, Command and Control (C&C), and Data Exfiltration. The impact to a fully encrypted internal network would include cost and possible loss of detection capabilities associated with the transformation of the network architecture and tools for monitoring. The capabilities of detection through traffic fingerprinting, logging, host-level transaction monitoring, and flow analysis would vary depending on access to a 2-tuple or 5-tuple in the network as well.
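
As an example of the flow-analysis techniques that still work against a 5-tuple (or even a 2-tuple) when payloads are encrypted, the sketch below flags highly regular ("beaconing") connection patterns of the kind often associated with C&C traffic; the interval statistics and thresholds are illustrative assumptions, not a tested detector.

   from statistics import mean, pstdev

   def beaconing_pairs(conn_times, max_jitter_ratio=0.1, min_count=6):
       """Return (src, dst) pairs whose connection start times are
       suspiciously regular.

       'conn_times' maps (src_ip, dst_ip) -> sorted list of connection
       start times in seconds.  Payload content is never needed.
       """
       suspects = []
       for pair, times in conn_times.items():
           if len(times) < min_count:
               continue
           gaps = [b - a for a, b in zip(times, times[1:])]
           avg = mean(gaps)
           if avg > 0 and pstdev(gaps) / avg < max_jitter_ratio:
               suspects.append((pair, avg))
       return suspects

   times = {("10.0.0.5", "203.0.113.9"):
            [0, 300, 601, 899, 1200, 1502, 1799]}
   print(beaconing_pairs(times))   # ~300 s beacon with little jitter
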
   Security monitoring in the enterprise may also be performed at the
   endpoint with numerous current solutions that mitigate the same
   problems as some of the above-mentioned solutions.  Since the
   software agents operate on the device, they are able to monitor
   traffic before it is encrypted, monitor for behavior changes and lock
   down devices to use only the expected set of applications.  Session
   encryption does not affect these solutions.  Some might argue that
   scaling is an issue in the enterprise, but some large enterprises
   have used these tools effectively.

   Use of bring-your-own-device (BYOD) policies within organizations may
   limit the scope of monitoring permitted with these alternate
   solutions.  Network endpoint assessment (NEA) or the use of virtual
   hosts could help to bridge the monitoring gap.

4.1.2. Monitoring Application Performance in the Enterprise

There are two main goals of monitoring:

   1.  Assess traffic volume on a per-application basis for billing,
       capacity planning, optimization of geographical location for
       servers or proxies, and other goals.

   2.  Assess performance in terms of application response time and
       user-perceived response time.

Network-based application-performance monitoring tracks application response time by user and by URL, which is the information that the application owners and the lines of business request. CDNs add complexity in determining the ultimate endpoint destination. By their very nature, such information is obscured by CDNs and encrypted protocols, adding a new challenge for troubleshooting network and application problems. URL identification allows the application support team to do granular, code-level troubleshooting at multiple tiers of an application. New methodologies to monitor user-perceived response time and to separate network from server time are evolving. For example, the IPv6 Destination Option Header (DOH) implementation of Performance and Diagnostic Metrics (PDM) [RFC8250] will provide this. Using PDM with IPsec Encapsulating Security Payload (ESP) Transport Mode requires placement of the PDM DOH within the ESP-encrypted payload to avoid leaking timing and sequence number information that could be useful to an attacker. Use of PDM DOH also may introduce some security weaknesses, including a timing attack, as described in Section 4 of [RFC8250]. For these and other reasons, [RFC8250]
   requires that the PDM DOH option be explicitly turned on by
   administrative action in each host where this measurement feature
   will be used.
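
A hedged arithmetic sketch of the kind of separation [RFC8250] enables: if the response carries a PDM-style "delta time last received" reported by the responding host, the client can split its observed round-trip time into server time and network time. The scaling factors and sequence-number matching that real PDM data carries are deliberately simplified away here.

   def split_response_time(client_elapsed_us, server_delta_us):
       """Split client-observed elapsed time into server and network
       components, given the server-reported time between receiving
       the request and sending the response (as PDM-style data would
       convey).  Real PDM values include scaling factors and sequence
       numbers that this sketch omits.
       """
       server_time = min(server_delta_us, client_elapsed_us)
       network_time = client_elapsed_us - server_time
       return server_time, network_time

   # 48 ms observed at the client, 31 ms of which was server processing
   print(split_response_time(48_000, 31_000))   # (31000, 17000)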

4.1.3. Diagnostics and Troubleshooting for Enterprise Networks

One primary key to network troubleshooting is the ability to follow a transaction through the various tiers of an application in order to isolate the fault domain. A variety of factors relating to the structure of the modern data center and multi-tiered application have made it difficult to follow a transaction in network traces without the ability to examine some of the packet payload. Alternate methods, such as log analysis, need improvement to fill this gap.
4.1.3.1. Address Sharing (NAT)
CDNs, NATs, and Network Address and Port Translators (NAPTs) obscure the ultimate endpoint designation (see [RFC6269] for types of address sharing and a list of issues). Troubleshooting a problem for a specific end user requires finding information such as the IP address and other identifying information so that their problem can be resolved in a timely manner.

NAT is also frequently used by lower layers of the data-center infrastructure. Firewalls, load balancers, web servers, app servers, and middleware servers all regularly NAT the source IP of packets. Combine this with the fact that users are often allocated randomly by load balancers to all these devices, and the network troubleshooter is often left with very few options in today's environment due to poor logging implementations in applications.

As such, network troubleshooting is used to trace packets at a particular layer, decrypt them, and look at the payload to find a user session. This kind of bulk packet capture and bulk decryption is frequently used when troubleshooting a large and complex application. Endpoints typically don't have the capacity to handle this level of network packet capture, so out-of-band networks of robust packet brokers and network sniffers that use techniques such as copies of TLS RSA private keys accomplish this task today.
4.1.3.2. TCP Pipelining / Session Multiplexing
TCP pipelining / session multiplexing used mainly by middleboxes today allows for multiple end-user sessions to share the same TCP connection. This raises several points of interest with an increased use of encryption. TCP session multiplexing should still be possible when TLS or TCPcrypt is in use since the TCP header information is exposed, leaving the 5-tuple accessible. The use of TCP session
   multiplexing with an IP-layer encryption, e.g., IPsec, that only
   exposes a 2-tuple would not be possible.  Troubleshooting
   capabilities with encrypted sessions from the middlebox may limit
   troubleshooting to the use of logs from the endpoints performing the
   TCP multiplexing or from the middleboxes prior to any additional
   encryption that may be added to tunnel the TCP multiplexed traffic.

   Increased use of HTTP/2 will likely further increase the prevalence
   of session multiplexing, both on the Internet and in the private data
   center.  HTTP pipelining requires both the client and server to
   participate; visibility of packets once encrypted will hide the use
   of HTTP pipelining for any monitoring that takes place outside of the
   endpoint or proxy solution.  Since HTTP pipelining is between a
   client and server, logging capabilities may require improvement in
   some servers and clients for debugging purposes if this is not
   already possible.  Visibility for middleboxes includes anything
   exposed by TLS and the 5-tuple.

4.1.3.3. HTTP Service Calls
When an application server makes an HTTP service call to back-end services on behalf of a user session, it uses a completely different URL and a completely different TCP connection. Troubleshooting via network trace involves matching up the user request with the HTTP service call. Some organizations do this today by decrypting the TLS packet and inspecting the payload. Logging has not been adequate for their purposes.
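
One logging improvement that would reduce the need for payload decryption here is propagating a request identifier from the user-facing request onto the back-end HTTP service call, so the two can be joined from logs rather than from decrypted traces. The sketch below is purely illustrative: the "X-Request-Id" header name, the back-end URL, and the use of the third-party "requests" library are all assumptions, not part of any cited system.

   # pip install requests  (third-party HTTP client, illustrative only)
   import uuid
   import requests

   def handle_user_request(user_headers):
       # Reuse the caller's id if present; otherwise mint one, and log
       # it at every tier so a transaction can be followed end to end.
       req_id = user_headers.get("X-Request-Id", str(uuid.uuid4()))
       print(f"front-end request {req_id}")
       backend = requests.get(
           "https://backend.internal.example/orders",   # placeholder
           headers={"X-Request-Id": req_id},
           timeout=5,
       )
       print(f"back-end call {req_id} -> {backend.status_code}")
       return backend
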
4.1.3.4. Application-Layer Data
Many applications use text formats such as XML to transport data or application-level information. When transaction failures occur and the logs are inadequate to determine the cause, network and application teams work together, each having a different view of the transaction failure. Using this troubleshooting method, the network packet is correlated with the actual problem experienced by an application to find a root cause. The inability to access the payload prevents this method of troubleshooting.

4.2. Techniques for Monitoring Internet-Session Traffic

Corporate networks commonly monitor outbound session traffic to detect or prevent attacks as well as to guarantee service-level expectations. In some cases, alternate options are available when encryption is in use through a proxy or a shift to monitoring at the endpoint. In both cases, scaling is a concern, and advancements to support this shift in monitoring practices will assist the deployment of end-to-end encryption.
   Some DLP tools intercept traffic at the Internet gateway or proxy
   services with the ability to MITM encrypted session traffic (HTTP/
   TLS).  These tools may monitor for key words important to the
   enterprise including business-sensitive information such as trade
   secrets, financial data, PII, or PHI.  Various techniques are used to
   intercept HTTP/TLS sessions for DLP and other purposes and can be
   misused as described in "Summarizing Known Attacks on Transport Layer
   Security (TLS) and Datagram TLS (DTLS)" [RFC7457] (see Section 2.8).
   Note: many corporate policies allow access to personal financial and
   other sites for users without interception.  Another option is to
   terminate a TLS session prior to the point where monitoring is
   performed.  Aside from exposing user information to the enterprise,
   MITM devices often are subject to severe security defects, which can
   lead to exposure of user data to attackers outside the enterprise
   [UserData].  In addition, implementation errors in
   middleboxes have led to major difficulties in deploying new versions
   of security protocols such as TLS [Ben17a] [Ben17b] [Res17a]
   [Res17b].

   Monitoring traffic patterns for anomalous behavior such as increased
   flows of traffic that could be bursty at odd times or flows to
   unusual destinations (small or large amounts of traffic) is common.
   This traffic may or may not be encrypted, and various methods of
   encryption or just obfuscation may be used.

   Web-filtering devices are sometimes used to allow only access to
   well-known sites found to be legitimate and free of malware on last
   check by a web-filtering service company.  One common example of web
   filtering in a corporate environment is blocking access to sites that
   are not well known to these tools for the purpose of blocking
   malware; this may be noticeable to those in research who are unable
   to access colleagues' individual sites or new websites that have not
   yet been screened.  In situations where new sites are required for
   access, they can typically be added after notification by the user or
   log alerts and review.  Account access for personal mail may be
   blocked in corporate settings to prevent another vector for malware
   from entering as well as to prevent intellectual property leaks out
   of the network.  This method remains functional with increased use of
   encryption and may be more effective at preventing malware from
   entering the network.  Some enterprises may be more aggressive in
   their filtering and monitoring policy, causing undesirable outcomes.
   Web-filtering solutions monitor and potentially restrict access based
   on the destination URL (when available), server name, IP address, or
   DNS name.  A complete URL may be used in cases where access
   restrictions vary for content on a particular site or for the sites
   hosted on a particular server.  In some cases, the enterprise may use
   a proxy to access this additional information based on their policy.
   This type of restriction is intended to be transparent to users in a
   corporate setting as the typical corporate user does not access sites
   that are not well known to these tools.  However, the mechanisms that
   these web filters use to do monitoring and enforcement have the
   potential to cause access issues or other user-visible failures.
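
A minimal sketch of name-based web filtering of the sort described above, assuming the filter sees only a destination name (from DNS, a cleartext Host header, or the TLS SNI) and a locally maintained category list; the domains and categories are placeholders.

   BLOCKED_CATEGORIES = {"malware", "uncategorized"}

   # name -> category, as supplied by a web-filtering service
   REPUTATION = {
       "intranet.example.com": "business",
       "newsite.example.net": "uncategorized",
       "bad.example.org": "malware",
   }

   def filter_decision(server_name: str) -> str:
       """Decide on a destination using only its name.

       Checks the exact name and then parent domains, so a policy for
       example.org also covers www.example.org.  Works whether the name
       came from DNS, an HTTP Host header, or the TLS SNI; it does not
       require decrypting the session.
       """
       labels = server_name.lower().split(".")
       for i in range(len(labels) - 1):
           category = REPUTATION.get(".".join(labels[i:]))
           if category is not None:
               return ("block" if category in BLOCKED_CATEGORIES
                       else "allow")
       # Not known to the filtering service: commonly blocked by policy
       return "block"

   for name in ("intranet.example.com", "www.bad.example.org",
                "research-colleague.example"):
       print(name, "->", filter_decision(name))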

   Desktop DLP tools are used in some corporate environments as well.
   Since these tools reside on the desktop, they can intercept traffic
   before it is encrypted and may provide a continued method for
   monitoring leakage of intellectual property from the desktop to the
   Internet or attached devices.

   DLP tools can also be deployed by network service providers, as they
   have the vantage point of monitoring all traffic paired with
   destinations off the enterprise network.  This makes an effective
   solution for enterprises that allow "bring-your-own" devices when the
   traffic is not encrypted and for devices outside the desktop category
   (such as mobile phones) that are used on corporate networks
   nonetheless.

   Enterprises may wish to reduce the traffic on their Internet access
   facilities by monitoring requests for within-policy content and
   caching it.  In this case, repeated requests for Internet content
   spawned by URLs in email trade newsletters or other sources can be
   served within the enterprise network.  Gradual deployment of end-to-
   end encryption would tend to reduce the cacheable content over time,
   owing to concealment of critical headers and payloads.  Many forms of
   enterprise-performance management may be similarly affected.  It
   should be noted that transparent caching is considered an anti-
   pattern.

5. Security Monitoring for Specific Attack Types

Effective incident response today requires collaboration at Internet scale. This section will only focus on efforts of collaboration at Internet scale that are dedicated to specific attack types. They may require new monitoring and detection techniques in an increasingly encrypted Internet. As mentioned previously, some service providers have been interfering with STARTTLS to prevent session encryption so that they can continue to perform the functions they are used to (injecting ads, monitoring, etc.). By detailing the current monitoring methods used for attack detection and response, this information can be used to devise new monitoring methods that will be effective in the changed Internet via collaboration and innovation.

Changes to improve encryption or to deploy opportunistic security (OS) methods have little impact on the detection of malicious actors. Malicious actors have had access to strong encryption for quite some time. Incident responders, in many cases, have developed techniques to locate
   malicious traffic within encrypted sessions.  The following section
   will note some examples where detection and mitigation of such
   traffic has been successful.

5.1. Mail Abuse and Spam

The largest operational effort to prevent mail abuse is through the Messaging, Malware, Mobile Anti-Abuse Working Group (M3AAWG) [M3AAWG]. Mail abuse is combatted directly with mail administrators who can shut down or stop continued mail abuse originating from large-scale providers that participate in using the Abuse Reporting Format (ARF) agents standardized in the IETF [RFC5965] [RFC6430] [RFC6590] [RFC6591] [RFC6650] [RFC6651] [RFC6652]. The ARF agent directly reports abuse messages to the appropriate service provider, who can take action to stop or mitigate the abuse. Since this technique uses the actual message, the use of SMTP over TLS between mail gateways will not affect its usefulness.

As mentioned previously, SMTP over TLS only protects data while in transit, and the messages may be exposed on mail servers or mail gateways if a user-to-user encryption method is not used. Current user-to-user message encryption methods on email (S/MIME and PGP) do not encrypt the email header information used by ARF and the service provider operators in their efforts to mitigate abuse.

Another effort, "Domain-based Message Authentication, Reporting, and Conformance (DMARC)" [RFC7489], is a mechanism for policy distribution that enables increasingly strict handling of messages that fail authentication checks, ranging from no action, through altered delivery, up to message rejection. DMARC is also not affected by the use of STARTTLS.
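
A hedged sketch of how a receiver discovers a sender domain's DMARC policy: the policy is published in DNS, so it remains available regardless of whether the SMTP sessions themselves use STARTTLS. The example uses the third-party "dnspython" resolver and a placeholder domain.

   # pip install dnspython  (third-party resolver, illustrative only)
   import dns.resolver

   def dmarc_policy(domain: str):
       """Fetch and parse the _dmarc TXT record for a domain."""
       try:
           answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
       except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
           return None
       for rdata in answers:
           record = b"".join(rdata.strings).decode()
           if record.startswith("v=DMARC1"):
               # e.g. "v=DMARC1; p=reject; rua=mailto:reports@example.com"
               return dict(
                   item.strip().split("=", 1)
                   for item in record.split(";") if "=" in item
               )
       return None

   policy = dmarc_policy("example.com")
   if policy:
       print("requested handling of failing mail:", policy.get("p"))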

5.2. Denial of Service

Responses to Denial-of-Service (DoS) attacks are typically coordinated by the service provider community with a few key vendors who have tools to assist in the mitigation efforts. Traffic patterns are determined from each DoS attack to stop or rate limit the traffic flows with patterns unique to that DoS attack.

Data types used in monitoring traffic for DDoS are described in the documents in development by the DDoS Open Threat Signaling (DOTS) [DOTS] Working Group. The impact of encryption can be understood from their documented use cases [DDOS-USECASE].

Data types used in DDoS attacks have been detailed in the Incident Object Description Exchange Format (IODEF) Guidance document (see [RFC8274], Appendix B.2) with the help of several members of the service provider community. The examples provided are intended to
   help identify the useful data in detecting and mitigating these
   attacks independent of the transport and protocol descriptions in the
   documents.

5.3. Phishing

Investigations of and responses to phishing attacks follow well-known patterns, requiring access to specific fields in email headers as well as to content from the body of the message. When reporting phishing attacks, the recipient has access to each field as well as to the body to make content reporting possible, even when end-to-end encryption is used.

The email header information is useful to identify the mail servers and accounts used to generate or relay the attack messages in order to take the appropriate actions. The content of the message often includes an embedded attack that may be in an infected file or may be a link that results in the download of malware to the user's system. Administrators often find it helpful to use header information to track down similar messages in their mail queue or in users' inboxes to prevent further infection. Combinations of To:, From:, Subject:, and Received: header fields might be used for this purpose. Administrators may also search for document attachments of the same name or size or that contain a file with a hash matching a known phishing attack. Administrators might also add URLs contained in messages to block lists locally, or this may also be done by browser vendors through larger-scale efforts like that of the Anti-Phishing Working Group (APWG). See "Coordinating Attack Response at Internet Scale (CARIS) Workshop Report" [RFC8073] for additional information and pointers to the APWG's efforts on anti-phishing. A full list of the fields used in phishing attack incident responses can be found in [RFC5901].

Future plans to increase privacy protections may limit some of these capabilities if some email header fields are encrypted, such as the To:, From:, and Subject: header fields. This does not mean that those fields should not be encrypted, only that we should be aware of how they are currently used.

Some products protect users from phishing by maintaining lists of known phishing domains (such as misspelled bank names) and blocking access. This can be done by observing DNS, cleartext HTTP, or the Server Name Indication (SNI) in TLS, in addition to analyzing email. Alternate options to detect and prevent phishing attacks may be needed. More recent examples of data exchanged in spear phishing attacks have been detailed in the IODEF Guidance document (see [RFC8274], Appendix B.3).
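
A sketch of the kind of header- and attachment-based searching described above, using Python's standard email and hashlib modules over a hypothetical directory of raw messages; the subject string, hash value, and quarantine path are placeholders standing in for indicators taken from a reported phishing message.

   import email
   import hashlib
   from email import policy
   from pathlib import Path

   KNOWN_BAD_SUBJECT = "Urgent: verify your account"
   KNOWN_BAD_SHA256 = {"0" * 64}   # placeholder: hash of the reported
                                   # malicious attachment

   def matches_phishing(raw_bytes: bytes) -> bool:
       msg = email.message_from_bytes(raw_bytes, policy=policy.default)
       if (msg["Subject"] or "").strip() == KNOWN_BAD_SUBJECT:
           return True
       for part in msg.walk():
           if part.get_filename():                    # an attachment
               payload = part.get_payload(decode=True) or b""
               if hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256:
                   return True
       return False

   for path in Path("/var/spool/quarantine").glob("*.eml"):
       if matches_phishing(path.read_bytes()):
           print("possible phishing copy:", path.name)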

5.4. Botnets

Botnet detection and mitigation is complex as botnets may involve hundreds or thousands of hosts with numerous C&C servers. The techniques and data used to monitor and detect each may vary. Connections to C&C servers are typically encrypted; therefore, a move to an increasingly encrypted Internet may not affect the detection and sharing methods used.

5.5. Malware

Techniques for the detection and monitoring of malware vary. As mentioned in Section 4, malware monitoring may occur at gateways to the organization analyzing email and web traffic. These services can also be provided by service providers, changing the scale and location of this type of monitoring. Additionally, incident responders may identify attributes unique to types of malware to help track down instances by their communication patterns on the Internet or by alterations to hosts and servers. Data types used in malware investigations have been summarized in an example of the IODEF Guidance document (see [RFC8274], Appendix B.3).

5.6. Spoofed-Source IP Address Protection

The IETF has reacted to spoofed-source IP address-based attacks, recommending the use of network ingress filtering in BCP 38 [RFC2827] and of the unicast Reverse Path Forwarding (uRPF) mechanism [RFC3704]. But uRPF suffers from limitations regarding its granularity: a malicious node can still use a spoofed IP address included inside the prefix assigned to its link. Source Address Validation Improvement (SAVI) mechanisms try to solve this issue. Basically, a SAVI mechanism is based on the monitoring of a specific address assignment/management protocol (e.g., Stateless Address Autoconfiguration (SLAAC) [RFC4862], Secure Neighbor Discovery (SEND) [RFC3971], and DHCPv4/v6 [RFC2131][RFC3315]) and, according to this monitoring, sets up a filtering policy allowing only the IP flows with a correct source IP address (i.e., any packet with a source IP address from a node not owning it is dropped). The encryption of parts of the address assignment/management protocols, critical for SAVI mechanisms, can result in a dysfunction of the SAVI mechanisms.
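
A simplified sketch of the SAVI idea: bindings between a switch port and the addresses legitimately assigned on it are learned by watching the (cleartext) assignment protocol, and packets whose source address has no binding are dropped. If the assignment exchanges (e.g., DHCP) were encrypted from the switch's point of view, the binding table below could no longer be populated; the class and port names are illustrative only.

   class SaviFilter:
       """Port/IP binding table learned from address-assignment traffic."""

       def __init__(self):
           self.bindings = {}        # switch port -> set of source IPs

       def learn_assignment(self, port: str, assigned_ip: str):
           # Called when the switch observes, e.g., a DHCP ACK or a
           # SEND/SLAAC assignment on this port.
           self.bindings.setdefault(port, set()).add(assigned_ip)

       def permit(self, port: str, src_ip: str) -> bool:
           # Drop any packet whose source IP was never assigned on the
           # port it arrived on (i.e., a spoofed source address).
           return src_ip in self.bindings.get(port, set())

   savi = SaviFilter()
   savi.learn_assignment("ge-0/0/1", "192.0.2.25")
   print(savi.permit("ge-0/0/1", "192.0.2.25"))   # True
   print(savi.permit("ge-0/0/1", "198.51.100.9")) # False: spoofed source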

5.7. Further Work

Although incident response work will continue, new methods to prevent system compromise through security automation and continuous monitoring [SACM] may provide alternate approaches where system security is maintained as a preventative measure.

