8. Forwarding Adjacencies (FA)
To improve the scalability of MPLS TE (and thus GMPLS), it may be useful to aggregate multiple TE LSPs inside a bigger TE LSP. Intermediate nodes then see only the external LSP: they do not have to maintain forwarding state for each internal LSP, fewer signaling messages need to be exchanged, and the external LSP can be protected instead of (or in addition to) the internal LSPs. This can considerably increase the scalability of the signaling. The aggregation is accomplished by (a) an LSR creating a TE LSP, (b) the LSR forming a forwarding adjacency out of that LSP (advertising this LSP as a Traffic Engineering (TE) link into IS-IS/OSPF), (c) allowing other LSRs to use forwarding adjacencies for their path computation, and (d) nesting of LSPs originated by other LSRs into that LSP (e.g., by using the label stack construct in the case of IP). IS-IS/OSPF floods the information about Forwarding Adjacencies (FAs) just as it floods the information about any other links. As a consequence of this flooding, an LSR has in its TE link state database information about not just conventional links, but FAs as well. An LSR, when performing path computation, uses not just conventional links, but FAs as well. Once a path is computed, the LSR uses RSVP-TE/CR-LDP to establish label bindings along the path. FAs require only simple extensions to the signaling and routing protocols.
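The following Python sketch illustrates steps (a) through (d) above. It is illustrative only: the class and function names (TeLink, FaLsp, advertise_fa, and so on) are hypothetical and do not correspond to any protocol encoding; a real implementation would use the IS-IS/OSPF and RSVP-TE/CR-LDP extensions referenced in this section.

   # Illustrative sketch of FA aggregation; hypothetical names, not a
   # protocol encoding.

   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class TeLink:
       """A TE link as seen in the TE link state database."""
       head: str
       tail: str
       bandwidth: float          # unreserved bandwidth, arbitrary units
       is_fa: bool = False       # True if this link is a forwarding adjacency

   @dataclass
   class FaLsp:
       """A TE LSP that its head-end will advertise as a forwarding adjacency."""
       head: str
       tail: str
       bandwidth: float
       nested_lsps: List[str] = field(default_factory=list)

   def create_fa_lsp(head, tail, bandwidth):
       # (a) The head-end LSR signals a TE LSP (RSVP-TE/CR-LDP) toward the tail.
       return FaLsp(head, tail, bandwidth)

   def advertise_fa(te_db, fa_lsp):
       # (b) The head-end advertises the LSP as a TE link (an FA) into IS-IS/OSPF.
       te_db.append(TeLink(fa_lsp.head, fa_lsp.tail, fa_lsp.bandwidth, is_fa=True))

   def compute_path(te_db, src, dst, bandwidth):
       # (c) Other LSRs use FAs exactly like conventional links in path
       #     computation (trivial one-hop search, for illustration only).
       return [l for l in te_db
               if l.head == src and l.tail == dst and l.bandwidth >= bandwidth]

   def nest_lsp(fa_lsp, lsp_name, bandwidth):
       # (d) An LSP originated by another LSR is nested into the FA-LSP
       #     (label stacking in the case of IP); bookkeeping at the head-end.
       fa_lsp.nested_lsps.append(lsp_name)
       fa_lsp.bandwidth -= bandwidth

   te_db: List[TeLink] = []
   fa = create_fa_lsp("LSR-A", "LSR-D", bandwidth=10.0)
   advertise_fa(te_db, fa)
   print(compute_path(te_db, "LSR-A", "LSR-D", bandwidth=2.0))
   nest_lsp(fa, "LSP-x", bandwidth=2.0)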
8.1. Routing and Forwarding Adjacencies

Forwarding adjacencies may be represented as either unnumbered or numbered links. A FA can also be a bundle of LSPs between two nodes. FAs are advertised as GMPLS TE links as defined in [HIERARCHY]. GMPLS TE links are advertised in OSPF and IS-IS as defined in [OSPF-TE-GMPLS] and [ISIS-TE-GMPLS]. These last two specifications enhance [OSPF-TE] and [ISIS-TE], which define the base TE link. When a FA is created dynamically, its TE attributes are inherited from the FA-LSP that induced its creation. [HIERARCHY] specifies how each TE parameter of the FA is inherited from the FA-LSP. Note that the bandwidth of the FA must be at least as large as that of the FA-LSP that induced it, but may be larger if only discrete bandwidths are available for the FA-LSP. In general, for dynamically provisioned forwarding adjacencies, a policy-based mechanism may be needed to associate attributes with forwarding adjacencies.
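The discrete-bandwidth rule above can be illustrated with a minimal Python sketch: the FA is advertised with the smallest discrete bandwidth that is at least as large as the FA-LSP bandwidth. The rates listed are only examples; [HIERARCHY] defines the actual inheritance rules.

   # Sketch of FA bandwidth inheritance with discrete bandwidths (illustrative
   # rates in Mbit/s; not taken from any specification).

   import bisect

   DISCRETE_BANDWIDTHS = [51.84, 155.52, 622.08, 2488.32, 9953.28]

   def inherited_fa_bandwidth(fa_lsp_bandwidth):
       """Return the smallest discrete bandwidth that is at least as large as
       the bandwidth of the FA-LSP that induced the FA."""
       i = bisect.bisect_left(DISCRETE_BANDWIDTHS, fa_lsp_bandwidth)
       if i == len(DISCRETE_BANDWIDTHS):
           raise ValueError("FA-LSP bandwidth exceeds the largest discrete rate")
       return DISCRETE_BANDWIDTHS[i]

   # A 200 Mbit/s FA-LSP would be advertised here as a 622.08 Mbit/s FA.
   print(inherited_fa_bandwidth(200.0))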
A FA advertisement could contain information about the path taken by the FA-LSP associated with that FA. Other LSRs may use this information for path computation. This information is carried in a new OSPF and IS-IS TLV called the Path TLV. It is possible that the underlying path information might change over time, via configuration updates or dynamic route modifications, resulting in a change of that TLV. If forwarding adjacencies are bundled (via link bundling), and if the resulting bundled link carries a Path TLV, the underlying path followed by each of the FA-LSPs that form the component links must be the same. It is expected that forwarding adjacencies will not be used for establishing IS-IS/OSPF peering relations between the routers at the ends of the adjacency. LSP hierarchy can exist with both the peer and the overlay models. With the peer model, the LSP hierarchy is realized via FAs, and an LSP is both created and used as a TE link by exactly the same instance of the control plane. Creating LSP hierarchies with overlays does not involve the concept of FA. With the overlay model, an LSP created (and maintained) by one instance of the GMPLS control plane is used as a TE link by another instance of the GMPLS control plane. Moreover, the nodes using a TE link are expected to have a routing and signaling adjacency.
8.2. Signaling Aspects

For the purpose of processing the explicit route in a Path/Request message of an LSP that is to be tunneled over a forwarding adjacency, an LSR at the head-end of the FA-LSP views the LSR at the tail of that FA-LSP as adjacent (one IP hop away).
8.3. Cascading of Forwarding Adjacencies

With an integrated model, several layers are controlled using the same routing and signaling protocols. A network may then have links with different multiplexing/demultiplexing capabilities. For example, a node may be able to multiplex/demultiplex individual packets on a given link, and may be able to multiplex/demultiplex channels within a SONET payload on other links. A new OSPF and IS-IS sub-TLV has been defined to advertise the multiplexing capability of each interface: PSC, L2SC, TDM, LSC or FSC. This sub-TLV is called the Interface Switching Capability Descriptor sub-TLV, which complements the sub-TLVs defined in
[OSPF-TE-GMPLS] and [ISIS-TE-GMPLS]. The information carried in this sub-TLV is used to construct LSP regions and to determine region boundaries. Path computation may take region boundaries into account when computing a path for an LSP. For example, path computation may restrict the path taken by an LSP to only the links whose multiplexing/demultiplexing capability is PSC. When an LSP needs to cross a region boundary, it can trigger the establishment of an FA at the underlying layer (i.e., the L2SC layer in this example). This can trigger a cascading of FAs between layers, with the following obvious order: L2SC, then TDM, then LSC, and finally FSC.
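As an illustration, the Python sketch below detects region boundaries from the per-link Interface Switching Capability along a computed path and triggers FA establishment at the underlying layer, which could in turn cascade through the PSC, L2SC, TDM, LSC, FSC ordering. The names and the path data model are hypothetical.

   # Sketch: detect region boundaries on a computed path and trigger FA
   # creation at the underlying layer (hypothetical data model).

   LAYERS = ["PSC", "L2SC", "TDM", "LSC", "FSC"]   # cascading order

   def region_boundaries(path_capabilities):
       """Given the switching capability of each link along a path, e.g.
       ["PSC", "L2SC", "L2SC", "PSC"], return the indices where the path
       enters or leaves a region."""
       return [i for i in range(1, len(path_capabilities))
               if path_capabilities[i] != path_capabilities[i - 1]]

   def trigger_fa(lower_layer, entry, exit_):
       # Placeholder for signaling an FA-LSP at the underlying layer between
       # the region entry and exit points; this may itself cross a boundary
       # at an even lower layer and cascade further.
       print("establish FA-LSP at", lower_layer, "layer, links", entry, "to", exit_)

   caps = ["PSC", "L2SC", "L2SC", "PSC"]     # example path, link by link
   boundaries = region_boundaries(caps)       # -> [1, 3]
   if boundaries:
       trigger_fa("L2SC", entry=boundaries[0], exit_=boundaries[-1])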
9. Routing and Signaling Adjacencies

By definition, two nodes have a routing (IS-IS/OSPF) adjacency if they are neighbors in the IS-IS/OSPF sense. By definition, two nodes have a signaling (RSVP-TE/CR-LDP) adjacency if they are neighbors in the RSVP-TE/CR-LDP sense. Nodes A and B are RSVP-TE neighbors if they directly exchange RSVP-TE messages (Path/Resv) (e.g., as described in sections 7.1.1 and 7.1.2 of [HIERARCHY]). The neighbor relationship includes exchanging RSVP-TE Hellos. By definition, a Forwarding Adjacency (FA) is a TE Link between two GMPLS nodes whose path transits one or more other (G)MPLS nodes in the same instance of the (G)MPLS control plane. If two nodes have one or more non-FA TE Links between them, these two nodes are expected (although not required) to have a routing adjacency. If two nodes do not have any non-FA TE Links between them, it is expected (although not required) that these two nodes would not have a routing adjacency. To state the obvious, if the TE links between two nodes are to be used for establishing LSPs, the two nodes must have a signaling adjacency. If one wants to establish a routing and/or signaling adjacency between two nodes, there must be an IP path between them. This IP path can be, for example, a TE Link with an interface switching capability of PSC, or anything that looks like an IP link (e.g., a GRE tunnel, or a (bi-directional) LSP with an interface switching capability of PSC). A TE link may not be capable of being used directly for maintaining routing and/or signaling adjacencies. This is because GMPLS routing and signaling adjacencies require exchanging data on a per frame/packet basis, and a TE link (e.g., a link between OXCs) may not be capable of exchanging data on a per packet basis. In this case, the
routing and signaling adjacencies are maintained via a set of one or more control channels (see [LMP]). Two nodes may have a TE link between them even if they do not have a routing adjacency. Naturally, each node must run OSPF/IS-IS with GMPLS extensions in order for that TE link to be advertised. More precisely, the node needs to run GMPLS extensions for TE Links with an interface switching capability (see [GMPLS-ROUTING]) other than PSC. Moreover, this node needs to run either GMPLS or MPLS extensions for TE links with an interface switching capability of PSC. The mechanisms for Control Channel Separation [RFC3471] should be used (even if the IP path between two nodes is a TE link). That is, RSVP-TE/CR-LDP signaling should use the Interface_ID (IF_ID) object to specify a particular TE link when establishing an LSP. The IP path could consist of multiple IP hops. In this case, the mechanisms of sections 7.1.1 and 7.1.2 of [HIERARCHY] should be used (in addition to Control Channel Separation).
10. Control Plane Fault Handling

Two major types of faults can impact a control plane. The first, referred to as a control channel fault, relates to the case where control communication is lost between two neighboring nodes. If the control channel is embedded with the data channel, the data channel recovery procedure should solve the problem. If the control channel is independent of the data channel, additional procedures are required to recover from that problem. The second, referred to as a nodal fault, relates to the case where a node loses its control state (e.g., after a restart) but does not lose its data forwarding state. In transport networks, such types of control plane faults should not have a service impact on the existing connections. Under such circumstances, a mechanism must exist to detect a control communication failure, and a recovery procedure must guarantee connection integrity at both ends of the control channel. For a control channel fault, once communication is restored, the routing protocols are naturally able to recover, but the underlying signaling protocols must indicate that the nodes have maintained their state through the failure. The signaling protocol must also ensure that any state changes that were instantiated during the failure are synchronized between the nodes.
For a nodal fault, a node's control plane restarts and loses most of its state information. In this case, both upstream and downstream nodes must synchronize their state information with the restarted node. In order for any resynchronization to occur, the node undergoing the restart will need to preserve some information, such as its mappings of incoming to outgoing labels. These issues are addressed in protocol-specific fashions; see [RFC3473], [RFC3472], [OSPF-TE-GMPLS] and [ISIS-TE-GMPLS]. Note that these cases only apply when there are mechanisms to detect data channel failures independently of control channel failures. LDP fault tolerance (see [RFC3479]) specifies the procedures to recover from a control channel failure. [RFC3473] specifies how to recover from both a control channel failure and a node failure.
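The Python sketch below gives a rough picture of nodal-fault handling: the forwarding state (incoming-to-outgoing label mappings) is preserved across the control plane restart and then resynchronized with the neighbors. The storage mechanism and function names are hypothetical; the real procedures are defined in [RFC3473], [RFC3472] and [RFC3479].

   # Sketch of nodal-fault recovery: label mappings survive a control plane
   # restart and are resynchronized afterwards (hypothetical names).

   import json, os, tempfile

   STATE_FILE = os.path.join(tempfile.gettempdir(), "label_mappings.json")

   def persist_mappings(mappings):
       # Keep the incoming->outgoing label mappings in non-volatile storage so
       # that a control plane restart does not disturb data forwarding.
       with open(STATE_FILE, "w") as f:
           json.dump(mappings, f)

   def resynchronize_with(neighbor, mappings):
       # Placeholder for the protocol-specific recovery exchange with one
       # neighbor (e.g., the restart/recovery procedures of [RFC3473]).
       print("resync", len(mappings), "LSP states with", neighbor, "neighbor")

   def restart_control_plane():
       # After the restart, reload the preserved mappings and reconcile them
       # with the state re-advertised by upstream and downstream neighbors.
       with open(STATE_FILE) as f:
           mappings = json.load(f)
       for neighbor in ("upstream", "downstream"):
           resynchronize_with(neighbor, mappings)
       return mappings

   persist_mappings({"in:100": "out:200", "in:101": "out:201"})
   restart_control_plane()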
11. LSP Protection and Restoration

This section discusses Protection and Restoration (P&R) issues for GMPLS LSPs. It is driven by the requirements outlined in [RFC3386] and some of the principles outlined in [RFC3469]. It will be enhanced as more GMPLS P&R mechanisms are defined. The scope of this section is clarified hereafter:

- This section is only applicable when a fault impacting LSP(s) happens in the data/transport plane. Section 10 deals with control plane fault handling for nodal and control channel faults.

- This section focuses on P&R at the TDM, LSC and FSC layers. There are specific P&R requirements at these layers not present at the PSC layer.

- This section focuses on intra-area P&R as opposed to inter-area P&R and even inter-domain P&R. Note that P&R can be even more restricted, e.g., to a collection of like customer equipment, or a collection of equipment of like capabilities, in one single routing area.

- This section focuses on intra-layer P&R (horizontal hierarchy as defined in [RFC3386]) as opposed to inter-layer P&R (vertical hierarchy).

- P&R mechanisms are in general designed to handle single failures, which makes SRLG diversity a necessity. Recovery from multiple failures requires further study.

- Both mesh and ring-like topologies are supported.
In the following, we assume that:

- TDM, LSC and FSC devices generally commit recovery resources in a non-best-effort way. Recovery resources are either allocated (and thus used) or at least logically reserved (they may or may not carry preemptable extra traffic, but remain unavailable for regular working traffic).

- Shared P&R mechanisms are valuable to operators in order to maximize their network utilization.

- Sending preemptable excess traffic on recovery resources is a valuable feature for operators.
11.1. Protection Escalation across Domains and Layers

To describe the P&R architecture, one must consider two dimensions of hierarchy [RFC3386]:

- A horizontal hierarchy consisting of multiple P&R domains, which is important in an LSP-based protection scheme. The scope of P&R may extend over a link (or span), an administrative domain or sub-network, or an entire LSP. An administrative domain may consist of a single P&R domain or of a concatenation of several smaller P&R domains. The operator can configure P&R domains based on customers' requirements, and on network topology and traffic engineering constraints.

- A vertical hierarchy consisting of multiple layers of P&R with varying granularities (packet flows, STS trails, lightpaths, fibers, etc.).

In the absence of adequate P&R coordination, a fault may propagate from one level to the next within a P&R hierarchy. This can lead to "collisions", and simultaneous recovery actions may lead to race conditions, reduced resource utilization, or instabilities [MANCHESTER]. Thus, a consistent escalation strategy is needed to coordinate recovery across domains and layers. The fact that GMPLS can be used at different layers could simplify this coordination. There are two types of escalation strategies: bottom-up and top-down. The bottom-up approach assumes that "lower-level" recovery schemes are more expedient. Therefore one can inhibit or hold off
higher-level P&R. The top-down approach attempts service P&R at the higher levels before invoking "lower-level" P&R. Higher-layer P&R is service selective, and permits "per-CoS" or "per-LSP" re-routing. Service Level Agreements (SLAs) between network operators and their clients are needed to determine the necessary time scales for P&R at each layer and in each domain.
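As an illustration of the bottom-up escalation strategy, the Python sketch below shows a hold-off timer that gives a lower-layer recovery scheme time to act before the higher layer invokes its own P&R. The timer value and names are hypothetical; in practice the hold-off periods would be derived from the SLAs mentioned above.

   # Sketch of bottom-up escalation with a hold-off timer (hypothetical
   # names and values).

   import threading

   HOLD_OFF_SECONDS = 0.05      # e.g., allow ~50 ms for lower-layer protection

   class HigherLayerRecovery:
       def __init__(self):
           self._timer = None

       def on_fault_indication(self):
           # Start the hold-off timer instead of recovering immediately.
           self._timer = threading.Timer(HOLD_OFF_SECONDS, self._escalate)
           self._timer.start()

       def on_lower_layer_recovered(self):
           # The lower layer repaired the fault in time: inhibit escalation.
           if self._timer is not None:
               self._timer.cancel()
               print("lower layer recovered; higher-layer P&R inhibited")

       def _escalate(self):
           # Hold-off expired: invoke higher-layer, service-selective P&R
           # (e.g., per-CoS or per-LSP re-routing).
           print("escalating: triggering higher-layer P&R")

   recovery = HigherLayerRecovery()
   recovery.on_fault_indication()
   recovery.on_lower_layer_recovered()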
11.2. Mapping of Services to P&R Resources

The choice of a P&R scheme is a tradeoff between network utilization (cost) and service interruption time. In light of this tradeoff, network service providers are expected to support a range of different service offerings or service levels. One can classify LSPs into one of a small set of service levels. Among other things, these service levels define the reliability characteristics of the LSP. The service level associated with a given LSP is mapped to one or more P&R schemes during LSP establishment. An advantage of that mapping is that an LSP may use different P&R schemes in different segments of a network (e.g., some links may be span protected, whilst other segments of the LSP may utilize ring protection). These details are likely to be service provider specific. An alternative to using service levels is for an application to specify the set of specific P&R mechanisms to be used when establishing the LSP. This allows greater flexibility in using different mechanisms to meet the application requirements. A differentiator between these service levels is the service interruption time in case of network failures, which is defined as the length of time between when a failure occurs and when connectivity is re-established. The choice of service level (or P&R scheme) should be dictated by the service requirements of different applications.
11.3. Classification of P&R Mechanism Characteristics

The following figure provides a classification of the possible provisioning types of recovery LSPs, and of the levels of overbooking that are possible for them.
                 +-Computed on   +-Established     +-Resources pre-
                 | demand        | on demand       | allocated
                 |               |                 |
   Recovery LSP  |               |                 |
   Provisioning -+-Pre computed  +-Pre established +-Resources allocated
                                                      on demand

                 +--- Dedicated (1:1, 1+1)
                 |
                 +--- Shared (1:N, Ring, Shared mesh)
   Level of      |
   Overbooking --+--- Best effort
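The classification above can be captured in a small data model; the following Python sketch is illustrative only, and the enumeration names are ours rather than terms taken from any GMPLS specification.

   # Illustrative data model for the classification above (names are ours).

   from dataclasses import dataclass
   from enum import Enum

   class PathComputation(Enum):
       COMPUTED_ON_DEMAND = 1
       PRE_COMPUTED = 2

   class Establishment(Enum):
       ESTABLISHED_ON_DEMAND = 1
       PRE_ESTABLISHED = 2

   class ResourceAllocation(Enum):
       ALLOCATED_ON_DEMAND = 1
       PRE_ALLOCATED = 2

   class Overbooking(Enum):
       DEDICATED = 1      # 1:1, 1+1
       SHARED = 2         # 1:N, ring, shared mesh
       BEST_EFFORT = 3

   @dataclass
   class RecoveryLspProvisioning:
       computation: PathComputation
       establishment: Establishment
       allocation: ResourceAllocation
       overbooking: Overbooking

   # Example: 1+1 protection is pre-computed, pre-established, with resources
   # pre-allocated, and dedicated.
   one_plus_one = RecoveryLspProvisioning(
       PathComputation.PRE_COMPUTED,
       Establishment.PRE_ESTABLISHED,
       ResourceAllocation.PRE_ALLOCATED,
       Overbooking.DEDICATED,
   )
   print(one_plus_one)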
11.4. Different Stages in P&R

Recovery from a network fault or impairment takes place in several stages, as discussed in [RFC3469], including fault detection, fault localization, notification, recovery (i.e., the P&R itself) and reversion of traffic (i.e., returning the traffic to the original working LSP or to a new one).

- Fault detection is technology and implementation dependent. In general, failures are detected by lower layer mechanisms (e.g., SONET/SDH, Loss-of-Light (LOL)). When a node detects a failure, an alarm may be passed up to a GMPLS entity, which will take appropriate actions, or the alarm may be propagated at the lower layer (e.g., SONET/SDH AIS).

- Fault localization can be done with the help of GMPLS, e.g., using LMP for fault localization (see section 6.4).

- Fault notification can also be achieved through GMPLS, e.g., using GMPLS RSVP-TE/CR-LDP notification (see section 7.12).

- This section focuses on the different mechanisms available for recovery and reversion of traffic once fault detection, localization and notification have taken place.
11.5. Recovery Strategies

Network P&R techniques can be divided into Protection and Restoration. In protection, resources between the protection end-points are established before failure, and connectivity after failure is achieved simply by switching performed at the protection end-points. In contrast, restoration uses signaling after failure to allocate resources along the recovery path.
- Protection aims at extremely fast reaction times and may rely on the use of overhead control fields for achieving end-point coordination. Protection for SONET/SDH networks is described in [ITUT-G.841] and [ANSI-T1.105]. Protection mechanisms can be further classified by the level of redundancy and sharing.

- Restoration mechanisms rely on signaling protocols to coordinate switching actions during recovery, and may involve simple re-provisioning, i.e., signaling only at the time of recovery, or pre-signaling, i.e., signaling prior to recovery.

In addition, P&R can be applied on a local or end-to-end basis. In the local approach, P&R is focused on the local proximity of the fault in order to reduce the delay in restoring service. In the end-to-end approach, the LSP originating and terminating nodes control recovery. Using these strategies, the following recovery mechanisms can be defined.
11.6. Recovery mechanisms: Protection schemes

Note that protection schemes are usually defined in technology-specific ways, but this does not preclude other solutions.

- 1+1 Link Protection: Two pre-provisioned resources are used in parallel. For example, data is transmitted simultaneously on two parallel links and a selector is used at the receiving node to choose the best source (see also [GMPLS-FUNCT]).

- 1:N Link Protection: Working and protecting resources (N working, 1 backup) are pre-provisioned. If a working resource fails, the data is switched to the protecting resource, using a coordination mechanism (e.g., in overhead bytes); a sketch of this behavior is given after this list. More generally, N working and M protecting resources can be assigned for M:N link protection (see also [GMPLS-FUNCT]).

- Enhanced Protection: Various mechanisms such as protection rings can be used to enhance the level of protection beyond single link failures, to include the ability to switch around a node failure or multiple link failures within a span, based on a pre-established topology of protection resources (note: no reference available at publication time).

- 1+1 LSP Protection: Simultaneous data transmission on working and protecting LSPs and tail-end selection can be applied (see also [GMPLS-FUNCT]).
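The following Python sketch illustrates 1:N protection switching as described above: a single protecting resource backs up N working resources and may carry preemptable extra traffic until it is needed. It is illustrative only; real schemes use technology-specific coordination (e.g., SONET/SDH overhead bytes), and the names are hypothetical.

   # Sketch of 1:N protection switching (hypothetical names).

   class OneToNProtectionGroup:
       def __init__(self, working, protecting):
           self.working = set(working)     # N working resources
           self.protecting = protecting    # 1 shared protecting resource
           self.protected = None           # working resource currently switched over
           self.extra_traffic = None       # preemptable traffic on the protecting resource

       def add_extra_traffic(self, name):
           # The protecting resource may carry preemptable extra traffic while idle.
           if self.protected is None:
               self.extra_traffic = name

       def on_failure(self, resource):
           if resource not in self.working:
               return
           if self.protected is not None:
               print(resource, "failed but the protecting resource is already in use")
               return
           if self.extra_traffic is not None:
               print("preempting extra traffic", self.extra_traffic)
               self.extra_traffic = None
           self.protected = resource
           print("switched", resource, "onto protecting resource", self.protecting)

   group = OneToNProtectionGroup(["w1", "w2", "w3"], "p1")
   group.add_extra_traffic("bulk-transfer")
   group.on_failure("w2")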
11.7. Recovery mechanisms: Restoration schemes
Thanks to the use of a distributed control plane such as GMPLS, restoration is possible in multiples of tens of milliseconds. It is much harder to achieve when only an NMS is used, and can only be done in that case in a multiple of seconds.

- End-to-end LSP restoration with re-provisioning: an end-to-end restoration path is established after failure. The restoration path may be dynamically calculated after failure, or pre-calculated before failure (often during LSP establishment). Importantly, no signaling is used along the restoration path before failure, and no restoration bandwidth is reserved. Consequently, there is no guarantee that a given restoration path is available when a failure occurs. Thus, one may have to use crankback to search for an available path.

- End-to-end LSP restoration with pre-signaled recovery bandwidth reservation and no label pre-selection: an end-to-end restoration path is pre-calculated before failure and a signaling message is sent along this pre-selected path to reserve bandwidth, but labels are not selected (see also [GMPLS-FUNCT]). The resources reserved on each link of a restoration path may be shared across different working LSPs that are not expected to fail simultaneously (a sketch of this sharing is given after this list). Local node policies can be applied to define the degree to which capacity is shared across independent failures. Upon failure detection, LSP signaling is initiated along the restoration path to select labels, and to initiate the appropriate cross-connections.

- End-to-end LSP restoration with pre-signaled recovery bandwidth reservation and label pre-selection: an end-to-end restoration path is pre-calculated before failure and a signaling procedure is initiated along this pre-selected path, on which bandwidth is reserved and labels are selected (see also [GMPLS-FUNCT]). The resources reserved on each link may be shared across different working LSPs that are not expected to fail simultaneously. In networks based on TDM, LSC and FSC technology, LSP signaling is used after failure detection to establish cross-connections at the intermediate switches on the restoration path using the pre-selected labels.

- Local LSP restoration: the above approaches can be applied on a local basis rather than end-to-end, in order to reduce recovery time (note: no reference available at publication time).
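The Python sketch below illustrates the bandwidth sharing mentioned in the pre-signaled reservation schemes: under a single-failure assumption, the restoration capacity reserved on a link only needs to cover the worst single SRLG failure among the working LSPs whose restoration paths use that link. The names and the single-rule sharing policy are hypothetical simplifications of the local node policies mentioned above.

   # Sketch: shared restoration bandwidth on one link under a single-failure
   # assumption. Each LSP is described by the SRLGs its working path traverses
   # and the bandwidth it needs on this restoration link.

   from collections import defaultdict

   def shared_reservation(lsps):
       """lsps: list of (working_srlgs, bandwidth) tuples for the LSPs whose
       restoration paths use this link. Returns the capacity to reserve."""
       per_srlg = defaultdict(float)
       for srlgs, bandwidth in lsps:
           for srlg in srlgs:
               # If this SRLG fails, all these LSPs need the link at once.
               per_srlg[srlg] += bandwidth
       # Reserve enough for the worst single-SRLG failure only.
       return max(per_srlg.values(), default=0.0)

   # Two LSPs with disjoint SRLGs share the reservation; a third LSP sharing
   # SRLG 10 with the first adds to the worst case.
   lsps = [({10, 11}, 1.0), ({20, 21}, 1.0), ({10, 30}, 0.5)]
   print(shared_reservation(lsps))    # -> 1.5 (failure of SRLG 10)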
11.8. Schema Selection Criteria
This section discusses criteria that could be used by the operator in order to make a choice among the various P&R mechanisms.

- Robustness: In general, the less pre-planning of the restoration path, the more robust the restoration scheme is to a variety of failures, provided that adequate resources are available. Restoration schemes with pre-planned paths will not be able to recover from network failures that simultaneously affect both the working and restoration paths. Thus, these paths should ideally be chosen to be as disjoint as possible (i.e., SRLG and node disjoint), so that any single failure event will not affect both paths (a disjointness check is sketched at the end of this section). The risk of simultaneous failure of the two paths can be reduced by recalculating the restoration path whenever a failure occurs along it. The pre-selection of a label gives less flexibility for multiple failure scenarios than no label pre-selection. If failures occur that affect two LSPs that are sharing a label at a common node along their restoration routes, then only one of these LSPs can be recovered, unless the label assignment is changed. The robustness of a restoration scheme is also determined by the amount of reserved restoration bandwidth: as the amount of restoration bandwidth sharing increases (reserved bandwidth decreases), the restoration scheme becomes less robust to failures. Restoration schemes with pre-signaled bandwidth reservation (with or without label pre-selection) can reserve adequate bandwidth to ensure recovery from any specific set of failure events, such as any single SRLG failure, any two SRLG failures, etc. Clearly, more restoration capacity is allocated if a greater degree of failure recovery is required. Thus, the degree to which the network is protected is determined by the policy that defines the amount of reserved restoration bandwidth.

- Recovery time: In general, the more pre-planning of the restoration route, the more rapid the P&R scheme. Protection schemes generally recover faster than restoration schemes. Restoration with pre-signaled bandwidth reservation is likely to be (significantly) faster than path restoration with re-provisioning, especially because of the elimination of any crankback. Local restoration will generally be faster than end-to-end schemes.
Recovery time objectives for SONET/SDH protection switching (not including the time to detect failure) are specified in [ITUT-G.841] at 50 ms, taking into account constraints on distance, the number of connections involved, and in the case of ring enhanced protection, the number of nodes in the ring. Recovery time objectives for restoration mechanisms are being defined through a separate effort [RFC3386].

- Resource Sharing: 1+1 and 1:N link and LSP protection require dedicated recovery paths with limited ability to share resources: 1+1 allows no sharing, 1:N allows some sharing of protection resources and support of extra (pre-emptable) traffic. Flexibility is limited because of topology restrictions, e.g., the fixed ring topology for traditional enhanced protection schemes. The degree to which restoration schemes allow sharing amongst multiple independent failures is directly dictated by the size of the restoration pool. In restoration schemes with re-provisioning, a pool of restoration capacity can be defined from which all restoration routes are selected after failure. Thus, the degree of sharing is defined by the amount of available restoration capacity. In restoration with pre-signaled bandwidth reservation, the amount of reserved restoration capacity is determined by the local bandwidth reservation policies. In all restoration schemes, pre-emptable resources can use spare restoration capacity when that capacity is not being used for failure recovery.
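The SRLG and node disjointness mentioned under "Robustness" can be expressed as a simple check; the Python sketch below is illustrative, and the path representation (a list of hops, each with the SRLGs of its outgoing link) is an assumption rather than any standard encoding.

   # Sketch: verify that a working path and a restoration path are SRLG- and
   # node-disjoint (hypothetical path representation).

   def is_disjoint(working, restoration):
       """Each path is a list of (node, srlg_set) hops; the srlg_set describes
       the outgoing link at that node. Ingress and egress nodes may coincide."""
       w_nodes = {node for node, _ in working[1:-1]}       # intermediate nodes
       r_nodes = {node for node, _ in restoration[1:-1]}
       w_srlgs = set().union(*(srlgs for _, srlgs in working))
       r_srlgs = set().union(*(srlgs for _, srlgs in restoration))
       return not (w_nodes & r_nodes) and not (w_srlgs & r_srlgs)

   working = [("A", {1}), ("B", {2}), ("C", set())]
   restoration = [("A", {3}), ("D", {4}), ("C", set())]
   print(is_disjoint(working, restoration))    # -> True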
12. Network Management

Service Providers (SPs) use network management extensively to configure, monitor or provision various devices in their network. It is important to note that a SP's equipment may be distributed across geographically separate sites, thus making distributed management even more important. The service provider should utilize an NMS and standard management protocols such as SNMP (see [RFC3410], [RFC3411] and [RFC3416]) and the relevant MIB modules as standard interfaces to configure, monitor and provision devices at various locations. The service provider may also wish to use the command line interface (CLI) provided by vendors with their devices. However, this is not a standard or recommended solution because there is no standard CLI language or interface, which results in N different CLIs in a network with devices from N different vendors. In the context of GMPLS, it is extremely important for standard interfaces to the SP's devices (e.g., SNMP) to exist due to the nature of the technology itself. Since GMPLS comprises many different layers of control-plane
and data-plane technology, it is important for management interfaces in this area to be flexible enough to allow the manager to manage GMPLS easily and in a standard way.
12.1. Network Management Systems (NMS)

The NMS should maintain the collective information about each device within the system. Note that the NMS may actually be comprised of several distributed applications (i.e., alarm aggregators, configuration consoles, polling applications, etc.) that collectively comprise the SP's NMS. In this way, it can make provisioning and maintenance decisions with full knowledge of the entire SP's network. Configuration or provisioning information (i.e., requests for new services) could be entered into the NMS and subsequently distributed via SNMP to the remote devices. This makes the SP's task of managing the network much more compact and less burdensome than having to manage each device individually (i.e., via CLI). Security and access control can be achieved using the SNMPv3 User-based Security Model (USM) [RFC3414] and the View-based Access Control Model (VACM) [RFC3415]. This approach can be used very effectively within a SP's network, since the SP has access to and control over all devices within its domain. Standardized MIBs will need to be developed before this approach can be used ubiquitously to provision, configure and monitor devices in heterogeneous networks or across SP network boundaries.
12.2. Management Information Base (MIB)

In the context of GMPLS, it is extremely important for standard interfaces to devices to exist due to the nature of the technology itself. Since GMPLS comprises many different layers of control-plane technology, it is important for SNMP MIB modules in this area to be flexible enough to allow the manager to manage the entire control plane. This should be done using MIB modules that may cooperate (i.e., coordinated row-creation on the agent) or through more generalized MIB modules that aggregate some of the desired actions to be taken and push those details down to the devices. It is important to note that in certain circumstances, it may be necessary to duplicate some small subset of manageable objects in new MIB modules for management convenience. Control of some parts of GMPLS may also be achieved using existing MIB interfaces (e.g., the existing SONET MIB) or using separate ones, which are yet to be defined. MIB modules may have been previously defined in the IETF or ITU. Current MIB modules may need to be extended to facilitate some of the new functionality
desired by GMPLS. In these cases, the working group should work on new versions of these MIB modules so that these extensions can be added.
12.3. Tools

As in traditional networks, standard tools such as traceroute [RFC1393] and ping [RFC2151] are needed for debugging and performance monitoring of GMPLS networks, mainly for the control plane topology, which will mimic the data plane topology. Furthermore, such tools provide network reachability information. The GMPLS control protocols will need to expose certain pieces of information in order for these tools to function properly and to provide information germane to GMPLS. These tools should be made available via the CLI. These tools should also be made available for remote invocation via the SNMP interface [RFC2925].
12.4. Fault Correlation between Multiple Layers

Due to the nature of GMPLS, where multiple layers may be involved in the control and transmission of GMPLS data and control information, a fault in one layer must be passed to the adjacent higher and lower layers to notify them of the fault. However, due to the nature of these many layers, it is possible, and even probable, that hundreds or even thousands of notifications may need to pass between layers. This is undesirable for several reasons. First, these notifications will overwhelm the device. Second, if the device(s) are programmed to emit SNMP notifications [RFC3417], then the large number of notifications the device may attempt to emit may overwhelm the network with a storm of notifications. Furthermore, even if the device emits the notifications, the NMS that must process these notifications either will be overwhelmed or will be processing redundant information. That is, if 1000 interfaces at layer B are stacked above a single interface below it at layer A, and the interface at A goes down, the interfaces at layer B should not emit notifications. Instead, the interface at layer A should emit a single notification. The NMS receiving this notification should be able to correlate the fact that this interface has many others stacked above it and take appropriate action, if necessary. Devices that support GMPLS should provide mechanisms for aggregating, summarizing, enabling and disabling inter-layer notifications for the reasons described above. In the context of SNMP MIB modules, all MIB modules that are used by GMPLS must provide enable/disable objects for all notification objects. Furthermore, these MIBs must also provide notification summarization objects or functionality (as described above). NMS systems and standard tools which
process notifications or keep track of the many layers on any given device must be capable of processing the vast amount of information which may potentially be emitted by network devices running GMPLS at any point in time.
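The stacked-interface example above can be sketched as a simple suppression rule: when a lower-layer interface fails, the interfaces stacked above it stay silent and a single notification is emitted for the lower-layer interface, leaving correlation to the NMS. The class and object names in the following Python sketch are hypothetical and are not taken from any standard MIB module.

   # Sketch of inter-layer notification aggregation (hypothetical names).

   class InterfaceStack:
       def __init__(self):
           self.upper_of = {}               # lower interface -> stacked interfaces
           self.notifications_enabled = {}

       def stack(self, lower, uppers):
           self.upper_of[lower] = list(uppers)
           self.notifications_enabled[lower] = True
           for upper in uppers:
               self.notifications_enabled[upper] = True

       def on_link_down(self, interface):
           suppressed = self.upper_of.get(interface, [])
           for upper in suppressed:
               # Upper-layer interfaces suppress their own notifications; the
               # NMS correlates them from the single lower-layer notification.
               self.notifications_enabled[upper] = False
           if self.notifications_enabled.get(interface, False):
               print("notify:", interface, "down;",
                     len(suppressed), "stacked interfaces affected")

   stack = InterfaceStack()
   stack.stack("oc192-0", ["vlan-%d" % i for i in range(1000)])
   stack.on_link_down("oc192-0")    # one notification instead of 1000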
13. Security Considerations

GMPLS defines a control plane architecture for multiple technologies and types of network elements. In general, since LSPs established using GMPLS may carry high volumes of data and consume significant network resources, security mechanisms are required to safeguard the underlying network against attacks on the control plane and/or unauthorized usage of data transport resources. The GMPLS control plane should therefore include mechanisms that prevent or minimize the risk of attackers being able to inject and/or snoop on control traffic. These risks depend on the level of trust between nodes that exchange GMPLS control messages, as well as on the realization and physical characteristics of the control channel. For example, an in-band, in-fiber control channel over SONET/SDH overhead bytes is, in general, considered less vulnerable than a control channel realized over an out-of-band IP network. Security mechanisms can provide authentication and confidentiality. Authentication can provide origin verification, message integrity and replay protection, while confidentiality ensures that a third party cannot decipher the contents of a message. In situations where GMPLS deployment requires primarily authentication, the respective authentication mechanisms of the GMPLS component protocols may be used (see [RFC2747], [RFC3036], [RFC2385] and [LMP]). Additionally, the IPsec suite of protocols (see [RFC2402], [RFC2406] and [RFC2409]) may be used to provide authentication, confidentiality or both, for a GMPLS control channel. IPsec thus offers the benefit of combined protection for all GMPLS component protocols as well as key management. A related issue is that of the authorization of requests for resources by GMPLS-capable nodes. Authorization determines whether a given party, presumably already authenticated, has a right to access the requested resources. This determination is typically a matter of local policy control [RFC2753], for example by setting limits on the total bandwidth available to some party in the presence of resource contention. Such policies may become quite complex as the number of users, types of resources and sophistication of authorization rules increases. After authenticating requests, control elements should match them against the local authorization policy. These control elements must be capable of making decisions based on the identity of the
requester, as verified cryptographically and/or topologically. For example, decisions may depend on whether the interface through which the request is made is an inter- or intra-domain one. The use of appropriate local authorization policies may help in limiting the impact of security breaches in remote parts of a network. Finally, it should be noted that GMPLS itself introduces no new security considerations to the current MPLS-TE signaling (RSVP-TE, CR-LDP), routing protocols (OSPF-TE, IS-IS-TE) or network management protocols (SNMP).
14. Acknowledgements

This document is the work of numerous authors and consists of a composition of a number of previous documents in this area. Many thanks to Ben Mack-Crane (Tellabs) for all the useful SONET/SDH discussions we had together. Thanks also to Pedro Falcao, Alexandre Geyssens, Michael Moelants, Xavier Neerdaels, and Philippe Noel from Ebone for their SONET/SDH and optical technical advice and support. Finally, many thanks also to Krishna Mitra (Consultant), Curtis Villamizar (Avici), Ron Bonica (WorldCom), and Bert Wijnen (Lucent) for their revision effort on Section 12.
15. References

15.1. Normative References
[RFC3031]  Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, January 2001.

[RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

[RFC3212]  Jamoussi, B., Andersson, L., Callon, R., Dantu, R., Wu, L., Doolan, P., Worster, T., Feldman, N., Fredette, A., Girish, M., Gray, E., Heinanen, J., Kilty, T., and A. Malis, "Constraint-Based LSP Setup using LDP", RFC 3212, January 2002.

[RFC3471]  Berger, L., "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Functional Description", RFC 3471, January 2003.
[RFC3472]  Ashwood-Smith, P. and L. Berger, "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Constraint-based Routed Label Distribution Protocol (CR-LDP) Extensions", RFC 3472, January 2003.

[RFC3473]  Berger, L., "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions", RFC 3473, January 2003.

15.2. Informative References
[ANSI-T1.105]  "Synchronous Optical Network (SONET): Basic Description Including Multiplex Structure, Rates, And Formats," ANSI T1.105, 2000.

[BUNDLE]  Kompella, K., Rekhter, Y., and L. Berger, "Link Bundling in MPLS Traffic Engineering", Work in Progress.

[GMPLS-FUNCT]  Lang, J.P., Ed. and B. Rajagopalan, Ed., "Generalized MPLS Recovery Functional Specification", Work in Progress.

[GMPLS-G709]  Papadimitriou, D., Ed., "GMPLS Signaling Extensions for G.709 Optical Transport Networks Control", Work in Progress.

[GMPLS-OVERLAY]  Swallow, G., Drake, J., Ishimatsu, H., and Y. Rekhter, "GMPLS UNI: RSVP Support for the Overlay Model", Work in Progress.

[GMPLS-ROUTING]  Kompella, K., Ed. and Y. Rekhter, Ed., "Routing Extensions in Support of Generalized Multi-Protocol Label Switching", Work in Progress.

[RFC3946]  Mannie, E., Ed. and D. Papadimitriou, Ed., "Generalized Multi-Protocol Label Switching (GMPLS) Extensions for Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH) Control", RFC 3946, October 2004.

[HIERARCHY]  Kompella, K. and Y. Rekhter, "LSP Hierarchy with Generalized MPLS TE", Work in Progress.
[ISIS-TE]  Smit, H. and T. Li, "Intermediate System to Intermediate System (IS-IS) Extensions for Traffic Engineering (TE)", RFC 3784, June 2004.

[ISIS-TE-GMPLS]  Kompella, K., Ed. and Y. Rekhter, Ed., "IS-IS Extensions in Support of Generalized Multi-Protocol Label Switching", Work in Progress.

[ITUT-G.707]  ITU-T, "Network Node Interface for the Synchronous Digital Hierarchy", Recommendation G.707, October 2000.

[ITUT-G.709]  ITU-T, "Interface for the Optical Transport Network (OTN)," Recommendation G.709 version 1.0 (and Amendment 1), February 2001 (and October 2001).

[ITUT-G.841]  ITU-T, "Types and Characteristics of SDH Network Protection Architectures," Recommendation G.841, October 1998.

[LMP]  Lang, J., Ed., "Link Management Protocol (LMP)", Work in Progress.

[LMP-WDM]  Fredette, A., Ed. and J. Lang, Ed., "Link Management Protocol (LMP) for Dense Wavelength Division Multiplexing (DWDM) Optical Line Systems", Work in Progress.

[MANCHESTER]  Manchester, J., Bonenfant, P., and C. Newton, "The Evolution of Transport Network Survivability," IEEE Communications Magazine, August 1999.

[OIF-UNI]  The Optical Internetworking Forum, "User Network Interface (UNI) 1.0 Signaling Specification - Implementation Agreement OIF-UNI-01.0," October 2001.

[OLI-REQ]  Fredette, A., Ed., "Optical Link Interface Requirements," Work in Progress.

[OSPF-TE-GMPLS]  Kompella, K., Ed. and Y. Rekhter, Ed., "OSPF Extensions in Support of Generalized Multi-Protocol Label Switching", Work in Progress.
[OSPF-TE]  Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering (TE) Extensions to OSPF Version 2", RFC 3630, September 2003.

[RFC1393]  Malkin, G., "Traceroute Using an IP Option", RFC 1393, January 1993.

[RFC2151]  Kessler, G. and S. Shepard, "A Primer On Internet and TCP/IP Tools and Utilities", RFC 2151, June 1997.

[RFC2205]  Braden, R., Zhang, L., Berson, S., Herzog, S., and S. Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification", RFC 2205, September 1997.

[RFC2385]  Heffernan, A., "Protection of BGP Sessions via the TCP MD5 Signature Option", RFC 2385, August 1998.

[RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header", RFC 2402, November 1998.

[RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security Payload (ESP)", RFC 2406, November 1998.

[RFC2409]  Harkins, D. and D. Carrel, "The Internet Key Exchange (IKE)", RFC 2409, November 1998.

[RFC2702]  Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.

[RFC2747]  Baker, F., Lindell, B., and M. Talwar, "RSVP Cryptographic Authentication", RFC 2747, January 2000.

[RFC2753]  Yavatkar, R., Pendarakis, D., and R. Guerin, "A Framework for Policy-based Admission Control", RFC 2753, January 2000.

[RFC2925]  White, K., "Definitions of Managed Objects for Remote Ping, Traceroute, and Lookup Operations", RFC 2925, September 2000.
[RFC3036]  Andersson, L., Doolan, P., Feldman, N., Fredette, A., and B. Thomas, "LDP Specification", RFC 3036, January 2001.

[RFC3386]  Lai, W. and D. McDysan, "Network Hierarchy and Multilayer Survivability", RFC 3386, November 2002.

[RFC3410]  Case, J., Mundy, R., Partain, D., and B. Stewart, "Introduction and Applicability Statements for Internet-Standard Management Framework", RFC 3410, December 2002.

[RFC3411]  Harrington, D., Presuhn, R., and B. Wijnen, "An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, December 2002.

[RFC3414]  Blumenthal, U. and B. Wijnen, "User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)", STD 62, RFC 3414, December 2002.

[RFC3415]  Wijnen, B., Presuhn, R., and K. McCloghrie, "View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3415, December 2002.

[RFC3416]  Presuhn, R., "Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3416, December 2002.

[RFC3417]  Presuhn, R., "Transport Mappings for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3417, December 2002.

[RFC3469]  Sharma, V. and F. Hellstrand, "Framework for Multi-Protocol Label Switching (MPLS)-based Recovery", RFC 3469, February 2003.

[RFC3477]  Kompella, K. and Y. Rekhter, "Signalling Unnumbered Links in Resource ReSerVation Protocol - Traffic Engineering (RSVP-TE)", RFC 3477, January 2003.
[RFC3479]  Farrel, A., "Fault Tolerance for the Label Distribution Protocol (LDP)", RFC 3479, February 2003.

[RFC3480]  Kompella, K., Rekhter, Y., and A. Kullberg, "Signalling Unnumbered Links in CR-LDP (Constraint-Routing Label Distribution Protocol)", RFC 3480, February 2003.

[SONET-SDH-GMPLS-FRM]  Bernstein, G., Mannie, E., and V. Sharma, "Framework for GMPLS-based Control of SDH/SONET Networks", Work in Progress.

16. Contributors
Peter Ashwood-Smith
Nortel
P.O. Box 3511 Station C
Ottawa, ON K1Y 4H7, Canada
EMail: petera@nortelnetworks.com

Eric Mannie
Consult
Phone: +32 2 648-5023
Mobile: +32 (0)495-221775
EMail: eric_mannie@hotmail.com

Daniel O. Awduche
Consult
EMail: awduche@awduche.com

Thomas D. Nadeau
Cisco
250 Apollo Drive
Chelmsford, MA 01824, USA
EMail: tnadeau@cisco.com
Ayan Banerjee
Calient
5853 Rue Ferrari
San Jose, CA 95138, USA
EMail: abanerjee@calient.net

Lyndon Ong
Ciena
10480 Ridgeview Ct
Cupertino, CA 95014, USA
EMail: lyong@ciena.com

Debashis Basak
Accelight
70 Abele Road, Bldg.1200
Bridgeville, PA 15017, USA
EMail: dbasak@accelight.com

Dimitri Papadimitriou
Alcatel
Francis Wellesplein, 1
B-2018 Antwerpen, Belgium
EMail: dimitri.papadimitriou@alcatel.be

Lou Berger
Movaz
7926 Jones Branch Drive
McLean, VA 22102, USA
EMail: lberger@movaz.com

Dimitrios Pendarakis
Tellium
2 Crescent Place, P.O. Box 901
Oceanport, NJ 07757-0901, USA
EMail: dpendarakis@tellium.com
Greg Bernstein
Grotto
EMail: gregb@grotto-networking.com

Bala Rajagopalan
Tellium
2 Crescent Place, P.O. Box 901
Oceanport, NJ 07757-0901, USA
EMail: braja@tellium.com

Sudheer Dharanikota
Consult
EMail: sudheer@ieee.org

Yakov Rekhter
Juniper
1194 N. Mathilda Ave.
Sunnyvale, CA 94089, USA
EMail: yakov@juniper.net

John Drake
Calient
5853 Rue Ferrari
San Jose, CA 95138, USA
EMail: jdrake@calient.net

Debanjan Saha
Tellium
2 Crescent Place
Oceanport, NJ 07757-0901, USA
EMail: dsaha@tellium.com
Yanhe Fan
Axiowave
200 Nickerson Road
Marlborough, MA 01752, USA
EMail: yfan@axiowave.com

Hal Sandick
Shepard M.S.
2401 Dakota Street
Durham, NC 27705, USA
EMail: sandick@nc.rr.com

Don Fedyk
Nortel
600 Technology Park Drive
Billerica, MA 01821, USA
EMail: dwfedyk@nortelnetworks.com

Vishal Sharma
Metanoia
1600 Villa Street, Unit 352
Mountain View, CA 94041, USA
EMail: v.sharma@ieee.org

Gert Grammel
Alcatel
Lorenzstrasse, 10
70435 Stuttgart, Germany
EMail: gert.grammel@alcatel.de

George Swallow
Cisco
250 Apollo Drive
Chelmsford, MA 01824, USA
EMail: swallow@cisco.com
Dan Guo
Turin
1415 N. McDowell Blvd
Petaluma, CA 95454, USA
EMail: dguo@turinnetworks.com

Z. Bo Tang
Tellium
2 Crescent Place, P.O. Box 901
Oceanport, NJ 07757-0901, USA
EMail: btang@tellium.com

Kireeti Kompella
Juniper
1194 N. Mathilda Ave.
Sunnyvale, CA 94089, USA
EMail: kireeti@juniper.net

Jennifer Yates
AT&T
180 Park Avenue
Florham Park, NJ 07932, USA
EMail: jyates@research.att.com

Alan Kullberg
NetPlane
888 Washington St.
Dedham, MA 02026, USA
EMail: akullber@netplane.com

George R. Young
Edgeflow
329 March Road
Ottawa, Ontario, K2K 2E1, Canada
EMail: george.young@edgeflow.com
Jonathan P. Lang
Rincon Networks
EMail: jplang@ieee.org

John Yu
Hammerhead Systems
640 Clyde Court
Mountain View, CA 94043, USA
EMail: john@hammerheadsystems.com

Fong Liaw
Solas Research, LLC
EMail: fongliaw@yahoo.com

Alex Zinin
Alcatel
1420 North McDowell Ave
Petaluma, CA 94954, USA
EMail: alex.zinin@alcatel.com

17. Author's Address
Eric Mannie (Consultant)
Avenue de la Folle Chanson, 2
B-1050 Brussels, Belgium
Phone: +32 2 648-5023
Mobile: +32 (0)495-221775
EMail: eric_mannie@hotmail.com
Full Copyright Statement

Copyright (C) The Internet Society (2004). This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the IETF's procedures with respect to rights in IETF Documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.