Network Working Group                               D. Papadimitriou, Ed.
Request for Comments: 4428                                        Alcatel
Category: Informational                                    E. Mannie, Ed.
                                                                 Perceval
                                                               March 2006

          Analysis of Generalized Multi-Protocol Label Switching
                    (GMPLS)-based Recovery Mechanisms
                  (including Protection and Restoration)

Status of This Memo

   This memo provides information for the Internet community.  It does
   not specify an Internet standard of any kind.  Distribution of this
   memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract
This document provides an analysis grid to evaluate, compare, and
contrast the Generalized Multi-Protocol Label Switching (GMPLS)
protocol suite capabilities with the recovery mechanisms currently
proposed at the IETF CCAMP Working Group.  A detailed analysis of each
of the recovery phases is provided using the terminology defined in
RFC 4427.  This document focuses on transport plane survivability and
recovery issues and not on control plane resilience and related
aspects.

Table of Contents
   1. Introduction
   2. Contributors
   3. Conventions Used in this Document
   4. Fault Management
      4.1. Failure Detection
      4.2. Failure Localization and Isolation
      4.3. Failure Notification
      4.4. Failure Correlation
   5. Recovery Mechanisms
      5.1. Transport vs. Control Plane Responsibilities
      5.2. Technology-Independent and Technology-Dependent Mechanisms
           5.2.1. OTN Recovery
           5.2.2. Pre-OTN Recovery
           5.2.3. SONET/SDH Recovery
      5.3. Specific Aspects of Control Plane-Based Recovery Mechanisms
           5.3.1. In-Band vs. Out-Of-Band Signaling
           5.3.2. Uni- vs. Bi-Directional Failures
           5.3.3. Partial vs. Full Span Recovery
           5.3.4. Difference between LSP, LSP Segment and Span Recovery
      5.4. Difference between Recovery Type and Scheme
      5.5. LSP Recovery Mechanisms
           5.5.1. Classification
           5.5.2. LSP Restoration
           5.5.3. Pre-Planned LSP Restoration
           5.5.4. LSP Segment Restoration
   6. Reversion
      6.1. Wait-To-Restore (WTR)
      6.2. Revertive Mode Operation
      6.3. Orphans
   7. Hierarchies
      7.1. Horizontal Hierarchy (Partitioning)
      7.2. Vertical Hierarchy (Layers)
           7.2.1. Recovery Granularity
      7.3. Escalation Strategies
      7.4. Disjointness
           7.4.1. SRLG Disjointness
   8. Recovery Mechanisms Analysis
      8.1. Fast Convergence (Detection/Correlation and Hold-off Time)
      8.2. Efficiency (Recovery Switching Time)
      8.3. Robustness
      8.4. Resource Optimization
           8.4.1. Recovery Resource Sharing
           8.4.2. Recovery Resource Sharing and SRLG Recovery
           8.4.3. Recovery Resource Sharing, SRLG Disjointness and
                  Admission Control
   9. Summary and Conclusions
   10. Security Considerations
   11. Acknowledgements
   12. References
      12.1. Normative References
      12.2. Informative References
1. Introduction
This document provides an analysis grid to evaluate, compare, and
contrast the Generalized MPLS (GMPLS) protocol suite capabilities with
the recovery mechanisms proposed at the IETF CCAMP Working Group.  The
focus is on transport plane survivability and recovery issues and not
on control plane resilience and related aspects.  Although the
recovery mechanisms described in this document impose different
requirements on GMPLS-based recovery protocols, the protocols'
specifications are not covered in this document.

Though the concepts discussed are technology independent, this
document implicitly focuses on SONET [T1.105]/SDH [G.707], Optical
Transport Networks (OTN) [G.709], and pre-OTN technologies, except
when specific details need to be considered (for instance, in the case
of failure detection).

A detailed analysis is provided for each of the recovery phases as
identified in [RFC4427].  These phases define the sequence of generic
operations that need to be performed when an LSP/Span failure (or any
other event generating such failures) occurs:

   - Phase 1: Failure Detection
   - Phase 2: Failure Localization (and Isolation)
   - Phase 3: Failure Notification
   - Phase 4: Recovery (Protection or Restoration)
   - Phase 5: Reversion (Normalization)

Together, the failure detection, localization, and notification phases
are referred to as "fault management".

Within a recovery domain, the entities involved during the recovery
operations are defined in [RFC4427]; these entities include ingress,
egress, and intermediate nodes.  The term "recovery mechanism" is used
to cover both protection and restoration mechanisms.  Specific terms
such as "protection" and "restoration" are used only when
differentiation is required.  Likewise, the term "failure" is used to
represent both signal failure and signal degradation.
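Purely as an illustration of the phase sequence above, a minimal
sketch follows; the enumeration names and helper are this sketch's
own, not GMPLS protocol elements:

```python
from enum import Enum
from typing import Optional

class RecoveryPhase(Enum):
    """Generic phases run after an LSP/Span failure (per [RFC4427])."""
    FAILURE_DETECTION = 1
    FAILURE_LOCALIZATION = 2
    FAILURE_NOTIFICATION = 3
    RECOVERY = 4          # protection or restoration switching
    REVERSION = 5         # normalization

# Phases 1-3 together constitute "fault management".
FAULT_MANAGEMENT = {
    RecoveryPhase.FAILURE_DETECTION,
    RecoveryPhase.FAILURE_LOCALIZATION,
    RecoveryPhase.FAILURE_NOTIFICATION,
}

def next_phase(phase: RecoveryPhase) -> Optional[RecoveryPhase]:
    """Phases run strictly in sequence; reversion ends the cycle."""
    return RecoveryPhase(phase.value + 1) if phase.value < 5 else None
```

Note that recovery switching (phase 4) and reversion (phase 5) fall
outside the fault-management grouping, mirroring the text above.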
In addition, when analyzing the different hierarchical recovery
mechanisms, including disjointness-related issues, a clear distinction
is made between partitioning (horizontal hierarchy) and layering
(vertical hierarchy).

In order to assess the current GMPLS protocol capabilities and the
potential need for further extensions, the dimensions for analyzing
each of the recovery mechanisms detailed in this document are
introduced.  This document concludes by detailing the applicability of
the current GMPLS protocol building blocks for recovery purposes.
2. Contributors
This document is the result of the CCAMP Working Group Protection and
Restoration design team joint effort.  Besides the editors, the
following are the authors that contributed to the present memo:

   Deborah Brungard (AT&T)
   200 S. Laurel Ave.
   Middletown, NJ 07748, USA
   EMail: dbrungard@att.com

   Sudheer Dharanikota
   EMail: sudheer@ieee.org

   Jonathan P. Lang (Sonos)
   506 Chapala Street
   Santa Barbara, CA 93101, USA
   EMail: jplang@ieee.org

   Guangzhi Li (AT&T)
   180 Park Avenue,
   Florham Park, NJ 07932, USA
   EMail: gli@research.att.com

   Eric Mannie
   Perceval
   Rue Tenbosch, 9
   1000 Brussels
   Belgium
   Phone: +32-2-6409194
   EMail: eric.mannie@perceval.net

   Dimitri Papadimitriou (Alcatel)
   Francis Wellesplein, 1
   B-2018 Antwerpen, Belgium
   EMail: dimitri.papadimitriou@alcatel.be
   Bala Rajagopalan
   Microsoft India Development Center
   Hyderabad, India
   EMail: balar@microsoft.com

   Yakov Rekhter (Juniper)
   1194 N. Mathilda Avenue
   Sunnyvale, CA 94089, USA
   EMail: yakov@juniper.net

3. Conventions Used in this Document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].

Any other recovery-related terminology used in this document conforms
to that defined in [RFC4427].  The reader is also assumed to be
familiar with the terminology developed in [RFC3945], [RFC3471],
[RFC3473], [RFC4202], and [RFC4204].

4. Fault Management
4.1. Failure Detection
Transport failure detection is the only phase that cannot be achieved
by the control plane alone, because the latter needs a hook to the
transport plane in order to collect the related information.  It has
to be emphasized that even if failure events themselves are detected
by the transport plane, the latter, upon a failure condition, must
trigger the control plane for subsequent actions through the use of
GMPLS signaling capabilities (see [RFC3471] and [RFC3473]) or Link
Management Protocol capabilities (see [RFC4204], Section 6).
Therefore, by definition, transport failure detection is transport
technology dependent (and so, exceptionally, we keep here the
"transport plane" terminology).

In transport fault management, a distinction is made between a defect
and a failure.  Here, the discussion addresses failure detection
(persistent fault cause).  In the technology-dependent descriptions, a
more precise specification will be provided.

As an example, SONET/SDH (see [G.707], [G.783], and [G.806]) provides
supervision capabilities covering:
- Continuity: SONET/SDH monitors the integrity of the continuity of a
  trail (i.e., section or path).  This operation is performed by
  monitoring the presence/absence of the signal.  Examples are Loss of
  Signal (LOS) detection for the physical layer, Unequipped (UNEQ)
  Signal detection for the path layer, and Server Signal Fail
  detection (e.g., AIS) at the client layer.

- Connectivity: SONET/SDH monitors the integrity of the routing of the
  signal between end-points.  Connectivity monitoring is needed if the
  layer provides flexible connectivity, either automatically (e.g.,
  cross-connects) or manually (e.g., fiber distribution frame).  An
  example is the Trail (i.e., section or path) Trace Identifier used
  at the different layers and the corresponding Trail Trace Identifier
  Mismatch detection.

- Alignment: SONET/SDH checks that the client and server layer frame
  start can be correctly recovered from the detection of loss of
  alignment.  The specific processes depend on the signal/frame
  structure and may include: (multi-)frame alignment, pointer
  processing, and alignment of several independent frames to a common
  frame start in case of inverse multiplexing.  Loss of alignment is a
  generic term; examples are loss of frame, loss of multi-frame, and
  loss of pointer.

- Payload type: SONET/SDH checks that compatible adaptation functions
  are used at the source and the destination.  Normally, this is done
  by adding a payload type identifier (referred to as the "signal
  label") at the source adaptation function and comparing it with the
  expected identifier at the destination; a difference results in a
  payload mismatch detection.

- Signal Quality: SONET/SDH monitors the performance of a signal.  For
  instance, if the performance falls below a certain threshold, a
  defect -- excessive errors (EXC) or degraded signal (DEG) -- is
  detected.
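A toy sketch of how a detecting node might combine several of these
supervision categories follows; all field names, defect labels, and
thresholds here are illustrative assumptions of this sketch, not
values taken from the ITU-T Recommendations:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MonitoredSignal:
    signal_present: bool    # continuity supervision (LOS when absent)
    trail_trace_id: str     # connectivity supervision
    signal_label: str       # payload type supervision
    bit_error_rate: float   # signal quality supervision

def supervise(sig: MonitoredSignal,
              expected_trace: str,
              expected_label: str,
              deg_threshold: float = 1e-6) -> List[str]:
    """Return the list of defect indications raised by supervision."""
    defects = []
    if not sig.signal_present:
        defects.append("LOS")     # continuity defect
    if sig.trail_trace_id != expected_trace:
        defects.append("TIM")     # trace identifier mismatch
    if sig.signal_label != expected_label:
        defects.append("PLM")     # payload mismatch
    if sig.bit_error_rate > deg_threshold:
        defects.append("DEG")     # degraded signal
    return defects
```

An empty result means no defect was raised by any of the modeled
supervision processes; persistent defects would then be promoted to
failures before triggering recovery.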
The most important point is that the supervision processes and the
corresponding failure detection (used to initiate the recovery
phase(s)) result in either:

- Signal Degrade (SD): A signal indicating that the associated data
  has degraded, in the sense that a degraded defect condition is
  active (for instance, a dDEG declared when the Bit Error Rate
  exceeds a preset threshold).  Or
- Signal Fail (SF): A signal indicating that the associated data has
  failed, in the sense that a signal-interrupting near-end defect
  condition is active (as opposed to the degraded defect).

In Optical Transport Networks (OTN), equivalent supervision
capabilities are provided at the optical/digital section layers (i.e.,
Optical Transmission Section (OTS), Optical Multiplex Section (OMS),
and Optical channel Transport Unit (OTU)) and at the optical/digital
path layers (i.e., Optical Channel (OCh) and Optical channel Data Unit
(ODU)).  Interested readers are referred to the ITU-T Recommendations
[G.798] and [G.709] for more details.

The above are examples that illustrate cases where the failure
detection and reporting entities (see [RFC4427]) are co-located.  The
following example illustrates the scenario where the failure detecting
and reporting entities (see [RFC4427]) are not co-located.

In pre-OTN networks, a failure may be masked by an intermediate O-E-O-
based Optical Line System (OLS), preventing a Photonic Cross-Connect
(PXC) from detecting upstream failures.  In such cases, failure
detection may be assisted by an out-of-band communication channel, and
failure conditions may be reported to the PXC control plane.  This can
be provided by using [RFC4209] extensions that deliver IP message-
based communication between the PXC and the OLS control plane.  Also,
since PXCs are independent of the framing format, failure conditions
can only be triggered either by detecting the absence of the optical
signal or by measuring its quality.  These mechanisms are generally
less reliable than electrical (digital) ones.  Both types of detection
mechanisms are outside the scope of this document.  If the
intermediate OLS supports electrical (digital) mechanisms, these
failure conditions are reported to the PXC using the LMP communication
channel, and subsequent recovery actions are performed as described in
Section 5.
As such, from the control plane viewpoint, this mechanism turns the
OLS-PXC-composed system into a single logical entity, thus having the
same failure management mechanisms as any other O-E-O capable device.

More generally, the following are typical failure conditions in
SONET/SDH and pre-OTN networks:

- Loss of Light (LOL)/Loss of Signal (LOS): Signal Fail (SF) condition
  where the optical signal is no longer detected on the receiver of a
  given interface.

- Signal Degrade (SD): detection of signal degradation over a specific
  period of time.
- For SONET/SDH payloads, all of the above-mentioned supervision
  capabilities can be used, resulting in SD or SF conditions.

In summary, the following cases apply when considering the
communication between the detecting and reporting entities:

- Co-located detecting and reporting entities: both the detecting and
  reporting entities are on the same node (e.g., SONET/SDH equipment,
  opaque cross-connects, and, with some limitations, transparent
  cross-connects, etc.)

- Non-co-located detecting and reporting entities:

  o with in-band communication between entities: entities are
    physically separated, but the transport plane provides in-band
    communication between them (e.g., Server Signal Failures such as
    Alarm Indication Signal (AIS), etc.)

  o with out-of-band communication between entities: entities are
    physically separated, but an out-of-band communication channel is
    provided between them (e.g., using [RFC4204]).

4.2. Failure Localization and Isolation
Failure localization provides information to the deciding entity about
the location (and so the identity) of the transport plane entity that
detects the LSP(s)/span(s) failure.  The deciding entity can then make
an accurate decision to achieve finer-grained recovery switching
action(s).  Note that this information can also be included as part of
the failure notification (see Section 4.3).

In some cases, this accurate failure localization information may be
less urgent to determine if it requires performing more time-consuming
failure isolation (see also Section 4.4).  This is particularly the
case when edge-to-edge LSP recovery is performed based on a simple
failure notification (including the identification of the working LSPs
under failure condition).  Note that "edge" refers to a sub-network
end-node, for instance.  In this case, a more accurate localization
and isolation can be performed after recovery of these LSPs.

Failure localization should be triggered immediately after the fault
detection phase.  This operation can be performed at the transport
plane and/or (if the operation is unavailable via the transport plane)
at the control plane level, where dedicated signaling messages can be
used.  When performed at the control plane level, a protocol such as
LMP (see [RFC4204], Section 6) can be used for failure localization
purposes.
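As a toy illustration of the localization step (an assumed model, not
the LMP procedure of [RFC4204]): if the monitoring points along an LSP
are ordered from ingress to egress and each reports whether it still
receives a good signal, the failed span is bounded by the last point
seeing a good signal and the first point reporting failure (downstream
points also report failure, e.g., due to AIS propagation):

```python
from typing import List, Optional, Tuple

def localize_failure(points: List[str],
                     signal_ok: List[bool]) -> Optional[Tuple[str, str]]:
    """Return the (upstream, downstream) nodes bounding the failed span,
    or None if every monitoring point still sees a good signal."""
    for i in range(1, len(points)):
        # The failure lies on the span where "good" turns into "failed".
        if signal_ok[i - 1] and not signal_ok[i]:
            return points[i - 1], points[i]
    return None
```

For example, with points A-B-C-D and reports [good, good, failed,
failed], the failure is localized to the B-C span.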
4.3. Failure Notification
Failure notification is used 1) to inform intermediate nodes that an
LSP/span failure has occurred and has been detected and 2) to inform
the deciding entities (which can correspond to any intermediate or
end-point of the failed LSP/span) that the corresponding service is
not available.  In general, these deciding entities will be the ones
making the appropriate recovery decision.  When co-located with the
recovering entity, these entities will also perform the corresponding
recovery action(s).

Failure notification can be provided either by the transport plane or
by the control plane.  As an example, let us first briefly describe
the failure notification mechanism defined at the SONET/SDH transport
plane level (also referred to as maintenance signal supervision):

- AIS (Alarm Indication Signal) occurs as a result of a failure
  condition such as Loss of Signal and is used to notify downstream
  nodes (of the appropriate layer processing) that a failure has
  occurred.  AIS performs two functions: 1) inform the intermediate
  nodes (with the appropriate layer monitoring capability) that a
  failure has been detected and 2) notify the connection end-point
  that the service is no longer available.

For a distributed control plane supporting one (or more) failure
notification mechanism(s), regardless of the mechanism's actual
implementation, the same capabilities are needed, with more (or less)
information provided about the LSPs/spans under failure condition,
their detailed statuses, etc.

The most important difference between these mechanisms is related to
the fact that transport plane notifications (as defined today) would
directly initiate either a certain type of protection switching (such
as those described in [RFC4427]) via the transport plane or
restoration actions via the management plane.  On the other hand,
using a failure notification mechanism through the control plane would
provide the possibility of triggering either a protection or a
restoration action via the control plane.
This has the advantage that a control-plane-recovery-responsible
entity does not necessarily have to be co-located with a transport
maintenance/recovery domain.  A control plane recovery domain can be
defined at entities not supporting a transport plane recovery.

Moreover, as specified in [RFC3473], notification message exchanges
through a GMPLS control plane may not follow the same path as the
LSP/spans for which these messages carry the status.  In turn, this
ensures a fast, reliable (through acknowledgement and the use of
either a dedicated control plane network or disjoint control
channels), and efficient (through the aggregation of several LSP/span
statuses within the same message) failure notification mechanism.

The other important properties to be met by the failure notification
mechanism are mainly the following:

- Notification messages must provide enough information such that the
  most efficient subsequent recovery action will be taken at the
  recovering entities (in most of the recovery types and schemes this
  action is even deterministic).  Remember here that these entities
  can be either intermediate or end-points through which normal
  traffic flows.  Based on local policy, intermediate nodes may not
  use this information for subsequent recovery actions (see, for
  instance, the APS protocol phases as described in [RFC4427]).  In
  addition, fast notification is a mechanism running in collaboration
  with the existing GMPLS signaling (see [RFC3473]) that also allows
  intermediate nodes to stay informed about the status of the working
  LSP/spans under failure condition.

  The trade-off here arises when defining what information the
  LSP/span end-points (more precisely, the deciding entities) need in
  order for the recovering entity to take the best recovery action: if
  not enough information is provided, the decision cannot be optimal
  (note that in this eventuality, the important issue is to quantify
  the level of sub-optimality), whereas if too much information is
  provided, the control plane may be overloaded with unnecessary
  information, and the aggregation/correlation of this notification
  information will be more complex and time-consuming to achieve.
  Note that a more detailed quantification of the amount of
  information to be exchanged and processed is strongly dependent on
  the failure notification protocol.
- If the failure localization and isolation are not performed by one
  of the LSP/span end-points or some intermediate points, the points
  should receive enough information from the notification message in
  order to locate the failure.  Otherwise, they would need to
  (re-)initiate a failure localization and isolation action.

- Avoiding so-called notification storms implies that 1) the failure
  detection output is correlated (i.e., alarm correlation) and
  aggregated at the node detecting the failure(s), 2) the failure
  notifications are directed to a restricted set of destinations (in
  general, the end-points), and 3) failure notification suppression
  (i.e., alarm suppression) is provided in order to limit flooding in
  case of multiple and/or correlated failures detected at several
  locations in the network.
- Alarm correlation and aggregation (at the failure-detecting node)
  implies a consistent decision based on the conditions for which a
  trade-off between fast convergence (at the detecting node) and fast
  notification (implying that correlation and aggregation occur at the
  receiving end-points) can be found.

4.4. Failure Correlation
A single failure event (such as a span failure) can cause multiple
failure conditions (such as individual LSP failures) to be reported.
These can be grouped (i.e., correlated) to reduce the number of
failure conditions communicated on the reporting channel, for both
in-band and out-of-band failure reporting.

In such a scenario, it can be important to wait for a certain period
of time, typically called the failure correlation time, and gather all
the failures to report them as a group of failures (or simply a group
failure).  For instance, this approach can be provided using LMP-WDM
for pre-OTN networks (see [RFC4209]) or when using Signal
Failure/Degrade Group in the SONET/SDH context.

Note that a default average time interval during which the failure
correlation operation can be performed is difficult to provide, since
it is strongly dependent on the underlying network topology.
Therefore, providing a per-node configurable failure correlation time
can be advisable.  The detailed selection criteria for this time
interval are outside the scope of this document.

When failure correlation is not provided, multiple failure
notification messages may be sent out in response to a single failure
(for instance, a fiber cut).  Each failure notification message
contains a set of information on the failed working resources (for
instance, the individual lambda LSPs flowing through this fiber).
This allows for a more prompt response but can potentially overload
the control plane due to a large number of failure notifications.

5. Recovery Mechanisms
5.1. Transport vs. Control Plane Responsibilities
When applicable, recovery resources are provisioned, for both protection and restoration, using GMPLS signaling capabilities. Thus, these are control plane-driven actions (topological and resource-constrained) that are always performed in this context. The following tables give an overview of the responsibilities taken by the control plane in case of LSP/span recovery:
1. LSP/span Protection

   - Phase 1: Failure Detection               Transport plane
   - Phase 2: Failure Localization/Isolation  Transport/Control plane
   - Phase 3: Failure Notification            Transport/Control plane
   - Phase 4: Protection Switching            Transport/Control plane
   - Phase 5: Reversion (Normalization)       Transport/Control plane

   Note: in the context of LSP/span protection, control plane actions
   can be performed either for operational purposes and/or
   synchronization purposes (vertical synchronization between
   transport and control plane) and/or notification purposes
   (horizontal synchronization between end-nodes at the control plane
   level).  This suggests the selection of the responsible plane (in
   particular for protection switching) during the provisioning phase
   of the protected/protection LSP.

2. LSP/span Restoration

   - Phase 1: Failure Detection               Transport plane
   - Phase 2: Failure Localization/Isolation  Transport/Control plane
   - Phase 3: Failure Notification            Control plane
   - Phase 4: Recovery Switching              Control plane
   - Phase 5: Reversion (Normalization)       Control plane

Therefore, this document primarily focuses on provisioning of LSP
recovery resources, failure notification mechanisms, recovery
switching, and reversion operations.  Moreover, some additional
considerations can be dedicated to the mechanisms associated with the
failure localization/isolation phase.

5.2. Technology-Independent and Technology-Dependent Mechanisms
The present recovery mechanisms analysis applies to any circuit-
oriented data plane technology with discrete bandwidth increments
(like SONET/SDH, G.709 OTN, etc.) being controlled by a GMPLS-based
distributed control plane.

The following sub-sections are not intended to favor one technology
over another.  They list the pros and cons of each technology in order
to determine the mechanisms that GMPLS-based recovery must deliver to
overcome the cons and take advantage of the pros in their respective
applicability contexts.

5.2.1. OTN Recovery
OTN recovery specifics are left for further consideration.
5.2.2. Pre-OTN Recovery
Pre-OTN recovery specifics (also referred to as "lambda switching")
present mainly the following advantages:

- They benefit from a simpler architecture, making it more suitable
  for mesh-based recovery types and schemes (on a per-channel basis).

- Failure suppression at intermediate-node transponders, e.g., use of
  squelching, implies that failures (such as LoL) will propagate to
  edge nodes.  Thus, edge nodes will have the possibility of
  initiating recovery actions driven by upper layers (vs. use of
  non-standard masking of upstream failures).

The main disadvantage is the lack of interworking due to the large
number of failure management mechanisms (in particular, failure
notification protocols) and recovery mechanisms currently available.

Note also that, for all-optical networks, the combination of recovery
with optical physical impairments is left for a future release of this
document, because the corresponding detection technologies are under
specification.

5.2.3. SONET/SDH Recovery
Some of the advantages of SONET [T1.105]/SDH [G.707], and more
generically of any Time Division Multiplexing (TDM) transport plane
recovery, are that they provide:

- Protection types operating at the data plane level that are
  standardized (see [G.841]) and can operate across protected domains
  and interwork (see [G.842]).

- Failure detection, notification, and path/section Automatic
  Protection Switching (APS) mechanisms.

- Greater control over the granularity of the TDM LSPs/links that can
  be recovered, with respect to coarser optical channel (or whole
  fiber content) recovery switching.

Some of the limitations of SONET/SDH recovery are:

- Limited topological scope: inherently, the use of ring topologies
  (typically, dedicated Sub-Network Connection Protection (SNCP) or
  shared protection rings) has reduced flexibility and resource
  efficiency with respect to the (somewhat more complex) meshed
  recovery.
- Inefficient use of spare capacity: SONET/SDH protection is largely
  applied to ring topologies, where spare capacity often remains idle,
  making the efficiency of bandwidth usage a real issue.

- Support of meshed recovery requires intensive network management
  development, and the functionality is limited by both the network
  elements and the capabilities of the element management systems
  (thus justifying the development of GMPLS-based distributed recovery
  mechanisms).

5.3. Specific Aspects of Control Plane-Based Recovery Mechanisms
5.3.1. In-Band vs. Out-Of-Band Signaling
The nodes communicate through the use of IP-terminating control
channels defining the control plane (transport) topology.  In this
context, two classes of transport mechanisms can be considered here:
in-fiber or out-of-fiber (through a dedicated physically diverse
control network referred to as the Data Communication Network or
DCN).  The potential impact of the usage of an in-fiber (signaling)
transport mechanism is briefly considered here.

In-fiber transport mechanisms can be further subdivided into in-band
and out-of-band.  As such, the distinction between in-fiber in-band
and in-fiber out-of-band signaling reduces to the consideration of a
logically versus physically embedded control plane topology with
respect to the transport plane topology.  In the scope of this
document, it is assumed that at least one IP control channel between
each pair of adjacent nodes is continuously available to enable the
exchange of recovery-related information and messages.  Thus, in
either case (i.e., in-band or out-of-band), at least one logical or
physical control channel between each pair of nodes is always
expected to be available.

Therefore, the key issue when using in-fiber signaling is whether one
can assume independence between the fault-tolerance capabilities of
the control plane and the failures affecting the transport plane
(including the nodes).  Note also that existing specifications like
the OTN provide a limited form of independence for in-fiber signaling
by dedicating a separate optical supervisory channel (OSC, see
[G.709] and [G.874]) to transport the overhead and other control
traffic.  For OTNs, failure of the OSC does not result in failing the
optical channels.  Similarly, loss of the control channel must not
result in failing the data channels (transport plane).
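The two requirements just stated (recovery signaling needs at least
one live control channel per adjacency, while loss of every control
channel must not, by itself, fail the data channels) can be sketched
with a toy adjacency model; the channel names and structure are
illustrative assumptions, not taken from any GMPLS specification:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Adjacency:
    """Toy model of a control adjacency between two neighboring nodes."""
    # True = control channel currently usable.
    control_channels: Dict[str, bool] = field(default_factory=lambda: {
        "in-fiber-OSC": True,       # in-fiber supervisory channel
        "out-of-fiber-DCN": True,   # physically diverse DCN channel
    })
    data_channels_up: bool = True

    def can_signal(self) -> bool:
        """Recovery messages can be exchanged if any channel is up."""
        return any(self.control_channels.values())

    def fail_control(self, name: str) -> None:
        self.control_channels[name] = False
        # data_channels_up is deliberately untouched: loss of a control
        # channel must not fail the transport plane.
```

With both channels configured, failure of the in-fiber channel alone
leaves signaling possible over the DCN; even losing both channels
leaves the data channels carrying traffic.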
5.3.2. Uni- vs. Bi-Directional Failures
The failure detection, correlation, and notification mechanisms
(described in Section 4) can be triggered when either a
uni-directional or a bi-directional LSP/Span failure occurs (or a
combination of both).  As illustrated in Figures 1 and 2, two
alternatives can be considered here:

1. Uni-directional failure detection: the failure is detected on the
   receiver side, i.e., it is detected by only the node downstream of
   the failure (or by the upstream node, depending on the failure
   propagation direction, respectively).

2. Bi-directional failure detection: the failure is detected on the
   receiver side of both the downstream node AND the upstream node to
   the failure.

Notice that after the failure detection time, if only control-plane-
based failure management is provided, the peering node is unaware of
the failure detection status of its neighbor.

   ---------           ---------         ---------           ---------
   |       |           |       |Tx     Rx|       |           |       |
   | NodeA |----...----| NodeB |xxxxxxxxx| NodeC |----...----| NodeD |
   |       |----...----|       |---------|       |----...----|       |
   ---------           ---------         ---------           ---------

   t0                          >>>>>>> F
   t1                        x <-------------x
                                Notification
   t2 <--------...--------x       x--------...-------->
        Up Notification             Down Notification

             Figure 1: Uni-directional failure detection
   ---------           ---------         ---------           ---------
   |       |           |       |Tx     Rx|       |           |       |
   | NodeA |----...----| NodeB |xxxxxxxxx| NodeC |----...----| NodeD |
   |       |----...----|       |xxxxxxxxx|       |----...----|       |
   ---------           ---------         ---------           ---------

   t0                      F <<<<<<< >>>>>>> F
   t1                        x <---------> x
                                Notification
   t2 <--------...--------x       x--------...-------->
        Up Notification             Down Notification

             Figure 2: Bi-directional failure detection

After failure detection, the following failure management operations
can subsequently be considered:

- Each detecting entity sends a notification message to the
  corresponding transmitting entity.  For instance, in Figure 1, node
  C sends a notification message to node B.  In Figure 2, node C
  sends a notification message to node B, while node B sends a
  notification message to node C.  To ensure reliable failure
  notification, a dedicated acknowledgement message can be returned
  to the sender node.

- Next, within a certain (pre-determined) time window, nodes impacted
  by the failure occurrences may perform their correlation.  In case
  of a uni-directional failure, node B only receives the notification
  message from node C, and thus the time for this operation is
  negligible.  In case of a bi-directional failure, node B has to
  correlate the received notification message from node C with the
  corresponding locally detected information (and node C has to do
  the same with the message from node B).

- After some (pre-determined) period of time, referred to as the
  hold-off time, if the local recovery actions (see Section 5.3.4)
  were not successful, the following occurs.  In case of a uni-
  directional failure and depending on the directionality of the LSP,
  node B should send an upstream notification message (see [RFC3473])
  to the ingress node A.  Node C may send a downstream notification
  message (see [RFC3473]) to the egress node D.  However, in that
  case, only node A would initiate an edge-to-edge recovery action.
  Node A is referred to as the "master", and node D is referred to as
  the "slave", per [RFC4427].
Note that the other LSP end-node (node D in this case) may be optionally notified using a downstream notification message (see [RFC3473]).
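The hold-off behaviour described above, i.e., trying local recovery first and escalating to edge-to-edge recovery only after the hold-off time expires without success, can be summarized in a small sketch. The callbacks and outcome strings are assumptions for illustration, not taken from [RFC3473].

```python
# Illustrative sketch of hold-off escalation: local (span) recovery is
# attempted first; only if it has not succeeded is the failure escalated
# by notifying the LSP end-nodes (upstream notification toward the
# ingress "master", optional downstream notification toward the egress
# "slave"). All names are hypothetical.

def handle_failure(local_recovery, notify_upstream, notify_downstream=None):
    """Return how the failure was handled once the hold-off time expired."""
    if local_recovery():            # e.g., successful span protection switching
        return "recovered-locally"
    notify_upstream()               # ingress (master) initiates edge-to-edge recovery
    if notify_downstream is not None:
        notify_downstream()         # optional notification of the egress (slave)
    return "escalated"
```

The design point this sketch captures is that the upstream notification is only sent after the local attempt has been given its full hold-off window, which avoids triggering two concurrent recovery actions for the same failure.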
In case of bi-directional failure, node B should send an upstream notification message (see [RFC3473]) to the ingress node A. Node C may send a downstream notification message (see [RFC3473]) to the egress node D. However, due to the dependence on the LSP directionality, only the ingress node A would initiate an edge-to-edge recovery action. Note that the other LSP end-node (node D in this case) should also be notified of this event using a downstream notification message (see [RFC3473]). For instance, if an LSP directed from D to A is under failure condition, only the notification message sent from node C to D would initiate a recovery action. In this case, per [RFC4427], the deciding and recovering node D is referred to as the "master", while node A is referred to as the "slave" (i.e., the recovering-only entity).

Note: The determination of the master and the slave may be based either on configured information or on a dedicated protocol capability.

In the above scenarios, the path followed by the upstream and downstream notification messages does not have to be the same as the one followed by the failed LSP (see [RFC3473] for more details on the notification message exchange). The important point concerning this mechanism is that either the detecting/reporting entity (i.e., node B or C) is also the deciding/recovery entity, or the detecting/reporting entity is simply an intermediate node in the subsequent recovery process. One refers to local recovery in the former case, and to edge-to-edge recovery in the latter (see also Section 5.3.4).

5.3.3. Partial vs. Full Span Recovery
When a given span carries more than one LSP or LSP segment, an additional aspect must be considered. In case of span failure, the LSPs it carries can be recovered individually, as a group (aka bulk LSP recovery), or as independent sub-groups. When correlation time windows are used and simultaneous recovery of several LSPs can be performed using a single request, the selection of this mechanism would be triggered independently of the failure notification granularity. Moreover, criteria for forming such sub-groups are outside the scope of this document.

Additional complexity arises in the case of (sub-)group LSP recovery. Between a given pair of nodes, the LSPs that a given (sub-)group contains may have been created from different source nodes (i.e., initiators) and directed toward different destination nodes. Consequently, the failure notification messages following a bi-directional span failure that affects several LSPs (or the whole group of LSPs the span carries) are not necessarily directed toward the same initiator nodes. In particular, these messages may be directed
to both the upstream and downstream nodes to the failure. Therefore, such a span failure may trigger recovery actions to be performed from both sides (i.e., from both the upstream and the downstream nodes to the failure). In order to facilitate the definition of the corresponding recovery mechanisms (and their sequence), one assumes here as well that, per [RFC4427], the deciding (and recovering) entity (referred to as the "master") is the only initiator of the recovery of the whole LSP (sub-)group.

5.3.4. Difference between LSP, LSP Segment and Span Recovery
The recovery definitions given in [RFC4427] are quite generic and apply to link (or local span) and LSP recovery. The major difference between LSP, LSP segment, and span recovery is related to the number of intermediate nodes that the signaling messages have to travel. Since nodes are not necessarily adjacent in the case of LSP (or LSP segment) recovery, signaling message exchanges from the reporting to the deciding/recovery entity may have to cross several intermediate nodes. In particular, this applies to the notification messages, due to the number of hops separating the location of a failure occurrence from its destination. This results in an additional propagation and forwarding delay. Note that the former delay may in certain circumstances be non-negligible; e.g., in a copper out-of-band network, the delay is approximately 1 ms per 200 km.

Moreover, the recovery mechanisms applicable to end-to-end LSPs and to the segments that may compose an end-to-end LSP (i.e., edge-to-edge recovery) can be exactly the same. However, one expects, in the latter case, that the destination of the failure notification message will be the ingress/egress of each of these segments. Therefore, using the mechanisms described in Section 5.3.2, failure notification messages can be exchanged first between the terminating points of the LSP segment and, after expiration of the hold-off time, between the terminating points of the end-to-end LSP.

Note: Several studies provide quantitative analysis of the relative performance of LSP/span recovery techniques. [WANG], for instance, provides an analysis grid for these techniques showing that dynamic LSP restoration (see Section 5.5.2) performs well under medium network loads but suffers performance degradations at higher loads due to greater contention for recovery resources.
LSP restoration upon span failure, as defined in [WANG], degrades at higher loads because paths around failed links tend to increase the hop count of the affected LSPs and thus consume additional network resources. The performance of LSP restoration can also be enhanced by having the source node of a failed working LSP initiate a new recovery attempt if the initial attempt fails. A single retry attempt is sufficient to produce large increases in the restoration success rate and in the ability to initiate successful LSP restoration attempts, especially at high loads, while not adding significantly to the long-term average recovery time. Allowing additional attempts produces only small additional gains in performance. This suggests using additional (intermediate) crankback signaling when using dynamic LSP restoration (described in Section 5.5.2, case 2). Details of crankback signaling are outside the scope of this document.

5.4. Difference between Recovery Type and Scheme
[RFC4427] defines the basic LSP/span recovery types. This section describes the recovery schemes that can be built using these recovery types. In brief, a recovery scheme is defined as the combination of several ingress-egress node pairs supporting a given recovery type (from the set of the recovery types they allow). Several examples are provided here to illustrate the difference between recovery types such as 1:1 or M:N, and recovery schemes such as (1:1)^n or (M:N)^n (referred to as shared-mesh recovery).

1. (1:1)^n with recovery resource sharing

The exponent, n, indicates the number of times a 1:1 recovery type is applied between at most n different ingress-egress node pairs. Here, at most n pairs of disjoint working and recovery LSPs/spans share a common resource at most n times. Since the working LSPs/spans are mutually disjoint, simultaneous requests for use of the shared (common) resource will only occur in case of simultaneous failures, which are less likely to happen.

For instance, in the common (1:1)^2 case, if the 2 recovery LSPs in the group overlap the same common resource, then the group can handle only single failures; any multiple working LSP failure will cause at least one working LSP to be denied automatic recovery. Consider, for instance, the following topology, with the working LSPs A-B-C and F-G-H and their respective recovery LSPs A-D-E-C and F-D-E-H, which share a common D-E link resource.

   A---------B---------C
    \                 /
     \               /
      D-------------E
     /               \
    /                 \
   F---------G---------H
2. (M:N)^n with recovery resource sharing

The (M:N)^n scheme is documented here for the sake of completeness only (i.e., it is not mandated that GMPLS capabilities support this scheme). The exponent, n, indicates the number of times an M:N recovery type is applied between at most n different ingress-egress node pairs. The interpretation follows from the previous case, except that here disjointness applies to the N working LSPs/spans and to the M recovery LSPs/spans, while sharing at most n times M common resources.

In both schemes, the result is a "group" of sum{i=1}^{n} N{i} working LSPs and a pool of shared recovery resources, not all of which are available to any given working LSP. In such conditions, defining a metric that describes the amount of overlap among the recovery LSPs would give some indication of the group's ability to handle simultaneous failures of multiple LSPs.

For instance, in the simple (1:1)^n case, if n recovery LSPs in a (1:1)^n group overlap, then the group can handle only single failures; any simultaneous failure of multiple working LSPs will cause at least one working LSP to be denied automatic recovery. But if one considers, for instance, a (2:2)^2 group in which there are two pairs of overlapping recovery LSPs, then two LSPs (belonging to the same pair) can be simultaneously recovered. The latter case can be illustrated by the following topology, with 2 pairs of working LSPs A-B-C and F-G-H and their respective recovery LSPs A-D-E-C and F-D-E-H, which share two common D-E link resources.

   A========B========C
    \\             //
     \\           //
       D=========E
     //           \\
    //             \\
   F========G========H

Moreover, in all these schemes, (working) path disjointness can be enforced by exchanging information related to working LSPs during recovery LSP signaling. Specific issues related to the combination of shared (discrete) bandwidth and disjointness for recovery schemes are described in Section 8.4.2.
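The overlap metric suggested above can be made concrete with a small sketch: count how many recovery LSPs cross each link and compare that count with the shared resource units reserved there. The link names and capacities below are assumptions mirroring the (1:1)^2 example, not a defined GMPLS procedure.

```python
from collections import Counter

# Minimal sketch of the overlap metric discussed above. Each recovery
# path is a sequence of link ids; capacity maps a link id to the number
# of shared resource units reserved on it. All names are illustrative.

def simultaneous_failures_handled(recovery_paths, capacity):
    """Worst-case number of simultaneously failed working LSPs whose
    recovery LSPs can all still be activated, limited by the
    most-contended shared link."""
    demand = Counter(link for path in recovery_paths for link in path)
    # Links where more recovery paths converge than units are reserved:
    limits = [capacity[link] for link, d in demand.items() if d > capacity[link]]
    return min(limits) if limits else len(recovery_paths)

# (1:1)^2 example: recovery LSPs A-D-E-C and F-D-E-H overlap on D-E.
paths = [("A-D", "D-E", "E-C"), ("F-D", "D-E", "E-H")]
one_unit = {"A-D": 1, "E-C": 1, "F-D": 1, "E-H": 1, "D-E": 1}
two_units = dict(one_unit, **{"D-E": 2})
```

With a single D-E unit the group survives only single failures, whereas reserving two D-E units (as in the double-link topology above) lets both recovery LSPs be activated simultaneously, matching the (1:1)^2 vs. (2:2)^2 discussion.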
5.5. LSP Recovery Mechanisms
5.5.1. Classification
The recovery time and ratio of LSPs/spans depend on proper recovery LSP provisioning (meaning pre-provisioning when performed before failure occurrence) and on the level of overbooking of recovery resources (i.e., over-provisioning). A proper balance of these two operations will result in the desired LSP/span recovery time and ratio when single or multiple failures occur. Note also that these operations are mostly performed during the network planning phases.

The different options for LSP (pre-)provisioning and overbooking are classified below to structure the analysis of the different recovery mechanisms.

1. Pre-Provisioning

Proper recovery LSP pre-provisioning will help to alleviate the failure of the working LSPs (due to the failure of the resources that carry these LSPs). As an example, one may compute and establish the recovery LSP either end-to-end or segment-per-segment, to protect a working LSP from multiple failure events affecting link(s), node(s), and/or SRLG(s).

The recovery LSP pre-provisioning options are classified as follows in the figure below:

(1) The recovery path can be either pre-computed or computed on-demand.

(2) When the recovery path is pre-computed, it can be either pre-signaled (implying recovery resource reservation) or signaled on-demand.

(3) When the recovery resources are pre-signaled, they can be either pre-selected or selected on-demand.

Recovery LSP provisioning phases:
   (1) Path Computation --> On-demand
                        |
                        |
                        --> Pre-Computed
                                 |
                                 |
   (2) Signaling                 --> On-demand
                                 |
                                 |
                                 --> Pre-Signaled
                                          |
                                          |
   (3) Resource Selection                 --> On-demand
                                          |
                                          |
                                          --> Pre-Selected

Note that these different options lead to different LSP/span recovery times. The following sections will consider the above-mentioned pre-provisioning options when analyzing the different recovery mechanisms.

2. Overbooking

There are many mechanisms available that allow the overbooking of the recovery resources. This overbooking can be done per LSP (as in the example mentioned above), per link (such as span protection), or even per domain. In all these cases, the level of overbooking, as shown in the figure below, can be classified as dedicated (such as 1+1 and 1:1), shared (such as 1:N and M:N), or unprotected (and thus restorable, if enough recovery resources are available).

Overbooking levels:

                   +----- Dedicated (for instance: 1+1, 1:1, etc.)
                   |
                   +----- Shared (for instance: 1:N, M:N, etc.)
   Level of        |
   Overbooking ----+----- Unprotected (for instance: 0:1, 0:N)

Also, when using shared recovery, one may support preemptible extra-traffic; the recovery mechanism is then expected to allow preemption of this low-priority traffic in case of recovery resource contention during recovery operations. The following sections will consider the
above-mentioned overbooking options when analyzing the different recovery mechanisms.

5.5.2. LSP Restoration
The following times are defined to provide a quantitative estimation of the time performance of the different LSP restoration mechanisms (also referred to as LSP re-routing):

- Path Computation Time: Tc

- Path Selection Time: Ts

- End-to-end LSP Resource Reservation Time: Tr (a delta for resource selection is also considered; the corresponding total time is then referred to as Trs)

- End-to-end LSP Resource Activation Time: Ta (a delta for resource selection is also considered; the corresponding total time is then referred to as Tas)

The Path Selection Time (Ts) is considered when a pool of recovery LSP paths between a given pair of source/destination end-points is pre-computed, and, after a failure occurrence, one of these paths is selected for the recovery of the LSP under failure condition.

Note: Failure management operations such as failure detection, correlation, and notification are considered (for a given failure event) as equally time-consuming for all the mechanisms described below.

1. With Route Pre-computation (or LSP re-provisioning)

An end-to-end restoration LSP is established after the failure(s) occur(s), based on a pre-computed path. As such, one can define this as an "LSP re-provisioning" mechanism. Here, one or more (disjoint) paths for the restoration LSP are computed (and optionally pre-selected) before a failure occurs. No reservation or selection of resources is performed along the restoration path before failure occurrence. As a result, there is no guarantee that a restoration LSP is available when a failure occurs. The expected total restoration time T is thus equal to Ts + Trs, or to Trs when a dedicated computation is performed for each working LSP.

2. Without Route Pre-computation (or Full LSP re-routing)

An end-to-end restoration LSP is dynamically established after the failure(s) occur(s). After failure occurrence, one or more (disjoint) paths for the restoration LSP are dynamically computed and
one is selected. As such, one can define this as a complete "LSP re-routing" mechanism. No reservation or selection of resources is performed along the restoration path before failure occurrence. As a result, there is no guarantee that a restoration LSP is available when a failure occurs. The expected total restoration time T is thus equal to Tc (+ Ts) + Trs. Therefore, the time performance of these two approaches differs by the time required for route computation, Tc (and its potential selection time, Ts).

5.5.3. Pre-Planned LSP Restoration
Pre-planned LSP restoration (also referred to as pre-planned LSP re-routing) implies that the restoration LSP is pre-signaled. This in turn implies the reservation of recovery resources along the restoration path. Two cases can be defined based on whether the recovery resources are pre-selected.

1. With resource reservation and without resource pre-selection

Before failure occurrence, an end-to-end restoration path is pre-selected from a set of pre-computed (disjoint) paths. The restoration LSP is signaled along this pre-selected path to reserve resources at each node, but these resources are not selected.

In this case, the resources reserved for each restoration LSP may be dedicated or shared between multiple restoration LSPs whose working LSPs are not expected to fail simultaneously. Local node policies can be applied to define the degree to which these resources can be shared across independent failures. Also, since a restoration scheme is considered, resource sharing should not be limited to restoration LSPs that start and end at the same ingress and egress nodes. Therefore, each node participating in this scheme is expected to receive some feedback information on the sharing degree of the recovery resource(s) that this scheme involves.

Upon failure detection/notification message reception, signaling is initiated along the restoration path to select the resources and to perform the appropriate operation at each node crossed by the restoration LSP (e.g., cross-connections). If lower-priority LSPs were established using the restoration resources, they must be preempted when the restoration LSP is activated.

Thus, the expected total restoration time T is equal to Tas (post-failure activation), while the operations performed before failure occurrence take Tc + Ts + Tr.
2. With both resource reservation and resource pre-selection

Before failure occurrence, an end-to-end restoration path is pre-selected from a set of pre-computed (disjoint) paths. The restoration LSP is signaled along this pre-selected path to reserve AND select resources at each node, but these resources are not committed at the data plane level. Because the selection of the recovery resources is committed at the control plane level only, no cross-connections are performed along the restoration path.

In this case, the resources reserved and selected for each restoration LSP may be dedicated or even shared between multiple restoration LSPs whose associated working LSPs are not expected to fail simultaneously. Local node policies can be applied to define the degree to which these resources can be shared across independent failures. Also, because a restoration scheme is considered, resource sharing should not be limited to restoration LSPs that start and end at the same ingress and egress nodes. Therefore, each node participating in this scheme is expected to receive some feedback information on the sharing degree of the recovery resource(s) that this scheme involves.

Upon failure detection/notification message reception, signaling is initiated along the restoration path to activate the reserved and selected resources and to perform the appropriate operation at each node crossed by the restoration LSP (e.g., cross-connections). If lower-priority LSPs were established using the restoration resources, they must be preempted when the restoration LSP is activated.

Thus, the expected total restoration time T is equal to Ta (post-failure activation), while the operations performed before failure occurrence take Tc + Ts + Trs. Therefore, the time performance of these two approaches differs only by the time required for resource selection during the activation of the recovery LSP (i.e., Tas - Ta).

5.5.4. LSP Segment Restoration
The above approaches can be applied on an edge-to-edge LSP basis rather than on an end-to-end LSP basis (i.e., to reduce the global recovery time) by allowing the recovery of the individual LSP segments constituting the end-to-end LSP.

Also, by using the horizontal hierarchy approach described in Section 7.1, an end-to-end LSP can be recovered by multiple recovery mechanisms applied on an LSP segment basis (e.g., 1:1 edge-to-edge LSP protection in a metro network, and M:N edge-to-edge protection in the core). These mechanisms are ideally independent and may even use different failure localization and notification mechanisms.
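The restoration-time budgets derived in Sections 5.5.2 and 5.5.3 can be collected in one short sketch. The mechanism labels below are ours, chosen for illustration; Tc, Ts, Trs, Ta, and Tas are the times defined in Section 5.5.2.

```python
# Hedged summary, in code form, of the expected post-failure restoration
# time T for the mechanisms analyzed above (Tc: path computation, Ts:
# path selection, Trs: reservation incl. selection, Ta/Tas: activation
# without/with selection). Mechanism names are illustrative labels.

def restoration_time(mechanism, Tc, Ts, Trs, Ta, Tas):
    """Expected post-failure restoration time T for each mechanism."""
    if mechanism == "re-provisioning":        # 5.5.2, case 1 (pre-computed route)
        return Ts + Trs                       # or just Trs with per-LSP dedicated computation
    if mechanism == "full-re-routing":        # 5.5.2, case 2 (route computed on-demand)
        return Tc + Ts + Trs
    if mechanism == "pre-planned":            # 5.5.3, case 1 (reserved, not selected)
        return Tas
    if mechanism == "pre-planned-selected":   # 5.5.3, case 2 (reserved and selected)
        return Ta
    raise ValueError("unknown mechanism: " + mechanism)
```

Plugging in representative values makes the ordering visible: the pre-planned variants pay Tc + Ts + Tr(s) before any failure occurs and only Ta(s) afterwards, while the re-routing variants pay everything after the failure.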