
RFC 7491

A PCE-Based Architecture for Application-Based Network Operations

Pages: 71
Informational


3.6. Pseudowire Operations and Management

Pseudowires in an MPLS network [RFC3985] operate as a form of layered network over the connectivity provided by the MPLS network. The pseudowires are carried by LSPs operating as transport tunnels, and planning is necessary to determine how those tunnels are placed in the network and which tunnels are used by any pseudowire.

This section considers four use cases: multi-segment pseudowires, path-diverse pseudowires, path-diverse multi-segment pseudowires, and pseudowire segment protection. Section 3.6.5 describes the applicability of the ABNO architecture to these four use cases.

3.6.1. Multi-Segment Pseudowires

[RFC5254] describes the architecture for multi-segment pseudowires. An end-to-end service, as shown in Figure 23, can consist of a series of stitched segments shown in the figure as AC, PW1, PW2, PW3, and AC. Each pseudowire segment is stitched at a "stitching Provider Edge" (S-PE): for example, PW1 is stitched to PW2 at S-PE1. Each access circuit (AC) is stitched to a pseudowire segment at a "terminating PE" (T-PE): for example, PW1 is stitched to the AC at T-PE1.
   Each pseudowire segment is carried across the MPLS network in an LSP
   operating as a transport tunnel: for example, PW1 is carried in LSP1.
   The LSPs between PE nodes may traverse different MPLS networks with
   the PEs as border nodes, or the PEs may lie within the network such
   that each LSP spans only part of the network.

              -----         -----         -----         -----
     ---     |T-PE1|  LSP1 |S-PE1|  LSP2 |S-PE3|  LSP3 |T-PE2|    +---+
    |   | AC |     |=======|     |=======|     |=======|     | AC |   |
    |CE1|----|........PW1........|..PW2........|..PW3........|----|CE2|
    |   |    |     |=======|     |=======|     |=======|     |    |   |
     ---     |     |       |     |       |     |       |     |    +---+
              -----         -----         -----         -----

                    Figure 23: Multi-Segment Pseudowire

   While the topology shown in Figure 23 is easy to navigate, the
   reality of a deployed network can be considerably more complex.  The
   topology in Figure 24 shows a small mesh of PEs.  The links between
   the PEs are not physical links but represent the potential of MPLS
   LSPs between the PEs.

   When establishing the end-to-end service between Customer Edge nodes
   (CEs) CE1 and CE2, some choice must be made about which PEs to use.
   In other words, a path computation must be made to determine the
   pseudowire segment "hops", and then the necessary LSP tunnels must be
   established to carry the pseudowire segments that will be stitched
   together.

   Of course, each LSP may itself require a path computation decision to
   route it through the MPLS network between PEs.

   The choice of path for the multi-segment pseudowire will depend on
   such issues as:

   - MPLS connectivity

   - MPLS bandwidth availability

   - pseudowire stitching capability and capacity at PEs

   - policy and confidentiality considerations for use of PEs
                                   -----
                                  |S-PE5|
                                  /-----\
     ---      -----         -----/       \-----         -----      ---
    |CE1|----|T-PE1|-------|S-PE1|-------|S-PE3|-------|T-PE2|----|CE2|
     ---      -----\        -----\        -----        /-----      ---
                    \         |   -------   |         /
                     \      -----        \-----      /
                      -----|S-PE2|-------|S-PE4|-----
                            -----         -----

           Figure 24: Multi-Segment Pseudowire Network Topology
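
   As a non-normative illustration, the selection of the pseudowire
   segment "hops" can be treated as a constrained path computation over
   the PE mesh of Figure 24.  The Python sketch below uses invented
   per-LSP bandwidth figures and per-PE stitching capacities purely for
   the example; it is not a definitive algorithm.

      import heapq

      # Potential PE-to-PE LSPs from Figure 24 and the bandwidth (Mbps)
      # each could offer.  All values are invented for illustration.
      pe_mesh = {
          ("T-PE1", "S-PE1"): 100, ("T-PE1", "S-PE2"): 60,
          ("S-PE1", "S-PE5"): 80,  ("S-PE1", "S-PE3"): 40,
          ("S-PE1", "S-PE4"): 70,  ("S-PE5", "S-PE3"): 80,
          ("S-PE2", "S-PE4"): 90,  ("S-PE4", "T-PE2"): 100,
          ("S-PE3", "T-PE2"): 40,
      }
      stitch_slots = {"S-PE1": 5, "S-PE2": 0, "S-PE3": 2,
                      "S-PE4": 8, "S-PE5": 3}  # free stitching capacity

      def pe_path(src, dst, bw_needed):
          """Fewest-segment path using only LSPs with enough bandwidth
          and transit PEs with spare stitching capacity."""
          adj = {}
          for (a, b), bw in pe_mesh.items():
              if bw >= bw_needed:
                  adj.setdefault(a, []).append(b)
                  adj.setdefault(b, []).append(a)
          heap, seen = [(0, src, [src])], set()
          while heap:
              hops, node, path = heapq.heappop(heap)
              if node == dst:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt in adj.get(node, []):
                  if nxt != dst and stitch_slots.get(nxt, 0) < 1:
                      continue        # no room to stitch at this PE
                  heapq.heappush(heap, (hops + 1, nxt, path + [nxt]))
          return None

      print(pe_path("T-PE1", "T-PE2", bw_needed=50))
      # ['T-PE1', 'S-PE1', 'S-PE4', 'T-PE2'] -- S-PE2 is skipped
      # because it has no free stitching capacity in this data set.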

3.6.2. Path-Diverse Pseudowires

The connectivity service provided by a pseudowire may need to be resilient to failure. In many cases, this function is provided by provisioning a pair of pseudowires carried by path-diverse LSPs across the network, as shown in Figure 25 (the terminology is inherited directly from [RFC3985]). Clearly, in this case, the challenge is to keep the two LSPs (LSP1 and LSP2) disjoint within the MPLS network. This problem is not different from the normal MPLS path-diversity problem.

                  -------                         -------
                 |  PE1  |          LSP1         |  PE2  |
             AC  |       |=======================|       |  AC
             ----...................PW1...................----
     -----  /    |       |=======================|       |    \  -----
    |     |/     |       |                       |       |     \|     |
    | CE1 +      |       |     MPLS Network      |       |      + CE2 |
    |     |\     |       |                       |       |     /|     |
     -----  \    |       |=======================|       |    /  -----
             ----...................PW2...................----
             AC  |       |=======================|       |  AC
                 |       |          LSP2         |       |
                  -------                         -------

                     Figure 25: Path-Diverse Pseudowires

The path-diverse pseudowire is developed in Figure 26 by the "dual-homing" of each CE through more than one PE. The requirement for LSP path diversity is exactly the same, but it is complicated by the LSPs having distinct end points. In this case, the head-end router (e.g., PE1) cannot be relied upon to maintain the path diversity through the signaling protocol because it is aware of the path of only one of the LSPs. Thus, some form of coordinated path computation approach is needed.
                  -------                         -------
                 |  PE1  |          LSP1         |  PE2  |
             AC  |       |=======================|       |  AC
              ---...................PW1...................---
             /   |       |=======================|       |   \
     -----  /    |       |                       |       |    \  -----
    |     |/      -------                         -------      \|     |
    | CE1 +                     MPLS Network                    + CE2 |
    |     |\      -------                         -------      /|     |
     -----  \    |  PE3  |                       |  PE4  |    /  -----
             \   |       |=======================|       |   /
              ---...................PW2...................---
             AC  |       |=======================|       |  AC
                 |       |          LSP2         |       |
                  -------                         -------

           Figure 26: Path-Diverse Pseudowires with Disjoint PEs

3.6.3. Path-Diverse Multi-Segment Pseudowires

Figure 27 shows how the services in the previous two sections may be combined to offer end-to-end diverse paths in a multi-segment environment. To offer end-to-end resilience to failure, two entirely diverse, end-to-end multi-segment pseudowires may be needed.

                                   -----                -----
                                  |S-PE5|--------------|T-PE4|
                                  /-----\               ----- \
              -----         -----/       \-----         -----  \ ---
             |T-PE1|-------|S-PE1|-------|S-PE3|-------|T-PE2|--|CE2|
       ---  / -----\        -----\        -----        /-----    ---
      |CE1|<        \         |   -------   |         /
       ---  \ -----  \      -----        \-----      /
             |T-PE3|-------|S-PE2|-------|S-PE4|-----
              -----         -----         -----

       Figure 27: Path-Diverse Multi-Segment Pseudowire Network Topology

Just as in any diverse-path computation, the selection of the first path needs to be made with awareness of the fact that a second, fully diverse path is also needed. If a sequential computation was applied to the topology in Figure 27, the first path CE1, T-PE1, S-PE1, S-PE3, T-PE2, CE2 would make it impossible to find a second path that was fully diverse from the first.
   But the problem is complicated by the multi-layer nature of the
   network.  It is not enough that the PEs are chosen to be diverse
   because the LSP tunnels between them might share links within the
   MPLS network.  Thus, a multi-layer planning solution is needed to
   achieve the desired level of service.
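
   For illustration only, the following Python sketch enumerates
   candidate paths over one PE-level reading of the Figure 27 topology.
   It shows both effects described above: a sequential computation that
   first selects CE1, T-PE1, S-PE1, S-PE3, T-PE2, CE2 leaves no fully
   diverse second path, while a joint computation over the same
   topology finds a diverse pair.  LSP-level (link) diversity in the
   underlying MPLS networks would still need to be checked separately.

      from itertools import combinations

      # One PE-level reading of the Figure 27 topology (illustrative).
      links = [("CE1", "T-PE1"), ("CE1", "T-PE3"), ("T-PE1", "S-PE1"),
               ("T-PE1", "S-PE2"), ("T-PE3", "S-PE2"), ("S-PE1", "S-PE2"),
               ("S-PE1", "S-PE3"), ("S-PE1", "S-PE4"), ("S-PE1", "S-PE5"),
               ("S-PE5", "S-PE3"), ("S-PE5", "T-PE4"), ("S-PE2", "S-PE4"),
               ("S-PE3", "S-PE4"), ("S-PE3", "T-PE2"), ("S-PE4", "T-PE2"),
               ("T-PE2", "CE2"), ("T-PE4", "CE2")]

      def simple_paths(src, dst, path=None):
          path = path or [src]
          if src == dst:
              yield path
              return
          for a, b in links:
              for u, v in ((a, b), (b, a)):
                  if u == src and v not in path:
                      yield from simple_paths(v, dst, path + [v])

      paths = list(simple_paths("CE1", "CE2"))

      # Sequential computation: fix the "obvious" first path, then look
      # for a second path that shares none of its transit nodes.
      first = ["CE1", "T-PE1", "S-PE1", "S-PE3", "T-PE2", "CE2"]
      second = [q for q in paths if not set(first[1:-1]) & set(q[1:-1])]
      print(len(second))      # 0 -- no fully diverse second path is left

      # Joint computation: search for a diverse pair directly.
      pairs = [(p, q) for p, q in combinations(paths, 2)
               if not set(p[1:-1]) & set(q[1:-1])]
      print(pairs[0])         # e.g. CE1-T-PE1-S-PE1-S-PE5-T-PE4-CE2 with
                              # CE1-T-PE3-S-PE2-S-PE4-T-PE2-CE2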

3.6.4. Pseudowire Segment Protection

An alternative to the end-to-end pseudowire protection service enabled by the mechanism described in Section 3.6.3 is to protect individual pseudowire segments or PEs. For example, in Figure 27, the pseudowire segment between S-PE1 and S-PE3 may be protected by a pair of stitched segments running between S-PE1 and S-PE5, and between S-PE5 and S-PE3. This is shown in detail in Figure 28.

    -------                 -------                 -------
   | S-PE1 |     LSP1      | S-PE5 |     LSP3      | S-PE3 |
   |       |===============|       |===============|       |
   |   ..........PW1.....................PW3............   |      Outgoing   Incoming
   |   :   |===============|       |===============|   :   |      Segment    Segment
   |   :   |                -------                |   :   |
   |   :   |                                       |   :   |
   |   :   |=======================================|   :   |
   |   ..............PW2................................   |
   |       |=======================================|       |
   |       |                 LSP2                  |       |
    -------                                         -------

     Figure 28: Fragment of a Segment-Protected Multi-Segment Pseudowire

The determination of pseudowire protection segments requires coordination and planning, and just as in Section 3.6.3, this planning must be cognizant of the paths taken by LSPs through the underlying MPLS networks.
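
   For illustration only, the sketch below shows the kind of
   multi-layer check that such planning implies: a candidate protection
   S-PE is accepted only if the LSPs carrying its two protection
   segments avoid the MPLS links used by the LSP of the protected
   segment.  The link identifiers and the mapping of segment LSPs onto
   MPLS links are invented for the example.

      # Invented mapping of each PE-to-PE segment LSP onto MPLS links.
      lsp_links = {
          ("S-PE1", "S-PE3"): {"P1-P2", "P2-P3"},   # protected segment
          ("S-PE1", "S-PE5"): {"P1-P4", "P4-P5"},
          ("S-PE5", "S-PE3"): {"P5-P6", "P6-P3"},
          ("S-PE1", "S-PE4"): {"P1-P2", "P2-P7"},   # shares P1-P2
          ("S-PE4", "S-PE3"): {"P7-P3"},
      }

      def protection_spe(a, b, candidates):
          protected = lsp_links[(a, b)]
          for spe in candidates:
              seg1 = lsp_links.get((a, spe))
              seg2 = lsp_links.get((spe, b))
              if seg1 is None or seg2 is None:
                  continue
              # Both protection segments must avoid the protected links.
              if not (seg1 | seg2) & protected:
                  return spe
          return None

      print(protection_spe("S-PE1", "S-PE3", ["S-PE4", "S-PE5"]))
      # 'S-PE5' -- the S-PE4 alternative is rejected because its LSP
      # shares the MPLS link P1-P2 with the protected segment.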

3.6.5. Applicability of ABNO to Pseudowires

The ABNO architecture lends itself well to the planning and control of pseudowires in the use cases described above. The user or application needs a single point at which it requests services: the ABNO Controller. The ABNO Controller can ask a PCE to draw on the topology of pseudowire stitching-capable PEs as well as additional information regarding PE capabilities, such as load on PEs and administrative policies, and the PCE can use a series of TEDs or other PCEs for the underlying MPLS networks to determine the paths of the LSP tunnels. At the time of this writing, PCEP does not support
   path computation requests and responses concerning pseudowires, but
   the concepts are very similar to existing uses and the necessary
   extensions would be very small.

   Once the paths have been computed, a number of different provisioning
   systems can be used to instantiate the LSPs and provision the
   pseudowires under the control of the Provisioning Manager.  The ABNO
   Controller will use the I2RS Client to instruct the network devices
   about what traffic should be placed on which pseudowires and, in
   conjunction with the OAM Handler, can ensure that failure events are
   handled correctly, that service quality levels are appropriate, and
   that service protection levels are maintained.
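
   The following Python fragment is a purely illustrative sketch of
   that sequence of component interactions.  The component objects and
   method names are invented stand-ins; a real deployment would use
   PCEP, provisioning interfaces, the I2RS protocols, and OAM tools.

      from types import SimpleNamespace

      log = []   # records the order of (invented) component calls
      pce = SimpleNamespace(
          compute_pe_path=lambda s, d, bw: [s, "S-PE1", "S-PE4", d],
          compute_lsp=lambda a, b, bw: "LSP(%s->%s,%dM)" % (a, b, bw))
      provisioning = SimpleNamespace(
          setup_lsp=lambda lsp: log.append(("setup_lsp", lsp)),
          setup_pw=lambda hops, lsps:
              log.append(("setup_pw", hops)) or "PW1")
      i2rs = SimpleNamespace(
          map_traffic=lambda flows, pw: log.append(("map", pw)))
      oam = SimpleNamespace(monitor=lambda pw: log.append(("monitor", pw)))

      def provision_ms_pw(src, dst, bw, flows):
          pe_hops = pce.compute_pe_path(src, dst, bw)   # PE-level path
          lsps = [pce.compute_lsp(a, b, bw)             # per-hop LSPs
                  for a, b in zip(pe_hops, pe_hops[1:])]
          for lsp in lsps:                              # transport first
              provisioning.setup_lsp(lsp)
          pw = provisioning.setup_pw(pe_hops, lsps)     # stitch segments
          i2rs.map_traffic(flows, pw)                   # steer traffic
          oam.monitor(pw)                               # arm monitoring
          return pw

      provision_ms_pw("T-PE1", "T-PE2", 50, flows=["customer-A"])
      print([entry[0] for entry in log])
      # ['setup_lsp', 'setup_lsp', 'setup_lsp', 'setup_pw', 'map',
      #  'monitor']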

   In many respects, the pseudowire network forms an overlay network
   (with its own TED and provisioning mechanisms) carried by underlying
   packet networks.  Further client networks (the pseudowire payloads)
   may be carried by the pseudowire network.  Thus, the problem space
   being addressed by ABNO in this case is a classic multi-layer
   network.

3.7. Cross-Stratum Optimization (CSO)

Considering the term "stratum" to broadly differentiate the layers of most concern to the application and to the network in general, the need for Cross-Stratum Optimization (CSO) arises when the application stratum and network stratum need to be coordinated to achieve operational efficiency as well as resource optimization in both application and network strata.

Data center-based applications can provide a wide variety of services such as video gaming, cloud computing, and grid applications. High-bandwidth video applications are also emerging, such as remote medical surgery, live concerts, and sporting events.

This use case for the ABNO architecture is mainly concerned with data center applications that make substantial bandwidth demands either in aggregate or individually. In addition, these applications may need specific bounds on QoS-related parameters such as latency and jitter.

3.7.1. Data Center Network Operation

Data centers come in a wide variety of sizes and configurations, but all contain compute servers, storage, and application control. Data centers offer application services to end-users, such as video gaming, cloud computing, and others. Since the data centers used to provide application services may be distributed around a network, the decisions about the control and management of application services, such as where to instantiate another service instance or to which
   data center a new client is assigned, can have a significant impact
   on the state of the network.  Conversely, the capabilities and state
   of the network can have a major impact on application performance.

   These decisions are typically made by applications with very little
   or no information concerning the underlying network.  Hence, such
   decisions may be suboptimal from the application's point of view or
   considering network resource utilization and quality of service.

   Cross-Stratum Optimization is the process of optimizing both the
   application experience and the network utilization by coordinating
   decisions in the application stratum and the network stratum.
   Application resources can be roughly categorized into computing
   resources (i.e., servers of various types and granularities, such as
   Virtual Machines (VMs), memory, and storage) and content (e.g.,
   video, audio, databases, and large data sets).  By "network stratum"
   we mean the IP layer and below (e.g., MPLS, Synchronous Digital
   Hierarchy (SDH), OTN, WDM).  The network stratum has resources that
   include routers, switches, and links.  We are particularly interested
   in further unleashing the potential presented by MPLS and GMPLS
   control planes at the lower network layers in response to the high
   aggregate or individual demands from the application layer.

   This use case demonstrates that the ABNO architecture can allow
   cross-stratum application/network optimization for the data center
   use case.  Other forms of Cross-Stratum Optimization (for example,
   for peer-to-peer applications) are out of scope.

3.7.1.1. Virtual Machine Migration

A key enabler for data center cost savings, consolidation, flexibility, and application scalability has been the technology of compute virtualization provided through Virtual Machines (VMs). To the software application, a VM looks like a dedicated processor with dedicated memory and a dedicated operating system. VMs not only offer a unit of compute power but also provide an "application environment" that can be replicated, backed up, and moved. Different VM configurations may be offered that are optimized for different types of processing (e.g., memory intensive, throughput intensive).
   VMs may be moved between compute resources in a data center and could
   be moved between data centers.  VM migration serves to balance load
   across data center resources and has several modes:

     (i) scheduled vs. dynamic;

    (ii) bulk vs. sequential;

   (iii) point-to-point vs. point-to-multipoint.

   While VM migration may solve problems of load or planned maintenance
   within a data center, it can also be effective to reduce network load
   around the data center.  But the act of migrating VMs, especially
   between data centers, can impact the network and other services that
   are offered.

   For certain applications such as disaster recovery, bulk migration is
   required on the fly, which may necessitate concurrent computation and
   path setup dynamically.

   Thus, application stratum operations must also take into account the
   situation in the network stratum, even as the application stratum
   actions may be driven by the status of the network stratum.
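
   As a back-of-the-envelope illustration (with invented numbers), the
   choice between bulk and sequential migration can be reduced to
   checking whether the data to be moved fits within the bandwidth the
   network stratum can commit during the intended window:

      def migration_plan(vm_sizes_gb, committable_mbps, window_min):
          """Return ('bulk', minutes) if all VMs fit in the window at
          the committed bandwidth, else ('sequential', minutes)."""
          total_gb = sum(vm_sizes_gb)
          needed_min = total_gb * 8 * 1024 / committable_mbps / 60
          mode = "bulk" if needed_min <= window_min else "sequential"
          return mode, round(needed_min, 1)

      print(migration_plan([200, 350, 120], committable_mbps=2000,
                           window_min=60))
      # ('bulk', 45.7) -- 670 GB at a committed 2 Gb/s fits the window;
      # at 1 Gb/s the same set would have to be migrated sequentially.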

3.7.1.2. Load Balancing

Application servers may be instantiated in many data centers located in different parts of the network. When an end-user makes an application request, a decision has to be made about which data center should host the processing and storage required to meet the request.

One of the major drivers for operating multiple data centers (rather than one very large data center) is so that the application will run on a machine that is closer to the end-users and thus improve the user experience by reducing network latency. However, if the network is congested or the data center is overloaded, this strategy can backfire.

Thus, the key factors to be considered in choosing the server on which to instantiate a VM for an application include:

   - The utilization of the servers in the data center

   - The network load conditions within a data center

   - The network load conditions between data centers

   - The network conditions between the end-user and data center
   Again, the choices made in the application stratum need to consider
   the situation in the network stratum.
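
   A toy example of how those four factors might be combined is
   sketched below; the weights and the measurements for the three data
   centers of Figure 29 are invented for illustration.

      candidates = {          # invented measurements, lower is better
          "DC1": {"server_util": 0.85, "intra_dc": 0.4,
                  "inter_dc": 0.3, "user_latency_ms": 12},
          "DC2": {"server_util": 0.55, "intra_dc": 0.5,
                  "inter_dc": 0.6, "user_latency_ms": 30},
          "DC3": {"server_util": 0.60, "intra_dc": 0.2,
                  "inter_dc": 0.2, "user_latency_ms": 18},
      }

      def score(m):
          # Weighted sum of the four factors; latency is normalized
          # against a 100 ms ceiling.
          return (0.4 * m["server_util"] + 0.2 * m["intra_dc"] +
                  0.2 * m["inter_dc"] + 0.2 * m["user_latency_ms"] / 100)

      best = min(candidates, key=lambda dc: score(candidates[dc]))
      print(best)   # 'DC3': moderate server load, lightly loaded
                    # network, and acceptable latency to the end-user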

3.7.2. Application of the ABNO Architecture

This section shows how the ABNO architecture is applicable to the cross-stratum data center issues described in Section 3.7.1.

Figure 29 shows a diagram of an example data center-based application. A carrier network provides access for an end-user through PE4. Three data centers (DC1, DC2, and DC3) are accessed through different parts of the network via PE1, PE2, and PE3.

The Application Service Coordinator receives information from the end-user about the desired services and converts this information to service requests that it passes to the ABNO Controller. The end-users may already know which data center they wish to use, or the Application Service Coordinator may be able to make this determination; otherwise, the task of selecting the data center must be performed by the ABNO Controller, and this may utilize a further database (see Section 2.3.1.8) containing information about server loads and other data center parameters.

The ABNO Controller examines the network resources using information gathered from the other ABNO components and uses those components to configure the network to support the end-user's needs.
   +----------+    +---------------------------------+
   | End-User |--->| Application Service Coordinator |
   +----------+    +---------------------------------+
         |                          |
         |                          v
         |                 +-----------------+
         |                 | ABNO Controller |
         |                 +-----------------+
         |                          |
         |                          v
         |               +---------------------+       +--------------+
         |               |Other ABNO Components|       | o o o   DC 1 |
         |               +---------------------+       |  \|/         |
         |                          |            ------|---O          |
         |                          v           |      |              |
         |            --------------------------|--    +--------------+
         |           / Carrier Network      PE1 |  \
         |          /      .....................O   \   +--------------+
         |         |      .                          |  | o o o   DC 2 |
         |         | PE4 .                      PE2  |  |  \|/         |
          ---------|----O........................O---|--|---O          |
                   |     .                           |  |              |
                   |      .                    PE3   |  +--------------+
                    \      .....................O   /
                     \                          |  /   +--------------+
                      --------------------------|--    | o o o   DC 3 |
                                                |      |  \|/         |
                                                 ------|---O          |
                                                       |              |
                                                       +--------------+

            Figure 29: The ABNO Architecture in the Context of
                Cross-Stratum Optimization for Data Centers

3.7.2.1. Deployed Applications, Services, and Products

The ABNO Controller will need to utilize a number of components to realize the CSO functions described in Section 3.7.1.

The ALTO Server provides information about topological proximity and appropriate geographical location to servers with respect to the underlying networks. This information can be used to optimize the selection of peer location, which will help reduce the path of IP traffic or can contain it within specific service providers' networks. ALTO in conjunction with the ABNO Controller and the Application Service Coordinator can address general problems such as the selection of application servers based on resource availability and usage of the underlying networks.
   The ABNO Controller can also formulate a view of current network load
   from the TED and from the OAM Handler (for example, by running
   diagnostic tools that measure latency, jitter, and packet loss).
   This view obviously influences not just how paths from the end-user
   to the data center are provisioned but can also guide the selection
   of which data center should provide the service and possibly even the
   points of attachment to be used by the end-user and to reach the
   chosen data center.  A view of how the PCE can fit in with CSO is
   provided in [CSO-PCE], on which the content of Figure 29 is based.
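
   By way of illustration only, such a view might be reduced to a
   per-path record combining TED residual bandwidth with OAM-measured
   latency and loss for the candidate attachment points of Figure 29
   (all values invented):

      paths = {   # candidate paths from the end-user's PE to each DC PE
          ("PE4", "PE1"): {"resid_bw": 400, "latency_ms": 9, "loss": 0.0},
          ("PE4", "PE2"): {"resid_bw": 900, "latency_ms": 25, "loss": 0.1},
          ("PE4", "PE3"): {"resid_bw": 150, "latency_ms": 6, "loss": 0.0},
      }

      def usable(p, need_bw, max_latency, max_loss):
          return (p["resid_bw"] >= need_bw and
                  p["latency_ms"] <= max_latency and
                  p["loss"] <= max_loss)

      ok = [dst for (src, dst), p in paths.items()
            if usable(p, need_bw=300, max_latency=20, max_loss=0.05)]
      print(ok)   # ['PE1'] -- PE2 fails the latency bound and PE3 lacks
                  # bandwidth, so DC1 (behind PE1) would be preferred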

   As already discussed, the combination of the ABNO Controller and the
   Application Service Coordinator will need to be able to select (and
   possibly migrate) the location of the VM that provides the service
   for the end-user.  Since a common technique used to direct the
   end-user to the correct VM/server is to employ DNS redirection, an
   important capability of the ABNO Controller will be the ability to
   program the DNS servers accordingly.

   Furthermore, as already noted in other sections of this document, the
   ABNO Controller can coordinate the placement of traffic within the
   network to achieve load balancing and to provide resilience to
   failures.  These features can be used in conjunction with the
   functions discussed above, to ensure that the placement of new VMs,
   the traffic that they generate, and the load caused by VM migration
   can be carried by the network and do not disrupt existing services.

3.8. ALTO Server

The ABNO architecture allows use cases with joint network and application-layer optimization. In such a use case, an application is presented with an abstract network topology containing only information relevant to the application. The application computes its application-layer routing according to its application objective. The application may interact with the ABNO Controller to set up explicit LSPs to support its application-layer routing. The following steps are performed to illustrate such a use case.

   1. Application Request of Application-Layer Topology

      Consider the network shown in Figure 30.  The network consists of
      five nodes and six links.

      The application, which has end points hosted at N0, N1, and N2,
      requests network topology so that it can compute its application-
      layer routing, for example, to maximize the throughput of content
      replication among end points at the three sites.
                 +----+       L0 Wt=10 BW=50       +----+
                 | N0 |............................| N3 |
                 +----+                            +----+
                   |   \    L4                        |
                   |    \   Wt=7                      |
                   |     \  BW=40                     |
                   |      \                           |
             L1    |       +----+                     |
             Wt=10 |       | N4 |               L2    |
             BW=45 |       +----+               Wt=12 |
                   |      /                     BW=30 |
                   |     /  L5                        |
                   |    /   Wt=10                     |
                   |   /    BW=45                     |
                 +----+                            +----+
                 | N1 |............................| N2 |
                 +----+       L3 Wt=15 BW=35       +----+

                      Figure 30: Raw Network Topology

      The request arrives at the ABNO Controller, which forwards the
      request to the ALTO Server component.  The ALTO Server consults
      the Policy Agent, the TED, and the PCE to return an abstract,
      application-layer topology.

      For example, the policy may specify that the bandwidth exposed to
      an application may not exceed 40 Mbps.  The network has
      precomputed that the route from N0 to N2 should use the path
      N0->N3->N2, according to goals such as GCO (see Section 3.4).  The
      ALTO Server can then produce a reduced topology for the
      application, such as the topology shown in Figure 31.
                      +----+
                      | N0 |............
                      +----+            \
                        |   \            \
                        |    \            \
                        |     \            \
                        |      |            \   AL0M2
                  L1    |      | AL4M5       \  Wt=22
                  Wt=10 |      | Wt=17        \ BW=30
                  BW=40 |      | BW=40         \
                        |      |                \
                        |     /                  \
                        |    /                    \
                        |   /                      \
                      +----+                        +----+
                      | N1 |........................| N2 |
                      +----+   L3 Wt=15 BW=35       +----+

           Figure 31: Reduced Graph for a Particular Application

      The ALTO Server uses the topology and existing routing to compute
      an abstract network map consisting of three PIDs.  The pair-wise
      bandwidth as well as shared bottlenecks will be computed from the
      internal network topology and reflected in cost maps.
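
      The data structures below are a non-normative illustration (not
      actual ALTO protocol syntax) of the information content of such
      maps for the reduced topology of Figure 31:

         # Abstract view for Figure 31 (illustration, not ALTO syntax).
         network_map = {"pid0": ["N0"], "pid1": ["N1"], "pid2": ["N2"]}
         abstract_links = [
             {"ends": ("pid0", "pid1"), "name": "L1",    "wt": 10, "bw": 40},
             {"ends": ("pid0", "pid1"), "name": "AL4M5", "wt": 17, "bw": 40},
             {"ends": ("pid0", "pid2"), "name": "AL0M2", "wt": 22, "bw": 30},
             {"ends": ("pid1", "pid2"), "name": "L3",    "wt": 15, "bw": 35},
         ]

         # Pair-wise bandwidth as it might appear in a cost map; shared
         # internal bottlenecks computed by the network could reduce it.
         pairwise_bw = {}
         for link in abstract_links:
             key = tuple(sorted(link["ends"]))
             pairwise_bw[key] = pairwise_bw.get(key, 0) + link["bw"]
         print(pairwise_bw)
         # {('pid0', 'pid1'): 80, ('pid0', 'pid2'): 30,
         #  ('pid1', 'pid2'): 35}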

   2. Application Computes Application Overlay

      Using the abstract topology, the application computes an
      application-layer routing.  For concreteness, the application may
      compute a spanning tree to maximize the total bandwidth from N0 to
      N2.  Figure 32 shows an example of application-layer routing,
      using a route of N0->N1->N2 for 35 Mbps and N0->N2 for 30 Mbps,
      for a total of 65 Mbps.
               +----+
               | N0 |----------------------------------+
               +----+        AL0M2 BW=30               |
                 |                                     |
                 |                                     |
                 |                                     |
                 |                                     |
                 | L1                                  |
                 |                                     |
                 | BW=35                               |
                 |                                     |
                 |                                     |
                 |                                     |
                 V                                     V
               +----+        L3 BW=35                +----+
               | N1 |...............................>| N2 |
               +----+                                +----+

                Figure 32: Application-Layer Spanning Tree
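
      The arithmetic behind Figure 32 can be reproduced directly from
      the abstract topology of Figure 31; the small sketch below is
      illustrative only:

         # Two application-layer routes from N0 to N2 over Figure 31.
         via_n1 = min(40, 35)     # bottleneck of L1 (40) and L3 (35)
         direct = 30              # abstract link AL0M2
         print(via_n1, direct, via_n1 + direct)
         # 35 30 65 -- the 35 + 30 Mbps split of Figure 32, 65 Mbps total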

   3. Application Path Set Up by the ABNO Controller

      The application may submit its application routes to the ABNO
      Controller to set up explicit LSPs to support its operation.  The
      ABNO Controller consults the ALTO maps to map the application-
      layer routing back to internal network topology and then instructs
      the Provisioning Manager to set up the paths.  The ABNO Controller
      may re-trigger GCO to reoptimize network traffic engineering.
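
      As a sketch of that mapping step (the expansion table and the
      shape of the setup requests are invented for illustration), each
      abstract hop is expanded back to the internal path recorded when
      the ALTO view was built, and one LSP setup request is generated
      per hop:

         expansion = {                  # abstract hop -> internal path
             ("N0", "N1"): ["N0", "N1"],          # native link L1
             ("N0", "N2"): ["N0", "N3", "N2"],    # AL0M2 hides N3
             ("N1", "N2"): ["N1", "N2"],          # native link L3
         }

         def lsp_requests(app_routes):
             reqs = []
             for route, bw in app_routes:
                 for hop in zip(route, route[1:]):
                     reqs.append({"ero": expansion[hop], "bandwidth": bw})
             return reqs

         print(lsp_requests([(["N0", "N1", "N2"], 35),
                             (["N0", "N2"], 30)]))
         # Three LSPs: N0-N1 at 35 Mbps, N1-N2 at 35 Mbps, and N0-N3-N2
         # at 30 Mbps, which the Provisioning Manager would instantiate.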

3.9. Other Potential Use Cases

This section serves as a placeholder for other potential use cases that may be documented in future documents.

3.9.1. Traffic Grooming and Regrooming

This use case could cover the following scenarios:

   - Nested LSPs

   - Packet Classification (IP flows into LSPs at edge routers)

   - Bucket Stuffing - IP Flows into ECMP Hash Bucket

3.9.2. Bandwidth Scheduling

Bandwidth scheduling consists of configuring LSPs based on a given time schedule. This can be used to support maintenance or operational schedules or to adjust network capacity based on traffic pattern detection. The ABNO framework provides the components to enable bandwidth scheduling solutions.
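
   A minimal sketch of the idea, with an invented calendar, is shown
   below; the ABNO Controller would ask the Provisioning Manager to
   resize or re-signal the LSP when the active entry changes:

      schedule = [                     # invented bandwidth calendar
          {"from": "00:00", "to": "06:00", "mbps": 100},  # backups
          {"from": "06:00", "to": "20:00", "mbps": 400},  # business hours
          {"from": "20:00", "to": "24:00", "mbps": 200},
      ]

      def bandwidth_at(hhmm):
          for slot in schedule:
              if slot["from"] <= hhmm < slot["to"]:
                  return slot["mbps"]
          return 0

      print(bandwidth_at("07:30"))   # 400 -- the LSP bandwidth to apply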

