When the DC and the WAN are operated by the same administrative entity, the Service Provider can decide to integrate the GW and WAN Edge PE functions in the same router to reduce Capital Expenditure (CAPEX) and Operating Expenses (OPEX). This is illustrated in Figure 2. Note that this model no longer provides an explicit demarcation link between the DC and the WAN. Although not shown in Figure 2, note that the GWs may have local ACs.
+--+
|CE|
+--+
|
+----+
+----| PE |----+
+---------+ | +----+ | +---------+
+----+ | +---+ +---+ | +----+
|NVE1|--| | | | | |--|NVE3|
+----+ | |GW1| |GW3| | +----+
| +---+ +---+ |
| NVO-1 | WAN | NVO-2 |
| +---+ +---+ |
| | | | | |
+----+ | |GW2| |GW4| | +----+
|NVE2|--| +---+ +---+ |--|NVE4|
+----+ +---------+ | | +---------+ +----+
+--------------+
|<--EVPN-Overlay--->|<-----VPLS--->|<---EVPN-Overlay-->|
|<--PBB-VPLS-->|
Interconnect -> |<-EVPN-MPLS-->|
options |<--EVPN-Ovl-->|*
|<--PBB-EVPN-->|
* EVPN-Ovl stands for EVPN-Overlay (and it's an interconnect option).
The integrated interconnect solution meets the following requirements:
- Control plane and data plane interworking between the EVPN-Overlay network and the L2VPN technology supported in the WAN, irrespective of the technology choice -- i.e., (PBB-)VPLS or (PBB-)EVPN, as depicted in Figure 2.
- Multihoming, including single-active multihoming with per-service load-balancing or all-active multihoming -- i.e., per-flow load-balancing -- as long as the technology deployed in the WAN supports it.
- Support for end-to-end MAC Mobility, Static MAC protection, and other procedures (e.g., proxy-arp) described in [RFC 7432], as long as EVPN-MPLS is the technology of choice in the WAN.
- Independent inclusive multicast trees in the WAN and in the DC. That is, the inclusive multicast tree type defined in the WAN does not need to be the same as in the DC.
Regular MPLS tunnels and Targeted LDP (tLDP) / BGP sessions will be set up to the WAN PEs and RRs as per [RFC 4761], [RFC 4762], and [RFC 6074], and overlay tunnels and EVPN will be set up as per [RFC 8365]. Note that different route targets for the DC and the WAN are normally required (unless [RFC 4762] is used in the WAN, in which case no WAN route target is needed). A single type-1 RD per service may be used.
In order to support multihoming, the GWs will be provisioned with an I-ESI (see Section 3.4), which will be unique for each interconnection. In this case, the I-ES will represent the group of PWs to the WAN PEs and GWs. All the [RFC 7432] procedures are still followed for the I-ES -- e.g., any MAC address learned from the WAN will be advertised to the DC with the I-ESI in the ESI field.
A MAC-VRF per EVI will be created in each GW. The MAC-VRF will have two different types of tunnel bindings instantiated in two different split-horizon groups:
- VPLS PWs will be instantiated in the WAN split-horizon group.
- Overlay tunnel bindings (e.g., VXLAN, NVGRE) will be instantiated in the DC split-horizon group.

Attachment circuits are also supported on the same MAC-VRF (although not shown in Figure 2), but they will not be part of any of the above split-horizon groups.
Traffic received in a given split-horizon group will never be forwarded to a member of the same split-horizon group.
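The split-horizon rule above can be sketched as a simple group-level filter. This is an illustrative model only, not an implementation from this document: the binding names and the `forward_targets` helper are assumptions, and a real forwarder would additionally exclude the exact ingress interface itself.

```python
# Illustrative split-horizon filter on a GW MAC-VRF (names are assumed).
WAN = "wan"        # VPLS PWs live in the WAN split-horizon group
DC = "dc"          # overlay tunnel bindings live in the DC group
LOCAL_AC = None    # local attachment circuits belong to no group

def forward_targets(ingress_group, bindings):
    """Return the bindings a BUM frame may be flooded to.

    A frame received from a member of a split-horizon group is never
    forwarded to another member of the same group; local ACs (group
    None) and members of the other group remain eligible.
    """
    return [b for b in bindings
            if b["group"] is None or b["group"] != ingress_group]

bindings = [
    {"name": "pw-to-PE1", "group": WAN},
    {"name": "pw-to-GW3", "group": WAN},
    {"name": "vxlan-to-NVE1", "group": DC},
    {"name": "local-AC", "group": LOCAL_AC},
]

# A BUM frame arriving on a WAN PW is flooded to the DC group and the
# local AC, but to no other WAN PW.
wan_ingress_targets = forward_targets(WAN, bindings)
```

A frame from a local AC (ingress group `None`) passes the filter for every binding, matching the rule that AC-originated BUM traffic is flooded to the whole flooding list.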
As far as BUM flooding is concerned, a flooding list will be composed of the sublist created by the inclusive multicast routes and the sublist created for VPLS in the WAN. BUM frames received from a local Attachment Circuit (AC) will be forwarded to the flooding list. BUM frames received from the DC or the WAN will be forwarded to the flooding list, observing the split-horizon group rule described above.
Note that the GWs are not allowed to have an EVPN binding and a PW to the same far end within the same MAC-VRF, so that loops and packet duplication are avoided. In case a GW can successfully establish both an EVPN binding and a PW to the same far-end PE, the EVPN binding will prevail, and the PW will be brought down operationally.
The optimization procedures described in Section 3.5 can also be applied to this model.
This model supports single-active multihoming on the GWs. All-active multihoming is not supported by VPLS; therefore, it cannot be used on the GWs.
In this case, for a given EVI, all the PWs in the WAN split-horizon group are assigned to the I-ES. All the single-active multihoming procedures as described by [RFC 8365] will be followed for the I-ES.
The non-DF GW for the I-ES will block the transmission and reception of all the PWs in the WAN split-horizon group for BUM and unicast traffic.
In this case, there is no impact on the procedures described in [RFC 7041] for the B-component. However, the I-component instances become EVI instances with EVPN-Overlay bindings and potentially local attachment circuits. A number of MAC-VRF instances can be multiplexed into the same B-component instance. This option provides significant savings in terms of PWs to be maintained in the WAN.
The I-ESI concept described in Section 4.2.1 will also be used for the PBB-VPLS-based interconnect.
B-component PWs and I-component EVPN-Overlay bindings established to the same far end will be compared. The following rules will be observed:
- Attempts to set up a PW between the two GWs within the B-component context will never be blocked.
- If a PW exists between two GWs for the B-component and an attempt is made to set up an EVPN binding on an I-component linked to that B-component, the EVPN binding will be kept down operationally. Note that the BGP EVPN routes will still be valid but not used.
- The EVPN binding will only be up and used as long as there is no PW to the same far end in the corresponding B-component. The EVPN bindings in the I-components will be brought down before the PW in the B-component is brought up.
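The precedence rule above (a B-component PW to a far end always wins over an I-component EVPN binding to that same far end) reduces to a one-line state function. This is an illustrative sketch; the function name is an assumption, not an identifier from this document:

```python
# Illustrative PBB-VPLS interconnect rule: an I-component EVPN-Overlay
# binding to a given far end is operationally up only while no
# B-component PW exists to that same far end. PW setup itself is
# never blocked by the EVPN binding.
def evpn_binding_oper_state(pw_up_to_same_far_end: bool) -> str:
    return "down" if pw_up_to_same_far_end else "up"
```

Note this is the opposite precedence from the non-PBB VPLS interconnect (Section 4.2), where the EVPN binding prevails and the PW is brought down.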
The optimization procedures described in Section 3.5 can also be applied to this interconnect option.
This model supports single-active multihoming on the GWs. All-active multihoming is not supported by this scenario.
The single-active multihoming procedures as described by [RFC 8365] will be followed for the I-ES for each EVI instance connected to the B-component. Note that in this case, for a given EVI, all the EVPN bindings in the I-component are assigned to the I-ES. The non-DF GW for the I-ES will block the transmission and reception of all the I-component EVPN bindings for BUM and unicast traffic. When learning MACs from the WAN, the non-DF MUST NOT advertise EVPN MAC/IP routes for those MACs.
If EVPN for MPLS tunnels (referred to as "EVPN-MPLS" hereafter) are supported in the WAN, an end-to-end EVPN solution can be deployed. The following sections describe the proposed solution as well as its impact on the procedures from [RFC 7432].
The GWs MUST establish separate BGP sessions for sending/receiving EVPN routes to/from the DC and to/from the WAN. Normally, each GW will set up one BGP EVPN session to the DC RR (or two BGP EVPN sessions if there are redundant DC RRs) and one session to the WAN RR (or two sessions if there are redundant WAN RRs).
In order to facilitate separate BGP processes for DC and WAN, EVPN routes sent to the WAN SHOULD carry a different Route Distinguisher (RD) than the EVPN routes sent to the DC. In addition, although reusing the same value is possible, different route targets are expected to be handled for the same EVI in the WAN and the DC. Note that the EVPN service routes sent to the DC RRs will normally include a [RFC 9012] BGP encapsulation extended community with a different tunnel type than the one sent to the WAN RRs.
As in the other discussed options, an I-ES and its assigned I-ESI will be configured on the GWs for multihoming. This I-ES represents the WAN EVPN-MPLS PEs to the DC but also the DC EVPN-Overlay NVEs to the WAN. Optionally, different I-ESI values are configured for representing the WAN and the DC. If different EVPN-Overlay networks are connected to the same group of GWs, each EVPN-Overlay network MUST get assigned a different I-ESI.
Received EVPN routes will never be reflected on the GWs but instead will be consumed and re-advertised (if needed):
- Ethernet A-D routes, ES routes, and Inclusive Multicast routes are consumed by the GWs and processed locally for the corresponding [RFC 7432] procedures.
- MAC/IP advertisement routes will be received and imported, and if they become active in the MAC-VRF, the information will be re-advertised as new routes with the following fields:
  - The RD will be the GW's RD for the MAC-VRF.
  - The ESI will be set to the I-ESI.
  - The Ethernet-tag value will be kept from the received NLRI.
  - The MAC length, MAC address, IP Length, and IP address values will be kept from the received NLRI.
  - The MPLS label will be a local 20-bit value (when sent to the WAN) or a DC-global 24-bit value (when sent to the DC for encapsulations using a VNI).
  - The appropriate Route Targets (RTs) and [RFC 9012] BGP encapsulation extended community will be used according to [RFC 8365].
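The re-advertisement rules above can be sketched as a field rewrite on the received route. This is an illustrative model only: the dictionary keys and the `reoriginate_mac_route` helper are assumptions, not a BGP implementation, and the RT/encapsulation attributes are omitted for brevity.

```python
# Illustrative sketch: how a GW re-originates an active MAC/IP route.
# RD, ESI, and label are rewritten; the NLRI key fields are kept.
def reoriginate_mac_route(received, gw_rd, i_esi, local_label):
    return {
        "rd": gw_rd,                     # GW's own RD for the MAC-VRF
        "esi": i_esi,                    # always set to the I-ESI
        "eth_tag": received["eth_tag"],  # kept from the received NLRI
        "mac": received["mac"],          # kept from the received NLRI
        "ip": received["ip"],            # kept from the received NLRI
        "label": local_label,            # local MPLS label (to the WAN)
                                         # or DC-global VNI (to the DC)
    }

# Example: a route learned from the DC, re-advertised toward the WAN.
# All values below are made up for illustration.
wan_route = reoriginate_mac_route(
    {"eth_tag": 0, "mac": "00:aa:bb:cc:dd:01", "ip": None},
    gw_rd="192.0.2.1:1", i_esi="I-ESI-1", local_label=3001)
```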
The GWs will also generate the following local EVPN routes that will be sent to the DC and WAN, with their corresponding RTs and [RFC 9012] BGP encapsulation extended community values:

- ES route(s) for the I-ESI(s).
- Ethernet A-D routes per ES and EVI for the I-ESI(s). The A-D per-EVI routes sent to the WAN and the DC will have consistent Ethernet-Tag values.
- Inclusive Multicast routes with an independent tunnel-type value for the WAN and the DC. For example, a P2MP Label Switched Path (LSP) may be used in the WAN, whereas ingress replication may be used in the DC. The routes sent to the WAN and the DC will have a consistent Ethernet-Tag.
- MAC/IP advertisement routes for MAC addresses learned on local attachment circuits. Note that these routes will not include the I-ESI value in the ESI field; they will include a zero ESI, or a non-zero ESI for local multihomed Ethernet Segments (ES). The routes sent to the WAN and the DC will have a consistent Ethernet-Tag.
Assuming GW1 and GW2 are peer GWs of the same DC, each GW will generate two sets of the above local service routes: set-DC will be sent to the DC RRs and will include an A-D per EVI, Inclusive Multicast, and MAC/IP routes for the DC encapsulation and RT. Set-WAN will be sent to the WAN RRs and will include the same routes but using the WAN RT and encapsulation. GW1 and GW2 will receive each other's set-DC and set-WAN. This is the expected behavior on GW1 and GW2 for locally generated routes:
- Inclusive Multicast routes: When setting up the flooding lists for a given MAC-VRF, each GW will include its DC peer GW only in the EVPN-MPLS flooding list (by default) and not in the EVPN-Overlay flooding list. That is, GW2 will import two Inclusive Multicast routes from GW1 (from set-DC and set-WAN) but will only consider one of the two, giving the set-WAN route higher priority. An administrative option MAY change this preference so that the set-DC route is selected first.
- MAC/IP advertisement routes for local attachment circuits: As above, the GW will select only one, giving the route from set-WAN a higher priority. As with the Inclusive Multicast routes, an administrative option MAY change this priority.
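The peer-route selection above reduces to a simple preference function. This is an illustrative sketch; the `set` key and the `prefer_dc` knob are assumed names standing in for the administrative option mentioned in the text:

```python
# Illustrative selection between a peer GW's set-DC and set-WAN routes
# for the same service: set-WAN wins by default, and an administrative
# option (modeled here as prefer_dc) flips the preference.
def select_peer_route(routes, prefer_dc=False):
    preferred = "set-dc" if prefer_dc else "set-wan"
    for route in routes:
        if route["set"] == preferred:
            return route
    return routes[0] if routes else None

# GW2's view of GW1's two Inclusive Multicast routes (values made up).
peer_routes = [
    {"set": "set-dc", "encap": "vxlan"},
    {"set": "set-wan", "encap": "mpls"},
]
```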
The procedure explained at the end of the previous section will make sure there are no loops or packet duplication between the GWs of the same EVPN-Overlay network (for frames generated from local ACs), since only one EVPN binding per EVI (or per Ethernet Tag in the case of VLAN-aware bundle services) will be set up in the data plane between the two nodes. That binding will by default be added to the EVPN-MPLS flooding list.
As for the rest of the EVPN tunnel bindings, they will be added to one of the two flooding lists that each GW sets up for the same MAC-VRF:
- EVPN-Overlay flooding list (composed of bindings to the remote NVEs or a multicast tunnel to the NVEs).
- EVPN-MPLS flooding list (composed of MP2P or LSM tunnels to the remote PEs).
Each flooding list will be part of a separate split-horizon group: the WAN split-horizon group or the DC split-horizon group. Traffic generated from a local AC can be flooded to both split-horizon groups. Traffic from a binding of a split-horizon group can be flooded to the other split-horizon group and local ACs, but never to a member of its own split-horizon group.
When either GW1 or GW2 receives a BUM frame on an MPLS tunnel, including an ESI label at the bottom of the stack, they will perform an ESI label lookup and split-horizon filtering as per [RFC 7432], in case the ESI label identifies a local ESI (I-ESI or any other nonzero ESI).
This model supports single-active as well as all-active multihoming.
All the [RFC 7432] multihoming procedures for the DF election on I-ES(s), as well as the backup-path (single-active) and aliasing (all-active) procedures, will be followed on the GWs. Remote PEs in the EVPN-MPLS network will follow regular [RFC 7432] aliasing or backup-path procedures for MAC/IP routes received from the GWs for the same I-ESI. So will NVEs in the EVPN-Overlay network for MAC/IP routes received with the same I-ESI.
As far as the forwarding plane is concerned, by default, the EVPN-Overlay network will have an analogous behavior to the access ACs in [RFC 7432] multihomed Ethernet Segments.
The forwarding behavior on the GWs is described below:
- Single-active multihoming; assuming a WAN split-horizon group (comprised of EVPN-MPLS bindings), a DC split-horizon group (comprised of EVPN-Overlay bindings), and local ACs on the GWs:
  - Forwarding behavior on the non-DF: The non-DF MUST block ingress and egress forwarding on the EVPN-Overlay bindings associated to the I-ES. The EVPN-MPLS network is considered to be the core network, and the EVPN-MPLS bindings to the remote PEs and GWs will be active.
  - Forwarding behavior on the DF: The DF MUST NOT forward BUM or unicast traffic received from a given split-horizon group to a member of its own split-horizon group. Forwarding to other split-horizon groups and local ACs is allowed (as long as the ACs are not part of an ES for which the node is non-DF). As per [RFC 7432] and for split-horizon purposes, when receiving BUM traffic on the EVPN-Overlay bindings associated to an I-ES, the DF GW SHOULD add the I-ESI label when forwarding to the peer GW over EVPN-MPLS.
  - When receiving EVPN MAC/IP routes from the WAN, the non-DF MUST NOT reoriginate the EVPN routes and advertise them to the DC peers. In the same way, EVPN MAC/IP routes received from the DC MUST NOT be advertised to the WAN peers. This is consistent with [RFC 7432] and allows the remote PE/NVEs to know who the primary GW is, based on the reception of the MAC/IP routes.
- All-active multihoming; assuming a WAN split-horizon group (comprised of EVPN-MPLS bindings), a DC split-horizon group (comprised of EVPN-Overlay bindings), and local ACs on the GWs:
  - Forwarding behavior on the non-DF: The non-DF follows the same behavior as the non-DF in the single-active case, but only for BUM traffic. Unicast traffic received from a split-horizon group MUST NOT be forwarded to a member of its own split-horizon group but can be forwarded normally to the other split-horizon groups and local ACs. If a known unicast packet is identified as a "flooded" packet, the procedures for BUM traffic MUST be followed.
  - Forwarding behavior on the DF: The DF follows the same behavior as the DF in the single-active case, but only for BUM traffic. Unicast traffic received from a split-horizon group MUST NOT be forwarded to a member of its own split-horizon group but can be forwarded normally to the other split-horizon group and local ACs. If a known unicast packet is identified as a "flooded" packet, the procedures for BUM traffic MUST be followed. As per [RFC 7432] and for split-horizon purposes, when receiving BUM traffic on the EVPN-Overlay bindings associated to an I-ES, the DF GW MUST add the I-ESI label when forwarding to the peer GW over EVPN-MPLS.
  - Contrary to the single-active multihoming case, both DF and non-DF reoriginate and advertise MAC/IP routes received from the WAN/DC peers, adding the corresponding I-ESI so that the remote PE/NVEs can perform regular aliasing, as per [RFC 7432].
The example in Figure 3 illustrates the forwarding of BUM traffic originated from an NVE on a pair of all-active multihoming GWs.
|<--EVPN-Overlay--->|<--EVPN-MPLS-->|
+---------+ +--------------+
+----+ BUM +---+ |
|NVE1+----+----> | +-+-----+ |
+----+ | | DF |GW1| | | |
| | +-+-+ | | ++--+
| | | | +--> |PE1|
| +--->X +-+-+ | ++--+
| NDF| | | |
+----+ | |GW2<-+ |
|NVE2+--+ +-+-+ |
+----+ +--------+ | +------------+
v
+--+
|CE|
+--+
GW2 is the non-DF for the I-ES and blocks the BUM forwarding. GW1 is the DF and forwards the traffic to PE1 and GW2. Packets sent to GW2 will include the ESI label for the I-ES. Based on the ESI label, GW2 identifies the packets as I-ES-generated packets and will only forward them to local ACs (CE in the example) and not back to the EVPN-Overlay network.
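GW2's ESI-label-based filtering in the example above can be sketched as follows. This is an illustrative model only; the function and parameter names are assumptions, and a real implementation performs this as an MPLS label lookup in the data plane:

```python
# Illustrative ESI-label split-horizon filtering on GW2 (Figure 3):
# BUM packets carrying an ESI label that identifies a local ES (the
# I-ES here) are forwarded to local ACs only, never back into the
# EVPN-Overlay network the I-ES represents.
def esi_label_egress(esi_label, local_esi_labels, local_acs, overlay_bindings):
    if esi_label is not None and esi_label in local_esi_labels:
        return list(local_acs)                      # split horizon applies
    return list(local_acs) + list(overlay_bindings)  # no local ES matched
```

For example, a BUM packet from GW1 carrying the I-ES label (say, 500, a made-up value) would be sent only to the CE, while a packet without a matching ESI label would also reach the overlay bindings.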
MAC Mobility procedures described in [RFC 7432] are not modified by this document.
Note that an intra-DC MAC move still leaves the MAC attached to the same I-ES, so under the rules of [RFC 7432], this is not considered a MAC Mobility event. Only when the MAC moves from the WAN domain to the DC domain (or from one DC to another) will the MAC be learned from a different ES, and the MAC Mobility procedures will kick in.
The sticky-bit indication in the MAC Mobility extended community MUST be propagated between domains.
All the Gateway optimizations described in Section 3.5 MAY be applied to the GWs when the interconnect is based on EVPN-MPLS.
In particular, the use of the Unknown MAC Route, as described in Section 3.5.1, solves some transient packet-duplication issues in cases of all-active multihoming, as explained below.
Consider the diagram in Figure 2 for EVPN-MPLS interconnect and all-active multihoming, and the following sequence:
(a) MAC Address M1 is advertised from NVE3 in EVI-1.
(b) GW3 and GW4 learn M1 for EVI-1 and re-advertise M1 to the WAN with I-ESI-2 in the ESI field.
(c) GW1 and GW2 learn M1 and install GW3/GW4 as next hops following the EVPN aliasing procedures.
(d) Before NVE1 learns M1, a packet arrives at NVE1 with destination M1. If the Unknown MAC Route had not been advertised into the DC, NVE1 would have flooded the packet throughout the DC, in particular to both GW1 and GW2. If the same VNI/VSID is used for both known unicast and BUM traffic, as is typically the case, there is no indication in the packet that it is a BUM packet, and both GW1 and GW2 would have forwarded it, creating packet duplication. However, because the Unknown MAC Route had been advertised into the DC, NVE1 will unicast the packet to either GW1 or GW2.
(e) Since both GW1 and GW2 know M1, the GW receiving the packet will forward it to either GW3 or GW4.
The "DCI using ASBRs" solution described in [RFC 8365] and the GW solution with EVPN-MPLS interconnect may be seen as similar, since they both retain the EVPN attributes between Data Centers and throughout the WAN. However, the EVPN-MPLS interconnect solution on the GWs has significant benefits compared to the "DCI using ASBRs" solution:
- As in any of the described GW models, this solution supports the connectivity of local attachment circuits on the GWs. This is not possible in a "DCI using ASBRs" solution.
- Different data plane encapsulations can be supported in the DC and the WAN, while a uniform encapsulation is needed in the "DCI using ASBRs" solution.
- Optimized multicast solution, with independent inclusive multicast trees in the DC and the WAN.
- MPLS label aggregation: For the case where MPLS labels are signaled from the NVEs for MAC/IP advertisement routes, this solution provides label aggregation. A remote PE MAY receive a single label per GW MAC-VRF, as opposed to a label per NVE/MAC-VRF connected to the GW MAC-VRF. For instance, in Figure 2, the PE would receive only one label for all the routes advertised for a given MAC-VRF from GW1, as opposed to a label per NVE/MAC-VRF.
- The GW will not propagate MAC Mobility for the MACs moving within a DC. Intra-DC mobility is handled by the NVEs in the DC. The MAC Mobility procedures on the GWs are only required in case of mobility across DCs.
- The Proxy-ARP/ND function on the DC GWs can be leveraged to reduce ARP/ND flooding in the DC and/or the WAN.
PBB-EVPN [RFC 7623] is yet another interconnect option. It requires the use of GWs where I-components and associated B-components are part of EVI instances.

EVPN will run independently in both components, the I-component MAC-VRF and B-component MAC-VRF. Compared to [RFC 7623], the DC customer MACs (C-MACs) are no longer learned in the data plane on the GW but in the control plane through EVPN running on the I-component. Remote C-MACs coming from remote PEs are still learned in the data plane. B-MACs in the B-component will be assigned and advertised following the procedures described in [RFC 7623].
An I-ES will be configured on the GWs for multihoming, but its I-ESI will only be used in the EVPN control plane for the I-component EVI. No unreserved ESIs will be used in the control plane of the B-component EVI, as per [RFC 7623]. That is, the I-ES will be represented to the WAN PBB-EVPN PEs using shared or dedicated B-MACs.

The rest of the control plane procedures will follow [RFC 7432] for the I-component EVI and [RFC 7623] for the B-component EVI.
From the data plane perspective, the I-component and B-component EVPN bindings established to the same far end will be compared, and the I-component EVPN-Overlay binding will be kept down following the rules described in Section 4.3.1.
This model supports single-active as well as all-active multihoming.
The forwarding behavior of the DF and non-DF will be changed based on the description outlined in Section 4.4.3, replacing the WAN split-horizon group with the B-component and using the [RFC 7623] procedures for the traffic sent or received on the B-component.
C-MACs learned from the B-component will be advertised in EVPN within the I-component EVI scope. If the C-MAC was previously known in the I-component database, EVPN would advertise the C-MAC with a higher sequence number, as per [RFC 7432]. From the perspective of Mobility and the related procedures described in [RFC 7432], the C-MACs learned from the B-component are considered local.
All the considerations explained in Section 4.4.5 are applicable to the PBB-EVPN interconnect option.
If EVPN for Overlay tunnels is supported in the WAN, and a GW function is required, an end-to-end EVPN solution can be deployed. While multiple Overlay tunnel combinations at the WAN and the DC are possible (MPLSoGRE, NVGRE, etc.), VXLAN is described here, given its popularity in the industry. This section focuses on the specific case of EVPN for VXLAN (EVPN-VXLAN hereafter) and the impact on the [RFC 7432] procedures.
The procedures described in Section 4.4 also apply to this section, only substituting the EVPN-MPLS control plane specifics with those of EVPN-VXLAN and using the [RFC 8365] "Local Bias" procedures instead of those in Section 4.4.3. Since there are no ESI labels in VXLAN, GWs need to rely on "Local Bias" to apply split horizon on packets generated from the I-ES and sent to the peer GW.
This use case assumes that NVEs need to use the VNIs or VSIDs as globally unique identifiers within a Data Center, and a Gateway needs to be employed at the edge of the Data-Center network to translate the VNI or VSID when crossing the network boundaries. This GW function provides VNI and tunnel-IP-address translation. The use case in which local downstream-assigned VNIs or VSIDs can be used (like MPLS labels) is described by [RFC 8365].
While VNIs are globally significant within each DC, there are two possibilities in the interconnect network:
- Globally unique VNIs in the interconnect network. In this case, the GWs and PEs in the interconnect network will agree on a common VNI for a given EVI. The RT to be used in the interconnect network can be autoderived from the agreed-upon interconnect VNI. The VNI used inside each DC MAY be the same as the interconnect VNI.
- Downstream-assigned VNIs in the interconnect network. In this case, the GWs and PEs MUST use the proper RTs to import/export the EVPN routes. Note that even if the VNI is downstream assigned in the interconnect network, and unlike option (a), it only identifies the <Ethernet Tag, GW> pair and not the <Ethernet Tag, egress PE> pair. The VNI used inside each DC MAY be the same as the interconnect VNI. GWs SHOULD support multiple VNI spaces per EVI (one per interconnect network they are connected to).
In both options, NVEs inside a DC only have to be aware of a single VNI space, and only GWs will handle the complexity of managing multiple VNI spaces. In addition to VNI translation above, the GWs will provide translation of the tunnel source IP for the packets generated from the NVEs, using their own IP address. GWs will use that IP address as the BGP next hop in all the EVPN updates to the interconnect network.
The following sections provide more details about these two options.
Considering Figure 2, if a host H1 in NVO-1 needs to communicate with a host H2 in NVO-2, and assuming that different VNIs are used in each DC for the same EVI (e.g., VNI-10 in NVO-1 and VNI-20 in NVO-2), then the VNIs MUST be translated to a common interconnect VNI (e.g., VNI-100) on the GWs. Each GW is provisioned with a VNI translation mapping so that it can translate the VNI in the control plane when sending BGP EVPN route updates to the interconnect network. In other words, GW1 and GW2 MUST be configured to map VNI-10 to VNI-100 in the BGP update messages for H1's MAC route. This mapping is also used to translate the VNI in the data plane in both directions: that is, VNI-10 to VNI-100 when the packet is received from NVO-1, and the reverse mapping from VNI-100 to VNI-10 when the packet is received from the remote NVO-2 network and needs to be forwarded to NVO-1.
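The bidirectional mapping above can be sketched as a small translation table per GW. This is an illustrative sketch only; the class and method names are assumptions, and the VNI values follow the example in the text (VNI-10 inside NVO-1, VNI-100 on the interconnect):

```python
# Illustrative GW VNI-translation mapping for the globally unique
# interconnect VNI option. The same mapping is applied in the control
# plane (BGP EVPN updates) and in the data plane, in both directions.
class VniTranslator:
    def __init__(self, dc_vni, interconnect_vni):
        self.dc_vni = dc_vni
        self.icx_vni = interconnect_vni

    def to_interconnect(self, vni):
        """DC -> interconnect direction."""
        return self.icx_vni if vni == self.dc_vni else vni

    def to_dc(self, vni):
        """Interconnect -> DC direction."""
        return self.dc_vni if vni == self.icx_vni else vni

# GW1/GW2 map NVO-1's VNI-10 to the interconnect VNI-100;
# GW3/GW4 would map NVO-2's VNI-20 to the same interconnect VNI-100.
gw1 = VniTranslator(dc_vni=10, interconnect_vni=100)
gw3 = VniTranslator(dc_vni=20, interconnect_vni=100)
```

End to end, a packet tagged VNI-10 in NVO-1 is translated to VNI-100 by GW1, and to VNI-20 by GW3 before delivery into NVO-2.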
The procedures described in Section 4.4 will be followed, considering that the VNIs advertised/received by the GWs will be translated accordingly.
In this case, if a host H1 in NVO-1 needs to communicate with a host H2 in NVO-2, and assuming that different VNIs are used in each DC for the same EVI (e.g., VNI-10 in NVO-1 and VNI-20 in NVO-2), then the VNIs MUST be translated as in Section 4.6.1. However, in this case, there is no need to translate to a common interconnect VNI on the GWs. Each GW can translate the VNI received in an EVPN update to a locally assigned VNI advertised to the interconnect network. Each GW can use a different interconnect VNI; hence, this VNI does not need to be agreed upon by all the GWs and PEs of the interconnect network.
The procedures described in Section 4.4 will be followed, taking into account the considerations above for the VNI translation.