RFC 6956

Forwarding and Control Element Separation (ForCES) Logical Function Block (LFB) Library

Proposed Standard

5. LFB Class Descriptions

According to ForCES specifications, an LFB (Logical Function Block) is a well-defined, logically separable functional block that resides in an FE and is a functionally accurate abstraction of the FE's processing capabilities. An LFB class (or type) is a template that represents a fine-grained, logically separable aspect of FE processing. Most LFBs are related to packet processing in the data path. LFB classes are the basic building blocks of the FE model.

Note that [RFC5810] has already defined an 'FE Protocol LFB', which is a logical entity in each FE to control the ForCES protocol, and [RFC5812] has already defined an 'FE Object LFB'. Information such as the FE Name, FE ID, FE State, and LFB Topology in the FE is represented in the FE Object LFB.

As specified in Section 3.1, this document focuses on the base LFB library for implementing typical router functions, especially for IP forwarding functions. As a result, the LFB classes in the library are all base LFBs used to implement router forwarding.

In this section, the terms "upstream LFB" and "downstream LFB" are used relative to the LFB that is being described. An "upstream LFB" is one whose output ports are connected to input ports of the LFB under consideration such that output (typically packets with metadata) can be sent from the "upstream LFB" to the LFB under consideration. Similarly, a "downstream LFB" is one whose input ports are connected to output ports of the LFB under consideration such that the LFB under consideration can send information to the "downstream LFB". Note that in some rare topologies, an LFB may be both upstream and downstream relative to another LFB.

   Also note that, as a default provision of [RFC5812], in the FE model,
   all metadata produced by upstream LFBs will pass through all
   downstream LFBs by default without being specified by input port or
   output port.  Only those metadata that will be used (consumed) by an
   LFB will be explicitly marked in the input of the LFB as expected
   metadata.  For instance, in downstream LFBs of a physical-layer LFB,
   even if there is no specific metadata expected, metadata like
   PHYPortID produced by the physical-layer LFB will always pass through
   all downstream LFBs regardless of whether or not the metadata has
   been expected by the LFBs.

5.1. Ethernet-Processing LFBs

As the most popular physical- and data-link-layer protocol, Ethernet is widely deployed. It is therefore a basic requirement for a router to be able to process various Ethernet data packets.

Note that different versions of Ethernet formats exist, like Ethernet V2, 802.3 RAW, IEEE 802.3/802.2, and IEEE 802.3/802.2 SNAP. Varieties of LAN techniques based on Ethernet also exist, like various VLANs, MACinMAC, etc. The Ethernet-processing LFBs defined here are intended to cope with all these variations of Ethernet technology.

There are also various types of Ethernet physical interface media; among them, copper and fiber may be the most popular. As a base LFB definition and a starting point, this document only defines an Ethernet physical LFB with copper media. For other media interfaces, specific LFBs may be defined in future versions of the library.

5.1.1. EtherPHYCop

The EtherPHYCop LFB abstracts an Ethernet interface at the physical layer, with media limited to copper.
5.1.1.1. Data Handling
This LFB is the interface to the Ethernet physical media. The LFB handles Ethernet frames coming in from or going out of the FE. Ethernet frames sent and received cover all packets encapsulated with different versions of Ethernet protocols, like Ethernet V2, 802.3 RAW, IEEE 802.3/802.2, and IEEE 802.3/802.2 SNAP, including packets
   encapsulated with varieties of LAN techniques based on Ethernet, like
   various VLANs, MACinMAC, etc.  Therefore, in the XML, an EthernetAll
   frame type has been introduced.

   Ethernet frames are received from the physical media port and passed
   downstream to LFBs, such as EtherMACIn LFBs, via a singleton output
   known as "EtherPHYOut".  A PHYPortID metadata, which indicates the
   physical port from which the frame came in from the external world,
   is passed along with the frame.

   Ethernet packets are received by this LFB from upstream LFBs, such as
   EtherMACOut LFBs, via the singleton input known as "EtherPHYIn"
   before being sent out to the external world.

5.1.1.2. Components
The AdminStatus component is defined for the CE to administratively manage the status of the LFB. The CE may administratively start up or shut down the LFB by changing the value of AdminStatus. The default value is set to 'Down'.

An OperStatus component captures the physical port operational status. A PHYPortStatusChanged event is defined so the LFB can report to the CE whenever there is an operational status change of the physical port.

The PHYPortID component is a unique identification for a physical port. It is read-only to the CE, and its value is enumerated by the FE. The component will be used to produce a PHYPortID metadata at the LFB output and to associate it with every Ethernet packet this LFB receives. The metadata will be handed to downstream LFBs for them to use the PHYPortID.

A group of components are defined for link speed management. The AdminLinkSpeed is for the CE to configure the link speed for the port, and the OperLinkSpeed is for the CE to query the actual link speed in operation. The default value for the AdminLinkSpeed is set to auto-negotiation mode.

A group of components are defined for duplex mode management. The AdminDuplexMode is for the CE to configure the proper duplex mode for the port, and the OperDuplexMode is for the CE to query the actual duplex mode in operation. The default value for the AdminDuplexMode is set to auto-negotiation mode.

A CarrierStatus component captures the status of the carrier and specifies whether the port link is operationally up. The default value for the CarrierStatus is 'false'.
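
   As a non-normative illustration of the component set described
   above, the following C sketch models the EtherPHYCop components as a
   struct; all type and field names are illustrative assumptions, and
   the normative definitions are the XML LFB library given later in
   this document.

   /* Illustrative only: EtherPHYCop per-instance components.         */
   #include <stdbool.h>
   #include <stdint.h>

   typedef enum { PORT_STATUS_UP = 1, PORT_STATUS_DOWN = 2 } PortStatus;
   typedef enum { DUPLEX_AUTO = 1, DUPLEX_HALF = 2,
                  DUPLEX_FULL = 3 } DuplexMode;
   typedef enum { SPEED_AUTO = 1, SPEED_10M, SPEED_100M,
                  SPEED_1G, SPEED_10G } LinkSpeed;   /* illustrative  */

   struct EtherPHYCop {
       PortStatus admin_status;      /* CE writable; default 'Down'   */
       PortStatus oper_status;       /* operational status, read-only */
       uint32_t   phy_port_id;       /* read-only; emitted as the
                                        PHYPortID metadata with every
                                        received frame                */
       LinkSpeed  admin_link_speed;  /* default: auto-negotiation     */
       LinkSpeed  oper_link_speed;   /* negotiated result, read-only  */
       DuplexMode admin_duplex_mode; /* default: auto-negotiation     */
       DuplexMode oper_duplex_mode;  /* negotiated result, read-only  */
       bool       carrier_status;    /* default 'false'               */
   };
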
5.1.1.3. Capabilities
The capability information for this LFB includes the link speeds that are supported by the FE (SupportedLinkSpeed) as well as the supported duplex modes (SupportedDuplexMode).
5.1.1.4. Events
Several events are generated.

There is an event for changes in the status of the physical port (PHYPortStatusChanged). Such an event will notify the CE that the physical port status has been changed, and the report will include the new status of the physical port.

Another event captures changes in the operational link speed (LinkSpeedChanged). Such an event will notify the CE that the operational speed has been changed, and the report will include the new negotiated operational speed.

A final event captures changes in the duplex mode (DuplexModeChanged). Such an event will notify the CE that the duplex mode has been changed, and the report will include the new negotiated duplex mode.

5.1.2. EtherMACIn

The EtherMACIn LFB abstracts an Ethernet port at the MAC data link layer. This LFB describes Ethernet processing functions like checking MAC address locality, deciding whether the Ethernet packets should be bridged, providing Ethernet-layer flow control, etc.
5.1.2.1. Data Handling
The LFB is expected to receive all types of Ethernet packets (via a singleton input known as "EtherPktsIn"), which are usually output from some Ethernet physical-layer LFB, like an EtherPHYCop LFB, along with a metadata indicating the physical port ID of the port on which the packet arrived.

The LFB is defined with two separate singleton outputs. All output packets are emitted in the original Ethernet format received at the physical port, unchanged, and cover all Ethernet types.

The first singleton output is known as "NormalPathOut". It usually outputs Ethernet packets to some LFB, like an EtherClassifier LFB, for further L3 forwarding processing, along with a PHYPortID metadata indicating the physical port from which the packet came.
   The second singleton output is known as "L2BridgingPathOut".
   Although the LFB library defined in this document is primarily
   intended to meet typical router functions, it attempts to be
   forward compatible with future router functions.  The
   L2BridgingPathOut is defined to
   meet the requirement that L2 bridging functions may be optionally
   supported simultaneously with L3 processing and some L2 bridging LFBs
   that may be defined in the future.  If the FE supports L2 bridging,
   the CE can enable or disable it by means of an "L2BridgingPathEnable"
   component in the FE.  If it is enabled, by also instantiating some L2
   bridging LFB instances following the L2BridgingPathOut, FEs are
   expected to fulfill L2 bridging functions.  L2BridgingPathOut will
   output packets exactly the same as in the NormalPathOut output.

   This LFB can be set to work in a promiscuous mode, allowing all
   packets to pass through the LFB without being dropped.  Otherwise, a
   locality check will be performed based on the local MAC addresses.
   All packets that do not pass through the locality check will be
   dropped.

   This LFB can optionally participate in Ethernet flow control in
   cooperation with EtherMACOut LFB.  This document does not go into the
   details of how this is implemented.  This document also does not
   describe how the buffers that induce the flow control messages behave
   -- it is assumed that such artifacts exist, and describing them is
   out of scope in this document.

5.1.2.2. Components
The AdminStatus component is defined for the CE to administratively manage the status of the LFB. The CE may administratively start up or shut down the LFB by changing the value of AdminStatus. The default value is set to 'Down'.

The LocalMACAddresses component specifies the local MAC addresses based on which locality checks will be made. This component is an array of MAC addresses with 'read-write' access permission.

An L2BridgingPathEnable component captures whether the LFB is set to work as an L2 bridge. An FE that does not support bridging will internally set this flag to false and additionally set the flag property as read-only. The default value for the component is 'false'.

The PromiscuousMode component specifies whether the LFB is set to work in a promiscuous mode. The default value for the component is 'false'.
   The TxFlowControl component defines whether the LFB is performing
   flow control on sending packets.  The default value is 'false'.
   Note that the component is defined as "optional".  If a CE tries to
   configure the component on an FE that does not implement it, the FE
   may respond to the CE with an error code such as 0x09
   (E_COMPONENT_DOES_NOT_EXIST) or 0x15 (E_NOT_SUPPORTED), depending on
   the FE processing.  See [RFC5810] for details.

   The RxFlowControl component defines whether the LFB is performing
   flow control on receiving packets.  The default value is 'false'.
   The component is defined as "optional".

   A struct component, MACInStats, defines a set of statistics for this
   LFB, including the number of received packets and the number of
   dropped packets.  Note that this statistics component is optional to
   implementers.  If a CE tries to query the component while it is not
   implemented in an FE, the FE will respond to the CE with an error
   code indicating the error type, such as 0x09
   (E_COMPONENT_DOES_NOT_EXIST) or 0x15 (E_NOT_SUPPORTED), depending on
   the FE implementation.
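
   As a rough, non-normative illustration of the receive-side behavior
   and statistics described above, the following C sketch shows the
   promiscuous-mode and locality checks together with the optional
   MACInStats counters; all names, array bounds, and the linear search
   are illustrative assumptions, not the normative XML definition.

   /* Illustrative sketch of EtherMACIn receive handling. */
   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   struct MACInStats {               /* optional statistics component */
       uint64_t num_packets_received;
       uint64_t num_packets_dropped;
   };

   struct EtherMACIn {
       uint8_t  local_macs[8][6];    /* LocalMACAddresses array       */
       unsigned num_local_macs;
       bool     promiscuous_mode;        /* default 'false'           */
       bool     l2_bridging_path_enable; /* default 'false'           */
       struct MACInStats stats;
   };

   /* Returns true if the frame is kept (sent to NormalPathOut and,
    * when bridging is enabled, to L2BridgingPathOut); returns false
    * if it fails the locality check and is therefore dropped. */
   static bool macin_accept(struct EtherMACIn *lfb,
                            const uint8_t dst_mac[6])
   {
       lfb->stats.num_packets_received++;
       if (lfb->promiscuous_mode)
           return true;
       for (unsigned i = 0; i < lfb->num_local_macs; i++)
           if (memcmp(dst_mac, lfb->local_macs[i], 6) == 0)
               return true;
       lfb->stats.num_packets_dropped++;
       return false;
   }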

5.1.2.3. Capabilities
This LFB does not have a list of capabilities.
5.1.2.4. Events
This LFB does not have any events specified.

5.1.3. EtherClassifier

The EtherClassifier LFB abstracts the process of decapsulating Ethernet packets and then classifying them.
5.1.3.1. Data Handling
This LFB describes the process of decapsulating Ethernet packets and classifying them into various network-layer data packets according to information included in the Ethernet packet headers.

The LFB is expected to receive all types of Ethernet packets (via a singleton input known as "EtherPktsIn"), which are usually output from an upstream LFB like an EtherMACIn LFB. This input is also capable of multiplexing to allow multiple upstream LFBs to be connected. For instance, when the L2 bridging function is enabled in the EtherMACIn LFB, some L2 bridging LFBs may be applied. In this case, after L2 processing, some Ethernet packets may have to be input to the EtherClassifier LFB for classification, while simultaneously packets directly output from the EtherMACIn may also need to be input to this LFB. This input is capable of handling such a case. Usually,
   all expected Ethernet packets will be associated with a PHYPortID
   metadata, indicating the physical port from which the packet comes.
   In some cases, for instance, in a MACinMAC case, a LogicalPortID
   metadata may be expected to be associated with the Ethernet packet to
   further indicate the logical port to which the Ethernet packet
   belongs.  Note that PHYPortID metadata is always expected while
   LogicalPortID metadata is optionally expected.

   Two output LFB ports are defined.

   The first output is a group output port known as "ClassifyOut".
   Types of network-layer protocol packets are output to instances of
   the port group.  Because there may be various types of protocol
   packets at the output ports, the produced output frame is defined as
   arbitrary for the purpose of wide extensibility in the future.
   Metadata to be carried along with the packet data is produced at this
   LFB for consumption by downstream LFBs.  The metadata passed
   downstream includes PHYPortID, as well as information on Ethernet
   type, source MAC address, destination MAC address, and the logical
   port ID.  If the original packet is a VLAN packet and contains a VLAN
   ID and a VLAN priority value, then the VLAN ID and the VLAN priority
   value are also carried downstream as metadata.  As a result, the VLAN
   ID and priority metadata are defined with the availability of
   "conditional".

   The second output is a singleton output port known as "ExceptionOut",
   which will output packets for which the data processing failed, along
   with an additional ExceptionID metadata to indicate what caused the
   exception.  Currently defined exception types include:

   o  There is no matching when classifying the packet.

   Usually, the ExceptionOut port may point to nowhere, indicating
   packets with exceptions are dropped, while in some cases, the output
   may be pointed to the path to the CE for further processing,
   depending on individual implementations.

5.1.3.2. Components
An EtherDispatchTable array component is defined in the LFB to dispatch every Ethernet packet to the output group according to the logical port ID assigned by the VlanInputTable to the packet and the Ethernet type in the Ethernet packet header. Each row of the array is a struct containing a logical port ID, an EtherType and an output index. With the CE configuring the dispatch table, the LFB can be expected to classify various network-layer protocol type packets and
   output them at different output ports.  It is expected that the LFB
   classify packets according to protocols like IPv4, IPv6, MPLS,
   Address Resolution Protocol (ARP), Neighbor Discovery (ND), etc.

   A VlanInputTable array component is defined in the LFB to classify
   VLAN Ethernet packets.  Each row of the array is a struct containing
   an incoming port ID, a VLAN ID, and a logical port ID.  According to
   IEEE VLAN specifications, all Ethernet packets can be treated as
   VLAN packets by defining that a packet with no VLAN encapsulation is
   treated as carrying a VLAN tag of 0.  Every input packet is
   assigned a new LogicalPortID according to the packet's incoming
   port ID and the VLAN ID.  A packet's incoming port ID is defined as a
   logical port ID if a logical port ID is associated with the packet or
   a physical port ID if no logical port ID is associated.  The VLAN ID
   is exactly the VLAN ID in the packet if it is a VLAN packet, or 0 if
   it is not.  Note that a logical port ID of a packet may be rewritten
   with a new one by the VlanInputTable processing.

   Note that the logical port ID and physical port ID mentioned above
   are all originally configured by the CE, and are globally effective
   within a ForCES NE (Network Element).  To distinguish a physical port
   ID from a logical port ID in the incoming port ID field of the
   VlanInputTable, physical port IDs and logical port IDs must be
   assigned from separate number spaces.

   An array component, EtherClassifyStats, defines a set of statistics
   for this LFB, measuring the number of packets per EtherType.  Each
   row of the array is a struct containing an EtherType and a packet
   number.  Note that this statistics component is optional to
   implementers.
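
   To make the two-stage classification concrete, the following
   non-normative C sketch shows the VlanInputTable and
   EtherDispatchTable lookups described above; the field names, the
   linear searches, and the fixed-size rows are illustrative
   assumptions only.

   /* Illustrative sketch of the EtherClassifier table lookups. */
   #include <stdbool.h>
   #include <stdint.h>

   struct VlanInputRow {             /* VlanInputTable row            */
       uint32_t incoming_port_id;    /* logical or physical port ID   */
       uint16_t vlan_id;             /* 0 if the packet is untagged   */
       uint32_t logical_port_id;     /* assigned to the packet        */
   };

   struct EtherDispatchRow {         /* EtherDispatchTable row        */
       uint32_t logical_port_id;
       uint16_t ether_type;
       uint32_t output_index;        /* ClassifyOut group port index  */
   };

   /* Step 1: assign a (possibly new) LogicalPortID from the incoming
    * port ID and VLAN ID.  Returns true on a match. */
   static bool vlan_input_lookup(const struct VlanInputRow *t, unsigned n,
                                 uint32_t in_port, uint16_t vlan_id,
                                 uint32_t *logical_port_id)
   {
       for (unsigned i = 0; i < n; i++)
           if (t[i].incoming_port_id == in_port &&
               t[i].vlan_id == vlan_id) {
               *logical_port_id = t[i].logical_port_id;
               return true;
           }
       return false;
   }

   /* Step 2: select the ClassifyOut instance from the LogicalPortID
    * and EtherType; a miss maps to the ExceptionOut port. */
   static bool dispatch_lookup(const struct EtherDispatchRow *t, unsigned n,
                               uint32_t logical_port_id, uint16_t ether_type,
                               uint32_t *output_index)
   {
       for (unsigned i = 0; i < n; i++)
           if (t[i].logical_port_id == logical_port_id &&
               t[i].ether_type == ether_type) {
               *output_index = t[i].output_index;
               return true;
           }
       return false;
   }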

5.1.3.3. Capabilities
This LFB does not have a list of capabilities.
5.1.3.4. Events
This LFB has no events specified.

5.1.4. EtherEncap

The EtherEncap LFB abstracts the process of replacing or attaching appropriate Ethernet headers to a packet.
5.1.4.1. Data Handling
This LFB abstracts the process of encapsulating Ethernet headers onto received packets. The encapsulation is based on passed metadata.
   The LFB is expected to receive IPv4 and IPv6 packets (via a singleton
   input port known as "EncapIn"), which may be connected to an upstream
   LFB like IPv4NextHop, IPv6NextHop, BasicMetadataDispatch, or any LFB
   that requires output packets for Ethernet encapsulation.  The LFB
   always expects from upstream LFBs the MediaEncapInfoIndex metadata,
   which is used as a search key, matched against the table index, to
   look up the encapsulation table EncapTable.  An input
   packet may also optionally receive a VLAN priority metadata,
   indicating that the packet originally had a priority value.  The
   priority value will be loaded back to the packet when encapsulating.
   The optional VLAN priority metadata is defined with a default value
   of 0.

   Two singleton output LFB ports are defined.

   The first singleton output is known as "SuccessOut".  Upon a
   successful table lookup, the destination and source MAC addresses and
   the logical media port (L2PortID) are found in the matching table
   entry.  The CE may set the VlanID in case VLANs are used.  By
   default, the table entry for VlanID of 0 is used as per IEEE rules
   [IEEE.802-1Q].  Whatever the value of VlanID, if the input metadata
   VlanPriority is non-zero, the packet will have a VLAN tag.  If the
   VlanPriority and the VlanID are all zero, there is no VLAN tag for
   this packet.  After replacing or attaching the appropriate Ethernet
   headers to the packet is complete, the packet is passed out on the
   "SuccessOut" LFB port to a downstream LFB instance along with the
   L2PortID.

   The second singleton output is known as "ExceptionOut" and will
   output packets for which the table lookup fails, along with an
   additional ExceptionID metadata.  Currently defined exception types
   only include the following cases:

   o  The MediaEncapInfoIndex value of the packet is invalid and can not
      be allocated in the EncapTable.

   o  The packet failed lookup of the EncapTable table even though the
      MediaEncapInfoIndex is valid.

   The upstream LFB may be programmed by the CE to pass along a
   MediaEncapInfoIndex that does not exist in the EncapTable.  This
   allows for resolution of the L2 headers, if needed, to be made at the
   L2 encapsulation level, in this case, Ethernet via ARP or ND (or
   other methods depending on the link-layer technology), when a table
   miss occurs.

   For neighbor L2 header resolution (table miss exception), the
   processing LFB may pass this packet to the CE via the redirect LFB or
   FE software or another LFB instance for further resolution.  In such
   a case, the metadata NextHopIPv4Addr or NextHopIPv6Addr generated by
   the next-hop LFB is also passed to the exception handling.  Such an
   IP address could be used to do activities such as ARP or ND by the
   handler to which it is passed.

   The result of the L2 resolution is to update the EncapTable as well
   as the next-hop LFB so subsequent packets do not fail EncapTable
   lookup.  The EtherEncap LFB does not make any assumptions of how the
   EncapTable is updated by the CE (or whether ARP/ND is used
   dynamically or static maps exist).

   Downstream LFB instances could be either an EtherMACOut type or a
   BasicMetadataDispatch type.  If the final packet L2 processing is on
   a per-media-port basis, resides on a different FE, or needs L2 header
   resolution, then it makes sense for the model to use a
   BasicMetadataDispatch LFB to fan out to different LFB instances.  If
   there is a direct egress port point, then it makes sense for the
   model to have a downstream LFB instance be an EtherMACOut.

5.1.4.2. Components
This LFB has only one component named EncapTable, which is defined as an array. Each row of the array is a struct containing the destination MAC address, the source MAC address, the VLAN ID with a default value of zero, and the output logical L2 port ID.
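
   A non-normative C sketch of the EncapTable row and of the VLAN
   tagging rule described in Section 5.1.4.1 follows; the field names
   and the index-based lookup are assumptions for illustration, and the
   normative definition is the XML LFB library given later in this
   document.

   /* Illustrative sketch of the EtherEncap table and tagging rule. */
   #include <stdbool.h>
   #include <stdint.h>

   struct EncapTableRow {
       uint8_t  dst_mac[6];
       uint8_t  src_mac[6];
       uint16_t vlan_id;             /* default 0                     */
       uint32_t l2_port_id;          /* output logical L2 port ID     */
   };

   /* The MediaEncapInfoIndex metadata is matched against the table
    * index; a miss is sent to ExceptionOut for further resolution. */
   static const struct EncapTableRow *
   encap_lookup(const struct EncapTableRow *table, unsigned n,
                uint32_t media_encap_info_index)
   {
       if (media_encap_info_index >= n)
           return NULL;              /* exception: invalid index      */
       return &table[media_encap_info_index];
   }

   /* VLAN tagging rule: the packet carries a VLAN tag if the
    * VlanPriority metadata is non-zero or the VlanID is non-zero. */
   static bool needs_vlan_tag(uint16_t vlan_id, uint8_t vlan_priority)
   {
       return vlan_priority != 0 || vlan_id != 0;
   }
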
5.1.4.3. Capabilities
This LFB does not have a list of capabilities.
5.1.4.4. Events
This LFB does not have any events specified.

5.1.5. EtherMACOut

The EtherMACOut LFB abstracts an Ethernet port at the MAC data link layer. This LFB describes the Ethernet packet output process. Ethernet output functions are closely related to Ethernet input functions; therefore, many components defined in this LFB are aliases of EtherMACIn LFB components.
5.1.5.1. Data Handling
The LFB is expected to receive all types of Ethernet packets (via a singleton input known as "EtherPktsIn"), which are usually output from an Ethernet encapsulation LFB, along with a metadata indicating the ID of the physical port that the packet will go through.

The LFB is defined with a singleton output port known as "EtherPktsOut". All output packets are in Ethernet format, possibly with various Ethernet types, along with a metadata indicating the ID of the physical port that the packet is to go through. This output links to a downstream LFB that is usually an Ethernet physical LFB like the EtherPHYCop LFB.

This LFB can optionally participate in Ethernet flow control in cooperation with the EtherMACIn LFB. This document does not go into the details of how this is implemented. This document also does not describe how the buffers that induce the flow control messages behave -- it is assumed that such artifacts exist, but describing them is out of the scope of this document.

Note that, as a base definition, functions like multiple virtual MAC layers are not supported in this LFB version. They may be supported in the future by defining a subclass or a new version of this LFB.
5.1.5.2. Components
The AdminStatus component is defined for the CE to administratively manage the status of the LFB. The CE may administratively start up or shut down the LFB by changing the value of AdminStatus. The default value is set to 'Down'. Note that this component is defined as an alias of the AdminStatus component in the EtherMACIn LFB. This implies that an EtherMACOut LFB usually coexists with an EtherMACIn LFB, both of which share the same administrative status management by the CE. Alias properties, as defined in the ForCES FE model [RFC5812], will be used by the CE to declare the target component to which the alias refers, which includes the target LFB class and instance IDs as well as the path to the target component.

The MTU component defines the maximum transmission unit.

The optional TxFlowControl component defines whether or not the LFB is performing flow control on sending packets. The default value is 'false'. Note that this component is defined as an alias of the TxFlowControl component in the EtherMACIn LFB.

The optional RxFlowControl component defines whether or not the LFB is performing flow control on receiving packets. The default value
   is 'false'.  Note that this component is defined as an alias of the
   RxFlowControl component in the EtherMACIn LFB.

   A struct component, MACOutStats, defines a set of statistics for this
   LFB, including the number of transmitted packets and the number of
   dropped packets.  This statistics component is optional to
   implementers.

5.1.5.3. Capabilities
This LFB does not have a list of capabilities.
5.1.5.4. Events
This LFB does not have any events specified.

5.2. IP Packet Validation LFBs

These LFBs are defined to abstract the IP packet validation process. An IPv4Validator LFB is specifically for IPv4 protocol validation, and an IPv6Validator LFB is specifically for IPv6.

5.2.1. IPv4Validator

The IPv4Validator LFB performs IPv4 packet validation.
5.2.1.1. Data Handling
This LFB performs IPv4 validation according to [RFC1812] and its updates. The IPv4 packet will then be output to the corresponding LFB port, indicating whether the packet is unicast or multicast, whether an exception has occurred, or whether the validation failed.

This LFB always expects, as input, packets that have been indicated as IPv4 packets by an upstream LFB, like an EtherClassifier LFB. There is no specific metadata expected by the input of the LFB.

Four output LFB ports are defined.

All validated IPv4 unicast packets will be output at the singleton port known as "IPv4UnicastOut". All validated IPv4 multicast packets will be output at the singleton port known as "IPv4MulticastOut".

A singleton port known as "ExceptionOut" is defined to output packets that have been validated as exception packets. An exception ID metadata is produced to indicate what has caused the exception. An exception case is when a packet needs further processing
   before being normally forwarded.  Currently defined exception types
   include:

   o  Packet with expired TTL

   o  Packet with header length more than 5 words

   o  Packet IP header including router alert options

   o  Packet with exceptional source address

   o  Packet with exceptional destination address

   Note that although Time to Live (TTL) is checked in this LFB for
   validity, operations like TTL decrement are made by the downstream
   forwarding LFB.

   The final singleton port known as "FailOut" is defined for all
   packets that have errors and failed the validation process.  An error
   case is when a packet is unable to be further processed or forwarded
   without being dropped.  An error ID is associated with a packet to
   indicate the failure reason.  Currently defined failure reasons
   include:

   o  Packet with size reported less than 20 bytes

   o  Packet with version not IPv4

   o  Packet with header length less than 5 words

   o  Packet with total length field less than 20 bytes

   o  Packet with invalid checksum

   o  Packet with invalid source address

   o  Packet with invalid destination address
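
   The failure checks listed above can be summarized by the following
   non-normative C sketch; it is a simplified illustration of the
   listed error conditions, not a complete [RFC1812] validator, and the
   checksum and address helpers are assumed prototypes that are not
   defined here.

   /* Illustrative sketch of the IPv4Validator "FailOut" checks. */
   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   struct Ipv4Header {               /* only the fields checked here  */
       uint8_t  version_ihl;         /* version (4 bits) + IHL        */
       uint16_t total_length;        /* host byte order, for brevity  */
       uint16_t checksum;
       uint32_t src_addr;
       uint32_t dst_addr;
   };

   /* Assumed helpers; their realization is implementation specific. */
   bool ipv4_checksum_ok(const struct Ipv4Header *h, size_t pkt_len);
   bool ipv4_addr_invalid(uint32_t addr);

   static bool ipv4_validation_fails(const struct Ipv4Header *h,
                                     size_t pkt_len)
   {
       if (pkt_len < 20)                    return true; /* < 20 bytes */
       if ((h->version_ihl >> 4) != 4)      return true; /* not IPv4   */
       if ((h->version_ihl & 0x0f) < 5)     return true; /* IHL < 5    */
       if (h->total_length < 20)            return true; /* bad length */
       if (!ipv4_checksum_ok(h, pkt_len))   return true; /* checksum   */
       if (ipv4_addr_invalid(h->src_addr))  return true; /* src addr   */
       if (ipv4_addr_invalid(h->dst_addr))  return true; /* dst addr   */
       return false;  /* continue on the unicast/multicast or
                         exception path                               */
   }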

5.2.1.2. Components
This LFB has only one struct component, the IPv4ValidatorStatisticsType, which defines a set of statistics for the validation process, including the number of bad header packets, the number of bad total length packets, the number of bad TTL packets, and the number of bad checksum packets. This statistics component is optional to implementers.
5.2.1.3. Capabilities
This LFB does not have a list of capabilities.
5.2.1.4. Events
This LFB does not have any events specified.

5.2.2. IPv6Validator

The IPv6Validator LFB performs IPv6 packet validation.
5.2.2.1. Data Handling
This LFB performs IPv6 validation according to [RFC2460] and its updates. The IPv6 packet will then be output to the corresponding port according to the validation result, indicating whether the packet is a unicast or a multicast one, whether an exception has occurred, or whether the validation failed.

This LFB always expects, as input, packets that have been indicated as IPv6 packets by an upstream LFB, like an EtherClassifier LFB. There is no specific metadata expected by the input of the LFB.

Similar to the IPv4Validator LFB, the IPv6Validator LFB also defines four output ports to emit packets with various validation results.

All validated IPv6 unicast packets will be output at the singleton port known as "IPv6UnicastOut". All validated IPv6 multicast packets will be output at the singleton port known as "IPv6MulticastOut". There is no metadata produced at this LFB.

A singleton port known as "ExceptionOut" is defined to output packets that have been validated as exception packets. An exception case is when a packet needs further processing before being normally forwarded. An exception ID metadata is produced to indicate what caused the exception. Currently defined exception types include:

o  Packet with hop limit set to zero

o  Packet with next header set to hop-by-hop

o  Packet with exceptional source address

o  Packet with exceptional destination address
   The final singleton port known as "FailOut" is defined for all
   packets that have errors and failed the validation process.  An
   error case is when a packet is unable to be further processed or
   forwarded without being dropped.  A validate error ID is associated
   with every
   failed packet to indicate the reason.  Currently defined reasons
   include:

   o  Packet with size reported less than 40 bytes

   o  Packet with version not IPv6

   o  Packet with invalid source address

   o  Packet with invalid destination address

   Note that in the base type library, definitions for exception ID and
   validate error ID metadata are applied to both IPv4Validator and
   IPv6Validator LFBs, i.e., the two LFBs share the same metadata
   definition, with different ID assignment inside.

5.2.2.2. Components
This LFB has only one struct component, the IPv6ValidatorStatisticsType, which defines a set of statistics for the validation process, including the number of bad header packets, the number of bad total length packets, and the number of bad hop limit packets. Note that this component is optional to implementers.
5.2.2.3. Capabilities
This LFB does not have a list of capabilities.
5.2.2.4. Events
This LFB does not have any events specified.

5.3. IP Forwarding LFBs

IP Forwarding LFBs are specifically defined to abstract the IP forwarding processes. As definitions for a base LFB library, this document restricts its LFB definition scope only to IP unicast forwarding. IP multicast may be defined in future documents.

The two fundamental tasks performed in IP unicast forwarding are looking up the forwarding information table to find next-hop information and then using the resulting next-hop details to forward packets out on specific physical output ports. This document models the forwarding processes by abstracting out the described two
   steps.  Whereas this document describes functional LFB models that
   are modular, there may be multiple ways to implement the abstracted
   models.  It is not intended or expected that the provided LFB models
   constrain implementations.

   Based on the IP forwarding abstraction, two kinds of typical IP
   unicast forwarding LFBs are defined: unicast LPM lookup LFB and next-
   hop application LFB.  They are further distinguished by IPv4 and IPv6
   protocols.

5.3.1. IPv4UcastLPM

The IPv4UcastLPM LFB abstracts the IPv4 unicast Longest Prefix Match (LPM) process.

This LFB also provides facilities to support users in implementing equal-cost multipath (ECMP) routing or reverse path forwarding (RPF). However, this LFB itself does not provide ECMP or RPF. To fully implement ECMP or RPF, additional specific LFBs, like a specific ECMP LFB or an RPF LFB, will have to be defined.
5.3.1.1. Data Handling
This LFB performs the IPv4 unicast LPM table lookup. It always expects as input IPv4 unicast packets from one singleton input known as "PktsIn". The LFB then uses the destination IPv4 address of every packet as a search key to look up the IPv4 prefix table and generate a hop selector as the matching result. The hop selector is passed as packet metadata to downstream LFBs and will usually be used there as a search index to find more next-hop information.

Three singleton output LFB ports are defined.

The first singleton output is known as "NormalOut" and outputs IPv4 unicast packets that succeed in the LPM lookup (and obtained a hop selector). The hop selector is associated with the packet as a metadata. Downstream from the LPM LFB is usually a next-hop application LFB, like an IPv4NextHop LFB.

The second singleton output is known as "ECMPOut" and is defined to provide support for users wishing to implement ECMP.

An ECMP flag is defined in the LPM table to enable the LFB to support ECMP. When a table entry is created with the flag set to true, it indicates this table entry is for ECMP only. A packet that has passed through this prefix lookup will always be output from the "ECMPOut" output port, with the hop selector being its lookup result. The output will usually go directly to a downstream ECMP processing
   LFB, where the hop selector can be used to further generate one or
   more optimized next-hop routes by use of ECMP algorithms.

   A default route flag is defined in the LPM table to enable the LFB to
   support a default route as well as loose RPF.  When this flag is set
   to true, the table entry is identified as a default route, which also
   implies that the route is forbidden for RPF.  If a user wants to
   implement RPF on an FE, a specific RPF LFB will have to be defined.
   In
   such an RPF LFB, a component can be defined as an alias of the prefix
   table component of this LFB, as described below.

   The final singleton output is known as "ExceptionOut" of the
   IPv4UcastLPM LFB and is defined to output exception packets after the
   LFB processing, along with an ExceptionID metadata to indicate what
   caused the exception.  Currently defined exception types include:

   o  The packet failed the LPM lookup of the prefix table.

   The upstream LFB of this LFB is usually an IPv4Validator LFB.  If RPF
   is to be adopted, the upstream can be an RPF LFB, when defined.

   The downstream LFB is usually an IPv4NextHop LFB.  If ECMP is
   adopted, the downstream can be an ECMP LFB, when defined.

5.3.1.2. Components
This LFB has two components.

The IPv4PrefixTable component is defined as an array component of the LFB. Each row of the array contains an IPv4 address, a prefix length, a hop selector, an ECMP flag, and a default route flag. The LFB uses the destination IPv4 address of every input packet as a search key to look up this table in order to extract a next-hop selector. The ECMP flag is for the LFB to support ECMP. The default route flag is for the LFB to support a default route and for loose RPF.

The IPv4UcastLPMStats component is a struct component that collects statistics information, including the total number of input packets received, the IPv4 packets forwarded by this LFB, and the number of IP datagrams discarded due to no route being found. Note that this component is defined as optional to implementers.
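
   Below is a non-normative C sketch of an IPv4PrefixTable row and of a
   naive longest-prefix-match over it, including the ECMP and default
   route flags described above; a real implementation would typically
   use a trie or similar structure rather than the linear scan shown,
   and all names are illustrative assumptions.

   /* Illustrative sketch of the IPv4UcastLPM prefix table lookup. */
   #include <stdbool.h>
   #include <stdint.h>

   struct Ipv4PrefixRow {
       uint32_t prefix;              /* IPv4 address, host byte order */
       uint8_t  prefix_len;          /* 0..32                         */
       uint32_t hop_selector;        /* metadata passed downstream    */
       bool     ecmp_flag;           /* entry is for ECMP only        */
       bool     default_route_flag;  /* default route; excluded from
                                        (loose) RPF                   */
   };

   /* Returns the best-matching row, or NULL on a miss (ExceptionOut).
    * The caller sends the packet to ECMPOut when ecmp_flag is set and
    * to NormalOut otherwise, carrying hop_selector as metadata. */
   static const struct Ipv4PrefixRow *
   lpm_lookup(const struct Ipv4PrefixRow *t, unsigned n, uint32_t dst)
   {
       const struct Ipv4PrefixRow *best = NULL;
       for (unsigned i = 0; i < n; i++) {
           uint32_t mask = t[i].prefix_len ?
               0xffffffffu << (32 - t[i].prefix_len) : 0;
           if ((dst & mask) == (t[i].prefix & mask) &&
               (best == NULL || t[i].prefix_len > best->prefix_len))
               best = &t[i];
       }
       return best;
   }
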
5.3.1.3. Capabilities
This LFB does not have a list of capabilities.
5.3.1.4. Events
This LFB does not have any events specified.

5.3.2. IPv4NextHop

This LFB abstracts the process of selecting the IPv4 next-hop action.
5.3.2.1. Data Handling
The LFB abstracts the process of applying next-hop information to IPv4 packets. It receives an IPv4 packet with an associated next-hop identifier (HopSelector) and uses the identifier as a table index to look up a next-hop table to find an appropriate LFB output port.

The LFB is expected to receive unicast IPv4 packets, via a singleton input known as "PktsIn", along with a HopSelector metadata, which is used as a table index to look up the NextHop table. The data processing involves the forwarding TTL decrement and IP checksum recalculation.

Two output LFB ports are defined.

The first output is a group output port known as "SuccessOut". On successful data processing, the packet is sent out from an LFB port within the LFB port group as selected by the LFBOutputSelectIndex value of the matched table entry. The packet is sent to a downstream LFB along with the L3PortID and MediaEncapInfoIndex metadata.

The second output is a singleton output port known as "ExceptionOut", which will output packets for which the data processing failed, along with an additional ExceptionID metadata to indicate what caused the exception. Currently defined exception types include:

o  The HopSelector for the packet is invalid.

o  The packet failed lookup of the next-hop table even though the HopSelector is valid.

o  The MTU for the outgoing interface is less than the packet size.

Downstream LFB instances could be either a BasicMetadataDispatch type (Section 5.5.1), used to fan out to different LFB instances, or a media-encapsulation-related type, such as an EtherEncap type or a RedirectOut type (Section 5.4.2). For example, if there are Ethernet and other tunnel encapsulations, then a BasicMetadataDispatch LFB can
   use the L3PortID metadata (Section 5.3.2.2) to dispatch packets to a
   different encapsulator.

5.3.2.2. Components
This LFB has only one component, IPv4NextHopTable, which is defined as an array. The HopSelector received is used to match the array index of IPv4NextHopTable to find a row of the table that holds the next-hop information result. Each row of the array is a struct containing:

o  The L3PortID, which is the ID of the logical output port that is passed on to the downstream LFB instance. This ID indicates what kind of encapsulating port the neighbor is to use. This is L3-derived information that affects L2 processing and so needs to be passed from one LFB to another as metadata. Usually, this ID is used by the next-hop LFB to distinguish packets that need different L2 encapsulations. For instance, some packets may require general Ethernet encapsulation while others may require various types of tunnel encapsulations. In such a case, different L3PortIDs are assigned to the packets and are passed as metadata to a downstream LFB. A BasicMetadataDispatch LFB (Section 5.5.1) may have to be applied as the downstream LFB so as to dispatch packets to different encapsulation LFB instances according to the L3PortIDs.

o  MTU, the Maximum Transmission Unit for the outgoing port.

o  NextHopIPAddr, the IPv4 next-hop address.

o  MediaEncapInfoIndex, the index that is passed on to the downstream encapsulation LFB instance and that is used there as a search key to look up a table (typically media-encapsulation-related) for further encapsulation information. The search key looks up the table by matching the table index. Note that the encapsulation LFB instance that uses this metadata may not be the LFB instance that immediately follows this LFB instance in the processing. The MediaEncapInfoIndex metadata is attached here and is passed through intermediate LFBs until it is used by the encapsulation LFB instance. In some cases, depending on implementation, the CE may set the MediaEncapInfoIndex passed downstream to a value that will fail lookup when it gets to a target encapsulation LFB; such a lookup failure at that point is an indication that further resolution is needed. For an example of this approach, refer to Section 7.2, which discusses ARP and mentions this approach.
   o  LFBOutputSelectIndex, the LFB group output port index to select
      the downstream LFB port.  This value identifies the specific port
      within the SuccessOut port group out of which packets that
      successfully use this next-hop entry are to be sent.
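
   The next-hop entry just described might be summarized by the
   following non-normative C sketch; the field types are assumptions
   based on the prose above, and the checksum helper mentioned for the
   TTL decrement is an assumed prototype that is not defined here.

   /* Illustrative sketch of an IPv4NextHopTable entry.  The received
    * HopSelector metadata is used directly as the array index. */
   #include <stdint.h>

   struct Ipv4NextHopRow {
       uint32_t l3_port_id;              /* passed downstream as
                                            L3PortID metadata         */
       uint16_t mtu;                     /* MTU of the outgoing port  */
       uint32_t next_hop_ip_addr;        /* IPv4 next-hop address     */
       uint32_t media_encap_info_index;  /* key into a downstream
                                            encapsulation table       */
       uint32_t lfb_output_select_index; /* SuccessOut group index    */
   };

   /* Per Section 5.3.2.1, successful processing also decrements the
    * TTL and recalculates the IP checksum (helper assumed). */
   void ipv4_recompute_checksum(uint8_t *ip_header);

   static void apply_forwarding_updates(uint8_t *ip_header)
   {
       ip_header[8]--;               /* TTL is byte 8 of the header   */
       ipv4_recompute_checksum(ip_header);
   }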

5.3.2.3. Capabilities
This LFB does not have a list of capabilities.
5.3.2.4. Events
This LFB does not have any events specified.

5.3.3. IPv6UcastLPM

The IPv6UcastLPM LFB abstracts the IPv6 unicast Longest Prefix Match (LPM) process. The definition of this LFB is similar to the IPv4UcastLPM LFB except that all IP addresses refer to IPv6 addresses.

This LFB also provides facilities to support users in implementing equal-cost multipath (ECMP) routing or reverse path forwarding (RPF). However, this LFB itself does not provide ECMP or RPF. To fully implement ECMP or RPF, additional specific LFBs, like a specific ECMP LFB or an RPF LFB, will have to be defined. This work may be done in future versions of this document.
5.3.3.1. Data Handling
This LFB performs the IPv6 unicast LPM table lookup. It always expects as input IPv6 unicast packets from one singleton input known as "PktsIn". The destination IPv6 address of an incoming packet is used as a search key to look up the IPv6 prefix table and generate a hop selector. This hop selector result is associated with the packet as a metadata and sent to downstream LFBs; it will usually be used in downstream LFBs as a search key to find more next-hop information.

Three singleton output LFB ports are defined.

The first singleton output is known as "NormalOut" and outputs IPv6 unicast packets that succeed in the LPM lookup (and obtained a hop selector). The hop selector is associated with the packet as a metadata. Downstream from the LPM LFB is usually a next-hop application LFB, like an IPv6NextHop LFB.

The second singleton output is known as "ECMPOut" and is defined to provide support for users wishing to implement ECMP.
   An ECMP flag is defined in the LPM table to enable the LFB to support
   ECMP.  When a table entry is created with the flag set to true, it
   indicates this table entry is for ECMP only.  A packet that has
   passed through this prefix lookup will always be output from the
   "ECMPOut" output port, with the hop selector being its lookup result.
   The output will usually go directly to a downstream ECMP processing
   LFB, where the hop selector can be used to further generate one or
   more optimized next-hop routes by use of ECMP algorithms.

   A default route flag is defined in the LPM table to enable the LFB to
   support a default route as well as loose RPF.  When this flag is set
   to true, the table entry is identified as a default route, which also
   implies that the route is forbidden for RPF.

   If a user wants to implement RPF on an FE, a specific RPF LFB will have
   to be defined.  In such an RPF LFB, a component can be defined as an
   alias of the prefix table component of this LFB, as described below.

   The final singleton output is known as "ExceptionOut" of the
   IPv6UcastLPM LFB and is defined to output exception packets after the
   LFB processing, along with an ExceptionID metadata to indicate what
   caused the exception.  Currently defined exception types include:

   o  The packet failed the LPM lookup of the prefix table.

   The upstream LFB of this LFB is usually an IPv6Validator LFB.  If RPF
   is to be adopted, the upstream can be an RPF LFB, when defined.

   The downstream LFB is usually an IPv6NextHop LFB.  If ECMP is
   adopted, the downstream can be an ECMP LFB, when defined.

5.3.3.2. Components
This LFB has two components.

The IPv6PrefixTable component is defined as an array component of the LFB. Each row of the array contains an IPv6 address, a prefix length, a hop selector, an ECMP flag, and a default route flag. The ECMP flag is so the LFB can support ECMP. The default route flag is for the LFB to support a default route and for loose RPF, as described earlier.

The IPv6UcastLPMStats component is a struct component that collects statistics information, including the total number of input packets received, the IPv6 packets forwarded by this LFB, and the number of IP datagrams discarded due to no route being found. Note that the component is defined as optional to implementers.
5.3.3.3. Capabilities
This LFB does not have a list of capabilities.
5.3.3.4. Events
This LFB does not have any events specified.

5.3.4. IPv6NextHop

This LFB abstracts the process of selecting the IPv6 next-hop action.
5.3.4.1. Data Handling
The LFB abstracts the process of applying next-hop information to IPv6 packets. It receives an IPv6 packet with an associated next-hop identifier (HopSelector) and uses the identifier to look up a next-hop table to find an appropriate output port from the LFB.

The LFB is expected to receive unicast IPv6 packets, via a singleton input known as "PktsIn", along with a HopSelector metadata, which is used as a table index to look up the next-hop table.

Two output LFB ports are defined.

The first output is a group output port known as "SuccessOut". On successful data processing, the packet is sent out from an LFB port within the LFB port group as selected by the LFBOutputSelectIndex value of the matched table entry. The packet is sent to a downstream LFB along with the L3PortID and MediaEncapInfoIndex metadata.

The second output is a singleton output port known as "ExceptionOut", which will output packets for which the data processing failed, along with an additional ExceptionID metadata to indicate what caused the exception. Currently defined exception types include:

o  The HopSelector for the packet is invalid.

o  The packet failed lookup of the next-hop table even though the HopSelector is valid.

o  The MTU for the outgoing interface is less than the packet size.

Downstream LFB instances could be either a BasicMetadataDispatch type, used to fan out to different LFB instances, or a media-encapsulation-related type, such as an EtherEncap type or a RedirectOut type. For example, when the downstream LFB is
   BasicMetadataDispatch and Ethernet and other tunnel encapsulations
   exist downstream from the BasicMetadataDispatch, then the
   BasicMetadataDispatch LFB can use the L3PortID metadata (see section
   below) to dispatch packets to the different encapsulator LFBs.

5.3.4.2. Components
This LFB has only one component named IPv6NextHopTable, which is defined as an array. The array index of IPv6NextHopTable is used for a HopSelector to find a row of the table that holds the next-hop information. Each row of the array is a struct containing:

o  The L3PortID, which is the ID of the logical output port that is passed on to the downstream LFB instance. This ID indicates what kind of encapsulating port the neighbor is to use. This is L3-derived information that affects L2 processing and so needs to be passed from one LFB to another as metadata. Usually, this ID is used by the next-hop LFB to distinguish packets that need different L2 encapsulations. For instance, some packets may require general Ethernet encapsulation while others may require various types of tunnel encapsulations. In such a case, different L3PortIDs are assigned to the packets and are passed as metadata to a downstream LFB. A BasicMetadataDispatch LFB (Section 5.5.1) may have to be applied as the downstream LFB so as to dispatch packets to different encapsulation LFB instances according to the L3PortIDs.

o  MTU, the Maximum Transmission Unit for the outgoing port.

o  NextHopIPAddr, the IPv6 next-hop address.

o  MediaEncapInfoIndex, the index that is passed on to the downstream encapsulation LFB instance and that is used there as a search key to look up a table (typically media-encapsulation-related) for further encapsulation information. The search key looks up the table by matching the table index. Note that the encapsulation LFB instance that uses this metadata may not be the LFB instance that immediately follows this LFB instance in the processing. The MediaEncapInfoIndex metadata is attached here and is passed through intermediate LFBs until it is used by the encapsulation LFB instance. In some cases, depending on implementation, the CE may set the MediaEncapInfoIndex passed downstream to a value that will fail lookup when it gets to a target encapsulation LFB; such a lookup failure at that point is an indication that further resolution is needed. For an example of this approach, refer to Section 7.2, which discusses ARP and mentions this approach.
   o  LFBOutputSelectIndex, the LFB group output port index to select
      the downstream LFB port.  This value identifies the specific port
      within the SuccessOut port group out of which packets that
      successfully use this next-hop entry are to be sent.

5.3.4.3. Capabilities
This LFB does not have a list of capabilities.
5.3.4.4. Events
This LFB does not have any events specified.

5.4. Redirect LFBs

Redirect LFBs abstract the data packet transportation process between the CE and FE. Some packets output from some LFBs may have to be delivered to the CE for further processing, and some packets generated by the CE may have to be delivered to the FE and further to some specific LFBs for data path processing.

According to [RFC5810], data packets and their associated metadata are encapsulated in a ForCES redirect message for transportation between CE and FE. We define two LFBs to abstract the process: a RedirectIn LFB and a RedirectOut LFB. Usually, in an LFB topology of an FE, only one RedirectIn LFB instance and one RedirectOut LFB instance exist.

5.4.1. RedirectIn

The RedirectIn LFB abstracts the process for the CE to inject data packets into the FE data path.
5.4.1.1. Data Handling
A RedirectIn LFB abstracts the process for the CE to inject data packets into the FE LFB topology so as to input data packets into FE data paths. From the LFB topology's point of view, the RedirectIn LFB acts as a source point for data packets coming from the CE; therefore, the RedirectIn LFB is defined with a single output LFB port (and no input LFB port).

The single output port of the RedirectIn LFB is defined as a group output type with the name of "PktsOut". Packets produced by this output will have arbitrary frame types decided by the CE that generated the packets. Possible frames may include IPv4, IPv6, or ARP protocol packets. The CE may associate some metadata to indicate the frame types and may also associate other metadata to indicate various information on the packets. Among them, there MUST exist a RedirectIndex metadata, which is an integer acting as an index. When
   the CE transmits the metadata along with the packet to a RedirectIn
   LFB, the LFB will read the RedirectIndex metadata and output the
   packet to one of its group output port instances, whose port index is
   indicated by this metadata.  Any other metadata, in addition to
   RedirectIndex, will be passed untouched, along with the packet
   delivered by the CE, to the downstream LFB.  This means the
   RedirectIndex metadata from the CE will be "consumed" by the
   RedirectIn LFB and will not be passed to the downstream LFB.  Note
   that a packet from the CE without a
   RedirectIndex metadata associated will be dropped by the LFB.  Note
   that all metadata visible to the LFB need to be global and IANA
   controlled.  See Section 8 ("IANA Considerations") of this document
   for more details about a metadata ID space that can be used by
   vendors and is "Reserved for Private Use".
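
   A minimal, non-normative C sketch of the behavior just described
   (read the RedirectIndex metadata, consume it, select the PktsOut
   group instance, and drop the packet if the metadata is absent) is
   shown below; the function and parameter names are illustrative.

   /* Illustrative sketch of RedirectIn output selection. */
   #include <stdbool.h>
   #include <stdint.h>

   /* Returns true and sets *out_port_index if the packet carries a
    * RedirectIndex metadata; returns false (packet dropped) otherwise.
    * The RedirectIndex metadata itself is consumed and is not passed
    * to the downstream LFB; all other metadata is passed untouched. */
   static bool redirectin_select_output(bool has_redirect_index,
                                        uint32_t redirect_index,
                                        uint32_t *out_port_index)
   {
       if (!has_redirect_index)
           return false;         /* drop: no RedirectIndex metadata   */
       *out_port_index = redirect_index;  /* PktsOut group instance   */
       return true;
   }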

5.4.1.2. Components
An optional statistics component is defined to collect the number of packets received by the LFB from the CE. There are no other components defined for the current version of the LFB.
5.4.1.3. Capabilities
This LFB does not have a list of capabilities.
5.4.1.4. Events
This LFB does not have any events specified.

5.4.2. RedirectOut

The RedirectOut LFB abstracts the process for LFBs in the FE to deliver data packets to the CE.
5.4.2.1. Data Handling
A RedirectOut LFB abstracts the process for LFBs in the FE to deliver data packets to the CE. From the LFB topology's point of view, the RedirectOut LFB acts as a sink point for data packets going to the CE; therefore, the RedirectOut LFB is defined with a single input LFB port (and no output LFB port).

The RedirectOut LFB has only one singleton input, known as "PktsIn", but it is capable of receiving packets from multiple LFBs by multiplexing this input. The input expects any kind of frame type; therefore, the frame type has been specified as arbitrary, and all types of metadata are expected. All associated metadata produced (but not consumed) by previously processed LFBs should be delivered to the CE via the ForCES protocol redirect message [RFC5810]. The CE
   can decide how to process the redirected packet by referencing the
   associated metadata.  As an example, a packet could be redirected by
   the FE to the CE because the EtherEncap LFB is not able to resolve L2
   information.  The metadata "ExceptionID" created by the EtherEncap
   LFB is passed along with the packet and should be sufficient for the
   CE to do the necessary processing and resolve the L2 entry required.
   Note that all metadata visible to the LFB need to be global and IANA
   controlled.  See Section 8 ("IANA Considerations") of this document
   for more details about a metadata ID space that can be used by
   vendors and is "Reserved for Private Use".

5.4.2.2. Components
An optional statistics component is defined to collect the number of packets sent by the LFB to the CE. There are no other components defined for the current version of the LFB.
5.4.2.3. Capabilities
This LFB does not have a list of capabilities.
5.4.2.4. Events
This LFB does not have any events specified.

5.5. General Purpose LFBs

5.5.1. BasicMetadataDispatch

The BasicMetadataDispatch LFB is defined to abstract the process in which a packet is dispatched to some output path based on its associated metadata value.
5.5.1.1. Data Handling
The BasicMetadataDispatch LFB has only one singleton input, known as "PktsIn". Every input packet should be associated with a metadata that will be used by the LFB to do the dispatch.

This LFB contains a metadata ID and a dispatch table named MetadataDispatchTable, both configured by the CE. The metadata ID specifies which metadata is to be used for dispatching packets. The MetadataDispatchTable contains entries of a metadata value and an OutputIndex, specifying that a packet with the metadata value must go out from the LFB group output port instance with the OutputIndex.

Two output LFB ports are defined.
   The first output is a group output port known as "PktsOut".  A packet
   with its associated metadata having found an OutputIndex by
   successfully looking up the dispatch table will be output to the
   group port instance with the corresponding index.

   The second output is a singleton output port known as "ExceptionOut",
   which will output packets for which the data processing failed, along
   with an additional ExceptionID metadata to indicate what caused the
   exception.  Currently defined exception types only include one case:

   o  There is no matching when looking up the metadata dispatch table.

   As an example, if the CE decides to dispatch packets according to a
   physical port ID (PHYPortID), the CE may set the ID of PHYPortID
   metadata to the LFB first.  Moreover, the CE also sets the PHYPortID
   actual values (the metadata values) and assigned OutputIndex for the
   values to the dispatch table in the LFB.  When a packet arrives, a
   PHYPortID metadata is found associated with the packet, and the
   metadata value is further used as a key to look up the dispatch table
   to find out an output port instance for the packet.

   Currently, the BasicMetadataDispatch LFB only allows the metadata
   value of the dispatch table entry to be a 32-bit integer.  A metadata
   with other value types is not supported in this version.  A more
   complex metadata dispatch LFB may be defined in future versions of
   the library.  In that LFB, multiple tuples of metadata with more
   value types supported may be used to dispatch packets.

5.5.1.2. Components
This LFB has two components. One component is MetadataID and the other is MetadataDispatchTable. Each row entry of the dispatch table is a struct containing the metadata value and the OutputIndex. Note that currently, the metadata value is only allowed to be a 32-bit integer.

The metadata value is also defined as a content key for the table. A content key is a search key for tables, as defined in the ForCES FE model [RFC5812]. With the content key, the CE can manipulate the table by means of a specific metadata value rather than by the table index only. See the ForCES FE model [RFC5812] and also the ForCES protocol [RFC5810] for more details on the definition and use of a content key.
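
   The dispatch operation can be illustrated with the following
   non-normative C sketch; the linear table walk and the field names
   are simplifying assumptions about the MetadataDispatchTable
   described above (which, per the FE model, is additionally
   manipulable by the CE via a content key).

   /* Illustrative sketch of the BasicMetadataDispatch lookup. */
   #include <stdbool.h>
   #include <stdint.h>

   struct MetadataDispatchRow {
       uint32_t metadata_value;      /* 32-bit value, the content key */
       uint32_t output_index;        /* PktsOut group instance index  */
   };

   /* Looks up the packet's value of the configured MetadataID.  On a
    * match the packet goes to PktsOut[*output_index]; on a miss it
    * goes to ExceptionOut with an ExceptionID metadata. */
   static bool metadata_dispatch(const struct MetadataDispatchRow *t,
                                 unsigned n, uint32_t metadata_value,
                                 uint32_t *output_index)
   {
       for (unsigned i = 0; i < n; i++)
           if (t[i].metadata_value == metadata_value) {
               *output_index = t[i].output_index;
               return true;
           }
       return false;
   }
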
5.5.1.3. Capabilities
This LFB does not have a list of capabilities.
5.5.1.4. Events
This LFB does not have any events specified.

5.5.2. GenericScheduler

This is a preliminary generic scheduler LFB for abstracting a simple scheduling process.
5.5.2.1. Data Handling
There exist various kinds of scheduling strategies with various implementations. As a base LFB library, this document only defines a preliminary generic scheduler LFB for abstracting a simple scheduling process. Users may use this LFB as a basic LFB to further construct more complex scheduler LFBs by means of "inheritance", as described in [RFC5812].

Packets of any arbitrary frame type are received via a group input known as "PktsIn" with no additional metadata expected. This group input is capable of multiple input port instances. Each port instance may be connected to a different upstream LFB output. Inside the LFB, it is abstracted that each input port instance is connected to a queue, and the queue is marked with a queue ID whose value is exactly the same as the index of the corresponding group input port instance. Scheduling disciplines are applied to all queues and also all packets in the queues.

The group input port property PortGroupLimits in ObjectLFB, as defined by the ForCES FE model [RFC5812], provides a means for the CE to query the total number of queues the scheduler supports. The CE can then decide how many queues it may use for a scheduling application.

Scheduled packets are output from a singleton output port of the LFB known as "PktsOut" with no corresponding metadata.

More complex scheduler LFBs may be defined with more complex scheduling disciplines by inheriting from this LFB. For instance, a priority scheduler LFB may be defined by inheriting this LFB and defining a component to indicate priorities for all input queues.
5.5.2.2. Components
The SchedulingDiscipline component is for the CE to specify a scheduling discipline to the LFB. Currently defined scheduling disciplines only include Round Robin (RR) strategy. The default scheduling discipline is thus RR.
   The QueueStats component is defined to allow the CE to query the
   status of every queue in the scheduler.  It is an array component,
   and each row of the array is a struct containing a queue ID.
   Currently defined queue status includes the queue depth in packets
   and the queue depth in bytes.  Using the queue ID as the index, the
   CE can query every queue for its used length in units of packets or
   bytes.  Note that the QueueStats component is defined as optional to
   implementers.
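
   As a rough illustration of the components above, the following
   non-normative C sketch shows a QueueStats-style row and a simple
   round-robin selection over the input queues; it is an
   assumption-laden simplification, and all names are illustrative.

   /* Illustrative sketch of GenericScheduler state and RR selection. */
   #include <stdint.h>

   struct QueueStatsRow {            /* optional QueueStats row       */
       uint32_t queue_id;            /* equals the input port index   */
       uint32_t queue_depth_in_packets;
       uint32_t queue_depth_in_bytes;
   };

   struct GenericScheduler {
       uint32_t scheduling_discipline;  /* currently only RR          */
       struct QueueStatsRow *queues;
       uint32_t num_queues;             /* bounded by PortGroupLimits */
       uint32_t rr_cursor;              /* next queue to visit        */
   };

   /* Round-robin: return the ID of the next non-empty queue to serve,
    * or -1 if every queue is empty. */
   static int rr_next_queue(struct GenericScheduler *s)
   {
       for (uint32_t i = 0; i < s->num_queues; i++) {
           uint32_t q = (s->rr_cursor + i) % s->num_queues;
           if (s->queues[q].queue_depth_in_packets > 0) {
               s->rr_cursor = (q + 1) % s->num_queues;
               return (int)s->queues[q].queue_id;
           }
       }
       return -1;
   }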

5.5.2.3. Capabilities
The following capability is currently defined for the GenericScheduler:

o  The queue length limit providing the storage ability for every queue.
5.5.2.4. Events
This LFB does not have any events specified.

