A RPL DODAG is formed of a Root, a collection of routers, and leaves that are hosts. Hosts are nodes that do not forward packets that they did not generate. RPL-aware leaves will participate in RPL to advertise their own addresses, whereas RPL-unaware leaves depend on a connected RPL router to do so. RPL interacts with 6LoWPAN ND at multiple levels, in particular at the Root and in the RPL-unaware leaves.
RPL needs a set of information to advertise a leaf node through a Destination Advertisement Object (DAO) message and establish reachability.
[
RFC 9010] details the basic interaction of 6LoWPAN ND and RPL and enables a plain 6LN that supports [
RFC 8505] to obtain return connectivity via the RPL network as a RPL-unaware leaf. The leaf indicates that it requires reachability services for the Registered Address from a Routing Registrar by setting an 'R' flag in the Extended Address Registration Option [
RFC 8505], and it provides a TID that maps to the "Path Sequence" defined in
Section 6.7.8 of
RFC 6550, and its operation is defined in
Section 7.2 of
RFC 6550.
[
RFC 9010] also enables the leaf to signal with the RPLInstanceID that it wants to participate by using the Opaque field of the EARO. On the backbone, the RPLInstanceID is expected to be mapped to an overlay that matches the RPL Instance, e.g., a Virtual LAN (VLAN) or a virtual routing and forwarding (VRF) instance.
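As an illustration, the sketch below shows how a leaf might encode such an EARO. This is an informative Python sketch that assumes the RFC 8505 field layout (abridged); it is not a normative encoding, and the parameter names are illustrative.

import struct

def build_earo(status, opaque, i_field, r_flag, t_flag, tid,
               lifetime, rovr):
    # EARO layout per RFC 8505 (abridged): Type=33, Length, Status,
    # Opaque, Rsvd/I/R/T flags, TID, Registration Lifetime, ROVR.
    assert len(rovr) in (8, 16, 24, 32)          # 64- to 256-bit ROVR
    flags = ((i_field & 0x3) << 2) | ((r_flag & 1) << 1) | (t_flag & 1)
    length = (8 + len(rovr)) // 8                # units of 8 octets
    return struct.pack("!BBBBBBH", 33, length, status, opaque,
                       flags, tid & 0xFF, lifetime) + rovr

# A leaf requesting reachability sets r_flag=1 and, per RFC 9010,
# may carry the RPLInstanceID in the Opaque field with the matching
# I field.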
Although, at the time of this writing, the above specification enables a model where the separation is possible, this architecture recommends co-locating the functions of the 6LBR and the RPL Root.
With 6LoWPAN ND [
RFC 6775], information on the 6LBR is disseminated via an Authoritative Border Router Option (ABRO) in RA messages. [
RFC 8505] extends [
RFC 6775] to enable a registration for routing and proxy ND. The capability to support [
RFC 8505] is indicated in the 6LoWPAN Capability Indication Option (6CIO). The discovery and liveliness of the RPL Root are obtained through RPL [
RFC 6550] itself.
When 6LoWPAN ND is coupled with RPL, the 6LBR and RPL Root functionalities are co-located so that the address of the 6LBR can be indicated in RPL DODAG Information Object (DIO) messages and so that the ROVR from the Extended Duplicate Address Request/Confirmation (EDAR/EDAC) exchange [RFC 8505] can be associated with the state that is maintained by RPL.
Section 7 of
RFC 9010 specifies how the DAO messages are used to reconfirm the registration, thus eliminating a duplication of functionality between DAO and EDAR/EDAC messages, as illustrated in
Figure 6. [
RFC 9010] also provides the protocol elements that are needed when the 6LBR and RPL Root functionalities are not co-located.
Even though the Root of the RPL network is integrated with the 6LBR, it is logically separated from the Backbone Router (6BBR) that is used to connect the 6TiSCH LLN to the backbone. This way, the Root has all information from 6LoWPAN ND and RPL about the LLN devices attached to it.
This architecture also expects that the Root of the RPL network (proxy-)registers the 6TiSCH nodes on their behalf to the 6BBR, for whatever operation the 6BBR performs on the backbone, such as ND proxy or redistribution in a routing protocol. This relies on an extension of the 6LoWPAN ND registration described in [
RFC 8929].
This model supports the movement of a 6TiSCH device across the multi-link subnet and allows the proxy registration of 6TiSCH nodes deep into the 6TiSCH LLN by the 6LBR / RPL Root. This is why in [
RFC 8505] the Registered Address is signaled in the Target Address field of the Neighbor Solicitation (NS) message as opposed to the IPv6 Source Address, which, in the case of a proxy registration, is that of the 6LBR / RPL Root itself.
A new device, called the pledge, undergoes the join protocol to become a node in a 6TiSCH network. This usually occurs only once when the device is first powered on. The pledge communicates with the Join Registrar/Coordinator (JRC) of the network through a Join Proxy (JP), a radio neighbor of the pledge.
The JP is discovered through MAC-layer beacons. When multiple JPs from possibly multiple networks are visible, proceeding by trial and error until an acceptable position in the right network is obtained becomes inefficient. [RFC 9032] adds a new subtype in the Information Element that was delegated to the IETF [RFC 8137] and provides visibility into the network that can be joined and the willingness of the JP and the Root to be used by the pledge.
The join protocol provides the following functionality:
-
Mutual authentication
-
Authorization
-
Parameter distribution to the pledge over a secure channel
The Minimal Security Framework for 6TiSCH [
RFC 9031] defines the minimal mechanisms required for this join process to occur in a secure manner. The specification defines the Constrained Join Protocol (CoJP), which is used to distribute the parameters to the pledge over a secure session established through OSCORE [
RFC 8613] and which describes the secure configuration of the network stack. In the minimal setting with pre-shared keys (PSKs), CoJP allows the pledge to join after a single round-trip exchange with the JRC. The provisioning of the PSK to the pledge and the JRC needs to be done out of band, through a 'one-touch' bootstrapping process, which effectively enrolls the pledge into the domain managed by the JRC.
In certain use cases, the 'one-touch' bootstrapping is not feasible due to the operational constraints, and the enrollment of the pledge into the domain needs to occur in-band. This is handled through a 'zero-touch' extension of the Minimal Security Framework for 6TiSCH. The zero-touch extension [ZEROTOUCH-JOIN] leverages the Bootstrapping Remote Secure Key Infrastructure (BRSKI) [RFC 8995] work to establish a shared secret between a pledge and the JRC without necessarily having them belong to a common (security) domain at join time. This happens through inter-domain communication occurring between the JRC of the network and the domain of the pledge, represented by a fourth entity, the Manufacturer Authorized Signing Authority (MASA). Once the zero-touch exchange completes, the CoJP exchange defined in [RFC 9031] is carried over the secure session established between the pledge and the JRC.
Figure 4 depicts the join process and where a Link-Local Address (LLA) is used, versus a Global Unicast Address (GUA).
6LoWPAN Node 6LR 6LBR Join Registrar MASA
(pledge) (Join Proxy) (Root) /Coordinator (JRC)
| | | | |
| 6LoWPAN ND |6LoWPAN ND+RPL | IPv6 network |IPv6 network |
| LLN link |Route-Over mesh|(the Internet)|(the Internet)|
| | | | |
| Layer 2 | | | |
|Enhanced Beacon| | | |
|<--------------| | | |
| | | | |
| NS (EARO) | | | |
| (for the LLA) | | | |
|-------------->| | | |
| NA (EARO) | | | |
|<--------------| | | |
| | | | |
| (Zero-touch | | | |
| handshake) | (Zero-touch handshake) | (Zero-touch |
| using LLA | using GUA | handshake) |
|<------------->|<---------------------------->|<------------>|
| | | | |
| CoJP Join Req | | | | \
| using LLA | | | | |
|-------------->| | | | |
| | CoJP Join Request | | |
| | using GUA | | |
| |----------------------------->| | | C
| | | | | | o
| | CoJP Join Response | | | J
| | using GUA | | | P
| |<-----------------------------| | |
|CoJP Join Resp | | | | |
| using LLA | | | | |
|<--------------| | | | /
| | | | |
Once the pledge successfully completes the CoJP exchange and becomes a network node, it obtains the network prefix from neighboring routers and registers its IPv6 addresses. As detailed in
Section 4.1, the combined 6LoWPAN ND 6LBR and Root of the RPL network learn information such as an identifier (device EUI-64 [
RFC 6775] or a ROVR [
RFC 8505] (from 6LoWPAN ND)) and the updated Sequence Number (from RPL), and perform 6LoWPAN ND proxy registration to the 6BBR on behalf of the LLN nodes.
Figure 5 illustrates the initial IPv6 signaling that enables a 6LN to form a global address and register it to a 6LBR using 6LoWPAN ND [RFC 8505]; the registration is then carried over RPL to the RPL Root and then to the 6BBR. This flow happens just once, when the address is created and first registered.
6LoWPAN Node 6LR 6LBR 6BBR
(RPL leaf) (router) (Root)
| | | |
| 6LoWPAN ND |6LoWPAN ND+RPL | 6LoWPAN ND | IPv6 ND
| LLN link |Route-Over mesh|Ethernet/serial| Backbone
| | | |
| RS (mcast) | | |
|-------------->| | |
|-----------> | | |
|------------------> | |
| RA (unicast) | | |
|<--------------| | |
| | | |
| NS(EARO) | | |
|-------------->| | |
| 6LoWPAN ND | Extended DAR | |
| |-------------->| |
| | | NS(EARO) |
| | |-------------->|
| | | | NS-DAD
| | | |------>
| | | | (EARO)
| | | |
| | | NA(EARO) |<timeout>
| | |<--------------|
| | Extended DAC | |
| |<--------------| |
| NA(EARO) | | |
|<--------------| | |
| | | |
Figure 6 illustrates the repeating IPv6 signaling that enables a 6LN to keep a global address alive and registered with its 6LBR, using 6LoWPAN ND to the 6LR, RPL to the RPL Root, and then 6LoWPAN ND again to the 6BBR. This avoids repeating the Extended DAR/DAC flow across the network when RPL can suffice as a keep-alive mechanism.
6LoWPAN Node 6LR 6LBR 6BBR
(RPL leaf) (router) (Root)
| | | |
| 6LoWPAN ND |6LoWPAN ND+RPL | 6LoWPAN ND | IPv6 ND
| LLN link |Route-Over mesh| any IPv6 link | Backbone
| | | |
| | | |
| NS(EARO) | | |
|-------------->| | |
| NA(EARO) | | |
|<--------------| | |
| | DAO | |
| |-------------->| |
| | DAO-ACK | |
| |<--------------| |
| | | NS(EARO) |
| | |-------------->|
| | | NA(EARO) |
| | |<--------------|
| | | |
| | | |
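The decision that the combined flows of Figures 5 and 6 imply at the 6LR can be sketched as follows; the helper names are hypothetical, and the logic is illustrative of RFC 9010, not a specification of it.

# Only the first registration triggers the Extended DAR/DAC flow of
# Figure 5; refreshes ride on the DAO/DAO-ACK exchange of Figure 6.
def on_registration(reg, is_new):
    if is_new:
        send_extended_dar(reg.rovr, reg.tid, reg.lifetime)  # to the 6LBR
    send_dao(target=reg.address,
             path_sequence=reg.tid,          # TID maps to Path Sequence
             path_lifetime=reg.lifetime)     # to the RPL Root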
As the network builds up, a node should start as a leaf to join the RPL network and may later turn into both a RPL-capable router and a 6LR, so as to accept leaf nodes recursively joining the network.
6TiSCH expects a high degree of scalability together with a distributed routing functionality based on RPL. To achieve this goal, the spectrum must be allocated in a way that allows for spatial reuse between zones that will not interfere with one another. In a large and spatially distributed network, a 6TiSCH node is often in a good position to determine usage of the spectrum in its vicinity.
With 6TiSCH, the abstraction of an IPv6 link is implemented as a pair of bundles of cells, one in each direction. IP links are only enabled between RPL parents and children. The 6TiSCH operation is optimal when the size of a bundle minimizes both the energy wasted in idle listening and the packet drops due to congestion loss, while packets are forwarded within an acceptable latency.
Use cases for distributed routing are often associated with a statistical distribution of best-effort traffic with variable needs for bandwidth on each individual link. The 6TiSCH operation can remain optimal if RPL parents can adjust, dynamically and with enough reactivity to match the variations of best-effort traffic, the amount of bandwidth that is used to communicate between themselves and their children, in both directions. In turn, the agility to fulfill the needs for additional cells improves when the number of interactions with other devices and the protocol latencies are minimized.
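As an illustration, a node might dimension the bundle toward a parent as sketched below; the formula and the headroom factor are an illustrative heuristic, not part of this architecture.

from math import ceil

def bundle_size(avg_pkts_per_slotframe, cell_pdr, headroom=1.2):
    # cell_pdr: observed delivery ratio on cells to this parent;
    # headroom: over-provisioning traded against idle listening.
    return ceil(avg_pkts_per_slotframe * headroom / cell_pdr)

# e.g., 4 packets per slotframe at 80% delivery -> 6 cells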
6top is a logical link control sitting between the IP layer and the TSCH MAC layer, which provides the link abstraction that is required for IP operations. The 6top Protocol, 6P, which is specified in [
RFC 8480], is one of the services provided by 6top. In particular, the 6top services are available over a management API that enables an external management entity to schedule cells and slotframes, and allows the addition of complementary functionality, for instance, a Scheduling Function that manages a dynamic schedule based on observed resource usage as discussed in
Section 4.4.2. For this purpose, the 6TiSCH architecture differentiates "soft" cells and "hard" cells.
"Hard" cells are cells that are owned and managed by a separate scheduling entity (e.g., a PCE) that specifies the slotOffset/channelOffset of the cells to be added/moved/deleted, in which case 6top can only act as instructed and may not move hard cells in the TSCH schedule on its own.
In contrast, "soft" cells are cells that 6top can manage locally. 6top contains a monitoring process that monitors the performance of cells and that can add and remove soft cells in the TSCH schedule to adapt to the traffic needs, or move one when it performs poorly. To reserve a soft cell, the higher layer does not indicate the exact slotOffset/channelOffset of the cell to add, but rather the resulting bandwidth and QoS requirements. When the monitoring process triggers a cell reallocation, the two neighbor devices communicating over this cell negotiate its new position in the TSCH schedule.
In the case of soft cells, the cell management entity that controls the dynamic attribution of cells to adapt to the dynamics of variable rate flows is called a Scheduling Function (SF).
There may be multiple SFs that react more or less aggressively to the dynamics of the network.
An SF may be seen as divided between an upper bandwidth-adaptation logic that is unaware of the particular technology used to obtain and release bandwidth and an underlying service that maps those needs onto the actual technology. In the case of TSCH using the 6top Protocol as illustrated in
Figure 7, this means mapping the bandwidth onto cells.
+------------------------+ +------------------------+
| Scheduling Function | | Scheduling Function |
| Bandwidth adaptation | | Bandwidth adaptation |
+------------------------+ +------------------------+
| Scheduling Function | | Scheduling Function |
| TSCH mapping to cells | | TSCH mapping to cells |
+------------------------+ +------------------------+
| 6top cells negotiation | <- 6P -> | 6top cells negotiation |
+------------------------+ +------------------------+
Device A Device B
The SF relies on the 6top services that implement the 6top Protocol (6P) [RFC 8480] to negotiate the precise cells that will be allocated or freed based on the schedule of the peer. For instance, it may be that a peer wants to use a particular timeslot that is free in its schedule, but that timeslot is already in use by the other peer to communicate with a third party on a different cell. 6P enables the peers to find an agreement in a transactional manner that ensures the final consistency of the nodes' state.
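A 6P ADD transaction driven by an SF might look as sketched below. The helper names are illustrative, the 3x candidate factor is an arbitrary example, and only the return codes are taken from RFC 8480.

# The requester proposes more candidate cells than needed; the peer
# returns the subset that is also free on its side.
def add_tx_cells(peer, num_cells, schedule):
    candidates = schedule.pick_free_cells(count=3 * num_cells)
    resp = sixp_request(peer, command="ADD", num_cells=num_cells,
                        cell_list=candidates, seqnum=peer.next_seqnum())
    if resp.code == "RC_SUCCESS":
        for cell in resp.cell_list:       # subset chosen by the peer
            schedule.install_tx_cell(peer, cell)
    elif resp.code == "RC_ERR_SEQNUM":
        sixp_clear(peer)                  # state diverged: reset and retry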
The 6TiSCH Minimal Scheduling Function (MSF) [RFC 9033] is one of the possible Scheduling Functions. MSF uses the rendezvous slot from [RFC 8180] for network discovery, neighbor discovery, and any other broadcast.
For basic unicast communication with any neighbor, each node uses a receive cell at a well-known slotOffset/channelOffset, derived from a hash of its own MAC address. Nodes can reach any neighbor by installing a transmit (shared) cell with the slotOffset/channelOffset derived from that neighbor's MAC address.
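Such an autonomous cell might be derived as sketched below. MSF specifies its own hash function (SAX); a generic stand-in is shown here for illustration only.

def autonomous_rx_cell(mac_address, slotframe_size, num_channels):
    # Generic stand-in for the SAX hash that MSF specifies.
    h = int.from_bytes(mac_address, "big")
    slot_offset = 1 + h % (slotframe_size - 1)   # slot 0: minimal cell
    channel_offset = h % num_channels
    return slot_offset, channel_offset

# Any neighbor computes the same cell from this node's MAC address
# and installs the matching transmit (shared) cell.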
For child-parent links, MSF continuously monitors the load between parents and children. It then uses 6P to install or remove unicast cells whenever the current schedule appears to be under-provisioned or over-provisioned.
An implementation of a RPL [RFC 6550] Objective Function (OF), such as the RPL Objective Function Zero (OF0) [RFC 6552] that is used in the Minimal 6TiSCH Configuration [RFC 8180] to support RPL over a static schedule, may leverage for its internal computation the information maintained by 6top.
An OF may require metrics about reachability, such as the Expected Transmission Count (ETX) metric [
RFC 6551]. 6top creates and maintains an abstract neighbor table, and this state may be leveraged to feed an OF and/or store OF information as well. A neighbor table entry may contain a set of statistics with respect to that specific neighbor.
The neighbor information may include the time when the last packet has been received from that neighbor, a set of cell quality metrics, e.g., received signal strength indication (RSSI) or link quality indicator (LQI), the number of packets sent to the neighbor, or the number of packets received from it. This information can be made available through 6top management APIs and used, for instance, to compute a Rank Increment that will determine the selection of the preferred parent.
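As an illustration, an ETX-derived Rank Increment might be computed from those counters as sketched below; the formula is illustrative and is not mandated by any OF.

def rank_increment(num_tx, num_tx_acked, min_hop_rank_increase=256):
    # ETX = transmissions per acknowledged transmission, computed
    # from the 6top neighbor counters described above.
    etx = num_tx / max(num_tx_acked, 1)
    return round(etx * min_hop_rank_increase)

# e.g., 150 frames sent, 100 acked -> ETX 1.5 -> Rank Increment 384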
6top provides statistics about the underlying layer so the OF can be tuned to the nature of the TSCH MAC layer. 6top also enables the RPL OF to influence the MAC behavior, for instance, by configuring the periodicity of IEEE Std 802.15.4 Extended Beacons (EBs). By adjusting the EB periodicity, it is possible to change the network dynamics so as to improve the support of devices that may change their point of attachment in the 6TiSCH network.
Some RPL control messages, such as the DODAG Information Object (DIO), are ICMPv6 messages that are broadcast to all neighbor nodes. With 6TiSCH, the broadcast channel requirement is addressed by 6top by configuring TSCH to provide a broadcast channel, as opposed to, for instance, piggybacking the DIO messages in Layer 2 Enhanced Beacons (EBs), which would produce undue timer coupling among layers and packet size issues, and could conflict with the policy of production networks where EBs are mostly eliminated to conserve energy.
Nodes in a TSCH network must be time synchronized. A node keeps synchronized to its time source neighbor through a combination of frame-based and acknowledgment-based synchronization. To maximize battery life and network throughput, it is advisable that RPL ICMP discovery and maintenance traffic (governed by the Trickle timer) be somehow coordinated with the transmission of time synchronization packets (especially with Enhanced Beacons).
This could be achieved through an interaction of the 6top sublayer and the RPL Objective Function, or could be controlled by a management entity.
Time distribution requires a loop-free structure. Nodes caught in a synchronization loop will rapidly desynchronize from the network and become isolated. 6TiSCH uses a RPL DAG with a dedicated global Instance for the purpose of time synchronization. That Instance is referred to as the Time Synchronization Global Instance (TSGI). The TSGI can be operated in any of the three modes that are detailed in Section 3.1.3 of [RFC 6550], "Instances, DODAGs, and DODAG Versions". Multiple uncoordinated DODAGs with independent Roots may be used if all the Roots share a common time source such as the Global Positioning System (GPS).
In the absence of a common time source, the TSGI should form a single DODAG with a virtual Root. A backbone network is then used to synchronize and coordinate RPL operations between the Backbone Routers that act as sinks for the LLN. Optionally, RPL's periodic operations may be used to transport the network synchronization. This may mean that 6top would need to trigger (override) the Trickle timer if no other traffic has occurred for such a time that nodes may get out of synchronization.
A node that has not joined the TSGI advertises a MAC-level Join Priority of 0xFF to notify its neighbors that it is not capable of serving as a time parent. A node that has joined the TSGI advertises a MAC-level Join Priority set to its DAGRank() in that Instance, where DAGRank() is the operation specified in Section 3.5.1 of [RFC 6550], "Rank Comparison".
The provisioning of a RPL Root is out of scope for both RPL and this architecture, whereas RPL enables the propagation of configuration information down the DODAG. This applies to the TSGI as well; a Root is configured with, or obtains by unspecified means, the knowledge of the RPLInstanceID for the TSGI. The Root advertises its DAGRank in the TSGI, which must be less than 0xFF, as its Join Priority in its IEEE Std 802.15.4 EBs.
A node that reads a Join Priority of less than 0xFF should join the neighbor with the lowest Join Priority and use it as its time parent. If the node is configured to serve as a time parent, then it should join the TSGI, obtain a Rank in that Instance, and start advertising its own DAGRank in the TSGI as its Join Priority in its EBs.
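The resulting Join Priority rule can be summarized as sketched below, with illustrative parameter names; DAGRank() is defined in Section 3.5.1 of RFC 6550 as the floor of the Rank divided by MinHopRankIncrease.

def join_priority(joined_tsgi, rank, min_hop_rank_increase=256):
    # Join Priority advertised in EBs (parameter names illustrative).
    if not joined_tsgi:
        return 0xFF                       # cannot serve as a time parent
    # DAGRank() per Section 3.5.1 of RFC 6550
    return min(rank // min_hop_rank_increase, 0xFE)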
6TiSCH enables IPv6 best-effort (stochastic) transmissions over a MAC layer that is also capable of scheduled (deterministic) transmissions. A window of time is defined around the scheduled transmission where the medium must, as much as practically feasible, be free of contending energy to ensure that the medium is free of contending packets when the time comes for a scheduled transmission. One simple way to obtain such a window is to format time and frequencies in cells of transmission of equal duration. This is the method that is adopted in IEEE Std 802.15.4 TSCH as well as the Long Term Evolution (LTE) of cellular networks.
The 6TiSCH architecture defines a global concept that is called a Channel Distribution and Usage (CDU) matrix to describe that formatting of time and frequencies.
A CDU matrix is defined centrally as part of the network definition. It is a matrix of cells with a height equal to the number of available channels (indexed by channelOffsets) and a width (in timeslots) that is the period of the network scheduling operation (indexed by slotOffsets) for that CDU matrix. There are different models for scheduling the usage of the cells, which place the responsibility of avoiding collisions either on a central controller or on the devices themselves, at an extra cost in terms of energy to scan for free cells (more in
Section 4.4).
The size of a cell is a timeslot duration, and values of 10 to 15 milliseconds are typical in 802.15.4 TSCH to accommodate the transmission of a frame and its acknowledgment, including the security validation on the receive side, which may take up to a few milliseconds on some device architectures.
A CDU matrix iterates over a well-known channel rotation called the hopping sequence. In a given network, there might be multiple CDU matrices that operate with different widths, so they have different durations and represent different periodic operations. It is recommended that all CDU matrices in a 6TiSCH domain operate with the same cell duration and are aligned so as to reduce the chances of interference from the Slotted ALOHA operations. The knowledge of the CDU matrices is shared between all the nodes and used in particular to define slotframes.
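A CDU matrix can be pictured as the simple structure sketched below; the dimensions shown are arbitrary examples.

NUM_CHANNELS = 16      # height: available channels (channelOffsets)
MATRIX_WIDTH = 101     # width: period of the schedule, in timeslots

cdu_matrix = [[None] * MATRIX_WIDTH for _ in range(NUM_CHANNELS)]

def cell_at(channel_offset, slot_offset):
    # A cell is one timeslot on one channelOffset; None means free.
    return cdu_matrix[channel_offset][slot_offset % MATRIX_WIDTH]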
A slotframe is a MAC-level abstraction that is common to all nodes and contains a series of timeslots of equal length and precedence. It is characterized by a slotframe_ID and a slotframe_size. A slotframe aligns to a CDU matrix for its parameters, such as number and duration of timeslots.
Multiple slotframes can coexist in a node schedule, i.e., a node can have multiple activities scheduled in different slotframes. A slotframe is associated with a priority that may be related to the precedence of different 6TiSCH topologies. The slotframes may be aligned to different CDU matrices and thus have different widths. There is typically one slotframe for scheduled traffic that has the highest precedence and one or more slotframe(s) for RPL traffic. The timeslots in the slotframe are indexed by the slotOffset; the first cell is at slotOffset 0.
When a packet is received from a higher layer for transmission, 6top inserts that packet in the outgoing queue that matches the packet best (Differentiated Services [
RFC 2474] can therefore be used). At each scheduled transmit slot, 6top looks for the frame in all the outgoing queues that best matches the cells. If a frame is found, it is given to the TSCH MAC for transmission.
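The dequeuing decision at a transmit cell might look as sketched below; the queue interface and method names are illustrative.

def on_tx_cell(cell, queues):
    # Among the outgoing queues that this cell may serve, pick the
    # best-matching frame (e.g., by Diffserv class) and hand it to
    # the TSCH MAC.
    for queue in sorted(queues, key=lambda q: q.precedence):
        if queue.matches(cell) and not queue.empty():
            return queue.pop()        # frame given to the TSCH MAC
    return None                       # nothing to send in this cell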
The 6TiSCH architecture introduces the concept of chunks (
Section 2.1) to distribute the allocation of the spectrum for a whole group of cells at a time. The CDU matrix is formatted into a set of chunks, possibly as illustrated in
Figure 8, each of the chunks identified uniquely by a chunk-ID. The knowledge of this formatting is shared between all the nodes in a 6TiSCH network. It could be conveyed during the join process, codified into a profile document, or obtained using some other mechanism. This is as opposed to Static Scheduling, which refers to the preprogrammed mechanism specified in [
RFC 8180] and which existed before the distribution of the chunk formatting.
+-----+-----+-----+-----+-----+-----+-----+ +-----+
chan.Off. 0 |chnkA|chnkP|chnk7|chnkO|chnk2|chnkK|chnk1| ... |chnkZ|
+-----+-----+-----+-----+-----+-----+-----+ +-----+
chan.Off. 1 |chnkB|chnkQ|chnkA|chnkP|chnk3|chnkL|chnk2| ... |chnk1|
+-----+-----+-----+-----+-----+-----+-----+ +-----+
...
+-----+-----+-----+-----+-----+-----+-----+ +-----+
chan.Off. 15 |chnkO|chnk6|chnkN|chnk1|chnkJ|chnkZ|chnkI| ... |chnkG|
+-----+-----+-----+-----+-----+-----+-----+ +-----+
0 1 2 3 4 5 6 M
The 6TiSCH architecture envisions a protocol that enables chunk ownership appropriation whereby a RPL parent discovers a chunk that is not used in its interference domain, claims the chunk, and then defends it if another RPL parent attempts to appropriate it while it is in use. The chunk is the basic unit of ownership that is used in that process.
As a result of the process of chunk ownership appropriation, the RPL parent has exclusive authority to decide which cell in the appropriated chunk can be used by which node in its interference domain. In other words, it is implicitly delegated the right to manage the portion of the CDU matrix that is represented by the chunk.
Initially, those cells are added to the heap of free cells, then dynamically placed into existing bundles, into new bundles, or allocated opportunistically for one transmission.
Note that a PCE is expected to have precedence in the allocation, so that a RPL parent would only be able to obtain portions that are not in use by the PCE.
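The appropriation check might look as sketched below. The actual protocol is not specified by this architecture, and all names are illustrative.

# A RPL parent may claim a chunk only if it is unused in its
# interference domain and not already allocated by the PCE, which
# has precedence.
def try_appropriate(parent, chunk_id, interference_domain, pce_chunks):
    if chunk_id in pce_chunks:
        return False
    if any(node.uses_chunk(chunk_id) for node in interference_domain):
        return False
    parent.claimed_chunks.add(chunk_id)       # claim, then defend it
    parent.free_cells.extend(cells_of_chunk(chunk_id))
    return True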
6TiSCH uses four paradigms to manage the TSCH schedule of the LLN nodes: Static Scheduling, Neighbor-to-Neighbor Scheduling, Remote Monitoring and Scheduling Management, and Hop-by-Hop Scheduling. Multiple mechanisms are defined that implement the associated Interaction Models, and they can be combined and used in the same LLN. Which mechanism(s) to use depends on application requirements.
In the simplest instantiation of a 6TiSCH network, a common fixed schedule may be shared by all nodes in the network. Cells are shared, and nodes contend for slot access in a Slotted ALOHA manner.
A static TSCH schedule can be used to bootstrap a network, as an initial phase during implementation or as a fall-back mechanism in case of network malfunction. This schedule is preestablished, for instance, decided by a network administrator based on operational needs. It can be preconfigured into the nodes, or, more commonly, learned by a node when joining the network using standard IEEE Std 802.15.4 Information Elements (IE). Regardless, the schedule remains unchanged after the node has joined a network. RPL is used on the resulting network. This "minimal" scheduling mechanism that implements this paradigm is detailed in [
RFC 8180].
In the simplest instantiation of a 6TiSCH network described in
Section 4.4.1, nodes may expect a packet at any cell in the schedule and will waste energy idle listening. In a more complex instantiation of a 6TiSCH network, a matching portion of the schedule is established between peers to reflect the observed amount of transmissions between those nodes. The aggregation of the cells between a node and a peer forms a bundle that the 6top sublayer uses to implement the abstraction of a link for IP. The bandwidth on that link is proportional to the number of cells in the bundle.
If the size of a bundle is configured to fit an average amount of bandwidth, peak traffic is dropped. If the size is configured to allow for peak emissions, energy is wasted idle listening.
As discussed in more detail in
Section 4.3, the 6top Protocol (6P) [RFC 8480] specifies the exchanges between neighbor nodes to reserve soft cells to transmit to one another, possibly under the control of a Scheduling Function (SF). Because this reservation is done without global knowledge of the schedule of the other nodes in the LLN, scheduling collisions are possible.
As discussed in Section 4.3.2, an optional SF is used to monitor bandwidth usage and to perform requests for dynamic allocation by the 6top sublayer. The SF component is not part of the 6top sublayer. It may be co-located on the same device or may be partially or fully offloaded to an external system. MSF [RFC 9033] provides a simple SF that can be used by default by devices that support dynamic scheduling of soft cells.
Monitoring and relocation are done in the 6top sublayer. To the upper layer, the connection between two neighbor nodes appears as a number of cells. Depending on traffic requirements, the upper layer can request that 6top add or delete a number of cells scheduled to a particular neighbor, without being responsible for choosing the exact slotOffset/channelOffset of those cells.
Remote Monitoring and Schedule Management refers to a DetNet/SDN model whereby an NME and a scheduling entity, associated with a PCE, reside in a central controller and interact with the 6top sublayer to control IPv6 links and Tracks (
Section 4.5) in a 6TiSCH network. The composite centralized controller can assign physical resources (e.g., buffers and hard cells) to a particular Track to optimize the reliability within a bounded latency for a well-specified flow.
The work in the 6TiSCH Working Group focused on nondeterministic traffic and did not provide the generic data model necessary for the controller to monitor and manage resources of the 6top sublayer. This is deferred to future work, see
Appendix A.1.2.
With respect to centralized routing and scheduling, it is envisioned that the related component of the 6TiSCH architecture would be an extension of the DetNet architecture [RFC 8655], which studies Layer 3 aspects of Deterministic Networks and covers networks that span multiple Layer 2 domains.
The DetNet architecture is a form of Software-Defined Networking (SDN) architecture and is composed of three planes: a (User) Application Plane, a Controller Plane (where the PCE operates), and a Network Plane, which can represent a 6TiSCH LLN.
[
RFC 7426] proposes a generic representation of the SDN architecture that is reproduced in
Figure 9.
o--------------------------------o
| |
| +-------------+ +----------+ |
| | Application | | Service | |
| +-------------+ +----------+ |
| Application Plane |
o---------------Y----------------o
|
*-----------------------------Y---------------------------------*
| Network Services Abstraction Layer (NSAL) |
*------Y------------------------------------------------Y-------*
| |
| Service Interface |
| |
o------Y------------------o o---------------------Y------o
| | Control Plane | | Management Plane | |
| +----Y----+ +-----+ | | +-----+ +----Y----+ |
| | Service | | App | | | | App | | Service | |
| +----Y----+ +--Y--+ | | +--Y--+ +----Y----+ |
| | | | | | | |
| *----Y-----------Y----* | | *---Y---------------Y----* |
| | Control Abstraction | | | | Management Abstraction | |
| | Layer (CAL) | | | | Layer (MAL) | |
| *----------Y----------* | | *----------Y-------------* |
| | | | | |
o------------|------------o o------------|---------------o
| |
| CP | MP
| Southbound | Southbound
| Interface | Interface
| |
*------------Y---------------------------------Y----------------*
| Device and resource Abstraction Layer (DAL) |
*------------Y---------------------------------Y----------------*
| | | |
| o-------Y----------o +-----+ o--------Y----------o |
| | Forwarding Plane | | App | | Operational Plane | |
| o------------------o +-----+ o-------------------o |
| Network Device |
+---------------------------------------------------------------+
The PCE establishes end-to-end Tracks of hard cells, which are described in more detail in
Section 4.6.1.
The DetNet work is expected to enable end-to-end deterministic paths across heterogeneous networks. This can be, for instance, a 6TiSCH LLN and an Ethernet backbone.
This model fits the 6TiSCH extended configuration, whereby a 6BBR federates multiple 6TiSCH LLNs in a single subnet over a backbone that can be, for instance, Ethernet or Wi-Fi. In that model, 6TiSCH 6BBRs synchronize with one another over the backbone, so as to ensure that the multiple LLNs that form the IPv6 subnet stay tightly synchronized.
If the backbone is deterministic, then the Backbone Router ensures that the end-to-end deterministic behavior is maintained between the LLN and the backbone. It is the responsibility of the PCE to compute a deterministic path end to end across the TSCH network and an IEEE Std 802.1 TSN Ethernet backbone, and it is the responsibility of DetNet to enable end-to-end deterministic forwarding.
A node can reserve a Track (Section 4.5) to one or more destination(s) that are multiple hops away by installing soft cells at each intermediate node. This forms a Track of soft cells. A Track SF above the 6top sublayer of each node on the Track is needed to monitor these soft cells and trigger relocation when needed.
This hop-by-hop reservation mechanism is expected to be similar in essence to [
RFC 3209] and/or [
RFC 4080] and [
RFC 5974]. The protocol for a node to trigger hop-by-hop scheduling is not yet defined.
The architecture introduces the concept of a Track, which is a directed path from a source 6TiSCH node to one or more destination 6TiSCH node(s) across a 6TiSCH LLN.
A Track is the 6TiSCH instantiation of the concept of a deterministic path as described in [
RFC 8655]. Constrained resources such as memory buffers are reserved for that Track in intermediate 6TiSCH nodes to avoid loss related to limited capacity. A 6TiSCH node along a Track not only knows which bundles of cells it should use to receive packets from a previous hop but also knows which bundle(s) it should use to send packets to its next hop along the Track.
A Track is associated with Layer 2 bundles of cells with related schedules and logical relationships that ensure that a packet that is injected in a Track will progress in due time all the way to the destination.
Multiple cells may be scheduled in a Track for the transmission of a single packet, in which case the normal operation of IEEE Std 802.15.4 Automatic Repeat-reQuest (ARQ) can take place; the acknowledgment may be omitted in some cases, for instance, if there is no scheduled cell for a possible retry.
There are several benefits for using a Track to forward a packet from a source node to the destination node:
-
Track Forwarding, as further described in Section 4.6.1, is a Layer 2 forwarding scheme, which introduces less processing delay and overhead than a Layer 3 forwarding scheme. Therefore, LLN devices can save more energy and resources, which is critical for resource-constrained devices.
-
Since channel resources, i.e., bundles of cells, have been reserved for communications between 6TiSCH nodes of each hop on the Track, the throughput and the maximum latency of the traffic along a Track are guaranteed, and the jitter is minimized.
-
By knowing the scheduled timeslots of incoming bundle(s) and outgoing bundle(s), 6TiSCH nodes on a Track could save more energy by staying in sleep state during inactive slots.
-
Tracks are protected from interfering with one another if a cell is scheduled to belong to at most one Track, and congestion loss is avoided if at most one packet can be presented to the MAC to use that cell. Tracks enhance the reliability of transmissions and thus further reduce the energy consumption in LLN devices by reducing the chances of retransmission.
A Serial (or simple) Track is the 6TiSCH version of a circuit: a bundle of cells that are programmed to receive (RX-cells) is uniquely paired with a bundle of cells that are set to transmit (TX-cells), representing a Layer 2 forwarding state that can be used regardless of the network-layer protocol. A Serial Track is thus formed end-to-end as a succession of paired bundles: a receive bundle from the previous hop and a transmit bundle to the next hop along the Track.
For a given iteration of the device schedule, the effective channel of the cell is obtained by looping through a well-known hopping sequence beginning at Epoch time and starting at the cell's channelOffset, which results in a rotation of the frequency that is used for transmission. The bundles may be computed so as to accommodate both variable rates and retransmissions, so they might not be fully used in the iteration of the schedule.
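The resulting channel computation, as defined by IEEE Std 802.15.4 TSCH, can be sketched as follows.

def effective_channel(asn, channel_offset, hopping_sequence):
    # channel = HS[(ASN + channelOffset) mod |HS|], with the Absolute
    # Slot Number (ASN) counting timeslots since Epoch time.
    return hopping_sequence[(asn + channel_offset) % len(hopping_sequence)]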
The art of Deterministic Networks already includes packet replication and elimination techniques. Example standards include the Parallel Redundancy Protocol (PRP) and the High-availability Seamless Redundancy (HSR) [
IEC62439]. Similarly, and as opposed to a Serial Track that is a sequence of nodes and links, a Complex Track is shaped as a directed acyclic graph towards one or more destination(s) to support multipath forwarding and route around failures.
A Complex Track may branch off over noncongruent branches for the purpose of multicasting and/or redundancy, in which case it reconverges later down the path. This enables the Packet Replication, Elimination, and Ordering Functions (PREOF) defined by DetNet. Packet ARQ, Replication, Elimination, and Overhearing (PAREO) adds radio-specific capabilities of Layer 2 ARQ and promiscuous listening to redundant transmissions to compensate for the lossiness of the medium and meet the industrial expectations of a Reliable and Available Wireless (RAW) network. Combining PAREO and PREOF, a Track may extend beyond the 6TiSCH network into a larger DetNet network.
In the art of TSCH, a path does not necessarily support PRE, but it is almost systematically multipath. This means that a Track is scheduled so as to ensure that each hop has at least two forwarding solutions, and the forwarding decision is to try the preferred one and use the other in case of Layer 2 transmission failure as detected by ARQ. Similarly, at each 6TiSCH hop along the Track, the PCE may schedule more than one timeslot for a packet, so as to support Layer 2 retries (ARQ). It is also possible that the field device only uses the second branch if sending over the first branch fails.
Ultimately, DetNet should enable extending a Track beyond the 6TiSCH LLN as illustrated in
Figure 10. In that example, a Track is laid out from a field device in a 6TiSCH network to an IoT gateway that is located on an 802.1 Time-Sensitive Networking (TSN) backbone. A 6TiSCH-aware DetNet service layer handles the Packet Replication, Elimination, and Ordering Functions over the DODAG that forms a Track.
The Replication function in the 6TiSCH Node sends a copy of each packet over two different branches, and the PCE schedules each hop of both branches so that the two copies arrive in due time at the gateway. In case of a loss on one branch, hopefully the other copy of the packet still makes it in due time. If two copies make it to the IoT gateway, the Elimination function in the gateway ignores the extra packet and presents only one copy to upper layers.
+-=-=-+
| IoT |
| G/W |
+-=-=-+
^ <=== Elimination
Track branch | |
+-=-=-=-+ +-=-=-=-=+ Subnet backbone
| |
+-=|-=+ +-=|-=+
| | | Backbone | | | Backbone
o | | | Router | | | Router
+-=/-=+ +-=|-=+
o / o o-=-o-=-=/ o
o o-=-o-=/ o o o o o
o \ / o o LLN o
o v <=== Replication
o
The 6TiSCH architecture provides the means to avoid waste of cells as well as overflows in the transmit bundle of a Track, as follows:
A TX-cell that is not needed for the current iteration may be reused opportunistically on a per-hop basis for routed packets. When all of the frames that were received for a given Track are effectively transmitted, any available TX-cell for that Track can be reused for upper-layer traffic for which the next-hop router matches the next hop along the Track. In that case, the cell that is being used is effectively a TX-cell from the Track, but the short address for the destination is that of the next-hop router.
It results in a frame that is received in an RX-cell of a Track with a destination MAC address set to this node, as opposed to the broadcast MAC address; this indicates that the frame must be extracted from the Track and delivered to the upper layer. Note that a frame with an unrecognized destination MAC address is dropped at the lower MAC layer and thus is not received at the 6top sublayer.
On the other hand, it might happen that there are not enough TX-cells in the transmit bundle to accommodate the Track traffic, for instance, if more retransmissions are needed than provisioned. In that case, and if the frame transports an IPv6 packet, then it can be placed for transmission in the bundle that is used for Layer 3 traffic towards the next hop along the Track. The MAC address should be set to the next-hop MAC address to avoid confusion.
It results in a frame that is received over a Layer 3 bundle that may be in fact associated with a Track. In a classical IP link such as an Ethernet, off-Track traffic is typically in excess over reservation to be routed along the non-reserved path based on its QoS setting. But with 6TiSCH, since the use of the Layer 3 bundle may be due to transmission failures, it makes sense for the receiver to recognize a frame that should be re-Tracked and to place it back on the appropriate bundle if possible. A frame is re-Tracked by scheduling it for transmission over the transmit bundle associated with the Track, with the destination MAC address set to broadcast.
By forwarding, this document means the per-packet operation that allows delivery of a packet to a next hop or an upper layer in this node. Forwarding is based on preexisting state that was installed as a result of a routing computation, see
Section 4.7. 6TiSCH supports three different forwarding models: (GMPLS) Track Forwarding, (classical) IPv6 Forwarding, and (6LoWPAN) Fragment Forwarding.
Forwarding along a Track can be seen as a Generalized Multiprotocol Label Switching (GMPLS) operation in that the information used to switch a frame is not an explicit label but is rather related to other properties of the way the packet was received, a particular cell in the case of 6TiSCH. As a result, as long as the TSCH MAC (and Layer 2 security) accepts a frame, that frame can be switched regardless of the protocol, whether this is an IPv6 packet, a 6LoWPAN fragment, or a frame from an alternate protocol such as WirelessHART or ISA100.11a.
A data frame that is forwarded along a Track normally has a destination MAC address that is set to broadcast or a multicast address, depending on MAC support. This way, the MAC layer in the intermediate nodes accepts the incoming frame and 6top switches it without incurring a change in the MAC header. In the case of IEEE Std 802.15.4, this effectively means broadcasting, so that along the Track the short address for the destination of the frame is set to 0xFFFF.
There are two modes for a Track: an IPv6 native mode and a protocol-independent tunnel mode.
In native mode, the Protocol Data Unit (PDU) is associated with flow-dependent metadata that refers uniquely to the Track, so the 6top sublayer can place the frame in the appropriate cell without ambiguity. In the case of IPv6 traffic, this flow may be identified using a 6-tuple as discussed in [
RFC 8939]. In particular, implementations of this document should support identification of DetNet flows based on the IPv6 Flow Label field.
The flow follows a Track that is identified using a RPL Instance (see
Section 3.1.3 of
RFC 6550), signaled in a RPL Packet Information (more in
Section 11.2.2.1 of
RFC 6550) and the source address of a packet going down the DODAG formed by a local instance. One or more flows may be placed in the same Track, and the Track identification (TrackID plus owner) may be placed in an IP-in-IP encapsulation. The forwarding operation is based on the Track and does not depend on the flow therein.
The Track identification is validated at egress before restoring the destination MAC address (DMAC) and punting to the upper layer.
Figure 11 illustrates the Track Forwarding operation that happens at the 6top sublayer, below IP.
| Packet flowing across the network ^
+--------------+ | |
| IPv6 | | |
+--------------+ | |
| 6LoWPAN HC | | |
+--------------+ ingress egress
| 6top | sets +----+ +----+ restores
+--------------+ DMAC to | | | | DMAC to
| TSCH MAC | brdcst | | | | dest
+--------------+ | | | | | |
| LLN PHY | +-------+ +--...-----+ +-------+
+--------------+
Ingress Relay Relay Egress
Stack Layer Node Node Node Node
In tunnel mode, the frames originate from an arbitrary protocol over a compatible MAC that may or may not be synchronized with the 6TiSCH network. An example of this would be a router with a dual radio that is capable of receiving and sending WirelessHART or ISA100.11a frames with the second radio by presenting itself as an access point or a Backbone Router, respectively. In that mode, some entity (e.g., PCE) can coordinate with a WirelessHART Network Manager or an ISA100.11a System Manager to specify the flows that are transported.
+--------------+
| IPv6 |
+--------------+
| 6LoWPAN HC |
+--------------+ set restore
| 6top | +DMAC+ +DMAC+
+--------------+ to|brdcst to|nexthop
| TSCH MAC | | | | |
+--------------+ | | | |
| LLN PHY | +-------+ +--...-----+ +-------+
+--------------+ | ingress egress |
| |
+--------------+ | |
| LLN PHY | | |
+--------------+ | Packet flowing across the network |
| TSCH MAC | | |
+--------------+ | DMAC = | DMAC =
|ISA100/WiHART | | nexthop v nexthop
+--------------+
Source Ingress Egress Destination
Stack Layer Node Node Node Node
In that case, the TrackID that identifies the Track at the ingress 6TiSCH router is derived from the RX-cell. The DMAC is set to this node, but the TrackID indicates that the frame must be tunneled over a particular Track, so the frame is not passed to the upper layer. Instead, the DMAC is forced to broadcast, and the frame is passed to the 6top sublayer for switching.
At the egress 6TiSCH router, the reverse operation occurs. Based on tunneling information of the Track, which may for instance indicate that the tunneled datagram is an IP packet, the datagram is passed to the appropriate link-layer with the destination MAC restored.
Tunneling information coming with the Track configuration provides the destination MAC address of the egress endpoint as well as the tunnel mode and specific data depending on the mode, for instance, a service access point for frame delivery at egress.
If the tunnel egress point does not have a MAC address that matches the configuration, the Track installation fails.
If the Layer 3 destination address belongs to the tunnel termination, then it is possible that the IPv6 address of the destination is compressed at the 6LoWPAN sublayer based on the MAC address. Restoring the wrong MAC address at the egress would then also result in the wrong IP address in the packet after decompression. For that reason, a packet can be injected in a Track only if the destination MAC address is effectively that of the tunnel egress point. It is thus mandatory for the ingress router to validate that the MAC address used at the 6LoWPAN sublayer for compression matches that of the tunnel egress point before it overwrites it to broadcast. The 6top sublayer at the tunnel egress point reverts that operation to the MAC address obtained from the tunnel information.
As the packets are routed at Layer 3, traditional QoS and Active Queue Management (AQM) operations are expected to prioritize flows.
| Packet flowing across the network ^
+--------------+ | |
| IPv6 | | +-QoS+ +-QoS+ |
+--------------+ | | | | | |
| 6LoWPAN HC | | | | | | |
+--------------+ | | | | | |
| 6top | | | | | | |
+--------------+ | | | | | |
| TSCH MAC | | | | | | |
+--------------+ | | | | | |
| LLN PHY | +-------+ +--...-----+ +-------+
+--------------+
Source Ingress Egress Destination
Stack Layer Node Router Router Node
Considering that, per
Section 4 of
RFC 4944, 6LoWPAN packets can be as large as 1280 bytes (the IPv6 minimum MTU) and that the non-storing mode of RPL implies source routing, which requires space for routing headers, and that an IEEE Std 802.15.4 frame with security may carry in the order of 80 bytes of effective payload, an IPv6 packet might be fragmented into more than 16 fragments at the 6LoWPAN sublayer.
This level of fragmentation is much higher than that traditionally experienced over the Internet with IPv4 fragments, where fragmentation is already known as harmful.
In the case of a multihop route within a 6TiSCH network, hop-by-hop recomposition occurs at each hop to reform the packet and route it. This creates additional latency and forces intermediate nodes to store a portion of a packet for an undetermined time, thus impacting critical resources such as memory and battery.
[
RFC 8930] describes a framework for forwarding fragments end-to-end across a 6TiSCH route-over mesh. Within that framework, [
VIRTUAL-REASSEMBLY] details a virtual reassembly buffer mechanism whereby the datagram tag in the 6LoWPAN fragment is used as a label for switching at the 6LoWPAN sublayer.
Building on this technique, [
RFC 8931] introduces a new format for 6LoWPAN fragments that enables the selective recovery of individual fragments and allows for a degree of flow control based on an Explicit Congestion Notification (ECN).
| Packet flowing across the network ^
+--------------+ | |
| IPv6 | | +----+ +----+ |
+--------------+ | | | | | |
| 6LoWPAN HC | | learn learn |
+--------------+ | | | | | |
| 6top | | | | | | |
+--------------+ | | | | | |
| TSCH MAC | | | | | | |
+--------------+ | | | | | |
| LLN PHY | +-------+ +--...-----+ +-------+
+--------------+
Source Ingress Egress Destination
Stack Layer Node Router Router Node
In that model, the first fragment is routed based on the IPv6 header that is present in that fragment. The 6LoWPAN sublayer learns the next-hop selection, generates a new datagram tag for transmission to the next hop, and stores that information indexed by the incoming MAC address and datagram tag. The next fragments are then switched based on that stored state.
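That label-switching state might be kept as sketched below; the helper names are illustrative of the RFC 8930 framework, not taken from it.

# The first fragment is routed at Layer 3 and installs an entry;
# later fragments are switched on (previous-hop MAC, incoming tag).
switch_table = {}   # (prev_hop_mac, tag) -> (next_hop_mac, new_tag)

def forward_fragment(frag, prev_hop_mac):
    key = (prev_hop_mac, frag.datagram_tag)
    if frag.is_first:
        next_hop = route_lookup(frag.ipv6_header.dst)
        switch_table[key] = (next_hop, new_datagram_tag(next_hop))
    next_hop, new_tag = switch_table[key]
    send_to(next_hop, frag.with_tag(new_tag))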
| Packet flowing across the network ^
+--------------+ | |
| IPv6 | | |
+--------------+ | |
| 6LoWPAN HC | | replay replay |
+--------------+ | | | | | |
| 6top | | | | | | |
+--------------+ | | | | | |
| TSCH MAC | | | | | | |
+--------------+ | | | | | |
| LLN PHY | +-------+ +--...-----+ +-------+
+--------------+
Source Ingress Egress Destination
Stack Layer Node Router Router Node
A bitmap and an ECN echo in the end-to-end acknowledgment enable the source to resend the missing fragments selectively. The first fragment may be resent to carve a new path in case of a path failure. When set, the ECN echo indicates that the number of outstanding fragments should be reduced.
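The source-side recovery logic might look as sketched below; the field names are illustrative of RFC 8931, not taken from it.

def on_rfrag_ack(ack, dgram):
    # The bitmap tells which fragments arrived; an ECN echo asks the
    # source to reduce its window of outstanding fragments.
    if ack.ecn_echo:
        dgram.window = max(1, dgram.window // 2)
    for i in range(dgram.num_fragments):
        if not ack.bitmap[i]:
            resend(dgram.fragment(i))    # selective retransmission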
All packets inside a 6TiSCH domain must carry the RPLInstanceID that identifies the 6TiSCH topology (e.g., a Track) that is to be used for routing and forwarding that packet. The location of that information must be the same for all packets forwarded inside the domain.
For packets that are routed by a PCE along a Track, the tuple formed by 1) (typically) the IPv6 source or (possibly) destination address in the IPv6 header and 2) a local RPLInstanceID in the RPI that serves as TrackID identifies uniquely the Track and the associated transmit bundle.
For packets that are routed by RPL, that information is the RPLInstanceID that is carried in the RPL Packet Information (RPI), as discussed in
Section 11.2 of
RFC 6550, "Loop Avoidance and Detection". The RPI is transported by a RPL Option in the IPv6 Hop-By-Hop Options header [
RFC 6553].
A compression mechanism for the RPL packet artifacts that integrates the compression of IP-in-IP encapsulation and the Routing Header type 3 [
RFC 6554] with that of the RPI in a 6LoWPAN dispatch/header type is specified in [
RFC 8025] and [
RFC 8138].
Either way, the method and format used for encoding the RPLInstanceID is generalized to all 6TiSCH topological Instances, which include both RPL Instances and Tracks.
6TiSCH supports the PREOF operations of elimination and reordering of packets along a complex Track, but has no requirement about tagging a sequence number in the packet for that purpose. With 6TiSCH, the schedule can tell when multiple receive timeslots correspond to copies of a same packet, in which case the receiver may avoid listening to the extra copies once it has received one instance of the packet.
The semantics of the configuration enable correlated timeslots to be grouped for transmit (and receive, respectively) with 'OR' relations, and then an 'AND' relation can be configured between groups. The semantics are such that if the transmit (and receive, respectively) operation succeeded in one timeslot in an 'OR' group, then all the other timeslots in the group are ignored. Now, if there are at least two groups, the 'AND' relation between the groups indicates that one operation must succeed in each of the groups.
On the transmit side, timeslots provisioned for retries along a same branch of a Track are placed in the same 'OR' group. The 'OR' relation indicates that if a transmission is acknowledged, then retransmissions of that packet should not be attempted for the remaining timeslots in that group. There are as many 'OR' groups as there are branches of the Track departing from this node. Different 'OR' groups are programmed for the purpose of replication, each group corresponding to one branch of the Track. The 'AND' relation between the groups indicates that transmission over any of the branches must be attempted regardless of whether a transmission succeeded in another branch. It is also possible to place cells to different next-hop routers in the same 'OR' group. This allows routing along multipath Tracks, trying one next hop and then another only if sending to the first fails.
On the receive side, all timeslots are programmed in the same 'OR' group. Retries of the same copy as well as converging branches for elimination are merged, meaning that the first successful reception is enough and that all the other timeslots can be ignored. An 'AND' group denotes different packets that must all be received and transmitted over the associated transmit groups within their respective 'AND' or 'OR' rules.
As an example, say that we have a simple network as represented in
Figure 16, and we want to enable PREOF between an ingress node I and an egress node E.
+-+ +-+
-- |A| ------ |C| --
/ +-+ +-+ \
/ \
+-+ +-+
|I| |E|
+-+ +-+
\ /
\ +-+ +-+ /
-- |B| ------- |D| --
+-+ +-+
The assumption for this particular problem is that a 6TiSCH node has a single radio, so it cannot perform two receive and/or transmit operations at the same time, even on two different channels.
Say we have six possible channels, and at least ten timeslots per slotframe.
Figure 17 shows a possible schedule whereby each transmission is retried two or three times, and redundant copies are forwarded in parallel via A and C on the one hand, and B and D on the other, providing time diversity, spatial diversity through different physical paths, and frequency diversity.
slotOffset 0 1 2 3 4 5 6 7 8
+----+----+----+----+----+----+----+----+----+
channelOffset 0 | | | | | | |B->D| | | ...
+----+----+----+----+----+----+----+----+----+
channelOffset 1 | |I->A| |A->C|B->D| | | | | ...
+----+----+----+----+----+----+----+----+----+
channelOffset 2 |I->A| | |I->B| |C->E| |D->E| | ...
+----+----+----+----+----+----+----+----+----+
channelOffset 3 | | | | |A->C| | | | | ...
+----+----+----+----+----+----+----+----+----+
channelOffset 4 | | |I->B| | |B->D| | |D->E| ...
+----+----+----+----+----+----+----+----+----+
channelOffset 5 | | |A->C| | | |C->E| | | ...
+----+----+----+----+----+----+----+----+----+
This translates into a different slotframe for every node, providing its waking and sleeping times and the channelOffset to be used when awake.
Figure 18 shows the corresponding slotframe for node A.
slotOffset 0 1 2 3 4 5 6 7 8
+----+----+----+----+----+----+----+----+----+
operation |rcv |rcv |xmit|xmit|xmit|none|none|none|none| ...
+----+----+----+----+----+----+----+----+----+
channelOffset | 2 | 1 | 5 | 1 | 3 |N/A |N/A |N/A |N/A | ...
+----+----+----+----+----+----+----+----+----+
The logical relationship between the timeslots is given by
Table 2:
+------+--------------------+-----------------------+
| Node | rcv slotOffset     | xmit slotOffset       |
+======+====================+=======================+
| I    | N/A                | (0 OR 1) AND (2 OR 3) |
+------+--------------------+-----------------------+
| A    | (0 OR 1)           | (2 OR 3 OR 4)         |
+------+--------------------+-----------------------+
| B    | (2 OR 3)           | (4 OR 5 OR 6)         |
+------+--------------------+-----------------------+
| C    | (2 OR 3 OR 4)      | (5 OR 6)              |
+------+--------------------+-----------------------+
| D    | (4 OR 5 OR 6)      | (7 OR 8)              |
+------+--------------------+-----------------------+
| E    | (5 OR 6 OR 7 OR 8) | N/A                   |
+------+--------------------+-----------------------+

Table 2
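The 'AND'-of-'OR' rules of Table 2 can be evaluated mechanically, as in the runnable sketch below.

def delivery_done(or_groups, success):
    # success: set of slotOffsets where the operation (acknowledged
    # transmit, or correct receive) succeeded.  The packet is done
    # only when every 'AND'-related 'OR' group has one success.
    return all(any(slot in success for slot in group)
               for group in or_groups)

# Node I transmit rule from Table 2: (0 OR 1) AND (2 OR 3)
assert delivery_done([(0, 1), (2, 3)], success={1, 2})
assert not delivery_done([(0, 1), (2, 3)], success={0, 1})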