Today, most automated driving vehicles rely on a single controller, the vehicle itself, which senses and controls on its own
[3]. Since 3GPP Rel-14, LTE-based support for V2V features has been developed and tested through collaborative participation from the automotive and communication industries. However, it is still challenging to use automated driving functionalities in general unstructured settings if the controlling features are based on a single controller that has no knowledge of how neighbouring vehicles will behave.
Consequently, the automated driving system must allocate an extra safety margin to the planned trajectory, which reduces traffic flow and causes inefficiency in a large-scale network of vehicles where non-V2X vehicles and V2X-enabled vehicles possibly coexist. This problem is not limited to automated driving of road vehicles - the same applies to the operation of
"automated manoeuvring robots" in unstructured settings. Without cooperation, the field of perception of a vehicle or robot is limited to the local coverage of its onboard sensors, in both relative distance and relative angle.
As technology enablers to address these problems of guaranteeing safety and traffic efficiency, sensor information sharing
[9] and manoeuvre sharing
[8] are being studied in SAE. The Tactile Internet for V2N (potentially with assistance from an edge cloud instead of general cloud servers) or V2V can enable an ultra-fast and reliable exchange of highly detailed sensor data sets between nearby vehicles, along with haptic information on trajectory
[3]. It would also be one of the key factors for so-called
"cooperative perception and manoeuvring" functionalities
[10]: planning cooperative manoeuvres among multiple automated driving vehicles (or robots), including plan creation, target point generation and target point risk assessment. With Tactile Internet connectivity, vehicles can perform cooperative perception of the driving environment based on fast fusion of high-definition local and remote maps collected by the onboard sensors of the surrounding vehicles (e.g., video streams from cameras, radar, or lidar). This augments the sensing range of each vehicle and extends the time horizon for situation prediction, with substantial benefits for safety
[3]. The onboard sensors in today's automated driving vehicles generate data flows of up to 8 Gbit/s
[3]. All these requirements call for new network architectures interconnecting vehicles and infrastructure over ultra-low-latency networks based on the Tactile Internet for cooperative driving services
[3].
This use case is related to the support of (1) cooperative perception and manoeuvring and (2) extension of sensing range for cooperative automated driving scenarios using the Tactile Internet, with some examples of moving robots (e.g., local delivery robots). Manoeuvring and perception information obtained via haptic and multi-modal communications (also known as skillset sharing) is shared in a timely manner between the controller and the controlee.
Four robots, S1, S2, C1 and C2, each perform delivery tasks from one geographic point to another.
Robot UEs S1 and S2 are automated driving robots with standalone steering/control, manoeuvring in a crowded village.
Robot UEs C1 and C2 are automated driving robots with steering/control and manoeuvre/skillset sharing functionalities, manoeuvring in another crowded village.
The roads that the robots use in these villages are unstructured (i.e., no lane separators, no lane markings, etc.) and present the same conditions for the robots to move in.
The total time that S1 and S2 spend is much greater than the total time that C1 and C2 spend.
The total energy consumption (e.g., to accelerate from a low speed level to X) for S1 and S2 is greater than that for C1 and C2.
Figure 5.4.4-1 provides a simplified illustration of the speed-change behaviour without (left section) and with (right section) real-time multi-modal communication for interactive haptic control and feedback (skillset sharing).
V2X performance requirements are found in TS 22.185 and TS 22.186, (e)CAV requirements in TS 22.104, and VIAPA requirements in TS 22.263.
[PR 5.4.6-1]
The 5G system shall be able to support real-time multi-modal communication for interactive haptic control and feedback with KPIs as summarized in Table 5.4.6-1.
| Use cases | Max allowed end-to-end latency (NOTE 2) | Service bit rate: user-experienced data rate | Reliability | Message size (byte) | # of UEs | UE speed | Service area | Remarks (NOTE 1) |
|---|---|---|---|---|---|---|---|---|
| Skillset sharing for low-dynamic robotics (including teleoperation), controller to controlee | 5-10 ms | 0.8-200 kbit/s (with compression) | [99.999%] | n DoFs: (2n)-(8n) (n = 1, 3, 6) | - | Stationary or Pedestrian | 100 km² | Haptic (position, velocity) |
| Skillset sharing for low-dynamic robotics (including teleoperation), controlee to controller | 5-10 ms | 0.8-200 kbit/s (with compression) | [99.999%] | n DoFs: (2n)-(8n) (n = 1, 10, 100) | - | Stationary or Pedestrian | 100 km² | Haptic feedback |
| | 10 ms | 1-100 Mbit/s | [99.999%] | 1500 | - | Stationary or Pedestrian | 100 km² | Video |
| | 10 ms | 5-512 kbit/s | [99.9%] | 50 | - | Stationary or Pedestrian | 100 km² | Audio |
| Highly dynamic/mobile robotics, controller to controlee | 1-5 ms | 16 kbit/s-2 Mbit/s (without haptic compression encoding); 0.8-200 kbit/s (with haptic compression encoding) | [99.999%] (with compression); [99.9%] (w/o compression) | n DoFs: (2n)-(8n) (n = 1, 3, 6) | - | High-dynamic | TBD | Haptic (position, velocity) |
| Highly dynamic/mobile robotics, controlee to controller | [1-5 ms] | 0.8-200 kbit/s | [99.999%] (with compression); [99.9%] (w/o compression) | n DoFs: (2n)-(8n) (n = 1, 10, 100) | - | High-dynamic | TBD | Haptic feedback |
| | 1-10 ms | 1-10 Mbit/s | [99.999%] | 2000-4000 | - | High-dynamic | 4 km² | Video |
| | 1-10 ms | 100-500 kbit/s | [99.9%] | 100 | - | High-dynamic | 4 km² | Audio |
NOTE 1:
Haptic feedback is typically a haptic signal, such as force level, torque level, vibration and texture.
NOTE 2:
The latency requirements are expected to be satisfied even when multi-modal communication for skillset sharing is via an indirect network connection (i.e., relayed by a UE-to-network relay).
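The haptic message sizes in Table 5.4.6-1 are parameterized by the number of degrees of freedom (DoFs): each message carries 2 to 8 bytes per DoF. The required bit rate follows from the message size and the haptic update rate. A minimal sketch of that arithmetic, assuming a 1 kHz update rate (a common value for haptic control loops, not stated in the table):

```python
def haptic_bitrate_kbps(dofs, bytes_per_dof, update_rate_hz=1000):
    """Uncompressed haptic stream bit rate in kbit/s.

    Per the table, a message carries `bytes_per_dof` (2..8) bytes for each
    of `dofs` degrees of freedom; the 1 kHz default update rate is an
    assumption for illustration.
    """
    message_bytes = dofs * bytes_per_dof
    return message_bytes * 8 * update_rate_hz / 1000

# 1 DoF at 2 bytes/DoF -> 16 kbit/s, matching the table's lower bound
# for highly dynamic robotics without haptic compression encoding.
print(haptic_bitrate_kbps(1, 2))   # 16.0

# 6 DoFs at 8 bytes/DoF -> 384 kbit/s uncompressed; meeting the
# 0.8-200 kbit/s range would then require haptic compression encoding.
print(haptic_bitrate_kbps(6, 8))   # 384.0
```

This also makes clear why the controlee-to-controller direction, with up to n = 100 DoFs of haptic feedback, relies on compression to stay within the stated service bit rates.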