This clause introduces the case of a cognitive immersive service, in which media and other interactions are sent uplink so that a cognitive server can derive semantic perception.
The following use cases are relevant to this scenario:
- UC#4: AR guided assistant at remote location (industrial services)
- UC#5: Police Critical Mission with AR
- UC#14: AR Streaming with Localization Registry
- UC#16: AR remote cooperation
- UC#20: AR IoT control
In this scenario, media captured by a UE may be sent to a cognitive server together with a request for semantic perception. The server processes the media, produces perception results, and returns the outputs to the UE. For example, a UE regularly scans its environment and sends the captured media, such as video, depth maps, sensor output, and XR Spatial Description data (if SLAM and XR Spatial Compute processing is involved), to the cognitive server. The server identifies each component in the environment and sends the perception outputs back to the UE, which may render them as textual or visual overlays. The server may also send XR Spatial Description data, such as spatial anchors and trackables, to the UE in order to facilitate rendering.
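The request/response exchange in this scenario can be summarized with a simple data model. The following Python sketch is illustrative only: the class, field, and function names (PerceptionRequest, PerceptionResponse, handle_request, and so on) are assumptions made for this example and do not correspond to any message format defined in this document.

# Illustrative sketch of the uplink/downlink exchange described above.
# All class, field, and function names are assumptions for illustration;
# they are not normative message definitions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SpatialDescription:
    """XR Spatial Description data, e.g. spatial anchors and trackables."""
    anchors: list[str] = field(default_factory=list)
    trackables: list[str] = field(default_factory=list)

@dataclass
class PerceptionRequest:
    """Uplink payload: media and metadata captured by the UE."""
    video_frame: bytes
    depth_map: Optional[bytes] = None
    sensor_output: Optional[dict] = None
    # Present only if SLAM / XR Spatial Compute processing runs on the UE.
    spatial_description: Optional[SpatialDescription] = None

@dataclass
class PerceptionResult:
    """One identified component, renderable as a textual or visual overlay."""
    label: str
    bounding_box: tuple[int, int, int, int]  # pixel coordinates in the frame
    overlay_text: Optional[str] = None

@dataclass
class PerceptionResponse:
    """Downlink payload: perception outputs, plus optional spatial data."""
    results: list[PerceptionResult]
    # Spatial anchors/trackables the server returns to facilitate rendering.
    spatial_description: Optional[SpatialDescription] = None

def handle_request(req: PerceptionRequest) -> PerceptionResponse:
    """Server side: identify components in the captured scene (stubbed)."""
    # A real server would run object detection / scene understanding here.
    results = [PerceptionResult(label="example-object",
                                bounding_box=(0, 0, 64, 64),
                                overlay_text="example-object")]
    return PerceptionResponse(results=results,
                              spatial_description=req.spatial_description)

# UE side: send a captured frame uplink and render the returned overlays.
response = handle_request(PerceptionRequest(video_frame=b"..."))
for result in response.results:
    print(result.label, result.bounding_box)

The sketch separates the perception results from the optional XR Spatial Description data, mirroring the clause above: overlays can be rendered from the results alone, while anchors and trackables, when present, help the UE place those overlays in the scene.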