A Collection of Glass-type AR/MR Use Cases

A.1 Use Case 16: AR remote cooperation

Use Case Name
AR remote cooperation
Description
As described in Annex A.9 of TR 26.928, a remote expert applies AR actions (e.g. overlaid graphics and drawn instructions) to the received local video streams. Compared with the scenario in TR 26.928, this use case highlights that both parties can share their own video streams and overlay 2D/3D objects on top of these streams.
For example, a car technician contacts the technical support department of the car component manufacturer by phone when he has difficulty repairing a customer's car. The technical support department can arrange for an engineer to help him remotely via real-time communication supporting AR.
The car technician makes a video call with the remote engineer, uses his camera to capture the damaged parts of the car and shares the video with the remote engineer in-call. He marks possible points of failure by drawing instructions on top of the video content so that the remote engineer can see the marks and discuss them in detail. Each party also has its own FOV on its side to examine the failure. Likewise, the remote engineer can overlay graphics and animated objects on the shared video content to adjust or correct the technician's operations. Furthermore, if the maintenance procedure is complex, the remote engineer can demonstrate it step by step, captured in real time, so that the local technician can follow the operations. Finally, they identify the problems and fix them. It is as if the remote engineer were standing beside the technician, discussing and solving the problems together.
In an extension to this use case, if the remote engineer enables the front-facing and back-facing cameras at the same time, the car technician can additionally see a small video stream captured by the remote engineer's front-facing camera, providing a more attentive experience.
Categorization
Type: AR, MR
Degrees of Freedom: 3DoF+, 6DoF
Delivery: Interactive, Conversational
Device: XR5G-P1, XR5G-A2, XR5G-A3, XR5G-A4, XR5G-A5, others
Preconditions
Both parties have devices with the following features:
  • Support for conversational audio and video
  • Collection and delivery of AR actions and viewer information
  • Simultaneous use of the front-facing and back-facing cameras
The network has the following features:
  • Rendering of overlaid AR actions and viewer information
  • Rendering of the superposition of virtual objects and real content across the different video streams
Requirements and QoS/QoE Considerations
QoS:
  • Conversational QoS requirements
  • Sufficient bandwidth to deliver compressed 2D/3D objects
QoE:
  • Synchronized rendering of overlaid AR actions and pose information (see the synchronization sketch after this list)
  • Synchronized rendering of audio and video
  • Fast and accurate positioning information
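The first two QoE bullets depend on time-aligning overlays, poses and media. As a minimal sketch of one way a receiving endpoint could do this, assuming a shared capture timestamp carried with frames, pose samples and AR actions (the types VideoFrame, PoseSample and ArAction and the 40 ms tolerance below are hypothetical illustrations, not anything defined by MTSI), consider the following Kotlin fragment:

    // Hypothetical types; only the timing fields matter for this sketch.
    data class VideoFrame(val captureTimeUs: Long)
    data class PoseSample(val captureTimeUs: Long, val position: FloatArray, val orientation: FloatArray)
    data class ArAction(val captureTimeUs: Long, val overlayId: String, val payload: ByteArray)

    class OverlaySynchronizer(private val maxSkewUs: Long = 40_000) {  // ~one frame at 25 fps
        private val pendingActions = ArrayDeque<ArAction>()
        private val pendingPoses = ArrayDeque<PoseSample>()

        fun onArAction(action: ArAction) { pendingActions.addLast(action) }
        fun onPose(pose: PoseSample) { pendingPoses.addLast(pose) }

        // Returns the AR actions and the nearest pose sample for this frame,
        // dropping actions that are already too old to be overlaid.
        fun matchFrame(frame: VideoFrame): Pair<List<ArAction>, PoseSample?> {
            while (pendingActions.isNotEmpty() &&
                   pendingActions.first().captureTimeUs < frame.captureTimeUs - maxSkewUs) {
                pendingActions.removeFirst()
            }
            val actions = pendingActions.filter {
                kotlin.math.abs(it.captureTimeUs - frame.captureTimeUs) <= maxSkewUs
            }
            val pose = pendingPoses.minByOrNull {
                kotlin.math.abs(it.captureTimeUs - frame.captureTimeUs)
            }
            return actions to pose
        }
    }

Keying every overlay and pose to the capture timestamp of the originating frame means an annotation drawn on a given frame is rendered with that frame (or dropped), rather than drifting relative to the video.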
Feasibility and Industry Practices
Enhancements in media processing for multiple video streams, from different parties and/or from the same party, together with all kinds of AR actions may be performed in the network (e.g. by a media gateway) in order to enable richer real-time experiences. Accordingly, extensive hardware capabilities (e.g. multi-GPU) are required.
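As a rough illustration of the overlay compositing such a network element might perform (this is not a 3GPP-defined media gateway function; decoding, re-encoding and stream handling are omitted, and plain JVM imaging is used only to keep the example self-contained), the Kotlin sketch below merges a transparent AR-action layer onto a decoded camera frame:

    import java.awt.image.BufferedImage

    // Composites an AR action layer (rendered to a transparent image) on top of a
    // decoded camera frame; a real gateway would then re-encode the result.
    fun compositeOverlay(cameraFrame: BufferedImage, overlayLayer: BufferedImage): BufferedImage {
        val out = BufferedImage(cameraFrame.width, cameraFrame.height, BufferedImage.TYPE_INT_ARGB)
        val g = out.createGraphics()
        g.drawImage(cameraFrame, 0, 0, null)   // real camera content from one party
        g.drawImage(overlayLayer, 0, 0, null)  // 2D/3D AR actions rendered with alpha
        g.dispose()
        return out
    }

Performing this step in the network offloads rendering of the combined view from the glasses, which is the motivation for the hardware capabilities mentioned above.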
Potential Standardization Status and Needs
  • MTSI audio and video call between both parties
  • Standardized format for AR actions (e.g. static and/or dynamic 2D/3D objects) and pose information (an illustrative structure is sketched after this list)
  • Delivery protocols for AR actions and pose information
  • Rendering of more than one video stream
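To make the format and delivery bullets more concrete, the Kotlin sketch below lists the kinds of fields such an AR action message might carry (session and stream identification, timing, object type, anchor pose, asset reference). All names and fields are hypothetical illustrations, not structures defined by 3GPP:

    // Hypothetical AR action / pose message; field names are illustrative only.
    enum class ArActionType { DRAWING, STATIC_2D, STATIC_3D, ANIMATED_3D, TEXT_ANNOTATION }

    data class Pose(
        val x: Float, val y: Float, val z: Float,                    // position
        val qx: Float, val qy: Float, val qz: Float, val qw: Float   // orientation quaternion
    )

    data class ArActionMessage(
        val sessionId: String,               // identifies the shared call/session
        val sourceParty: String,             // which party created the action
        val targetStreamId: String,          // which shared video stream it overlays
        val captureTimeUs: Long,             // timestamp of the frame the action refers to
        val type: ArActionType,
        val anchorPose: Pose,                // placement of the object in the shared view
        val assetUri: String? = null,        // reference to a 2D/3D asset, if not inline
        val inlinePayload: ByteArray? = null // e.g. encoded stroke points for drawings
    )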