
3GPP TR 26.998, Word version 18.0.0

6.4.4  Procedures and call flows

Figure 6.4.4-1 illustrates the generic procedure diagram for cognitive immersive services for both STAR-based and EDGAR-based UEs.
[Figure 6.4.4-1: Generic procedure for cognitive immersive service]
Prerequisites and assumptions:
  • The AR/MR Scene Manager includes immersive media rendering and scene graph handling functionalities.
  • The Media Player includes immersive content delivery and immersive media decoding functionalities.
  • The AR/MR Application in the UE is run by the user.
  • The UE initialises AR registration (it starts analysing the surroundings where the user/UE is located); namely, it (see the sketch after this list):
    1. captures its surroundings via camera(s),
    2. analyses where the device is located, and
    3. registers the device into the analysed surroundings.
  • AR/MR Application and AR/MR Application Provider have exchanged some information, such as device capability or content configuration, for content rendering. The exchange procedures for device capability and content configuration are FFS.
  • AR/MR Application Provider has established a Provisioning Session, and its detailed configuration has been exchanged.
  • AR/MR Application Provider has completed the setup for ingesting immersive content.
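As a minimal sketch of the AR registration step referenced above (all names are hypothetical stand-ins, not a 3GPP-defined API), the UE captures its surroundings, analyses where the device is located, and registers the device into those surroundings:

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in the local map frame
    orientation: tuple    # quaternion (x, y, z, w)

def capture_frames(camera_ids):
    # Stand-in for reading frames from the device camera(s).
    return [f"frame-from-{c}" for c in camera_ids]

def localise(frames) -> Pose:
    # Stand-in for visual localisation/SLAM over the captured frames.
    return Pose(position=(0.0, 0.0, 0.0), orientation=(0.0, 0.0, 0.0, 1.0))

def register_device(pose: Pose) -> str:
    # Stand-in for anchoring the device into the analysed surroundings.
    return f"anchor@{pose.position}"

anchor = register_device(localise(capture_frames(["rgb0", "rgb1"])))
print("AR registration complete:", anchor)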
Procedures:
Step 1.
The Scene Server context is established, and scene content is ingested by the Media AS.
Step 2.
Service Announcement is triggered by the AR/MR Application. The Service Access Information, including the Media Client entry, or a reference to the Service Access Information, is provided through the M8d interface.
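The following sketch illustrates the two delivery options in step 2. The field names are illustrative only, not the normative Service Access Information schema: either the Media Client entry is carried directly, or a reference must be dereferenced first.

service_access_info = {
    "provisioningSessionId": "example-session-001",
    "mediaClientEntry": "https://media-as.example.com/scenes/entry.json",
}
service_access_reference = {
    "serviceAccessInformationUrl": "https://af.example.com/sai/example-session-001",
}

def resolve_entry(sai: dict) -> str:
    # Prefer a directly provided entry; otherwise the reference must be
    # dereferenced first (the fetch itself is omitted in this sketch).
    if "mediaClientEntry" in sai:
        return sai["mediaClientEntry"]
    raise LookupError("fetch " + sai["serviceAccessInformationUrl"])

print(resolve_entry(service_access_info))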
Step 3.
Desired media content is selected.
Step 4.
Optionally, the Service Access information is acquired or updated.
Step 5.
The AR/MR Application initializes the Scene Manager with the entry point (full scene description) URL.
Step 6.
The Media Client establishes the transport session for the scene session between the Scene Manager in the UE and the Scene Server.
Step 7.
The Media Client requests and receives the full scene description. The entry point (scene description) is processed by the AR/MR Scene Manager, and a scene session is created.
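A minimal sketch of steps 5-7, assuming hypothetical interfaces: the Scene Manager is initialised with the entry-point URL, a transport session is opened, and the full scene description is fetched and processed into a scene session.

import json
import urllib.request

class SceneManager:
    def __init__(self, entry_point_url: str):
        self.entry_point_url = entry_point_url
        self.scene = None

    def create_scene_session(self, scene_description: dict):
        # Processing the entry point yields the active scene session.
        self.scene = scene_description
        return self.scene

def fetch_scene_description(url: str) -> dict:
    # Stand-in for the Media Client request over the transport session.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Usage (URL is illustrative, so the call is not executed here):
# manager = SceneManager("https://media-as.example.com/scenes/entry.json")
# manager.create_scene_session(fetch_scene_description(manager.entry_point_url))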
Step 8.
The AR/MR Scene Manager requests the creation of a new AR/MR session from the AR Runtime.
Step 9.
The AR Runtime creates a new AR/MR session.
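A sketch of steps 8-9 with an illustrative API (loosely modelled on OpenXR-style session creation; not a 3GPP-defined interface): the Scene Manager asks the AR Runtime for a new AR/MR session.

class ARRuntime:
    def create_session(self, mode: str):
        # The runtime allocates tracking/rendering resources for the session.
        return {"mode": mode, "state": "ready"}

runtime = ARRuntime()
session = runtime.create_session(mode="AR")
print("AR/MR session:", session)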
Scene session loop (steps 10-24): the UE sends the interaction and pose information, and receives and renders the updated scenes accordingly:
Step 10.
The latest sensor data (e.g. captured media) is acquired by the AR/MR Scene Manager and shared with the Media Client. The Media Client sends this information to the Media AS and AR/MR Application.
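A sketch of the step-10 uplink (names and transport are illustrative assumptions): the Scene Manager samples the latest pose/sensor data and the Media Client forwards it towards the Media AS and AR/MR Application.

import json
import time

def latest_sensor_data() -> dict:
    # Stand-in for querying the AR Runtime for the newest pose/captured media.
    return {"timestamp": time.time(),
            "pose": {"position": [0, 1.6, 0], "orientation": [0, 0, 0, 1]}}

def send_uplink(payload: dict) -> None:
    # Stand-in for the Media Client uplink (e.g. over HTTP or WebSocket).
    print("uplink:", json.dumps(payload))

send_uplink(latest_sensor_data())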
Step 11.
The AR/MR Application performs cognitive processing according to the sensor data from the UE. Depending on the outcome, the current scene may be updated or replaced.
Step 12.
When needed, one of the following steps:
Step 12a.
The Scene Server sends a new scene entry point to the AR/MR Scene Manager through the Media AS and Media Client (go to step 7), or
Step 12b.
The Scene Server sends a scene update (updating streams/objects) to the AR/MR Scene Manager through the Media AS and Media Client.
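The step-12 branch can be sketched as follows (the message shape is a hypothetical assumption): a server message either replaces the scene via a new entry point (back to step 7) or carries an incremental update to the current scene.

def handle_server_message(scene_manager, msg: dict):
    if msg.get("type") == "new_entry_point":       # step 12a
        return ("restart_at_step_7", msg["url"])
    if msg.get("type") == "scene_update":          # step 12b
        scene_manager.apply_update(msg["patch"])   # update streams/objects in place
        return ("updated", None)
    return ("ignored", None)

class SceneManagerStub:
    def apply_update(self, patch):
        print("applying scene update:", patch)

print(handle_server_message(SceneManagerStub(),
                            {"type": "scene_update", "patch": {"add": ["object-42"]}}))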
Step 13.
The AR/MR Scene Manager requests the creation of additional streaming sessions if needed for new media objects in the scene.
Step 14.
The Media Session Handler establishes the additional streaming sessions based on the received request.
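A sketch of steps 13-14 under assumed names: for each new media object in an updated scene, the Media Session Handler opens an additional streaming session on request of the Scene Manager.

class MediaSessionHandler:
    def __init__(self):
        self.sessions = {}

    def establish(self, object_id: str, url: str):
        # One streaming session per newly referenced media object.
        self.sessions[object_id] = {"url": url, "state": "established"}
        return self.sessions[object_id]

handler = MediaSessionHandler()
handler.establish("object-42", "https://media-as.example.com/obj42/manifest")
print(handler.sessions)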
Streaming session (steps 15-18): the transport sessions for media objects are established and the media pipelines are configured:
Step 15.
For the required media content, the Media Client establishes the transport session(s) to acquire the delivery manifest(s).
Step 16.
The Media Client requests and receives the delivery manifest(s) from the Media AS.
Step 17.
The Media Client processes the delivery manifest(s); for example, it determines the number of transport sessions needed for media acquisition. The Media Client is expected to use the delivery manifest information to initialize the media pipelines for each media stream.
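A sketch of step 17 with an illustrative manifest shape (not a normative MPD or TS 26.512 structure): the Media Client derives the set of transport sessions from the delivery manifest and initialises one media pipeline per declared stream.

delivery_manifest = {
    "streams": [
        {"id": "video-main", "codec": "hvc1", "url": "https://media-as.example.com/v"},
        {"id": "audio-main", "codec": "mp4a", "url": "https://media-as.example.com/a"},
    ]
}

def init_pipelines(manifest: dict) -> dict:
    pipelines = {}
    for stream in manifest["streams"]:
        # One decoding/rendering pipeline per declared media stream.
        pipelines[stream["id"]] = {"codec": stream["codec"], "source": stream["url"]}
    return pipelines

print(f"{len(init_pipelines(delivery_manifest))} transport session(s) needed")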
Step 18.
The AR/MR Scene Manager and the Media Client configure the rendering and delivery media pipelines.
Step 19.
The Media Client establishes the transport session(s) to acquire the media content.
Media session loop (steps 20-24): streaming, decoding and rendering of the media components:
Step 20.
The Media Client requests the media data according to the delivery manifest processed, possibly taking into account pose information (e.g., viewport dependent streaming).
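The pose-aware request in step 20 can be sketched as below (a toy selection rule under assumed names, not an actual viewport-dependent streaming algorithm): tiles covering the current viewport are requested at high quality, the rest at low quality.

def select_segments(manifest_streams, pose):
    yaw = pose.get("yaw", 0.0)
    # Toy viewport rule (angle wrap-around ignored for brevity):
    # front-facing tiles at high quality, the rest at low quality.
    for s in manifest_streams:
        s["quality"] = "high" if abs(yaw - s.get("center_yaw", 0.0)) < 90 else "low"
    return manifest_streams

tiles = [{"id": "tile-front", "center_yaw": 0.0}, {"id": "tile-back", "center_yaw": 180.0}]
print(select_segments(tiles, {"yaw": 10.0}))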
Step 21.
The Media Client receives the media data and triggers the media rendering pipeline(s), including the possible registration of AR content into the real world (depending on the device type).
Step 22.
The Media Client decodes and processes the media data. For encrypted media data, the Media Client may also perform decryption.
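A sketch of the step-22 ordering with placeholder transforms (the XOR "decryption" is NOT real content protection, just a stand-in; a real client would use e.g. CENC decryption): encrypted media data is decrypted before decoding, clear data goes straight to the decoder.

from typing import Optional

def process_media_data(data: bytes, key: Optional[bytes] = None) -> bytes:
    if key is not None:
        data = decrypt(data, key)   # only for encrypted media data
    return decode(data)

def decrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder stand-in for content decryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decode(data: bytes) -> bytes:
    # Placeholder for the actual media decoder output (decoded frames).
    return data

print(process_media_data(b"\x01\x02", key=b"\xff"))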
Step 23.
The Media Client passes the media data to the AR/MR Scene Manager.
Step 24.
The XR Spatial Compute Pipeline is executed as specified in clause 4.3.3.
Step 25.
The AR scene data and XR Spatial Compute data are combined for composition and rendering.
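A sketch of the step-25 composition (hypothetical structures; the transform name is illustrative): decoded AR scene data and the XR Spatial Compute output, such as the device-to-world transform, are combined into one composition pass per display frame.

def compose_frame(scene_objects, spatial_compute):
    anchor = spatial_compute["device_to_world"]
    # Place every scene object relative to the registered world anchor.
    return [{"object": o, "transform": anchor} for o in scene_objects]

frame = compose_frame(["virtual-avatar", "caption-panel"],
                      {"device_to_world": "T_world_device"})
print(f"composited {len(frame)} object(s)")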