
Content for  TR 26.928  Word version:  18.0.0

A.7  Use Case 6: Immersive Game Spectator Mode

Use Case Description:
Immersive Game Spectator Mode
The world championship in Fortnite ™ is happening and the 100 best players meet. Millions of people want to follow the game online and connect to the live game streaming. Many of them connect over a 5G connected HMD and follow the game. The users can change their in-game position by using controllers and body movement. Two types of positions are possible:
  • Getting the exact view of one of the participants
  • A spectator view independent of the player view
Other users follow on a 2D screen.
In an extension of the game, the spectators "interact" with the players and the scene in the sense that the players hear cheering and are rewarded by the presence of spectators, similar to a stadium experience.
The Twitch.TV ™ experience is also available for standalone 5G connected devices.
Categorization
Type:
VR
Degrees of Freedom:
6DoF
Delivery:
Streaming, Split
Device:
2D screen or HMD with a controller
Preconditions
  • An application is installed that permits the user to follow the game
  • The application uses existing HW capabilities on the device, including A/V decoders, rendering functionalities as well as sensors. Inside-out Tracking is available.
  • A serving architecture is available that provides access to the game
  • The game is rendered in the network
Requirements and QoS/QoE Considerations
  • Required QoS:
    • Depends on the architecture, but similar considerations apply as for Use Case 5 in clause A.6
  • Required QoE:
    • Being timely close to the live gaming experience; in the extension, presence needs to provide a live participation experience.
    • Fast reaction to manual controller information
    • Reaction to head movement within immersive limits
    • Providing sufficient AV experience to enable presence (see https://xinreality.com/wiki/Presence)
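The "immersive limits" on reaction to head movement can be illustrated as a motion-to-photon budget for split (network) rendering. The sketch below is illustrative only: the 20 ms target and every per-stage figure are assumed values, not numbers taken from this report.

```python
# Illustrative motion-to-photon budget for split rendering.
# The 20 ms target and all per-stage figures are assumptions.
BUDGET_MS = 20.0  # commonly cited motion-to-photon target for HMDs

stages_ms = {
    "sensor sampling": 1.0,
    "uplink (pose to edge)": 4.0,
    "remote render + encode": 8.0,
    "downlink (video to HMD)": 4.0,
    "decode + display": 2.0,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:26s} {ms:5.1f} ms")
verdict = "within" if total <= BUDGET_MS else "over"
print(f"{'total':26s} {total:5.1f} ms ({verdict} the {BUDGET_MS:.0f} ms budget)")
```

Such a breakdown makes explicit how little of the budget remains for network transport once capture, rendering and display are accounted for.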
Feasibility
Twitch statistics show that games are watched live at an impressive scale (https://sullygnome.com/):
  • Fortnite ™ has 1,412,048,240 watching hours over 365 days, which corresponds to more than 160,000 years of viewing
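The conversion behind the figure above is straightforward arithmetic:

```python
# Convert the 365-day Fortnite watching-hours total into years of viewing.
watch_hours = 1_412_048_240
hours_per_year = 24 * 365  # 8760 hours in a (non-leap) year
years = watch_hours / hours_per_year
print(f"{years:,.0f} years")  # roughly 161,000 years, i.e. more than 160,000
```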
Spectator Mode in VR Games
Similar considerations as for use case 5 in clause A.6.
Potential Standardization Status and Needs
The following aspects may require standardization work:
  • Coded Representation of Audio/Video Formats
  • Content Delivery Protocols
  • Decoding, rendering and sensor APIs
  • Network conditions that fulfill the QoS and QoE Requirement
  • Architectures and interfaces that permit such experiences

A.8  Use Case 7: Real-time 3D Communication

Use Case Description:
Real-time 3D Communication
Alice uses her mobile phone to start a video call with Bob. After the call starts, Alice sees a button on her screen that reads "3D". Alice clicks on the button to turn on the 3D mode on the video call app. Bob is able to see Alice's head in 3D and he uses his finger to rotate the view and look around Alice's head. Bob may not be able to see the full head or may see a reconstructed model of it (e.g. based on a pre-captured model). Alice is able to apply a selected set of 3D AR effects to her 3D head (e.g. putting a hat or glasses).
Categorization
Type:
3D Real-time communication, AR
Degrees of Freedom:
3DoF+
Delivery:
Conversational
Device:
Phone, AR glasses
Preconditions
  • Alice's phone is equipped with 3D capture capabilities, such as front depth camera
  • Bob's phone can receive a proper 3D object in real-time and apply the facial expressions during the rendering
Requirements and QoS/QoE Considerations
  • QoS:
    • conversational QoS requirements
    • sufficient bandwidth to deliver compressed 3D objects, e.g. point cloud compression
  • QoE:
    • Quality of the 3D object representation, level of details
    • Quality of facial expressions
The following requirements are considered:
  • High quality, very low delay 3D reconstruction of Head/Face, e.g. resolution of the 3D head representation measured in number of points or polygons.
Feasibility
Advances in image and video processing together with the proliferation of front-facing depth sensors are going to enable real-time reconstruction of the call participants. To run in real-time, extensive hardware capabilities are required, such as multi-GPU or Tensor Processing Unit (TPU) processing. These operations may be performed in the network, e.g. by a media gateway or a dedicated processing engine.
The representation of the call participant's head can be done in Point Cloud format to avoid the expensive Mesh reconstruction operation.
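A point cloud of the participant's head can be obtained directly from a front depth camera by back-projecting each depth pixel through the camera intrinsics, with no mesh reconstruction step. The sketch below illustrates this pinhole back-projection; the function name, frame size and intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions, not values from this report.

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a list of (X, Y, Z) points.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with zero depth (no measurement) are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Tiny illustrative frame: a 4x4 depth map of a flat surface 0.5 m away,
# with assumed intrinsics (fx, fy, cx, cy are made-up values).
depth = [[0.5] * 4 for _ in range(4)]
cloud = depth_to_point_cloud(depth, fx=5.0, fy=5.0, cx=1.5, cy=1.5)
print(len(cloud))  # 16 points, one per valid depth pixel
```

The resulting point list can then be fed to a point cloud compressor for transmission, which is where the bandwidth consideration above applies.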
Potential Standardization Status and Needs
The following aspects may require standardization work:
  • Extension of the MTSI service to support dynamic 3D objects and their formats

A.9  Use Case 8: AR guided assistant at remote location (industrial services)

Use Case Description:
AR guided assistant at remote location (industrial services)
  • Pedro is sent to fix a machine in a remote location.
  • Fixing the machine requires support from a remote expert.
  • Pedro puts on his AR 5G glasses and turns them on. He connects to the remote expert, who uses a tablet or touch-screen computer, or AR glasses with headphones and a gesture acquisition device that is connected and coordinated with the glasses.
  • The connection supports conversational audio and Pedro and the expert start a conversation.
  • Pedro's AR 5G glasses support accurate positioning and Pedro's position is shared live with the expert such that he can direct Pedro in the location.
  • The AR 5G glasses are equipped with a camera that also has depth capturing capability.
  • Pedro activates the camera such that the expert can see what Pedro is viewing.
  • The expert can provide guidance to Pedro via audio but also via overlaying graphics to the received video content, by activation of appropriate automatic object detection from his application, and via drawing of instructions as text and/or graphics and via overlaying additional video instructions. In the case that the expert uses AR glasses, the expert can also identify the depth of the video sent by Pedro and more accurately place the overlay text or graphics.
  • The overlaid text and/or graphics are sent to Pedro's glasses and they are rendered to Pedro such that he receives the visual guidance from the expert on where to find the machine and how to fix it.
  • Note: the video uplink from Pedro's glasses might be "jumpy" as Pedro moves his head. A second camera and corresponding video uplink, showing an overview of Pedro and the machinery or alternatively a detailed view of the machinery in operation, helps the expert when performing this type of service.
Categorization
Type:
AR
Degrees of Freedom:
2D video with dynamic AR rendering of graphics (6DoF)
Delivery:
Local, Streaming, Interactive, Conversational
Device:
5G AR Glasses, 5G touchscreen computer or tablet
Preconditions
Pedro has AR Glasses with the following features
  • 5G connectivity
  • Support for conversational audio
  • Positioning (possibly even indoor)
  • Camera with depth capturing
  • Rendering of overlay graphics
  • Rendering of overlay video
The remote expert has a tablet or touch-screen device (with peripherals) with the following features
  • Securely connected to Pedro
  • Headphones
  • Gesture acquisition
  • Composition tools to support Pedro
  • Access to a second stationary camera that provides video synchronized with Pedro's uplink traffic
Requirements and QoS/QoE Considerations
QoS:
  • conversational QoS requirements
  • sufficient bandwidth to deliver compressed 3D objects, e.g. point cloud compression
  • Accurate user location (indoor/outdoor) (to find machine or user location)
QoE:
  • For Pedro:
    • Fast and accurate rendering of overlay graphics and video
    • Synchronized rendering of audio and video/graphics
  • For remote expert:
    • High-quality depth video captured from Pedro's device
    • Synchronized and good video signal from second camera
    • Synchronized voice communication from Pedro
    • Accurate positioning information
Feasibility
Potential Standardization Status and Needs
  • 5G connectivity: Release-15 and Release-16 3GPP standardization
  • 5G positioning: ongoing 3GPP standardization - API required for sharing with low latency
  • MTSI regular audio between Pedro and expert
  • MTSI 2D video call from Pedro to expert, potentially a second video source as help for the expert.
  • Pedro received video + graphics (manuals, catalogs, manual indications from the expert, object detection) + overlaid video rendering either in the network or locally
  • Synchronization of different capturing devices
  • Coded Representations of 3D depth signals and delivery in MTSI context
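One way to realize the "synchronization of different capturing devices" item above is to timestamp frames from both cameras against a common capture clock and pair each glasses frame with the nearest overview frame. The sketch below is an illustrative nearest-timestamp pairing under that assumption, not a mechanism defined in this report; the function name and timing values are made up for illustration.

```python
def pair_by_timestamp(primary_ts, secondary_ts, max_skew):
    """Pair each primary-camera timestamp with the nearest secondary-camera
    timestamp, dropping pairs whose skew exceeds max_skew (same time unit).
    Both lists are assumed sorted; a moving index keeps this O(n + m).
    """
    pairs, j = [], 0
    for t in primary_ts:
        # advance while the next secondary frame is at least as close to t
        while (j + 1 < len(secondary_ts)
               and abs(secondary_ts[j + 1] - t) <= abs(secondary_ts[j] - t)):
            j += 1
        if secondary_ts and abs(secondary_ts[j] - t) <= max_skew:
            pairs.append((t, secondary_ts[j]))
    return pairs

# 30 fps glasses camera vs. a slightly offset overview camera (times in ms)
glasses = [0, 33, 66, 100, 133]
overview = [5, 38, 70, 104, 200]
print(pair_by_timestamp(glasses, overview, max_skew=10))
```

Frames whose counterpart drifts beyond the skew tolerance are simply dropped, so the expert's composed view never mixes captures that are too far apart in time.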
