
Content for TR 22.876, Word version 19.1.0

7.3  5GS assisted distributed joint inference for 3D object detection

7.3.1  Description

Distributed joint inference leverages multiple nodes (e.g. UEs) to each provide inference results, so that aggregating those results leads to better overall performance.
When a 3rd party vehicle wants to obtain information about a certain vehicle 1 (e.g. position, width, length, height, profile, orientation), the data the 3rd party vehicle can collect by itself is limited. For example, as shown in Figure 7.3.1-1, the 3rd party vehicle, which is directly behind vehicle 1, can obtain sensor data only on the tail of vehicle 1. Through inference with its local 3D object detection model it can identify the width and height of vehicle 1, but it has no way to determine the length of vehicle 1, let alone a more precise vehicle profile, orientation, etc. In addition, although the location of UE1 can be obtained through equipment such as the 3rd party vehicle's radar, the positioning accuracy achievable from the information of a single vehicle is limited.
Figure 7.3.1-1: Joint inference among multiple vehicles for 3D object detection
All of the above problems can be solved through multi-vehicle joint inference. The performance gain from joint inference is shown in Figure 7.3.1-2. It clearly shows that although the green vehicle's local model produces a false orientation and location, the global map (i.e. the red box) can correct the orientation and location error for the green vehicle based on the aggregated results of three vehicles (i.e. the blue, green and yellow boxes) [23].
Figure 7.3.1-2: Distributed joint learning leads to a better inference performance
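As an illustration of how aggregating per-vehicle detections can correct a single vehicle's error, the sketch below fuses several 3D bounding-box estimates of the same object with a confidence-weighted average (yaw averaged via sine/cosine to handle angle wrap-around). This is only a minimal sketch: the actual aggregation method of [23] is not reproduced here, and all values are invented for the example.

```python
import math

def aggregate_boxes(detections):
    """Fuse 3D boxes of the same object reported by several vehicles.

    Each detection holds the box centre (x, y, z), extents (length,
    width, height), a yaw angle in radians, and a confidence score.
    A confidence-weighted average is used; yaw is averaged via its
    sine/cosine so that angles near the wrap-around fuse correctly.
    """
    total = sum(d["score"] for d in detections)
    fused = {}
    for key in ("x", "y", "z", "length", "width", "height"):
        fused[key] = sum(d[key] * d["score"] for d in detections) / total
    sin_sum = sum(math.sin(d["yaw"]) * d["score"] for d in detections)
    cos_sum = sum(math.cos(d["yaw"]) * d["score"] for d in detections)
    fused["yaw"] = math.atan2(sin_sum, cos_sum)
    return fused

# Three vehicles observe the same car from different sides; the vehicle
# directly behind it (low score) badly underestimates the length.
views = [
    {"x": 10.1, "y": 5.0, "z": 0.0, "length": 2.0, "width": 1.9,
     "height": 1.5, "yaw": 0.30, "score": 0.3},   # rear view, poor length
    {"x": 10.0, "y": 5.1, "z": 0.0, "length": 4.6, "width": 1.8,
     "height": 1.5, "yaw": 0.02, "score": 0.9},   # side view
    {"x": 9.9,  "y": 5.0, "z": 0.0, "length": 4.5, "width": 1.8,
     "height": 1.5, "yaw": 0.00, "score": 0.8},   # oblique view
]
box = aggregate_boxes(views)
print(f"fused length {box['length']:.2f} m, yaw {box['yaw']:.2f} rad")
```

The fused length lands near the side-view estimates rather than the rear view's bad guess, mirroring how the red box in Figure 7.3.1-2 corrects the green vehicle's error.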

7.3.2  Pre-conditions

As shown in Figure 7.3.2-1, a vehicle accident has occurred somewhere and the road is congested. Alice's self-driving vehicle wants to know the complete situation of the accident (i.e. the exact location and shape of the accident vehicle, including its length, width and height), so that it can use the inference result for real-time driving decisions. Alice's vehicle needs to find and establish connections with vehicles located at different positions relative to the accident vehicle, and collect their inference results to perform accurate 3D object detection of the accident vehicle.
Though the accident vehicle cannot move due to its collision with a barricade ahead, its electronic devices still work as normal.
Figure 7.3.2-1: Joint inference among multiple vehicles for accident vehicle detection

7.3.3  Service Flows

  1. Alice's vehicle wants to know the complete situation of the accident vehicle, so it sends a request to the 5G system to select vehicles located at different positions/directions and within a certain distance of the accident vehicle.
  2. Based on the candidate UE list of vehicles located at different positions relative to the accident vehicle, Alice's vehicle establishes a direct device connection to each of the selected vehicles and transmits the 3D object detection model to them over those connections.
  3. Alice's vehicle receives the inference results that the 3D object detection model produces on the selected vehicles and aggregates them to acquire a highly accurate 3D reconstruction of the accident vehicle.
  4. Alice's vehicle may also share the aggregated result with other vehicles or with the application server, so that they can use it to assist their own intelligent driving as well.
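The steps above can be sketched as follows. The classes and methods (`FiveGS`, `DirectLink`, `request_candidate_ues`, etc.) are hypothetical stand-ins for the 5G system and PC5 direct-connection capabilities, not real 3GPP interfaces, and the "inference results" are canned per-vehicle values.

```python
class DirectLink:
    """Hypothetical stand-in for a direct device (PC5) connection."""
    def __init__(self, ue):
        self.ue = ue

    def send_model(self, model):
        # Step 2: transmit the 3D object detection model to the peer UE.
        self.ue["model"] = model

    def receive_inference_result(self):
        # Step 3: each vehicle would run the model locally; this stub
        # simply returns a canned length estimate in metres.
        return self.ue["observation"]


class FiveGS:
    """Hypothetical stand-in for the 5G system's UE-selection service."""
    def __init__(self, ues):
        self.ues = ues

    def request_candidate_ues(self, target, max_distance_m):
        # Step 1: select vehicles within a certain distance of the
        # accident vehicle (1-D positions, for simplicity).
        return [u for u in self.ues
                if abs(u["position"] - target) <= max_distance_m]


def joint_inference(network, accident_position, model):
    candidates = network.request_candidate_ues(accident_position, 100)
    results = []
    for ue in candidates:
        link = DirectLink(ue)                            # Step 2
        link.send_model(model)
        results.append(link.receive_inference_result())  # Step 3
    # Step 3 (cont.): aggregate, here by simple averaging.
    aggregated = sum(results) / len(results)
    # Step 4 (sharing with other vehicles or a server) is omitted.
    return aggregated


ues = [
    {"position": 10, "observation": 4.6},
    {"position": 30, "observation": 4.4},
    {"position": 500, "observation": 9.9},  # too far away, excluded
]
length = joint_inference(FiveGS(ues), accident_position=20, model="3d-det")
print(f"aggregated vehicle length: {length:.2f} m")
```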

7.3.4  Post-conditions

Thanks to the candidate UE list provided by the 5G system and the inference results provided by other vehicles, Alice's vehicle can accurately assess the situation at the accident scene and plan a path that effectively avoids the road congestion.

7.3.5  Existing features partly or fully covering the use case functionality

Clause 6.40.2 of TS 22.261 contains a requirement for the FL scenario, i.e. that the 5GS assists a 3rd party in determining FL members. However, that interaction is between a 5GS NF and the 3rd party. In this distributed joint inference use case, the communication is between the 3rd party UE and UE1 or other UEs. The existing 5G system cannot help find the suitable UEs requested by the 3rd party UE for communication via direct device connection.
Subject to user consent, operator policy and regulatory requirements, the 5G system shall be able to expose information (e.g. candidate UEs) to an authorized 3rd party to assist the 3rd party to determine member(s) of a group of UEs (e.g. UEs of a FL group).

7.3.6  Potential New Requirements needed to support the use case

7.3.6.1  Potential Functionality Requirements

[P.R.7.3-001]
Subject to user consent, operator policy, and the 3rd party's request, the 5G system shall be able to provide and configure the QoS applied to a group of UEs communicating via direct device connection (e.g. as part of a joint AIML inference task).
[P.R.7.3-002]
Subject to user consent, operator policy and 3rd party's request, the 5G system shall be able to provide information of certain UEs (e.g. located in a specific location) to an authorized 3rd party (e.g. to assist a joint AIML task using direct device communication).

7.3.6.2  Potential KPI Requirements

According to [24], typical 3D object detection model sizes and transmission KPIs are listed in the table below.
Model Type        | Max allowed DL end-to-end latency | Experienced data rate (PC5) | Model size | Communication service availability
PointPillar       | 1 s | 0.14 Gbit/s | 18 MByte | 99.99 %
SECOND            | 1 s | 0.16 Gbit/s | 20 MByte | 99.99 %
PV-RCNN           | 1 s | 0.4 Gbit/s  | 50 MByte | 99.99 %
Voxel R-CNN (Car) | 1 s | 0.22 Gbit/s | 28 MByte | 99.99 %
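The experienced data rate column follows from delivering the model within the latency budget: rate = model size × 8 / latency (1 MByte = 8 Mbit). A quick consistency check, whose results match the table after rounding to two digits:

```python
# Model sizes in MByte, taken from the table above.
models = {
    "PointPillar": 18,
    "SECOND": 20,
    "PV-RCNN": 50,
    "Voxel R-CNN (Car)": 28,
}
latency_s = 1.0  # max allowed DL end-to-end latency from the table

# Required rate in Gbit/s: size [MByte] * 8 [bit/Byte] / 1000 / latency.
rates = {name: round(size * 8 / 1000 / latency_s, 2)
         for name, size in models.items()}

for name, rate in rates.items():
    print(f"{name}: {rate} Gbit/s")
```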
