
TR 38.743
Study on enhancements for Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN

V19.0.0 (2024-09), 13 pages
Rapporteur: Mr. Chen, Jiajun (ZTE Corporation)


1  Scope

The present document describes and investigates new AI/ML based use cases, i.e. Network Slicing and Coverage and Capacity Optimization, together with their corresponding solutions, and provides an initial analysis of the Rel-18 leftovers.

2  References

The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
  • References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
  • For a specific reference, subsequent revisions do not apply.
  • For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1]  TR 21.905: "Vocabulary for 3GPP Specifications".
[2]  TS 38.300: "NR; NR and NG-RAN Overall Description".
[3]  TS 38.401: "NG-RAN; Architecture description".
[4]  TR 37.817: "Study on enhancement for Data Collection for NR and EN-DC".

3  Definitions of terms, symbols and abbreviations

3.1  Terms

For the purposes of the present document, the terms given in TR 21.905 and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905.

3.2  Symbols

For the purposes of the present document, the following symbols apply:

3.3  Abbreviations

For the purposes of the present document, the abbreviations given in TR 21.905 and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905.

4  Use cases and Solutions

4.1  AI/ML based Network Slicing

4.1.1  Use case description

Support of network slicing in NG-RAN is defined in TS 38.300.
The NG-RAN plays a key role in taking mobility, load balancing and Radio Resource Management decisions for the purpose of meeting the target requirements derived from the SLA of each supported network slice.
An AI/ML function can analyse network-level and UE-level performance metrics in order to take optimal resource management and mobility decisions for network slicing and meet these requirements.

4.1.2  Solutions and standard impacts

4.1.2.1  Locations for AI/ML Model Training and AI/ML Model Inference

The following solutions can be considered for supporting AI/ML-based network slicing:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
In case of CU-DU split architecture, the following solutions are possible:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
  • AI/ML Model Training and Model Inference are both located in the gNB-CU.

4.1.2.2  Input data of AI/ML based Network Slicing

To predict the optimized network slicing decisions, a gNB may need the following information as input data for AI/ML-based network slicing (an illustrative sketch of such an input record is given after the list):
From local node:
  • Measured/Predicted radio resource status per slice
  • Measured/Predicted slice available capacity
  • Legacy predicted UE trajectory
From neighbouring gNBs:
  • Measured/Predicted radio resource status per slice
  • Measured/Predicted slice available capacity
From the UE:
  • UE measurement report (e.g. RSRP, RSRQ, SINR measurements), including cell-level and beam-level UE measurements
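
As an illustration only, the input data listed above can be thought of as a structured record per inference occasion. The following Python sketch is a non-normative assumption used to make the list concrete; the type and field names do not correspond to any XnAP/F1AP information elements or stage-3 definitions.

```python
# Illustrative sketch only: field names are assumptions for readability and do
# not correspond to any XnAP/F1AP IEs or 3GPP ASN.1 definitions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class SliceLoad:
    """Per-slice load information (measured or predicted)."""
    s_nssai: str                  # slice identifier (S-NSSAI), shown here as a plain string
    radio_resource_status: float  # e.g. PRB usage ratio in [0.0, 1.0]
    available_capacity: float     # remaining capacity indication in [0.0, 1.0]


@dataclass
class UeMeasurementReport:
    """Cell-level and beam-level UE measurements."""
    cell_rsrp_dbm: float
    cell_rsrq_db: float
    cell_sinr_db: float
    beam_rsrp_dbm: Dict[int, float] = field(default_factory=dict)  # SSB index -> RSRP


@dataclass
class SlicingModelInput:
    """Input data collected by a gNB for AI/ML-based network slicing."""
    local_slice_load: List[SliceLoad]                 # measured/predicted, local node
    neighbour_slice_load: Dict[str, List[SliceLoad]]  # per neighbour gNB ID
    ue_reports: List[UeMeasurementReport]
    predicted_ue_trajectory: Optional[List[str]] = None  # legacy (Rel-18) cell-based trajectory
```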

4.1.2.3  Output data of AI/ML based Network Slicing

The AI/ML-based network slicing model in a gNB can generate the following information as output (an illustrative inference interface is sketched after the list):
  • Predicted radio resource status per slice
  • Predicted slice available capacity
  • Resource management decisions for resources within RRM policies (used by gNB internally)
  • Slice aware mobility decisions (used by gNB internally)
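
As an illustration only, the following non-normative Python sketch groups the listed outputs and shows how they could be produced by the inference step located in the gNB (or gNB-CU). It reuses the types of the input sketch in clause 4.1.2.2; all names are assumptions.

```python
# Illustrative sketch only: the class and function names are assumptions and do
# not represent a normative interface. SliceLoad and SlicingModelInput are the
# types sketched for the input data in clause 4.1.2.2.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SlicingModelOutput:
    predicted_slice_load: List["SliceLoad"]     # predicted radio resource status per slice
    predicted_slice_capacity: Dict[str, float]  # S-NSSAI -> predicted available capacity
    rrm_decisions: Dict[str, float]             # per-slice resource shares within RRM policies (gNB-internal)
    slice_aware_mobility_decisions: List[str]   # e.g. candidate target cells per UE (gNB-internal)


def infer_slicing_decisions(model, model_input: "SlicingModelInput") -> SlicingModelOutput:
    """Run AI/ML Model Inference in the gNB (or gNB-CU in a split architecture)."""
    # The model itself may have been trained in the OAM or in the gNB(-CU),
    # per the deployment options in clause 4.1.2.1.
    return model.predict(model_input)
```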

4.1.2.4  Feedback of AI/ML based Network Slicing

To optimize the performance of the AI/ML-based network slicing model, the following feedback can be collected from gNBs (a minimal feedback-monitoring sketch is given after the list):
  • Measured Radio resource status per slice
  • Measured Slice available capacity
  • Legacy UE performance feedback for those UEs handed over from the source gNB
  • Finer granularity UE performance feedback for those UEs handed over from the source gNB to determine UE performance for a certain slice in use by a certain UE.
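
As an illustration only, a minimal non-normative sketch of how the measured feedback could be compared with earlier predictions to monitor model performance (and, for example, trigger retraining in the OAM) is given below; the threshold value and function names are assumptions.

```python
# Illustrative sketch only: a minimal way a gNB could use the listed feedback to
# monitor per-slice prediction accuracy; threshold and names are assumptions.
from typing import Dict


def slice_capacity_prediction_error(predicted: Dict[str, float],
                                    measured: Dict[str, float]) -> Dict[str, float]:
    """Absolute error between predicted and measured slice available capacity, per S-NSSAI."""
    return {s: abs(predicted[s] - measured[s]) for s in predicted if s in measured}


def needs_retraining(errors: Dict[str, float], threshold: float = 0.2) -> bool:
    """Flag the model for retraining (e.g. in the OAM) if any slice error exceeds the threshold."""
    return any(err > threshold for err in errors.values())
```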

4.1.2.5  Potential standard impacts

The following standard impacts are identified for the subsequent Rel-19 normative work, compared with what was specified during Rel-18.
Xn interface:
  • Enhanced existing procedure to collect predicted information between gNBs:
    • Predicted radio resource status per slice
    • Predicted slice available capacity

4.2  AI/ML based Coverage and Capacity Optimization

4.2.1  Use case description

The objective of the NR Coverage and Capacity Optimization (CCO) function is to detect and resolve or mitigate CCO issues. An NG-RAN node may autonomously adjust within, and switch among, coverage configurations. When a change is executed, an NG-RAN node may notify its neighbour NG-RAN nodes, including the list of cells and SSBs whose coverage has been modified.
In the legacy CCO solution, a reactive approach is used: when the gNB (the gNB-CU in case of CU-DU split architecture) detects a CCO issue that negatively impacts network and UE performance, i.e. after the issue has already occurred, the gNB (the gNB-DU in case of CU-DU split architecture) attempts to resolve or mitigate it.
With AI/ML based CCO, a more proactive approach is used to prevent (or limit at an early stage) the rise of a CCO issue and the consequent degradation of network (and UE) performance.

4.2.2  Solutions and standard impacts

4.2.2.1  Locations for AI/ML Model Training and AI/ML Model Inference

The following solutions can be considered for supporting AI/ML-based CCO:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB.
  • AI/ML Model Training and AI/ML Model Inference are both located in the gNB.
In case of CU-DU split architecture, the following solutions are possible:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU.
  • AI/ML Model Training and Model Inference are both located in the gNB-CU.

4.2.2.2  Input data of AI/ML based CCO

For a proactive prediction and resolution of a CCO issue, a gNB may need the following information as input data for AI/ML-based CCO:
From local node:
  • Measured/Predicted radio resource status
  • Current CCO State
From neighbouring gNBs:
  • Measured/Predicted radio resource status
From the UE:
  • UE measurement report (e.g. RSRP, RSRQ, SINR measurements), including cell-level and beam-level UE measurements
  • SON Reports (e.g., RLF, CEF, RA)

4.2.2.3  Output data of AI/ML based CCO

The AI/ML-based CCO model in a gNB can generate the following information as output (a combined input/output sketch is given after the list):
  • Predicted CCO issue (including predicted affected cells/beams)
  • Future CCO State
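
As an illustration only, the following non-normative Python sketch combines the input data of clause 4.2.2.2 and the output data listed above into a single inference interface; all names and enumerations are assumptions and do not correspond to any stage-3 definition.

```python
# Illustrative sketch only: field names and enumerations are assumptions used to
# make the input/output lists concrete; they are not XnAP/F1AP definitions.
# UeMeasurementReport is the type sketched in clause 4.1.2.2.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SonReport:
    report_type: str  # e.g. "RLF", "CEF", "RA"
    cell_id: str
    details: Dict[str, str] = field(default_factory=dict)


@dataclass
class CcoModelInput:
    local_radio_resource_status: float                  # measured/predicted, local node
    current_cco_state: int                              # index of the coverage configuration in use
    neighbour_radio_resource_status: Dict[str, float]   # per neighbour gNB ID
    ue_reports: List["UeMeasurementReport"]             # cell- and beam-level measurements
    son_reports: List[SonReport]


@dataclass
class CcoModelOutput:
    predicted_cco_issue: bool
    affected_cells: List[str]
    affected_beams: List[int]   # e.g. SSB indexes
    future_cco_state: int       # coverage configuration to apply proactively


def predict_cco(model, model_input: CcoModelInput) -> CcoModelOutput:
    """AI/ML Model Inference for CCO at the gNB (gNB-CU in a split architecture)."""
    return model.predict(model_input)
```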

4.2.2.4  Feedback of AI/ML based CCO

To optimize the performance of the AI/ML-based CCO model, the following feedback can be collected from gNBs:
  • Measured radio resource status
  • Legacy UE performance feedback for those UEs handed over from the source gNB
  • SON Reports (e.g., RLF, CEF, RA)

4.2.2.5  Potential standard impacts

The following standard impacts are identified for the subsequent Rel-19 normative work, compared with what was specified during Rel-18, based on the CCO framework described in TS 38.300 and TS 38.401. A schematic, non-normative sketch of the listed information elements is given at the end of this clause.
Xn interface:
  • Enhanced existing procedure to collect information between gNBs:
    • Future CCO State together with an associated coverage modification cause
    • Timing Information
F1 interface:
  • Predicted CCO Issue (including predicted affected cells/beams) from gNB-CU to gNB-DU.
  • Future CCO State from gNB-DU to gNB-CU
  • Timing Information
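
As an illustration only, a schematic, non-normative representation of the information listed above is sketched below; the field names are assumptions and do not correspond to XnAP/F1AP ASN.1.

```python
# Schematic, non-normative representation of the listed information elements;
# the field names are assumptions, not XnAP/F1AP ASN.1 definitions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FutureCcoStateInfo:
    """Exchanged between gNBs over Xn, and sent from the gNB-DU to the gNB-CU over F1."""
    future_cco_state: int
    coverage_modification_cause: str       # e.g. a predicted coverage/capacity issue
    timing_information: Optional[str] = None  # e.g. when the future state is expected to apply


@dataclass
class PredictedCcoIssueInfo:
    """Sent from the gNB-CU to the gNB-DU over F1."""
    affected_cells: List[str]
    affected_beams: List[int]
    timing_information: Optional[str] = None
```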

5  Rel-18 Leftovers and solutions

5.1  Mobility optimization for NR-DC

5.1.1  Use case description

Mobility in NR-DC can be optimized by means of AI/ML.
Mobility Optimization for NR-DC is studied assuming AI/ML Model Inference at the MN only. The use case is limited to Dual Connectivity; Conditional Dual Connectivity procedures are out of scope.

5.1.2  Potential Standard impacts

The Dual Connectivity procedures (e.g., SN Addition, MN-initiated SN Change) are enhanced to trigger the collection of measured UE performance.

5.2  Split architecture support for Rel-18 use cases

5.2.1  Use case description

The split architecture should be enhanced to support the Rel-18 use cases, e.g. Load Balancing, Energy Saving, and Mobility Optimization.
In case of CU-DU split architecture, the following solutions are possible:
  • AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the gNB-CU(-CP);
  • AI/ML Model Training and Model Inference are both located in the gNB-CU(-CP).

5.2.2  Potential Standard impacts

The following standard impacts are identified for the subsequent Rel-19 normative work, compared with what was specified during Rel-18:
  • The details of signalling measured UE performance metrics from the gNB-DU to the gNB-CU-CP and/or from the gNB-CU-UP to the gNB-CU-CP need further discussion during the normative phase.
  • Measured Energy Cost from gNB-DU to gNB-CU.

5.3  Energy saving enhancements

5.3.1  Use case description

A description of the AI/ML-based Network Energy Saving use case as such is presented in TR 37.817. Based on the conclusions in TR 37.817, a metric called Energy Cost was introduced during Rel-18 normative work; this metric is defined as an index representing the energy consumption at the NG-RAN node and can be exchanged between NG-RAN nodes over the Xn interface upon request with a per NG-RAN node granularity. However, a node-level Energy Cost metric might be too coarse to provide a good understanding of the energy impact of AI/ML-based Network Energy Saving actions in NG-RAN nodes handling multiple cells, so possible solutions to improve Energy Cost granularity could be beneficial.

5.3.2  Solutions and standard impacts

During the Rel-18 normative work, the Energy Cost (EC) metric was defined at NG-RAN node-level granularity, even though the AI/ML-based Energy Saving actions assumed in Rel-18, e.g. cell switch-off, were at cell-level granularity. Enhancing the node-level EC to a finer granularity could be beneficial, assuming that such enhancements enable network-level energy saving.
The following approaches to improve the EC granularity were discussed (an illustrative report structure covering them is sketched after the list):
  1. EC per group of cells, based on energy consumption measurements associated with the hardware serving the group of cells
  2. EC per cell, based on energy consumption estimations
  3. EC per one or more HO events, based on energy consumption estimations
  4. EC per gNB-DU, based on energy consumption measurements associated with the hardware serving the gNB-DU
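
As an illustration only, the following non-normative Python sketch shows one way the four granularity options could coexist in a single EC report alongside the Rel-18 node-level EC; the structure and field names are assumptions.

```python
# Illustrative sketch only: one possible representation of finer-granularity
# Energy Cost (EC) alongside the Rel-18 node-level EC index; the structure and
# field names are assumptions, not a normative definition.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EnergyCostReport:
    node_level_ec: int                                               # Rel-18 per-NG-RAN-node EC index
    ec_per_cell_group: Dict[str, int] = field(default_factory=dict)  # approach 1: hardware-based measurement
    ec_per_cell: Dict[str, int] = field(default_factory=dict)        # approach 2: estimation
    ec_per_ho_event: List[int] = field(default_factory=list)         # approach 3: estimation
    ec_per_gnb_du: Dict[str, int] = field(default_factory=dict)      # approach 4: hardware-based measurement
```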

5.4  Continuous MDT collection targeting the same UE across RRC states

5.4.1  Use Case Description

The problem of continuous data collection for management-based MDT can be described as follows: a UE in the NG-RAN can be configured with management-based Logged MDT when in RRC_IDLE or RRC_INACTIVE state and with management-based Immediate MDT when in RRC_CONNECTED state. Differently from signalling-based MDT, in management-based MDT a UE is not uniquely identified in the MDT activation. Therefore, when a UE transitions to RRC_CONNECTED state from RRC_IDLE/RRC_INACTIVE (during which Logged MDT data have been collected), or when a UE is handed over between gNBs, the network has no standardized means to select the same UE again for subsequent MDT data collection.
The data collection continuity in this scenario can be split into the two problems below (a non-normative correlation sketch for Problem B is given after the list):
  • Problem A (measurement continuity): how to ensure that the same UE keeps collecting MDT measurements within the same RRC state and across different RRC states.
  • Problem B (trace correlation): how to ensure that the TCE which eventually receives the MDT reports can associate the received logged and immediate MDT measurements to a continuous data collection period from the same UE.
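
As an illustration only, a non-normative sketch of the trace-correlation task (Problem B) is given below: logged and immediate MDT reports are grouped at the TCE by a common correlation key. The key itself is hypothetical, since the means of providing such an identifier is precisely what lacks standardized support today.

```python
# Non-standardized sketch for Problem B only: grouping logged and immediate MDT
# reports at the TCE by a hypothetical correlation key linking reports from the
# same UE; how such a key would be provided is not specified today.
from collections import defaultdict
from typing import Dict, Iterable, List, NamedTuple


class MdtReport(NamedTuple):
    correlation_key: str  # hypothetical identifier linking reports from the same UE
    mdt_type: str         # "logged" (RRC_IDLE/RRC_INACTIVE) or "immediate" (RRC_CONNECTED)
    payload: dict


def correlate_reports(reports: Iterable[MdtReport]) -> Dict[str, List[MdtReport]]:
    """Associate logged and immediate MDT reports to a continuous collection period per UE."""
    grouped: Dict[str, List[MdtReport]] = defaultdict(list)
    for report in reports:
        grouped[report.correlation_key].append(report)
    return grouped
```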

5.4.2  Potential Standard impacts

Potential solutions addressing the problems above can be discussed during the normative phase.

5.5  Multiple-hop UE trajectory across gNBs

5.5.1  Use case description

In Rel-18, the cell-based UE trajectory prediction is limited to the first-hop target NG-RAN node. A multi-hop predicted UE trajectory across gNBs consists of a list of cells, belonging to one or more gNBs, where the UE is expected to connect, with the cells listed in chronological order.
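
As an illustration only, the following non-normative Python sketch represents a multi-hop predicted UE trajectory as a chronologically ordered list of hops; the type and field names are assumptions.

```python
# Illustrative sketch only: a multi-hop predicted UE trajectory as a
# chronologically ordered list of cells, each belonging to some gNB.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TrajectoryHop:
    gnb_id: str
    cell_id: str
    expected_entry_time: Optional[str] = None  # timing information, if available


# Cells are listed in the order the UE is expected to visit them; the first hop
# corresponds to the Rel-18 (single-hop) prediction toward the target NG-RAN node.
PredictedUeTrajectory = List[TrajectoryHop]
```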

6  Conclusion

The new use cases and the Rel-18 leftover cases follow the Rel-18 AI/ML framework.
The following new use cases are recommended by RAN3 to be specified in the Rel-19 normative phase:
  • AI/ML-based Network Slicing
  • AI/ML-based Coverage and Capacity Optimization
For each use case above, it is recommended to take the corresponding "Solutions and standard impacts" clause (clause 4.1.2 and clause 4.2.2) as the basis for the normative work.
The following Rel-18 leftovers are recommended by RAN3 to be specified in the Rel-19 normative phase:
  • Mobility Optimization for NR-DC
  • Split architecture support for Rel-18 use cases
  • Continuous MDT collection targeting the same UE across RRC states
The corresponding descriptions and potential standard impacts for each Rel-18 leftover above shall be taken as the baseline during the normative phase.

Change history

