
Content for  TS 28.105  Word version:  18.4.0


4a  AI/ML management functionality and service framework |R18|

4a.0  ML model lifecycle

AI/ML techniques are widely used in the 5GS (including the 5GC, NG-RAN, and the management system). The generic AI/ML operational workflow in the lifecycle of an ML model is depicted in Figure 4a.0-1.
Figure 4a.0-1: ML model lifecycle
The ML model lifecycle includes training, emulation, deployment, and inference. These steps are briefly described below:
  • ML model training: training, including initial training and re-training, of an ML model or a group of ML models. It also includes validation of the ML model to evaluate its performance on the training data and validation data. If the validation result does not meet the expectation (e.g., the variance is not acceptable), the ML model needs to be re-trained.
  • ML model testing: testing of a validated ML model to evaluate the performance of the trained ML model on the testing data. If the testing result meets the expectations, the ML model may proceed to the next step. If the testing result does not meet the expectations, the ML model needs to be re-trained.
  • AI/ML inference emulation: running an ML model for inference in an emulation environment. The purpose is to evaluate the inference performance of the ML model in the emulation environment prior to applying it to the target network or system.
  • ML model deployment: ML model deployment includes the ML model loading process (a.k.a. a sequence of atomic actions) to make a trained ML model available for use at the target AI/ML inference function.
    ML model deployment may not be needed in some cases, for example when the training function and inference function are co-located.
  • AI/ML inference: performing inference using a trained ML model by the AI/ML inference function. The AI/ML inference may also trigger model re-training or update based on e.g., performance monitoring and evaluation.
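The lifecycle above can be sketched as a driver loop in which failed validation or testing sends the model back to re-training before it reaches emulation, deployment, and inference. This is an illustrative sketch only: the callables (`train`, `validate`, `test`, `emulate`, `deploy`, `infer`) and the `max_retrains` bound are assumptions made for the example, not interfaces defined by this specification.

```python
def run_lifecycle(train, validate, test, emulate, deploy, infer, max_retrains=3):
    """Drive one ML model through the lifecycle of Figure 4a.0-1.

    Every argument is a caller-supplied callable; the names are
    illustrative shorthand, not 3GPP-defined interfaces.
    """
    history = []
    for _ in range(max_retrains + 1):
        model = train()              # initial training or re-training
        history.append("train")
        if not validate(model):      # unacceptable variance -> re-train
            continue
        history.append("validate")
        if not test(model):          # a testing failure also triggers re-training
            continue
        history.append("test")
        break
    else:
        raise RuntimeError("model never passed validation and testing")
    emulate(model)                   # evaluate inference in an emulation environment
    history.append("emulate")
    deploy(model)                    # may be skipped when training and inference are co-located
    history.append("deploy")
    infer(model)                     # inference at the target AI/ML inference function
    history.append("infer")
    return history
```

Inference-side performance monitoring would then feed back into `train` as a re-training trigger, closing the loop described in the last bullet.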

4a.1  Functionality and service framework for ML model training

An ML training function, playing the role of ML training MnS producer, may consume various data for ML model training purposes.
As illustrated in Figure 4a.1-1, the ML model training capability is provided via the ML training MnS, in the context of SBMA, to the authorized consumer(s) by the ML training MnS producer.
Figure 4a.1-1: Functional overview and service framework for ML model training
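As a rough illustration of the consumer/producer split described above, the sketch below models an ML training MnS producer that accepts training requests from authorized consumers and tracks their status. The class and method names and the request record format are invented for this example; the normative SBMA interface is defined by the NRM fragments and operations of this specification, not by this code.

```python
class MLTrainingMnSProducer:
    """Hypothetical stand-in for an ML training function acting as
    ML training MnS producer (names are illustrative, not normative)."""

    def __init__(self):
        self._requests = []

    def request_training(self, consumer_id, expected_capability):
        """Accept a training request from an authorized MnS consumer."""
        request_id = len(self._requests)
        self._requests.append({
            "id": request_id,
            "consumer": consumer_id,
            "capability": expected_capability,
            "status": "NOT_STARTED",   # the producer updates this as training progresses
        })
        return request_id

    def status(self, request_id):
        """Let the consumer query the progress of its request."""
        return self._requests[request_id]["status"]
```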
The internal business logic of ML model training leverages current and historical relevant data, including the sources listed below, to monitor the networks and/or services relevant to the ML model, prepare the data, and trigger and conduct the training:
  • Performance Measurements (PM) as per TS 28.552, TS 32.425 and Key Performance Indicators (KPIs) as per TS 28.554.
  • Trace/MDT/RLF/RCEF data, as per TS 32.422.
  • QoE and service experience data as per TS 28.405.
  • Analytics data offered by NWDAF as per TS 23.288.
  • Alarm information and notifications as per TS 28.532.
  • CM information and notifications.
  • MDA reports from MDA MnS producers as per TS 28.104.
  • Management data from non-3GPP systems.
  • Other data that can be used for training.
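A minimal sketch of how the internal business logic might aggregate such heterogeneous sources into one training dataset, tagging each record with its origin so the training logic can weight or filter by source. The source names and the dict-based record format are assumptions made for the example, not 3GPP-defined interfaces.

```python
def collect_training_data(sources):
    """Merge records from each data source into one dataset,
    tagging every record with the name of its producer."""
    dataset = []
    for name, fetch in sources.items():
        for record in fetch():
            dataset.append({"source": name, **record})
    return dataset

# Hypothetical fetchers standing in for the producers listed above
# (e.g. PM/KPI counters per TS 28.552/28.554, MDA reports per TS 28.104).
sources = {
    "pm_kpi": lambda: [{"kpi": "dl_throughput", "value": 42.0}],
    "mda_report": lambda: [{"issue": "coverage_hole", "cell": "cell-1"}],
}
dataset = collect_training_data(sources)
```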

4a.2  AI/ML functionalities management scenarios (relation with managed AI/ML features)

The ML training function and/or AI/ML inference function can be located in the RAN domain MnS consumer (e.g. a cross-domain management system), in the domain-specific management system (i.e. a management function for RAN or CN), or in a Network Function.
For MDA, the ML training function can be located inside or outside the MDAF. The AI/ML inference function is in the MDAF.
For the NWDAF, the ML training function can be located in the MTLF of the NWDAF or in the management system, while the AI/ML inference function is in the AnLF.
For RAN, the ML training function and AI/ML inference function can both be located in the gNB, or the ML training function can be located in the management system while the AI/ML inference function is located in the gNB.
Therefore, several location scenarios may exist for the ML training function and AI/ML inference function.
Scenario 1:
The ML training function and AI/ML inference function are both located in the 3GPP management system (e.g. RAN domain management function). For instance, for RAN domain-specific MDA, the ML training function and AI/ML inference function can be located in the RAN domain-specific MDAF, as depicted in Figure 4a.2-1.
Figure 4a.2-1: Management for RAN domain-specific MDAF
Similarly, for CN domain-specific MDA the ML training function and AI/ML inference function can be located in CN domain-specific MDAF.
Scenario 2:
For RAN AI/ML capabilities, the ML training function is located in the 3GPP RAN domain-specific management function, while the AI/ML inference function is located in the gNB. See Figure 4a.2-2.
Figure 4a.2-2: Management where the ML model training is located in the RAN domain management function and AI/ML inference is located in the gNB
Scenario 3:
The ML training function and AI/ML inference function are both located in the gNB. See Figure 4a.2-3.
Figure 4a.2-3: Management where the ML model training and AI/ML inference are both located in the gNB
Scenario 4:
For NWDAF, the ML training function and AI/ML inference function are both located in the NWDAF. See Figure 4a.2-4.
Figure 4a.2-4: Management where the ML model training and AI/ML inference are both located in the CN
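The four scenarios can be tabulated together with a simple check for whether a separate ML model deployment step is needed, since clause 4a.0 notes that deployment may be skipped when training and inference are co-located. The string labels and node-level granularity are illustrative simplifications (e.g. Scenario 4 is treated as co-located even though training sits in the MTLF and inference in the AnLF within the NWDAF), not 3GPP-defined identifiers.

```python
# Location of the ML training and AI/ML inference functions per scenario,
# at node-level granularity (illustrative labels only).
SCENARIOS = {
    1: {"training": "domain-specific MDAF", "inference": "domain-specific MDAF"},
    2: {"training": "RAN domain management function", "inference": "gNB"},
    3: {"training": "gNB", "inference": "gNB"},
    4: {"training": "NWDAF", "inference": "NWDAF"},
}

def deployment_needed(scenario):
    """Deployment may be skipped when training and inference are
    co-located (clause 4a.0); otherwise the trained model must be
    loaded at the target AI/ML inference function."""
    s = SCENARIOS[scenario]
    return s["training"] != s["inference"]
```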

5  Void

