AI/ML techniques are widely used in the 5GS (including the 5GC, NG-RAN, and management system). The generic AI/ML operational workflow in the lifecycle of an ML entity is depicted in Figure 4a.0-1.
The workflow involves four main operational phases, namely the training, emulation, deployment, and inference phases. The main tasks of each phase are briefly described below:
Training phase:
-	ML model training: training, including initial training and re-training, of an ML model or a group of ML models. It also includes validation of the ML entity, i.e. evaluation of its performance on the training data and validation data. If the validation result does not meet the expectation (e.g. the variance is not acceptable), the ML model associated with that entity needs to be re-trained. ML model training is the initial step of the workflow.
-	ML testing: testing of the validated ML entity to evaluate the performance of the trained ML model on testing data. If the testing result meets the expectation, the ML entity may proceed to the next phase; otherwise, the ML model associated with that entity may need to be re-trained.
Emulation phase:
-	ML emulation: running an ML entity for inference in an emulation environment. The purpose is to evaluate the inference performance of the ML entity in the emulation environment prior to applying it to the target network or system.
Deployment phase:
-	ML entity loading: the process (i.e. a sequence of atomic actions) of making a trained ML entity available for use at the target AI/ML inference function.
The deployment phase may not be needed in some cases, for example when the training function and inference function are co-located.
Inference phase:
-	AI/ML inference: inference performed by the AI/ML inference function using a trained ML entity.
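The phase transitions described above, including the re-training loop on failed validation or testing and the optional skipping of the deployment phase when the training and inference functions are co-located, can be sketched as a simple state machine. This is an illustrative sketch only; the function and enum names are assumptions and are not defined by the specification:

```python
from enum import Enum, auto

class Phase(Enum):
    """Operational phases of the AI/ML workflow."""
    TRAINING = auto()
    EMULATION = auto()
    DEPLOYMENT = auto()
    INFERENCE = auto()

def next_phase(phase, validation_ok=True, testing_ok=True, co_located=False):
    """Return the next workflow phase for an ML entity.

    - If validation or testing fails, the entity stays in the
      training phase (the ML model is re-trained).
    - If the training and inference functions are co-located,
      the deployment phase is skipped.
    """
    if phase is Phase.TRAINING:
        if not (validation_ok and testing_ok):
            return Phase.TRAINING  # re-train the ML model
        return Phase.EMULATION
    if phase is Phase.EMULATION:
        return Phase.INFERENCE if co_located else Phase.DEPLOYMENT
    return Phase.INFERENCE  # deployment (or inference) leads to inference
```

For example, an entity whose testing result does not meet the expectation remains in the training phase, while a successfully tested entity proceeds to emulation.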
An ML training function, playing the role of the ML training MnS producer, may consume various data for ML training purposes.
As illustrated in Figure 4a.1-1, the ML training capability is provided by the ML training MnS producer to the authorized consumer(s) via the ML training MnS, in the context of SBMA.
The internal business logic of ML training leverages current and historical relevant data, including the types listed below, to monitor the networks and/or services where relevant to the ML model, prepare the data, and trigger and conduct the training:
-	Performance measurements (PM) as per TS 28.552 and TS 32.425, and key performance indicators (KPIs) as per TS 28.554.
-	Trace/MDT/RLF/RCEF data, as per TS 32.422.
-	QoE and service experience data as per TS 28.405.
-	Analytics data offered by NWDAF as per TS 23.288.
-	Alarm information and notifications as per TS 28.532.
-	CM information and notifications.
-	MDA reports from MDA MnS producers as per TS 28.104.
-	Management data from non-3GPP systems.
-	Other data that can be used for training.
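The data sources above can be thought of as a catalogue from which a training function selects inputs. The following sketch is purely illustrative: the key names and the selection helper are assumptions for readability, not a structure defined by 3GPP; only the source descriptions and TS references come from the list above:

```python
# Hypothetical catalogue of training data source types; the
# descriptions paraphrase the list in the text above.
TRAINING_DATA_SOURCES = {
    "PM": "Performance measurements (TS 28.552, TS 32.425) and KPIs (TS 28.554)",
    "Trace": "Trace/MDT/RLF/RCEF data (TS 32.422)",
    "QoE": "QoE and service experience data (TS 28.405)",
    "Analytics": "Analytics data offered by NWDAF (TS 23.288)",
    "FM": "Alarm information and notifications (TS 28.532)",
    "CM": "CM information and notifications",
    "MDA": "MDA reports from MDA MnS producers (TS 28.104)",
    "External": "Management data from non-3GPP systems",
}

def select_sources(keys):
    """Resolve requested source keys, rejecting unknown ones."""
    unknown = set(keys) - TRAINING_DATA_SOURCES.keys()
    if unknown:
        raise ValueError(f"unknown data sources: {sorted(unknown)}")
    return [TRAINING_DATA_SOURCES[k] for k in keys]
```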
The ML training function and/or AI/ML inference function can be located in the RAN domain MnS consumer (e.g. a cross-domain management system), in a domain-specific management system (i.e. a management function for the RAN or CN), or in a Network Function.
For MDA, the ML training function can be located inside or outside the MDAF, while the AI/ML inference function is located in the MDAF.
For NWDAF, the ML training function can be located in the NWDAF or in the management system, while the AI/ML inference function is located in the NWDAF.
For RAN, the ML training function and AI/ML inference function can both be located in the gNB, or the ML training function can be located in the management system while the AI/ML inference function is located in the gNB.
Therefore, several location scenarios may exist for the ML training function and AI/ML inference function.
Scenario 1:
The ML training function and AI/ML inference function are both located in the 3GPP management system (e.g. a RAN domain management function). For instance, for RAN domain-specific MDA, the ML training function and AI/ML inference function for MDA can be located in the RAN domain-specific MDAF, as depicted in Figure 4a.2-1.
Similarly, for CN domain-specific MDA, the ML training function and AI/ML inference function can be located in the CN domain-specific MDAF or in the cross-domain MDAF.
Scenario 2:
The ML training function is located in the 3GPP RAN domain-specific management function while the AI/ML inference function is located in the gNB. See Figure 4a.2-2.
Scenario 3:
The ML training function and AI/ML inference function are both located in the gNB. See Figure 4a.2-3.
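The three RAN location scenarios can be summarized in a small lookup structure. The mapping below only restates the scenario descriptions above; the dictionary form and key names are assumptions for illustration, not a 3GPP-defined data structure:

```python
# Illustrative summary of the RAN location scenarios; scenario
# numbering follows the text above.
RAN_LOCATION_SCENARIOS = {
    1: {"ml_training": "3GPP management system",
        "aiml_inference": "3GPP management system"},
    2: {"ml_training": "RAN domain-specific management function",
        "aiml_inference": "gNB"},
    3: {"ml_training": "gNB",
        "aiml_inference": "gNB"},
}

def functions_co_located(scenario):
    """True if training and inference share a location (scenarios 1 and 3)."""
    s = RAN_LOCATION_SCENARIOS[scenario]
    return s["ml_training"] == s["aiml_inference"]
```

As a design note, scenario 2 is the split case: a deployment phase (ML entity loading) is needed to make the trained entity available in the gNB, whereas in scenarios 1 and 3 the co-location of both functions may make that phase unnecessary.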