The present document specifies the Artificial Intelligence / Machine Learning (AI/ML) management capabilities and services for the 5GS where AI/ML is used, including management and orchestration (e.g. MDA, see 3GPP TS 28.104), 5G networks (e.g. NWDAF, see 3GPP TS 23.288) and NG-RAN (see 3GPP TS 38.300 and TS 38.401).
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
For a specific reference, subsequent revisions do not apply.
For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
For the purposes of the present document, the terms given in TR 21.905 and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905.
ML model:
a manageable representation of an ML model algorithm.
ML model training:
a process performed by an ML training function to take training data, run it through an ML model algorithm, derive the associated loss, adjust the parameterization of that ML model iteratively based on the computed loss, and generate the trained ML model.
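For illustration only (not part of the normative definition), the loss-driven iterative adjustment described above can be sketched as follows. This is a minimal example assuming a hypothetical one-parameter linear model trained with mean-squared-error loss and gradient descent; none of the names used are defined by the present document.

```python
# Illustrative sketch only: a minimal loss-driven training loop for a
# hypothetical one-parameter linear ML model.

training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

weight = 0.0            # parameterization of the ML model
learning_rate = 0.05

for epoch in range(200):                      # iterative adjustment
    total_loss = 0.0
    gradient = 0.0
    for x, y in training_data:
        prediction = weight * x               # run training data through the model algorithm
        error = prediction - y
        total_loss += error ** 2              # derive the associated loss
        gradient += 2 * error * x
    weight -= learning_rate * gradient / len(training_data)  # adjust parameters based on the computed loss

print(f"trained parameter: {weight:.3f}, final loss: {total_loss:.6f}")
```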
ML model initial training:
a process of training an initial version of an ML model.
ML model re-training:
a process of training a previous version of an ML model and generating a new version.
ML model joint training:
a process of training a group of ML models.
ML training function:
a logical function with ML model training capabilities.
ML model testing:
a process of testing an ML model using testing data.
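For illustration only, ML model testing as described above can be sketched as evaluating a trained ML model against testing data with a chosen performance metric. The metric (mean squared error) and all names below are hypothetical and not mandated by the present document.

```python
# Illustrative sketch only: testing a trained ML model using testing data.

trained_weight = 2.0                                   # parameter obtained from ML model training

testing_data = [(4.0, 8.1), (5.0, 9.8), (6.0, 12.2)]   # (input, expected output)

squared_errors = [(trained_weight * x - y) ** 2 for x, y in testing_data]
mean_squared_error = sum(squared_errors) / len(squared_errors)
print(f"test MSE: {mean_squared_error:.4f}")
```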
ML testing function:
a logical function with ML model testing capabilities.
AI/ML inference:
a process of running a set of input data through a trained ML model to produce a set of output data, such as predictions.
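For illustration only, AI/ML inference as described above amounts to applying a trained ML model to a set of input data to obtain a set of output data. The trained parameter value and function names below are hypothetical.

```python
# Illustrative sketch only: running a set of input data through a trained
# ML model to produce a set of output data (predictions).

trained_weight = 2.0                      # parameter obtained from ML model training

def ml_model(x: float) -> float:
    """Hypothetical trained ML model: maps one input sample to one prediction."""
    return trained_weight * x

input_data = [4.0, 5.5, 7.0]                       # set of input data
output_data = [ml_model(x) for x in input_data]    # set of output data (predictions)
print(output_data)                                 # [8.0, 11.0, 14.0]
```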
AI/ML inference function:
a logical function that employs trained ML model(s) to conduct inference.
AI/ML inference emulation:
running the inference process to evaluate the performance of an ML model in an emulation environment before deploying it into the target environment.
ML model deployment:
a process of making a trained ML model available for use in the target environment.
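For illustration only, ML model deployment as described above can be pictured as packaging a trained ML model artefact so that an inference function in the target environment can load and use it. The file name, file format and all names below are hypothetical and not specified by the present document.

```python
# Illustrative sketch only: making a trained ML model available for use in a
# target environment by serializing it and loading it at the inference side.

import json

trained_model = {"type": "linear", "weight": 2.0}       # output of ML model training

# Deployment step: package and transfer the trained model artefact.
with open("deployed_model.json", "w") as f:
    json.dump(trained_model, f)

# In the target environment, the AI/ML inference function loads the artefact.
with open("deployed_model.json") as f:
    loaded_model = json.load(f)

def predict(x: float) -> float:
    """The deployed ML model is now available for use in inference."""
    return loaded_model["weight"] * x

print(predict(3.0))                                     # 6.0
```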
For the purposes of the present document, the abbreviations given in TR 21.905 and TS 28.533 and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 and TS 28.533.
AI	Artificial Intelligence