For the purposes of the present document, the terms given in TR 21.905 and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905.
ML model: According to TS 28.105, a mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
ML model lifecycle: The lifecycle of an ML model, also known as the ML model operational workflow, consists of a sequence of ML operations for a given ML task or job (such a job can be an analytics task or a VAL automation task). This definition is aligned with the 3GPP definition of ML model lifecycle in TS 28.105.
ML model training: According to TS 28.105, ML model training includes capabilities of an ML training function or service to take data, run it through an ML model, derive the associated loss, and adjust the parameterization of that ML model based on the computed loss.
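EXAMPLE 1:	The following Python sketch is illustrative only and not part of any 3GPP specification; all function and variable names are hypothetical. It shows one possible realization of the training capability described above: take data, run it through an ML model (here, a simple linear model), derive the associated loss, and adjust the model parameterization based on the computed loss.

    import numpy as np

    def train(inputs, targets, learning_rate=0.01, epochs=100):
        weights = np.zeros(inputs.shape[1])      # initial ML model parameterization
        for _ in range(epochs):
            predictions = inputs @ weights       # run the data through the ML model
            errors = predictions - targets
            loss = np.mean(errors ** 2)          # derive the associated loss
            gradient = 2 * inputs.T @ errors / len(targets)
            weights -= learning_rate * gradient  # adjust parameterization from the loss
        return weights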
ML model inference: According to TS 28.105, ML model inference includes capabilities of an ML model inference function that employs an ML model and/or AI decision entity to conduct inference.
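EXAMPLE 2:	The following Python sketch is illustrative only; all names are hypothetical. It shows an inference function that employs a trained ML model (here, its weight vector) to conduct inference on new input data, with no parameter updates.

    import numpy as np

    def infer(weights, new_input):
        return new_input @ weights               # forward pass only; no training

    # Usage: infer(trained_weights, np.array([0.5, 1.0, -0.2]))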
AI/ML intermediate model: In federated learning, members train models over multiple rounds; an intermediate model is a model that has not yet completed the required number of training rounds and/or does not yet meet the requirements of the federated training.
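EXAMPLE 3:	The following Python sketch is illustrative only; names and thresholds are hypothetical. It expresses the intermediate-model condition above: a model remains intermediate while it has not completed the required training rounds and/or has not met the federated training requirements (here, a target accuracy).

    def is_intermediate(completed_rounds, required_rounds, accuracy, target_accuracy):
        return completed_rounds < required_rounds or accuracy < target_accuracy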
AIMLE service: An AIMLE service is an AIMLE capability that assists in performing or enabling one or more AI/ML operations.
AI/ML client: an application layer entity (also referred to as ML client) which is an AI/ML endpoint and performs client-side operations (e.g. related to the ML model lifecycle). Such an AI/ML client can be a VAL client or an AIML enabler client and may be configured, for example, to provide ML model training and inference locally (e.g. at the VAL UE side).
AIMLE client set identifier: an identifier of the set of selected AIMLE clients.
AI/ML server: an application layer entity which is an AI/ML endpoint and performs server-side operations (e.g. related to the ML model lifecycle). Such an AI/ML server can be a VAL server or an AIML enablement server.
FL member: An FL member or participant is an entity which has a role in the FL process. An FL member can be an FL client performing ML model training or an FL server performing aggregation/collaboration for the FL process.
FL client: An FL member which locally trains the ML model as requested by the FL server. Such FL client functionality can reside on the network side (e.g. an AIMLE server with FL client capability) or on the device side (e.g. an AIMLE client with FL client capability).
FL server: An FL member which generates the global ML model by aggregating local model information from FL clients.
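EXAMPLE 4:	The following Python sketch is illustrative only; it assumes FedAvg-style weighted averaging, which is one possible aggregation method and is not mandated by these definitions. FL clients locally train the ML model as requested by the FL server, and the FL server generates the global ML model by aggregating the local model information.

    import numpy as np

    def local_training(global_weights, inputs, targets, lr=0.01):
        # FL client: one local gradient step starting from the global model
        errors = inputs @ global_weights - targets
        return global_weights - lr * 2 * inputs.T @ errors / len(targets)

    def fl_round(global_weights, client_datasets):
        # FL server: aggregate local models into the global ML model,
        # weighting each FL client by its local dataset size (FedAvg)
        local_models, sizes = [], []
        for inputs, targets in client_datasets:
            local_models.append(local_training(global_weights, inputs, targets))
            sizes.append(len(targets))
        total = sum(sizes)
        return sum(w * (n / total) for w, n in zip(local_models, sizes))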
Split AI/ML operation pipeline: A split AI/ML operation pipeline is a workflow for ML model inference in which AI/ML endpoints are organized and collaborate to process ML models in sequential stages, where processing at each stage involves ML model inference on the output of the previous stage.
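EXAMPLE 5:	The following Python sketch is illustrative only; the stage functions are hypothetical stand-ins for the ML model inference performed at each AI/ML endpoint. It shows the sequential organization of the pipeline, where each stage conducts inference on the output of the previous stage.

    import numpy as np

    def stage_one(x):
        # First AI/ML endpoint: inference over the early part of the model
        return np.maximum(x @ np.ones((4, 3)), 0.0)

    def stage_two(h):
        # Second AI/ML endpoint: inference on the previous stage's output
        return h @ np.ones(3)

    def split_inference(x, stages):
        for stage in stages:
            x = stage(x)                         # output of one stage feeds the next
        return x

    # Usage: split_inference(np.ones(4), [stage_one, stage_two])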
For the purposes of the present document, the abbreviations given in TR 21.905 and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905.
FL	Federated Learning
VFL	Vertical FL