
Content for  TS 23.482  Word version:  19.0.0


0  Introduction

Data analytics is a useful tool that helps the operator optimize the service offering by predicting events related to network, slice, or UE conditions. 3GPP introduced the Network Data Analytics Function (NWDAF) [2] to support network data analytics services in the 5G Core network, the Management Data Analytics Service (MDAS) [3] to provide data analytics at the OAM, and the Application Data Analytics Enablement Service (ADAES) [4] to provide data analytics at the application enablement layer.
In this direction, support for AI/ML services in the 3GPP core network has been studied, both for providing AI/ML-enabled analytics in NWDAF and for assisting the ASP/3rd party AI/ML application service provider with AI/ML model distribution, transfer, and training for various applications (e.g., video/speech recognition, robot control, automotive).
Considering vertical-specific applications and edge applications as the major consumers of 3GPP-provided data analytics services, the AIML enablement (AIMLE) service plays a role in exposing AI/ML services from different 3GPP domains to the vertical/ASP in a unified manner on top of the 3GPP core network and OAM, and in defining, at the SEAL layer, value-added support services that assist AI/ML services provided by the VAL layer, while remaining complementary to AI/ML support solutions provided in other 3GPP domains.
This technical specification provides the architecture and procedures for enabling the AIMLE service over 3GPP networks.

1  Scope

The present document specifies the procedures and information flows necessary for the AIML Enablement (AIMLE) SEAL service.

2  References

The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
  • References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
  • For a specific reference, subsequent revisions do not apply.
  • For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1]  TR 21.905: "Vocabulary for 3GPP Specifications".
[2]  TS 23.288: "Architecture enhancements for 5G System (5GS) to support network data analytics services".
[3]  TS 28.104: "Management and orchestration; Management Data Analytics".
[4]  TS 23.436: "Functional architecture and information flows for Application Data Analytics Enablement Service".
[5]  TS 23.434: "Service Enabler Architecture Layer for Verticals (SEAL); Functional architecture and information flows".
[6]  TS 23.501: "System Architecture for the 5G System; Stage 2".
[7]  TS 23.401: "General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access".
[8]  TS 28.105: "Management and orchestration; Artificial Intelligence/Machine Learning (AI/ML) management".
[9]  TS 23.502: "Procedures for the 5G System (5GS)".
[10]  TS 22.261: "Service requirements for the 5G system".
[11]  TS 26.531: "Data Collection and Reporting; General Description and Architecture".
[12]  TS 23.558: "Architecture for enabling Edge Applications".
[13]  TS 23.273: "5G System (5GS) Location Services (LCS); Stage 2".

3  Definitions of terms, symbols and abbreviations

3.1  Terms

For the purposes of the present document, the terms given in TR 21.905 and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905.
ML model:
According to TS 28.105, mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
ML model lifecycle:
The lifecycle of an ML model (also known as the ML model operational workflow) consists of a sequence of ML operations for a given ML task or job (such a job can be an analytics task or a VAL automation task). This definition is aligned with the 3GPP definition of the ML model lifecycle in TS 28.105.
ML model training:
According to TS 28.105, ML model training includes capabilities of an ML training function or service to take data, run it through an ML model, derive the associated loss and adjust the parameterization of that ML model based on the computed loss.
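The training capability described above (take data, run it through the model, derive the loss, and adjust the model's parameterization) can be sketched as a simple gradient-descent loop. This is an illustrative toy example, not a procedure defined by the specification; the linear model, loss, and all names are assumptions.

```python
# Illustrative sketch of ML model training as characterized in
# TS 28.105: run data through a model, compute the loss, and adjust
# the model's parameters based on that loss. All names are
# illustrative; the specification does not define this code.

def predict(weight: float, bias: float, x: float) -> float:
    """Toy linear model standing in for an arbitrary ML model."""
    return weight * x + bias

def train_step(weight, bias, data, lr=0.02):
    """One training round: mean squared-error loss over the data,
    followed by a gradient-descent update of (weight, bias)."""
    grad_w = grad_b = loss = 0.0
    for x, y in data:
        err = predict(weight, bias, x) - y
        loss += err * err
        grad_w += 2 * err * x
        grad_b += 2 * err
    n = len(data)
    return weight - lr * grad_w / n, bias - lr * grad_b / n, loss / n

# Fit y = 2x: the loss decreases over successive training rounds.
data = [(x, 2.0 * x) for x in range(1, 5)]
w, b, losses = 0.0, 0.0, []
for _ in range(50):
    w, b, loss = train_step(w, b, data)
    losses.append(loss)
```

The loop is the essence of the definition: each round turns the computed loss into a parameter adjustment.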
ML model inference:
According to TS 28.105, ML model inference includes capabilities of an ML model inference function that employs an ML model and/or AI decision entity to conduct inference.
AI/ML intermediate model:
In federated learning, members train models over multiple rounds; an intermediate model is a model that has not yet completed the required number of training rounds and/or does not yet meet the requirements of the federated training.
AIMLE service:
An AIMLE service is an AIMLE capability which aims at assisting in performing or enabling one or more AI/ML operations.
AI/ML client:
an application layer entity (also referred to as ML client) which is an AI/ML endpoint and performs client-side operations (e.g. related to the ML model lifecycle). Such an AI/ML client can be a VAL client or an AIML enabler client and may be configured, e.g., to provide ML model training and inference locally, e.g., at the VAL UE side.
AIMLE client set identifier:
an identifier of the set of selected AIMLE clients.
AI/ML server:
an application layer entity which is an AI/ML endpoint and performs server-side operations (e.g. related to the ML model lifecycle). Such an AI/ML server can be a VAL server or an AIML enablement server.
FL member:
An FL member or participant is an entity which has a role in the FL process. An FL member can be an FL client performing ML model training, or an FL server performing aggregation/collaboration for the FL process.
FL client:
An FL member which locally trains the ML model as requested by the FL server. Such FL client functionality can be at the network (e.g. AIMLE server with FL client capability) or at the device side (e.g. AIMLE client with FL client capability).
FL server:
An FL member which generates the global ML model by aggregating local model information from FL clients.
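The FL client and FL server roles defined above can be sketched as one federated learning round: each client trains locally, and the server aggregates the local models into a global model. Weighted averaging (FedAvg-style) is shown as one common aggregation scheme; the specification does not mandate it, and the local-training rule and all names here are illustrative assumptions.

```python
# Minimal sketch of an FL round: FL clients train locally and the
# FL server aggregates local model information into a global model.
# Weighted averaging is illustrative; TS 23.482 does not mandate a
# specific aggregation algorithm.

def local_training(global_model, local_data):
    """FL client: nudge each parameter toward the local data mean
    (a stand-in for a real local training round)."""
    mean = sum(local_data) / len(local_data)
    return [p + 0.5 * (mean - p) for p in global_model]

def aggregate(local_models, sample_counts):
    """FL server: average the local model parameters, weighted by
    each client's number of training samples."""
    total = sum(sample_counts)
    return [
        sum(m[i] * n for m, n in zip(local_models, sample_counts)) / total
        for i in range(len(local_models[0]))
    ]

global_model = [0.0]
client_data = [[1.0, 1.0], [3.0, 3.0, 3.0, 3.0]]  # two FL clients

for _ in range(3):  # three FL rounds
    local_models = [local_training(global_model, d) for d in client_data]
    global_model = aggregate(local_models, [len(d) for d in client_data])
```

The local models produced before the final round correspond to the intermediate models defined earlier: they have not yet met the required number of training rounds.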
Split AI/ML operation pipeline:
A Split AI/ML operation pipeline is a workflow for ML model inference in which AI/ML endpoints are organized and collaborate to process ML models in sequential stages, where processing at each stage involves ML model inference on the output of the previous stage.
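The staged workflow in this definition can be sketched as a chain of inference functions, each consuming the previous stage's output. The device/server split shown is only one possible arrangement, and the stage functions are placeholders, not anything defined by the specification.

```python
# Sketch of a split AI/ML operation pipeline: AI/ML endpoints are
# organized in sequential stages, and each stage performs ML model
# inference on the output of the previous stage. The "models" are
# placeholder functions; names are illustrative only.

def stage_on_device(x):
    """First split part, e.g. feature extraction at an AI/ML client."""
    return [v * 2 for v in x]

def stage_on_server(features):
    """Second split part, e.g. remaining layers at an AI/ML server."""
    return sum(features)

def run_pipeline(x, stages):
    """Feed each stage's inference output into the next stage."""
    out = x
    for stage in stages:
        out = stage(out)
    return out

result = run_pipeline([1, 2, 3], [stage_on_device, stage_on_server])
```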

3.2  Symbols

For the purposes of the present document, the following symbols apply:

3.3  Abbreviations

For the purposes of the present document, the abbreviations given in TR 21.905 and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905.
FL  Federated Learning
VFL  Vertical FL

4  Architectural requirements

4.1  General requirements

[AR-4.1-a]
The AI/ML enablement layer shall be able to support one or more VAL applications.
[AR-4.1-b]
Supported AI/ML enablement capabilities shall be offered as APIs to the VAL applications.
[AR-4.1-c]
The AIML enablement layer shall support interaction with the 3GPP network system to consume network and AI/ML support services.
[AR-4.1-d]
The AIMLE client shall be capable of communicating with one or more AIMLE servers of the same AIMLE service provider.

4.2  AIML capability related requirements

[AR-4.2-a]
The AIMLE server shall be capable of provisioning and exposing ML client information.
[AR-4.2-b]
The AIMLE server shall be capable of supporting the registration, discovery, and selection of AIMLE clients which participate as ML members in the AI/ML service lifecycle.
[AR-4.2-c]
The AIMLE layer shall be capable of supporting ML service lifecycle operations (e.g., ML model training).
[AR-4.2-d]
The AIMLE server shall be capable of supporting discovery and provisioning of AIML models.
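The registration, discovery, and selection capability required by [AR-4.2-b] can be illustrated with a small in-memory sketch. The data model and method names below are hypothetical; TS 23.482 defines the actual procedures and information flows, not this API.

```python
# Hypothetical sketch of the AIMLE server capability in [AR-4.2-b]:
# registration, discovery, and selection of AIMLE clients that can
# participate in an AI/ML service lifecycle. All names and structures
# are illustrative assumptions, not defined by the specification.

from dataclasses import dataclass

@dataclass
class AIMLEClientProfile:
    client_id: str
    capabilities: set          # e.g. {"ml-training", "ml-inference"}
    available: bool = True

class AIMLEServer:
    def __init__(self):
        self._registry: dict[str, AIMLEClientProfile] = {}

    def register(self, profile: AIMLEClientProfile) -> None:
        """AIMLE client registration."""
        self._registry[profile.client_id] = profile

    def discover(self, required_capability: str) -> list[str]:
        """Discovery: available clients offering the capability."""
        return [c.client_id for c in self._registry.values()
                if required_capability in c.capabilities and c.available]

    def select(self, required_capability: str, count: int) -> list[str]:
        """Selection of a set of AIMLE clients (e.g. as FL members)."""
        return self.discover(required_capability)[:count]

server = AIMLEServer()
server.register(AIMLEClientProfile("ue-1", {"ml-training"}))
server.register(AIMLEClientProfile("ue-2", {"ml-training", "ml-inference"}))
server.register(AIMLEClientProfile("ue-3", {"ml-inference"}))
members = server.select("ml-training", count=2)
```

The selected set of clients is what the AIMLE client set identifier defined in clause 3.1 would refer to.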
