
Content for  TR 23.700-82  Word version:  19.1.0


0  Introduction

Data analytics is a useful tool for operators, helping to optimize the service offering by predicting events related to network, slice or UE conditions. 3GPP introduced the Network Data Analytics Function (NWDAF) [2] to support network data analytics services in the 5G Core network, the Management Data Analytics Service (MDAS) [3] to provide data analytics at the OAM, and the Application Data Analytics Enablement Service (ADAES) [4] at the application enablement layer.
In this direction, the support for AI/ML services in the 3GPP system has been studied both for providing AI/ML-enabled analytics in NWDAF and for assisting the ASP/3rd party AI/ML application service provider with AI/ML model distribution, transfer and training for various applications (e.g. video/speech recognition, robot control, automotive).
Considering vertical-specific applications and edge applications as the major consumers of 3GPP-provided data analytics services, the application enablement layer can play a role in exposing AI/ML services from different 3GPP domains to the vertical/ASP in a unified manner, and in defining, at an overarching layer, value-added support services that assist AI/ML services provided by either the VAL layer or the application enablement layer (e.g. enhancing the SEAL ADAES services).
This technical report identifies the key issues, the corresponding application architecture and related solutions, together with recommendations for normative work.

1  Scope

The present document is a technical report which identifies the application enabling layer architecture, capabilities, and services to support AI/ML services at the application layer.
The aspects of the study include the investigation of application enablement impacts, the application enablement layer architecture enhancements and solutions needed to provide assistance in AI/ML operations (model distribution, transfer and training) at the VAL layer as well as at the application enablement layer (e.g., SEAL ADAES, EDGEAPP).
The study takes into consideration the work done for AI/ML in TS 23.288, the Rel-18 AIMLsys work (TS 23.501 and TS 23.502) and TS 28.104, and may consider other related work outside 3GPP.

2  References

The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
  • References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
  • For a specific reference, subsequent revisions do not apply.
  • For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1]  TR 21.905: "Vocabulary for 3GPP Specifications".
[2]  TS 23.288: "Architecture enhancements for 5G System (5GS) to support network data analytics services".
[3]  TS 28.104: "Management and orchestration; Management Data Analytics".
[4]  TS 23.436: "Functional architecture and information flows for Application Data Analytics Enablement Service".
[5]  TS 23.501: "System Architecture for the 5G System; Stage 2".
[6]  TS 23.502: "Procedures for the 5G System (5GS)".
[7]  TS 23.434: "Service Enabler Architecture Layer for Verticals (SEAL); Functional architecture and information flows".
[8]  TS 23.401: "General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access".
[9]  TS 28.105: "Management and orchestration; Artificial Intelligence/Machine Learning (AI/ML) management".
[10]  TS 22.261: "Service requirements for the 5G system".
[11]  TR 26.927: "Study on Artificial Intelligence and Machine learning in 5G media services".
[12]  TR 22.874: "Study on traffic characteristics and performance requirements for AI/ML model transfer".
[13]  TS 26.531: "Data Collection and Reporting; General Description and Architecture".
[14]  TS 23.558: "Architecture for enabling Edge Applications".

3  Definitions of terms, symbols and abbreviations

3.1  Terms

For the purposes of the present document, the terms given in TR 21.905 and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905.
ML model:
According to TS 28.105, a mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
ML model lifecycle:
The lifecycle of an ML model includes data collection, data processing, model training, model verification, model instantiation and deployment, model monitoring, and termination of ML model components.
ML model training:
According to TS 28.105, ML model training includes capabilities of an ML training function or service to take data, run it through an ML model, derive the associated loss and adjust the parameterization of that ML model based on the computed loss.
ML model inference:
According to TS 28.105, ML model inference includes capabilities of an ML model inference function that employs an ML model and/or AI decision entity to conduct inference.
AI/ML enablement:
An application enablement framework consisting of one or more AI/ML enabler capabilities, based on implementation. Such a function can be deployed as (or within) an enablement layer server (e.g. SEAL or ADAES) or client.
AI/ML client:
An application layer entity (also referred to as ML client) which is an AI/ML endpoint and performs client-side operations (e.g. related to the ML model lifecycle). Such an AI/ML client can be a VAL client or an AIML enabler client and may be configured, e.g., to provide ML model training and inference locally at the VAL UE side.
AI/ML server:
An application layer entity which is an AI/ML endpoint and performs server-side operations (e.g. related to the ML model lifecycle). Such an AI/ML server can be a VAL server or an AIML enabler server.
FL member:
An FL member or participant is an entity which has a role in the FL process. An FL member can be an FL client performing ML model training, or an FL server performing aggregation/collaboration for the FL process. The FL member in this study is assumed to be either a functionality at the VAL UE or at the network/server side (AIML enablement server, VAL server).
FL client:
An FL member which locally trains the ML model as requested by the FL server.
FL server:
An FL member which generates the global ML model by aggregating local model information from FL clients.
VAL client:
According to TS 23.434, VAL client is an entity that provides the client side functionalities corresponding to the vertical applications.
VAL server:
According to TS 23.434, VAL server is a generic name for the server application function of a specific VAL service.
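The FL client and FL server roles defined above can be illustrated with a minimal federated averaging sketch. This is illustrative only and not defined by the present document; the function names, the linear model and the training data are hypothetical.

```python
import numpy as np

# Illustrative only: one federated-learning workflow showing the FL client
# role (local ML model training) and the FL server role (aggregating local
# model information into a global ML model). Not 3GPP-defined behaviour.

def fl_client_local_update(global_model, local_data, lr=0.1):
    """FL client: locally train the ML model received from the FL server
    (here, one gradient step of least-squares linear regression)."""
    X, y = local_data
    grad = 2 * X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def fl_server_aggregate(local_models, weights):
    """FL server: generate the global ML model by computing a weighted
    average of the local models reported by the FL clients."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Two FL clients (e.g. functionalities at VAL UEs) with local datasets.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

def make_local_data(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w

clients = [make_local_data(50), make_local_data(30)]

# FL rounds: distribute global model, train locally, aggregate.
global_model = np.zeros(2)
for _ in range(100):
    local = [fl_client_local_update(global_model, d) for d in clients]
    global_model = fl_server_aggregate(local, [len(d[1]) for d in clients])
```

Weighting the aggregation by local dataset size mirrors the common federated averaging choice; other weightings are possible.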

3.2  Symbols

For the purposes of the present document, the following symbols apply:

3.3  Abbreviations

For the purposes of the present document, the abbreviations given in TR 21.905 and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905.
ADAES
Application Data Analytics Enablement Server
A-ADRF
Application layer - Analytical Data Repository Function
A-DCCF
Application layer - Data Collection and Coordination Function
AIMLE
AI/ML enablement
AnLF
Analytics logical function
ASP
Application Service Provider
MDAS
Management Data Analytics Service
MFAF
Messaging Framework Adaptor Function
MTLF
Model Training logical function
MTME
ML Model Training and Management Entity
MLT
ML training Function
NEF
Network Exposure Function
NWDAF
Network Data Analytics Function
OAM
Operation, Administration and Maintenance
SEAL
Service Enabler Architecture Layer
SEALDD
SEAL Data Delivery
VFL
Vertical Federated Learning
VAL
Vertical Application Layer

4  Analysis of AI/ML support in 3GPP

4.1  AI/ML related use cases and requirements in 3GPP

4.1.1  Description

3GPP SA1 has identified use cases (TR 22.874) and requirements (in TS 22.261) for the support of AI/ML model distribution, transfer and training for various applications (e.g. video/speech recognition, robot control, automotive), also covering support for distributed AI training/inference based on direct device connection.
The use cases provided in TS 22.261 cover:
  • AI/ML operation splitting between AI/ML endpoints: the AI/ML operation/model is split into multiple parts according to the current task and environment.
  • AI/ML model/data distribution and sharing over the 5G system: multi-functional mobile terminals might need to switch the AI/ML model in response to task and environment variations; hence, the 5GS needs to support distribution and sharing of the model.
  • Distributed/Federated Learning over the 5G system: the cloud server trains a global model by aggregating local models partially trained by each end device.
Based on these use cases, there are Stage 1 requirements which are relevant for application enablement.

4.1.2  Analysis

Table 4.1.2-1 lists the 3GPP-specified AI/ML service requirements which may have impact on the application enablement layer. The requirements are grouped by functional areas.
Table 4.1.2-1 (each entry lists the Stage 1 requirement, followed by its application enablement layer relevance):

1  Direct Network Connection
1.1  Requirement: Based on operator policy, the 5G system shall be able to provide means to predict and expose predicted network condition changes (i.e. bitrate, latency, reliability) per UE to an authorized third party.
     Relevance: The exposure of predicted network condition changes may require enhancements of the enablement layer (which may consume NEF/OAM services and provide abstraction on top).
1.2  Requirement: Subject to user consent, operator policy and regulatory constraints, the 5G system shall be able to support a mechanism to expose monitoring and status information of an AI-ML session to a 3rd party AI/ML application.
     Relevance: The AI-ML session is an application layer session between two or more AI/ML endpoints. Such monitoring/exposure has impact on the application enablement layer.
2  Direct Device Connection
2.1  Requirement: Based on user consent, operator policy and trusted 3rd party request, the 5G system shall be able to dynamically add or remove specific UEs to/from the same service (e.g. an AI-ML federated learning task) when communicating via direct device connection.
     Relevance: The 5GS support to add or remove group members from the same "application service" is a task within AIML enablement scope, since the ML/FL members of the group are application layer entities (e.g. the application client at the VAL UE side).
2.2  Requirement: Based on user consent, operator policy and trusted 3rd party request, the 5G system shall be able to support means to monitor the QoS characteristics (e.g. data rate, latency) of traffic transmitted via direct device connection or relayed by a UE, and the 5G network shall expose the monitored information to the 3rd party.
     Relevance: The AI-ML session is an application layer session between two or more AI/ML endpoints (VAL UEs in direct connection). QoS monitoring, prediction, negotiation and exposure to the 3rd party has impact on the application enablement layer, since it involves monitoring/predicting and exposing application or e2e QoS parameters for the AI-ML sessions.
2.3  Requirement: Subject to user consent, operator policy and trusted 3rd party request, the 5G system shall be able to provide means for the network to predict and expose QoS information changes for UEs' traffic using direct or indirect network connection (e.g. bitrate, latency, reliability). The 5G system shall be able to support a mechanism for a trusted third party to negotiate with the 5G system a suitable QoS for direct device connections of multiple UEs exchanging data with each other (e.g. a group of UEs using the same AI-ML service).
2.4  Requirement: Subject to user consent, regulation, trusted 3rd party's request and operator policy, the 5G network shall be able to expose information to assist the 3rd party to determine candidate UEs for data transmission via direct device connection (e.g. for AI/ML model transfer for a specific application).
     Relevance: The assistance of the 5GS to the 3rd party (ASP/vertical) to determine candidate UEs lies within SA6 scope, since the ML/FL member UEs are application layer entities (e.g. the application client at the VAL UE side).
2.5  Requirement: Subject to user consent, operator policy, regulation and trusted 3rd party's request, the 5G network shall be able to expose information of certain UEs using the same service to the 3rd party (e.g. to assist a joint AIML task of UEs in a specific area using direct device communication).
     Relevance: The assistance of the 5GS to the 3rd party (ASP/vertical) to expose information of members of a certain service may be within SA6 scope, for the case where the ML/FL members of the group are application layer entities (e.g. the application client at the VAL UE side).

4.2  AI/ML related stage 2 work in 3GPP

4.2.1  Description

4.2.1.1  3GPP SA2

3GPP SA2 has investigated AI/ML support since Rel-16, focusing on two main aspects.
1) AI-ML support in NWDAF
Network analytics and AI/ML support are deployed in the 5G core network via the introduction of NWDAF, which supports various analytics types that can be distinguished using different Analytics IDs, e.g. "UE Mobility", "NF Load", etc., as elaborated in TS 23.288. Each NWDAF may support one or more Analytics IDs and may have the role of: (i) AI/ML inference, called NWDAF AnLF; (ii) AI/ML training, called NWDAF MTLF; or (iii) both.
(Figure 4.2.1.1-1: NWDAF architecture for analytics generation based on trained models)
Figure 4.2.1.1-1 illustrates the various NWDAF deployment flavours, their respective input data sources and the potential consumers of the analytics output results. Specifically, the NWDAF relies on various sources of input data, including data from 5G core NFs, AFs, 5G core repositories (e.g. NRF, UDM) and OAM data, including performance measurements (PMs), KPIs, configuration management data and alarms. An NWDAF may in turn provide analytics output results to 5G core NFs, AFs and the OAM. Optionally, a DCCF and MFAF may be involved to coordinate the distribution and collection of repeated data towards or from the various data sources.
For FL use cases, 3GPP defined in Release 18 federated learning amongst different NWDAF MTLFs, where ML model training runs in multiple local MTLFs.
2) Network assistance for AI-ML services
As part of AIMLSys, enhancements to the 5GC (as specified in clause 5.46 of TS 23.501) assist AI/ML operations in the application layer (between one or more AI/ML users and an AI/ML server). In this direction, the NEF may assist the AI/ML application server in scheduling available UE(s) to participate in the AI/ML operation (e.g. Federated Learning). Also, the 5GC may assist the selection of UEs to serve as FL clients: a list of target member UE(s) is provided, with a subscription to the NEF to be notified about the subset of UE(s) (i.e. the list of candidate UE(s)) that fulfil certain filtering criteria.
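The candidate UE selection described above amounts to filtering a target member UE list against criteria. The sketch below is illustrative only; the criteria names, UE attributes and values are hypothetical and not 3GPP-defined.

```python
# Illustrative only: deriving the subset of candidate UE(s) from a target
# member UE list using filtering criteria (mirroring the NEF-assisted FL
# client selection described above). Attribute and criteria names are
# hypothetical, not 3GPP-defined.

def filter_candidate_ues(target_ues, criteria):
    """Return the IDs of the target member UEs fulfilling all filtering
    criteria (e.g. minimum data rate, minimum battery level, availability)."""
    def fulfils(ue):
        return (ue["data_rate_mbps"] >= criteria["min_data_rate_mbps"]
                and ue["battery_pct"] >= criteria["min_battery_pct"]
                and ue["available"])
    return [ue["id"] for ue in target_ues if fulfils(ue)]

# Hypothetical target member UE list reported for an FL task.
target_ues = [
    {"id": "ue-1", "data_rate_mbps": 50, "battery_pct": 80, "available": True},
    {"id": "ue-2", "data_rate_mbps": 5,  "battery_pct": 90, "available": True},
    {"id": "ue-3", "data_rate_mbps": 40, "battery_pct": 20, "available": True},
    {"id": "ue-4", "data_rate_mbps": 60, "battery_pct": 70, "available": False},
]
criteria = {"min_data_rate_mbps": 10, "min_battery_pct": 30}

candidates = filter_candidate_ues(target_ues, criteria)  # ['ue-1']
```

In a deployment, such filtering would be driven by notifications rather than a static list, with the candidate subset re-evaluated as UE status changes.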

4.2.1.2  3GPP SA4

SA4 has an ongoing study on AI and ML for media (TR 26.927 [11]), which aims to identify relevant interoperability requirements and implementation constraints of AI/ML in 5G media services, and includes media-based AI/ML use cases and related architecture considerations.

4.2.1.3  3GPP SA5

3GPP SA5 has, in Rel-17, provided work on AI/ML management (TS 28.105) focusing on AI/ML capabilities (e.g. ML training MnS/MF) at the OAM side. TS 28.105 specifies the AI/ML management capabilities and services for a 5GS where AI/ML is used, including in management and orchestration (e.g. MDA, see TS 28.104). In this work, an ML training Function (MLT), playing the role of ML training MnS producer, is introduced; it may consume various data from the 5GS for ML training purposes and provides the training outputs to other management functions in the OAM.

4.2.2  Analysis

Both 3GPP SA2 and 3GPP SA5 have investigated the use of AI/ML, either for enhancing core and management domain operations or for providing assistance to 3rd party AI/ML services. Current solutions from other WGs do not cover:
  • Support for application layer aspects on top of 5GC/OAM, where the ML models and the ML/FL members are application layer entities at the server or UE side.
  • Support for vertical use cases, by providing support capabilities for the application-layer AI/ML endpoints in the vertical/enterprise domains.
  • Support for AI/ML operations where the ML entities belong to different domains (e.g. edge/cloud, vertical premises) and networks (PLMNs/NPNs).
  • Support for cases where the VAL UE plays a role in the AI/ML model lifecycle/workflow. For example, the VAL UE can be selected as an FL client, which is not considered in other 3GPP working groups.
  • Support for AIML application service continuity (e.g. undisrupted FL service continuity) and adaptation to dynamic changes (e2e QoS downgrades, application mobility, ML member availability changes).
  • Support for ML model splitting and distribution.
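The ML model splitting noted in the last bullet can be sketched as split inference: early layers run at one AI/ML endpoint (e.g. the UE side) and the intermediate activation is passed to another endpoint (e.g. the server side) that runs the remaining layers. This is a minimal illustrative sketch; the model, layer sizes and split point are arbitrary assumptions.

```python
import numpy as np

# Illustrative only: AI/ML operation splitting between two AI/ML endpoints.
# A small ReLU MLP is split: the device-side endpoint runs the first layers,
# the server-side endpoint runs the rest. Layer sizes/split point are arbitrary.

rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 16)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(16, 4))]

def forward(x, layer_weights):
    """Run a sequence of ReLU MLP layers."""
    for W in layer_weights:
        x = np.maximum(x @ W, 0.0)
    return x

def device_part(x, split_point=1):
    """Device-side endpoint: run layers up to the split point and produce
    the intermediate activation to be transferred."""
    return forward(x, layers[:split_point])

def server_part(activation, split_point=1):
    """Server-side endpoint: run the remaining layers on the received
    intermediate activation."""
    return forward(activation, layers[split_point:])

x = rng.normal(size=(1, 8))
split_out = server_part(device_part(x))   # split execution
full_out = forward(x, layers)             # same model, unsplit
assert np.allclose(split_out, full_out)   # the split preserves the output
```

The choice of split point trades device compute against the size of the intermediate activation to transfer, which is exactly the kind of decision an enablement layer could assist with.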