Contents of TS 28.105 (Word version: 18.4.0)
6  AI/ML management use cases and requirements
6.1  ML model lifecycle management capabilities
6.2  Void
6.2a  Void
6.2b  ML model training |R18|
6.2b.1  Description
6.2b.2  Use cases
6.2b.2.1  ML model training requested by consumer
6.2b.2.2  ML model training initiated by producer
6.2b.2.3  ML model selection
6.2b.2.4  Managing ML model training processes
6.2b.2.5  Handling errors in data and ML decisions
6.2b.2.6  ML model joint training
6.2b.2.7  ML model validation performance reporting
6.2b.2.8  Training data effectiveness reporting
6.2b.2.9  Performance management for ML model training
6.2b.2.9.1  Overview
6.2b.2.9.2  Performance indicator selection for ML model training
6.2b.2.9.3  ML model performance indicators query and selection for ML model training
6.2b.2.9.4  MnS consumer policy-based selection of ML model performance indicators for ML model training
6.2b.3  Requirements for ML model training
6.2c  ML model testing |R18|
6.2c.1  Description
6.2c.2  Use cases
6.2c.2.1  Consumer-requested ML model testing
6.2c.2.2  Producer-initiated ML model testing
6.2c.2.3  Joint testing of multiple ML models
6.2c.2.4  Performance management for ML model testing
6.2c.2.4.1  Overview
6.2c.2.4.2  Performance indicator selection for ML model testing
6.2c.2.4.3  ML model performance indicators query and selection for ML model testing
6.2c.2.4.4  MnS consumer policy-based selection of ML model performance indicators for ML model testing
6.2c.3  Requirements for ML model testing
6.3  AI/ML inference emulation |R18|
6.3.1  Description
6.3.2  Use cases
6.3.2.1  AI/ML inference emulation
6.3.3  Requirements for Managing AI/ML inference emulation
6.4  ML model deployment |R18|
6.4.1  ML model loading
6.4.1.1  Description
6.4.1.2  Use cases
6.4.1.2.1  Consumer requested ML model loading
6.4.1.2.2  Control of producer-initiated ML model loading
6.4.1.2.3  ML model registration
6.4.1.3  Requirements for ML model loading
6.5  AI/ML inference |R18|
6.5.1  AI/ML inference performance management
6.5.1.1  Description
6.5.1.2  Use cases
6.5.1.2.1  AI/ML inference performance evaluation
6.5.1.2.2  AI/ML performance measurements selection based on MnS consumer policy
6.5.1.3  Requirements for AI/ML inference performance management
6.5.2  AI/ML update control
6.5.2.1  Description
6.5.2.2  Use cases
6.5.2.2.1  Availability of new capabilities or ML models
6.5.2.2.2  Triggering ML model update
6.5.2.3  Requirements for AI/ML update control
6.5.3  AI/ML inference capabilities management
6.5.3.1  Description
6.5.3.2  Use cases
6.5.3.2.1  Identifying capabilities of ML models
6.5.3.2.2  Mapping of the capabilities of ML models
6.5.3.3  Requirements for AI/ML inference capabilities management
6.5.4  AI/ML inference capability configuration management
6.5.4.1  Description
6.5.4.2  Use cases
6.5.4.2.1  Managing NG-RAN AI/ML-based distributed Network Energy Saving
6.5.4.2.2  Managing NG-RAN AI/ML-based distributed Mobility Optimization
6.5.4.2.3  Managing NG-RAN AI/ML-based distributed Load Balancing
6.5.4.3  Requirements for AI/ML inference management
6.5.5  Executing AI/ML Inference
6.5.5.1  Description
6.5.5.2  Use cases
6.5.5.2.1  AI/ML Inference History - tracking inferences and context
6.5.5.3  Requirements for Executing AI/ML Inference