This use case is illustrated with a specific example but is readily extensible to more advanced scenarios involving disaster response, rescue operations or even autonomous vehicle operations. The focus of this scenario is on machine-to-machine AI/ML transfer learning and inference systems that are networked together using 5G networks to provide smart automotive applications and services.
The scenario assumes that multiple AI/ML systems are available in a high-reliability, low-latency, high-bandwidth network optimized for machine-to-machine interaction. The proposed AI/ML systems may continuously exchange and share AI-ML model layers in a distributed and/or federated manner, as determined by the system in response to changing events, conditions or emergency situations, in order to improve the prediction accuracy of some or all of the ML systems. These systems may also optimize AI/ML inference latency by executing different layers on different networked AI/ML systems.
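As an illustration of executing different layers on different networked systems, the following is a minimal sketch of split inference. The toy feed-forward model, the layer shapes and the split point are assumptions made for illustration only, and the "edge" part is simulated by a local function call rather than an actual transfer over the 5G network.

```python
# Minimal sketch of split AI/ML inference across networked nodes (illustrative
# only): the early layers of a toy feed-forward model run on the device and
# the remaining layers run on an edge/cloud node.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each weight matrix is one "layer" (matmul + ReLU).
layers = [rng.standard_normal((16, 32)),
          rng.standard_normal((32, 32)),
          rng.standard_normal((32, 8))]

def run_layers(x, weights):
    """Run a contiguous block of layers on one node."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

def split_inference(x, split_point):
    """Execute layers [0, split_point) locally and the rest remotely.

    Here the 'remote' part is just another local call; in the use-case it
    would involve transferring the intermediate activation over the 5G network.
    """
    intermediate = run_layers(x, layers[:split_point])     # on-device layers
    return run_layers(intermediate, layers[split_point:])  # edge/cloud layers

print(split_inference(rng.standard_normal(16), split_point=1).shape)  # (8,)
```

In practice the split point would be chosen per device and per network condition, balancing on-device computation against the latency and size of the intermediate data transfer.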
There are two main types of ML models and model processing considered in this use-case:
- Large ML model updates using non-real-time training - These ML models are trained and optimized with millions or billions of parameters using extensive computing resources to achieve the highest accuracy possible based on specific sets of input training data. This training is performed over a long period of time, and the resulting fully trained ML model is the baseline model installed at initial production for the devices in the use-cases described below. These fully trained models may be updated with externally provided AI-ML model data and may also improve themselves based on external sensor data.
- Partial transfer and exchange of AI-ML model data - This covers applications in which different types of ML systems are networked to exchange parts of their AI-ML model data to improve prediction accuracy (a minimal sketch of such a partial exchange follows this list). In some systems, local ML models continuously improve their baseline model using data gathered from the surrounding environment and other sensor input. These systems upload improved model data at relatively slow rates to a larger cloud-based network, where further processing takes place to further refine the full ML model. The specific method used for continuous improvement is outside the scope of this use case.
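A minimal sketch of such a partial AI-ML model data exchange is given below. It assumes a very small two-layer parameter dictionary and a federated-averaging-style merge applied only to the exchanged subset of layers; the layer names, shapes and number of participating systems are illustrative placeholders rather than values from this use-case.

```python
# Minimal sketch of partial AI-ML model data exchange (illustrative only):
# each system shares parameter deltas for a named subset of layers, and a
# central node averages the received deltas into the shared baseline model.
import numpy as np

def extract_update(local_model, baseline, layer_names):
    """Keep only the parameter deltas for the layers selected for exchange."""
    return {name: local_model[name] - baseline[name] for name in layer_names}

def aggregate(baseline, updates, layer_names):
    """Apply the averaged deltas to the baseline; other layers stay unchanged."""
    merged = dict(baseline)
    for name in layer_names:
        merged[name] = baseline[name] + np.mean([u[name] for u in updates], axis=0)
    return merged

rng = np.random.default_rng(1)
baseline = {"backbone": rng.standard_normal((8, 8)),      # not exchanged
            "classifier": rng.standard_normal((8, 4))}    # exchanged subset
# Two field systems drift from the baseline through local improvement.
local_models = [{k: v + 0.01 * rng.standard_normal(v.shape)
                 for k, v in baseline.items()} for _ in range(2)]

shared_layers = ["classifier"]   # only a fraction of the model is transmitted
updates = [extract_update(m, baseline, shared_layers) for m in local_models]
new_baseline = aggregate(baseline, updates, shared_layers)
```

Exchanging only the selected layers, rather than the full model, is what keeps the transmitted volume to a small fraction of the total model size, consistent with the [10%] assumption used later in this use-case.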
Different example scenarios are described below.
AI/ML Systems Emergency Response to Disabled Vehicle
Figure 6.5.1-1 illustrates a vehicle that has broken down and is hidden by a blind curve in the road. This breakdown is considered high-severity for the purposes of illustrating this use-case.
On-board vehicle sensors detect the mechanical failure and speed reduction; these readings, together with GPS coordinates and a variety of other technical parameters, are used by on-board ML inference systems to immediately diagnose the failure and notify surrounding networks of its severity.
Different non-real-time AI models can use the gathered failure data as training input to refine prediction and prevention model updates.
Traffic Surveillance Camera and MEC Metro Traffic Safety System
Surveillance cameras use ML models to analyze image sensor data, detecting and categorizing the type of accident using ML inference with continuous AI-ML model improvement. The processed accident information is uploaded, along with the model updates, to the local near-edge MEC for AI/ML model prediction and algorithm processing to suggest the immediate next course of action. These AI/ML systems suggest an automated warning broadcast. The automated AI/ML systems, relying on trained ML models, predict that the position of the vehicle is especially dangerous due to the hidden blind curve and inform the local Emergency Response Unit dispatch to send help and to set up a traffic bypass earlier on the road. The local near-edge MEC models are capable of continuous model improvement and return this model data to the surveillance camera for further inference accuracy improvement.
The uploading of data, possibly pre-processed by the local AI/ML model, from the surveillance camera to the MEC may happen with low latency, depending on the severity of the accident, while the data returned to update the local model may be delivered on a much slower timeframe.
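The asymmetry between the low-latency upload path and the slower model-update return path could be captured by a simple scheduling policy. The sketch below is illustrative only; the traffic classes and latency budgets are assumptions and are not specified by this use-case.

```python
# Illustrative mapping from traffic class to a delivery latency budget,
# with the slow return path for AI-ML model data treated as background traffic.
LATENCY_BUDGET_S = {
    "high_severity": 0.5,          # critical alarm: low-latency upload to the near-edge MEC
    "medium_severity": 2.0,        # less critical incident information
    "model_update_return": 60.0,   # AI-ML model data returned to the camera (relaxed)
}

def delivery_deadline(traffic_class: str, now_s: float) -> float:
    """Time by which the (possibly pre-processed) data should be delivered."""
    return now_s + LATENCY_BUDGET_S[traffic_class]

print(delivery_deadline("high_severity", now_s=0.0))  # 0.5
```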
Approaching Intelligent Vehicle Reaction
Figure 6.5.1-2 illustrates a vehicle rapidly approaching the car shown in Figure 6.5.1-1. The approaching vehicle's own sensors are unable to detect the hazard, and the vehicle must rely upon other AI/ML systems for a warning.
The approaching autonomous vehicle receives multiple warning notifications, and AI/ML systems in the vehicle use trained models to predict and suggest appropriate automatic manoeuvres that move the vehicle to the far lane and avoid any danger. Sharing data with other systems allows their models to be constantly refined; this sharing of data does not have real-time requirements.
The scenarios described above assume the following pre-conditions:
- All AI/ML systems have pre-trained default AI-ML models that perform predictions at a baseline accuracy and are capable of updating/refining these AI-ML models to continuously improve accuracy for their specific purpose.
Broken Autonomous Vehicle | Approaching Autonomous Vehicle | Road Surveillance Camera | Smart City Metro Traffic Control | Smart City Emergency Services Network
AI/ML vehicle model has detected the fault. All comms systems are available and active. | AI/ML vehicle models are all functioning properly. All comms systems are ready and active. | Camera is powered, active and connected to the network. | Software and comms are running properly. | Software, comms and emergency assets are running properly.
Appropriate AI/ML systems shall provide security and data protection as dictated by law and other appropriate policies.
The service flows described in Figure 6.5.3-1 are all machine-to-machine AI/ML intelligent system interactions designed for the sharing of information, with a range of demands from lowest latency for critical situations (e.g. alarm messages) to relaxed latency for less critical problems. Shared information (possibly including an extensive set of data) can also be exchanged with relaxed latency to support training and updating of the AI/ML model. Each level of service may be owned and operated by a different organization.
This use-case assumes that no more than [10%] of the total ML model size needs to be shared or updated in a single low-latency transmission.
Vehicle to other AI Systems
Examples of AI-ML model data exchange are:
- Vehicle health and inspection monitoring ML models - Local ML models sense and react to crash, fire, temperature and electrical sensor information. These models are continuously improving and optimizing but not necessarily sharing data if the system is nominal. However, once an emergency situation has occurred, emergency information can be sent to the appropriate emergency response services in the Smart City Core network, local cameras and road sensors. Extended information can also be shared with relaxed latency demands to support model updates/training.
- Vehicle sensing for vehicle dynamics under ML model sensing and control - Local ML models sense and react to skids, slides, acceleration and braking. These models are continuously improving and optimizing but not necessarily sharing data if the system is nominal. However, once an emergency situation has occurred, emergency information is immediately broadcast to the appropriate emergency response services and to other systems applying related ML models to inform them. Extended information can also be shared with relaxed latency demands to help improve the systems' prediction accuracy.
Traffic Surveillance Camera to Metro Traffic Control
The camera senses the accident using local AI/ML models and sends a notification to traffic control. Traffic control may use multiple cameras to cover the same incident and thereby improve its inference accuracy. Extended information exchanged with low-latency demands can be used to re-train and redistribute reference AI model layers accordingly.
Examples of AI-ML Model data exchange are:
- Camera and road sensors detect emergency situations - Local camera and road sensor ML models are constantly improving their prediction of emergency situations. Once an emergency situation has occurred, emergency information is shared with the appropriate systems to improve the response for all emergency responders. Extended information can also be shared with relaxed latency demands to help improve the systems' prediction accuracy.
Metro Traffic Control to Metro Emergency Services Network
AI/ML models use automation and prediction to send appropriate EMS vehicles to the accident site.
Examples of AI-ML Model data exchange are:
- Camera, road and weather sensors use ML models - These sensor ML models are continuously improving and optimizing their prediction results. Once an emergency situation is detected, emergency information from these systems can be shared with emergency vehicles to, e.g., improve their routing time through traffic based on the situation.
Stalled Vehicle to Manufacturer
The stalled vehicle sends data (possibly pre-processed, e.g. AI/ML model data) to the automobile manufacturer to help diagnose the problem. The manufacturer uses the information to improve product quality, reliability and performance. Federated and/or Distributed Learning techniques can be used to improve the on-board vehicle AI/ML model.
Stalled Vehicle to Metro Traffic Control (MTC)
The stalled vehicle sends emergency information to the MTC to help warn others. For example, the specific blind-curve location, along with other regional data, is used to improve all MTC responses. Extended information can also be shared with relaxed latency demands to improve MTC response models.
Stalled Vehicle to Repair Service
The stalled vehicle sends data (possibly pre-processed, e.g. AI/ML model data) to the repair service to ensure that the appropriate response vehicles bring the proper tools and equipment. The repair service response uses Federated and/or Distributed Learning techniques for constant model improvement.
Approaching Vehicle communication with Metro Traffic Control
The MTC warns the approaching vehicle of the upcoming danger and, depending on the SAE level of driving automation, the vehicle responds appropriately. Extended information can also be shared with relaxed latency demands to help improve the systems' prediction accuracy.
All systems identified in this scenario use independent, networked AI/ML Distributed or Federated Learning algorithms to aggregate information from multiple sources and improve layers of each system's AI/ML model. Independent AI/ML training systems ensure that improved models are distributed back to each end-system to improve the overall safety and robustness of the next response.
None.
Table 6.5.6-1 provides example ML models (sizes and DL data rates) that are considered for the use-cases above, and Table 6.5.6-2 outlines latency requirements. It is assumed that a maximum of 10% of the (full) model size is exchanged among the participating systems.
DNN model (32 bits per parameter) | Full model (MBytes) | Exchanged data (MBytes) [10% of full model] | Max DL user data rate (Mbit/s)
1.0 MobileNet-224 [54] | 16.8 | 1.68 | 13.4
SSD-ResNet34 [55] | 81 | 8.1 | 64.8
SSD-MobileNet-v1 [56] | 27.3 | 2.73 | 21.8
MASK R-CNN [57] | 245 | 24.5 | ~100
DLRM [58] | 400 | 40.0 | 10
User application | Potential latency requirement (exchanged data - download latency)
Vehicle Detects Fault and Pulls Over to Avoid Accident | ~500 ms - 1 s
Roadside Camera Detects Road Hazard and Warns Smart City Network | ~500 ms - 1 s
On-Coming Traffic Detects Hazard and Avoids Accident | ~500 ms - 1 s
Video recognition | ~500 ms - 1 s
Smart City Detects Accident and Issues Local Warning | Up to a few seconds
Car Manufacturer/Insurance Warned About Fault | Up to a few minutes
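As a consistency check, the data-rate column of Table 6.5.6-1 can be reproduced from the exchanged-data column under the assumption that the required rate equals the exchanged size in bits divided by the download latency budget. In the sketch below, the 1 s budget corresponds to the "~500 ms - 1 s" rows of Table 6.5.6-2, while the budgets used for MASK R-CNN and DLRM are back-calculated assumptions rather than values stated in the tables.

```python
# Consistency check (illustrative): rate (Mbit/s) = exchanged data (MBytes) * 8
# / assumed download latency (s), compared against the Table 6.5.6-1 column.
models = [
    # (DNN model, exchanged MBytes, assumed latency s, table rate Mbit/s)
    ("1.0 MobileNet-224", 1.68, 1.0, 13.4),
    ("SSD-ResNet34",      8.1,  1.0, 64.8),
    ("SSD-MobileNet-v1",  2.73, 1.0, 21.8),
    ("MASK R-CNN",        24.5, 2.0, 100.0),   # ~2 s budget assumed
    ("DLRM",              40.0, 32.0, 10.0),   # ~32 s budget assumed
]

for name, mbytes, latency_s, table_rate in models:
    rate = mbytes * 8 / latency_s
    print(f"{name:18s} computed {rate:6.1f} Mbit/s, table ~{table_rate} Mbit/s")
```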
Requirements:
[P.R.6.5-001]
The 5G system shall be able to support downloading of data with a maximum size of ~2-40 MB to update the local AI/ML model, with a latency of up to 500 ms - 1 s.
[P.R.6.5-002]
The 5G system shall be able to support downloading of data with a maximum size of ~2-40 MB to update the local AI/ML model, with a (user experienced) DL data rate of up to 100 Mbit/s.
[P.R.6.5-003]
The 5G system shall be able to support downloading of data to update the local AI/ML model with communication service availability up to 99.999 %.