TR 22.826, Word version 17.2.0

5  Use cases

5.1  Introduction

Use cases generated as part of this study are categorized as follows:
  • Use cases covering the delivery of critical care in the context of a hospital or a medical facility where the medical team and the patients are collocated. Those use cases are captured in the "static - local" or "moving - local" sections of this document, depending on whether devices or people are moving while the care is delivered. In these use cases, devices and people consume indoor communication services delivered by non-public networks. It is expected that, in this context, 5G non-public networks provide medical equipment with communication services functionally similar to Local Area Networks (5G LAN-type services).
  • Use cases covering the delivery of critical care where medical specialists and patients are located in different places. This covers, in particular, medical services delivered by first responders. Those use cases are captured in the "static - remote" and "moving - remote" sections of this document, depending on whether devices or people are moving while the care is delivered. In this context, devices and people consume communication services delivered by PLMNs, where a mobile network operator can use network slicing as a means to provide a virtual private network, or private slice.

5.2  Static - Local

5.2.1  Description of Modality

5.2.1.1  Overview

Use cases under this modality take place in, for example, hybrid operating rooms. Hybrid operating rooms (OR) are in general equipped with advanced imaging systems such as fixed C-arms (x-ray generator and intensifiers), CT (Computed Tomography) scanners and MRI (Magnetic Resonance Imaging) scanners. The underlying idea is that advanced imaging enables minimally-invasive surgery, which is intended to be less traumatic for the patient as it minimizes incisions and allows surgical procedures to be performed through one or several small cuts. This is useful, for example, in cardiovascular surgery, or in neurosurgery to place deep brain stimulation electrodes.
Due to its many benefits for patients, image-guided surgery is now mainstream for many specialties, from cardiology to gastroenterology or ophthalmology. This is the underlying force behind a very dynamic market, predicted to reach $4,163 million by 2025 with a sustained growth rate of 11.2% from 2018 to 2025 (see [6]).
But, as of now, a lack of real interfaces between technologies and devices inside operating rooms is putting progress at risk. Devices and software must be able to work together to create a truly digitally integrated operating room. Multiple vendors propose proprietary integrated OR solutions, but these are often limited to their particular standpoint, depending on the category of equipment they usually provide: OR tables and lighting, anaesthesia and monitoring equipment, endoscopes or microscopes, medical imaging (X-ray, ultrasound), video monitors and streaming. No category dominates the others to the point of imposing a particular solution that could be adopted by all. This roadblock to full digitalization is addressed by standards such as DICOM Supplement 202 (Real-Time Video, RTV), which leverages the SMPTE ST 2110 family of standards to enable the deployment of equipment in a distributed way. The intention is to connect various video or multi-frame sources to various destinations through a standard IP switch, instead of using a proprietary video switch. This is shown in the figure below (see [2]):
Figure 5.2.1.1-1: Overview diagram of an Operating Room (O.R.)
Carriage of audio-visual signals in their digital form has historically been achieved using coaxial cables that interconnect equipment through Serial Digital Interface (SDI) ports. The SDI technology provides a reliable transport method to carry a multiplex of video, audio and metadata with strict timing relationships. However, as new image formats such as Ultra High Definition (UHD) are introduced, the corresponding SDI bit rates increase well beyond 10 Gbit/s, and the cost of the equipment needed at different points in a video system to embed, de-embed, process, condition, distribute, etc. the SDI signals becomes a major concern. The emergence of professional video-over-IP solutions, enabling high quality and very low latency, now calls for a re-engineering of ORs. This is usually a long and costly process, but it can be accelerated by the adoption of wireless communications, whose flexibility also reduces installation costs.
Witnessing the increasing interest of health industry actors in wireless technologies, [8] projects that the global wireless health market will grow from $39 billion in 2015 to $110 billion by 2020. More specifically, [7] points out the increasing prevalence of wireless technology in hospitals, which has led to the vision of the connected hospital: a fully integrated hospital where caregivers use wireless medical equipment to provide the best quality of care to patients and automatically feed Electronic Health Record (EHR) systems. As a natural evolution, wireless technologies that can cope with hospitals' difficult RF environment and provide the needed security guarantees are expected to enable surgeons to benefit from advanced imaging/control systems directly in operating rooms while keeping the flexibility of wireless connectivity. In practice, one can also expect the following benefits from going wireless in the O.R.:
  • Equipment sharing between operating rooms in the same hospital, which makes procedure planning easier and allows hospitals to deploy an efficient resource optimization strategy;
  • On-demand addition of complementary imaging equipment in case of an incident during a surgical procedure, which ultimately leads to better care for patients;
  • Removal of the many cables connecting a multitude of medical devices, each of them an obstacle, which makes the surgical team's job easier and reduces the risk of infection.
Moreover, the hybrid O.R. trend makes operating rooms increasingly congested and complex, with a multitude (up to 100) of medical devices and monitors from different vendors. In addition to surgical tables, surgical lighting, and room lighting positioned throughout the OR, multiple surgical displays, communication system monitors, camera systems, image capturing devices, and medical printers are all quickly becoming associated with a modern OR. Installing a hybrid O.R. therefore represents a significant cost, coming not only from the advanced imaging systems themselves, but also from the complex cabling infrastructure and the multiple translation systems needed to make all those proprietary devices communicate with each other. Enabling wireless connectivity in the O.R. simplifies the underlying infrastructure, helps streamline the whole setup and reduces the associated installation costs.

5.2.1.2  Synchronization aspects

As a general principle, since images and metadata are generated by different sources and transported over a packet-switched network, sources and video receivers shall be finely synchronized to the same clock synchronisation service. This synchronization is often achieved through dedicated protocols such as PTP version 2 (IEEE 1588v2), offering sub-microsecond clock accuracy.
Note that during surgical procedures, surgeons sometimes need to switch between different medical image sources on the same monitor. A smooth image transition at source switching involves line-level synchronization, which translates into a clock synchronicity accuracy of < 1 μs.
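As an illustration only (this sketch is not part of the TR; it assumes the standard two-step PTP message exchange over a symmetric path), the following snippet shows how a device estimates its clock offset from the grandmaster. The residual offset after correction is what must stay below 1 μs:

```python
# Sketch of the offset/delay estimation used by PTP (IEEE 1588v2),
# assuming a symmetric network path; all timestamps are in seconds.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by grandmaster, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by grandmaster."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error vs. grandmaster
    delay = ((t2 - t1) + (t4 - t3)) / 2   # estimated one-way path delay
    return offset, delay

offset, delay = ptp_offset_and_delay(t1=0.0, t2=150e-6, t3=400e-6, t4=540e-6)
# offset ~ +5e-6 s: this clock needs a 5 us correction before it can meet
# the < 1 us synchronicity required for smooth source switching.
```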
| Reference number | Requirement text | Application / Transport | Comment |
|---|---|---|---|
| 5.6.1 | The 5G system shall support a mechanism to process and transmit IEEE 1588v2 / Precision Time Protocol messages to support 3rd-party applications which use this protocol. | T | See TS 22.104 |
| 5.6.1 | The 5G system shall support a mechanism to synchronize the user-specific time clock of UEs with a working clock. | T | See TS 22.104 |
| 5.6.1 | The working clock domains shall provide time synchronization with precision of ≤ 1 μs. | T | See TS 22.104 |
| 5.6.1 | The 5G system shall provide an interface to the 5G sync domain which can be used by applications to derive their working clock domain or global time domain (Reference Clock Model). | T | See TS 22.104 |
 
| Reference number | Number of devices in one communication group for clock synchronisation | Clock synchronicity requirement | Service area | Comment |
|---|---|---|---|---|
| 5.6.2 - row 1 | Up to 10 UEs | < 1 μs | ≤ 50 m x 50 m | See TS 22.104 |
 
Note that the clock accuracy requirements defined here apply to all use cases defined under this modality unless otherwise stated.

5.2.1.3  Latency aspects

5.2.1.3.1  Imaging Systems
Since medical images are processed in real time by applications delivering results/information intended to ease or even guide the surgical gesture, tight latency constraints apply here, and they often mandate that those applications be hosted by hospital IT facilities at a short network distance from the operating room.
In the case of a medical procedure also involving human beings, the round trip delay constraint is generally calculated based on the following formula:
Round trip delay = Imaging System Latency + Human Reaction Time
Where,
Imaging System Latency = Image generation + end-to-end latency + Application Processing + Image Display
This principle is depicted in the figure below:
Figure 5.2.1.3-1: Imaging System Latency for medical image transmission and display
where:
T1 = Time for image generation,
T2 = T4 = Time delay through the 5G network, defined as the end-to-end latency,
T3 = Application processing time,
T5 = Time for image display,
and Imaging System Latency = T1 + T2 + T3 + T4 + T5.
The Imaging System Latency impairs the achievable precision at a given gesture speed, and is defined based on the fact that surgeons often feel comfortable with a latency that gives 0.5 cm precision at 30 cm/s hand speed (a better precision implying slower hand movements). This translates into an Imaging System Latency, from image generation to display on a monitor, of around 16 ms for procedures on a static organ where the only moving object is the surgeon's hand. As one can see, this figure is not derived through a rational process but depends on the surgeon's perception of whether the equipment introduces delays he or she can cope with. If the organ or body part targeted by an operation is not static (for instance, a beating heart), then the Imaging System Latency shall be reduced further to achieve sufficiently robust gesture precision.
The Imaging System Latency needs to be broken down further in order to derive the sub-contributions of the equipment on the data path:
  • Latency introduced by image generation and display generally comes from synchronisation, that is to say, the availability of data relative to the next clock edge. As a first approach, one can consider that this latency is on the order of the time interval between two successive images and is equally distributed between generation and display. At 120 fps, the combined latency contribution of generation and display would be 8 ms.
  • As a first approximation, since applications may perform quite heavy processing, especially when Augmented Reality is involved, it is a safe bet to set the end-to-end latency much lower than the application latency; a 25%/75% distribution is considered. Under the same assumption as before (120 fps), this leaves a budget of 2 ms for the transport of packets through the 5G system and 6 ms for application processing.
The rationale described above will be used in the use cases defined under this modality.
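The arithmetic behind this budget can be made explicit. The following sketch (illustrative only, reproducing the figures quoted above) derives the 2 ms transport and 6 ms processing budgets:

```python
# Worked example (sketch) of the imaging latency budget derived above.
FPS = 120
imaging_system_latency_ms = 0.5 / 30 * 1000    # 0.5 cm at 30 cm/s -> ~16.7 ms
gen_plus_display_ms = 1000 / FPS               # T1 + T5: one frame interval, ~8.3 ms
remaining_ms = imaging_system_latency_ms - gen_plus_display_ms
transport_ms = 0.25 * remaining_ms             # T2 + T4 budget: ~2 ms
processing_ms = 0.75 * remaining_ms            # T3 budget: ~6 ms
print(f"transport ~{transport_ms:.1f} ms, processing ~{processing_ms:.1f} ms")
```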
Finally, human beings' median reaction time to visual events is around 200 ms and adds to the Imaging System Latency estimated above. The resulting round trip delay may therefore be rather high, but it is compensated by surgeons slowing down their movements as necessary.
5.2.1.3.2  Teleoperation Systems
The whole tele-operated system, including the human operator and the environment, constitutes a closed-loop system whose performance is a matter of transparency and stability. Transparency relates to the 'degree of invisibility' of the robotic system: if it were perfectly transparent, the operator would sense as if he were directly operating on the patient. In the context of tele-surgery, high transparency leads to marginally stable systems and high stability leads to poor transparency, so the performance of the system is a compromise between stability and transparency, and performance is thus limited by stability. Several "controller to controlled-device" schemes have been developed to deal with those challenges in a tele-operation system, as explained hereafter (a minimal sketch of the data exchanged under each scheme follows the list):
  • Position-Position Control: This is the simplest one; the only information exchanged between the control console and the robot is the position of the surgeon's hands and of the instruments, and forces are estimated based on position errors.
  • Force-Position Control: This one is more intuitive, as the real forces resulting from the contact between instruments and the environment are measured by force sensors and sent back to the control console after filtering.
  • 4-Channel Control: This one uses both forces and positions on both the surgeon and robot sides, which improves stability and performance, but at the price of added complexity and cost. This scheme is assumed in this document.
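The sketch below is illustrative only (it is not from the TR; quantities are one-dimensional and the virtual stiffness gain k is a hypothetical parameter) and summarizes what each scheme exchanges per control cycle:

```python
# Illustrative sketch of the data exchanged per control cycle under each
# bilateral control scheme listed above (1-D quantities for simplicity).
def position_position(master_pos, slave_pos, k=50.0):
    """Only positions are exchanged; the force fed back to the console is
    *estimated* from the position tracking error via a virtual stiffness k."""
    slave_cmd = master_pos                         # robot tracks the hand
    feedback_force = k * (slave_pos - master_pos)  # estimated, not measured
    return slave_cmd, feedback_force

def force_position(master_pos, slave_force_measured):
    """Positions flow to the robot; *measured* (and filtered) contact
    forces flow back to the console."""
    return master_pos, slave_force_measured

def four_channel(master_pos, master_force, slave_pos, slave_force):
    """Positions and forces flow in both directions (the scheme assumed
    in this document)."""
    return (master_pos, master_force), (slave_pos, slave_force)
```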
A typical robotic system setup is depicted in the figure below:
Figure 5.2.1.3.2-1: Teleoperating Robotic System Setup
In the direction from the console to the robot:
  • T1 = Time for command generation,
  • T2 = End-to-end latency from the console to the medical application located at the network edge,
  • T3 = Application processing time. In this case, a pre-operative 3D model of the patient's body may be at work, preventing instruments from entering certain critical pre-defined zones.
  • T4 = End-to-end latency from the medical application located at the network edge to the robot,
  • T5 = Time to render control commands into actual instrument movements.
In the direction from the robot to the console:
  • T6 = Time for instrument control feedback (effort, velocity, position) and/or image generation,
  • T7 = End-to-end latency from the robot to the medical application located at the network edge,
  • T8 = Application processing time. It may correspond to image processing delays, or to haptic feedback generation based on the location, velocity and effort measurement data issued by the surgical instruments and on the 3D pre-operative model of the patient's body.
  • T9 = End-to-end latency from the medical application located at the network edge to the console,
  • T10 = Time to render haptic and visual feedback through the surgeon console.
The overall teleoperation system latency is therefore defined as T1 + T2 + T3 + T4 + T5 + T6 + T7 + T8 + T9 + T10.
Studies conducted on state-of-the-art robotic surgery systems (see [9]) allow the following findings to be derived:
  • The maximum tolerable teleoperation system latency, up to which surgeons can still improve their performance by repeating the same simple task over and over again, has been found to be around 300 ms. However, such latency is distinctly noticeable during the course of operative procedures and can only be compensated for by slowing movements down and by operating in a move-pause-move-pause fashion.
  • Longer latencies extend the operating time, especially in the case of complex surgical procedures such as laparoscopic kidney transplant, which is, technically speaking, deemed a very demanding operation.
Depending on the skills of individual surgeons, on the complexity of the tele-operated procedure, on the importance of completing the surgery in a limited time, and on whether a short or no learning curve is mandated (to make the technology accessible to less experienced surgeons), much more stringent requirements on the teleoperation system latency may be appropriate.
Breaking down the different delays across all the sub-systems constituting the robotic system is a very complex issue and depends heavily on the technologies implemented in those sub-systems. However, progress in actuators and sensors seems to point to (T1 + T5) = (T6 + T10) being below 10 ms, and the same rule as in clause 5.2.1.3.1 can be applied for the breakdown of the remaining time budget between transport time and application processing time: 25%/75%.
In this document, latencies are evaluated according to the accepted error in the perception of the surgical instruments' position that is introduced at a given hand speed. Considering that robotic systems can scale the surgeon's hand speed down by a 3:1 ratio, an overall outer-control-loop teleoperation system latency of 50 ms can be derived using the principles and error targets explained in clause 5.2.1.3.1. This leaves roughly a 2 ms end-to-end latency constraint on each of the four radio links involved in the robotic sub-systems' connectivity, as illustrated below.
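The following worked example (illustrative only, using the figures quoted in this clause and in clause 5.2.1.3.1) reproduces the derivation of the per-link budget:

```python
# Worked example (sketch) of the outer-loop teleoperation latency budget.
error_cm, hand_speed_cm_s, motion_scaling = 0.5, 30.0, 3.0
effective_speed_cm_s = hand_speed_cm_s / motion_scaling  # 10 cm/s at the instrument
overall_ms = 1000 * error_cm / effective_speed_cm_s      # 50 ms outer-loop budget
sensing_actuation_ms = 10 + 10          # (T1 + T5) + (T6 + T10), each pair below 10 ms
remaining_ms = overall_ms - sensing_actuation_ms         # 30 ms
transport_ms = 0.25 * remaining_ms      # T2 + T4 + T7 + T9: 7.5 ms (25%/75% rule)
per_link_ms = transport_ms / 4          # ~1.9 ms, i.e. roughly 2 ms per radio link
```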
Also, note that surgeons may be able to adapt to the overall teleoperation system latency through training under a constant delay. However, it is challenging to conduct telesurgery with variable latency.

5.2.2  Duplicating Video on additional monitors

5.2.2.1  Description

In the context of image guided surgery, two operators are directly contributing to the procedure:
  • A surgeon performing the operation itself, using relevant instruments;
  • An assistant controlling the imaging system (e.g., laparoscope).
In some situations, the two operators prefer not to stand on the same side of the patient. And because the control image has to be in front of each operator, two monitors are required: a primary one, directly connected to the imaging system, and a second one on the other side. The picture below gives an example of work zones inside an operating room for reference:
Figure 5.2.2.1-1: Example of operating work zones
As shown in Figure 5.2.2.1-1, additional operators (e.g., the surgical nurse) may also need to see what is happening in order to anticipate actions (e.g., providing an instrument).
The live video image has to be transferred to the additional monitors with minimal latency, without modifying the image itself (resolution, etc.). The latency between the monitors should be compatible with collaborative surgical activity where, for example, the surgeon operates based on the second monitor while the assistant controls the endoscope based on the primary monitor. All equipment is synchronized thanks to the Grand Master common clock.
It is expected that some scopes will produce 8K uncompressed video, with the prospect of also supporting HDR (High Dynamic Range) for wider colour gamut management (up to 10 bits per channel) as well as HFR (High Frame Rate), i.e. up to 120 fps.
The acceptable end-to-end latency is calculated based on the considerations explained in clause 5.2.1.3 and breaks down into 1 ms on the path from the laparoscope to the application and 1 ms on the path from the application to the monitors.
Estimation of the targeted communication service availability is based on the probability of successfully transmitting images within the latency constraints discussed above. Considering that consecutive frame loss or delay may translate into a wrongly estimated distance and may result in serious injury to the patient, this event should only happen with a very low probability over at least the duration of a procedure, e.g. twelve hours. Note that in this use case, a total of 240 images per second are exchanged over the 5G communication service (120 images per second in each direction); a back-of-the-envelope check of these figures is given below.
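The sketch below (illustrative only) checks the bit rates quoted in clause 5.2.2.6 and the volume of frames a twelve-hour procedure must deliver within the latency bound:

```python
# Back-of-the-envelope check (sketch) of the figures used in this use case.
width, height, fps = 7680, 4320, 120            # 8K at 120 fps
bits_per_pixel = 10 * 3                         # 10 bits per channel, 3 channels
raw_gbps = width * height * fps * bits_per_pixel / 1e9   # ~119.4 -> ~120 Gbit/s
compressed_gbps = raw_gbps / 2.5                # ~48 Gbit/s (note 2, ratio of 2.5)

frames_per_procedure = 240 * 3600 * 12          # 240 img/s over twelve hours: ~10.4 M
outage_budget_s = (1 - 0.9999999) * 12 * 3600   # 99.99999 % availability -> ~4.3 ms
```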

5.2.2.2  Pre-conditions

The patient is lying on the operating table and the surgery team is ready to start the procedure. Each piece of needed equipment (laparoscope, monitors, etc.) is:
  • Powered up,
  • Subscribed to 5G-LAN type services deployed by the hospital IT infrastructure manager,
  • Configured with the private groups it shall use to communicate with the other devices,
  • Attached to the non-public 5G network covering the operating room.
In addition, the monitors are subscribed to a URLLC point-to-multipoint communication service dedicated to transport high data rate downlink video streams.
The application that handles the video stream generated by the laparoscope is up and running and instantiated on private IT resources inside the hospital at a short network distance from the operating room.
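For illustration only, the pre-conditions above can be pictured as a provisioning record per device; the field names below are assumptions, not defined by this TR:

```python
# Hypothetical provisioning data (illustration only; names are assumptions)
# capturing the pre-conditions above: each device is subscribed to the
# hospital's 5G LAN-type service, assigned its private communication
# group(s), and attached to the non-public network covering the OR.
or_setup = {
    "laparoscope-01": {
        "5glan_service": "hospital-or-lan",  # 5G LAN-type service subscription
        "private_groups": ["or3-video"],     # group used to reach the application
        "network": "npn-hospital",           # non-public 5G network
    },
    "monitor-01": {
        "5glan_service": "hospital-or-lan",
        "private_groups": ["or3-video"],
        "ptm_service": "urllc-video",        # URLLC point-to-multipoint subscription
        "network": "npn-hospital",
    },
}
```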

5.2.2.3  Service Flows

The surgeon makes small incisions in the patient's abdominal wall to insert a laparoscope equipped with a small video camera and a cold light source. Other small-diameter instruments (grasper and scissors) are introduced in order to take a sample of tissue from one of the patient's organs, and the patient's abdomen is insufflated with carbon dioxide gas.
  1. As the laparoscope progresses into the patient's abdomen, an 8K video stream is generated by the camera and sent through a URLLC 5G communication service to a medical application instantiated at the network edge.
  2. The application distributes the video stream generated by the laparoscope to the authorized devices (video monitors) through the broadcast URLLC 5G communication service.

5.2.2.4  Post-conditions

Images are displayed by each monitor without any noticeable delay and allow the surgery team to cooperate efficiently during the whole procedure.

5.2.2.5  Existing features partly or fully covering the use case functionality

| Reference number | Requirement text | Application / Transport | Comment |
|---|---|---|---|
| 6.24 | Set of requirements related to the management of 5G LAN-type services and to the transport of Ethernet frames between UEs belonging to the same 5G LAN-type service. | T | See TS 22.261 |
| 6.13 | Flexible Broadcast/Multicast service requirements. | T | See TS 22.261 |

5.2.2.6  Potential New Requirements needed to support the use case

Use case 5.2.2 - Duplicating video on additional monitors: characteristic parameters and influence quantities.

| Scenario | Communication service availability: target value in % | Communication service reliability: Mean Time Between Failure | End-to-end latency: maximum | Bit rate | Direction | Message size [byte] | Survival time | UE speed | # of active UEs | Service area [m²] |
|---|---|---|---|---|---|---|---|---|---|---|
| Uncompressed 8K (7680x4320 pixels) 120 fps HDR 10-bit real-time video stream | >99.99999 | >1 year | <1 ms | 120 Gbit/s | UE to Network | ~1500 to ~9000 (note 3) | ~8 ms | stationary | 1 | 100 |
| 8K (7680x4320 pixels) 120 fps HDR 10-bit real-time video stream with lossless compression (note 1) | >99.99999 | >1 year | <1 ms | 48 Gbit/s (note 2) | UE to Network | ~1500 to ~9000 (note 3) | ~8 ms | stationary | 1 | 100 |
| Uncompressed 8K (7680x4320 pixels) 120 fps HDR 10-bit real-time video stream | >99.99999 | >1 year | <1 ms | 120 Gbit/s | Network to UEs | ~1500 to ~9000 (note 3) | ~8 ms | stationary | <10 | 100 |
| 8K (7680x4320 pixels) 120 fps HDR 10-bit real-time video stream with lossless compression (note 1) | >99.99999 | >1 year | <1 ms | 48 Gbit/s (note 2) | Network to UEs | ~1500 to ~9000 (note 3) | ~8 ms | stationary | <10 | 100 |

NOTE 1:
These rows provide alternative KPIs that are still acceptable.
NOTE 2:
An average compression ratio of 2.5 has been considered when applying a lossless compression algorithm.
NOTE 3:
An MTU size of 1500 bytes is generally not suitable for multi-gigabit connections, as it induces many interrupts and a high CPU load. On the other hand, Ethernet jumbo frames of up to 9000 bytes require all equipment on the forwarding path to support that size in order to avoid fragmentation.
