Pilot Experiments

The evaluation of the 5G-EPICENTRE platform foresees experimentation in the context of several first-party experiments (i.e., experiments deployed by consortium partners), which will be realized as a PPDR vertical. The purpose of these pilot experiments is to span the entire range of the three ITU-defined service types (i.e., eMBB, mMTC and URLLC), as well as to provide the basis for assessing the platform's secure interoperability capabilities beyond vendor-specific implementations. Toward this end, 5G-EPICENTRE has engaged SMEs and organizations that will participate in the realization of the use cases. These are active players in the public security and disaster management market, and thus act as key enablers for assessing 5G-EPICENTRE against the real needs that should be addressed. Finally, through the realization of these first-party experiments, KPIs relevant to 5G will be measured, especially those pertaining to service creation time.

5G-EPICENTRE Testbeds

The UMA platform integrates a distributed K8s-based infrastructure with a multi-master multi-node architecture. This infrastructure is composed of three different physical servers, one for the main data center and two edge nodes to distribute the services across locations based on the experimenters’ requirements.

K8s deployment at Malaga testbed.

The UMA K8s deployment combines K8s orchestration with Docker and KubeVirt to manage both CNFs and VNFs. For KPI monitoring, an instance of Prometheus retrieves data from the NFVI and the core network. A RabbitMQ message broker connects to Prometheus and publishes data retrieved from the experiments via MQTT queues. The deployed NFVI is connected to the 5G core network and to external networks. The core network contains an Athonet 5G Core (5GC) Standalone instance, with both user and control planes configured, which attaches to the existing radio resources available at UMA's premises. Regarding the K8s-based architecture, as depicted in Figure 1, the control plane is composed of three master nodes with a stacked etcd topology. The cluster includes four worker nodes, one acting as the main persistent storage server. The multi-master approach guarantees high availability, addressing the reliability requirements of PPDR services. The architecture integrates a high-availability proxy acting as the K8s API entry point, which balances queries from researchers among the control plane nodes and the worker nodes. Storage is dynamically provisioned and can be shared among workers to ensure scalability, geographical diversity, and minimal downtime in case of failure.
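The described Prometheus-to-RabbitMQ pipeline can be illustrated with a minimal sketch of the data transformation involved. This is not the project's actual bridge code: the topic layout, experiment identifier and payload fields are assumptions for illustration; only the shape of the Prometheus `/api/v1/query` HTTP response is taken from the Prometheus API specification.

```python
import json

def prometheus_to_mqtt(result: dict, experiment_id: str):
    """Convert a Prometheus instant-query result (the JSON body returned
    by /api/v1/query) into (topic, payload) pairs ready for publication
    on an MQTT queue. Topic naming is a hypothetical convention."""
    messages = []
    for series in result.get("data", {}).get("result", []):
        metric = series["metric"]
        name = metric.get("__name__", "unknown")
        timestamp, value = series["value"]  # [unix_ts, "value-as-string"]
        topic = f"experiments/{experiment_id}/kpi/{name}"
        payload = json.dumps({
            "metric": name,
            "labels": {k: v for k, v in metric.items() if k != "__name__"},
            "timestamp": timestamp,
            "value": float(value),
        })
        messages.append((topic, payload))
    return messages

# Sample response following the Prometheus HTTP API format
# (metric name and label values are illustrative):
sample = {
    "status": "success",
    "data": {"resultType": "vector", "result": [
        {"metric": {"__name__": "ran_latency_ms", "cell": "c1"},
         "value": [1650000000, "12.4"]},
    ]},
}
msgs = prometheus_to_mqtt(sample, "uc1")
```

In a real deployment the resulting pairs would be handed to an MQTT client publishing towards the RabbitMQ broker; keeping the transformation pure makes it easy to test independently of the broker.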

5G network

The Malaga testbed provides a private 5G outdoor deployment around the Ada Byron research building, located on the campus of the University of Malaga. This outdoor network was deployed in the context of the European project 5GENESIS and includes four pairs of 4G + 5G Radio Remote Units (RRUs) connected to a Nokia baseband unit (BBU). The radio deployment supports both the Non-Standalone (NSA) and Standalone (SA) operating modes. The core network, provided by Athonet, currently comprises a 4G Release 16 core and a 5G core for 5G SA mode. The 4G radio units operate in FDD band 7 (2.6 GHz), while the 5G radio units operate in the 3.5 GHz TDD band (band n78); in both cases Telefónica spectrum is used thanks to the agreement reached with the operator. These RRUs were upgraded in June 2022 with the latest software provided by the manufacturer.

The 5G deployment also includes two RRUs for millimetre waves, which operate in band n257 and support NSA mode, as well as two 5G SA picocells for indoor coverage in band n78.

5G private network

The testbed includes a 5G network emulator with support for both sub-6 GHz and millimetre-wave OTA 5G NR communications. The 5G network emulator used in the laboratory test environment is the Keysight E7515B UXM 5G wireless test platform, an evolution of previous 4G test platforms already included in the Malaga testbed. This state-of-the-art setup reproduces a 5G network in a box for controlled experimentation with 5G mobile devices, including RAN and NAS emulation for both signalling and radio protocols. The emulated networks can deploy multiple cells not only in NSA configuration but also in SA configuration (without the need for an LTE anchor).

Additionally, the testbed offers 5G connectivity in the city centre of Malaga through a 5G NSA network deployed in collaboration with Telefónica and Malaga police to test PPDR services.

The Portuguese experimental facility site is hosted by Altice Labs in Aveiro and has been built mainly in the framework of the H2020 ICT-17 5G-VINNI project (https://www.5g-vinni.eu). The figure below represents the main technological building blocks.

Altice Labs 5G Facility site

The infrastructure can be seen in the figure below.

Altice Labs 5G testbed infrastructure

The features supported are:

  • Fully functional and validated indoor 5G SA infrastructure
  • 5G NR – 3.7/3.8 GHz by ASOCS
  • O-RAN compliant CU/DU/RU RAN functional splitting
  • 5G SA Core based on Fraunhofer Fokus’ Open5GCore
  • Automated deployment and lifecycle management of 5G core components by SONATA
  • Integration of several 5G SA CPE devices validated (e.g. Huawei 5G CPE Pro H112-372)
  • Limited support of 5G network slicing (in progress in O5GC)
  • Integration of Open5GCore with Intel Openness for edge computing (AF-controlled traffic steering)

The testbed at HHI, part of the 5G-Berlin initiative (https://5g-berlin.de/), employs a 5G base station with mMIMO antenna capability operating in FR1. A test version of the 5G core is operated, which not only gives the freedom to configure system parameters independently, but also exposes the different N(x) interfaces for debugging purposes.

The main features of HHI testbed are:

  • 5G SA network
  • Band n78 – 3.75 GHz operating frequency
  • 100 MHz bandwidth
  • 5G core set up for eMBB
  • mMIMO (64 Tx/Rx) antenna with 2 beams
  • Authorized VPN access required to reach the testbed

Schematic plan for the testbed network.

mMIMO antenna with 64 Tx/Rx configuration.

Nokia gNB configured as 5G SA.

A dedicated application server, a Dell R440, is deployed, which can not only be used to analyse data from the core or the network, but also to host use-case-specific artefacts, such as Docker containers, to support testing. This facilitates the operation and testing of the use cases from a single point.

Dell R440 application server.

Moreover, as the institute primarily focuses on research in wireless technologies, the testbed benefits from a number of state-of-the-art measurement instruments, such as an R&S spectrum analyzer, an R&S signal generator and a Viavi CellAdvisor, which are available for testing and analyzing the 5G network efficiently and precisely.

Viavi CellAdvisor.

R&S Spectrum analyzer.

Furthermore, to make the testing of different use cases more robust and to mimic real-world scenarios, a number of 5G user equipment (UE) devices are available. These are:

  1. Quectel M.2 modems
  2. Huawei 5G mobile phones and CPEs

Quectel M.2 5G modem.

Huawei 5G CPE.

Quectel 5G modem coupled with Raspberry Pi.

General experimentation capabilities

The CTTC 5G testbed allows creating multiple testbed instances (TI). Each TI is a Network Function Virtualisation (NFV) ecosystem. This allows sharing the same testbed physical infrastructure and building different subtestbeds according to the experimentation needs of each use case (UC). A testbed instance may include:

  • Computing capabilities (CPU, GPU, edge, cloud)
  • 5G network capabilities (including UEs, RAN, and core)
  • Other devices

A global view of the CTTC testbed is presented in the following figure.

The 5G CTTC Testbed.

The left part corresponds to general-purpose servers over which the testbed instances deploy their VMs or containers. Though the configuration of this part is flexible, this component can be seen as the cloud data center of the scenario under evaluation.

The lower part corresponds to edge servers, which may be either general-purpose servers or servers equipped with GPUs for offering services to demanding URLLC applications (e.g., VR).

Since the goal is to create testbed instances on top of the shared infrastructure, LXD is exploited to create the system containers and virtual machines that form the NFV infrastructure (NFVI) of each testbed instance. Inside each testbed instance, the user can decide which management and orchestration stack to deploy, which may include, for instance, OSM, Kubernetes or OpenStack.
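As a hedged illustration of how the nodes of one testbed instance might be created with LXD, the helper below builds `lxc launch` command lines for containers and (with `--vm`) virtual machines. The instance names and profile names are hypothetical; only the `lxc launch` syntax itself comes from the LXD CLI.

```python
def lxc_launch(image: str, name: str, vm: bool = False, profiles=()):
    """Build an `lxc launch` command line for one testbed-instance node.
    With --vm, LXD creates a virtual machine instead of a system container."""
    cmd = ["lxc", "launch", image, name]
    if vm:
        cmd.append("--vm")
    for p in profiles:
        cmd += ["--profile", p]
    return cmd

# Hypothetical nodes of one testbed instance (names/profiles illustrative):
nodes = [
    lxc_launch("ubuntu:22.04", "ti1-k8s-master", profiles=("ti1-net",)),
    lxc_launch("ubuntu:22.04", "ti1-k8s-worker1", profiles=("ti1-net",)),
    lxc_launch("ubuntu:22.04", "ti1-osm", vm=True, profiles=("ti1-net",)),
]
```

A per-instance profile like the assumed `ti1-net` is what would isolate each testbed instance's network from the others on the shared hosts.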

Multiple flavours of the 5G mobile network can be used, consisting of combinations of commercial and open-source hardware and software. Multiple real and emulated 5G devices can also be used in each testbed instance. The following sections describe each component in more detail.

Computing capabilities

The basic computing capabilities consist of 10 servers with a total of 456 central processing unit (CPU) cores, 2632 GB memory, 58 TB storage, and 6 graphic processing units (GPUs) distributed in two servers.

For edge computing, there are 10 machines with a total of 40 cores, 147 GB memory, and 2.5 TB storage.

It should be noted that the available GPUs can be virtualized for containers/VMs.

The following picture shows the CTTC testbed including the 5G network equipment.

Part of the CTTC 5G Testbed showing some of the servers and the Amarisoft 5G equipment.

Additionally, more servers, networking, and measurement equipment can be made available from another testbed of the Services as networkS (SaS) group, namely the EXTREME Testbed®, shown in the following picture.

Part of the CTTC 5G Testbed. EXTREME Testbed ®

5G Core capabilities

The 5GC (and RAN) can be deployed either through commercial products, i.e., the Amarisoft CallBox, or through open-source solutions integrated with Universal Software Radio Peripherals (USRPs).

More specifically, the CTTC testbed offers a variety of 5GC implementations, namely:

  • Amarisoft 5GC, as part of the Callbox
  • Containerized Open5GS
  • Containerized OpenAirInterface

The main characteristics of the Amarisoft 5GC are:

  • Network elements: AMF, AUSF, SMF, UPF, UDM and 5G-EIR (all integrated within the same software component)
  • 3GPP Release 16
  • NAS encryption and integrity protection: AES, SNOW3G and ZUC
  • Configurable QoS flows
  • Support of multi PDU sessions
  • Support of NR, LTE, NB-IoT and non-3GPP RAT
  • Slices are supported in the 5GC (and gNB) of the Callbox but operate in best effort mode

For more information, you can check the Amarisoft Callbox datasheet:

https://www.amarisoft.com/app/uploads/2022/01/AMARI-Callbox-Ultimate.pdf

The main characteristics of Open5GS are:

  • Open Source implementation for 5G Core and EPC
  • 3GPP Release 16
  • 4G/5G NSA Core and 5G SA Core
  • AES, Snow3G, ZUC algorithms for encryption
  • Support of USIM cards using Milenage
  • Support of multi PDU sessions

You can find more information about Open5GS at https://open5gs.org/

Additionally, other 4G cores have been used previously and could also be deployed for comparison purposes if needed:

  • Containerized Magma

This offers considerable flexibility in the deployments, making it possible to better adapt to the requirements of any UC and to perform comparisons.

Additionally, srsRAN can be used for 4G scenarios and will soon offer 5G support.

5G RAN capabilities

The FR1 band is supported in NSA and SA scenarios. Additionally, 4G scenarios can also be deployed.

The RAN features come from the Amarisoft RAN, part of the Callbox equipment:

  • One Amarisoft Callbox Ultimate
  • Two Amarisoft Callbox Mini

The main characteristics of the Amarisoft gNodeB are:

  • 3GPP Release 16
  • FDD/TDD FR1 (< 6GHz)
  • Bandwidth up to 100 MHz
  • Up to MIMO 4×4 in UL and DL
  • Subcarrier spacing – data: 15, 30, 60 or 120 kHz; SSB: 15, 30, 120 or 240 kHz
  • Modulation schemes up to 256QAM in DL and UL
  • Supported modes: NSA and SA
  • Network interfaces: NG interface (NGAP and GTP-U) to 5GC and XnAP between gNodeBs
  • Carrier Aggregation up to 8 DL carriers in SA and NSA
  • Max number of 5G cells: 8
  • 5QI support
  • Slices are supported in the gNB (and 5GC) of the Callbox but operate in best effort mode
  • APIs for monitoring several metrics that could feed the monitoring system
  • APIs for configuring some parameters of the RAN
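The subcarrier spacings listed above follow the 5G NR numerology μ, where SCS = 15·2^μ kHz and each 1 ms subframe contains 2^μ slots of 14 OFDM symbols. A small sketch makes the resulting slot durations explicit:

```python
def slot_duration_ms(scs_khz: int) -> float:
    """5G NR slot duration: with numerology mu, SCS = 15 * 2**mu kHz,
    so each 1 ms subframe holds 2**mu slots."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 1.0 / (2 ** mu)

# e.g. the 30 kHz SCS typically used in band n78 gives 0.5 ms slots,
# which is one reason higher numerologies suit low-latency traffic.
```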

For more information, you can check the Amarisoft Callbox datasheets:

https://www.amarisoft.com/app/uploads/2022/01/AMARI-Callbox-Ultimate.pdf

https://www.amarisoft.com/app/uploads/2021/10/AMARI-Callbox-Mini.pdf

Additionally, srsRAN can be used for 4G scenarios and will soon offer 5G support.

5G Devices

UEs in the testbed include:

  • a 5G SA CPE (Huawei 5G CPE Pro 2), which enables connecting WiFi-only devices to the 5G network;
  • 5G NSA smartphones (two iPhone 12 Pro Max and one OnePlus 8);
  • 5G emulated UEs through the Amarisoft SimBox (able to emulate up to 64 UEs); and
  • other (for the moment) non-5G equipment, e.g., the MS HoloLens, which may be connected to the 5G network in different ways (e.g., via a 5G CPE that provides WiFi connectivity).

The Amarisoft SimBox also includes a channel simulator. The main configuration options of this channel simulator include:

  • Position, speed, and direction of the UEs
  • Channel type
  • Number of antennas
  • Noise spectral density
  • Reference signal power
  • UL power attenuation

Monitoring capabilities

For monitoring the physical/virtual resources, Netdata and/or Prometheus can be used. Custom-made monitoring systems developed in other projects may as well be used. This enables monitoring any metric from the RAN to the core and application and visualizing it. An instance of RabbitMQ supports a message-passing service.

Management and orchestration capabilities

This infrastructure is shared among multiple projects and researchers; therefore, virtualisation is essential. LXD is the main virtualisation technology, allowing CTTC to create a virtualised experimental environment called a Testbed Instance. In general, the Virtual Network Functions (VNFs) are orchestrated by Open Source MANO (OSM), which uses Kubernetes and/or OpenStack as the Virtualised Infrastructure Manager (VIM), but this can be adapted depending on the testing needs. The computing resources managed by these VIMs are either VMs or Docker containers. Karmada is used to federate the multi-domain Kubernetes clusters.

Use Cases

5G-EPICENTRE aims to address different PPDR operational scenarios through a series of first-party experimentation activities, each provisioned by a consortium partner, acting as piloting activities for the platform. For each use case, the components of the respective solution will be converted into CNFs/VNFs and deployed to the orchestration environment of the 5G-EPICENTRE facilities. The goal is to convincingly demonstrate a facility open enough to provide the network functionalities needed by PPDR applications. This approach will simplify the testing of key technologies such as network slicing and ultra-reliable low-latency communication. Each partner will then document their deployment experience in a VNF environment, the process of developing the VNFs, orchestration issues, marketplace preparation and demonstration results.

UC1 (Multimedia MC Communication and Collaboration Platform) will support the following sub use cases:

  • PPDR mobile users and dispatchers will be able to experiment with Mission Critical Services (MCS) applications enabling the following functionalities:
      ◦ Group and individual voice calls.
      ◦ Group and individual messaging.
      ◦ Group and individual multimedia messaging.
      ◦ Individual video calls.
      ◦ Emergency calls.
      ◦ Location and map services.
  • Application developers will be able to integrate their solutions with the Airbus MCS, enabling the same functionalities as listed above. This will be done via Application Programming Interface (API) methods.
  • Video device providers (such as fixed, wearable, and drone cameras) will be able to send video streams coming from their devices to MCS clients (both mobile and dispatcher). This will be done either:
      ◦ via network protocols for establishing and controlling media sessions and for delivering media, or
      ◦ via MCS APIs.
  • PPDR mobile users and dispatchers will be able to experiment with Mission Management mobile and desktop applications enabling tactical situation information exchange (resource details and positions, drawings, documents and instant messages).


In 2013, a new Working Group (WG) was created under the 3rd Generation Partnership Project (3GPP): System Architecture WG6 (SA6), with the responsibility to define, develop and maintain the technical specifications for Mission Critical Service (MCX) standards, including Mission Critical Push-To-Talk (MCPTT), Mission Critical Data (MCData) and Mission Critical Video (MCVideo) services. NEMERGENT has continued to expand this expertise since its inception, specialising in technologically advanced, fully standards-compliant MCX solutions over 4G/5G broadband networks. MCX services provide connections that are robust against errors, an assured predetermined quality of service, and network priority over other communications. They are used in sectors ranging from public safety and emergency services (police, fire, health, etc.) to operators and industrial environments (oil & gas, mining, transport, etc.).

Sector of interest for MCX solutions.

The most widespread public safety communication solutions, such as TETRA or P25, are based on legacy private radio technology with proprietary deployments. As a result, both public administrations and private companies have to cover the costs of deploying these networks. In addition, the mix of providers and the differences between networks end up locking administrations into a single provider, with the resulting cost overruns.

On the other hand, the current digital revolution demands new multimedia capabilities, high-speed data access and new functionalities. The emergency communications sector is no exception, and concepts such as remote video assistance, augmented reality and video emergency calls are beginning to gain relevance. However, the development of this type of service would not be possible over current TETRA or P25 networks, due to their technological limitations. Thus, the capabilities of 5G technology represent a key milestone in the evolution of the MCX sector, as it brings improved bandwidth, low latency with ultra-reliable service, and massive machine-to-machine communication (eMBB, URLLC and mMTC) capabilities. In addition, 5G networks introduce the concept of "network slicing", which opens up the possibility of splitting the 5G mobile operator's network into something similar to sub-networks. These 5G slices can be managed semi-independently of each other, enabling a parallel MCX virtual sub-network, so that even if the regular network is saturated by traffic, the MCX service can continue working normally.

MCX UE application.

This has been NEMERGENT's research focus over the last few years (2018-2022): deepening its understanding of 5G technology and creating a solution that can ultimately deliver enhanced emergency communications services. NEMERGENT has thus taken advantage of the need for a technological leap in the emergency communications sector, along with the opportunity offered by new 5G improvements, to develop solutions that can be deployed as a vertical service in public, private or hybrid 5G networks.

NEM’s approach to MCX-5G experimentation.

In this framework, a platform such as the one proposed in the 5G-EPICENTRE project acquires significant relevance. NEMERGENT intends to use the infrastructure and services created in the project to continue offering technological advances in its MCX solutions, and to experiment with new functionalities. The Use Case (UC) aims to address the need for coordination between Public Protection and Disaster Relief (PPDR) agencies by implementing a dynamic solution based on location or network conditions. It also seeks to explore Quality of Service (QoS) management and slicing for MCX communications. To this end, a fully microservices-based MCX solution has been developed, fitting the cloud-native nature of the 5G-EPICENTRE platform. This technology, used in both the MCX solution and the 5G-EPICENTRE platform, will increase deployment agility, supporting the experimental needs raised in this UC.

 

Within the 5G-EPICENTRE project, HHI plans to test an important UC addressing the topic of PPDR. Among the most prominent cases of interest to the Wireless Communications and Networks department at the Fraunhofer Institute for Telecommunications is the demonstration of ultra-reliable drone navigation and remote control utilising the federated testbed resources.

Drones have the potential to improve public safety, as they can, for example, be deployed to fly ahead of first responders and transmit live video of a particular situation at the site. This could be, for example, a fire on company premises. By flying ahead, the drone can clarify the emergency situation before first responders are able to reach the site. This saves extremely valuable time on the scene and improves planning of the mission, so that e.g., additional personnel can be alerted and deployed, if necessary. 

Efficient drone control and localisation, however, remain challenging tasks, as drone communications should be characterised by stability, wide availability, low cost and ultra-reliability, even when the drone is out of sight. As modern drones are typically controlled via remote control, their applicability in real-life situations remains severely limited. In this UC, HHI will experiment with various methods for drone control in different situations, particularly focusing on network overload situations, when the data channels are heavily used during major events or disasters.

A particularly efficient means of controlling drones via bandwidth-optimised communication protocols in the mobile network will be deployed in the form of VNFs on top of the 5G-EPICENTRE infrastructure, facilitating two-way communication in which commands are transmitted to the device, which in turn responds with information about its position, altitude and battery status.

The mission drone will be using a 5G network slice in order to secure ultra-reliability and will be streaming Infrared (IR) and optical video streams of the site. The fire service is to receive a prioritised data link for the use of drones. Video and telemetry data should be able to be displayed on different devices at the same time.

The first step is to realise the command and control (CC) link to the drone. This involves narrowband data with strict availability and latency requirements. In the second step, the live video is to be transmitted via a separate VNF. The video has a high bandwidth requirement, but less strict latency and availability requirements than the CC link. Subsequently, the data from the CC link and the video should be displayable in a common graphical user interface.
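The narrowband telemetry carried over the CC link (position, altitude, battery status) could be serialized compactly; the following sketch is purely illustrative — the frame layout, field names and units are assumptions, not the project's actual protocol.

```python
import struct

# Hypothetical compact telemetry frame for the narrowband CC link:
# lat, lon as float64 degrees, altitude as float32 metres,
# battery as uint8 percent; network byte order, no padding.
FMT = "!ddfB"  # 8 + 8 + 4 + 1 = 21 bytes per frame

def encode_telemetry(lat: float, lon: float, alt_m: float, battery_pct: int) -> bytes:
    return struct.pack(FMT, lat, lon, alt_m, battery_pct)

def decode_telemetry(frame: bytes) -> dict:
    lat, lon, alt, batt = struct.unpack(FMT, frame)
    return {"lat": lat, "lon": lon, "altitude_m": alt, "battery_pct": batt}

# Illustrative sample: a drone over Berlin at 84.5 m with 76% battery.
frame = encode_telemetry(52.5163, 13.3777, 84.5, 76)
```

At 21 bytes per update, even a 10 Hz telemetry rate stays in the low-kilobit-per-second range, which is why the CC link is narrowband while the video stream needs a separate, high-bandwidth path.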

UC3 Ground Control app with integrated video

Experiment phases or deployment scenarios

The UC will demonstrate the use of a drone as part of a fire rescue service operation. 

  • The drone connects to the 5G testbed via 5G campus network connection using 100 MHz bandwidth at 3.7 GHz in band n78 for the transmission of telemetry and video data. 
  • Optionally, the drone can connect to a 5G slice within this band to ensure reliability for mission critical communications. 

Scenario 1

The control centre launches the drone and sends it to the operation area. The emergency forces who arrive on the scene after the drone can access it and its video data via a tablet or other suitable UE on their way to the site to get an impression of the situation. In order to relieve the emergency services, the monitoring of the drone can be returned to the control centre at any time. The mission operation manager is in control of the drone and the video streams all the time, even on the way to the site. 

Scenario 2

Optionally, and in addition to Scenario 1, another drone operator off-site is able to control the drone remotely from the mission control centre. In addition to the drone's automatic flight operation, this gives maximum control over the drone and helps to guide and advise first responders with regard to the site's special conditions. This can make all the difference in saving valuable time, even if only a few crucial minutes, and thereby possibly save lives.

 

Use case main building blocks 

The interaction of the individual components is shown in the figure below.  

UC3 Interaction of the individual components (server equals management server)

The main building blocks consist of the drone with its dual camera and its 5G interface. The 5G network forms the node over which the entire communication runs. The processing of the data is done on an edge server. Access to the data and control of the drone are provided via the Internet, the management server and the 5G network, as depicted in the UML diagram.

UC3 UML diagram

 

The Mobitrust platform is the key technology behind Use Case 4 of 5G-EPICENTRE, which is centred on IoT for improving first responders' situational awareness and safety. Mobitrust has been under continuous development by a specialised OneSource team for the past eight years, and the latest enhancements led to the cloudification of its internal components and their split into microservices. The wearable devices, known as BodyKits, are able to use 5G SA, which brings vast improvements to field awareness by delivering reliable communications, low latency and enough bandwidth for real-time HD video from multiple points in the field.

The platform has a wide range of possible end users, including police forces, fire departments, civil protection, military forces, industrial workers and emergency medical services. By leveraging multiple technologies and collecting inputs from multiple sources such as sensors (biological, environmental and positioning), cameras, among others, Mobitrust offers the following functionalities:

  • Integration with 4G and 5G for public safety communications
  • Data correlation and personalised notifications
  • Integration with Commercial-Off-The-Shelf (COTS) devices
  • Integration with Mobile Device Management
  • Advanced statistics
  • A secure mobile platform
  • Automated actions in response to a set of defined events
  • Automated alarms in case of an anomalous sensor reading
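The last two items — automated actions and alarms on anomalous sensor readings — can be pictured as a simple rule engine. The sketch below is an assumption-laden illustration: the sensor names, thresholds and alarm messages are invented for the example and are not Mobitrust's actual rules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlarmRule:
    sensor: str                          # key in a BodyKit reading
    predicate: Callable[[float], bool]   # True when the value is anomalous
    message: str

def evaluate(rules, reading: dict):
    """Return the alarm messages of every rule whose sensor value,
    if present in the reading, is flagged as anomalous."""
    return [r.message for r in rules
            if r.sensor in reading and r.predicate(reading[r.sensor])]

# Illustrative rules for a hypothetical BodyKit wearer:
rules = [
    AlarmRule("heart_rate_bpm", lambda v: v > 180 or v < 40,
              "abnormal heart rate"),
    AlarmRule("co_ppm", lambda v: v > 35,
              "carbon monoxide above safe level"),
]
alarms = evaluate(rules, {"heart_rate_bpm": 190, "co_ppm": 12})
```

In a platform like the one described, such alarms would then be forwarded to the CCCs alongside the raw sensor streams.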

All video, audio and sensor transmissions may be sent to mobile Command and Control Centres (CCCs) as well as to a CCC at a central location. From these control centres, with the enhanced situational awareness, operators are able to make better-informed decisions and improve the overall safety of field teams and the public in general.

Mobitrust network services

Designed to be used anywhere at any time, the Mobitrust platform now leverages a microservice architecture, which allows its internal components to be split across multiple geographical locations. Hence, certain components can be instantiated near the scenarios where operations are taking place. Such proximity decreases communication latency and also improves the reliability of the whole system, making it tolerant to failures in backhaul connectivity, which are known to occur in certain catastrophe scenarios.

For this use case, RedZinc provides wearable video for mobile telemedicine applications. BlueEye Handsfree is a wearable video solution for paramedics and nurses. A video camera, worn on an ergonomically designed headset, sends live point-of-view video to a remote doctor. The doctor could be at a helpdesk, in a different part of the hospital, or at home at night. Real-time video of emergencies can benefit both patients and first responders. When the video is relayed to the emergency doctor at the hospital, the doctor can help with diagnosis, treatment and oversight before the patient reaches the hospital. For a stroke patient 'time is brain tissue', and for a heart attack patient 'time is heart muscle'. Using point-of-view video from the paramedic to 'immerse' the hospital-based doctor in the remote scene allows quicker delivery of, for example, clot-busting drugs, which can benefit patient outcomes.

RedZinc has also developed wearable video for educational and training settings. A medical professional gives a live demonstration of a medical procedure to students in remote locations, such as students at home. The professor or medical professional wears the hands-free headset, through which the live video and audio feed is transmitted to the remotely located students. The students log in to the BlueEye portal using a laptop or smartphone and access the real-time video and audio feed.



 

OPTO specializes in the development, design and manufacture of sophisticated individual and complex large-scale solutions for measurement, monitoring and control technologies. OPTO's Use Case (UC6) uses drones for an in-depth analysis of specific emergency situations that may arise. The solution focuses on camera-based systems with image data transfer over networks. It processes images using Artificial Intelligence (AI) customized for the PPDR sector: an image flow is generated from a camera-based system connected to the 5G network, and the data are transferred via a Virtual Network Function (VNF) AI Analyzer to a 5G-connected handheld display device. The AI Analyzer, hosted in the 5G core server landscape, annotates detected objects in the image flow, which can then be displayed on the 5G-connected handheld display device. UC6's solution is thus a drone application focused on image processing, bringing new needs to 5G experimentation in drone contexts.

 

Youbiquo (YBQ) is the manufacturer of the “Talens” Smart Glasses, a wearable computer equipped with AR and AI features. Having achieved several successes in the manufacturing industry, the Smart Glasses come equipped with Smart Personal Assistant and Video Conference software, which YBQ plans to integrate into the rescue and operations environment. In this Use Case (UC7), YBQ aims to experiment with its Talens Holo Smart Glasses in 5G network conditions, targeting a case of interest to the PPDR domain, which is described below.

As shown in the figure below, instance segmentation and edge detection will be used to overlay useful information directly on top of the real world through the optical see-through display worn by civil defence workers (on-field operators) who patrol or operate in a designated area. For the realisation of this scenario, low-latency edge device interconnection is a requirement, so that ML processes dealing with costly, AI-driven semantic segmentation can run efficiently and provide real-time view annotation. The overall situational awareness mechanism for the officers will be complemented by data exchanged between the operation site and the Command & Control Centre (CCC). Finally, if drones are available on site, Machine Learning (ML)-elaborated information coming from their cameras can be shown on the AR layer, such as the number of people injured, fire locations, other public forces in the field, and so on.

UC7 real-time semantic segmentation

Experiment phases or deployment scenarios 

A set of civil defence workers wearing Smart Glasses are able to see AR information on the disaster scene. The AR layer is composed of information elaborated locally in the wearable processing unit together with information elaborated remotely by ML algorithms in the CCC. The Smart Glasses worn by each operator send an audio/video stream to the CCC for ML situational awareness evaluation. In the CCC, a set of views can be developed to analyse the different information coming from the disaster field, segmented by meaning, e.g., a heat map highlighting the movements of operators in the field. All operators wearing the Smart Glasses can start an audio/video call with the remote CCC.
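The heat-map view mentioned above can be built by binning operator positions on a grid of visit counts. The following is a minimal sketch under assumed conditions: a flat 100 m × 100 m operation field with local (x, y) coordinates and a coarse 4 × 4 grid, all chosen for illustration.

```python
def heat_map(positions, grid=(4, 4), extent=(0.0, 100.0, 0.0, 100.0)):
    """Bin (x, y) operator positions into a grid of visit counts.
    `extent` is (x_min, x_max, y_min, y_max) of the operation field."""
    nx, ny = grid
    x0, x1, y0, y1 = extent
    counts = [[0] * nx for _ in range(ny)]
    for x, y in positions:
        # Clamp to the last cell so points on the far edge still count.
        i = min(int((x - x0) / (x1 - x0) * nx), nx - 1)
        j = min(int((y - y0) / (y1 - y0) * ny), ny - 1)
        counts[j][i] += 1
    return counts

# Two position reports near one corner, one near the opposite corner:
hm = heat_map([(10, 10), (12, 9), (90, 95)])
```

In the CCC, such counts would be rendered as a colour overlay; higher-rate position streams from the Smart Glasses simply accumulate into the same grid.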

In this UC, patrolling drones can be used to support the deployed personnel and their communications.

The drones could be sent to the disaster field prior to the arrival of the civil defence workers. Their cameras can map the scene and share images with the CCC, providing the Smart Glasses wearers with useful situational information before they arrive on site.

Moreover, during the action, users wearing the Smart Glasses can receive real-time information derived from the video feeds acquired by the drones. Using the fast 5G connection, the drones’ video streams can be sent to the CCC, processed with ML algorithms, and then relayed to the Smart Glasses to provide complete situational awareness.

From a technological point of view, a mobile application has to be designed and developed for the Android platform (the OS of the Smart Glasses), including the management of the drones’ communication. Several AR engines are available in this field; the first tests done using the Unity framework are promising.

Surgeons should play a central role in disaster planning and management, owing to the overwhelming number of bodily injuries typically involved in most forms of disaster. In fact, emergency medical teams perform various types of surgical procedures after sudden-onset disasters, addressing soft-tissue wounds, orthopaedic traumas, abdominal injuries, etc. [1,2]. HMD-based Augmented Reality (AR), using state-of-the-art hardware such as the Magic Leap or the Microsoft HoloLens, has long been foreseen as a key enabler for clinicians in surgical use cases [3], especially for procedures performed outside of the operating room. In such conditions, monolithic HMD applications fail to maintain important factors such as user mobility, battery life, and Quality of Experience (QoE), which motivates a distributed cloud/edge software architecture. Toward this end, 5G and cloud computing will be central components in accelerating remote rendering computations and image transfers to wearable AR devices.

ORamaVR leads the Use Case (UC) “AR-assisted emergency surgical care”, identified in the context of the 5G-EPICENTRE EU-funded project. Specifically, the UC will experiment with holographic AR technology for emergency medical surgery teams, by overlaying deformable medical models directly on top of the patient’s body parts, effectively enabling surgeons to see inside (visualizing bones, blood vessels, etc.) and perform surgical actions following step-by-step instructions (see Figure 1).

The PPDR responder uses an AR HMD to see overlaid information and deformable objects on top of the patient. Envisioned example of UI layout.

The goal is to combine the computational and data-intensive nature of AR and Computer Vision algorithms with upcoming 5G network architectures deployed for edge computing so as to satisfy real-time interaction requirements and provide an efficient and powerful platform for the pervasive promotion of such applications. Toward this end, the authors have adapted the psychomotor Virtual Reality (VR) surgical training solution, namely MAGES [4,5], developed by the ORamaVR company. By developing the necessary Virtual Network Functions (VNFs) to manage data-intensive services (e.g., prerendering, caching, compression) and by exploiting available network resources and Multi-access Edge Computing (MEC) support, provided by the 5G-EPICENTRE infrastructure, this UC aims to provide powerful AR-based tools, usable on site, to first-aid responders (see Figure 2 for an overview of the UC NetApps layout).
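Of the data-intensive services named above, caching is the simplest to illustrate: an edge VNF can reuse a prerendered frame for nearby viewpoints instead of triggering a new render job. The sketch below is a minimal, assumed design (a small LRU cache keyed by quantised pose); the class name, quantisation step, and keying scheme are illustrative, not part of the actual MAGES or 5G-EPICENTRE implementation.

```python
from collections import OrderedDict

class PrerenderCache:
    """Tiny LRU cache for prerendered AR frames, keyed by quantised pose.

    Quantising the pose components lets nearby viewpoints reuse the same
    edge-rendered frame instead of triggering a new render job.
    """

    def __init__(self, capacity=64, step=0.1):
        self.capacity = capacity
        self.step = step              # pose quantisation step
        self._store = OrderedDict()

    def _key(self, pose):
        # Snap each pose component onto the quantisation grid.
        return tuple(round(p / self.step) for p in pose)

    def get(self, pose):
        key = self._key(pose)
        if key in self._store:
            self._store.move_to_end(key)     # mark as recently used
            return self._store[key]
        return None                          # cache miss: render at the edge

    def put(self, pose, frame):
        key = self._key(pose)
        self._store[key] = frame
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = PrerenderCache(capacity=2)
cache.put((1.00, 2.00, 0.50), "frame-A")
hit = cache.get((1.02, 2.01, 0.49))   # nearby pose reuses frame-A
```

The quantisation step trades rendering accuracy for hit rate; a real service would tune it against the QoE requirements of the surgical overlay.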

Envisioned layout of the UC components. Blue and Green nodes correspond to edge and cloud resources respectively.

[1] C. A. Coventry, A. I. Vaska, A. J. Holland, D. J. Read, and R. Q. Ivers, “Surgical procedures performed by emergency medical teams in sudden-onset disasters: a systematic review,” World Journal of Surgery, vol. 43, no. 5, pp. 1226–1231, 2019.
[2] T. Birrenbach, J. Zbinden, G. Papagiannakis, A. K. Exadaktylos, M. Müller, W. E. Hautz, and T. C. Sauter, “Effectiveness and utility of virtual reality simulation as an educational tool for safe performance of covid-19 diagnostics: Prospective, randomized pilot trial,” JMIR Serious Games, vol. 9, no. 4, p. e29586, Oct 2021. [Online]. Available: https://games.jmir.org/2021/4/e29586
[3] P. Zikas, S. Kateros, N. Lydatakis, M. Kentros, E. Geronikolakis, M. Kamarianakis, G. Evangelou, I. Kartsonaki, A. Apostolou, T. Birrenbach et al., “Virtual reality medical training for covid-19 swab testing and proper handling of personal protective equipment: Development and usability,” Frontiers in Virtual Reality, p. 175, 2022.
[4] G. Papagiannakis, P. Zikas, N. Lydatakis, S. Kateros, M. Kentros, E. Geronikolakis, M. Kamarianakis, I. Kartsonaki, and G. Evangelou, “Mages 3.0: Tying the knot of medical VR,” in ACM SIGGRAPH 2020 Immersive Pavilion, 2020, pp. 1–2.
[5] G. Papagiannakis, N. Lydatakis, S. Kateros, S. Georgiou, and P. Zikas, “Transforming medical education and training with VR using MAGES,” in SIGGRAPH Asia 2018 Posters, 2018, pp. 1–2.