
S3.8 - Mapping & Geospatial Referencing

Track: Multi-Sensor & AI-enhanced Navigation
Thursday, April 30, 2026
11:50 AM - 12:50 PM
Room 1.14

Details

Co-Chairs: Markus Gerke & Phillipp Fanta-Jende


Speaker

Mr. Achim Hennecke
Managing Director
Naviki / beemo GmbH

Smartphone-Based Decimetre-Level Positioning for Urban Navigation and Surveying: Use-Case-Driven Evaluation within EGENIOUSS

11:50 AM - 12:10 PM

Abstract text

The EGENIOUSS project aims to develop a highly accurate and cost-effective positioning, navigation and timing system based on the integration of multiple geodata sources. These include visual localisation, in-situ sensor data and satellite-based augmentation services (Galileo High-Accuracy Service). The service is designed to support smartphone-based use cases, including professional surveying in urban environments (QField app) and bicycle navigation with associated trip data collection (Naviki app). EGENIOUSS provides significant advantages, especially in areas where positioning quality is degraded due to building-induced shading and signal obstruction.
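The integration of multiple position sources described above can be illustrated, in a deliberately simplified form, by inverse-variance weighting of independent fixes. This is a generic textbook scheme, not the EGENIOUSS fusion algorithm, and all numbers are illustrative:

```python
import numpy as np

def fuse_positions(estimates):
    """Fuse independent position estimates by inverse-variance weighting.

    estimates: list of (position, sigma) pairs, where position is a
    length-2 array (easting, northing in metres) and sigma is the
    per-source standard deviation in metres.
    """
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / s**2 for _, s in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_sigma = np.sqrt(1.0 / weights.sum())
    return fused, fused_sigma

# A GNSS fix degraded by urban shadowing (sigma 3 m) combined with a
# visual-localisation fix (sigma 0.2 m): the fused result stays near
# the more precise source and its uncertainty shrinks below either input.
gnss = (np.array([100.0, 200.0]), 3.0)
vl = (np.array([100.8, 200.4]), 0.2)
pos, sigma = fuse_positions([gnss, vl])
```

The example shows why adding a precise augmentation source dominates a shadowed GNSS fix: the weights scale with the inverse of the variance, so the 0.2 m source carries over 200 times the weight of the 3 m source.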

During the EGENIOUSS project, the EGENIOUSS service was implemented and tested within existing software packages in order to demonstrate its applicability, functional performance and usability in real-world application contexts. The smartphone-based use cases (UC) Naviki and QField ingest EGENIOUSS positioning into their existing application environments. These use cases show how the EGENIOUSS service is integrated into operational applications and which technical and organisational measures are needed to realise this integration. An analysis of application compatibility for each use case, together with user feedback and observed user behaviour collected during multiple test iterations with QField and Naviki, provides detailed insights into the practical application potential of EGENIOUSS. The test results, the user-testing methodology, the challenges encountered and the lessons learnt from implementing and using EGENIOUSS within QField and Naviki are presented and discussed.

This contribution presents (1) two practical use cases of smartphone-based positioning using the EGENIOUSS service in representative urban scenarios, bicycle navigation and professional mobile surveying, and (2) insights into the integration into existing applications (Naviki and QField), including system compatibility, usability and deployment considerations.

Biography

- Achim Hennecke, Managing Director, beemo GmbH, Germany. Main area of activity: Naviki bicycle app, smart software for cycling. Topic: EGENIOUSS as a localisation tool within Naviki
- Berit Mohr, GIS Specialist, OPENGIS.ch, Switzerland. Main area of activity: GIS consulting, training, team management and software development. Topic: EGENIOUSS as a localisation tool within QField
Ms. Yasmin Loeper
Research Associate
Institute of Geodesy and Photogrammetry, Technische Universität Braunschweig

Pose Verification Based on Visual Localisation Using CityGML Models within the EGENIOUSS Framework

12:10 PM - 12:30 PM

Abstract text

In urban environments, absolute pose estimation based solely on GNSS is often degraded by multipath effects, resulting in poor accuracy. GNSS is also vulnerable to jamming and spoofing. Yet many safety-critical navigation tasks demand highly accurate six-degrees-of-freedom (6-DoF) pose estimates. The EGENIOUSS framework addresses these challenges by fusing data from inertial measurement units (IMUs), cameras via visual localisation (VL) and visual odometry (VO), and GNSS receivers on smartphones and low-cost mobile platforms. The EGENIOUSS system is therefore a complementary, redundant navigation framework.

Within EGENIOUSS, two independent VL components are implemented. The first VL module employs 3D meshes to estimate a 6-DoF pose, which is then fed into the main sensor-fusion pipeline together with the GNSS and VO outputs. The second VL module uses City Geography Markup Language (CityGML) models as a reference data source and is implemented as an external verification step. Unlike conventional VL, the aim of this object-based component is not to estimate the pose from correspondences between a query image and the reference data; instead, it verifies the pose delivered by the sensor fusion based on the matching quality between the query image and the reference data. For this purpose, the CityGML models are rendered from a given pose. Owing to the low memory requirements of CityGML models, they can either be stored directly on the platform, and thus used offline, or loaded from the EGENIOUSS cloud solution. The rendered image is then matched against the query image. CityGML models are a challenging reference data type for VL because they are textureless, colourless and low in detail, so classical VL approaches with standard feature extractors and matchers often fail to match the query image to the rendered scene of the CityGML model. We have therefore implemented line extractors and a geometric line-matching method.
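As a rough illustration of geometric line matching against a rendered CityGML view, 2D line segments can be compared by midpoint distance and orientation. This is a minimal sketch with invented tolerances, not the authors' implementation:

```python
import numpy as np

def line_params(seg):
    """Midpoint and orientation (radians, folded to [0, pi)) of a 2D segment."""
    (x1, y1), (x2, y2) = seg
    mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    ang = np.arctan2(y2 - y1, x2 - x1) % np.pi
    return mid, ang

def match_score(query_lines, rendered_lines, max_dist=10.0, max_ang=np.deg2rad(5)):
    """Fraction of rendered model edges that find a geometrically
    consistent partner among the query-image lines: midpoint within
    max_dist pixels and orientation within max_ang radians."""
    matched = 0
    for r in rendered_lines:
        r_mid, r_ang = line_params(r)
        for q in query_lines:
            q_mid, q_ang = line_params(q)
            d_ang = min(abs(r_ang - q_ang), np.pi - abs(r_ang - q_ang))
            if np.linalg.norm(r_mid - q_mid) < max_dist and d_ang < max_ang:
                matched += 1
                break
    return matched / len(rendered_lines)
```

With a good input pose, the rendered building edges fall close to the extracted image lines and the score approaches 1; with a bad pose, the rendered edges land far from any image line and the score collapses.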

In this contribution, we investigate the use of CityGML models, as a reference data source independent of the mesh-based VL component in EGENIOUSS, in a module that verifies the pose from sensor fusion. We evaluate the reliability of the verification module by testing input poses of different accuracies, and we investigate which matching thresholds indicate good, medium and bad input poses in order to develop a reliable verification module. This module provides the EGENIOUSS system with a further redundant, complementary safety layer that is independent in terms of both methodology and data.
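The good/medium/bad classification of input poses could, for instance, be sketched as simple thresholding of a matching score. The threshold values below are placeholders; the stated aim of the work is precisely to determine such thresholds empirically:

```python
def classify_pose(score, good_thresh=0.8, medium_thresh=0.5):
    """Map a query-vs-rendered matching score in [0, 1] to a verdict.

    good_thresh and medium_thresh are illustrative placeholders, not
    values from the EGENIOUSS evaluation.
    """
    if score >= good_thresh:
        return "good"
    if score >= medium_thresh:
        return "medium"
    return "bad"
```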

Biography

Yasmin Loeper works at the Institute of Geodesy and Photogrammetry at the Technische Universität Braunschweig in Braunschweig, Germany. Her main activity lies in the use of CityGML models for visual localisation. She presents the use of CityGML models as reference data in visual localisation for pose verification within the EGENIOUSS framework.
Mr. Jakub Velich
PhD student
Czech Technical University in Prague

Impact of Solid-State FMCW Radar Processing Parameters on 3D Point Cloud Matching

12:30 PM - 12:50 PM

Abstract text

The millimeter wave (mmWave) Frequency Modulated Continuous Wave (FMCW) radar is one of the ubiquitous sensors used in a wide range of automotive applications, such as adaptive cruise control and collision avoidance in Advanced Driver Assistance Systems (ADAS). Single-chip solid-state mmWave radar sensors are low-cost and lightweight. One of their biggest advantages is their ability to operate in challenging environments (fog, smoke, etc.) where cameras and Light Detection and Ranging (LiDAR) sensors fail. This motivates research on how radar can be used for positioning.
One possible approach for positioning with radar is to use 3D point cloud matching techniques. While point cloud matching for LiDAR sensors can be considered mature, point cloud matching for FMCW radar is only emerging. One of the fundamental issues with single-chip radars is the inherent sparsity of the point clouds they generate, due to the coarse azimuth and elevation resolution dictated by the antenna array size. Since dense point clouds are necessary for conventional point cloud matching techniques such as Iterative Closest Point (ICP), the radar point cloud cannot be used directly. Another challenge is ghost targets, which often appear in radar point clouds due to multipath propagation.
We present the use of an AWR1843BOOST radar sensor to investigate how radar processing parameters affect point cloud density, ghost target occurrence, and scan matching accuracy. Particularly, we focus on the Constant False Alarm Rate (CFAR) detection threshold and transmit power level, as these parameters directly influence the number of detected targets. We discuss methods to augment and filter the point cloud data, with the goal of densifying the point cloud and minimizing the number of ghost targets.
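The role of the CFAR detection threshold can be illustrated with a minimal 1D cell-averaging CFAR over a range-power profile. This is a textbook sketch, not the AWR1843BOOST's on-chip implementation, and all parameters are illustrative:

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, threshold_scale=4.0):
    """1D cell-averaging CFAR.

    For each cell under test, the noise level is estimated as the mean
    power of the training cells on both sides (guard cells excluded);
    a detection is declared where power exceeds threshold_scale times
    that estimate. Lowering threshold_scale densifies the point cloud
    but admits more weak (potentially ghost) detections.
    """
    n = len(power)
    half = num_train + num_guard
    detections = []
    for i in range(half, n - half):
        left = power[i - half : i - num_guard]
        right = power[i + num_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > threshold_scale * noise:
            detections.append(i)
    return detections
```

Running this on a profile with one strong and one weak peak shows the trade-off the abstract studies: a high threshold keeps only the strong target, while a lower threshold also passes the weak peak, increasing density and the risk of ghost targets alike.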
The pre-processed point clouds are then used to perform 3D point cloud matching using methods such as ICP. To evaluate radar point cloud matching with the proposed approach, datasets are recorded with a mobile platform equipped with the radar and additional sensors (LiDAR, wheel odometry, and an Inertial Measurement Unit (IMU)). The mobile platform is also equipped with markers that are continuously tracked by an optical tracking system to generate ground truth for position and attitude. To analyze the achievable accuracy, the result of the 3D point cloud matching is compared to the ground truth. Results include a thorough analysis of how different settings in the radar configuration influence the 3D position and attitude accuracy of the scan matching.
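For reference, the baseline matching method named above, point-to-point ICP, can be sketched in a minimal textbook form with brute-force nearest neighbours (not the authors' pipeline, which operates on pre-processed radar clouds):

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iterations=20):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest target point and re-estimate the rigid transform."""
    src = source.copy()
    dim = source.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iterations):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The sketch makes the abstract's point about sparsity concrete: the nearest-neighbour step only yields correct correspondences when the displacement between scans is small relative to the point spacing, which is why sparse radar clouds must be densified and filtered before ICP-style matching is applied.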

Biography

Jakub Velich was born in 1999 and finished his master's degree at FEE CTU in Prague in 2023. He is currently pursuing a PhD at the same institution. His research focuses on utilizing mmWave radar for SLAM and indoor navigation.