S3.4 - AI-enhanced Navigation (I)
Tracks
Track: Multi-Sensor & AI-enhanced Navigation
| Wednesday, April 29, 2026 |
| 11:30 AM - 12:30 PM |
| Room 1.14 |
Speaker
Mr. Thomas Barbero
R&D Engineer
Abbia GNSS Technologies
Opportunities of Self-Supervised Learning for GNSS: Evaluation of a Deep Learning Enhanced PVT Algorithm
Abstract text
In dense urban environments, Global Navigation Satellite System (GNSS) Positioning, Navigation and Timing becomes less reliable due to multipath effects and Non-Line-of-Sight (NLOS) signals. Because these effects depend on complex and highly variable signal propagation conditions determined by the environment surrounding the receiver, they cannot be captured by traditional parametric models. Deep Learning (DL) offers a promising alternative by learning complex patterns directly from data.
Despite their potential, deploying DL-based mitigation methods remains difficult, primarily due to concerns about limited generalization to unseen receivers and environments. Generalization requires training on diverse data, yet GNSS data collection at scale faces two major obstacles. Reliable labels such as reference positions or LOS/NLOS indicators require specialized equipment and post-processing pipelines, and remain vulnerable to occasional estimation errors. In addition, open-source datasets often differ in label availability and labelling protocols, making their integration for Deep Learning challenging.
These limitations motivate the adoption of learning paradigms that aim to extract robust intrinsic features without relying on labels. By generating learning targets from the data itself, Self-Supervised Learning (SSL) removes the need for pre-existing labels and allows large unlabelled datasets to be exploited for robust representation learning. Once pretrained, the model serves as a general feature extractor that can be adapted to supervised downstream tasks through a subsequent fine-tuning phase.
Despite emerging attempts to apply SSL to multipath mitigation, the design of an effective pretraining objective remains underexplored. In contrast, a wide range of robust and scalable pretraining strategies has been validated across modalities such as vision, language and time series. So far, few of these methods have been adapted to the GNSS domain.
Motivated by this unmet potential, we adapt an established SSL strategy to the characteristics of GNSS observables. A Transformer model is pretrained with a BERT-inspired masked-prediction objective using data from a dedicated campaign combined with multiple open-source datasets. To evaluate the effectiveness of the extracted representations, the pretrained backbone is fine-tuned for LOS/NLOS classification and integrated within a Position, Velocity, and Time (PVT) framework. Leveraging the diversity of the collected dataset, we assess the generalization performance of our approach across diverse environments and receivers.
During pretraining, a random portion of the input sequence is masked, and the model is trained to reconstruct the missing samples from the remaining context. Through this objective, the model learns correlations within the GNSS measurements and their spatio-temporal dependencies.
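The masked-prediction objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "model" is a trivial context-mean predictor standing in for the Transformer backbone, and the signal values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(sequence, mask_ratio=0.15):
    """Illustrative masked-prediction objective: hide a random subset of
    samples and score their reconstruction with mean-squared error.
    The 'model' here is a trivial context-mean predictor; in practice a
    Transformer backbone would produce the predictions instead."""
    n = len(sequence)
    n_masked = max(1, int(mask_ratio * n))
    masked_idx = rng.choice(n, size=n_masked, replace=False)

    context = np.delete(sequence, masked_idx)        # visible samples only
    predictions = np.full(n_masked, context.mean())  # reconstruct masked ones

    return float(np.mean((predictions - sequence[masked_idx]) ** 2))

# Example: a noisy C/N0-like time series (hypothetical values, dB-Hz).
series = 45.0 + rng.normal(0.0, 1.5, size=64)
loss = masked_reconstruction_loss(series)
```

A real pretraining loop would backpropagate this loss through the network so that the learned representations capture the spatio-temporal structure of the measurements.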
For evaluation purposes, the pretrained model is fine-tuned on a labelled subset of our dataset for LOS/NLOS classification. Thereafter, the predictions of the fine-tuned model are incorporated into a Weighted Least Squares PVT estimator, where signals identified as NLOS are down-weighted. This allows us to assess the effectiveness of the learned representations directly at the position level.
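The down-weighting step can be illustrated with a small Weighted Least Squares sketch. The 0.5 decision threshold, the weight value, and the synthetic geometry below are assumptions for illustration, not the authors' actual weighting scheme.

```python
import numpy as np

def wls_solve(G, residuals, nlos_prob, downweight=0.1):
    """Weighted Least Squares update in which measurements classified as
    likely NLOS receive a reduced weight (illustrative scheme)."""
    w = np.where(np.asarray(nlos_prob) > 0.5, downweight, 1.0)
    W = np.diag(w)
    # Normal equations: dx = (G^T W G)^-1 G^T W r
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ residuals)

# Synthetic linearized problem: geometry matrix (unit vectors + clock column),
# one measurement biased by a 50 m NLOS multipath error.
rng = np.random.default_rng(1)
u = rng.normal(size=(8, 3))
G = np.hstack([u / np.linalg.norm(u, axis=1, keepdims=True), np.ones((8, 1))])
dx_true = np.array([1.0, 2.0, 3.0, 0.5])
r = G @ dx_true
r[0] += 50.0                             # NLOS bias on satellite 0
nlos_prob = np.array([0.9] + [0.1] * 7)  # classifier output (hypothetical)

dx_dw = wls_solve(G, r, nlos_prob)       # with NLOS down-weighting
dx_plain = wls_solve(G, r, np.zeros(8))  # equal weights, for comparison
```

With the biased measurement down-weighted, the solution stays close to the true state, which is exactly the effect evaluated at the position level in the paper.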
Generalization is evaluated through the positioning accuracy of the complete PVT pipeline across different environments and receivers, comparing the pretrained model against a purely supervised baseline. Finally, we examine the representations of the models to understand how they encode characteristics of the surrounding environment.
Biography
Thomas Barbero works at Abbia GNSS Technologies on EGNOS performance and robust navigation in harsh environments.
This work presents a Deep Learning-enhanced navigation algorithm based on Self-Supervised Learning and is the direct continuation of the work presented at ENC 2025.
Dr. Saeid Haji-Aghajany
Assistant Professor
Wrocław University of Environmental and Life Sciences
Deep Learning–Based Tropospheric Delay Prediction for Enhanced Precise Point Positioning
Abstract text
Precise Point Positioning (PPP) is a rapidly growing Global Navigation Satellite Systems (GNSS) technique that offers high-accuracy positioning with a single receiver and global coverage. However, real-time and near real-time PPP faces limitations such as extended convergence times, primarily caused by tropospheric delay, satellite geometry, and observation quality. While first-order ionospheric delays can be effectively mitigated, tropospheric delays remain challenging due to their non-dispersive nature and high variability across spatiotemporal scales, particularly under severe weather conditions. This study presents a deep learning-based approach to predict tropospheric delay using three-dimensional (3D) wet refractivity derived from troposphere tomography during severe weather events in Poland (sweeping rain bands) and California (storms). Wet refractivity data are obtained from GNSS observations and validated against high-resolution Weather Research and Forecasting (WRF) model outputs. The prediction is performed at a 2-hour resolution using long-term tomography time series, employing a Long Short-Term Memory (LSTM) network optimized with a Genetic Algorithm (GA) for hyperparameter tuning. Predicted wet refractivity values, combined with WRF meteorological variables, are used to estimate Slant Tropospheric Delay (STD) for each satellite via ray-tracing techniques.
The estimated delays are incorporated into PPP using three distinct approaches: the first approach treats tropospheric delays and gradients as unknowns within the positioning process; the second applies predicted STD directly to GNSS observations before positioning, removing tropospheric parameters from estimation; and the third combines ray-traced STD using predicted tropospheric parameters with GNSS-derived delays to refine observations prior to positioning. Both static and kinematic positioning modes were evaluated using five GNSS stations per study area. Results indicate that the third approach consistently provides superior accuracy and stability under extreme weather conditions. In static mode, the third approach reduced the 3D Mean Absolute Error (MAE) by up to 33% compared to the first approach, while in kinematic mode, the average 3D MAE reduction reached approximately 30%. Furthermore, convergence times were significantly shortened, with the third approach outperforming the others at most stations.
This study demonstrates that deep learning-based prediction of tropospheric wet refractivity, combined with ray-tracing techniques, offers an effective solution to enhance near real-time PPP performance under severe weather conditions.
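The idea of correcting observations before positioning (the second approach) can be sketched as follows. This is a deliberately simplified stand-in: a crude 1/sin(elevation) mapping function replaces the paper's ray-tracing, and all numerical values are hypothetical.

```python
import numpy as np

def slant_delay(zwd_m, elevation_deg):
    """Map a zenith wet delay (metres) to slant delays with the simple
    1/sin(el) mapping function, a crude stand-in for ray tracing."""
    el = np.radians(np.asarray(elevation_deg, dtype=float))
    return zwd_m / np.sin(el)

def correct_pseudoranges(pseudoranges, zwd_m, elevation_deg):
    """Remove the predicted tropospheric delay from the observations
    before positioning, so no tropospheric parameters remain to estimate."""
    return np.asarray(pseudoranges, dtype=float) - slant_delay(zwd_m, elevation_deg)

# Hypothetical example: 0.15 m zenith wet delay, satellites at 30/60/90 deg.
corrected = correct_pseudoranges([2.0e7, 2.1e7, 2.2e7], 0.15, [30.0, 60.0, 90.0])
```

In the study itself the slant delays come from ray tracing through the predicted 3D wet refractivity field rather than from a mapping function, but the pre-positioning correction step has this shape.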
Biography
Saeid Haji-Aghajany received his Ph.D. in Geodesy and Geomatics Engineering from K. N. Toosi University of Technology, Tehran, Iran. He is an Assistant Professor at the Institute of Geodesy and Geoinformatics, Wrocław University of Environmental and Life Sciences, Wrocław, Poland. His research focuses on GNSS, InSAR, and AI-based tropospheric analysis using remote sensing techniques. He has published over 20 peer-reviewed articles as first author in remote sensing and AI.
Dr. Alexander Mudrak
Engineer
European Space Agency
Scaling KalmanNet for a GNSS Composite Clock
Abstract text
Precise and robust timekeeping is essential for Global Navigation Satellite Systems (GNSS) in order to provide a robust and continuous reference time for estimation and prediction of the satellite clocks. The Composite Clock algorithm is a time-proven solution for computing such a timescale from an ensemble of atomic clocks located in both the ground and space segments (onboard the satellites) [1]. In previous work presented at ENC 2025 [2], we studied the Deep Learning Assisted Composite Clock (DLACC), which integrates a neural network into the traditional Kalman filter framework. This neural-network-assisted Kalman filter, called KalmanNet [3], has demonstrated its potential for improved resilience and adaptability in modelling the dynamics of atomic clocks.
This paper explores the DLACC implementation for an inhomogeneous ensemble of atomic clocks. We will introduce the implementation considerations for building KalmanNet into the classic Composite Clock framework. Then, the DLACC performance for different operational scenarios and ensemble composition options will be presented. This study also aims to present preliminary results regarding DLACC accuracy and robustness. Finally, the limitations of the KalmanNet approach in the GNSS context will be discussed.
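The core idea behind the KalmanNet integration can be sketched with a two-state clock Kalman filter in which the analytic gain could be replaced by a network output. This is a minimal illustration under assumed noise values, not the DLACC implementation; the `learned_gain` argument is a hypothetical stand-in for the recurrent network's output.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R, learned_gain=None):
    """One predict/update cycle of a clock Kalman filter. KalmanNet-style
    variants replace the analytic gain K with the output of a recurrent
    network; `learned_gain` is an optional stand-in for that output."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Gain: analytic unless a learned gain is supplied
    S = H @ P @ H.T + R
    K = learned_gain if learned_gain is not None else P @ H.T @ np.linalg.inv(S)
    # Update
    y = z - H @ x
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Two-state clock model: phase offset and frequency offset, step tau.
tau = 1.0
F = np.array([[1.0, tau], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-6])        # hypothetical process noise
H = np.array([[1.0, 0.0]])       # phase-difference measurement
R = np.array([[1e-3]])           # hypothetical measurement noise
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.02]), F, Q, H, R)
```

In a composite-clock setting such steps run over clock-difference measurements across the whole ensemble; the appeal of the learned gain is robustness when the assumed Q and R do not match the real clock behaviour.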
[1] Brown, Kenneth R., Jr., "The Theory of the GPS Composite Clock," Proceedings of the 4th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS 1991), Albuquerque, NM, September 1991, pp. 223-242.
[2] Fayon, G.; Mudrak, A.; Sobreira, H.; Castillo, A. Deep Learning Assisted Composite Clock: Robust Timescale for GNSS through Neural Network, Proceedings of the European Navigation Conference 2025, Wrocław, 21–23 May 2025
[3] G. Revach, N. Shlezinger, R. J. G. van Sloun and Y. C. Eldar, "KalmanNet: Data-Driven Kalman Filtering," ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 2021, pp. 3905-390
Biography
Dr. Alexander Mudrak joined the European Space Agency in 2008. His main area of work is precise timekeeping and time synchronization.