S3.3 - Mapping & Geospatial Referencing
Tracks
Track: Multi-Sensor & AI-enhanced Navigation
| Wednesday, April 29, 2026 |
| 10:00 AM - 11:00 AM |
| Room 1.14 |
Speaker
Ms. Chih Yun Hsieh
Master Student
National Cheng Kung University
High-Precision Control Point Cloud Mapping for Autonomous and Geospatial Systems
Abstract text
The development of reliable High-Definition (HD) maps is paramount for the robust deployment of Autonomous Mobility and Intelligent Transportation Systems (ITS). Current HD mapping workflows frequently encounter challenges related to long-term geometric consistency, scalability across large geographic areas, and cost-intensive data acquisition and post-processing. To address these limitations, this research presents an AI-driven, sensor-fused, and unified mapping framework designed to establish and maintain centimeter-level accurate control point cloud maps as foundational geospatial infrastructure.
The core methodology integrates LiDAR, GNSS/INS, and Artificial Intelligence (AI) tools for efficient data registration. A significant contribution is the introduction of a novel hybrid control point strategy, which strategically combines distributed physical ground control points (GCPs) with aerial virtual control points derived from orthophotos. This approach significantly enhances mapping precision while maximizing cost-efficiency and scalability. The framework incorporates a custom-developed AI-driven stitching and refinement algorithm that ensures geometric continuity and achieves long-term positional consistency across vast, heterogeneous mapping environments (highway, industrial, and urban settings), adhering to national HD map standards.
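The abstract does not specify the registration algorithm used to tie scans to control points; as a minimal illustrative sketch, one standard building block for this kind of alignment is the least-squares rigid transform (Kabsch/Umeyama method) between matched scan features and surveyed control-point coordinates. The function name and the synthetic data below are hypothetical, for illustration only:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of matched points, e.g. scan features
    and their surveyed ground-control-point coordinates.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known rotation/translation from synthetic points
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 0.1])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a full pipeline this closed-form step would typically sit inside a larger adjustment that also weights physical GCPs against virtual control points; that weighting scheme is not described in the abstract.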
The proposed framework has been extensively validated across Taiwan's national highways. Validation confirms the achievement of sub-2 cm positional accuracy, verified through rigorous cross-check measurements. The methodology has yielded a 40–60% gain in alignment efficiency and a substantial reduction in manual post-processing time compared to conventional methods. As a landmark achievement, the system has successfully completed the generation of over 2,000 kilometers of high-precision control point cloud maps, covering 1,700 kilometers of national highways. Furthermore, these control point clouds serve as the ground truth geometric reference for semi-automated vector extraction, enabling the efficient, large-scale generation of vectorized HD map layers for lane markings and roadside features.
This research demonstrates a cost-effective and highly scalable solution for the rapid production and updating of high-precision geospatial data. The resulting unified framework establishes a reproducible national mapping standard, reducing mapping time by over 50%. The generated centimeter-accurate control point cloud maps serve not only as the backbone for autonomous navigation and ITS but also facilitate advanced AI-enabled road inspection (already deployed and tested over 350 km) and the construction of geospatial Digital Twins for sustainable infrastructure governance. Future expansion to over 5,000 kilometers of provincial highways is planned, solidifying this framework as the authoritative spatial reference for next-generation mobility and digital governance.
Biography
Chih-Yun Hsieh (Sophie) received her B.Eng. degree in Geomatics Engineering from National Cheng Kung University (NCKU), Tainan, Taiwan. Previously, she served as a Project Manager at the High Definition Maps Research Center, where she gained invaluable experience in the verification and validation of High-Definition (HD) Maps for autonomous systems.
She commenced her M.S. studies in Geomatics Engineering in 2025, specializing in Point Cloud Mapping, Multi-Sensor Fusion, and Navigation techniques.
The current submission details an AI-driven hybrid control-point framework that successfully established over 2,000 kilometers of centimeter-accurate, national-scale, high-precision control point cloud maps for Taiwan’s next-generation autonomous mobility infrastructure.
Meng-Lun Tsai
Doctor
National Cheng Kung University
The Development and Application of Taiwan High-Definition Maps
Abstract text
With the development of Intelligent Transport Systems (ITS), autonomous vehicles are becoming a new mode of transportation. According to the classification proposed by the Society of Automotive Engineers (SAE) International, driving automation is divided into six levels (SAE Level 0 to Level 5) (NHTSA, 2017). To achieve functional safety at SAE Level 4 or higher, obtaining precise position information and ensuring the vehicle is traveling on the correct road are critical for autonomous vehicles. High-Definition Maps (HD Maps), with their high accuracy, rich road-scene semantic data, real-time traffic conditions, and driving-experience information, are key to autonomous vehicle technology. More than 200 kilometers of HD Maps and 1,000 kilometers of control point cloud maps have been successfully constructed in Taiwan. These maps have been used for two main topics: mapping technology development and HD Maps application. In mapping technology, this project has developed three Artificial Intelligence (AI) mapping tools for HD Maps generation, namely a semi-automated HD Maps generation tool, a variation detection tool, and a data format conversion tool. Their advantage is to speed up the production of vector maps and OpenDRIVE- and Lanelet2-format maps while ensuring the accuracy meets requirements. These tools have been provided to more than 10 manufacturers for map production. In HD Maps application, this project not only uses these maps to develop value-added applications but also makes them available to more than 20 autonomous vehicle or mapping companies. Application areas include algorithm and software development, driving simulation, real-vehicle navigation and localization, filling in the Operational Design Domain (ODD), creating digital twin models, establishing AI training databases, and so on.
In addition to the research described above, this project explores 5G data transmission techniques and evaluates multi-platform mapping production technologies. These efforts aim to support and accelerate the advancement of autonomous vehicle technologies and their associated domains.
Biography
Meng-Lun Tsai received his Ph.D. degree from the Department of Geomatics, National Cheng Kung University, Taiwan. His interests include next-generation multi-sensor fusion technologies such as GNSS modernization, inertial navigation systems, LiDAR, digital photogrammetry, and mobile multi-sensor mapping systems. He also focuses on the operation, verification, validation, data contents, and format standards of High-Definition Maps.
Mr. Jakub Velich
PhD student
Czech Technical University In Prague
Impact of solid-state FMCW radar processing parameters on 3D point cloud matching
Abstract text
The millimeter wave (mmWave) Frequency Modulated Continuous Wave (FMCW) radar is one of the ubiquitous sensors used in a wide range of automotive applications, such as adaptive cruise control and collision avoidance in Advanced Driver Assistance Systems (ADAS). Single-chip solid-state mmWave radar sensors are low-cost and lightweight. One of their biggest advantages is their ability to operate in challenging environments (fog, smoke, etc.) where cameras and Light Detection and Ranging (LiDAR) sensors fail. This motivates research on how radar can be used for positioning.
One possible approach for positioning with radar is to use 3D point cloud matching techniques. While point cloud matching for LiDAR sensors can be considered mature, point cloud matching for FMCW radar is only emerging. One of the fundamental issues with the single-chip radar is the inherent sparsity of point clouds that it generates due to the coarse azimuth and elevation resolution dictated by the antenna array size. Since dense point clouds are necessary for conventional point cloud matching techniques such as Iterative Closest Point (ICP), the radar point cloud cannot be used directly. Another challenge is ghost targets, which often appear in radar point clouds due to multipath propagation.
We present the use of an AWR1843BOOST radar sensor to investigate how radar processing parameters affect point cloud density, ghost target occurrence, and scan matching accuracy. Particularly, we focus on the Constant False Alarm Rate (CFAR) detection threshold and transmit power level, as these parameters directly influence the number of detected targets. We discuss methods to augment and filter the point cloud data, with the goal of densifying the point cloud and minimizing the number of ghost targets.
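The threshold-versus-density trade-off described above can be illustrated with a simple cell-averaging CFAR (CA-CFAR) detector on a synthetic 1D power profile. This is a generic textbook sketch, not the AWR1843's actual detector (which is configured through the mmWave SDK); the parameter names and data are hypothetical:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR over a 1D power profile.

    A cell is declared a detection when its power exceeds `scale`
    times the mean of the surrounding training cells (excluding
    guard cells). A higher `scale` yields fewer detections.
    """
    n = len(power)
    hits = []
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > scale * noise:
            hits.append(i)
    return hits

# Synthetic profile: exponential noise floor with two targets
rng = np.random.default_rng(1)
profile = rng.exponential(1.0, size=200)
profile[60] += 30.0   # strong target
profile[140] += 8.0   # weak target

# A lower threshold keeps weak targets (and false alarms);
# a higher threshold suppresses both.
print(len(ca_cfar(profile, scale=3.0)), len(ca_cfar(profile, scale=10.0)))
```

Lowering the threshold densifies the point cloud at the cost of more spurious detections, which is precisely the trade-off the study sweeps over.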
The pre-processed point clouds are then used to perform 3D point cloud matching using methods such as ICP. To evaluate radar point cloud matching with the proposed approach, datasets are recorded with a mobile platform equipped with the radar and additional sensors (LiDAR, wheel odometry, and Inertial Measurement Unit (IMU)). The mobile platform is also equipped with markers tracked by an optical tracking system, which are tracked continuously to generate a ground truth for position and attitude. To analyze the achievable accuracy, the result of the 3D point cloud matching is compared to the ground truth. Results include a thorough analysis of how different settings in the radar configuration influence the 3D position and attitude accuracy of the scan matching.
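As context for the matching step, a minimal point-to-point ICP can be sketched as alternating nearest-neighbour association with a closed-form rigid-transform solve (Kabsch). This is a generic illustration, not the authors' pipeline; real radar scans would need the densification and ghost filtering described above before such matching is reliable:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP: repeatedly match each src point to its
    nearest neighbour in dst, then solve the best rigid transform
    for those correspondences and apply it to src."""
    src = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid transform (Kabsch) for the correspondences
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        s = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t  # accumulate transform
    return R_tot, t_tot

# Recover a small known transform between two copies of one cloud
rng = np.random.default_rng(2)
cloud = rng.normal(size=(200, 3))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.05, 0.0])
R_est, t_est = icp(cloud, cloud @ R_true.T + t_true)
```

Comparing the estimated (R, t) against optically tracked ground truth, as the study does, then quantifies how the radar configuration affects matching accuracy.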
Biography
Jakub Velich was born in 1999 and received his master’s degree from FEE CTU in Prague in 2023. He is currently pursuing a Ph.D. at the same institution. His research focuses on utilizing mmWave radar for SLAM and indoor navigation.