PhotoMesh seamlessly combines oblique and nadir photographs with both aerial and terrestrial LiDAR data, automatically converting them into high-resolution, textured 3D mesh models (3DML). This fusion not only enhances the relative and absolute accuracy of the models but also reduces occlusions and improves the overall quality of 3D meshes. This article delves into the advantages of combining these two technologies, the data processing workflow within Skyline PhotoMesh, the role of LiDAR classification, and future directions in integrating imagery and LiDAR data. More about: Building a project with LiDAR data and Adding LiDAR and trajectory data.
In this article:
- Comparing the two technologies
- Data ingestion and processing workflow in Skyline PhotoMesh
- LiDAR classification
- Embedding the 3D textured mesh with attribution and semantic qualities
- Future directions in data integration
Comparing LiDAR and Photogrammetry
LiDAR Benefits
- Canopy Penetration (Multiple Returns): LiDAR emits laser pulses that penetrate vegetation canopies and measure the ground surface beneath, thanks to its ability to capture multiple returns from each pulse—first from the canopy top and then from subsequent layers down to the ground. This feature proves invaluable in forestry and topographic mapping for accurately modeling terrain underneath vegetation.
- Classification and Feature Extraction: LiDAR is highly suited for automatic classification, leveraging distinct data characteristics to distinguish accurately between surface types. It captures precise 3D coordinates within dense point clouds, essential for detailed identification and classification of features. Additionally, the variation in intensity of reflected light, corresponding to different materials such as vegetation, asphalt, and buildings, enriches the data's classification potential. Geometric details in the point clouds, including feature shape and texture, further refine its analytical capabilities. Together, these attributes enable LiDAR to be used in differentiating between and classifying a broad range of environmental and man-made features.
- Efficiency in Narrow Scenes: LiDAR requires only a single ray to range and measure a feature, making it particularly effective in narrow or constrained environments (e.g., power lines, pipelines, and urban canyons) where photogrammetry might struggle due to the need for multiple photographic angles.
- High Accuracy and Consistent Point Density: LiDAR is known for its high level of accuracy, providing precise measurements of distance based on the time it takes for a laser pulse to return after hitting a surface. This accuracy is consistent across all points, regardless of the texture or color of the surfaces, ensuring a uniform point density and reliable data for analysis.
- Operational Flexibility: Unlike photogrammetry, LiDAR is not dependent on ambient light conditions. It can operate in complete darkness (nighttime) or under variable lighting conditions, offering flexibility in planning survey missions.
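The ranging and multiple-return behavior described above can be sketched in a few lines. The point fields below mirror the per-point LAS attributes (`return_number`, `number_of_returns`), but the data values are illustrative:

```python
# Two-way time of flight: a pulse travels to the surface and back,
# so range = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_s: float) -> float:
    """Range to a surface from the round-trip travel time of one pulse."""
    return C * round_trip_s / 2.0

def last_returns(points):
    """Keep only the last return of each pulse. Over vegetation, these are
    the echoes most likely to have reached the ground, which is how LiDAR
    "sees through" the canopy. Each point carries LAS-style
    return_number and number_of_returns fields."""
    return [p for p in points if p["return_number"] == p["number_of_returns"]]

# A return after ~6.67 microseconds corresponds to roughly 1 km of range.
r = pulse_range(6.671e-6)
```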
LiDAR Limitations
- Lack of Color Information: Standard LiDAR sensors capture only the intensity of the returned light, which does not include color (RGB) information. This means that LiDAR datasets alone do not provide visual or color details about the surfaces.
- Lower Spatial Resolution: While LiDAR provides highly accurate distance measurements, its spatial resolution (the density of points per square meter) can be lower than that of high-density aerial photogrammetry. For instance, high-density aerial LiDAR might achieve 50 points per square meter, whereas photogrammetry can achieve much finer resolutions, essential for detailed surface modeling.
- Operational Complexity: Operating a LiDAR system, particularly from aerial platforms, requires careful planning and precise calibration of GPS and IMU (Inertial Measurement Unit) equipment to ensure data accuracy. These systems are sensitive to flying conditions and require optimal conditions to perform best, which can make LiDAR surveys more complex and potentially more costly to execute than simpler photogrammetric methods.
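The density gap is easy to quantify. Using the figures cited in this article (50 points/m² for high-density aerial LiDAR, and a 5 cm GSD for photogrammetry), a short calculation shows the difference; the numbers are illustrative of typical sensors, not fixed limits:

```python
def samples_per_m2(gsd_m: float) -> float:
    """Image samples per square metre for a given ground sampling distance."""
    return (1.0 / gsd_m) ** 2

lidar_density = 50.0                  # points/m^2, high-density aerial LiDAR
photo_density = samples_per_m2(0.05)  # 5 cm GSD -> 400 samples/m^2
ratio = photo_density / lidar_density # photogrammetry is ~8x denser here
```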
Photogrammetry Benefits
- Ease of Entry and User-Friendly Operation: Photogrammetry is known for its accessible entry point, making it an attractive option for newcomers in fields like surveying and mapping. Users can capture photos with standard cameras or drones, and user-friendly software solutions allow them to get started with data capture quickly.
- Robustness in Varied Conditions: Unlike some other remote sensing technologies, photogrammetry is less sensitive to ideal flying conditions or the availability of high-end GPS (Global Positioning System) and IMU (Inertial Measurement Unit) equipment. It often relies on ground control points, which provide reference information for accurate data processing.
- Multispectral Capability: Photogrammetry has the potential to work with multispectral data, including Near-Infrared (NIR) and thermal imagery. This enables users to capture and analyze a broader range of information beyond visible light, useful in various applications like agriculture and environmental monitoring.
- Realistic Texture: One of the strengths of photogrammetry is its ability to create 3D models with realistic textures. By incorporating RGB (Red, Green, Blue) pixel data from photos, it produces visually accurate representations of objects and surfaces.
- High Resolution: Photogrammetry can achieve high-resolution outputs. For instance, a Ground Sampling Distance (GSD) as fine as 5 centimeters translates to 400 pixels per square meter. This level of detail is valuable in applications requiring precise mapping and measurement.
Photogrammetry Limitations
- Single Return: Photogrammetry provides a single return or perspective of the captured scene. This means it can only model what is directly visible in the images. Objects or surfaces obscured from view are not included in the final model, limiting its effectiveness in scenarios with occlusions or hidden features.
- Multiple Perspectives Required: To accurately reconstruct objects and features, photogrammetry typically requires at least two perspectives (images) of each correlated feature. This necessity for multiple viewpoints can be challenging in environments with restricted space or limited access.
- Correlation Challenges: Photogrammetry relies on correlating features across images, which can introduce accuracy biases. Triangulation and feature matching are critical steps, and their effectiveness can be reduced in cases of low-texture surfaces, low-quality correlation, or lower data density, leading to less accurate results.
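The two-perspective requirement comes from triangulation: a single image ray fixes a direction but not a depth. The midpoint-method sketch below intersects two viewing rays; the camera positions and directions are hypothetical, and real pipelines triangulate from many rays inside a bundle adjustment:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    o1, o2: camera centres; d1, d2: unit view directions (3-vectors).
    A minimal stand-in for the multi-ray triangulation a real
    photogrammetry pipeline performs after feature matching.
    """
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Solve for ray parameters s, t minimising |o1 + s*d1 - (o2 + t*d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)
    return (o1 + s * d1 + o2 + t * d2) / 2.0

# Two cameras 10 m apart, both looking at a point near (5, 0, 20).
p = triangulate_midpoint([0, 0, 0], [0.2425, 0, 0.9701],
                         [10, 0, 0], [-0.2425, 0, 0.9701])
```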
Data Ingestion and Processing Workflow in PhotoMesh
Importing Photos and LiDAR Data
- Imagery Requirements: Photos must be in JPG or TIF format and include exterior and interior orientation metadata.
- LiDAR Data Compatibility: Supports LAS, LAZ, and E57 formats. Incorporating trajectory information improves mesh surface normal calculation.
Aerotriangulation and Photo-LiDAR Alignment
- Aerotriangulation Process: PhotoMesh automatically extracts tie points from the photos and performs a full Bundle Block Adjustment.
- LiDAR Integration: PhotoMesh extracts control points from LiDAR intensity data, enabling the precise alignment of photo blocks with the underlying LiDAR geometry for seamless integration.
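PhotoMesh's own adjustment is a full bundle block, but the core idea of snapping one coordinate frame onto control points can be illustrated with a generic rigid-body fit (the Kabsch method). The rotation, translation, and point sets below are placeholders, not PhotoMesh's actual algorithm:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch method via SVD). A generic stand-in for aligning a photo
    block to control points extracted from LiDAR."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```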
Merging Geometric Data and Textures
- 3D Reconstruction: Combines the image-based point cloud with LiDAR data to form a unified geometric source which serves as the basis for the mesh model.
This process includes linear feature extraction (edge matching) and pixel-level detail detection.
- Texture Application: Applies textures to the mesh model by selecting the imagery with the highest resolution and a viewing angle closest to perpendicular to the mesh triangle normal, while optimizing for color balance.
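The texture-selection criteria above (resolution and viewing angle relative to the triangle normal) can be sketched as a simple per-view score. The weighting here is illustrative only, not PhotoMesh's actual formula, and color balancing is omitted:

```python
import math

def view_score(gsd_m, view_dir, tri_normal):
    """Score a candidate photo for texturing one mesh triangle:
    finer GSD and a view closer to perpendicular to the triangle
    (i.e. anti-parallel to its normal) score higher."""
    # Cosine of the angle between the surface normal and the
    # direction back toward the camera.
    dot = -sum(v * n for v, n in zip(view_dir, tri_normal))
    norm = math.dist((0, 0, 0), view_dir) * math.dist((0, 0, 0), tri_normal)
    facing = max(0.0, dot / norm)  # 1.0 when the camera looks straight at the face
    return facing / gsd_m
```

For an upward-facing roof triangle, a nadir photo scores higher than an oblique one at the same resolution, and a finer GSD raises the score further.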
LiDAR Classification
LiDAR classification serves multiple purposes, including the removal of unwanted objects (e.g., vehicles, power lines, vegetation) from the LiDAR data, which in turn eliminates them from the final 3D mesh. It can also guide the 3D reconstruction process, influencing triangle count and edge extraction based on the nature of the features being modeled.
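Removing unwanted objects typically keys off the per-point classification codes defined by the ASPRS LAS specification (e.g., 2 = ground, 3–5 = vegetation, 6 = building). A minimal sketch of class-based filtering, with illustrative point data:

```python
# Subset of ASPRS LAS classification codes (LAS 1.4 specification).
GROUND, LOW_VEG, MED_VEG, HIGH_VEG, BUILDING = 2, 3, 4, 5, 6

def drop_classes(points, classes, unwanted=frozenset({LOW_VEG, MED_VEG, HIGH_VEG})):
    """Drop points whose class code is unwanted (here: vegetation),
    so they never reach the 3D reconstruction step."""
    return [p for p, c in zip(points, classes) if c not in unwanted]

# Three (x, y, z) points: ground, a building corner, and a treetop.
kept = drop_classes([(0, 0, 10), (1, 0, 12), (2, 0, 30)],
                    [GROUND, BUILDING, HIGH_VEG])
```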
The fusion of LiDAR's classification capabilities with the mesh model's attribute information and semantic qualities is pivotal in creating a smart 3D mesh model.
Embedding Attribution and Semantic Qualities
3D mesh models can be classified in TerraExplorer Pro using classification polygons containing attribute data relevant to the mesh layer. Classification enables you to categorize different areas of the layer (e.g., by building type: residential, commercial, or industrial). These categories can then be used to visually distinguish the areas (e.g., by applying a different color to each category) and to perform spatial and attribute queries on the feature layer that classifies the mesh layer.
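Conceptually, a classification polygon assigns its attributes to every mesh area whose footprint falls inside it, and the spatial test is an ordinary 2D point-in-polygon check. A ray-casting sketch with a hypothetical square polygon (TerraExplorer Pro's internal implementation is not exposed):

```python
def point_in_polygon(x, y, ring):
    """Ray-casting (even-odd) test: is (x, y) inside the polygon
    ring [(x0, y0), (x1, y1), ...]? Operates on the 2D footprint only."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Hypothetical "residential" classification polygon: a unit square.
residential = [(0, 0), (1, 0), (1, 1), (0, 1)]
```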
Future Directions in Data Integration
- Expanded Sensor Integration: Integration of aerial imagery and LiDAR with drone-based and mobile sensors.
- Enhanced Co-Registration Automation: Improving the automatic alignment of LiDAR and photographic data, streamlining the process to ensure a seamless and accurate fusion of diverse datasets.
- Innovative Sensor Applications: Addition of specialized sensors, like thermal imaging combined with LiDAR, to enable nighttime and low-visibility 3D modeling. This initiative will expand the capabilities and applications of 3D modeling by embracing a broader spectrum of data collection technologies.