AI tools in Reality Capture (Point Cloud Classification, Object Detection, and Image Segmentation)

Preview:

Modern reality capture devices carry sensors that collect many different types of data, so we need tools that segment, detect, and classify that data automatically. Point cloud classification automates this process by using machine learning algorithms to assign a class label to each point. This significantly reduces the manual effort required for data interpretation and enables faster, more efficient analysis, object recognition, decision-making, and planning. 
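To make the idea of per-point class labels concrete, here is a toy sketch in Python. It assigns ASPRS-style class codes by simple height thresholds; real classifiers use trained machine-learning models rather than hand-written rules, and the thresholds and class codes here are purely illustrative.

```python
# Toy illustration of point cloud classification: assign a class code
# to each (x, y, z) point by simple height rules. Real tools use
# trained ML models; the thresholds below are invented for the demo.

# ASPRS LAS class codes (a common convention):
# 2 = ground, 5 = high vegetation, 6 = building.
def classify_point(z, ground_level=0.0):
    height = z - ground_level
    if height < 0.3:
        return 2   # ground
    elif height < 3.0:
        return 5   # vegetation (placeholder rule)
    else:
        return 6   # building (placeholder rule)

points = [(1.0, 2.0, 0.1), (1.5, 2.1, 1.8), (3.0, 4.0, 7.5)]
labels = [classify_point(z) for _, _, z in points]
print(labels)  # [2, 5, 6]
```

The point is only that classification attaches a semantic code to every point, which everything downstream (filtering, fitting, visualization) can then rely on.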


Automatic point cloud classification tools are already available in the software below:

Note! In each case you need a capable graphics card in your PC for this kind of operation. Recommended specifications for classification:

From all of this software, you can export the classified point cloud to LGS format, which can be opened in the free Leica TruView viewer, published on your own server via Leica Cyclone Enterprise, or opened via the CloudWorx plugins. These plugins are available for all popular CAD software and include tools for automatic feature extraction, so objects such as walls, columns, doors, windows, pipes, and steel members can be fitted to the point cloud automatically. This is much easier if the point cloud has been split into classes.
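The reason class-split clouds make fitting easier is that a fitting routine only has to look at relevant points. A minimal sketch of that filtering step (the class code and the point records are invented for illustration, not any vendor's format):

```python
# Filter a classified point cloud so that, e.g., a pipe-fitting tool
# only receives points labelled as pipes. The class code 40 is a
# hypothetical project-specific value, not a standard.
PIPE_CLASS = 40

cloud = [
    {"xyz": (0.0, 0.0, 1.0), "cls": 40},
    {"xyz": (0.1, 0.0, 1.1), "cls": 40},
    {"xyz": (5.0, 5.0, 0.0), "cls": 2},   # ground point, ignored
]

pipe_points = [p["xyz"] for p in cloud if p["cls"] == PIPE_CLASS]
print(len(pipe_points))  # 2
```

With the ground and clutter filtered out, a cylinder fit over `pipe_points` has far fewer outliers to fight.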

Leica CloudWorx For Revit - Classified Point Cloud

Another very useful type of data we collect during reality capture is panoramic and highly detailed imagery. From these images we can understand the type of elements and materials, clearly see small details, identify text, and do much more. REGISTER 360 and Pegasus Office already include AI tools to identify and blur people and vehicles in the imagery.
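Under the hood, anonymisation is two steps: an AI detector finds people and vehicles, then the detected regions are blurred. The second step can be sketched in a few lines of plain Python on a grayscale image stored as nested lists (a toy box blur, not the actual algorithm these products use):

```python
# Toy sketch of the "blur" half of image anonymisation: box-blur a
# rectangular region of a grayscale image. Real tools first *detect*
# people/vehicles with AI, then blur those detected regions.
def blur_region(img, x0, y0, x1, y1, k=1):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; leave rest untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            vals = [img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) // len(vals)  # neighborhood average
    return out

img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
blurred = blur_region(img, 1, 1, 3, 3)
print(blurred[1][1])  # 113: the bright pixel averaged with its neighbors
```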

Leica Pegasus Office - Image Anonymisation

Additional AI tools for images are available in MMS Deliver (Leica Pegasus Manager Road Factory): road sign recognition and the Crack Index Process. You can find them in the Road Assessment tools.

▪ “TrafficSign.cfg”

▪ “TrafficSign.db”

▪ “AutoDetectRoadSign.cfg”

This folder doesn’t need to be stored in the project folder, but it should always be accessible; it can be kept in any directory on the user's PC. Open the .cfg file in a text editor and define the attributes that should appear in the georeferenced database at the end of processing. For each sign, describe the properties that will be written automatically to the shapefile attribute table. When the database preparation is done, click the “Road sign recognition” button in the “Road assessment tools” section, and the software will automatically start the road sign recognition process. As a result, you get points located in the correct positions, with the predefined attributes, delivered as shapefiles. You can also add other signs manually using the normal traffic sign recognition, or correct wrong identifications. 
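The exact syntax of the TrafficSign .cfg file is defined by Road Factory, so treat the following only as a generic illustration of how attribute definitions in a text-based config can be read back programmatically; the section and key names here are invented, not the real file format.

```python
# Generic illustration of reading sign-attribute definitions from an
# INI-style config file. The [STOP_SIGN] section and its keys are
# invented for this demo; the real TrafficSign .cfg format is
# specific to Leica Road Factory.
import configparser

cfg_text = """
[STOP_SIGN]
code = B2
shape = octagon
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)
print(cfg["STOP_SIGN"]["code"])  # B2
```

The idea is the same in either direction: each sign type maps to a set of properties, and those properties end up as columns in the output shapefile's attribute table.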

Leica MMS Deliver - Road Sign Recognition

▪ UID - ID number 

▪ PM_FNUM - the sequential number of the region of interest (ROI). 

▪ PM_TRACK - the name of the analyzed track. 

▪ PM_MISSION - the mission’s name. 

▪ PM_CRACK - the number of detected cracks for the region of interest (ROI). 

▪ GIS_AREA - the area. 

▪ GIS_LENGHT - the length. 
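Once these attributes land in the shapefile, they are easy to work with downstream. As an example, a simple crack density per ROI can be computed from PM_CRACK and GIS_AREA (the field names come from the list above; the record values are invented):

```python
# Compute a simple crack density (cracks per unit area) from the
# attribute records described above. Values are made up for the demo.
records = [
    {"UID": 1, "PM_FNUM": 10, "PM_CRACK": 4, "GIS_AREA": 12.5},
    {"UID": 2, "PM_FNUM": 11, "PM_CRACK": 0, "GIS_AREA": 12.5},
]

densities = [rec["PM_CRACK"] / rec["GIS_AREA"] for rec in records]
for rec, d in zip(records, densities):
    print(f"ROI {rec['PM_FNUM']}: {d:.2f} cracks per unit area")
```

A derived value like this is often what you actually want to color or symbolize in GIS, rather than the raw crack count.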


If you checked the option “Create crack features”, you will find the shapefile created by this tool, named “TrackXX_CrackFeatures_(InitialFrameXX)-(FinalFrameXX).shp”, with the following layer attributes: 

▪ UID - ID number. 

▪ frameIndex - the number of the frame. 

▪ GIS_AREA - the area. 

▪ GIS_LENGHT - the length. 

Leica MMS Deliver - Crack Index

All the above tools greatly speed up processing and make it autonomous, but what if we need to detect specific elements in our scene? For that purpose, let's see how we can use imagery data for custom object detection. First of all, we need a dataset for AI model training, so use the tools below for image extraction:

Next, we need to decide which objects we want to detect. Let's try to find fire hydrants and manholes in the TRK mobile dataset, and exit signs and fan coils in the BLKARC dataset. Now we need to label our images; you can find many tools for this on the internet, and I'll be using Roboflow. Go to https://app.roboflow.com/, create an account, create a new project, and add your dataset there. 

Just follow the instructions and run the processes one by one. To import your own dataset, go to the Preparing Custom Dataset section, Step 5 (Exporting Dataset), and insert your code there. Run this process to start importing your labeled data from Roboflow. The next step runs the training. The more epochs you set, the longer the training takes and the more accurate the model becomes. After the training finishes, you can see some statistics about the results and add your own file to check how it works. I'll add a video from the same area but with a mirrored effect. All results are automatically saved to the runs/detect/predict folder; let's download it and see what we get.

Roboflow - Image annotation with BLKARC data

As you can see, wherever an object was detected, a rectangle (bounding box) appears with its confidence level. If you get too many redundant detections, increase the minimum confidence level; conversely, if the results are too sparse, try reducing it. To get more accurate behavior from the model, we need more training data, but I think this is a very nice result for such a small test. 
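The confidence-threshold tuning described above is simple to reason about in code. A sketch with invented detections (the threshold value is something you tune per project, not a recommendation):

```python
# Filter detections by a minimum confidence threshold. Raising the
# threshold removes spurious boxes; lowering it recovers missed ones.
detections = [
    {"label": "fire hydrant", "conf": 0.91},
    {"label": "fire hydrant", "conf": 0.42},  # likely a false positive
    {"label": "manhole", "conf": 0.77},
]

MIN_CONF = 0.5  # illustrative value; tune against your own data
kept = [d for d in detections if d["conf"] >= MIN_CONF]
print([d["label"] for d in kept])  # ['fire hydrant', 'manhole']
```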


When we apply this tool to images, each image gets an attribute recording which objects were detected in it, so we can combine this information with the coordinate and orientation parameters exported earlier. For the small BLKARC dataset, I'll simply use Excel to visualize and highlight areas with a larger number of detected elements, using a bubble chart. The generated images can later be imported into Leica REGISTER 360 PLUS and used as a background for LGS files.
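The numbers behind such a bubble chart are just detection counts joined with image positions and aggregated by area. A sketch of that join-and-aggregate step (all records and the grid cell size are invented for illustration):

```python
# Join per-image detection counts with exported image positions, then
# aggregate them into coarse grid cells: one bubble per cell, sized by
# the number of detections. All values are made up for this demo.
from collections import Counter

images = [
    {"name": "img_001", "x": 10.2, "y": 4.1, "detections": 3},
    {"name": "img_002", "x": 10.8, "y": 4.4, "detections": 1},
    {"name": "img_003", "x": 25.0, "y": 9.0, "detections": 2},
]

CELL = 5.0  # grid cell size, same units as x/y
bubbles = Counter()
for img in images:
    cell = (int(img["x"] // CELL), int(img["y"] // CELL))
    bubbles[cell] += img["detections"]

print(bubbles)  # cell (2, 0) holds 4 detections, cell (5, 1) holds 2
```

Each `(cell, count)` pair then becomes one bubble: the cell center gives the position, the count gives the size.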


With the TRK dataset, we can bring everything into GIS software to visualize the areas with detected objects. For example, it could be Hexagon M.App Enterprise. M.App Enterprise lets you store your geospatial data and provides dynamic delivery of geospatial analytics and workflows to enhance business intelligence. You can create web or mobile applications with this platform, use previously collected data or get it in real time from IoT sensors, and use a wide range of tools to visualize your information. 

Leica BLKARC Image Object Detection
Leica TRK100 Image Object Detection

Thank you for watching this video! I hope you found it interesting and that you gained some valuable tips and tricks from it. Today, I aimed to showcase how AI tools can be effectively utilized in our daily work. However, it's important to note that there are numerous other tools available for us to explore.


For instance, there are software applications specifically designed for working with airborne point clouds, enabling classification and analysis. Additionally, devices like the BLK247 can detect objects in real-time within point cloud data. When it comes to images, there is software that allows the creation of custom object detection and classification algorithms for airborne orthoimages. Moreover, with the advancements in AI, we can perform object segmentation on images and apply the results to point cloud data.


It was indeed a challenge to cover everything comprehensively in just one video, but I am genuinely interested in your feedback. Let me know if you'd like to see more on this topic in future videos. Thanks for watching, and see you next time!

If you still have questions, or you want to try this tutorial yourself but don't have a valid license, you can reach me by clicking the button below or leaving a comment.