Software used:
Cyclone REGISTER 360 PLUS v. 2023.0.2
Cyclone 3DR v. 2023.1.1
CloudWorx for BricsCAD v. 2023
Cyclone Pegasus Office v. 2023.1.1
Point cloud classification
Leica REGISTER 360 Plus
Leica Cyclone 3DR
Leica Cyclone Pegasus Office
Image AI tools - Blurring
Road Signs Recognition
Crack Index
Custom Object Detection from Images with AI
Current reality capture devices carry sensors that collect many different types of data, so we need tools for automatic segmentation, detection, and classification of that data. Point cloud classification automates this process by leveraging machine learning algorithms to assign class labels to points. This significantly reduces the manual effort required for data interpretation and enables faster, more efficient analysis, object recognition, decision-making, and planning.
Automatic point cloud classification tools are already available in the following software:
Leica Cyclone REGISTER 360 PLUS - it offers predefined models for Indoor and Outdoor environments. To run this tool, first download and install the free classification package for the software; it is available from the Online Help site. Click the Classification Manager button, select the desired model, and run the tool. Keep in mind that only visible points will be classified, even if they are in another SiteMap. If you want to classify a mix of indoor and outdoor points, place these setups into separate Bundles. When classifying the indoor Setups, select all outdoor Setups and use Show/Hide Point Clouds to hide them before selecting Classify Visible Points; then repeat for the outdoor Setups, hiding the indoor ones. After the process finishes, you can manually move points to the desired class. It is also possible to apply different classification settings to different Bundles at the Import stage.
Leica Cyclone Pegasus Office - here the classification package is available by default; just check the Point Cloud Classification option to identify and classify features in the high-resolution point cloud. After classification has been performed, use the Rendering Style tool and select Classes to visualize the classified point cloud. Open the Assets panel to hide/show specific classes or to manually reclassify portions of the point cloud. If you hide some classes and then export, those points will be excluded from the published results - useful if you want to export your data without noise, pedestrians, and cars.
Leica Cyclone 3DR - it offers four different predefined classification models. Training classification models is an ongoing process, so model updates and entirely new models are released regularly. To download and install the classification package, just click the Auto Classification tool; it installs everything, and after that you'll be able to use it. Select a point cloud, run the tool, and choose any of the available models from the list. If necessary, use Manual Classification to improve the result. At the end, you can split the point cloud by class layer by clicking the Explode by Class button. Additionally, you can apply standard filters to the classified point cloud, such as Distance, Walls and Floors, Real Color, Inspection Value, and others.
Note! In each case you need a capable graphics card in your PC to run this kind of operation. Recommended specifications for classification:
▪ NVIDIA Ampere, Volta, or Turing architecture
▪ Any RTX
▪ GeForce GTX 1650 or better
▪ Only NVIDIA GPUs are supported
▪ 4 GB of video RAM or better
▪ A GPU compute capability of 7.0 or better is recommended
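If you're not sure what compute capability your card has, here is a quick way to check it - a minimal sketch assuming a Python environment with PyTorch and CUDA installed (this is just a convenience, not something the Leica tools require):

    # Minimal sketch: query the local NVIDIA GPU's compute capability via PyTorch.
    import torch

    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    else:
        print("No CUDA-capable NVIDIA GPU detected")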
From all of these applications, you can export the classified point cloud to the LGS format, which can be opened in the free Leica TruView viewer, published on your own server via Leica Cyclone Enterprise, or opened via the CloudWorx plugins. These plugins are available for all popular CAD packages and include tools for automatic feature extraction: fitting objects such as walls, columns, doors, windows, pipes, and steel elements to the point cloud becomes much easier if the cloud has been split by classes.
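The Leica route is LGS and CloudWorx, but if you also export the classified cloud to an open format such as LAS, you can filter it by class with any point cloud library. A minimal sketch with the Python laspy package, assuming the export preserves per-point class codes (the numbering below follows the ASPRS convention; the numbering in your export may differ):

    # Minimal sketch, assuming a classified LAS export where per-point class
    # codes survive. ASPRS code 6 = Building; Leica class numbering may differ.
    import laspy

    las = laspy.read("classified_export.las")         # placeholder file name
    out = laspy.LasData(las.header)
    out.points = las.points[las.classification == 6]  # keep only one class
    out.write("buildings_only.las")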
Another very useful type of data that we collect during reality capture is panoramic and highly detailed imagery. From these images we can determine element types and materials, clearly see small details, identify text, and do much more. REGISTER 360 and Pegasus Office already include AI tools to identify and blur people and vehicles in the imagery.
Blurring in Leica Cyclone REGISTER 360 PLUS - right-click a Setup and select Edit Image Blur in the popup menu. In the window that appears, you'll see the panoramic image; click Auto Blur and the software's AI will detect all people and license plates in the image and blur them. For mobile data from autonomous devices (BLK2GO, BLKARC, BLK2FLY), select the Waypoint in the bottom-right corner and click the Edit Blur button.
Blurring in Leica Cyclone Pegasus Office - for the Leica TRK mobile scanners, the Image Anonymization function can be applied directly in the field while capturing the data, in which case all people and cars are blurred right after capture. Alternatively, it can be done later in the office with the desktop software: at the Finalize stage, simply check the relevant option and click the Finalize button.
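Leica's anonymization models are built in and need no coding, but to illustrate the underlying detect-then-blur idea, here is a minimal Python sketch using OpenCV's classic Haar face detector - not the algorithm the Leica tools use:

    # Illustration only: detect faces with an OpenCV Haar cascade and blur them.
    # Leica's Image Anonymization uses its own AI models; this just shows the concept.
    import cv2

    img = cv2.imread("pano.jpg")                      # placeholder image name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Replace each detected region with a heavily blurred copy of itself.
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)
    cv2.imwrite("pano_blurred.jpg", img)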
Additional AI tools for images are available in MMS Deliver (Leica Pegasus Manager Road Factory): Road Sign Recognition and the Crack Index Process. You can find them in the Road Assessment tools.
For road signs, before starting the automatic traffic sign recognition you need to meet the database preparation requirements. Road sign recognition requires a library of the traffic signs of the relevant country, in .jpg format. The same folder must contain these three files:
▪ “TrafficSign.cfg”
▪ “TrafficSign.db”
▪ “AutoDetectRoadSign.cfg”
This folder does not need to be stored in the project folder, but it must always be accessible; it can be kept in any directory on the user's PC. Open the .cfg file in a text editor and define the attributes that should appear in the georeferenced database at the end of processing. For each sign, describe the properties that will be automatically written into the shapefile attribute table. When the database preparation is done, click the Road Sign Recognition button in the Road Assessment tools section, and the software will automatically start the recognition process. As a result, you'll get points located in the correct positions, with the predefined attributes, delivered as shapefiles. It is also possible to manually add other signs using the normal traffic sign recognition, or to correct wrong identifications.
The Crack Index Process works with data from the pavement cameras: it lists, in order, the cracks surveyed in a designated Region of Interest (ROI). This ROI is analyzed along the trajectory, using the images to evaluate the condition of the road. The Crack Index Process works only on RGB images, not on the point cloud. To compute the crack index, the software divides the image inside the ROI into a grid of cells and calculates an index relative to the total number of cells analyzed; at the end, the tool writes the number of tiles affected by detected issues into a field of the selected feature. The output is a shapefile of points with a Crack Index attribute. The file “TrackXX_Crack (InitialFrameXX)-(FinalFrameXX).shp” represents the number of detected cracks, visualized as a point object in the center of each ROI: the bigger the point, the greater the number of detected cracks. This layer has the following attributes:
▪ UID - ID number
▪ PM_FNUM - the sequential number of the region of interest (ROI).
▪ PM_TRACK - the name of the analyzed track.
▪ PM_MISSION - the mission’s name.
▪ PM_CRACK - the number of detected cracks for the region of interest (ROI).
▪ GIS_AREA - the area.
▪ GIS_LENGHT - the length.
If you checked the option “Create crack features”, you will also find a shapefile created by this tool with the name “TrackXX_CrackFeatures_(InitialFrameXX)-(FinalFrameXX).shp” and the following layer attributes (a short inspection sketch in Python follows this list):
▪ UID - ID number.
▪ frameIndex - the number of the frame.
▪ GIS_AREA - the area.
▪ GIS_LENGHT - the length.
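As mentioned above, these shapefiles can also be inspected outside Road Factory with any GIS library. A minimal Python sketch with geopandas (the file name and frame numbers are placeholders following the pattern above):

    # Minimal sketch: read the crack-index shapefile and plot points scaled
    # by crack count, mirroring the in-software visualization.
    import geopandas as gpd
    import matplotlib.pyplot as plt

    gdf = gpd.read_file("Track01_Crack (0001)-(0500).shp")  # placeholder name
    print(gdf[["UID", "PM_FNUM", "PM_CRACK"]].head())

    gdf.plot(markersize=gdf["PM_CRACK"] * 10)  # bigger marker = more cracks
    plt.savefig("crack_index_overview.png")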
All the tools above greatly speed up processing and make it autonomous - but what if we need to detect specific elements in our scene? For that purpose, let's see how we can use imagery data for custom object detection. First of all, we need a dataset for AI model training, so use the tools below for image extraction:
From Leica Cyclone REGISTER 360 PLUS - at the Report stage, select the Pano Images option for export and click Publish.
From Leica Cyclone Pegasus Office - at the Finalise & Export stage, select the Export JPEG option and click Finalise.
Next, we need to decide which objects we want to detect. Let's try to find fire hydrants and manholes in the TRK mobile dataset, and exit signs and fan coils in the BLKARC dataset. Now we need to label our images; you can find plenty of tools for that on the internet - I'll be using Roboflow. Go to https://app.roboflow.com/, create an account, create a new project, and add your dataset there.
Start annotating your images; you can add more classes if you want - for example, let's add a drain grate class. Use a simple rectangular fence (bounding box) for annotation. When that's done, submit your images and add them to the dataset for training.
After that, go to the Generate tab and apply some transformations and augmentations to your images. This tool creates copies of your images with the selected options - mirroring, color changes, orientation changes, etc. With these changes your dataset becomes much bigger, and the training process produces a more accurate AI model.
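To make the idea of augmentation concrete, here is a minimal Python sketch with Pillow that produces mirrored, rotated, and color-shifted copies of an image. Roboflow does all of this for you in the Generate tab, and for object detection it also transforms the bounding boxes accordingly, which this sketch does not:

    # Minimal sketch of image augmentation with Pillow: each variant is a
    # modified copy that enlarges the training dataset.
    from PIL import Image, ImageEnhance, ImageOps

    img = Image.open("pano_0001.jpg")           # placeholder image name
    variants = [
        ImageOps.mirror(img),                   # horizontal flip ("mirror")
        img.rotate(15, expand=True),            # orientation change
        ImageEnhance.Color(img).enhance(1.5),   # color shift
    ]
    for i, v in enumerate(variants):
        v.save(f"pano_0001_aug{i}.jpg")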
Once the dataset is ready, we can generate a snippet - a piece of code that we insert into our training engine. For the training and deployment code, I'll be using the Google Colab platform, a product from Google Research. Colab lets anybody write and execute arbitrary Python code in the browser, and it is especially well suited to machine learning, data analysis, and education.
Here you can find predefined code for your tests: https://colab.research.google.com/drive/1tYQl_tdVkfJd9dAJmu39BYi3QQkPhuDK#scrollTo=D2YkphuiaE7_
Just follow the instructions and run the processes one by one. To import your own dataset, go to the Preparing Custom Dataset section, Step 5 (Exporting Dataset), and insert your snippet there. Run this step to start importing your labeled data from Roboflow. The next step runs the training: the more epochs you set, the longer the training takes and the more accurate the resulting model. After the training finishes, you can view statistics about the results and add your own file to check how the model works - I'll add a video from the same area but with a mirror effect applied. All results are automatically saved to the runs/detect/predict folder; let's download it and see what we get.
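The linked notebook already contains the full pipeline, but as a rough illustration, the export-and-train part typically looks like the sketch below. The workspace, project, version, and API key are placeholders - copy the real snippet from your Roboflow project:

    # Sketch of a Roboflow export snippet feeding an ultralytics YOLO training run.
    # Workspace, project, version number, and API key are placeholders - copy the
    # real snippet from your Roboflow project's export dialog.
    from roboflow import Roboflow
    from ultralytics import YOLO

    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("your-workspace").project("your-project")
    dataset = project.version(1).download("yolov8")    # images + data.yaml

    model = YOLO("yolov8n.pt")                         # small pretrained base model
    model.train(data=f"{dataset.location}/data.yaml",  # dataset config from Roboflow
                epochs=50,                             # more epochs: slower but more accurate
                imgsz=640)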
As you can see, wherever an object was detected, a rectangular fence (bounding box) appears with confidence level information. If you get too many redundant detections, increase the minimum confidence level; conversely, if you get too few results, try reducing the confidence level. To make the model behave more accurately, we would need more training data - but I think this is a very nice result for such a small test.
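Running the trained model on your own file takes only a couple of lines; the conf parameter is the confidence threshold discussed above. A hedged sketch, assuming a default ultralytics training run whose best weights landed in runs/detect/train/weights:

    # Hedged inference sketch with ultralytics YOLO. save=True writes the
    # annotated output to runs/detect/predict, as described above.
    from ultralytics import YOLO

    model = YOLO("runs/detect/train/weights/best.pt")  # assumed default weights path
    model.predict(source="mirrored_walkthrough.mp4",   # placeholder video name
                  conf=0.4,    # raise to suppress redundant boxes, lower to catch more
                  save=True)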
When we apply this tool to images, each image gets an attribute recording which objects were detected in it, so we can combine this information with the coordinate and orientation parameters that were exported earlier. For the small BLKARC dataset, I'll use plain Excel to visualize and highlight the areas with a larger number of detected elements, using a bubble chart. The generated images can later be imported into Leica REGISTER 360 Plus and used as a background for LGS files.
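If you prefer to stay in Python instead of Excel, the same bubble chart can be sketched with matplotlib; the coordinates and per-image detection counts below are placeholders for the values you exported:

    # Minimal sketch: bubble chart of detections per image position with matplotlib.
    # x, y, and counts are placeholders for the exported coordinates and the number
    # of objects detected in each image.
    import matplotlib.pyplot as plt

    x = [10.2, 14.8, 21.5, 33.0]
    y = [5.1, 7.9, 6.4, 9.2]
    counts = [1, 4, 2, 7]

    plt.scatter(x, y, s=[c * 100 for c in counts], alpha=0.5)  # bubble size = detections
    plt.xlabel("X [m]")
    plt.ylabel("Y [m]")
    plt.title("Detected objects per image position")
    plt.savefig("detections_bubble.png")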
With the TRK dataset, we can bring everything into GIS software to visualize the areas with detected objects - for example, Hexagon M.App Enterprise. M.App Enterprise lets you store your geospatial data and provides dynamic delivery of geospatial analytics and workflows to enhance business intelligence. You can create web or mobile applications with this platform, use previously collected data or receive it in real time from IoT sensors, and use a wide range of tools for visualizing your information.
Thank you for watching this video! I hope you found it interesting and that you gained some valuable tips and tricks from it. Today, I aimed to showcase how AI tools can be effectively utilized in our daily work. However, it's important to note that there are numerous other tools available for us to explore.
For instance, there are software applications specifically designed for working with airborne point clouds, enabling classification and analysis. Additionally, devices like the BLK247 can detect objects in real time within point cloud data. When it comes to images, there is software for creating custom object detection and classification algorithms for airborne orthoimages. Moreover, with the advances in AI, we can perform object segmentation on images and apply the results to point cloud data.
It was indeed a challenge to cover everything comprehensively in just one video. However, I am genuinely interested in your feedback, so let me know if you'd like to see more on this topic in future videos. Thanks for watching, and see you next time!
If you still have questions, or you want to repeat this tutorial yourself but don't have a valid license, you can reach me by clicking the button below or leaving a comment.