TruView LIVE Perspectives: A New Dimension in Virtual Surveying with Computer Vision

 Preview:

Hey Everyone! 🌟 Get ready for an exciting tutorial where I'll show you how to transform a regular point cloud viewer into a virtual surveying tool with a dash of computer vision magic! 🚀 Building on my previous tutorial on AI tools for custom object detection using panoramic images from a laser scanner (check it out here - AI tools in Reality Capture), this time we're taking it up a notch. You'll learn how to teach your PC to read text from images through computer vision and leverage it for seamless plan generation in Autodesk Civil 3D.


As the point cloud viewer, I'm going to use the Leica Cyclone ENTERPRISE platform with the Leica TruView LIVE web browser viewer. If you're not familiar with Leica Cyclone ENTERPRISE, it delivers a simplified management & collaboration platform. Powered by Leica Geosystems’ JetStream technology, Cyclone ENTERPRISE facilitates:


- Secure access for internal & external users on a per-project basis
- No data leaving the premises
- Centrally & remotely managed sessions
- Different user roles with different abilities


My favorite advantage of this server-based approach is that you can store vast amounts of data in a single place and access it easily through your browser, desktop viewer, or CAD software without downloading it. The JetStream technology streams the data directly from the server to your device.

Leica Cyclone ENTERPRISE - your own server storage for Reality Capture data

To extract coordinates from this browser viewer, we'll need to create some custom scripts. For that, I'll use the Python language and the PyCharm software. PyCharm is an integrated development environment (IDE) designed specifically for Python development. Let's install PyCharm; you can download the free Community Edition from this link (I'll use the Windows version). After installation, create a new project and specify its location. When you open the new project, you'll see that a virtual environment has been created along with a default main.py script, which you can run just to check that everything is working.

To use the computer vision modules, you need to install a few libraries. For the installation, go to the terminal window in PyCharm and type the following commands:
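As a minimal set, based on the libraries imported later in this tutorial, the commands look roughly like this (pytesseract is installed separately after setting up Tesseract, as described below; note that the Tesseract OCR engine itself is not a pip package and comes with its own Windows installer):

pip install pillow
pip install pyautogui
pip install pyscreenshot
pip install opencv-python
pip install numpy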



To use Tesseract from any location in the command prompt, you can add the Tesseract installation directory to the system's PATH environment variable. To do that, go to the Environment Variables menu:

Go back to the PyCharm terminal and type the ''pip install pytesseract'' command. Make sure that pytesseract.exe has been installed in your Python project's virtual environment folder ("Your python project/.venv/Scripts/").
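Before writing any Python code, it's worth confirming that the Tesseract engine is actually reachable. Open a new command prompt (so that the updated PATH is picked up) and run:

tesseract --version

If version information is printed, the PATH entry is correct and pytesseract will be able to call the engine.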


PyCharm interface

To perform a simple test and ensure everything is working, follow these steps:


Simple Image to Text script

#1. Import Libraries

import pytesseract

from PIL import Image


#2. Open Image

img = Image.open("image_coordinates.JPG")


#3. Specify Tesseract Command Path

pytesseract.pytesseract.tesseract_cmd = r"C:\Users\soal\AppData\Local\Programs\Tesseract-OCR\tesseract.exe"


#4. Extract text from the image with OCR

text = pytesseract.image_to_string(img)


#5. Print Result

print(text)

This script uses the Tesseract OCR (Optical Character Recognition) engine, together with the pytesseract and PIL (Pillow) libraries, to extract text from an image. The numbered comments break the script down step by step: the libraries are imported, the image is opened, the path to the Tesseract executable is specified, the OCR engine extracts the text, and the result is printed.


If you run it, you'll see in the console that the text from your image will be recognized and extracted, which means everything is working.

Image to Text script

Next, we should teach the program to take a screenshot of the desired area and save it as a file. To define the area, we'll use the coordinates of your screen. You can extract the coordinates using the simple script below:

Get Mouse Cursor Coordinates 

#1. Import Libraries

import pyautogui

import time


#2. Pause Execution for 2 Seconds: This allows the user to position the mouse cursor where they want before any subsequent actions are performed.

time.sleep(2)


#3. Print Mouse Cursor Position.

print(pyautogui.position())

The script waits for 2 seconds to give you time to position the mouse cursor, and then it prints the cursor's current (X, Y) coordinates on the screen. You need to get the coordinates of the upper-left corner and the lower-right corner of the area you want to capture. Add these coordinates to the bbox parameter in the script below.

Script - Create screenshot of desired area and extract text from it

#1. Import Libraries

import time

import numpy as np

import pyscreenshot as ImageGrab

import cv2

import os

import pytesseract


#2. Initialize Variables

filename = 'Image.png'

x = 1

last_time = time.time()


#3. Capture Screenshots in a Loop

while True:

    screen = np.array(ImageGrab.grab(bbox=(0, 200, 992, 771)))

    print('loop took {} seconds'.format(time.time() - last_time))

    last_time = time.time()

    # pyscreenshot grabs the screen as RGB; convert to BGR so OpenCV shows and saves the colors correctly
    frame = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)

    cv2.imshow('window', frame)

    cv2.waitKey(1)  # give the preview window a chance to render

    cv2.imwrite(filename, frame)

    x = x + 1

    print(x)

    if x == 2:

        cv2.destroyAllWindows()

        break


#4. Perform OCR on Captured Screenshot

img = cv2.imread('Image.png')

text = pytesseract.image_to_string(img)

print(text)

Run this script and test that it's working fine. You can also check the captured screenshot by opening the "Image.png" file.

Script - Create screenshot of desired area and extract text from it
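Screenshots of a web viewer often contain small, low-contrast text, and OCR can occasionally misread characters such as decimal points or minus signs. If that happens, a little image preprocessing with OpenCV usually improves the results. The snippet below is an optional sketch of that idea (the upscaling factor and Otsu thresholding are just a reasonable starting point, not part of the original script):

import cv2
import pytesseract

# Load the captured screenshot and convert it to grayscale
img = cv2.imread('Image.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Upscale and binarize the crop so the characters stand out from the background
gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Run OCR on the cleaned-up image
print(pytesseract.image_to_string(binary))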

As the next step, I decided to modify the script to keep only numerical values and save all extracted coordinates into a .txt file. To make the workflow faster, I created one more script for hotkeys: it runs the previous script when a specified key is pressed on the keyboard and assigns a different name code to each set of coordinates. The name codes are based on the Description Key Set parameter from Autodesk Civil 3D. A Description Key Set is a set of rules that defines how survey data, specifically points, is translated into objects within the Civil 3D environment. If you use codes from this Description Key Set, then when you import your .txt file into Civil 3D, all objects will be generated automatically based on its rules. I already demonstrated this approach in my previous tutorial about Road feature extraction in Cyclone 3DR software.
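A minimal sketch of the "keep only numbers and append to a .txt file" idea could look like this (the regular expression, the comma-delimited PNEZD-style layout, and the write_point helper are illustrative assumptions, so the downloadable script will differ):

import re

def extract_numbers(ocr_text):
    # Keep only the numeric tokens (e.g. the X/Y/Z values) from the OCR output
    return [float(t) for t in re.findall(r'-?\d+(?:\.\d+)?', ocr_text)]

def write_point(point_number, coords, code, path='points.txt'):
    # Append one point per line in a comma-delimited PNEZD-style layout
    # (Point number, Northing, Easting, Elevation, Description code)
    easting, northing, elevation = coords  # assumes the viewer reports X (Easting), Y (Northing), Z (Elevation)
    with open(path, 'a') as f:
        f.write(f"{point_number},{northing},{easting},{elevation},{code}\n")

# Hypothetical OCR result from the viewer's coordinate read-out
text = "X: 536712.482 Y: 4801233.917 Z: 101.254"
values = extract_numbers(text)
if len(values) == 3:
    write_point(1, values, 'TR')  # 'TR' = tree, matching the Description Key Set codes below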


So I decided to assign the below codes to the hotkeys:


1 - STR: for regular structure elements

2 - TR: for trees

3 - PL: for poles

4 - SWMH: for manholes

5 - SIGN: for road signs


Of course, you can change these names or assign different hotkeys. Just make sure the keys you choose are not already used by default in the software.

Autodesk Civil 3D - Description Key Set

To see which points have already been extracted, I decided to create another script that generates a Geotag at each extracted point. The script should also assign the related name and category to the Geotag for easy management. For that, we can use a Python module that provides functions for programmatically controlling the mouse and keyboard, such as pyautogui, which we already used above to read the cursor coordinates.


The new script lets the user define specific actions associated with certain keys. When a defined key is pressed, it performs a series of mouse and keyboard actions, including running an external script (Find.py) and interacting with specific areas of the screen. The Find.py script performs the image-to-text conversion, while Hot Keys 3.py listens for the hotkeys and drives the automation.
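To give an idea of the structure, here is a minimal sketch of the hotkey logic. It assumes the keyboard library for listening to key presses and pyautogui for the clicks that place the Geotag; the screen positions, the handle_code helper, and the way Find.py is launched are placeholders, so the actual Hot Keys 3.py you download will differ:

import subprocess
import sys

import keyboard    # pip install keyboard
import pyautogui

# Placeholder screen positions of the viewer's Geotag button and its name field
GEOTAG_BUTTON = (60, 150)
NAME_FIELD = (400, 300)

def handle_code(code):
    # 1. Run the OCR script that reads the coordinates and appends them to the .txt file
    subprocess.run([sys.executable, 'Find.py'], check=True)
    # 2. Place a Geotag in the viewer and type the category code as its name
    pyautogui.click(*GEOTAG_BUTTON)
    pyautogui.click(*NAME_FIELD)
    pyautogui.typewrite(code, interval=0.05)
    pyautogui.press('enter')

# Map each hotkey to its Description Key Set code
for key, code in {'1': 'STR', '2': 'TR', '3': 'PL', '4': 'SWMH', '5': 'SIGN'}.items():
    keyboard.add_hotkey(key, handle_code, args=(code,))

keyboard.wait('esc')   # keep listening for hotkeys until Esc is pressed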


Note that the script relies on specific screen coordinates, and you may need to adjust them based on your application's UI. All final scripts can be downloaded below.

Leica TruView free viewer - Geotags colored by category (Leica BLK2GO data)

Now all Geotags are generated automatically at the specified location by pressing the related hotkey. If you stream this point cloud to the Leica TruView desktop viewer, you'll be able to colorize and filter them by category. Point coordinates can be extracted in the 3D point cloud view or in the panoramic image view. A text file is generated simultaneously, so when you're finished, you can import it into your CAD software and all the icons will appear automatically.


You may ask why this is needed if you can just import a point cloud into your CAD software and start generating drawings there based on the point cloud data. I agree that this is easier, but imagine a scenario where you have 20-30 drawing engineers in your organization, all of whom need to work on a big project (around 200-300 GB of data or even more). Copying such a dataset to every workstation is impractical, while streaming it from a central server lets everyone work on the same, up-to-date data.


As you can see, for a big project or companies with a large number of employees, divisions, or stakeholders, this server-based approach is a must-have. And keep in mind that if needed, you can stream the reality capture data from the Leica Cyclone ENTERPRISE server to your CAD software via the Leica CloudWorx plugin.

Leica Cyclone ENTERPRISE benefits table

That wraps up this tutorial: we turned the Leica TruView LIVE viewer into a virtual surveying tool by teaching a few Python scripts to read coordinates from the screen, tag them with Description Key Set codes, and hand them over to Autodesk Civil 3D for automatic plan generation.

Of course, there are many more tools you can utilize in your daily work. If you're eager to learn more, head over to the Leica E-learning platform, where you'll find three comprehensive online courses covering various tools within Leica Cyclone 3DR. These courses include detailed workflow descriptions, assessment tests, and data samples. Upon completion, you'll even receive an official certificate from Leica Geosystems.

So, don't miss out on the opportunity to enhance your skills and boost your expertise. Explore these courses, and let's keep pushing Reality Capture workflows forward together. Thanks for watching!