How to merge drone images with terrestrial laser scans inside of RealityCapture.

Hi and welcome, everyone. My name is Jakub, I work as a Technical Pre-sales Expert at Capturing Reality, and I would like to talk about merging terrestrial laser scanning with photogrammetry in RealityCapture. In this tutorial, I am going to reconstruct the exterior of a small Gothic church using 4 datasets. Each of these datasets was acquired with a different camera, and one was acquired with a terrestrial laser scanner. What is interesting is that each dataset was acquired on a different day, sometimes even months apart, but I was still able to align them all together in RealityCapture.

First, I will introduce the datasets; then we will align them separately and export them as components, to demonstrate the component workflow. Once all the components are prepared, we'll bring them together and continue with automatic alignment. One of the datasets was captured in a way that automatic alignment will not work, because there is no overlap between its images and the rest of the datasets; to merge it, I will show you how to use control points. After we finish the alignment, we will reconstruct the mesh, filter out the unwanted geometry, and finally texture the mesh.
Let's start by introducing the datasets. As I said before, we have 4 datasets. The first one contains 483 images and comes from a drone that was flown manually around the church. The second dataset was acquired from the ground with a mirrorless camera; it contains 76 images, which were used to help the automatic alignment between the manual drone flight and the laser scans. The last image dataset also contains 76 images and comes from an automatic drone flight in a single grid pattern at 50 metres above ground level, capturing the wider area around the church. The laser scanning dataset contains 10 scan stations. These scans were previously registered, in other words aligned, in a laser scanning software, and later exported as ordered PTX point clouds. At the moment, RealityCapture supports ordered E57 and PTX point clouds.
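For context, this is roughly what an ordered PTX file looks like inside. Below is a minimal header-reading sketch in Python, assuming the commonly documented Leica PTX layout; it is only an illustration, not RealityCapture's importer:

    # Minimal sketch of an ordered PTX header, assuming the common Leica layout.
    def read_ptx_header(path):
        with open(path) as f:
            cols = int(f.readline())  # width of the scan grid (columns)
            rows = int(f.readline())  # height of the scan grid (rows)
            scanner_pos = [float(v) for v in f.readline().split()]  # registered scanner position
            axes = [[float(v) for v in f.readline().split()] for _ in range(3)]  # scanner axes
            transform = [[float(v) for v in f.readline().split()] for _ in range(4)]  # 4x4 matrix
            # The next rows * cols lines hold the points: x y z intensity [r g b]
            return cols, rows, scanner_pos, axes, transform

The "ordered" part is exactly that rows-by-columns grid: every point keeps its place in the scanner's angular sweep, which is what lets the software treat a scan station much like a set of images.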
After launching RealityCapture, we will import our first images. In this first project, we will align the images from the ground with the images from the manual drone flight. I will import the images using the Inputs icon in the Workflow tab; they can also be imported by dragging and dropping them into the user interface. I will select all of the images by pressing Ctrl+A and click on Open. Next, I will import the images from the manual drone flight the same way I did with the images from the ground.

Now, with all 559 images in the project, we can start the alignment. The alignment finished after 8 minutes and 20 seconds; I'm using a laptop with a 6-core i7 CPU at 2.2 GHz, 16 GB of RAM, and an NVIDIA GTX 1060, running Windows 10. When I click on the created component in the 1Ds view, I can check the alignment report, which contains more information, for example the alignment time. Now I am ready to export the registration: I will go to the Alignment tab, click on Export registration, and select the folder where I want to export my component.
After that is done, I can create a new project, and this time I will import the laser scans using the Import laser scans tool in the Workflow tab. I will select all the PTX point clouds and click on Open. RealityCapture will display the Import laser scans dialog, where we have a few options. Because the laser scans were already aligned in a laser scanning software, I will set the registration to Exact, as I do not want RealityCapture to change the positions of the scan stations. There are two other options, for when you want to optimize the registration or when the scans are not aligned at all. The laser scans are not geo-referenced, and they were scanned with color. It is possible to align images with laser scans without color in RealityCapture, but we always recommend our customers to scan with color. The only thing I changed was the folder for the converted laser scans: RealityCapture converts the scans to .lsp files, and the next time you want to use them you can import them just like images and skip the Import laser scans tool entirely.

The import finished after one minute. Right now we do not see anything in the 3D view; to view the point cloud, we have to click on Align. Now we can see the point cloud, but you may notice that it is not as dense as you've seen it in the laser scanning software. You don't have to worry about that at all, because all the points are stored in memory and will be used during reconstruction. When I click on the component, we can see that the alignment time is 0, because the registration was imported as exact. This time I will save the project, because I will use it later to import the other components and align them together.
After it is saved, I will create a new project for the automatic drone flight. Instead of the Inputs icon, I will use the Folder icon to import the whole folder containing the images. 76 images were imported, and I can just click on Align. This time we had fewer images, so the alignment finished quickly, in just one minute, and we can see the sparse point cloud with the camera positions. Now I will repeat the whole process from the first dataset and export the registration; again, the export tool is located in the Alignment tab.
Now I will go back to the saved project with the laser scans so we can proceed with automatic alignment without control points. To import a component, go to the Alignment tab, use the Import component tool, and search for our first component, the one containing the images from the ground and from the manual drone flight. After the import, we can see a new component in the 1Ds view with a star icon next to it; the star symbolizes that the component was imported.

Since I already processed these datasets before recording this video, I know that I need to change the detector sensitivity from Medium to High for the automatic alignment to work on the first try. Let's do that right now in the alignment settings. I am also going to change "Merge components only" from No to Yes for this case. When set to Yes, the application will not align new images, it will only join components. Do not forget to change this option back when you start a new project, otherwise RealityCapture will not align your images there. After changing these settings, I can just click on Align and wait for the alignment to finish.

This time the alignment finished in one minute. The lines that you can see in the 3D view are residuals; they show the difference between the original positions of the cameras and the adjusted positions after merging with the laser scans. They are not a problem at all, and I can disable them from the Scene context tab.
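Conceptually, each residual line is just the distance between a camera's position before and after the merge. A toy sketch in Python; the camera centers below are made-up values, purely for illustration:

    import numpy as np

    # Hypothetical camera centers before and after merging (illustrative values only).
    original = np.array([[10.00, 2.00, 5.00], [12.50, 2.10, 5.40]])
    adjusted = np.array([[10.02, 2.00, 4.97], [12.46, 2.12, 5.41]])

    # Per-camera displacement; small values mean the merge barely moved the cameras.
    residuals = np.linalg.norm(adjusted - original, axis=1)
    print(residuals)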
After the alignment, RealityCapture created a new component containing all the images in the project. For the sake of keeping things organized, I renamed the component to "Merge without control points".

Now let's continue with alignment using control points. This time I will import the component containing the images from the automatic drone flight. The component was successfully imported into the project, and I can delete the older components because I no longer need them. This time I will merge the already merged component (the one without control points) with the automatic drone flight component. I will change the application layout to 1+2 for more comfortable placement of control points: I want to see the 3D view containing the sparse point cloud, which will be used for placing the control points, and I also want to see the 2D view for visualizing the control point suggestions in the images.

To activate the placement of control points, I need to go to the Alignment tab and click on the Control point tool. While the text is highlighted, the tool is active, and when I hover the mouse cursor over the sparse point cloud in the 3D view, I can see image suggestions in the 2D view. To place a control point, just click with the left mouse button. While placing control points in the 3D view, I don't have to be that accurate, because I will later refine each control point's position in the image suggestions.
Because there are no targets or markers in the scene, and because the datasets were acquired on different days, the only similar features in the images were the tombstones and the church itself, so I decided to use the tombstones for placing the control points. I placed three control points in the first component; three points is the minimum for the alignment, but we recommend using more, and they should be evenly distributed in the overlapping areas of your components, when possible of course. In the 1Ds view, we can already see some image suggestions under each control point's name.
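Why is three the minimum? Merging two components essentially means estimating a similarity transform (rotation, translation, and scale) between them, and three non-collinear point pairs are the fewest that pin all of those down. Here is a sketch of the classic Umeyama estimation with numpy; the tombstone coordinates are invented for illustration, and RealityCapture's actual solver is of course not exposed:

    import numpy as np

    def similarity_transform(src, dst):
        """Estimate s, R, t so that dst_i ~ s * R @ src_i + t (Umeyama's method).
        src and dst are (N, 3) arrays of matching control points, N >= 3."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        src_c, dst_c = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflections
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        s = (S * np.array([1.0, 1.0, d])).sum() / (src_c ** 2).sum()
        t = mu_d - s * (R @ mu_s)
        return s, R, t

    # Three matching tombstone corners, one set per component (made-up coordinates):
    a = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 3.0, 1.0]])
    b = np.array([[5.0, 1.0, 0.2], [5.0, 3.0, 0.2], [2.0, 1.0, 1.2]])
    s, R, t = similarity_transform(a, b)

With more than three well-distributed points, the same least-squares fit simply averages out the clicking inaccuracy, which is why we recommend using more.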
Having finished with the first component, I now have to place the same control points in the second component, in exactly the same spots. To place an existing control point, it needs to be highlighted in the 1Ds view; otherwise, clicking would create a new control point. On the last control point, number 2, I wasn't entirely sure which corner of the tombstone I had picked, so just in case I went back to check the correct position.
After the control points are placed in both components, I can disable the Control point tool and start refining the positions of the points in the image suggestions. I change the application layout back to 1+1 and switch the large window from 3D to 2D. I select the first suggestion under control point number 0, click on the control point in the 2D view with the left mouse button, drag it to the correct spot, and, while still holding the left mouse button, press the Enter key; this way the application confirms the point's position and automatically switches to the next suggestion. Like this, you can place a lot of control points in a relatively short time. I repeat the process for all the suggestions; for the second and the third control point I sped up the video, because the process is exactly the same. Once we are finished, we can check the points' positions in the 3D view in both components. Before the alignment we can save the project, and after it's saved we can just click on Align images.
This time the alignment finished in 2 minutes and 38 seconds, and we have a new component containing 694 images out of 694. I can change the scale of the cameras in the Scene context tab to make them more visible.

After the alignment is finished, I can continue with the reconstruction. First I adjust the size of the reconstruction region from the top view, using the colored circles to manipulate each side of the region. Next, I switch to one of the side views and adjust the top and the bottom of the reconstruction region, so we are not cropping out parts that we would like to reconstruct. When we are ready, I can go to the Reconstruction tab and start the reconstruction of the mesh by clicking on Normal detail. In Normal detail, RealityCapture downscales the original images by a factor of 2 with the default settings.
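As a quick back-of-the-envelope on what that downscaling means (assuming, as I understand the setting, that the factor applies to each image dimension), a factor of 2 leaves a quarter of the original pixels:

    # Illustrative only: a hypothetical 5472x3648 (20 MP) drone frame.
    width, height, factor = 5472, 3648, 2
    print((width // factor) * (height // factor))  # ~5 MP, i.e. 1/4 of the pixels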
The reconstruction finished in approximately 1 hour. If the scene contains a lot of polygons, you can get a warning saying that you do not have enough video memory to display all of them. My graphics card, with 6 GB of video memory, can display up to 40 million polygons, but my mesh has more than that, which is why I can only view the dense point cloud in the 3D view. If you do not want to see this warning again, just check "Do not show again" and click on Close.
RealityCapture's reconstruction algorithm creates watertight meshes, which means we also have large polygons on the bottom side of the mesh. We can also see some floating tree branches next to the church, so now I will show you how to get rid of both the bottom polygons and the floating, unwanted branches. To delete the bottom polygons, go to the Reconstruction tab, pick the Advanced selection tool, and use "Select marginal triangles" in the 1Ds view. RealityCapture will select the bottom polygons, and to delete them I click on Filter selection. RealityCapture will create a new mesh, this time without the selected polygons.
We still have the floating tree branches, and there are multiple ways to select them, but I can use the lasso tool to select a small part of the church, expand the selection, and then invert it to select the tree branches and possibly some other floating geometry that we didn't notice before. To delete the unwanted polygons, I click on Filter selection again. Just as a side note: to keep only the largest connected part of the mesh, I could have used "Select the largest connected component" in the Advanced selection tool, which does conceptually what the sketch below shows.
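For the curious, "largest connected component" on a triangle mesh is plain graph connectivity: triangles sharing a vertex belong to the same island, and you keep the biggest island. A small union-find sketch over an indexed triangle list (just the general idea; how RealityCapture defines connectivity internally is not documented here):

    def largest_connected_component(triangles):
        """triangles: list of (i, j, k) vertex-index triples.
        Returns indices of the triangles in the largest vertex-connected island."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for i, j, k in triangles:
            union(i, j)
            union(j, k)

        islands = {}
        for t, (i, _, _) in enumerate(triangles):
            islands.setdefault(find(i), []).append(t)
        return max(islands.values(), key=len)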
After our mesh is cleaned, we can start texturing it. To check the settings, let's go to the texturing settings in the Reconstruction tab. By default, RealityCapture will create 1 texture with a maximum texture resolution of 8K. If you want the best possible quality for your texture, you need to change the texturing style to "Fixed texel size" and set the fixed texel size to optimal. With these settings, RealityCapture will create as many textures as needed to reach 100% texture quality.
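How many textures is "as many as needed"? Roughly the unwrapped surface area divided by the area one texture page covers at the chosen texel size. A hedged back-of-the-envelope; the surface area and texel size below are invented numbers, and real unwrapping adds padding and island overhead on top:

    import math

    surface_area_m2 = 1200.0  # hypothetical total mesh surface area
    texel_size_m = 0.002      # 2 mm per texel; the "optimal" value depends on image resolution
    page_side_px = 8192       # one 8K texture page

    texels_needed = surface_area_m2 / texel_size_m ** 2
    pages = math.ceil(texels_needed / page_side_px ** 2)
    print(pages)  # lower bound on the number of 8K textures (5 here)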
At the beginning, I mentioned that our laser scans have color, but because the onboard cameras of laser scanners have lower quality, I will disable the scans for the texturing process; I do not want them to lower the quality of the final texture. I can select them in the 1Ds view and disable them for texturing there as well. After disabling the laser scans, I can go to the Reconstruction tab and click on Texture.

The texturing finished in 30 minutes, and now we are looking at the final textured mesh. I have already simplified it to 1 million polygons, so we can see the textured mesh in Sweet mode instead of the vertices mode, which displays only the dense point cloud. For comparison, I also prepared the exact same mesh textured using the laser scans only, and right away you can see the huge difference in texture quality.
And that's it, we have reached the end of this tutorial. To recap: we had 3 datasets of images, one from the ground and two from the air, plus 10 laser scans. I showed you the component workflow, automatic alignment without control points, and alignment with control points for the case when there is not enough overlap between the images. We reconstructed the mesh, deleted the unwanted polygons, and finally textured the mesh. Thank you very much for your attention, and stay tuned for future tutorials. Bye.

4 thoughts on “How to merge drone images with terrestrial laser scans inside of RealityCapture.”

  1. Great tutorial – being able to merge with terrestrial (structured) scans is great. However, my company is increasingly moving to mobile (unstructured) LIDAR scanning systems which don't have built-in cameras. Is there any way to import and merge unstructured scans in a similar way? I would be very curious to know. Thank you!

  2. About marking the control points – could there be a situation where it would be necessary to mark the same CPs on the laser scans as well?
