
Smooth Performance with Large Point Clouds! 6 Tips to Speed Up Cross-Section Extraction

By LRTK Team (Lefixea Inc.)


Table of Contents

Introduction

Tip 1: Pre-subsetting the data

Tip 2: Point thinning

Tip 3: Hardware resource optimization

Tip 4: Choice of processing algorithms

Tip 5: Staged processing and batch processing

Tip 6: Pre-removal of unnecessary data

Practical example of a combined approach

Troubleshooting

Future technological developments

Metrics for managing the trade-off between processing speed and accuracy

Operational structure for speeding up large-scale projects

Improving processing efficiency through stepwise skill development


Introduction

Large-scale point cloud data measured by drones and high-precision laser scanners contains an enormous amount of point information. When extracting cross-sections from such large-volume data, many practitioners experience long processing times and decreased work efficiency. In particular, when rapid on-site verification or the creation of multiple cross-sections is required, shortening processing time is important.


This article explains in detail six practical tips for extracting cross-sections quickly from large point cloud datasets. By understanding and properly applying these tips, you can greatly reduce processing time and significantly improve on-site responsiveness. Without replacing your computer hardware with higher-performance components, you can dramatically improve processing efficiency through software-side techniques.


Tip 1: Pre-subsetting the data

The first step when processing large point cloud datasets is to remove unnecessary parts in advance and limit the data to only the required extent. This process is called "subsetting." This step lays the foundation for speeding up all subsequent processing.


In practice, it is common to define the subset area as extending a few tens of meters (roughly 100 ft) around the measurement target. This allows you to secure all the data necessary for creating cross-sections while excluding unnecessary surrounding data. The subsetting process itself completes within a few minutes in almost all tools, so the effort required to perform it is minimal compared to the overall time-saving effect.


First, clarify the geographic extent of the area for which the cross-sectional drawing will be created. For example, in road surveying, only data around the road section in question is necessary. From all acquired data, define a rectangular area that includes the surroundings of the target area (generally a few tens of meters, or roughly 100 ft, beyond it).


Next, perform a "clipping" process to remove points outside this rectangular region. Many point cloud processing tools include spatial filtering functions that can extract only the points within a specified rectangular region. This process can reduce the file size to a fraction of its original size.
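As a rough sketch of what such a spatial clip looks like in code, assuming the point cloud is a NumPy array of XYZ coordinates (`clip_to_bbox` is a hypothetical helper for illustration, not a function from any particular tool):

```python
import numpy as np

def clip_to_bbox(points, x_range, y_range):
    """Keep only points whose X/Y fall inside a rectangular region.

    points: (N, 3) array of X, Y, Z coordinates.
    x_range, y_range: (min, max) tuples defining the clip rectangle.
    """
    x, y = points[:, 0], points[:, 1]
    mask = (
        (x >= x_range[0]) & (x <= x_range[1])
        & (y >= y_range[0]) & (y <= y_range[1])
    )
    return points[mask]

# Example: keep a 100 m x 40 m strip around a road section.
cloud = np.random.uniform(0, 500, size=(100_000, 3))
subset = clip_to_bbox(cloud, x_range=(200, 300), y_range=(100, 140))
```

Because the mask is a single vectorized comparison, the clip itself runs in milliseconds even on large arrays; saving `subset` under a new name preserves the original data, as recommended above.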


It is important to save the subsetted file under a different name and retain the original data. If cross-sections are needed for additional ranges later, it is useful to be able to create a new subset from the original data.


The time savings from subsetting are substantial, and in many cases can reduce the overall processing time by more than half. This simple step speeds up all subsequent processing.


Tip 2: Point thinning

Reducing point cloud density through "downsampling" (also called "thinning") is another effective way to speed up processing. This operation reduces the number of points while preserving the overall shape of the terrain.


There are several methods for thinning. The simplest is "fixed-interval thinning", which keeps only one point at each fixed distance (for example, at intervals of 0.1 m (0.3 ft)) and removes the others. This method is fast and easy to implement.
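A minimal sketch of fixed-interval thinning, assuming NumPy and coordinates in meters (`thin_fixed_interval` is a hypothetical helper; real tools expose this under various names such as grid or spatial subsampling):

```python
import numpy as np

def thin_fixed_interval(points, cell=0.1):
    """Keep one point per `cell`-sized grid cell (fixed-interval thinning).

    points: (N, 3) array; cell: grid spacing in meters (e.g. 0.1 m).
    """
    # Map each point's X/Y to an integer grid cell index.
    idx = np.floor(points[:, :2] / cell).astype(np.int64)
    # np.unique keeps the first point encountered in each occupied cell.
    _, keep = np.unique(idx, axis=0, return_index=True)
    return points[np.sort(keep)]

dense = np.random.uniform(0, 10, size=(50_000, 3))
thinned = thin_fixed_interval(dense, cell=0.1)
```

The 0.1 m cell size here mirrors the interval mentioned above; in practice it should be tuned against the accuracy requirement discussed below.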


Next is "statistical thinning." It is a method that identifies important points based on distances to neighboring points and the distribution of intensities, and removes unnecessary points. Because this method more readily captures changes in terrain, it is superior in terms of accuracy, but its processing time is somewhat longer.


The degree of thinning should be determined by balancing the original point cloud density and the required level of accuracy. In general, reducing the number of points per 1 m^2 (10.76 ft^2) to 50–80% of the original density offers the best balance between practical accuracy and processing speed.


Even after thinning, data can typically be used to generate cross-sections with sufficient accuracy. In practice, it is useful to compare processing results under multiple thinning conditions to determine in advance how much reduction is acceptable.


Tip 3: Hardware resource optimization

Optimizing memory and CPU utilization on a computer can improve processing speed.


First, close any other applications that are running. In particular, closing memory-intensive web browsers and media playback applications will maximize the memory available to the point cloud processing tool.


Next, check the tool's memory allocation settings. Many software programs have a setting to specify the maximum available memory. Setting this value to around 80–90% of the system's memory will maximize processing performance while maintaining stability.


For tools that support multithreaded processing, you may be able to specify the number of CPU cores to use. Setting this to the maximum allows you to utilize multiple cores in parallel and speed up processing.
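As an illustration of spreading work across cores, here is a sketch using only Python's standard library and NumPy (the tile-splitting scheme and the `process_tile` placeholder are assumptions for illustration, not any tool's API; NumPy releases the GIL during many array operations, so threads can overlap real work, while pure-Python workloads would need a process pool instead):

```python
import os
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # Placeholder per-tile work: here, simply a mean elevation.
    return tile[:, 2].mean()

cloud = np.random.uniform(0, 100, size=(40_000, 3))

# Split the cloud into one tile per available CPU core.
tiles = np.array_split(cloud, os.cpu_count() or 4)

# Run the tiles concurrently, one worker per core.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(process_tile, tiles))
```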


If system memory is limited, check the virtual memory (disk memory swap function) settings. If virtual memory is enabled, it is possible to process data larger than the system memory, but processing speed will decrease. Make your decision by considering the balance between the time to complete processing and the on-site waiting time.


Tip 4: Choice of processing algorithms

When a point cloud processing tool offers multiple algorithms, it is important to select the optimal one by taking processing speed and accuracy into account.


Generally, the following ranking holds. The fastest is the "grid-based method." By dividing three-dimensional space into a grid and selecting a representative point for each cell, it enables fast processing. Accuracy may be somewhat reduced, but it is suitable for the initial processing of large-scale data.


The next fastest are algorithms that use KD-trees or octree structures. These build hierarchical spatial structures to accelerate point searches. They achieve a balanced processing speed and accuracy.
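A small sketch of why tree structures help, assuming SciPy's KD-tree implementation is available: building the tree costs O(N log N) once, after which each neighbor query is roughly O(log N) instead of the O(N) scan a brute-force search would need.

```python
import numpy as np
from scipy.spatial import cKDTree

cloud = np.random.uniform(0, 100, size=(20_000, 3))

# One-time O(N log N) build; queries afterwards are ~O(log N) each.
tree = cKDTree(cloud)

# Find the 8 nearest neighbours of a query point near the centre.
query = np.array([50.0, 50.0, 50.0])
dists, idxs = tree.query(query, k=8)
```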


The slowest category is "statistical processing and filtering." These methods take time because, for each point, they compute detailed relationships with the surrounding points. Accuracy is high, but they are not well suited to processing large-scale data in full.


Selection of processing algorithms should be based on data size and the required accuracy. A staged approach is efficient: in the initial phase, capture the overall shape with a fast algorithm and, if necessary, apply a high-precision algorithm in later stages.


Tip 5: Staged processing and batch processing

Rather than processing large volumes of data at once, you can improve overall efficiency by splitting processing into multiple stages.


First, apply coarse downsampling to the subsetted data. Processing time is short, so you can quickly check the results and decide whether further detailed processing is necessary.


Next, perform detailed processing with a high-precision algorithm only where necessary. This approach avoids unnecessary detailed work and shortens the overall time.


If batch processing mode is available, we recommend using it. By scheduling the creation of multiple cross-sections and data processing to run in a single batch, you can reduce the interactive burden on users. You can also schedule it to run overnight.
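As a sketch of batching several cross-section extractions in one run (assuming NumPy; the plane-slab selection and the `extract_section` helper are simplifications for illustration, not a specific tool's batch mode):

```python
import numpy as np

def extract_section(points, station_x, half_width=0.25):
    """Collect points within `half_width` of the vertical plane x = station_x."""
    mask = np.abs(points[:, 0] - station_x) <= half_width
    section = points[mask]
    # Order by the across-track coordinate to form a profile line.
    return section[np.argsort(section[:, 1])]

cloud = np.random.uniform(0, 100, size=(200_000, 3))

# Batch: one cross-section every 20 m along the alignment, collected
# in a single run so all results can be reviewed or exported together.
stations = np.arange(10, 100, 20)
sections = {s: extract_section(cloud, s) for s in stations}
```

Wrapping such a loop in a script is also what makes unattended overnight runs practical.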


Tip 6: Pre-removal of unnecessary data

Point cloud data can contain measurement errors and reflection noise. By removing these unnecessary data beforehand, you can reduce the amount of data to be processed and speed up processing.


First, apply filtering based on classification information. When classification labels such as "ground", "building", "tree", and "noise" are assigned, remove non-target categories. For example, in road measurements, keeping only "ground" and removing "building", "tree", and "noise" can greatly reduce the amount of data to be processed.
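A sketch of classification filtering, assuming NumPy arrays and the standard ASPRS/LAS class codes (2 = ground, 5 = high vegetation, 6 = building, 7 = low point/noise); the `keep_classes` helper and the synthetic labels are illustrative:

```python
import numpy as np

# Standard ASPRS/LAS classification codes (subset).
GROUND, HIGH_VEGETATION, BUILDING, LOW_NOISE = 2, 5, 6, 7

def keep_classes(points, classes, wanted):
    """Keep only points whose classification label is in `wanted`."""
    mask = np.isin(classes, list(wanted))
    return points[mask], classes[mask]

# Synthetic example: points with random class labels.
rng = np.random.default_rng(0)
cloud = rng.uniform(0, 100, size=(10_000, 3))
labels = rng.choice([GROUND, HIGH_VEGETATION, BUILDING, LOW_NOISE], size=10_000)

# Road survey: keep ground only; drop buildings, vegetation, and noise.
ground_pts, ground_lbls = keep_classes(cloud, labels, wanted={GROUND})
```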


Next, perform filtering based on intensity values and classification confidence. By removing points with low measurement confidence (for example, confidence of 50% or less), you can further reduce noise.


Finally, statistical outlier detection is applied to automatically remove points that are clearly outliers from the surrounding data. This process can eliminate the effects of measurement errors.
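A sketch of the statistical approach, assuming SciPy is available; the neighbor count `k` and the `n_sigma` cutoff are illustrative defaults, not values from any specific tool:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, n_sigma=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than `n_sigma` standard deviations above the global average."""
    tree = cKDTree(points)
    # Query k+1 neighbours: the nearest neighbour of a point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    threshold = mean_d.mean() + n_sigma * mean_d.std()
    return points[mean_d <= threshold]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(5_000, 3))
# Add a few far-away stray points to simulate reflection noise.
noise = rng.uniform(100, 110, size=(20, 3))
cleaned = remove_statistical_outliers(np.vstack([cloud, noise]))
```

Isolated stray points sit far from their neighbors, so their mean neighbor distance lands well above the threshold and they are discarded.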


By removing unnecessary data, it is possible to reduce the amount of data to be processed by up to 70–80%. Depending on how much the data shrinks, overall processing time can be dramatically shortened.


Practical example of a combined approach

This section walks through a practical approach to speeding up processing by combining these six tips.


For large-scale road projects, the following workflow is effective. First, subset only the data around the target road section. Next, extract only "ground" from the classification information and remove "buildings" and "trees". Then, remove noise using statistical outlier detection. Furthermore, reduce point density by 50% using fixed-interval thinning. Finally, rapidly generate cross-section lines using a grid-based method.


This approach can reduce the original data size to one-tenth or less and dramatically shorten processing time from several minutes to tens of seconds.


Troubleshooting

This section explains the measures to take if a problem occurs during processing.


If processing is interrupted due to insufficient memory, increase the degree of point thinning, or further subdivide the data into smaller subsets. As a last resort, split the processing into multiple small batches and run them sequentially to handle datasets that exceed memory limits.


If processing speed is slower than expected, check CPU usage and memory usage. If CPU usage is low, other applications may be occupying resources. If memory usage is close to 100%, the system is frequently accessing virtual memory, which reduces processing speed. In this case, moving the swap file to high-speed storage such as an SSD can provide some improvement.


If the accuracy of the processing results is lower than expected, it may be that the downsampling is too aggressive or that filtering has mistakenly removed important data. Adjust the processing parameters and run multiple test executions to find the optimal values. Comparing and verifying against the original data is also effective.


Hardware upgrades are also worth considering as a long-term solution. In particular, increasing memory capacity and replacing hard drives with SSDs can have a significant impact on processing speed.


Future technological developments

Point cloud processing technologies are continuously evolving, and more advanced optimization methods are expected to emerge. By parallelizing computation using the GPU (graphics processing unit), further acceleration may be achievable. In particular, for large-scale data processing, the parallel processing capabilities of GPUs often greatly exceed those of traditional CPUs, and there are increasing cases in which processing time can be reduced to a fraction of the original.


Alongside the evolution of hardware, algorithmic efficiency is also advancing. There is active work on techniques that perform more processing with less memory and on the development of approximate algorithms that reduce computational complexity. These advances mean that large-scale data that once could only be processed on high-performance workstations can now be handled on ordinary office computers.


Furthermore, the utilization of distributed computing technologies will also be an important theme going forward. By processing data concurrently on multiple computers, reductions in processing time and expansions in processing scale can be achieved simultaneously.


Leveraging cloud-based processing infrastructure offers similar benefits. It makes it possible to process large volumes of data at high speed without being constrained by the memory or CPU of local computers.


Furthermore, combining with high-precision positioning technologies also affects processing efficiency. For example, the higher the coordinate accuracy of the reference points obtained from an iPhone-mounted GNSS high-precision positioning device (LRTK), the lower the complexity of point cloud data preprocessing (coordinate system transformation and alignment), and the shorter the overall processing time.


By acquiring high-precision reference coordinates in advance, the need for geometric correction and coordinate transformation in point cloud processing is reduced, allowing you to concentrate on the primary processing (cross-section generation). The integration of such high-precision positioning technology with point cloud processing will further improve practical efficiency.


Metrics for managing the trade-off between processing speed and accuracy

When streamlining the extraction of cross-sectional drawings from large point clouds, it is important not to pursue processing speed alone but to strike a balance while always being mindful of the required level of accuracy. Because the acceptable accuracy varies depending on the project's objectives, first clarifying "how much accuracy is needed" leads to selecting the most appropriate techniques.


Establishing quantitative management metrics is effective. For example, by setting concrete indicators such as "maintain the post-processed point cloud density at 30% or more of the original" and "RMS error of 5 cm (2.0 in) or less," the criteria for determining the extent of decimation or filtering become clear. Ideally, these indicators should be set based on historical performance data from similar past projects.
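As a sketch of how such metrics might be computed, assuming NumPy/SciPy, a smooth synthetic terrain, and the example thresholds quoted above (the `quality_metrics` helper is hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def quality_metrics(original, processed):
    """Density retention and RMS elevation error of `processed`
    relative to `original` (nearest-neighbour comparison in XY)."""
    retention = len(processed) / len(original)
    tree = cKDTree(processed[:, :2])
    _, nearest = tree.query(original[:, :2])
    rms = np.sqrt(np.mean((original[:, 2] - processed[nearest, 2]) ** 2))
    return retention, rms

# Synthetic gently sloping terrain: z is a smooth function of x, y.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 50, size=(20_000, 2))
z = 0.01 * (xy[:, 0] + xy[:, 1])
cloud = np.column_stack([xy, z])

thinned = cloud[::2]                      # 50% decimation for illustration
retention, rms = quality_metrics(cloud, thinned)

# Acceptance check against the example project thresholds.
ok = retention >= 0.30 and rms <= 0.05    # 30% density, 5 cm RMS
```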


Validation through trial processing is also an important step. Before performing full-scale processing of all data, try multiple processing configurations on a small subset and evaluate the trade-off between processing time and accuracy. Spending 30 minutes on this validation prevents failures and rework during the main processing and improves overall project efficiency.
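A sketch of such a trial run, assuming NumPy; timing a few candidate thinning settings on a small sample with the standard library's `time.perf_counter` (the `thin` helper and the cell sizes are illustrative):

```python
import time
import numpy as np

def thin(points, cell):
    """Grid-based thinning: keep one point per XY cell of size `cell`."""
    idx = np.floor(points[:, :2] / cell).astype(np.int64)
    _, keep = np.unique(idx, axis=0, return_index=True)
    return points[keep]

# Small trial subset, stand-in for a slice of the real survey data.
sample = np.random.uniform(0, 100, size=(50_000, 3))

# Time each candidate setting and record how many points survive.
results = {}
for cell in (0.05, 0.1, 0.2):
    t0 = time.perf_counter()
    out = thin(sample, cell)
    results[cell] = (time.perf_counter() - t0, len(out))
```

Comparing the recorded (time, point count) pairs against the accuracy metric above is what turns the trade-off decision into a quick table lookup rather than guesswork.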


Recording and utilizing processing logs is often overlooked but effective. By recording the processing time for each case, the settings used, and evaluations of the results’ quality, decision-making about settings for similar tasks in the future becomes faster. Sharing these processing logs within the organization helps accumulate knowledge across the entire team.


Operational structure for speeding up large-scale projects

In large-scale projects, performance is influenced not only by individual technical ability but also by the team's operational framework. When multiple team members perform tasks in parallel, it is important to define roles and standardize workflows.


Separating members dedicated to preprocessing (subsetting, downsampling, and removal of unnecessary data) from those responsible for cross-section extraction increases the throughput of the entire process. Because cross-section extraction can be started sequentially on data as soon as preprocessing is completed, overall waiting time is greatly reduced.


Using cloud storage also facilitates smoother collaboration within the team. By saving intermediate processing data to the cloud, other team members can access the data to review it or perform additional processing. By leveraging version control features, you can restore a previous state even if an incorrect operation was performed.


It should not be overlooked that the quality of measurement data directly affects processing speed. By setting high-precision reference points during measurement with an LRTK (iPhone-mounted GNSS high-precision positioning device), coordinate integration processing is simplified and the time required to merge multiple scan datasets is reduced. With high-precision reference coordinates, the downstream task of verifying data consistency can be greatly reduced, contributing to a shorter overall processing time.


Improving processing efficiency through stepwise skill development

For fast processing of large point clouds, not only technical knowledge but also rules of thumb based on a deep understanding of software and hardware characteristics play an important role. At the beginner stage, a step-by-step approach—trying the six tips one by one to confirm their effects and, as you become accustomed, combining multiple tips—will accelerate skill development.


The effectiveness of each tip varies depending on the type and scale of the data. For example, subsetting consistently yields high benefits, but the benefit of thinning depends greatly on point cloud density and the required accuracy. By learning empirically which tips are particularly effective for your project, you can quickly determine the optimal processing strategy for each project.


Improving processing speed goes beyond merely saving time. When processing is accelerated, more trial and error becomes possible, increasing the number of experiments to find the optimal settings. As a result, a virtuous cycle emerges in which the quality of cross-sectional diagrams is also improved.


Integration with high-precision positioning technologies also affects processing efficiency. By utilizing accurate reference point coordinates obtained with LRTK (an iPhone-mounted GNSS high-precision positioning device), the integration accuracy of multiple scan datasets is improved, reducing the data-alignment work in downstream processing. Ensuring accuracy at the measurement stage thus directly contributes to improved efficiency in the processing stage.


When working to improve processing speed, the most important thing is to accurately identify the current processing bottlenecks. Measure which step takes the most time and apply the optimal tips to that step to achieve the greatest improvement. Regular recording and analysis of processing times form the basis for continuous efficiency gains. By taking an overview of the entire workflow from point cloud measurement to cross-section creation and steadily optimizing each step, you can realize improved productivity for the entire project.


High-speed processing techniques for large-volume point clouds are an important competitive advantage in streamlining surveying operations. By continuously practicing these six tips in the field, you will increase your technical proficiency and gain the capability to handle more complex requirements.


Next Steps:
Explore LRTK Products & Workflows

LRTK helps professionals capture absolute coordinates, create georeferenced point clouds, and streamline surveying and construction workflows. Explore the products below, or contact us for a demo, pricing, or implementation support.

LRTK supercharges field accuracy and efficiency

The LRTK series delivers high-precision GNSS positioning for construction, civil engineering, and surveying, enabling significant reductions in work time and major gains in productivity. It makes it easy to handle everything from design surveys and point-cloud scanning to AR, 3D construction, as-built management, and infrastructure inspection.
