Table of Contents
• Introduction
• Setting 1: Optimizing Slice Thickness
• Setting 2: Configuring Point Filtering Conditions
• Setting 3: Adjusting Noise Removal Processing Stages
• Setting 4: Parameters for Automatic Generation of Section Lines
• Setting 5: Quality Verification and Correction Before Output
• Integrated Approach to Improving Accuracy
• Common Issues and Countermeasures
• Learning from Practical Case Studies
• Technological Evolution and Future Directions
• Settings Guide for Different Measurement Environments
Introduction
When creating cross-sections from point cloud data, a common problem many beginners face is that the section lines become jagged. Even after obtaining measurement data and spending time to produce the cross-sections, many people have experienced that the finished section lines are full of bumps and unusable for practical work. This problem is not simply caused by poor quality of the point cloud data; in most cases it can be solved by properly configuring the software.
In this article, we provide a detailed explanation of five key settings for creating high-precision cross sections from point clouds. By accurately understanding and adjusting these settings to match your project, you can reliably generate highly accurate cross sections. If you are struggling with jagged cross sections, be sure to check and review these settings.
Setting 1: Optimizing Slice Thickness
One of the most important factors determining the accuracy of a cross-sectional drawing is the slice thickness (section thickness) setting. Slice thickness is the parameter that specifies how far to either side of the measurement line points will be included in the cross-section. This parameter has a direct impact on the quality of the cross-sectional drawing.
Slice thickness is not merely a geometric parameter but is based on statistical sampling methods. Around the object being measured there will always be noise and measurement errors, and slice thickness determines to what extent these influences affect the measurement. To understand slice thickness physically, it is useful to envision an actual measurement site. For example, when creating a longitudinal profile of a straight road, a perfectly straight surface does not exist; there are slight irregularities and gradients. Slice thickness can be regarded as the parameter that decides to what extent these natural irregularities are accepted as part of the "normal terrain."
If the slice thickness is too large, data from positions distant from the measurement line become mixed in, causing the included information to differ from the actual cross-sectional shape. For example, if the slice thickness is 1 m (3.3 ft), all points within ±1 m (±3.3 ft) of the measurement line will be included, and structures or obstacles that should not be part of the cross-section may be reflected.
On the other hand, if the slice thickness is too small, the number of data points included will be insufficient, producing statistically unreliable results. In areas with low point cloud density, if the slice thickness is too small the data may consist of only a few points, and the lines connecting them will appear jagged.
To determine the optimal slice thickness, we recommend the following process. First, check the density of the point cloud. By understanding the number of points per square meter, you can determine the lower bound for the slice thickness. Typically, for statistical confidence, each cross-section should contain at least several dozen points.
Next, consider the characteristics of the terrain or structure in question. In areas with complex terrain or many undulations, you should set a smaller slice thickness to capture fine details. Conversely, for relatively simple or flat terrain, it is acceptable to set the slice thickness somewhat larger.
In practice, the most reliable approach is to create trial cross-sections at multiple slice thicknesses and compare the results. Try several values such as slice thicknesses of 0.3 m (1.0 ft), 0.5 m (1.6 ft), 0.7 m (2.3 ft), and 1 m (3.3 ft), and choose the value that balances visual smoothness and consistency with the original data.
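The trial-and-compare process above can be sketched in code. The snippet below is a minimal illustration, assuming the point cloud is an N×3 NumPy array and the section line is a straight 2D line; `slice_points` and the synthetic cloud are hypothetical, not part of any particular software package.

```python
import numpy as np

def slice_points(points, origin, direction, half_width):
    """Return points within ±half_width of a straight section line.

    points: (N, 3) array of x, y, z coordinates
    origin, direction: 2D origin and direction of the section line (x, y)
    half_width: slice half-thickness in metres
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                      # ensure unit length
    rel = points[:, :2] - np.asarray(origin, dtype=float)
    # signed perpendicular distance from the section line
    perp = rel[:, 0] * -d[1] + rel[:, 1] * d[0]
    return points[np.abs(perp) <= half_width]

# Trial run at several candidate thicknesses; keep the smallest one
# that still yields at least a few dozen points per section.
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 50, 2000),
                         rng.uniform(-3, 3, 2000),
                         rng.normal(10, 0.05, 2000)])
for half_width in (0.3, 0.5, 0.7, 1.0):
    n = len(slice_points(cloud, (0, 0), (1, 0), half_width))
    print(f"±{half_width} m: {n} points")
```

Comparing the point counts at each candidate thickness makes the statistical lower bound from the previous paragraph concrete before you commit to a value.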
Setting 2: Configuring Point Filtering Conditions
Another cause of a jagged cross-sectional diagram is the presence of mixed noise and outlier points in the data. By effectively removing these points, the accuracy of the cross-sectional diagram can be greatly improved. Filtering is an extremely important step in data preprocessing and largely determines the success of subsequent processing.
The filtering process should be customized according to the measurement environment. For measurements conducted outdoors, atmospheric fluctuations and wind-induced reflections should be taken into account. For indoor measurements, reflective surfaces and insufficient lighting that cause measurement errors become the main sources of noise. Understanding the noise characteristics specific to each environment and setting filtering conditions accordingly is the key to obtaining high-quality results.
First, consider filtering based on height (Z value). You can predefine the height range to be measured and automatically exclude points outside that range. For example, in road surveys, target only the range from the ground surface up to several meters above it, and exclude points above or below that range.
Next, make use of filtering based on classification information. Many point cloud datasets include classification labels such as "ground", "building", "trees", and "noise". When creating longitudinal profiles of roads, use only the "ground" classification and exclude "trees" and "building" to more accurately represent the terrain.
Additionally, we recommend leveraging statistical outlier detection. Most software includes features that use statistical methods to automatically detect anomalous values that deviate significantly from the surrounding data. Enabling this feature can effectively remove noise caused by measurement errors or reflection errors.
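A simple version of such statistical outlier detection can be sketched with SciPy's k-d tree. This is an illustrative implementation of the common mean-distance-to-neighbors criterion, not the exact algorithm of any specific package; the function name and thresholds are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    deviates more than std_ratio standard deviations from the average."""
    tree = cKDTree(points)
    # query k+1 neighbours because each point is its own nearest neighbour
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

# Synthetic ground plane with two isolated noise spikes
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          rng.normal(0, 0.02, 500)])
spikes = np.array([[5, 5, 4.0], [2, 8, -3.0]])
cleaned = remove_statistical_outliers(np.vstack([ground, spikes]))
```

Tuning `k` and `std_ratio` trades aggressiveness against the risk of clipping legitimate sparse regions such as slope edges.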
Filtering based on intensity values (the strength of the return signal) is also effective. By targeting only points within a specific intensity range, you can eliminate measurement errors associated with certain materials or conditions. For example, measurements over water surfaces tend to show different intensity values because their reflection characteristics differ.
Setting 3: Adjusting Noise Removal Processing Stages
To obtain smooth cross-sectional lines, it is essential to perform noise reduction appropriately. However, excessive noise reduction can risk losing the original shape information, so a gradual, staged approach is important.
First, it is important to understand basic noise-reduction filters. A moving average filter replaces the value at each point with the average of the surrounding points. This smooths out isolated noisy points, but if applied excessively it can also flatten sharp terrain changes.
The median filter is a method that uses the median of surrounding points. It is more resistant to outliers than a moving average, so it is suitable for noise removal. In particular, the median filter is effective when isolated anomalous values are present.
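As a quick illustration of the difference, both filters are available in SciPy and can be compared on a small elevation profile containing a single spike (the profile values here are made up):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

# Elevation profile with one isolated spike at index 3
z = np.array([10.0, 10.1, 10.0, 14.0, 10.1, 10.2, 10.1])

smoothed_mean = uniform_filter1d(z, size=3)   # moving average, window of 3
smoothed_med = median_filter(z, size=3)       # median, window of 3
```

The median filter replaces the spike with a neighbouring value, while the moving average only spreads it out, which is exactly the outlier-resistance advantage described above.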
As a practical approach to gradual noise reduction, we recommend the following process. First, process with a weak filter setting and check the results. Then, gradually increase the filter strength as needed, checking the results at each stage to find the optimal balance.
Smoothing is also effective. Smoothing is a technique that, rather than removing noise, smoothly interpolates the trend of a set of data points. By using advanced smoothing methods such as the Savitzky-Golay filter, you can achieve overall smoothness while preserving fine-scale variations.
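A minimal Savitzky-Golay example with SciPy, using a synthetic noisy profile; the window length and polynomial order shown are illustrative starting points, not recommended values for every dataset:

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 10, 101)
z = np.sin(x) + np.random.default_rng(3).normal(0, 0.05, x.size)  # noisy profile

# window_length must be odd; polyorder must be smaller than window_length
z_smooth = savgol_filter(z, window_length=11, polyorder=3)
```

Because the filter fits a low-order polynomial in each window instead of averaging, it suppresses noise while keeping curvature, which is why it preserves fine-scale variation better than a plain moving average.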
It is important to verify at each stage of processing that the processed data appropriately represents the original point cloud data. If processing is too aggressive, the original terrain information can be lost and its practical value compromised.
Setting 4: Parameters for Automatic Generation of Section Lines
The parameter settings used when automatically generating section lines from extracted point cloud data also have a major impact on the final accuracy. Most software provides multiple automatic generation algorithms, and each algorithm produces different results.
Spline fitting is a method for fitting a smooth curve to point cloud data. This algorithm excels at capturing the overall trend of the data, but in areas where terrain changes abruptly it may overlook fine-scale variations. By adjusting the spline's degree and tension (the tightness of the spline), you can customize the results.
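For example, SciPy's `UnivariateSpline` exposes both the degree (`k`) and a smoothing factor (`s`) that plays a role similar to tension; the data and the choice of `s` below are illustrative (a common heuristic sets `s` near the number of points times the noise variance):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy samples along a gentle 10% grade (synthetic)
x = np.linspace(0, 20, 80)
z = 0.1 * x + np.random.default_rng(4).normal(0, 0.03, x.size)

# k is the spline degree; larger s gives a smoother, stiffer curve
spline = UnivariateSpline(x, z, k=3, s=len(x) * 0.03 ** 2)
z_fit = spline(x)
```

Sweeping `s` over a few orders of magnitude and inspecting the residuals is a quick way to see the trade-off between smoothness and fidelity that the text describes.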
Least-squares linear fitting is a method that fits straight lines to each interval and connects them. By representing complex terrain through segmentation, it can capture fine-scale variations, but when many segments are required, management becomes complicated.
Locally weighted regression (LOWESS) is an algorithm for capturing local trends in data. It adapts well to complex variations and provides excellent results in many practical applications. Its key parameter is the width of the local window, which controls the balance between smoothness and sensitivity to local changes.
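A simplified LOWESS-style smoother can be written directly in NumPy to show how the local window width controls the result. This is a didactic sketch (tricube weights, local linear fit at each point), not a production implementation; `lowess_smooth` and `frac` are hypothetical names.

```python
import numpy as np

def lowess_smooth(x, z, frac=0.3):
    """Simplified LOWESS: tricube-weighted local linear fit at each x.
    frac sets the local window as a fraction of the data points."""
    n = len(x)
    k = max(int(frac * n), 3)              # points per local window
    out = np.empty(n)
    for i in range(n):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]         # k nearest neighbours along x
        d = dist[idx]
        w = (1 - (d / d.max()) ** 3) ** 3  # tricube weights, 1 at centre, 0 at edge
        # weighted linear least squares centred on x[i]
        A = np.column_stack([np.ones(k), x[idx] - x[i]])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], z[idx] * sw, rcond=None)
        out[i] = beta[0]                   # fitted value at x[i]
    return out

# Usage on a noisy synthetic profile
x = np.linspace(0, 10, 50)
z = 0.5 * x + np.random.default_rng(5).normal(0, 0.1, x.size)
z_lo = lowess_smooth(x, z, frac=0.3)
```

Raising `frac` widens the window and smooths more aggressively; lowering it makes the result track local variation more closely.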
Running each algorithm experimentally with different parameter settings and comparing the results is a reliable way to find the optimal settings.
Setting 5: Quality Verification and Correction Before Output
After the section lines are automatically generated, it is important to perform a quality check before outputting. Verification at this stage ensures the quality of the final deliverable.
First, perform a visual inspection. Verify by eye that the generated cross-section lines closely match the original point cloud data. Carefully check for any unnatural bumps, abrupt steps, or clearly misplaced lines.
Next, conduct statistical validation. Most software provides functions to calculate the error between the extracted data and the generated lines. Check statistical metrics such as mean error, standard deviation, and maximum error, and determine whether these fall within acceptable ranges. As a general guideline, the standard deviation should be small relative to the drawing's required accuracy; denser point clouds generally support tighter tolerances.
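These residual metrics are straightforward to compute once the sliced points and the generated line are available. The sketch below approximates residuals as vertical (z) differences against the line and uses an assumed project tolerance; all names and values are illustrative.

```python
import numpy as np

def residual_stats(z_points, z_line):
    """Mean, standard deviation, and maximum absolute residual
    between sliced point elevations and the generated section line."""
    r = z_points - z_line
    return {"mean": r.mean(), "std": r.std(), "max_abs": np.abs(r).max()}

# Synthetic check: points scattered 2 cm around a flat 10 m section line
rng = np.random.default_rng(6)
z_line = np.full(200, 10.0)
z_points = z_line + rng.normal(0, 0.02, 200)

stats = residual_stats(z_points, z_line)
tolerance = 0.05   # project-specific acceptance threshold (assumed)
print(stats, "OK" if stats["std"] <= tolerance else "REVIEW")
```

A near-zero mean with a small standard deviation indicates an unbiased, tight fit; a large maximum residual with a small standard deviation points to a few localized problem spots rather than a global misfit.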
Comparing with existing survey data is also effective. If there are existing cross-section drawings or detailed survey data for the same location, compare them with the newly created cross-sections to check for any significant discrepancies.
Finally, perform manual adjustments as necessary. In areas of complex terrain, there may be locations that automatic generation cannot handle. For such locations, add, delete, or move points manually to accurately represent the shape. However, it is important to keep manual corrections to a minimum and to respect the original data.
Integrated Approach to Improving Accuracy
By integrating the five settings described above, you can achieve the highest accuracy. For a practical workflow, we recommend proceeding in the following order.
First, as part of data preprocessing, filter by height range and classification information to exclude unnecessary points. Next, perform statistical outlier detection to remove obvious noise. After that, experimentally set the slice thickness to multiple values and create initial cross-sections.
From the multiple results obtained, select the most appropriate slice thickness. Apply noise reduction processing incrementally at that thickness to achieve smooth cross-sectional contours while avoiding excessive smoothing. Finally, test different automatic generation algorithms and select the best result.
After that, conduct thorough quality verification and apply manual corrections as needed. By implementing this kind of systematic approach, you can generate cross-sectional drawings that are highly accurate and reliable.
Common Issues and Countermeasures
This section explains common problems that tend to occur in practical work and how to address them.
If the cross-sectional contours fluctuate sharply up and down, the slice thickness may be too small or noise removal may be insufficient. Increase the slice thickness or strengthen the noise-reduction processing.
If the cross-sectional line does not capture the true shape, the slice thickness may be too large. Reduce the slice thickness and target more localized data.
If cross-section lines disappear or suddenly change direction in specific sections, there may be a problem with the data quality in those sections. Check the relevant point cloud data and adjust the classification filter as necessary.
Learning from Practical Case Studies
This section introduces effective configuration examples from real-world measurement projects.
For road design measurements, it is generally effective to use the following settings: slice thickness 0.5 m (1.6 ft), target only "ground" in classification information, enable statistical outlier detection, and apply light smoothing. These settings allow lane and shoulder shapes to be captured accurately. Tests across multiple slice thicknesses have empirically confirmed that 0.5 m (1.6 ft) is optimal for many road measurements. However, in narrow alleys or mountain roads with complex gradients, reducing it to 0.3 m (1.0 ft) can represent finer terrain features.
In bridge surveying, set a relatively small slice thickness of 0.2–0.3 m (0.7–1.0 ft) and clearly distinguish classifications such as structural elements and parapets to select the target data. By using automatic generation based on locally weighted regression, the shapes of complex structures can be accurately represented. In the case of bridges, because the thickness of structural members is often tens of centimeters (roughly 0.3–3 ft), setting a small slice thickness is indispensable.
In river measurements, set the slice thickness relatively large at 1-1.5 m (3.3-4.9 ft), try multiple automated generation algorithms, and select the result that best captures the downstream trend. A slightly larger slice thickness is effective for representing the river’s complex lateral undulations while clarifying the overall downstream trend.
By accumulating practical experience, you build up configuration know-how tailored to a project's characteristics. For projects of the same type, referring to the configurations of past successful examples can significantly reduce the time spent on trial and error.
Technological Evolution and Future Directions
Point cloud processing technologies are evolving rapidly, and more advanced automation features are expected to emerge. Features such as automatic parameter optimization using machine learning and AI have already begun to be implemented in some tools. These automation features reduce the need for users to find optimal settings through trial and error, enabling faster and more reliable processing.
At the same time, the sophistication of processing algorithms is advancing. New filtering techniques that enhance statistical robustness, advanced interpolation algorithms that accurately represent complex terrain, and uncertainty-assessment methods that take measurement errors into account are among the theoretical advances that are increasingly being reflected at the implementation level.
The use of cloud computing is also poised to significantly change the future of point cloud processing. It enables the processing of large-scale data beyond the memory and CPU limits of local computers, making higher-precision and more complex analyses increasingly feasible.
Combining with high-precision positioning technologies is also an important theme going forward. By integrating high-precision coordinate data obtained from an iPhone-mounted GNSS high-precision positioning device (LRTK) with point cloud data, more reliable measurement results can be achieved. Using the coordinates of control points acquired from high-precision positioning devices like LRTK to precisely register the point cloud data significantly improves the absolute positional accuracy of cross-sectional drawings.
Thus, combining advanced point cloud processing techniques with high-precision positioning technologies will become the standard approach in modern surveying. Understanding the five settings described in this article will serve as the foundational knowledge to facilitate a smooth transition to future technologies.
Settings Guide for Different Measurement Environments
The five settings that improve the accuracy of cross-sectional drawings have optimal values that vary depending on the measurement environment. By understanding the environmental characteristics and adjusting the settings accordingly, you can improve both work efficiency and the quality of the results.
First, consider the case of outdoor wide-area terrain surveying. In airborne laser scanning and drone surveys, point cloud density is relatively low (about 1–10 points per square meter, i.e., per 10.8 ft²), and a variety of targets such as vegetation and buildings are mixed. In this environment, slice thickness is set relatively thick at about 1–2 m (3.3–6.6 ft), and pre-filtering using classification information (ground, buildings, vegetation) is always performed. Noise removal is set to a moderate level, prioritizing the general shape of the terrain. For quality verification before output, cross-checking with existing topographic maps and aerial photographs is effective.
Next, for detailed measurements of buildings and structures. In close-range measurements using ground-based laser scanners, the point cloud density is very high (multiple points per square centimeter, i.e., per 0.16 in²), and the shapes are complex. In this environment, set a thin slice thickness of about 0.1–0.3 m (0.3–1.0 ft). For filtering, specify the target material and height range strictly. Noise removal is performed mainly with median filtering to preserve the edges of structures. Spline fitting is suitable for automatic generation and can represent the contours of smooth structures.
Measurements involving fluids, such as rivers and coastlines, require special attention. Because water-surface measurements are unstable, prioritize data collected at low tide or during periods of low flow velocity. Erroneous data caused by water-surface reflections should be removed by intensity-value filtering, and slice thickness should be adjusted within the range of 0.5–1.5 m (1.6–4.9 ft) according to terrain characteristics. By setting statistical outlier detection more aggressively, noise from random water-surface reflections can be effectively removed.
Adjusting settings to match the measurement environment, as described, is based on practical experience and knowledge. Accumulating measurement experience across various environments makes setting decisions faster and more reliable, contributing to improved quality of cross-sectional drawings.
Skill development in point cloud processing is built through continuous practice. Properly leveraging the five settings and being able to make flexible judgments based on the measurement environment is the path to becoming a practitioner-level technician.
Next Steps:
Explore LRTK Products & Workflows
LRTK helps professionals capture absolute coordinates, create georeferenced point clouds, and streamline surveying and construction workflows. Explore the products below, or contact us for a demo, pricing, or implementation support.
LRTK supercharges field accuracy and efficiency
The LRTK series delivers high-precision GNSS positioning for construction, civil engineering, and surveying, enabling significant reductions in work time and major gains in productivity. It makes it easy to handle everything from design surveys and point-cloud scanning to AR, 3D construction, as-built management, and infrastructure inspection.

