Table of Contents
• What challenges arise when faced with excessively large point cloud data
• Tip 1. Split point cloud data to load only the necessary parts
• Tip 2. Reduce rendering load with LOD (Level of Detail) processing
• Tip 3. Reduce computational load by optimizing the coordinate system (convert to local coordinates)
• Tip 4. Convert point cloud data to lightweight formats
• Tip 5. Leverage GPU and optimize viewer settings
• What PC specifications are suitable for point cloud display and processing
• Summary: LRTK as an option for smoothly handling heavy point clouds
What challenges arise when faced with excessively large point cloud data?
In construction, civil engineering, and surveying, opportunities to work with point cloud data in formats such as LAS and LAZ are increasing. However, point cloud files tend to be very large, and we often hear complaints like "too large to open" and "display and manipulation are extremely slow." Point cloud data is a vast collection of 3D points, typically millions to tens of millions of them: for example, measuring a 300 m (984.3 ft) × 400 m (1312.3 ft) area at the public surveying standard density (4 or more points per 1 m² (10.8 ft²)) can yield more than 10 million points, and the resulting file can exceed 100 MB (sometimes around 500 MB). The amount of information is orders of magnitude greater than in traditional 2D drawings, and this sheer number of points is the main reason files are heavy.

Furthermore, each point often carries rich attributes beyond its position coordinates (X, Y, Z), such as RGB color, return intensity, and classification codes, which increases the data per point and inflates the overall file size. When every site feature (terrain, buildings, trees, structures, and so on) is captured in a single point cloud, many points outside the areas you actually need are included, and this mixture of unnecessary points also drives up data volume.

The file format cannot be ignored either. Text formats (CSV, PTS, and the like) are particularly inefficient because they store numbers as strings, making them larger than LAS (a binary format) and far larger still than LAZ (a compressed form of LAS). The combination of a vast number of points, rich attribute information, and format inefficiency is what makes point cloud data "too large to open."
So how can you smoothly display and manipulate such large point clouds on site? Below, with environments ranging from desktop point cloud viewers for Windows to viewer environments in web browsers in mind, we explain five specific techniques to speed up rendering. These are general methods that do not depend on any particular software, so try adapting them to fit your own environment.
Tip 1. Split point cloud data to load only the necessary parts
A large point cloud should not be forced into a single file; you can reduce processing load by splitting it by extent or content. Rather than packing all the data into one file, keeping smaller chunks lets a viewer load only the necessary parts and avoids the situation where a file is "too heavy to open." Typical ways to split are by geographic extent, by point-cloud classification (feature type), and by intended use (required accuracy or density).
First, for division by area (extent), divide the vast survey area into a checkerboard of tiles. For example, in the Geospatial Information Authority of Japan’s Fundamental Geospatial Data, LAS/LAZ files are provided split by mesh units corresponding to 1:25,000 topographic maps, and similarly dividing by rectangular grids such as 1 km square (3280.8 ft square) or 500 m square (1640.4 ft square) is effective. By separating files per area, each file will be on the order of a few hundred MB, and you can adjust the granularity to subdivide more finely only in high-density urban areas. If the filenames after division include reference coordinate values or mesh numbers, you can tell at a glance which area the data covers, making management easy.
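As an illustrative sketch of extent-based tiling, the following NumPy snippet assigns points (given as an N×3 array) to grid tiles; the 500 m tile size and all names are our own choices, not taken from any particular tool:

```python
import numpy as np

def tile_indices(xy: np.ndarray, tile_size: float = 500.0) -> np.ndarray:
    """Map each point's (x, y) to an integer grid-tile index."""
    return np.floor(xy / tile_size).astype(np.int64)

def split_into_tiles(points: np.ndarray, tile_size: float = 500.0) -> dict:
    """Group an (N, 3) point array into tiles keyed by (ix, iy)."""
    idx = tile_indices(points[:, :2], tile_size)
    tiles = {}
    for key in np.unique(idx, axis=0):
        mask = np.all(idx == key, axis=1)
        tiles[tuple(key)] = points[mask]
    return tiles

# Example: points spanning roughly 1 km x 1 km split into 500 m tiles
pts = np.array([[10.0, 10.0, 1.0], [600.0, 10.0, 2.0],
                [10.0, 600.0, 3.0], [600.0, 600.0, 4.0]])
tiles = split_into_tiles(pts, tile_size=500.0)  # four tiles, one point each
```

In a real workflow each tile would then be written out as its own LAS/LAZ file, named after its grid index so the covered area is obvious at a glance.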
Next is splitting by classification codes. Point cloud data contains a mixture of features such as ground, buildings, vegetation, and vehicles, but for some purposes it is sufficient to handle only specific point groups, such as the ground surface alone or structures only. If you separate files in advance according to the classification information attached to the point cloud, you can quickly extract only the data required for each purpose. Recent point cloud processing software and cloud services offer extensive AI-based automatic classification functions that can automatically label ground, low shrubs, tall trees, buildings, and so on, and separate them into layers. For example, for terrain analysis you can prepare a file that contains only ground points; for structural modeling you can prepare a file that contains only building and bridge points. Preparing datasets by classification is advantageous both for reducing file size and for extracting the necessary data. A point cloud containing only the ground surface can be kept small enough to run smoothly on a laptop, allowing quick viewing even on-site.
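A minimal sketch of classification-based filtering with NumPy; the codes follow the ASPRS LAS standard (2 = ground, 5 = high vegetation, 6 = building), while the function and variable names are illustrative:

```python
import numpy as np

# ASPRS LAS standard classification codes (selection)
GROUND, HIGH_VEGETATION, BUILDING = 2, 5, 6

def extract_classes(points: np.ndarray, classes: np.ndarray, wanted: set) -> np.ndarray:
    """Keep only points whose classification code is in `wanted`."""
    mask = np.isin(classes, list(wanted))
    return points[mask]

# Example: keep only ground points for a terrain-analysis dataset
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 15.0], [2.0, 0.0, 30.0]])
cls = np.array([GROUND, BUILDING, HIGH_VEGETATION])
ground_only = extract_classes(pts, cls, {GROUND})  # one point remains
```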
Partitioning (data selection) according to the intended use is also important. If precise 3D modeling is the goal, you will likely need the full, high-density point cloud of the original data, but for simple condition surveys or presentation purposes a thinned (downsampled) coarser point cloud may suffice. By extracting only the necessary accuracy and extent and decisively reducing unnecessary points to create lightweight datasets, viewing and analysis in viewers become significantly lighter. In some cases you can generate TIN meshes or contour lines from the point cloud and substitute the derived 2.5D data. Preparing such lightweight and derived versions alongside the full-resolution version allows you to provide data according to the intended use, making distribution to internal and external parties smooth.
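Thinning itself can be as simple as keeping one representative point per voxel. A minimal NumPy sketch (the 0.5 m voxel size is arbitrary and should match the accuracy your use case requires):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one point per cubic voxel of edge length `voxel` (first hit wins)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

# Example: two nearly coincident points collapse into one at 0.5 m resolution
pts = np.array([[0.00, 0.00, 0.00],
                [0.10, 0.10, 0.05],   # falls in the same 0.5 m voxel as the first
                [2.00, 2.00, 0.00]])
thinned = voxel_downsample(pts, voxel=0.5)  # 3 points -> 2 points
```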
As described above, by appropriately partitioning and managing point clouds by spatial extent, feature classification, and intended use, you can greatly reduce frustrations such as "the data is too heavy to open" and "display is slow because it loads points that aren't needed every time." After splitting, organize files with clear naming conventions and a folder structure so all stakeholders can easily select the data they need. As an advanced option, sophisticated methods that hierarchically tile based on a spatial index—such as Entwine Point Tiles (EPT)—have also emerged. EPT is designed so the viewer only loads tiles for the required area and resolution, making it well suited for distributing and viewing large datasets over the Internet. When considering data delivery to external parties or point-cloud visualization in web browsers, it is worth considering the use of such cloud-oriented tiled delivery methods.
Tip 2. Reduce rendering load with LOD (Level of Detail) processing
LOD (Level of Detail) techniques, which dynamically adjust the level of detail of displayed points according to distance and zoom level, are effective for efficiently rendering point cloud data. Attempting to render distant points at full resolution—even those imperceptible to the human eye—unnecessarily increases load, but by incorporating LOD processing the viewer renders at an appropriate point density for the situation. Specifically, when zoomed out to view the entire scene, only a coarsely subsampled point cloud is displayed, while when zoomed in to a specific area, a high-density point cloud is loaded to show fine details. Many 3D point cloud viewers and engines perform this LOD control internally, maintaining performance by rendering only the minimal necessary points without the user being aware.
To get the maximum benefit from LOD processing in practice, the key is to prepare the data side in advance with multiple levels of resolution. The aforementioned Entwine Point Tiles (EPT) embodies exactly this concept: by spatially subdividing point clouds and managing them in a hierarchical structure (an octree), it is designed to deliver an optimal number of points at any scale. Similarly, the more recent COPC (Cloud Optimized Point Cloud) format enables efficient incremental loading from the cloud by adding a hierarchical index and streaming capability while remaining compatible with LAS/LAZ. COPC builds on LAZ compression, and because it lets a viewer read only the necessary parts of a very large point cloud file directly from the cloud, it performs strongly for fast rendering in remote environments. With these formats, workflows that previously required downloading an entire dataset locally can instead stream point clouds smoothly and without interruption over the network.
If you handle wide-area point cloud data in a web browser or share large amounts of data over a corporate LAN, consider such multilevel data optimization. The point is to implement the contrast of "coarse points for distant objects and detailed points only for nearby ones" in both the data and the viewer. When appropriate LOD processing is performed, users will hardly notice the switching, resulting in a consistently responsive display while necessary details are immediately visible, leading to a comfortable user experience.
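The "coarse far, detailed near" contrast can be sketched as a simple mapping from viewing distance to octree depth. The thresholds and depth count below are illustrative, not taken from any specific viewer:

```python
import math

def lod_level(distance_m: float, base_distance: float = 50.0, max_level: int = 6) -> int:
    """Full detail (max_level) up close; drop one octree level each time
    the viewing distance doubles beyond base_distance."""
    if distance_m <= base_distance:
        return max_level
    drop = int(math.log2(distance_m / base_distance)) + 1
    return max(0, max_level - drop)

print(lod_level(10.0))   # 6: nearby geometry rendered at full resolution
print(lod_level(800.0))  # 1: distant geometry rendered coarsely
```

A viewer using such a scheme only ever requests octree nodes down to the computed level, so the number of points on screen stays roughly constant regardless of how much data exists.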
Tip 3. Reduce computational load by optimizing the coordinate system (transforming to local coordinates)
One thing that is often overlooked when handling point cloud data is the difference in processing load caused by the scale and reference system of coordinate values. For example, when point clouds are captured in large-magnitude absolute coordinate systems such as plane rectangular coordinates in the Japanese geodetic system or latitude/longitude (World Geodetic System), the numeric values for each point become very large (on the order of millions) as-is. Within a computer, calculations involving large numbers are more computationally expensive than those involving small numbers, and some 3D rendering engines can suffer reduced numerical precision and display misalignment problems when coordinates are too far from the origin (0,0,0). The Ministry of Land, Infrastructure, Transport and Tourism also notes in its materials that, if latitude/longitude or plane rectangular coordinates are left unchanged, "if coordinate values are too large (too far from the origin), processing load increases and it becomes difficult to move to an exact position," and it recommends translating the point cloud data origin to near the centroid for processing.
In other words, if the acquired point cloud remains in global geodetic coordinates, converting it once to a local coordinate system (a coordinate system with an arbitrary reference point as the origin) before loading it into a viewer will bring the values into a numeric range that is easier for the rendering engine to handle. This improves the processing efficiency of spatial transformations such as rotation and translation, and ensures display stability and accuracy. Especially when integrating and visualizing multiple point cloud datasets, if their coordinate systems do not match, the datasets will be spatially offset and will not align correctly. In laser scanner surveys, data are obtained in the instrument-based local coordinate system, so it is common to convert them in post-processing to geodetic coordinates aligned with known control points (georeferencing); however, when dealing with very large datasets, the reverse approach can be effective: use a local coordinate system during processing to reduce computational load, and then revert to the designated geodetic system for the final deliverables. Also be careful about the coordinate unit system (m or mm), because some software will perform unnecessary scaling calculations unless units are properly standardized.
In short, the key to speeding things up is to perform point-cloud alignment and transformations as close to the origin as possible, and to avoid running heavy computations while coordinates remain enormous. By optimizing the coordinate system in this way, you can reduce the computational load behind the scenes without changing the apparent data, and as a result the viewer’s display and interaction become more responsive.
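The precision issue is easy to demonstrate. In the NumPy sketch below (the coordinate values are made up but typical of plane rectangular coordinates), a 5 cm offset disappears in float32, the precision commonly used for GPU vertex data, until the data is translated to a local origin:

```python
import numpy as np

# A 5 cm offset on a coordinate of plane-rectangular magnitude (~4,100,000 m)
y_global = 4_100_000.456
y_moved = y_global + 0.05

# At this magnitude float32 can only resolve steps of about 0.25 m,
# so the two positions collapse into the same value
print(np.float32(y_global) == np.float32(y_moved))  # True: offset lost

# After translating to a local origin near the data, the offset survives
origin = 4_100_000.0
print(np.float32(y_global - origin) == np.float32(y_moved - origin))  # False
```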
Tip 4. Converting point cloud data to lightweight formats
The choice of file format and data compression are also important points for making heavy point clouds easier to handle. If the source data is in a text format (such as .txt or .csv), you should first convert it to a binary format like LAS, and furthermore using the losslessly compressed LAZ format instead of LAS as-is can dramatically reduce file sizes. LAZ is an industry-standard format that can compress LAS data losslessly to about 1/10 (about 7–20%) of the original size, and has become a de facto standard domestically and internationally, for example in the distribution of airborne laser survey data by the Geospatial Information Authority of Japan. Simply compressing to LAZ yields significant benefits such as shorter network transfer times and storage savings, markedly improving the handling of heavy point cloud files. For example, when exchanging point cloud data within your company or with business partners, it is advisable to use LAZ rather than LAS whenever possible.
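The gap between text and binary storage is easy to see per coordinate. In this sketch, the offset and millimeter scale mirror how LAS stores coordinates as scaled 32-bit integers, but the exact values are illustrative:

```python
import struct

x = 4_100_000.456  # one coordinate of one point

# Text formats store the number as characters
as_text = f"{x:.3f}".encode("ascii")  # b'4100000.456'

# LAS stores (x - offset) / scale as a 32-bit integer
offset, scale = 4_000_000.0, 0.001
as_las_int = struct.pack("<i", round((x - offset) / scale))

print(len(as_text), len(as_las_int))  # 11 bytes vs 4 bytes per coordinate
```

Multiplied across three coordinates and tens of millions of points, that per-value difference is what separates a multi-gigabyte CSV from a compact LAS file, before LAZ compression shrinks it further.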
Even if you only have LAS files on hand, conversion is not difficult. Using the open-source LASzip tool (part of the LAStools suite), you can compress LAS to LAZ with a single command, and conversely decompress LAZ back to LAS. If you prefer a GUI, some open-source point cloud viewers (free point cloud processing software) support LAS/LAZ import and export, and with the appropriate plugin settings you can export to LAZ from the "Save" menu. In recent years, some GIS software, including open-source GIS applications, has added support for reading LAZ and will automatically convert it internally into tiled formats (such as Entwine Point Tiles). Because the procedure itself is not difficult, it is advisable, especially for workflows that handle large volumes of data, to enforce an internal standard of storing and sharing data in LAZ rather than LAS.
Besides LAZ, there are lightweight formats suited to different use cases. For example, point clouds can be saved in the E57 format (a versatile compressed exchange format for point clouds and associated imagery), and the aforementioned COPC format is gaining attention as a next-generation format that incorporates a hierarchical structure while maintaining compatibility with LAZ. The important thing is to choose the optimal format that your software and systems can read and write and that reduces data volume. If the point clouds for a particular project are too large to load properly, consider rethinking the format itself. In some cases, simply changing the format can dramatically improve performance, for example when text data has been loaded uncompressed. Note that when using compressed formats, you should confirm in advance that the viewer supports them (for example, some software supports LAS but requires a plugin for LAZ). Choose a lightweight format that fits your environment to lower the handling cost of large point cloud data.
Tip 5. Leverage the GPU and optimize viewer settings
Settings that fully utilize the PC's graphics performance for the point cloud viewer are also indispensable for speeding up display. Modern 3D rendering is primarily hardware-rendered by the GPU (graphics processing unit), but depending on the PC's configuration and software settings that performance may not be fully realized. As a basic premise, it is desirable to use a PC equipped with a dedicated GPU (discrete graphics). Onboard integrated graphics cannot withstand the load of rendering large point clouds in real time, making reduced frame rates and display stuttering more likely. Machines with mid- to high-end GPUs can handle rendering on the scale of tens of millions of points relatively smoothly. VRAM (video memory) capacity is especially important: when working with 4K resolution displays or massive datasets on the order of hundreds of millions of points, models with at least 8 GB of VRAM, and preferably 12 GB or more, are recommended. Some viewers automatically adjust the number of points loaded for high-resolution displays, but if VRAM is insufficient they cannot keep all the required points resident, causing display interruptions. Therefore, even with a high-end GPU, insufficient VRAM defeats the purpose; choose a GPU with video memory capacity that matches the resolution and data scale.
Also, adjusting settings in the point cloud viewer and on the driver side can be effective. For example, enable viewer drawing options such as "Performance priority" or "Simple display mode," and use features that coarsely downsample points during interaction to lighten the load. In fact, some point cloud viewers perform progressive rendering: when you drag to move the viewpoint they automatically reduce the number of rendered points, and when you stop they switch back to high-resolution display. If such features are off, simply turning them on will likely improve responsiveness.
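Progressive rendering of this kind boils down to decimating under a point budget while the camera moves. A hypothetical sketch with NumPy; the budget value and function name are our own, not from any particular viewer:

```python
import numpy as np

def points_to_draw(points: np.ndarray, interacting: bool,
                   budget: int = 500_000) -> np.ndarray:
    """While the camera is moving, draw only a strided subset within a point
    budget; once it stops, draw the full-resolution cloud again."""
    if not interacting or len(points) <= budget:
        return points
    stride = -(-len(points) // budget)  # ceiling division
    return points[::stride]

pts = np.zeros((2_000_000, 3), dtype=np.float32)
print(len(points_to_draw(pts, interacting=True)))   # 500000 while dragging
print(len(points_to_draw(pts, interacting=False)))  # 2000000 when the view is idle
```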
Adjusting point size and shading effects is also effective. For example, Eye-Dome Lighting (a display feature that adds shading to point clouds to enhance depth perception) and rendering based on point normals (a feature that tilts points according to surface orientation) improve appearance but increase GPU load; if performance is sluggish, turning them off can regain speed. The smaller the points, the more of them must be drawn to fill gaps on screen, and the greater the rendering load. Conversely, if you use slightly larger points and allow them to overlap, you can maintain the necessary screen density while reducing the number of points actually drawn, making rendering lighter as a result.
By reviewing and adjusting settings that trade off display quality and speed in this way, you can operate point clouds more comfortably within the capabilities of your GPU.
Also, keep your graphics drivers up to date and verify the manufacturer-recommended settings. Especially on laptops, many models switch between a power-saving integrated GPU and a high-performance GPU, so make sure your point cloud viewer is configured to use the high-performance GPU. Finally, you cannot ignore the choice of viewer itself. No matter how high-end your hardware is, if the software runs on a single thread or cannot take advantage of the GPU, that hardware goes to waste. In general, single-core CPU performance also matters for point cloud work (fast single-threaded processing is advantageous for loading and data conversion), but in recent years more software has added multicore support and GPU utilization. Choosing the viewer that best matches your company's PC environment and the scale of your point cloud data, and making full use of your hardware resources, is the shortcut to fast display.
What are the PC specifications suitable for point cloud visualization and processing?
So far we have discussed data-side and software-side measures, but to make the most of them, the specifications of the underlying PC are also extremely important. To handle large point clouds smoothly, sufficient performance and balance are required across the CPU, memory, GPU, and storage. First, regarding memory (RAM): because point cloud processing temporarily stores and computes on millions to hundreds of millions of points, stability and speed improve the more memory you install. At least 16 GB is required; 32 GB is recommended for practical work, and 64 GB or more provides peace of mind if you process multiple files simultaneously or generate large-scale 3D models. If memory is insufficient, software may crash during loading, or processing may slow to a crawl due to swapping to virtual memory, so equip a capacity with some margin.
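As a back-of-envelope check on memory needs, the 34-byte figure below is the LAS point record size for point format 3 (XYZ plus intensity, RGB, GPS time, and flags); real software typically needs a multiple of this once spatial indexes and working copies are included:

```python
def point_cloud_ram_gb(n_points: int, bytes_per_point: int = 34) -> float:
    """Rough lower bound on in-memory footprint of raw point records.
    Viewers and processing tools usually need 2-3x this in practice."""
    return n_points * bytes_per_point / 1024**3

# 100 million points already need over 3 GB just for the raw records
print(round(point_cloud_ram_gb(100_000_000), 2))  # 3.17
```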
Next is the CPU. Although the rendering in the point cloud viewer itself depends heavily on the GPU, many tasks such as point cloud data conversion, filtering, and analysis are performed on the CPU. Single-thread CPU performance (high clock speed) becomes important especially in scenarios that perform sequential read/write of a single file. On the other hand, for batch-heavy processes like point cloud registration and point cloud generation via photogrammetry, many software packages fully utilize multiple cores, so CPUs with a larger number of cores are also advantageous. In short, it depends on the purpose: for on-site real-time point cloud viewing and simple measurements, a high-clock 6–8-core CPU is sufficient, whereas if you generate point clouds or perform advanced analyses in-house, you should consider CPUs in the 16–32-core class. Fortunately, recent CPUs increasingly combine high clocks and many cores, and for construction industry use the Intel Core i7/i9 series and AMD Ryzen 7/9 series are often chosen as a balanced option.
And the GPU (graphics card) is, as mentioned above, the key component for point cloud display. A high-performance GPU is recommended, and unlike with CAD-centric workloads, gaming-oriented GPUs can also deliver strong performance here. The key point is the VRAM capacity mentioned earlier: choose one that will not run short for the data you handle. For medium-sized datasets, think of a mid-range GPU (e.g., RTX 3060/3070 class with 8–12 GB VRAM); for massive datasets or working at 4K or higher, a high-end GPU (RTX 3080/3090 or RTX A5000/A6000 class with 16–24 GB VRAM) is appropriate. In practice, professional workstations sometimes combine 128 GB or more of RAM with high-end GPUs to prepare for large-scale point cloud processing. It comes down to budget, but consider investing with future data sizes in mind so you don't face a work stoppage from insufficient machine power when it matters most.
Finally, storage. The read/write speed of point cloud files directly impacts display responsiveness and processing times. The high speed of SSDs is, needless to say, essential, and an NVMe-connected SSD is desirable if possible. Especially when handling massive point clouds locally rather than over a network, HDDs introduce seek bottlenecks that leave the CPU/GPU idling, so store and process data on fast SSDs. In addition, capacity is important, as point clouds tend to grow in size before you notice. For professional use, equip a 1 TB or larger SSD and leave headroom on the working drive to ensure smooth reads and writes. To summarize, broadly speaking, a configuration of “CPU with high clock speeds and a moderate number of cores,” “as much RAM as possible,” “a high-performance GPU with ample VRAM,” and “SSD storage” is well suited to point clouds. Of course, where you place emphasis within your budget depends on how you use the system; for example, if display is the main task, favor the GPU and fast storage, whereas if you also perform analysis, invest generously in the CPU and RAM—allocate according to roles. High-spec PCs are by no means cheap, but considering the time savings from increased productivity, they should deliver significant value on site.
Summary: LRTK as an option for handling large point clouds smoothly
Finally, as a perspective not to be overlooked when handling large point cloud datasets, we will touch on the latest tools that streamline the acquisition and updating of point clouds. One device that has attracted attention in recent years is the innovative "LRTK" series, which turns smartphones such as the iPhone into high-precision GNSS surveying instruments. The LRTK is a small GNSS receiver that attaches to a smartphone; simply mounting it makes the phone a GNSS positioning terminal with centimeter-level (roughly half-inch) accuracy, allowing users to easily obtain high-precision location information in the field. With an integrated battery and antenna and a compact design that fits in one hand, it avoids the complexity of traditional fixed GNSS equipment. By combining the LRTK with a smartphone's camera and LiDAR, anyone can easily create high-precision point clouds with absolute coordinates on site. For example, by attaching an LRTK to an iPhone and walking around a site, you can scan the point cloud of the area you walked in real time, save it to the cloud, and immediately perform volume calculations or difference checks against drawings.
Point-cloud surveying, which traditionally required specialized equipment and skilled operators, can now be completed by a single person in a short time, and it offers positioning accuracy consistently at the centimeter (roughly half-inch) level, making it a groundbreaking solution and a cutting-edge technology that also conforms to i-Construction promoted by the Ministry of Land, Infrastructure, Transport and Tourism. Introducing the LRTK series can dramatically streamline regular updates of current point clouds and as-built management, accelerating the cycle of point-cloud data utilization. Alongside the theme of "how to handle heavy point clouds," "how to easily acquire and update them" is also important for on-site DX. LRTK is precisely the tool that holds that key, and its adoption is expected to keep growing. If your company is advancing the use of point clouds, it may be worth looking at the latest devices like the LRTK. The LRTK, which brings advanced GNSS positioning technology to a smartphone, could lift your site's surveying operations and level of data utilization to the next stage.
Next Steps:
Explore LRTK Products & Workflows
LRTK helps professionals capture absolute coordinates, create georeferenced point clouds, and streamline surveying and construction workflows. Explore the products below, or contact us for a demo, pricing, or implementation support.
LRTK supercharges field accuracy and efficiency
The LRTK series delivers high-precision GNSS positioning for construction, civil engineering, and surveying, enabling significant reductions in work time and major gains in productivity. It makes it easy to handle everything from design surveys and point-cloud scanning to AR, 3D construction, as-built management, and infrastructure inspection.

