
Basic Terms Beginners in SfM Processing Should Know: Understand the Basics of Photogrammetry

By LRTK Team (Lefixea Inc.)


Introduction

Recently, the terms SfM processing and photogrammetry have become increasingly common on construction, civil engineering, and surveying sites. This technique—capturing a site with drones or digital cameras and creating a 3D model on a computer—has attracted attention for its potential to improve efficiency and safety. However, those encountering SfM processing for the first time may wonder "How do you make a 3D model from photos?" and feel overwhelmed by the many unfamiliar technical terms.


This article gently explains the basic technical terms that beginners in SfM processing should know. Organizing the fundamental terminology of photogrammetry will make it easier to understand how the technology works and will help with communication and learning in the workplace. Technical terms are explained as simply as possible, so readers who want to start drone-based photogrammetry, those in the construction industry adopting ICT-based surveying methods, and municipal staff with no surveying experience can read on with confidence.


Basic Technical Terms

First, let’s review the basic technical terms frequently used in the fields of SfM processing and photogrammetry.


SfM (Structure from Motion): A computer vision technique that reconstructs the three-dimensional structure (shape) of an object from multiple photographs. Software analyzes a set of photos taken from different positions and angles and reconstructs the object as a 3D model. Because it can automatically create 3D models from photos taken by drones or cameras, SfM plays a central role in photogrammetry.

Photogrammetry: A general term for methods that measure dimensions and shapes of terrain or structures using photographic images. Historically developed for creating maps from aerial photos, modern digital imagery and advances in SfM processing now enable anyone to generate detailed 3D data from photos. By overlapping and analyzing many photos taken from drones or on the ground, photogrammetry creates point cloud data and 3D models of the subject.

Point Cloud Data: 3D data representing a shape using many points in space. Each point has coordinates (X, Y, Z) indicating positions on an object’s surface. In photogrammetry, software generates many points by matching features across photos to reproduce the shape of buildings or terrain as a collection of points. Higher point density captures shapes in more detail, and sufficiently dense point clouds can represent the subject with an appearance close to the photos.
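To make the idea concrete, here is a minimal sketch of a point cloud as data: simply an N×3 array of X, Y, Z coordinates. The sample points and the points-per-square-metre density calculation are purely illustrative, not real survey data.

```python
import numpy as np

# A point cloud is just an (N, 3) array of X, Y, Z coordinates.
# The values below are made-up sample points, not real survey data.
points = np.array([
    [0.0, 0.0, 10.2],   # X, Y, Z in metres
    [1.0, 0.0, 10.3],
    [0.0, 1.0, 10.1],
    [1.0, 1.0, 10.4],
])

# Point density is often reported as points per square metre of footprint.
x_extent = points[:, 0].max() - points[:, 0].min()
y_extent = points[:, 1].max() - points[:, 1].min()
density = len(points) / (x_extent * y_extent)
print(f"{len(points)} points, {density:.1f} points/m^2")
```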

Mesh: A 3D model composed of polygons generated from a point cloud. While a point cloud consists of countless individual points, a mesh model connects points to form surfaces (polygons), creating a continuous representation of an object’s surface. Meshes are typically constructed from triangular polygons and are automatically generated from point cloud data. Converting to a mesh makes the model easier to handle as a 3D object, and textures can be applied to the surface to achieve a more lifelike appearance.


Key Terms Used in the Processing Workflow

In the workflow of photogrammetry software that generates 3D models from photos, specialized terminology appears at each step. Here are the key terms you should know in the SfM processing workflow.


Feature Points: Distinctive points in an image (areas with strong contrast, corners, patterns, etc.). SfM automatically detects many feature points from each photo. These points serve as markers to find common locations between images and form the basis for later 3D reconstruction.

Matching (Feature Matching): The process of pairing the same feature points that appear in multiple photos. Software compares patterns of feature points across images to find corresponding points on the same object. Obtaining a sufficient number of matches allows the software to estimate the positional relationships between the photos.
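A toy sketch of the matching step, under simplified assumptions: each image yields descriptor vectors (here 4-D; real detectors such as SIFT use 128-D), and each descriptor in image A is paired with its nearest neighbour in image B. The ratio test shown is a common way to reject ambiguous pairs; all numbers are illustrative.

```python
import numpy as np

# Toy descriptors for two images (4-D here; real ones are higher-dimensional).
desc_a = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
desc_b = np.array([[0.9, 0.1, 0.0, 0.0],   # close to desc_a[0]
                   [0.0, 0.0, 1.0, 0.0],
                   [0.1, 0.9, 0.0, 0.0]])  # close to desc_a[1]

matches = []
for i, d in enumerate(desc_a):
    dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
    order = np.argsort(dists)
    best, second = dists[order[0]], dists[order[1]]
    if best < 0.7 * second:          # ratio test: best match must clearly win
        matches.append((i, int(order[0])))

print(matches)  # pairs of (index in A, index in B)
```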

Bundle Adjustment: A computational optimization that simultaneously refines the camera parameters of all photos and the positions of the reconstructed 3D points based on the feature-matching results. It adjusts the camera positions and orientations (the exterior parameters), lens characteristics such as focal length (the interior parameters), and the coordinates of the 3D points so as to minimize the overall error. In simple terms, it iteratively tweaks the relative placement of the photos and the arrangement of the 3D points until the projected positions of the feature points line up consistently in every image. This process yields accurate camera poses and an initial 3D point cloud.
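The error that bundle adjustment minimises is the reprojection error: the distance between where a 3D point projects into an image and where its feature was actually detected. This sketch computes that error for one point and one camera with an identity rotation; the focal length, point, and observation are all made-up illustrative values.

```python
import numpy as np

f = 1000.0                       # focal length in pixels (interior parameter)
cx, cy = 960.0, 540.0            # principal point (image centre)
cam_pos = np.array([0.0, 0.0, 0.0])     # camera position (exterior parameter)
point = np.array([2.0, 1.0, 10.0])      # one reconstructed 3D point

# Pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy
rel = point - cam_pos
u = f * rel[0] / rel[2] + cx
v = f * rel[1] / rel[2] + cy

observed = np.array([1161.0, 638.0])    # where the feature was detected
residual = np.hypot(u - observed[0], v - observed[1])
print(f"projected ({u:.0f}, {v:.0f}), reprojection error {residual:.1f} px")
```

Bundle adjustment sums this squared residual over every point in every photo and nudges all camera poses and point coordinates to drive the total down.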

Sparse Point Cloud: The low-density point cloud obtained as a result of bundle adjustment. The positions of the feature points used in matching are calculated by triangulation, producing a relatively small set of points. A sparse point cloud shows the overall shape of the object and confirms that the camera placement and scale are consistent.
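The triangulation step can be sketched as linear (DLT) triangulation: given two camera projection matrices and the matched pixel observations of one feature, solving a small linear system recovers the 3D point. The two cameras and the point below are synthetic, chosen so the answer can be checked.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A @ X = 0 for the homogeneous point X."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector = homogeneous solution
    return X[:3] / X[3]

# Two simple cameras: identity intrinsics, second camera shifted 1 unit in X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

true_point = np.array([0.5, 0.2, 4.0])
x1 = true_point[:2] / true_point[2]                        # pixel in camera 1
x2 = (true_point - [1.0, 0.0, 0.0])[:2] / true_point[2]    # pixel in camera 2

recovered = triangulate(P1, P2, x1, x2)
```

Repeating this for every matched feature, with the camera poses from bundle adjustment, produces the sparse point cloud.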

Dense Point Cloud: A much higher-density point cloud than the sparse one. After camera positions are determined, software estimates depth in detail for each image and generates point clouds that cover nearly the entire surface of the object. The resulting dense point cloud provides detailed 3D data that captures fine features of buildings and terrain. Generating a dense point cloud lays the groundwork for creating mesh models and texture mapping.


Terms Related to Accuracy and Coordinates

To use 3D data produced by photogrammetry for surveying outputs or design, it’s important to improve model accuracy and align it to a real-world coordinate system. Here are the key terms related to accuracy control and coordinate settings.


GCP (Ground Control Point): Reference points placed on the ground whose coordinates have been measured precisely in advance. In photogrammetry, multiple GCPs are positioned across the target area and their locations are associated with the images in the software, giving the generated point cloud and model real-world coordinates and the correct scale. Using GCPs improves the positional and elevation accuracy of the 3D model.

RTK (Real-Time Kinematic): A technique using GNSS (satellite positioning) to achieve centimeter-level positioning in real time. A base station (fixed receiver) and a rover (mobile receiver) share satellite data to correct errors, delivering far more precise location information than standard GPS. In photogrammetry, RTK-enabled drones can tag photos with high-precision coordinates (geotags), or GCP coordinates can be obtained via RTK surveying, ensuring high spatial accuracy of the model.

Georeferencing: The process of tying the created point cloud or model to a real-world geodetic coordinate system (positioning and alignment). In short, it’s aligning the finished 3D model to the correct location and scale on a map. Photogrammetry software uses known GCP coordinates or photo geotags to align the point cloud to the real-world coordinate system. Georeferenced data can be overlaid with other map data or CAD drawings.
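Under the hood, georeferencing with GCPs can be sketched as estimating a similarity transform (scale, rotation, translation) that maps model coordinates onto the known world coordinates, for example with the Umeyama method shown below. The GCP values here are synthetic, constructed so the estimated transform can be verified.

```python
import numpy as np

def similarity_transform(model, world):
    """Umeyama estimation: returns s, R, t with world ≈ s * R @ model_point + t."""
    n = len(model)
    mu_m, mu_w = model.mean(axis=0), world.mean(axis=0)
    A, B = model - mu_m, world - mu_w
    U, S, Vt = np.linalg.svd(B.T @ A / n)          # cross-covariance SVD
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((A ** 2).sum() / n)
    t = mu_w - s * R @ mu_m
    return s, R, t

# Synthetic check: world GCPs are a scaled, rotated, shifted copy of the
# model GCPs, so the estimated transform should map one exactly onto the other.
model_gcp = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
world_gcp = 2.0 * model_gcp @ Rz90.T + np.array([100.0, 200.0, 50.0])

s, R, t = similarity_transform(model_gcp, world_gcp)
aligned = s * model_gcp @ R.T + t   # model points expressed in world coordinates
```

Applying the same transform to the whole point cloud places the model at the correct location, orientation, and scale on the map.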

Scale Bar: A marker or ruler of known length placed in the photos. For example, by placing a precisely measured ruler or rod near the subject and entering the true distance between its two points into the software, you can set the correct scale for the model. If GCPs cannot be placed, using a scale bar allows the photogrammetry-generated model to be scaled to real-world dimensions.
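The arithmetic is simple: dividing the known true length by the distance measured between the two targets in model coordinates gives the scale factor to apply to every coordinate. The target positions below are illustrative.

```python
import math

target_a = (0.12, 0.34, 0.05)        # model coordinates of target A
target_b = (0.52, 0.34, 0.05)        # model coordinates of target B
true_length = 1.0                    # metres, measured precisely beforehand

model_length = math.dist(target_a, target_b)
scale = true_length / model_length
print(f"scale factor: {scale:.2f}")  # multiply all model coordinates by this
```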

Geotag: Positional information attached to photos (latitude, longitude, altitude, etc.). When shooting with a drone or smartphone, the EXIF data of each image often automatically records the capture location. Photogrammetry software reads these geotags to set initial camera locations for each photo. Using geotagged images helps start reconstruction with the model’s position and orientation roughly aligned to the real world, making later alignment easier.
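EXIF stores GPS latitude and longitude as degrees, minutes, and seconds plus a hemisphere reference; converting them to signed decimal degrees is the usual first step before positions can be used as initial camera locations. The coordinate values below are illustrative.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value   # south/west are negative

lat = dms_to_decimal(35, 39, 29.16, "N")
lon = dms_to_decimal(139, 44, 28.80, "E")
print(round(lat, 4), round(lon, 4))
```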


Terms Related to Output Models

Let’s also cover terms related to the deliverables (output data) generated by photogrammetry software. Below are typical outputs produced as 3D models or images and their descriptions.


Orthophoto: A composite image created from an overhead viewpoint (an orthorectified image). Multiple aerial photos are stitched together and corrected for distortion so the resulting image is an overhead view with a consistent scale. Measurements of distance and area on an orthophoto are accurate because every part is at the same scale. In drone photogrammetry, you can output an orthomosaic image of the entire site based on the generated 3D model, which is useful for getting a plan-view understanding of the current conditions.
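Because an orthophoto has one uniform scale, a pixel distance multiplied by the ground sampling distance (GSD, metres per pixel) gives a real-world distance. The GSD itself follows from the sensor and flight parameters; the camera values below are made up for illustration.

```python
sensor_pixel_m = 2.4e-6      # physical pixel pitch of the sensor (2.4 µm)
focal_length_m = 0.0088      # lens focal length (8.8 mm)
altitude_m = 100.0           # flying height above ground

gsd = sensor_pixel_m * altitude_m / focal_length_m   # metres per pixel
pixels = 500                                         # distance measured on the image
print(f"GSD {gsd * 100:.2f} cm/px, distance {pixels * gsd:.1f} m")
```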

Texture Mapping: The process of applying photographic images to the surface of a 3D mesh model. Mesh shapes obtained from photogrammetry are geometric, but overlaying the corresponding photos’ colors and patterns on the surface creates a lifelike appearance. Textured models contain more visual information than point clouds or bare mesh shapes and are useful for intuitively understanding a site.

3D Mesh (Polygon Model): A three-dimensional surface model generated from a point cloud. It represents an object’s surface by combining many small triangular polygons, resulting in a smoother, continuous shape than a point cloud. 3D mesh models produced by photogrammetry recreate the actual contours and shapes of buildings and terrain and can be used for design, simulation, and VR visualization.

Polygon: An individual face element (a polygon) that makes up a 3D mesh. In SfM-generated models, triangular polygons are typically used in large numbers. The level of detail and data size depend on the number of polygons: models with more polygons represent details more smoothly but have larger file sizes and heavier processing requirements. Therefore, models are often simplified (reduced) to an appropriate polygon count depending on the intended use.
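As a data structure, a mesh is just a vertex array plus an index array of triangles; the polygon count and surface area fall straight out of those arrays. The toy mesh below is two triangles forming a 1 m × 1 m square.

```python
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2],
                  [0, 2, 3]])       # each row: three vertex indices

# Triangle area = half the norm of the cross product of two edge vectors.
a, b, c = (vertices[faces[:, i]] for i in range(3))
areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
print(f"{len(faces)} polygons, total area {areas.sum():.1f} m^2")
```

Mesh simplification reduces the number of rows in the face array while trying to keep the total shape (and area) nearly unchanged.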


Other Related Terms

Finally, here are a few other terms related to SfM photogrammetry.


Cloud Processing: Services or methods that perform photogrammetry data processing on servers in the cloud. Generating 3D models from many photos requires high-performance computers, but cloud processing allows users to upload photos via the internet and have the server automatically generate point clouds and models. Users only need to download the results, making it easy to request processing from the field.

Image Overlap (Overlap Rate): The proportion of overlap between captured photos. Low overlap reduces the number of corresponding feature points between images, making successful 3D reconstruction difficult. Therefore, photogrammetry recommends ensuring sufficient overlap between neighboring photos (e.g., around 80% in the flight direction and 60% or more sideways). When shooting with a drone, it’s important to plan flight routes and capture intervals that account for overlap.
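Given the ground footprint of a single photo and a target overlap rate, the required spacing between shutter triggers and between flight lines follows directly. The footprint values below are illustrative.

```python
footprint_along_m = 90.0     # ground coverage of one image in flight direction
footprint_across_m = 60.0    # ground coverage across the flight direction
forward_overlap = 0.80       # 80% overlap along the flight line
side_overlap = 0.60          # 60% overlap between neighbouring lines

trigger_spacing = footprint_along_m * (1 - forward_overlap)
line_spacing = footprint_across_m * (1 - side_overlap)
print(f"trigger every {trigger_spacing:.0f} m, lines {line_spacing:.0f} m apart")
```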

Flight Path: The route the camera (drone) follows during shooting. To comprehensively capture a subject from various angles, the flight path is planned in advance. For example, for wide-area surveys, parallel zigzag flight lines may be flown, while for structures, flying around in a circular pattern may be used to capture the surroundings. Stable flight paths that acquire sufficient photos help prevent holes or gaps in the SfM-processed 3D model.


Summary

This article introduced the main basic terms used in SfM processing. At first, the technical vocabulary may seem overwhelming, but understanding the meaning of each term helps clarify the flow and mechanisms of photogrammetry. Knowing these basic terms will greatly assist communication when handling 3D models on site and in learning related technologies.


In recent years, solutions have emerged that leverage these technical elements in the background, enabling anyone to perform 3D surveying easily. For example, simplified surveying using LRTK combines a smartphone with high-precision GNSS to capture a site and instantly obtain a high-precision 3D point cloud with global coordinates. This approach uses the technologies behind RTK and SfM so users can get results without needing to focus on technical terms or complex procedures. By using such tools, even beginners can quickly and safely record and survey site geometry. Deepen your understanding of the basic terminology and take the next step toward introducing new technologies on site.


Next Steps:
Explore LRTK Products & Workflows

LRTK helps professionals capture absolute coordinates, create georeferenced point clouds, and streamline surveying and construction workflows. Explore the products below, or contact us for a demo, pricing, or implementation support.

LRTK supercharges field accuracy and efficiency

The LRTK series delivers high-precision GNSS positioning for construction, civil engineering, and surveying, enabling significant reductions in work time and major gains in productivity. It makes it easy to handle everything from design surveys and point-cloud scanning to AR, 3D construction, as-built management, and infrastructure inspection.
