Code Documentation

Dataset I/O

class opensfm.dataset.DataSet(data_path)[source]

Accessors to the main input and output data.

Data includes input images, masks, and segmentations, as well as temporary data such as features and matches, and the final reconstructions.

All data is stored inside a single folder with a specific subfolder structure.

It is possible to store data remotely or in different formats by subclassing this class and overloading its methods.
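For illustration, a dataset folder typically looks like the sketch below. This is an illustrative layout only: the names follow common OpenSfM defaults, and it is neither exhaustive nor version-exact.

```
dataset/
├── config.yaml          # processing parameters
├── images/              # input images
├── masks/               # optional binary masks, one per image
├── exif/                # extracted EXIF metadata
├── features/            # detected local features
├── matches/             # pairwise feature matches
├── reconstruction.json  # final reconstructions
└── undistorted/         # data handled by UndistortedDataSet
```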

camera_models_overrides_exists()[source]

Check if camera overrides file exists.

exif_overrides_exists()[source]

Check if EXIF overrides file exists.

feature_type()[source]

Return the type of local features (e.g. AKAZE, SURF, SIFT)

image_size(image)[source]

Height and width of the image.

images()[source]

List of file names of all images in the dataset.

load_camera_models()[source]

Return camera models data

load_camera_models_overrides()[source]

Load camera models overrides data.

load_combined_mask(image)[source]

Combine binary mask with segmentation mask.

Return a mask that is non-zero only where the binary mask and the segmentation mask are non-zero.
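As a sketch of the combination logic (a simplified numpy stand-in, not the actual implementation; mask values are illustrative):

```python
import numpy as np

def combine_masks(mask, segmentation_mask):
    # The result is non-zero (255) only where both inputs are non-zero.
    return ((mask != 0) & (segmentation_mask != 0)).astype(np.uint8) * 255

mask = np.array([[0, 255], [255, 255]], dtype=np.uint8)
seg = np.array([[255, 0], [255, 255]], dtype=np.uint8)
combined = combine_masks(mask, seg)
```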

load_detection(image)[source]

Load image detection if it exists, otherwise return None.

load_exif(image)[source]

Load pre-extracted image exif metadata.

load_exif_overrides()[source]

Load EXIF overrides data.

load_features_mask(image, points)[source]

Load a feature-wise mask.

This is a binary array that is true for features lying inside the combined mask. The array is all true when there is no mask.
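A simplified sketch of the idea, assuming pixel (not normalized) coordinates and a hypothetical helper name:

```python
import numpy as np

def features_mask(mask, points_px):
    # True for features whose (x, y) pixel falls in the non-zero mask region.
    xs = points_px[:, 0].astype(int)
    ys = points_px[:, 1].astype(int)
    return mask[ys, xs] != 0

mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
points_px = np.array([[1.0, 0.0], [0.0, 0.0]])  # (x, y) positions
keep = features_mask(mask, points_px)
```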

load_ground_control_points()[source]

Load ground control points.

It uses reference_lla to convert the coordinates to the topocentric reference frame.

load_image(image, unchanged=False, anydepth=False)[source]

Load image pixels as numpy array.

The array is 3D, indexed by y-coord, x-coord, channel. The channels are in RGB order.

load_mask(image)[source]

Load image mask if it exists, otherwise return None.

load_reference()[source]

Load reference as a topocentric converter.

load_report(path)[source]

Load a report file as a string.

load_segmentation(image)[source]

Load image segmentation if it exists, otherwise return None.

load_segmentation_mask(image)[source]

Build a mask from segmentation ignore values.

The mask is non-zero only for pixels with segmentation labels not in segmentation_ignore_values.
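The rule above can be sketched with numpy (a simplified stand-in; label values are illustrative):

```python
import numpy as np

def mask_from_segmentation(segmentation, ignore_values):
    # Non-zero (255) where the label is NOT in the ignore list, zero elsewhere.
    return np.where(np.isin(segmentation, ignore_values), 0, 255).astype(np.uint8)

labels = np.array([[0, 1], [1, 2]])
m = mask_from_segmentation(labels, [1])
```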

load_tracks_graph(filename=None)[source]

Return the graph (networkx data structure) of tracks.

open_image_file(image)[source]

Open image file and return file object.

profile_log()[source]

Filename to which timings are written.

save_camera_models(camera_models)[source]

Save camera models data

save_camera_models_overrides(camera_models)[source]

Save camera models overrides data

save_ply(reconstruction, filename=None, no_cameras=False, no_points=False)[source]

Save a reconstruction in PLY format.

save_report(report_str, path)[source]

Save report string to a file.

segmentation_ignore_values(image)[source]

List of label values to ignore.

Pixels with these label values will be masked out and won’t be processed when extracting features or computing depth maps.

class opensfm.dataset.UndistortedDataSet(base_dataset, undistorted_subfolder)[source]

Accessors to the undistorted data of a dataset.

Data includes undistorted images, masks, and segmentations, as well as the undistorted reconstruction, tracks graph and computed depth maps.

All data is stored inside a single folder which should be a subfolder of the base, distorted dataset.

load_undistorted_combined_mask(image)[source]

Combine undistorted binary mask with segmentation mask.

Return a mask that is non-zero only where the binary mask and the segmentation mask are non-zero.

load_undistorted_detection(image)[source]

Load an undistorted image detection.

load_undistorted_image(image)[source]

Load undistorted image pixels as a numpy array.

load_undistorted_mask(image)[source]

Load undistorted mask pixels as a numpy array.

load_undistorted_segmentation(image)[source]

Load an undistorted image segmentation.

load_undistorted_segmentation_mask(image)[source]

Build a mask from the undistorted segmentation.

The mask is non-zero only for pixels with segmentation labels not in segmentation_ignore_values.

save_undistorted_detection(image, array)[source]

Save the undistorted image detection.

save_undistorted_image(image, array)[source]

Save undistorted image pixels.

save_undistorted_mask(image, array)[source]

Save the undistorted image mask.

save_undistorted_segmentation(image, array)[source]

Save the undistorted image segmentation.

undistorted_detection_exists(image)[source]

Check if the undistorted detection file exists.

undistorted_image_size(image)[source]

Height and width of the undistorted image.

undistorted_mask_exists(image)[source]

Check if the undistorted mask file exists.

undistorted_segmentation_exists(image)[source]

Check if the undistorted segmentation file exists.

Reconstruction Types

Basic types for building a reconstruction.

class opensfm.types.BrownPerspectiveCamera[source]

Define a perspective camera.

width

image width.

Type:int
height

image height.

Type:int
focal_x

estimated focal length for the X axis.

Type:real
focal_y

estimated focal length for the Y axis.

Type:real
c_x

estimated principal point X.

Type:real
c_y

estimated principal point Y.

Type:real
k1

estimated first radial distortion parameter.

Type:real
k2

estimated second radial distortion parameter.

Type:real
p1

estimated first tangential distortion parameter.

Type:real
p2

estimated second tangential distortion parameter.

Type:real
k3

estimated third radial distortion parameter.

Type:real
back_project(pixel, depth)[source]

Project a pixel to a fronto-parallel plane at a given depth.

back_project_many(pixels, depths)[source]

Project pixels to fronto-parallel planes at given depths.

get_K()[source]

The calibration matrix.

get_K_in_pixel_coordinates(width=None, height=None)[source]

The calibration matrix that maps to pixel coordinates.

Coordinates (0,0) correspond to the center of the top-left pixel, and (width - 1, height - 1) to the center of the bottom-right pixel.

You can optionally pass the width and height of the image, in case you are using a resized version of the original image.

pixel_bearing(pixel)[source]

Unit vector pointing to the pixel viewing direction.

pixel_bearing_many(pixels)[source]

Unit vector pointing to the pixel viewing directions.

pixel_bearings(pixels)[source]

Deprecated: use pixel_bearing_many.

project(point)[source]

Project a 3D point in camera coordinates to the image plane.

project_many(points)[source]

Project 3D points in camera coordinates to the image plane.

class opensfm.types.Camera[source]

Abstract camera class.

A camera is uniquely defined by its identifier (id), its projection type (projection_type) and its internal calibration parameters, which depend on the particular Camera subclass.

id

camera description.

Type:str
projection_type

projection type.

Type:str
class opensfm.types.DualCamera(projection_type='unknown')[source]

Define a camera that seamlessly transitions between fisheye and perspective projection.
width

image width.

Type:int
height

image height.

Type:int
focal

estimated focal length.

Type:real
k1

estimated first distortion parameter.

Type:real
k2

estimated second distortion parameter.

Type:real
transition

parametrizes between perspective (1.0) and fisheye (0.0).

Type:real
back_project(pixel, depth)[source]

Project a pixel to a fronto-parallel plane at a given depth.

back_project_many(pixels, depths)[source]

Project pixels to fronto-parallel planes at given depths.

get_K()[source]

The calibration matrix.

get_K_in_pixel_coordinates(width=None, height=None)[source]

The calibration matrix that maps to pixel coordinates.

Coordinates (0,0) correspond to the center of the top-left pixel, and (width - 1, height - 1) to the center of the bottom-right pixel.

You can optionally pass the width and height of the image, in case you are using a resized version of the original image.

pixel_bearing(pixel)[source]

Unit vector pointing to the pixel viewing direction.

pixel_bearing_many(pixels)[source]

Unit vector pointing to the pixel viewing directions.

pixel_bearings(pixels)[source]

Deprecated: use pixel_bearing_many.

project(point)[source]

Project a 3D point in camera coordinates to the image plane.

project_many(points)[source]

Project 3D points in camera coordinates to the image plane.

class opensfm.types.FisheyeCamera[source]

Define a fisheye camera.

width

image width.

Type:int
height

image height.

Type:int
focal

estimated focal length.

Type:real
k1

estimated first distortion parameter.

Type:real
k2

estimated second distortion parameter.

Type:real
back_project(pixel, depth)[source]

Project a pixel to a fronto-parallel plane at a given depth.

back_project_many(pixels, depths)[source]

Project pixels to fronto-parallel planes at given depths.

get_K()[source]

The calibration matrix.

get_K_in_pixel_coordinates(width=None, height=None)[source]

The calibration matrix that maps to pixel coordinates.

Coordinates (0,0) correspond to the center of the top-left pixel, and (width - 1, height - 1) to the center of the bottom-right pixel.

You can optionally pass the width and height of the image, in case you are using a resized version of the original image.

pixel_bearing(pixel)[source]

Unit vector pointing to the pixel viewing direction.

pixel_bearing_many(pixels)[source]

Unit vector pointing to the pixel viewing directions.

pixel_bearings(pixels)[source]

Deprecated: use pixel_bearing_many.

project(point)[source]

Project a 3D point in camera coordinates to the image plane.

project_many(points)[source]

Project 3D points in camera coordinates to the image plane.

class opensfm.types.GroundControlPoint[source]

A ground control point with its observations.

lla

latitude, longitude and altitude

coordinates

x, y, z coordinates in topocentric reference frame

has_altitude

true if the z coordinate is known

observations

list of observations of the point on images

class opensfm.types.GroundControlPointObservation[source]

A ground control point observation.

shot_id

the shot where the point is observed

projection

2d coordinates of the observation

class opensfm.types.PerspectiveCamera[source]

Define a perspective camera.

width

image width.

Type:int
height

image height.

Type:int
focal

estimated focal length.

Type:real
k1

estimated first distortion parameter.

Type:real
k2

estimated second distortion parameter.

Type:real
back_project(pixel, depth)[source]

Project a pixel to a fronto-parallel plane at a given depth.

back_project_many(pixels, depths)[source]

Project pixels to fronto-parallel planes at given depths.

get_K()[source]

The calibration matrix.

get_K_in_pixel_coordinates(width=None, height=None)[source]

The calibration matrix that maps to pixel coordinates.

Coordinates (0,0) correspond to the center of the top-left pixel, and (width - 1, height - 1) to the center of the bottom-right pixel.

You can optionally pass the width and height of the image, in case you are using a resized version of the original image.
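Under the conventions above, a simplified sketch for the single-focal model (the exact scaling and centering conventions are assumptions here, not guaranteed to match the implementation):

```python
import numpy as np

def K_normalized(focal):
    # Calibration matrix in normalized image coordinates.
    return np.array([[focal, 0.0, 0.0],
                     [0.0, focal, 0.0],
                     [0.0, 0.0, 1.0]])

def K_in_pixel_coordinates(focal, width, height):
    # Scale by the larger image dimension and shift the principal point
    # so that (0, 0) is the center of the top-left pixel.
    size = max(width, height)
    f = focal * size
    return np.array([[f, 0.0, 0.5 * (width - 1)],
                     [0.0, f, 0.5 * (height - 1)],
                     [0.0, 0.0, 1.0]])

K_px = K_in_pixel_coordinates(0.85, 640, 480)
```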

pixel_bearing(pixel)[source]

Unit vector pointing to the pixel viewing direction.

pixel_bearing_many(pixels)[source]

Unit vectors pointing to the pixel viewing directions.

pixel_bearings(pixels)[source]

Deprecated: use pixel_bearing_many.

project(point)[source]

Project a 3D point in camera coordinates to the image plane.

project_many(points)[source]

Project 3D points in camera coordinates to the image plane.

class opensfm.types.Point[source]

Defines a 3D point.

id

identification number.

Type:int
color

list containing the RGB values.

Type:list(int)
coordinates

list containing the 3D position.

Type:list(real)
reprojection_errors

the reprojection error per shot.

Type:dict(real)
class opensfm.types.Pose(rotation=array([0., 0., 0.]), translation=array([0., 0., 0.]))[source]

Defines the pose parameters of a camera.

The extrinsic parameters are defined by a 3x1 rotation vector which maps the camera rotation with respect to the origin frame (rotation) and a 3x1 translation vector which maps the camera translation with respect to the origin frame (translation).
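These conventions can be sketched with numpy; the helper names below are hypothetical stand-ins for the class methods, and the rotation vector is expanded with Rodrigues' formula. A useful identity: the camera origin in world coordinates is o = -Rᵀt, so transforming the origin into the pose frame yields the zero vector.

```python
import numpy as np

def rotation_matrix(rvec):
    # Rodrigues' formula: 3x1 angle-axis vector -> 3x3 rotation matrix.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def transform(rvec, t, point):
    # World -> pose coordinates: p' = R p + t.
    return rotation_matrix(rvec) @ np.asarray(point, float) + np.asarray(t, float)

def origin(rvec, t):
    # Camera center in world coordinates: o = -R^T t.
    return -rotation_matrix(rvec).T @ np.asarray(t, float)

rvec = np.array([0.1, -0.2, 0.3])
t = np.array([1.0, 2.0, 3.0])
o = origin(rvec, t)  # transform(rvec, t, o) is the zero vector
```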

rotation

the rotation vector.

Type:vector
translation

the translation vector.

Type:vector
compose(other)[source]

Get the composition of this pose with another.

composed = self * other

get_Rt()[source]

Get pose as a 3x4 matrix (R|t).

get_origin()[source]

The origin of the pose in world coordinates.

get_rotation_matrix()[source]

Get rotation as a 3x3 matrix.

inverse()[source]

Get the inverse of this pose.

rotation

Rotation in angle-axis format.

set_origin(origin)[source]

Set the origin of the pose in world coordinates.

>>> pose = Pose()
>>> pose.rotation = np.array([0., 1., 2.])
>>> origin = [1., 2., 3.]
>>> pose.set_origin(origin)
>>> np.allclose(origin, pose.get_origin())
True
set_rotation_matrix(rotation_matrix, permissive=False)[source]

Set rotation as a 3x3 matrix.

>>> pose = Pose()
>>> pose.rotation = np.array([0., 1., 2.])
>>> R = pose.get_rotation_matrix()
>>> pose.set_rotation_matrix(R)
>>> np.allclose(pose.rotation, [0., 1., 2.])
True
>>> pose.set_rotation_matrix([[3,-4, 1], [ 5, 3,-7], [-9, 2, 6]])
Traceback (most recent call last):
...
ValueError: Not orthogonal
>>> pose.set_rotation_matrix([[0, 0, 1], [-1, 0, 0], [0, 1, 0]])
Traceback (most recent call last):
...
ValueError: Determinant not 1
transform(point)[source]

Transform a point from world coordinates to this pose's coordinates.

transform_inverse(point)[source]

Transform a point from this pose to world coordinates.

transform_inverse_many(points)[source]

Transform points from this pose to world coordinates.

transform_many(points)[source]

Transform points from world coordinates to this pose's coordinates.

translation

Translation vector.

class opensfm.types.Reconstruction[source]

Defines the reconstructed scene.

cameras

Dictionary of cameras.

Type:Dict(Camera)
shots

Dictionary of reconstructed shots.

Type:Dict(Shot)
points

Dictionary of reconstructed points.

Type:Dict(Point)
reference

Topocentric reference converter.

Type:TopocentricConverter
add_camera(camera)[source]

Add a camera to the reconstruction.

Parameters:camera – The camera.
add_point(point)[source]

Add a point to the reconstruction.

Parameters:point – The point.
add_shot(shot)[source]

Add a shot to the reconstruction.

Parameters:shot – The shot.
get_camera(id)[source]

Return a camera by id.

Returns:The camera if it exists, otherwise None.
get_point(id)[source]

Return a point by id.

Returns:The point if it exists, otherwise None.
get_shot(id)[source]

Return a shot by id.

Returns:The shot if it exists, otherwise None.
class opensfm.types.Shot[source]

Defines a shot in a reconstructed scene.

A shot here refers to a unique view inside the scene, defined by the image filename (id), the camera used with its refined internal parameters (camera), the full camera pose with respect to the scene origin frame (pose), and the GPS data obtained at the moment the picture was taken (metadata).

id

picture filename.

Type:str
camera

camera.

Type:Camera
pose

extrinsic parameters.

Type:Pose
metadata

GPS, compass, capture time, etc.

Type:ShotMetadata
back_project(pixel, depth)[source]

Project a pixel to a fronto-parallel plane at a given depth.

The plane is defined by z = depth in the shot reference frame.

back_project_many(pixels, depths)[source]

Project pixels to fronto-parallel planes at given depths. The planes are defined by z = depth in the shot reference frame.

project(point)[source]

Project a 3D point to the image plane.

project_many(points)[source]

Project 3D points to the image plane.

viewing_direction()[source]

The viewing direction of the shot.

That is the positive camera Z axis in world coordinates.

class opensfm.types.ShotMesh[source]

Triangular mesh of points visible in a shot

vertices

(list of vectors) mesh vertices

faces

(list of triplets) triangles’ topology

class opensfm.types.ShotMetadata[source]

Defines GPS data from a taken picture.

orientation

the exif orientation tag (1-8).

Type:int
capture_time

the capture time.

Type:real
gps_dop

the GPS dop.

Type:real
gps_position

the GPS position.

Type:vector
class opensfm.types.SphericalCamera[source]

A spherical camera generating equirectangular projections.

width

image width.

Type:int
height

image height.

Type:int
pixel_bearing(pixel)[source]

Unit vector pointing to the pixel viewing direction.
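A sketch of the assumed equirectangular convention, where normalized x maps to longitude and normalized y to latitude (treat the exact sign conventions as assumptions):

```python
import numpy as np

def spherical_pixel_bearing(pixel):
    # Normalized (x, y) -> unit bearing vector on the sphere.
    lon = pixel[0] * 2.0 * np.pi
    lat = -pixel[1] * 2.0 * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

center = spherical_pixel_bearing([0.0, 0.0])   # image center looks along +Z
right = spherical_pixel_bearing([0.25, 0.0])   # quarter turn toward +X
```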

pixel_bearing_many(pixels)[source]

Unit vector pointing to the pixel viewing directions.

pixel_bearings(pixels)[source]

Deprecated: use pixel_bearing_many.

project(point)[source]

Project a 3D point in camera coordinates to the image plane.

project_many(points)[source]

Project 3D points in camera coordinates to the image plane.

Features

Tools to extract features.

opensfm.features.extract_features(color_image, config)[source]

Detect features in an image.

The type of feature detected is determined by the feature_type config option.

The coordinates of the detected points are returned in normalized image coordinates.

Returns:
  • points: x, y, size and angle for each feature
  • descriptors: the descriptor of each feature
  • colors: the color of the center of each feature
Return type:tuple
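The normalization can be sketched as follows (a simplified stand-in; the exact convention, scaling by the larger image dimension around the image center, is an assumption):

```python
import numpy as np

def normalize_points(points_px, width, height):
    # Map pixel coordinates to resolution-independent normalized coordinates:
    # the image center maps to (0, 0) and the larger dimension spans 1.
    size = max(width, height)
    p = np.array(points_px, dtype=float)
    p[:, 0] = (p[:, 0] + 0.5 - width / 2.0) / size
    p[:, 1] = (p[:, 1] + 0.5 - height / 2.0) / size
    return p

# For a 4x2 image, the center pixel (1.5, 0.5) maps to the origin.
pts = normalize_points([[1.5, 0.5], [3.5, 1.5]], width=4, height=2)
```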
opensfm.features.load_features(filepath, config)[source]

Load features from filename

opensfm.features.normalize_features(points, desc, colors, width, height)[source]

Normalize feature coordinates and size.

opensfm.features.resized_image(image, config)[source]

Resize image to feature_process_size.

opensfm.features.root_feature_surf(desc, l2_normalization=False, partial=False)[source]

Experimental square-root mapping of SURF-like features; currently only works for 64-dimensional SURF descriptors.

Matching

opensfm.matching.apply_adhoc_filters(data, matches, im1, camera1, p1, im2, camera2, p2)[source]

Apply a set of filter functions, defined further below, that remove static data from images.

opensfm.matching.match(im1, im2, camera1, camera2, data)[source]

Perform matching for a pair of images.

opensfm.matching.match_arguments(pairs, ctx)[source]

Generate arguments for parallel processing of pair matching.

opensfm.matching.match_brute_force(f1, f2, config)[source]

Brute force matching and Lowe’s ratio filtering.

Parameters:
  • f1 – feature descriptors of the first image
  • f2 – feature descriptors of the second image
  • config – config parameters
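A minimal sketch of the idea behind brute-force matching with Lowe's ratio test (L2 distances and the 0.8 ratio are illustrative assumptions, not the library defaults):

```python
import numpy as np

def brute_force_lowe(f1, f2, ratio=0.8):
    # All-pairs squared Euclidean distances between descriptor sets.
    d = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(f1))
    # Lowe's ratio test: keep a match only when the best distance is
    # clearly smaller than the second best (ratio squared, since the
    # distances here are squared).
    keep = d[rows, best] < (ratio ** 2) * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

f1 = np.array([[0.0, 0.0], [5.0, 5.0]])
f2 = np.array([[0.0, 0.1], [3.0, 3.0], [5.0, 5.1]])
matches = brute_force_lowe(f1, f2)
```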
opensfm.matching.match_brute_force_symmetric(fi, fj, config)[source]

Match with brute force in both directions and keep consistent matches.

Parameters:
  • fi – feature descriptors of the first image
  • fj – feature descriptors of the second image
  • config – config parameters
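The symmetric filtering step can be sketched as follows (simplified stand-in with a hypothetical helper name):

```python
def symmetric_matches(matches_ij, matches_ji):
    # Keep only pairs matched consistently in both directions:
    # (i, j) survives when matching j's features back also yields (j, i).
    reverse = {(j, i) for i, j in matches_ji}
    return [m for m in matches_ij if m in reverse]

matches_ij = [(0, 2), (1, 0), (3, 1)]  # i -> j matches
matches_ji = [(2, 0), (1, 3)]          # j -> i matches
consistent = symmetric_matches(matches_ij, matches_ji)
```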
opensfm.matching.match_flann(index, f2, config)[source]

Match using FLANN and apply Lowe’s ratio filter.

Parameters:
  • index – FLANN index of the first image
  • f2 – feature descriptors of the second image
  • config – config parameters
opensfm.matching.match_flann_symmetric(fi, indexi, fj, indexj, config)[source]

Match using FLANN in both directions and keep consistent matches.

Parameters:
  • fi – feature descriptors of the first image
  • indexi – FLANN index of the first image
  • fj – feature descriptors of the second image
  • indexj – flann index of the second image
  • config – config parameters
opensfm.matching.match_images(data, ref_images, cand_images)[source]

Perform pair matchings between two sets of images.

It matches each pair (i, j), with i in ref_images and j in cand_images, under the assumption that matching(i, j) == matching(j, i). This does not hold for non-symmetric matching options such as WORDS. Data is stored in the matching of i only.

opensfm.matching.match_images_with_pairs(data, exifs, ref_images, pairs)[source]

Perform pair matchings given pairs.

opensfm.matching.match_unwrap_args(args)[source]

Wrapper for parallel processing of pair matching.

Compute all pair matchings of a given image and save them.

opensfm.matching.match_words(f1, words1, f2, words2, config)[source]

Match using words and apply Lowe’s ratio filter.

Parameters:
  • f1 – feature descriptors of the first image
  • words1 – the nth closest words for each feature in the first image
  • f2 – feature descriptors of the second image
  • words2 – the nth closest words for each feature in the second image
  • config – config parameters
opensfm.matching.match_words_symmetric(f1, words1, f2, words2, config)[source]

Match using words in both directions and keep consistent matches.

Parameters:
  • f1 – feature descriptors of the first image
  • words1 – the nth closest words for each feature in the first image
  • f2 – feature descriptors of the second image
  • words2 – the nth closest words for each feature in the second image
  • config – config parameters
opensfm.matching.robust_match(p1, p2, camera1, camera2, matches, config)[source]

Filter matches by fitting a geometric model.

If cameras are perspective without distortion, then the Fundamental matrix is used. Otherwise, we use the Essential matrix.

opensfm.matching.robust_match_calibrated(p1, p2, camera1, camera2, matches, config)[source]

Filter matches by estimating the Essential matrix via RANSAC.

opensfm.matching.robust_match_fundamental(p1, p2, matches, config)[source]

Filter matches by estimating the Fundamental matrix via RANSAC.

opensfm.matching.save_matches(data, images_ref, matched_pairs)[source]

Given pairwise matches (image 1, image 2) -> matches, save them so that only images in images_ref store the matches.

opensfm.matching.unfilter_matches(matches, m1, m2)[source]

Given matches and masking arrays, get matches with un-masked indexes.

Incremental Reconstruction

Incremental reconstruction pipeline

class opensfm.reconstruction.ShouldBundle(data, reconstruction)[source]

Helper to keep track of when to run bundle.

class opensfm.reconstruction.ShouldRetriangulate(data, reconstruction)[source]

Helper to keep track of when to re-triangulate.

class opensfm.reconstruction.TrackTriangulator(graph, graph_inliers, reconstruction)[source]

Triangulate tracks in a reconstruction.

Caches shot origins and rotation matrices.

triangulate(track, reproj_threshold, min_ray_angle_degrees)[source]

Triangulate track and add point to reconstruction.

triangulate_dlt(track, reproj_threshold, min_ray_angle_degrees)[source]

Triangulate track using DLT and add point to reconstruction.

triangulate_robust(track, reproj_threshold, min_ray_angle_degrees)[source]

Triangulate track in a RANSAC way and add point to reconstruction.

opensfm.reconstruction.align_two_reconstruction(r1, r2, common_tracks, threshold)[source]

Estimate similarity transform between two reconstructions.

opensfm.reconstruction.bootstrap_reconstruction(data, graph, camera_priors, im1, im2, p1, p2)[source]

Start a reconstruction using two shots.

opensfm.reconstruction.bundle(graph, reconstruction, camera_priors, gcp, config)[source]

Bundle adjust a reconstruction.

opensfm.reconstruction.bundle_local(graph, reconstruction, camera_priors, gcp, central_shot_id, config)[source]

Bundle adjust the local neighborhood of a shot.

opensfm.reconstruction.bundle_single_view(graph, reconstruction, shot_id, camera_priors, config)[source]

Bundle adjust a single camera.

opensfm.reconstruction.compute_image_pairs(track_dict, cameras, data)[source]

All matched image pairs sorted by reconstructability.

opensfm.reconstruction.direct_shot_neighbors(graph, reconstruction, shot_ids, min_common_points, max_neighbors)[source]

Reconstructed shots sharing reconstructed points with a shot set.

opensfm.reconstruction.get_image_metadata(data, image)[source]

Get image metadata as a ShotMetadata object.

opensfm.reconstruction.grow_reconstruction(data, graph, graph_inliers, reconstruction, images, camera_priors, gcp)[source]

Incrementally add shots to an initial reconstruction.

opensfm.reconstruction.incremental_reconstruction(data, graph)[source]

Run the entire incremental reconstruction pipeline.

opensfm.reconstruction.merge_reconstructions(reconstructions, config)[source]

Greedily merge reconstructions with common tracks.

opensfm.reconstruction.merge_two_reconstructions(r1, r2, config, threshold=1)[source]

Merge two reconstructions with common tracks IDs.

opensfm.reconstruction.paint_reconstruction(data, graph, reconstruction)[source]

Set the color of the points from the color of the tracks.

opensfm.reconstruction.pairwise_reconstructability(common_tracks, rotation_inliers)[source]

Likelihood of an image pair yielding a good initial reconstruction.

opensfm.reconstruction.reconstructed_points_for_images(graph, reconstruction, images)[source]

Number of reconstructed points visible on each image.

Returns:A list of (image, num_point) pairs sorted by decreasing number of points.
opensfm.reconstruction.remove_outliers(graph, reconstruction, config, points=None)[source]

Remove points with large reprojection error.

A list of point ids to be processed can be given in points.

opensfm.reconstruction.resect(graph, graph_inliers, reconstruction, shot_id, camera, metadata, threshold, min_inliers)[source]

Try resecting and adding a shot to the reconstruction.

Returns:True on success.
opensfm.reconstruction.retriangulate(graph, graph_inliers, reconstruction, config)[source]

Retriangulate all points.

opensfm.reconstruction.shot_lla_and_compass(shot, reference)[source]

Lat, lon, alt and compass of the reconstructed shot position.

opensfm.reconstruction.shot_neighborhood(graph, reconstruction, central_shot_id, radius, min_common_points, max_interior_size)[source]

Reconstructed shots near a given shot.

Returns:
  • interior: the list of shots at distance smaller than radius
  • boundary: shots sharing at least one point with the interior
Return type:a tuple with interior and boundary

Central shot is at distance 0. Shots at distance n + 1 share at least min_common_points points with shots at distance n.
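The expansion can be sketched as follows; common_points is a hypothetical precomputed dict from shot pairs to their shared reconstructed-point counts (each listed pair shares at least one point), and the helper name is illustrative:

```python
def neighborhood(common_points, central, radius, min_common_points):
    # Grow the interior breadth-first: shots at distance n + 1 must share
    # at least min_common_points points with a shot at distance n.
    interior = {central}
    for _ in range(radius):
        grown = set(interior)
        for (a, b), n in common_points.items():
            if n >= min_common_points:
                if a in interior:
                    grown.add(b)
                if b in interior:
                    grown.add(a)
        interior = grown
    # Boundary: shots outside the interior sharing any point with it.
    boundary = set()
    for (a, b), n in common_points.items():
        if a in interior and b not in interior:
            boundary.add(b)
        if b in interior and a not in interior:
            boundary.add(a)
    return interior, boundary

common = {("A", "B"): 60, ("B", "C"): 60, ("C", "D"): 60, ("A", "D"): 5}
interior, boundary = neighborhood(common, "A", radius=1, min_common_points=50)
```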

opensfm.reconstruction.triangulate_gcp(point, shots)[source]

Compute the reconstructed position of a GCP from observations.

opensfm.reconstruction.triangulate_shot_features(graph, graph_inliers, reconstruction, shot_id, config)[source]

Reconstruct as many tracks seen in shot_id as possible.

opensfm.reconstruction.two_view_reconstruction(p1, p2, camera1, camera2, threshold, iterations)[source]

Reconstruct two views using the 5-point method.

Parameters:
  • p1, p2 – lists of points in the images
  • camera1, camera2 – Camera models
  • threshold – reprojection error threshold
Returns:

rotation, translation and inlier list

opensfm.reconstruction.two_view_reconstruction_general(p1, p2, camera1, camera2, threshold, iterations)[source]

Reconstruct two views from point correspondences.

This will try different reconstruction methods and return the results of the one with the most inliers.

Parameters:
  • p1, p2 – lists of points in the images
  • camera1, camera2 – Camera models
  • threshold – reprojection error threshold
Returns:

rotation, translation and inlier list

opensfm.reconstruction.two_view_reconstruction_plane_based(p1, p2, camera1, camera2, threshold)[source]

Reconstruct two views from point correspondences lying on a plane.

Parameters:
  • p1, p2 – lists of points in the images
  • camera1, camera2 – Camera models
  • threshold – reprojection error threshold
Returns:

rotation, translation and inlier list

opensfm.reconstruction.two_view_reconstruction_rotation_only(p1, p2, camera1, camera2, threshold)[source]

Find rotation between two views from point correspondences.

Parameters:
  • p1, p2 – lists of points in the images
  • camera1, camera2 – Camera models
  • threshold – reprojection error threshold
Returns:

rotation and inlier list

Config

opensfm.config.default_config()[source]

Return default configuration

opensfm.config.load_config(filepath)[source]

Load config from a config.yaml filepath