Electronic Theses and Dissertations

Identifier

6774

Date

2021

Document Type

Thesis

Degree Name

Master of Science

Major

Electrical and Computer Engineering

Concentration

Electrical Engineering

Committee Chair

Eddie Jacobs

Committee Member

Ana Doblas

Committee Member

Aaron Robinson

Committee Member

Lan Wang

Abstract

Different types of 3D sensors, such as LiDAR and RGB-D cameras, capture data with different resolution, range, and noise characteristics. It is often desirable to merge these different types of data into a coherent scene, but automatic alignment algorithms generally assume that all fragments share similar characteristics. Our goal is to evaluate the performance of these algorithms on data with differing characteristics in order to enable the integration of data from multiple types of sensors. We use the Redwood dataset, which contains high-resolution scans of several environments captured with a stationary LiDAR scanner. We first develop a method to emulate the capture of these environments as viewed by different types of sensors, leveraging OpenGL and a mesh creation process. Next, we extract fragments of these captures that represent scenarios in which each type of sensor would be used, drawing on our scanning experience to inform the selection. Finally, we attempt to merge the fragments using several automatic algorithms and evaluate how the results compare with the original scenes. We evaluate based on similarity of the recovered transformations to ground truth, algorithm speed and ease of use, and subjective quality assessments.
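The abstract mentions scoring alignments by transformation similarity to ground truth. Below is a minimal sketch, not taken from the thesis, of one common way to do this: comparing an estimated 4x4 rigid transform against the ground-truth transform via a rotation-angle error and a translation-distance error. The function name, metric choice, and example values are illustrative assumptions.

import numpy as np

def transform_error(T_est: np.ndarray, T_gt: np.ndarray):
    """Return (rotation error in degrees, translation error) between two
    4x4 homogeneous transforms."""
    # Relative transform: how far the estimate is from ground truth.
    T_delta = np.linalg.inv(T_gt) @ T_est
    R, t = T_delta[:3, :3], T_delta[:3, 3]
    # Rotation angle recovered from the trace of the relative rotation.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))
    trans_err = np.linalg.norm(t)
    return rot_err_deg, trans_err

# Example: an estimate that is off by a 2-degree rotation about z and a 5 cm offset.
theta = np.radians(2.0)
T_est = np.eye(4)
T_est[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]]
T_est[:3, 3] = [0.05, 0.0, 0.0]
T_gt = np.eye(4)

print(transform_error(T_est, T_gt))  # approximately (2.0 degrees, 0.05 units)

In practice, point-cloud libraries such as Open3D provide the automatic registration algorithms themselves; the sketch above covers only the ground-truth comparison step.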

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to the local University of Memphis Electronic Theses & Dissertations (ETD) Repository.
