Electronic Theses and Dissertations

Date

2022

Document Type

Thesis

Degree Name

Master of Science

Department

Computer Science

Committee Chair

Deepak Venugopal

Committee Member

Nirman Kumar

Committee Member

Myounggyu Won

Abstract

Visual Question Answering (VQA) is a challenge problem that can advance AI by integrating several important sub-disciplines, including natural language understanding and computer vision. Large, publicly available VQA datasets for training and evaluation have driven the growth of VQA models that achieve increasingly higher accuracy scores. However, it is also important to understand how well a model grasps the details provided in a question. For example, studies in psychology have shown that syntactic complexity places a greater cognitive load on humans. Analogously, we want to understand whether models have the perceptual capability to handle modifications to questions. Therefore, we develop a new dataset using Amazon Mechanical Turk, in which we ask workers to add modifiers to questions based on object properties and spatial relationships. We use this dataset to evaluate LXMERT, a state-of-the-art VQA model that focuses extensively on language processing. Our results indicate that the model's performance degrades significantly when questions are modified to include more detailed information.
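To make the evaluation setup concrete, the following is a minimal sketch of querying LXMERT through the HuggingFace transformers implementation, assuming the publicly released unc-nlp/lxmert-vqa-uncased checkpoint. The random tensors stand in for the Faster R-CNN region features LXMERT actually expects, and the example question is hypothetical, so this illustrates the interface rather than the exact pipeline used in the thesis.

```python
# Minimal sketch: querying LXMERT on a modified VQA question.
# Assumes the public unc-nlp/lxmert-vqa-uncased checkpoint; the random
# tensors below are placeholders for real Faster R-CNN region features.
import torch
from transformers import LxmertTokenizer, LxmertForQuestionAnswering

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-vqa-uncased")
model = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-vqa-uncased")
model.eval()

# A hypothetical question with an added spatial-relationship modifier,
# in the spirit of the crowdsourced modifications described above.
question = "What color is the dog sitting to the left of the chair?"
inputs = tokenizer(question, return_tensors="pt")

# Placeholder image input: 36 region features (2048-d) and normalized
# bounding-box coordinates, as LXMERT expects from an object detector.
visual_feats = torch.randn(1, 36, 2048)
visual_pos = torch.rand(1, 36, 4)

with torch.no_grad():
    outputs = model(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        visual_feats=visual_feats,
        visual_pos=visual_pos,
    )

# Logits over the VQA answer vocabulary; comparing predictions for a
# base question vs. its modified form is one way to probe sensitivity.
answer_idx = outputs.question_answering_score.argmax(dim=-1).item()
print("predicted answer index:", answer_idx)
```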

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to ProQuest.

Notes

Open Access
