3D Question Answering

1City University of Hong Kong, 2Microsoft Cloud AI, 3University of California San Diego

(Submitted on 15 Dec 2021, accepted on 29 Nov 2022)

News: The dataset and the code have been released! The benchmark and data browser will be available in the future.


Illustration of natural-language, free-form, open-ended questions collected for 3D scenes via Amazon Mechanical Turk (AMT).

Questions in our ScanQA dataset cover a comprehensive variety of appearance and geometry concepts. For example, Q1 requires information not only on the appearance of a single object, but also on its geometry and a relative spatial comparison between objects. Q2 captures placement information aggregated from more than two objects. Q3 requires the ability to navigate to the viewpoint as well as a spatial comparison to the average value.


Abstract

Visual Question Answering (VQA) has experienced tremendous progress in recent years. However, most efforts have only focused on 2D image question-answering tasks.

In this paper, we present the first attempt at extending VQA to its 3D counterpart, 3D question answering (3DQA), which can facilitate a machine's perception of 3D real-world scenarios.

Unlike 2D image VQA, 3DQA takes a colored point cloud as input and requires both appearance and 3D geometry comprehension to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework, "3DQA-TR", which consists of two encoders that exploit appearance and geometry information, respectively. Finally, the multi-modal information about the appearance, the geometry, and the linguistic question can attend to each other via a 3D-Linguistic BERT to predict the target answers.

To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset, "ScanQA", which builds on the ScanNet dataset and contains over 10K question-answer pairs for 806 scenes. To the best of our knowledge, ScanQA is the first large-scale, fully human-annotated dataset with natural-language questions and free-form answers in 3D environments. We also use several visualizations and experiments to investigate the astonishing diversity of the collected questions and the significant differences between this task and both 2D VQA and 3D captioning.

Extensive experiments on this dataset demonstrate the obvious superiority of our proposed 3DQA framework over existing VQA frameworks, and the effectiveness of our major designs.

Dataset Information


ScanQA is the first dataset for the 3D Question Answering (3DQA) task; it provides natural-language, free-form, and open-ended questions and answers over free-perspective 3D scans. In total, the ScanQA dataset provides 10,062 questions (12.48 questions per scene on average) covering 1,613 scans of 806 scenes from ScanNet.

A rich collection of interesting and diverse questions with various viewpoints was gathered under a carefully designed data collection strategy. To ensure the quality of questions and answers, measures such as an online robot checker, manual correction, and a syntax check are used. In addition to the answers, the viewpoint, camera settings, and annotators' confidence are also provided. A hypothetical loading example is sketched below.
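
The snippet below is a minimal sketch of how one might load and inspect ScanQA entries, assuming the question-answer pairs are distributed as a single JSON file. The file name and the field names ("scene_id", "question", "answer", "viewpoint", "confidence") are hypothetical placeholders for illustration only, not the released schema.

    # Hypothetical example of loading and inspecting ScanQA entries.
    # File name and field names are illustrative assumptions.
    import json
    from collections import Counter

    with open("scanqa_train.json") as f:      # hypothetical file name
        entries = json.load(f)

    print(f"{len(entries)} question-answer pairs")
    print(f"{len({e['scene_id'] for e in entries})} distinct ScanNet scans")

    # Inspect one annotation: question text, free-form answer, and the
    # metadata described above (viewpoint, camera, annotator confidence).
    sample = entries[0]
    print(sample["question"], "->", sample["answer"])
    print("viewpoint:", sample.get("viewpoint"),
          "confidence:", sample.get("confidence"))

    # Rough per-scan question count (the page reports ~12.48 per scene).
    per_scan = Counter(e["scene_id"] for e in entries)
    print("avg questions per scan:", sum(per_scan.values()) / len(per_scan))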

3DQA-TR Framework and Results


Illustration of our pipeline. The 3DQA-TR framework has three jointly trainable parts: an appearance encoder (red), a geometry encoder (orange), and a 3D-L BERT. The geometry encoder provides both the structure information of each object and the positional and scale information needed to model spatial relationships between objects. The appearance encoder extracts appearance information. Given the encoded appearance and geometry together with the question embedding, the 3D-L BERT attends to intra- and inter-modal interactions and predicts the answer.
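
To make the three-part design concrete, here is a minimal PyTorch-style sketch of such a pipeline. The feature dimensions, placeholder encoders, and fusion details are assumptions for illustration; they do not reproduce the authors' released implementation.

    # A minimal sketch of a two-encoder + transformer-fusion 3DQA model.
    # Dimensions and module choices are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ThreeDQATRSketch(nn.Module):
        def __init__(self, hidden=768, num_answers=1000):
            super().__init__()
            # Appearance encoder: maps per-object appearance features
            # extracted from the colored point cloud (placeholder MLP).
            self.appearance_enc = nn.Sequential(nn.Linear(256, hidden), nn.ReLU())
            # Geometry encoder: encodes object structure plus position/scale
            # so spatial relationships between objects can be modeled.
            self.geometry_enc = nn.Sequential(nn.Linear(9, hidden), nn.ReLU())
            # Stand-in for the 3D-L BERT: a transformer encoder over the
            # concatenated question and object tokens, so intra- and
            # inter-modal attention happen in the same stack.
            layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                               batch_first=True)
            self.fusion = nn.TransformerEncoder(layer, num_layers=4)
            self.answer_head = nn.Linear(hidden, num_answers)

        def forward(self, obj_appearance, obj_geometry, question_emb):
            # obj_appearance: (B, N_obj, 256), obj_geometry: (B, N_obj, 9),
            # question_emb: (B, N_tok, hidden) from a pretrained text encoder.
            obj_tokens = (self.appearance_enc(obj_appearance)
                          + self.geometry_enc(obj_geometry))
            tokens = torch.cat([question_emb, obj_tokens], dim=1)
            fused = self.fusion(tokens)
            # Predict the answer from the first (question [CLS]-like) token.
            return self.answer_head(fused[:, 0])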


Example predictions from a 2D VQA baseline and our 3DQA-TR, together with human answers in ScanQA.

Acknowledgement

The annotation webpage is based on ScanRefer. We thank the authors for sharing their codebase. We would also like to give our particular thanks to all the annotators, and to Jiaying Lin for providing the initial idea and for contributing to the annotation collection webpages and server.

BibTeX

@article{ye2021tvcg3dqa,
  title   = {3D Question Answering},
  author  = {Shuquan Ye and Dongdong Chen and Songfang Han and Jing Liao},
  journal = {IEEE Transactions on Visualization \& Computer Graphics},
  year    = {2021},
  issn    = {1941-0506},
  pages   = {1-16},
  doi     = {10.1109/TVCG.2022.3225327}
}