SceneNN: A Scene Meshes Dataset with aNNotations

Binh-Son Hua1, Quang-Hieu Pham2, Duc Thanh Nguyen3, Minh-Khoi Tran2, Lap-Fai Yu4, and Sai-Kit Yeung5

1The University of Tokyo 2Singapore University of Technology and Design 3Deakin University
4George Mason University 5The Hong Kong University of Science and Technology

We introduce an RGB-D scene dataset consisting of more than 100 indoor scenes. The scenes were captured in a variety of places, e.g., offices, dormitories, classrooms, and pantries, at the University of Massachusetts Boston and the Singapore University of Technology and Design.
All scenes are reconstructed into triangle meshes and carry both per-vertex and per-pixel annotations. We further enrich the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses.
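For reference, here is a minimal sketch of how the per-vertex annotations could be read from a reconstructed mesh. It assumes each scene is a PLY file whose vertices carry an integer "label" property (the property name and the example file name are assumptions, not part of the official format) and uses the third-party plyfile package:

    # Minimal sketch: read vertex positions and per-vertex labels from a
    # SceneNN-style PLY file. The "label" property name is an assumption;
    # inspect the actual files to confirm. Requires: pip install plyfile
    import numpy as np
    from plyfile import PlyData

    def load_scene(path):
        ply = PlyData.read(path)
        verts = ply["vertex"]
        # Vertex positions as an (N, 3) float array.
        xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=-1)
        # Per-vertex annotation, if the file provides it.
        names = verts.data.dtype.names
        labels = np.asarray(verts["label"]) if "label" in names else None
        return xyz, labels

    xyz, labels = load_scene("005.ply")  # hypothetical scene file name
    print(xyz.shape, None if labels is None else np.unique(labels))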

Dataset & Tools

CVPR 2018 (evaluating semantic segmentation with NYU-D v2 40 classes)

- HKUST server (Training data, 8.7 GB)
- Google Drive (Mirror link, 8.7 GB)
- Google Drive (Raw PLY files, 4.0 GB)

3DV 2016 (original instance annotation of 100+ scenes)

- HKUST server
- Annotation tool (Windows x64)

Discussion

Please email us at scenenn [at] gmail.com for any inquiries. You can also post to the discussion board below.


Acknowledgements

We are grateful to the anonymous reviewers for their constructive comments. We thank Fangyu Lin for his assistance with the data capture and development of the WebGL viewer, and Guoxuan Zhang for his help with the early version of the annotation tool.

Lap-Fai Yu is supported by the University of Massachusetts Boston StartUp Grant P20150000029280 and by the Joseph P. Healey Research Grant Program provided by the Office of the Vice Provost for Research and Strategic Initiatives & Dean of Graduate Studies of the University of Massachusetts Boston. This research is supported by the National Science Foundation under award number 1565978. We also thank NVIDIA Corporation for its graphics card donation.

Sai-Kit Yeung is supported by Singapore MOE Academic Research Fund MOE2013-T2-1-159 and SUTD-MIT International Design Center Grant IDG31300106. We acknowledge the support of the SUTD Digital Manufacturing and Design (DManD) Centre which is supported by the National Research Foundation (NRF) of Singapore. This research is also supported by the National Research Foundation, Prime Minister's Office, Singapore under its IDM Futures Funding Initiative.