Bundle-Adjusting Accelerated Neural Graphics Primitives
BAA-NGP Overview:
Given a set of camera poses and associated images, implicit neural representation (INR) models can be trained to synthesize novel, unseen views. However, when ground-truth camera poses are not available during training, either off-the-shelf tools such as COLMAP or bundle-adjusting neural radiance fields (BARF) are used to estimate camera poses as a pre-processing step. While COLMAP suffers from feature-matching failures, BARF is extremely slow to train. To address these challenges, we propose a framework called bundle-adjusting accelerated neural graphics primitives (BAA-NGP). Our approach leverages accelerated sampling and hash encoding to expedite both pose refinement/estimation and 3D scene reconstruction. Experimental results demonstrate that our method achieves a 10- to 20-fold speedup in novel view synthesis compared to other bundle-adjusting neural radiance field methods, without sacrificing pose-estimation quality.
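To make the hash-encoding component concrete, here is a minimal NumPy sketch of a single-level hashed-grid feature lookup in the spirit of the multiresolution hash encoding that BAA-NGP builds on (Instant-NGP style). All names, table sizes, and the 2D setting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative spatial-hashing primes (one per dimension), as in Instant-NGP-style encodings.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_grid_lookup(xy, table, resolution):
    """Bilinearly interpolate features stored in a hashed 2D grid (toy sketch).

    xy:         (N, 2) query points in [0, 1]^2
    table:      (T, F) learnable feature table (T entries, F features each)
    resolution: grid resolution at this level
    """
    T = np.uint64(table.shape[0])
    scaled = xy * resolution
    base = np.floor(scaled).astype(np.uint64)  # lower-left grid corner per point
    frac = scaled - base                       # bilinear interpolation weights

    feats = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            corner = base + np.array([dx, dy], dtype=np.uint64)
            # Spatial hash: XOR of coordinate-prime products, modulo table size.
            idx = ((corner[:, 0] * PRIMES[0]) ^ (corner[:, 1] * PRIMES[1])) % T
            wx = frac[:, 0] if dx else 1.0 - frac[:, 0]
            wy = frac[:, 1] if dy else 1.0 - frac[:, 1]
            feats = feats + (wx * wy)[:, None] * table[idx]
    return feats
```

In the full method, several such levels at increasing resolutions are concatenated and fed to a small MLP, and both the feature tables and the camera poses are optimized jointly by gradient descent; the fast table lookup is what makes the joint optimization tractable.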
[Figure: qualitative comparison of BARF vs. BAA-NGP, showing synthesized images and depth maps for real scenes (LLFF dataset) and synthetic objects (Blender dataset).]
The figure shows test-view synthesis. Similar to BARF, we start with imperfect camera pose estimates and perform camera pose refinement and view synthesis simultaneously. Our method converges faster, with clearer backgrounds and better details.