Neural Underwater Scene Representation

Yunkai Tang1,2,‡, Chengxuan Zhu3,‡, Renjie Wan4,*, Chao Xu3, Boxin Shi1,2,*
1 Natl. Engineering Research Center of Visual Technology, School of Computer Science, Peking University
2 Natl. Key Lab. for Multimedia Information Processing, School of Computer Science, Peking University
3 Natl. Key Lab of General AI, School of Intelligence Science and Technology, Peking University
4 Department of Computer Science, Hong Kong Baptist University
‡ Equal contribution, * Corresponding authors.

Underwater Video Rendering

[Video comparison: Ours vs. DynamicNeRF]

Abstract

Among the numerous efforts towards digitally recovering the physical world, Neural Radiance Fields (NeRFs) have proved effective in most cases. However, underwater scenes introduce unique challenges due to the absorbing water medium, local changes in lighting, and dynamic contents in the scene. We develop a neural underwater scene representation to address these challenges, modeling the complex processes of attenuation, unstable in-scattering, and moving objects during light transport. The proposed method reconstructs scenes from both established datasets and in-the-wild videos with outstanding fidelity.
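To make the attenuation and in-scattering terms concrete, the sketch below implements the classic per-channel underwater image-formation model (in the style of Akkaynak and Treibitz): observed color is the scene radiance attenuated by the water column plus a backscatter component that saturates with distance. This is an illustrative background model, not the paper's actual representation; all function names and coefficient values here are made up for the example.

```python
import math

def underwater_color(direct, backscatter, beta_d, beta_b, depth):
    """Per-channel underwater image formation (illustrative only):
    observed = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z)),
    where J is the unattenuated scene color, B_inf the veiling (backscatter)
    color at infinity, beta_d/beta_b attenuation coefficients, z the range."""
    observed = []
    for j, b_inf, bd, bb in zip(direct, backscatter, beta_d, beta_b):
        attenuated = j * math.exp(-bd * depth)          # direct signal decays with range
        scattered = b_inf * (1.0 - math.exp(-bb * depth))  # backscatter builds up with range
        observed.append(attenuated + scattered)
    return observed

# Red attenuates fastest underwater, so distant objects shift toward blue-green.
near = underwater_color([0.8, 0.6, 0.5], [0.05, 0.25, 0.35],
                        [0.60, 0.15, 0.08], [0.60, 0.15, 0.08], depth=1.0)
far = underwater_color([0.8, 0.6, 0.5], [0.05, 0.25, 0.35],
                       [0.60, 0.15, 0.08], [0.60, 0.15, 0.08], depth=20.0)
```

At `depth=0` the formula returns the unattenuated color, and as depth grows the output converges to the backscatter color, which matches the hazy blue-green look of distant underwater content.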

Method Pipeline


Dataset

Scenes: Curacao, Redsea, Japanese, Panama, Coral, Composite, Sardine, Turtle

Underwater Restoration

[Qualitative underwater restoration results]

BibTeX

@inproceedings{tang2024uwnerf,
  author    = {Tang, Yunkai and Zhu, Chengxuan and Wan, Renjie and Xu, Chao and Shi, Boxin},
  title     = {Neural Underwater Scene Representation},
  booktitle = {CVPR},
  year      = {2024},
}