NeRF in the wild

No, nothing about shooting foam darts. Rather, this paper is about essentially crowdsourcing images of places/spaces and then using neural nets to construct a synthetic 3D scene. The tricks here are dealing with varying lighting and camera angles, as well as getting rid of transient occlusions (e.g. cars or people in the shot). While standard photogrammetry techniques have gotten pretty good at constructing 3D from still images, being able to re-simulate correct lighting is no easy task.
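
To make the core idea concrete, here is a minimal sketch (not the authors' code) of how a NeRF-style model can be conditioned to handle both problems: a learned per-image "appearance" embedding lets lighting vary between photos, and a separate "transient" head soaks up occluders that can simply be dropped at render time. All names and dimensions (StaticTransientNeRF, appearance_dim, etc.) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class StaticTransientNeRF(nn.Module):
    def __init__(self, num_images, pos_dim=63, dir_dim=27,
                 appearance_dim=48, transient_dim=16, hidden=256):
        super().__init__()
        # One learned embedding per training photo captures its lighting/exposure.
        self.appearance = nn.Embedding(num_images, appearance_dim)
        self.transient = nn.Embedding(num_images, transient_dim)

        # Shared trunk over positionally encoded 3D points.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.static_density = nn.Linear(hidden, 1)
        # Static color depends on view direction AND the photo's appearance code.
        self.static_color = nn.Sequential(
            nn.Linear(hidden + dir_dim + appearance_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        # Transient head: extra density/color that explains occluders seen in one photo only.
        self.transient_head = nn.Sequential(
            nn.Linear(hidden + transient_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 4),  # 1 density + 3 color channels
        )

    def forward(self, x_enc, d_enc, image_ids):
        h = self.trunk(x_enc)
        sigma_static = torch.relu(self.static_density(h))
        a = self.appearance(image_ids)
        rgb_static = self.static_color(torch.cat([h, d_enc, a], dim=-1))

        t = self.transient(image_ids)
        raw = self.transient_head(torch.cat([h, t], dim=-1))
        sigma_transient = torch.relu(raw[..., :1])
        rgb_transient = torch.sigmoid(raw[..., 1:])
        # At render time you keep only the static parts, so cars and people vanish,
        # and you can plug in any appearance embedding to pick the lighting you want.
        return sigma_static, rgb_static, sigma_transient, rgb_transient
```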

Applications? Well, a VR developer who wants geo-specific terrain in an environment could generate it by algorithm rather than by hand (and by hand means high cost). More broadly, historians, city planners, architects, tourists: the sky is the limit. And in a Covid-19 world where travel is limited, the ability to virtually immerse yourself in a nearby or faraway place becomes pretty enticing.