Advances in creating 3D models from 2D photographs are getting downright amazing. This month a team of computer vision researchers from UC Berkeley, UC San Diego, and Google Research showed off their NeRF (Neural Radiance Fields) technique for "view synthesis" on a variety of objects captured as 2D images, and the level of detail extracted is astonishing:
Their research paper is here, and they've posted the code to GitHub.
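For the curious, the core idea is surprisingly compact: NeRF trains a neural network that maps a 3D position and viewing direction to a color and a density, then renders novel views by compositing those colors along each camera ray. Here's a minimal sketch of that compositing step (the paper's volume-rendering formula), assuming numpy and per-sample densities, colors, and spacings you'd get from querying such a network along a ray; this is a toy illustration, not the team's actual implementation:

```python
import numpy as np

def composite_ray(sigmas, rgbs, deltas):
    """Alpha-composite sampled colors along one ray, NeRF-style.

    sigmas: (N,) volume densities at samples along the ray
    rgbs:   (N, 3) colors at those samples
    deltas: (N,) distances between consecutive samples
    Returns the final (3,) pixel color.
    """
    # Opacity contributed by each sample segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Weight each sample's color by opacity times transmittance.
    weights = alphas * trans
    return (weights[:, None] * rgbs).sum(axis=0)

# A dense red sample in front of a green one: the red dominates.
sigmas = np.array([50.0, 1.0])
rgbs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deltas = np.array([0.5, 0.5])
pixel = composite_ray(sigmas, rgbs, deltas)
```

The trick is that because this rendering is differentiable, the network can be trained end-to-end just by comparing rendered pixels against the input photographs.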