
We’re Getting Closer to Creating 3D Models from Single 2D Photographs

As industrial designer Eric Strebel showed us in “How to 3D Scan an Object, Without a 3D Scanner,” if you take a crapload of 2D photographs, photogrammetry software can stitch them into a workable 3D model.

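For a sense of how that multi-photo approach works under the hood, here’s a minimal sketch of the core step: matching features between two views and triangulating them into 3D points with OpenCV. The file names and camera intrinsics below are placeholder assumptions; real photogrammetry tools repeat this across dozens of calibrated views, then densify and mesh the result.

```python
# Minimal two-view triangulation with OpenCV: the core step that
# photogrammetry pipelines repeat across many overlapping photos.
import cv2
import numpy as np

# Hypothetical input: two photos of the same object from different angles.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Assumed pinhole intrinsics (focal length, principal point); real
# pipelines calibrate these or estimate them from EXIF metadata.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# 1. Detect and match local features between the two views.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the relative camera motion from the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the inlier matches into 3D points.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
P2 = K @ np.hstack([R, t])                         # second camera, recovered pose
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
points3d = (pts4d[:3] / pts4d[3]).T                # homogeneous -> Euclidean

print(f"Recovered {len(points3d)} 3D points from one image pair")
```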

But what if you’ve only got a single 2D shot? Since this is the year 2020, you’d think that by now we’d have Esper machines. That’s what Deckard used in the original Blade Runner, set in 2019, to zoom around in a 2D photo and uncover data hidden in the third dimension.


We might not have Esper machines yet, but we’re getting closer. Last fall computer vision researchers Simon Niklaus, Long Mai, Jimei Yang and Feng Liu, of Portland State University and Adobe Research, released a paper called “3D Ken Burns Effect from a Single Image,” detailing a neural network they’d created to pull the trick off: it estimates a depth map from a single photograph, then synthesizes a moving-camera parallax effect from the recovered geometry. An unaffiliated experimental coder named Jonathan Fly subsequently applied older depth-mapping techniques to images from the research paper and shared the results:

“Some images do well even with the older stuff, others are riddled with artifacts,” Fly writes. “I did use dramatic angles to make the failures stand out, in part because I do love a good artifact.”

via BoingBoing
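
If you want to experiment with the underlying idea, the first stage of the paper’s pipeline, estimating depth from one photo, is easy to approximate with an off-the-shelf model. Here’s a rough sketch using the pretrained MiDaS network loaded through torch.hub; note that MiDaS is a different depth estimator than the one in the paper, and the input file name is a placeholder.

```python
# Single-image depth estimation with a pretrained MiDaS model loaded
# via torch.hub. A stand-in for the paper's own network, just to
# illustrate recovering depth from one photograph.
import cv2
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()

midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# Hypothetical input: any single RGB photograph.
img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = model(transform(img))
    # Resize the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().numpy()

# Normalize to 0-255 and save as a grayscale depth map; MiDaS predicts
# inverse depth, so brighter pixels are closer to the camera.
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth.png", depth.astype("uint8"))
```

From a depth map like this, the Ken Burns effect amounts to projecting pixels into 3D, moving a virtual camera through them, and inpainting whatever gets disoccluded along the way, which is the part the paper’s network handles end to end.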

https://www.core77.com/posts/95474/Were-Getting-Closer-to-Creating-3D-Models-from-Single-2D-Photographs