Taking photo editing into the next dimension
As anyone who has tried to edit out an unphotogenic skip from a pretty street scene knows, photo editing is tricky. It would often be useful to be able to move things around afterwards, but once you’ve chosen your shot, you’re stuck with it. Or are you? UCL Computer Science researchers present a framework to create ‘interactive images’, which let you do ‘real-world’ editing: drag, stretch and move objects in the photo as if they were in front of you in real life.
Until now, this sort of manipulation has meant recreating a detailed virtual model of the scene, papering it in textures, manipulating the digital model and then re-rendering the shot – all of which takes time. Youyi Zheng and Niloy Mitra from the Virtual Environments and Computer Graphics Group present their solution, which automatically recovers depth information from a 2D image, on Thursday 9th August at SIGGRAPH 2012. See it at work in this video.
A key insight came when they realised that many man-made environments are highly structured, which lets them make simplifying assumptions that keep the analysis tractable. Their technique fits boxes around objects in the image and then runs a few reality checks to make a best guess at what goes where: making sure objects do not interpenetrate, and that the perspective still works. Additional cues come from common human preferences, such as tables being parallel to the floor, verticals staying upright, and so on.
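The flavour of those reality checks can be sketched in a few lines of code. This is a simplified illustration, not the researchers' actual implementation: it assumes each proxy is an axis-aligned box given by its minimum and maximum corners, and it checks only two of the cues mentioned above (no interpenetration, and objects resting on or above the floor plane).

```python
# Hedged sketch of plausibility checks on fitted box proxies.
# Assumption: each box is axis-aligned, represented as (min_corner, max_corner);
# the published method handles general oriented cuboids.

def boxes_interpenetrate(a, b):
    """True if two axis-aligned boxes overlap along every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

def scene_is_plausible(boxes, floor_y=0.0, tol=1e-3):
    """Reject scene interpretations that fail simple reality checks."""
    # Check 1: no two objects may occupy the same space.
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_interpenetrate(boxes[i], boxes[j]):
                return False
    # Check 2 (man-made-scene prior): every box rests on or above the floor.
    return all(bmin[1] >= floor_y - tol for bmin, _ in boxes)
```

For example, a lamp box sitting exactly on top of a table box passes both checks, while a box sunk through the tabletop or below the floor is rejected.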
Each of those blocks can then be manipulated while the code keeps track of what is in the background, what is hidden from the line of sight, where shadows should fall, and so on. Finally, an optimisation over the whole scene finds the camera position and box parameters that best match the edges detected in the source image.
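The idea behind that final fitting step can be illustrated with a toy version. In this hedged sketch (not the paper's implementation), only the camera's focal length is unknown, the camera model is a deliberately simplified pinhole, and the "detected edges" are reduced to a set of 2D points; the cost measures how far the projected box corners land from them, and a coarse grid search stands in for the paper's continuous optimisation over camera position and box parameters.

```python
import numpy as np

def project(points, f, cam_height=1.5):
    """Simplified pinhole camera (an assumption): placed at (0, cam_height, 0),
    looking along -z, with focal length f."""
    p = points - np.array([0.0, cam_height, 0.0])
    return np.stack([f * p[:, 0] / -p[:, 2], f * p[:, 1] / -p[:, 2]], axis=1)

def edge_match_cost(box_corners, f, image_edge_pts):
    """Sum of distances from each projected corner to its nearest detected edge point."""
    proj = project(box_corners, f)
    d = np.linalg.norm(proj[:, None, :] - image_edge_pts[None, :, :], axis=2)
    return d.min(axis=1).sum()

def best_focal_length(box_corners, image_edge_pts, candidates):
    """Grid search standing in for the paper's continuous optimisation."""
    return min(candidates, key=lambda f: edge_match_cost(box_corners, f, image_edge_pts))
```

With edge points generated from a known focal length, the search recovers that focal length, because the cost there drops to zero; the real system optimises many more parameters against genuine image edges in the same spirit.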
The researchers envisage this being useful for photographers, videographers and artists, as well as home shoppers who would like to see how their new purchases will look in their own home, since this technique is fast enough to carry out in real time.
You can read the full publication text and obtain the source code on the publication’s webpage: ‘Interactive images: cuboid proxies for smart image manipulation’.