Making a 3D surface from two photographs: one with flash and one without
I was left speechless...
Textured graphics can be captured in a flash - tech - 27 August 2008 - New Scientist Tech
But it is much better to read it at this link: The virtual worlds in computer games provide a realistic backdrop to the action. But step too close and the effect is lost – you'll see that textures and patterns are usually displayed on flat surfaces that look dull and artificial.
A simpler way to add depth to textured surfaces could change that.
The new technique can reconstruct the depth of a surface simply by taking two photos of it – one with a flash and one without (see video, right). Merely analysing the resulting shading patterns can capture the surface's 3D texture.
Until now, making realistic textures required the use of bulky and expensive laser scanners, says Mashhuda Glencross at the University of Manchester, UK. And the process is really time-consuming, she adds.
3D in a flash
Glencross and the Manchester team worked with Gregory Ward at Dolby Canada in Vancouver to develop their quick and cheap alternative.
At the heart of the technique is the assumption that the brightness of a pixel in the image is related to its depth in the real scene. Parts of the surface deep in a crack or pit receive light from a restricted area of the sky, and appear relatively dark.
By contrast, protruding parts of the surface receive more light and appear brighter in a photo.
But the colour of the surface also affects its brightness in a photo. With the same illumination, light-coloured spots appear brighter than dark ones.
Taking a photo using the flash removes that effect. The surface is flooded with light and the camera can record the true colour of every part it can see, even those in cracks and pits.
The flashlight image is paired up with a photo taken without extra lighting. Software then compares the brightness of every matching pair of pixels in the two images and calculates how much of a pixel's brightness is down to its position, and how much is due to its colour.
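The per-pixel comparison can be sketched with a toy example. This is a minimal illustration, not the authors' actual algorithm; the image values below are made up, and it assumes grayscale intensities in [0, 1] where the flash image approximates the surface's true colour:

```python
import numpy as np

# Hypothetical 2x2 grayscale images, values in [0, 1].
# flash: surface flooded with light -> approximates true colour (albedo).
# no_flash: ambient light only -> colour multiplied by geometric shading.
flash = np.array([[0.8, 0.4],
                  [0.8, 0.4]])
no_flash = np.array([[0.4, 0.2],
                     [0.2, 0.1]])

# Dividing the ambient image by the flash image cancels the colour term,
# leaving a per-pixel shading factor that depends only on geometry:
# bright factor -> exposed surface, dark factor -> crack or pit.
shading = no_flash / np.maximum(flash, 1e-6)

print(shading)
```

In this toy case the top row gets a shading factor of 0.5 and the bottom row 0.25, so the bottom pixels are inferred to sit deeper, even though the left and right columns differ in colour.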
That information is used to produce a realistic rendering of a surface's texture. By altering the direction of illumination on the virtual surface the system can generate realistic shadow effects.
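Relighting a recovered height field can be sketched as follows. This is an assumed, simplified Lambertian model with a made-up height field, not the system described in the article:

```python
import numpy as np

# Hypothetical 3x3 height field (a ridge down the middle column)
# and a flat albedo of 0.8 everywhere.
height = np.array([[0.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0]])
albedo = np.full((3, 3), 0.8)

# Surface normals from the height gradients.
gy, gx = np.gradient(height)
normals = np.dstack([-gx, -gy, np.ones_like(height)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# A directional light coming from the right; Lambertian shading
# clamps to zero on faces turned away from the light.
light = np.array([1.0, 0.0, 1.0])
light = light / np.linalg.norm(light)

relit = albedo * np.clip(normals @ light, 0.0, None)
print(relit.round(2))
```

Moving the `light` vector changes which slopes of the ridge fall into shadow: here the left-facing slope receives no light at all, while the right-facing slope is fully lit.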
Spot the difference
To test the realism of the results, the researchers asked 20 volunteers to compare images of a surface made using two photos to versions of the same surface rendered using laser scans. The volunteers couldn't tell the difference.
The new technique is already being used to add depth and realism to the ancient carvings that will appear in Maya Skies – a full-dome digital projection for planetariums that tells the story of the Mayan people. Maya Skies will be released in 2009.
Glencross and Ward presented their results at the SIGGRAPH conference in Los Angeles last week.
Basically, I think it extracts the relief because, in theory, the more distant areas look darker than the nearby ones, but since there are also variations in brightness and colour it cannot be done with a single photograph. That is why it is corrected with the flash, which removes those variations: they are "subtracted" using the flash photograph, and the 3D is extracted. Apparently people cannot tell the difference between this technique and a laser scan.

Capturing 3D Surfaces Simply With a Flash Camera
Wednesday, August 27, 2008 - by Daniel A. Begun
Creating 3D maps and worlds can be extremely labor intensive and time consuming. And ultimately the final result might not survive the close scrutiny of those expecting real-world emulations. A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera--one with flash and one without.
For a high-level description of the technique, here is the abstract from a presentation given about it during the "Perception & Hallucination" session from SIGGRAPH earlier this month:
"A Perceptually Validated Model for Surface Depth Hallucination
Capturing depth to represent detailed surface geometry normally requires expensive, specialized equipment and/or collection of a large amount of data. By trading accuracy for ease of capture, the authors of this paper aim to recover surfaces that can be plausibly relit and viewed from any angle under any lighting. This multiscale shape-from-shading method takes diffuse-lit and flash-lit image pairs, and produces an albedo map and textured height field. Using two lighting conditions enables subtraction of one from the other to estimate albedo. Experimental validation shows that the method works for a broad range of textured surfaces, and users are frequently unable to identify the results as synthetic in a randomized presentation."
[Image credit: Maya Skies]
First, an image of a surface is captured without flash. Portions of the surface that are higher appear brighter, and portions that are deeper appear darker. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine whether a brightness difference is a function of depth or color. By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field):
"Software then compares the brightness of every matching pair of pixels in the two images and calculates how much of a pixel's brightness is down to its position, and how much is due to its colour.
That information is used to produce a realistic rendering of a surface's texture. By altering the direction of illumination on the virtual surface the system can generate realistic shadow effects."
This technique is already being utilized to capture 3D textures of the surfaces of Mayan ruins. The rendered images are being incorporated into the "Maya Skies" project, which the Chabot Space & Science Center says is a "bi-lingual full-dome digital planetarium show featuring the scientific achievements, and the cosmology, of the Maya." The show is scheduled to start showing in select planetariums in the summer of 2009.
[Image credit: NewScientist.com]
The technique is still in development. For instance, one aspect that researchers are still working on is how to capture an image that incorporates more than one surface field, such as vines growing up a brick wall. As the technique extracts a height field, it is not possible to "represent the two separate distinct bits of geometry," according to researcher Mashhuda Glencross.
Preliminary tests show that people could not tell the difference between images captured using this technique and images captured using the more expensive and time-consuming approach of laser scanning. And while the technique is currently being used to capture 3D surfaces of real-world objects, it is possible that aspects of it can be incorporated into easier, quicker, and less expensive ways to generate 3D surfaces and textures for virtual worlds, such as games.
Let's see who's up for implementing it in Blender...