Improvements in GPU performance have
made it possible to create visual effects that were formerly reserved
for offline renderers. Two such effects that interest me are reflections and refractions. Intuitively, these effects are not particularly complicated: reflection mirrors a ray about an object's surface normal, and refraction bends a ray according to the refractive indices of the two media it passes between. Unfortunately, these effects can be very difficult to compute efficiently because they require many intersection tests across multiple rays. As a result, there has been a great deal of recent research into real-time solutions. As I discuss these techniques, I will refer only to reflections, since refractions work similarly.
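Before getting into the real-time techniques, here is a minimal C++ sketch of the vector math above, following the conventions of GLSL's reflect() and refract(). The Vec3 type, the operators, and the test values are my own illustrative assumptions, not part of any particular renderer:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror incident direction I about unit normal N (both normalized,
// I pointing toward the surface), matching GLSL's reflect().
Vec3 reflect(Vec3 I, Vec3 N) { return I - 2.0f * dot(N, I) * N; }

// Bend I according to Snell's law; eta = n1 / n2 is the ratio of the
// refractive indices. Returns a zero vector on total internal
// reflection, matching GLSL's refract().
Vec3 refract(Vec3 I, Vec3 N, float eta) {
    float cosi = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return {0.0f, 0.0f, 0.0f};  // total internal reflection
    return eta * I - (eta * cosi + std::sqrt(k)) * N;
}

int main() {
    Vec3 I = {0.7071f, -0.7071f, 0.0f};   // 45-degree incoming ray
    Vec3 N = {0.0f, 1.0f, 0.0f};          // upward-facing surface
    Vec3 R = reflect(I, N);
    Vec3 T = refract(I, N, 1.0f / 1.33f); // e.g. air into water
    std::printf("reflect: %f %f %f\n", R.x, R.y, R.z);
    std::printf("refract: %f %f %f\n", T.x, T.y, T.z);
}
```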
One of the earliest and most common techniques for simulating reflections is to put a reflective object in the center of a cube map. A cube map is a six-sided texture that, for conceptual purposes, is infinitely large. The fragment shader simply takes the eye vector and reflects it about the fragment's surface normal. Next, it finds the texture coordinate where the reflected ray intersects the cube map and draws that color onto the fragment. This approach creates a mostly realistic visual effect, but it cannot reflect arbitrary objects in a dynamic scene. More info on reflection mapping here: http://en.wikipedia.org/wiki/Reflection_mapping
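The direction-to-texel mapping is normally hidden inside the cube-map sampler, but it is worth seeing spelled out. Below is a C++ sketch of that lookup; cubeMapLookup is a hypothetical helper, and the face ordering and uv conventions follow the common +X/-X/+Y/-Y/+Z/-Z layout used by OpenGL, which may differ in other APIs:

```cpp
#include <cmath>
#include <cstdio>

// Map a direction vector to a cube-map face index and (u, v) in [0, 1].
// This is a CPU sketch of the lookup a fragment shader performs
// implicitly when it samples a cube map with the reflected eye vector.
void cubeMapLookup(float x, float y, float z, int* face, float* u, float* v) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, uc, vc;
    if (ax >= ay && ax >= az) {        // X axis is dominant
        *face = x > 0 ? 0 : 1; ma = ax; uc = x > 0 ? -z : z; vc = -y;
    } else if (ay >= az) {             // Y axis is dominant
        *face = y > 0 ? 2 : 3; ma = ay; uc = x; vc = y > 0 ? z : -z;
    } else {                           // Z axis is dominant
        *face = z > 0 ? 4 : 5; ma = az; uc = z > 0 ? x : -x; vc = -y;
    }
    *u = 0.5f * (uc / ma + 1.0f);      // remap [-1, 1] to [0, 1]
    *v = 0.5f * (vc / ma + 1.0f);
}

int main() {
    int face; float u, v;
    cubeMapLookup(0.2f, 0.9f, -0.1f, &face, &u, &v);
    std::printf("face %d, uv (%f, %f)\n", face, u, v);
}
```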
Another interesting approach uses billboard impostors to simulate reflected geometry. This technique projects an object onto a texture and intersects reflected rays with that textured quad, akin to the cube-map lookup above; a sketch of that intersection appears after the link. This approach has obvious speed limitations, especially for scenes with numerous objects. More info on billboard impostors here:
http://graphicsrunner.blogspot.com/2008/04/reflections-with-billboard-impostors.html
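To make the ray-versus-impostor step concrete, here is a C++ sketch. intersectImpostor is a hypothetical helper, and the quad parameterization (center, normal, right/up axes, half-extents) is an assumption I'm making for illustration; the linked post's actual implementation may differ:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin O, direction D) with an impostor quad defined
// by its center C, unit normal N, and half-extents along its unit
// right/up axes. On a hit, (u, v) in [0, 1] index into the impostor
// texture, which the caller then samples for the reflected color.
bool intersectImpostor(Vec3 O, Vec3 D, Vec3 C, Vec3 N,
                       Vec3 right, Vec3 up, float halfW, float halfH,
                       float* u, float* v) {
    float denom = dot(N, D);
    if (std::fabs(denom) < 1e-6f) return false;      // ray parallel to quad
    float t = dot(N, C - O) / denom;
    if (t < 0.0f) return false;                      // quad behind the ray
    Vec3 P = O + t * D;                              // hit point on the plane
    float s = dot(P - C, right), r = dot(P - C, up); // local quad coordinates
    if (std::fabs(s) > halfW || std::fabs(r) > halfH) return false;
    *u = 0.5f * (s / halfW + 1.0f);                  // remap to [0, 1]
    *v = 0.5f * (r / halfH + 1.0f);
    return true;
}
```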
My goal for this project is to simulate
reflections and refractions between many different objects that move
and deform. One novel approach that has minimal dependence on scene
complexity and detail is screen space reflections (SSR). Although
there are a few ways to achieve this effect, the easiest to understand uses two separate render passes. First, render the scene with no reflections. Second, use the depth and color information from the first pass to determine the reflected colors. Interestingly, the second pass is accomplished with ray-tracing techniques. We convert the view-space reflection vector to screen space and advance the ray incrementally. At each step, we compare the ray's screen-space depth with the depth stored during the first pass. Once the ray's depth exceeds the stored depth, the ray has passed behind visible geometry, so we treat that as a hit and apply the color at that position to the original reflective fragment. One drawback to this technique is that it cannot reflect geometry that is not visible on the screen.
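Here is a C++ sketch of that ray march. marchScreenSpaceRay is a hypothetical function: it assumes a normalized depth buffer in [0, 1] where larger values are farther away, and it marches in fixed one-pixel steps with linearly stepped depth, whereas production implementations use perspective-correct depth interpolation and a refinement step around the hit:

```cpp
// A minimal CPU sketch of the screen-space ray march described above.
// depthBuffer holds the per-pixel depth written during the first pass.
struct Hit { bool found; int x, y; };

Hit marchScreenSpaceRay(const float* depthBuffer, int width, int height,
                        float startX, float startY, float startDepth,
                        float dirX, float dirY, float dirDepth,
                        int maxSteps) {
    float x = startX, y = startY, d = startDepth;
    for (int i = 0; i < maxSteps; ++i) {
        x += dirX; y += dirY; d += dirDepth;   // advance one screen-space step
        int px = (int)x, py = (int)y;
        if (px < 0 || px >= width || py < 0 || py >= height || d > 1.0f)
            return {false, 0, 0};              // ray left the screen: no hit
        float sceneDepth = depthBuffer[py * width + px];
        if (d >= sceneDepth)                   // ray passed behind geometry
            return {true, px, py};             // sample the color buffer here
    }
    return {false, 0, 0};
}
```

A real shader would also apply a thickness threshold so the ray does not register hits against surfaces it merely passes far behind, and would fall back to another source (such as a cube map) when the march finds nothing.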
I'm excited to start work on this
project because it's applicable to many different graphics programs I see
myself working on in the future. As I learn more about SSR I will
update this post to fix any inaccuracies.