Thursday, March 29, 2012
Here is my progress report, a couple of weeks into the project:
I've been doing a lot of reading on framebuffer objects (FBOs). I found a nice tutorial about shadow mapping that uses FBOs and explains how they are constructed, written to, and read from. Although shadow mapping is not what I am doing for this project, there are several technical similarities. You can read the tutorial here: http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html .
Anyway, I now have a better sense of how I'll implement screen space reflections.
How to use FBOs:
1. Give each object a Material. This would include diffuse color, specular color, specular intensity, transparency, reflectivity, refractivity, and maybe more.
2. Create an FBO that stores depth and color information.
3. The first render pass will write to the FBO. This will use my regular material shader but with reflections turned off (set through a uniform buffer object; a sketch of this toggle follows the list). I might give the two passes separate shaders, since the lighting computations are redundant the second time.
4. Render a second time but use color texture data from the FBO to determine reflected pixel colors.
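To make the toggle idea concrete, here is a minimal GLSL sketch. Everything in it (the PassSettings block, the shadeMaterial and shadeReflection placeholders, the reflectivity uniform) is an illustrative assumption of mine, not the project's actual shader:

#version 330 core

// Toggle shared by both passes: false when rendering into the FBO,
// true for the second, on-screen pass.
layout(std140) uniform PassSettings {
    bool reflectionsEnabled;
};

uniform float reflectivity;   // per-material constant from step 1

out vec4 outColor;

// Placeholders for the real lighting and screen-space lookup.
vec3 shadeMaterial()   { return vec3(0.5); }
vec3 shadeReflection() { return vec3(0.0); }

void main() {
    vec3 color = shadeMaterial();
    if (reflectionsEnabled) {
        color = mix(color, shadeReflection(), reflectivity);
    }
    outColor = vec4(color, 1.0);
}

Splitting into two dedicated shaders would just mean compiling a variant of this with the branch removed.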
How to calculate reflections in a shader:
1. Reflect the view vector off of the fragment's surface normal.
2. March the reflected ray across the screen at roughly one-pixel intervals until it reaches the edge of the window or collides with an object (explained in step 4).
3. Convert the ray's position to screen space by dividing the clip-space value by its w component, then scaling by 0.5 and shifting by 0.5 (to get screen-space coordinates for texture access).
4. If the sampled texture depth value falls between the old and the new ray depth values, there has been an intersection. Take the color at that position from the FBO's color texture and apply it to the original fragment.
5. Mix the reflected color value with the existing color value based on the object's reflectivity constant.
This is the high-level breakdown of how I will implement this. I'm sure there is something I'm missing or have wrong, but as of now it seems like a pretty good approach that shouldn't take too long to turn into an initial working version. A rough shader sketch of the second pass is below.
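This sketch ties steps 1 through 5 together in a fragment shader. All names (sceneColor, sceneDepth, projection, screenSize, reflectivity) and the step size are illustrative assumptions, and it marches with a fixed view-space step rather than strictly one pixel at a time:

#version 330 core

uniform sampler2D sceneColor;   // color attachment written by the first pass
uniform sampler2D sceneDepth;   // depth attachment written by the first pass
uniform mat4 projection;        // view space -> clip space
uniform vec2 screenSize;        // window size in pixels
uniform float reflectivity;     // per-material constant

in vec3 fragPos;     // fragment position in view space
in vec3 fragNormal;  // surface normal in view space
out vec4 outColor;

// Perspective divide, then remap from [-1, 1] to [0, 1] so that xy are
// texture coordinates and z is comparable to the stored depth.
vec3 toScreen(vec3 p) {
    vec4 clip = projection * vec4(p, 1.0);
    return clip.xyz / clip.w * 0.5 + 0.5;
}

void main() {
    // Reuse the first pass's shading instead of relighting.
    vec3 baseColor = texture(sceneColor, gl_FragCoord.xy / screenSize).rgb;

    // Step 1: reflect the view vector about the surface normal
    // (the camera sits at the origin in view space).
    vec3 rayDir = reflect(normalize(fragPos), normalize(fragNormal));

    vec3 rayPos = fragPos;
    float stepSize = 0.05;              // arbitrary tuning constant
    float prevDepth = toScreen(rayPos).z;
    vec3 reflected = baseColor;         // fallback if nothing is hit

    // Step 2: march the ray until it leaves the window or hits something.
    for (int i = 0; i < 128; ++i) {
        rayPos += rayDir * stepSize;
        vec3 screen = toScreen(rayPos);   // step 3

        if (any(lessThan(screen.xy, vec2(0.0))) ||
            any(greaterThan(screen.xy, vec2(1.0)))) break;

        // Step 4: the stored depth lying between the previous and current
        // ray depths counts as an intersection (self-hit bias omitted).
        float stored = texture(sceneDepth, screen.xy).r;
        if (stored >= min(prevDepth, screen.z) &&
            stored <= max(prevDepth, screen.z)) {
            reflected = texture(sceneColor, screen.xy).rgb;
            break;
        }
        prevDepth = screen.z;
    }

    // Step 5: blend by the material's reflectivity constant.
    outColor = vec4(mix(baseColor, reflected, reflectivity), 1.0);
}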
Wednesday, March 14, 2012
Description / Proposal
Improvements in GPU performance have made it possible to create visual effects that were formerly reserved for offline renderers. Two such developments that interest me are reflections and refractions. Intuitively, these effects are not particularly complicated: reflection involves bouncing a ray about an object's surface normal, and refraction involves changing the angle of a ray based on the refractive indices of the two media it passes between. Unfortunately, these visual effects can be very difficult to compute efficiently because they require many intersection tests for multiple rays. As a result, there has been a great deal of recent research into real-time solutions. As I discuss such techniques, I will refer only to reflections, because refractions are sufficiently similar.
One of the earliest and most common techniques for simulating reflections is to place a reflective object at the center of a cube map. A cube map is a six-sided texture that, for conceptual purposes, is infinitely large. The fragment shader simply takes the eye vector and reflects it off of the fragment's surface normal. Next, it finds the texture coordinate where the reflected ray intersects the cube map and draws that color onto the fragment. This approach creates a mostly realistic visual effect, but it cannot reflect arbitrary objects in a dynamic scene. More info on reflection mapping here: http://en.wikipedia.org/wiki/Reflection_mapping
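The whole cube-map technique fits in a few lines of GLSL. This is a sketch with placeholder uniform and varying names of my own choosing:

#version 330 core

uniform samplerCube envMap;   // the six-sided environment texture
uniform vec3 eyePos;          // camera position in world space

in vec3 fragPos;     // fragment position in world space
in vec3 fragNormal;  // surface normal in world space
out vec4 outColor;

void main() {
    vec3 eyeDir = normalize(fragPos - eyePos);        // camera -> fragment
    vec3 r = reflect(eyeDir, normalize(fragNormal));  // bounce about the normal
    outColor = texture(envMap, r);                    // cube maps are indexed by direction
}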
Another interesting approach uses billboard impostors to simulate reflected geometry. This technique involves projecting an object onto a texture and then intersecting reflected rays with that texture, akin to what we do with the cube map. This approach has obvious speed limitations, especially for scenes with numerous objects. More info on billboard impostors here: http://graphicsrunner.blogspot.com/2008/04/reflections-with-billboard-impostors.html
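Here is a sketch of how that ray/impostor intersection might look in GLSL, assuming a single impostor quad described by a center, a 2D basis, and half-extents. All of these names are illustrative, and the linked post's actual implementation may differ:

#version 330 core

uniform sampler2D impostorTex;   // reflected object projected onto a quad
uniform vec3 impostorCenter;     // quad center, view space
uniform vec3 impostorNormal;     // unit plane normal
uniform vec3 impostorRight;      // unit tangent along the quad's width
uniform vec3 impostorUp;         // unit tangent along the quad's height
uniform vec2 impostorHalfSize;   // quad half-extents

in vec3 fragPos;     // fragment position, view space
in vec3 fragNormal;  // surface normal, view space
out vec4 outColor;

void main() {
    vec3 r = reflect(normalize(fragPos), normalize(fragNormal));
    vec3 color = vec3(0.2);                      // fallback when nothing is hit

    // Intersect fragPos + t*r with the impostor's plane.
    float denom = dot(r, impostorNormal);
    if (abs(denom) > 1e-4) {
        float t = dot(impostorCenter - fragPos, impostorNormal) / denom;
        if (t > 0.0) {
            // Express the hit point in the quad's 2D basis and remap to [0, 1].
            vec3 local = fragPos + t * r - impostorCenter;
            vec2 uv = vec2(dot(local, impostorRight), dot(local, impostorUp))
                      / (2.0 * impostorHalfSize) + 0.5;
            if (all(greaterThanEqual(uv, vec2(0.0))) &&
                all(lessThanEqual(uv, vec2(1.0)))) {
                vec4 sampled = texture(impostorTex, uv);
                color = mix(color, sampled.rgb, sampled.a);  // alpha masks the object
            }
        }
    }
    outColor = vec4(color, 1.0);
}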
My goal for this project is to simulate reflections and refractions between many different objects that move and deform. One novel approach that has minimal dependence on scene complexity and detail is screen space reflections (SSR). Although there are a few ways to achieve this effect, the most understandable is to do two separate render passes. First, render the scene with no reflections. Second, use depth and color information from the previous pass to determine the reflected colors. Interestingly, the second pass is accomplished with ray-tracing techniques. We convert the view-space reflection vector to screen space and advance the ray incrementally. At each step, we compare the ray's screen-space depth with the depth stored by the first pass. If the stored depth falls between the ray's previous and current depths, the ray has intersected geometry, and we take the color at that position and apply it to the original reflective fragment. One drawback to this technique is that it cannot reflect geometry that is not visible on the screen.
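For concreteness, that conversion to screen space is just a perspective divide followed by a remap. As a GLSL helper (the function name is mine, not from any source) it might look like:

// Perspective divide, then remap NDC's [-1, 1] to [0, 1]; xy become
// texture coordinates and z matches the stored depth range.
vec3 clipToScreen(vec4 clipPos) {
    vec3 ndc = clipPos.xyz / clipPos.w;
    return ndc * 0.5 + 0.5;
}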
I'm excited to start work on this project because it's applicable to many different graphics programs I see myself working on in the future. As I learn more about SSR I will update this post to fix any inaccuracies.