Hi Adrian,

for all 2D post-processing effects there already exists a pipeline with a couple 
of examples (Gaussian blur, depth of field, HDR and so on), though maybe you 
have heard about it already. Take a look at osgPPU.

I took a quick look into the paper and saw that the main method is essentially 
to combine the N previous frames in some way to achieve nice results; as you 
said, using a history buffer. Using pure osg components, one could for example 
set up N cameras with corresponding textures and switch between them frame-wise. 
The currently rendering camera then uses the outputs of the other N-1 cameras 
to produce its new image; a minimal sketch of that idea follows below.
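
Just to illustrate the camera-switching idea, here is a small sketch under my 
own assumptions (the HistoryRing name and its layout are made up for the 
example, not existing osg code):

// Ring of N RTT cameras, each writing into its own texture; each frame
// we advance an index so the main pass can sample the previous results.
#include <osg/Camera>
#include <osg/Texture2D>
#include <vector>

struct HistoryRing
{
    std::vector< osg::ref_ptr<osg::Camera> >    cameras;  // one RTT camera per slot
    std::vector< osg::ref_ptr<osg::Texture2D> > textures; // its color attachment
    unsigned int current;                                 // slot written this frame

    HistoryRing(unsigned int n, int w, int h) : current(0)
    {
        for (unsigned int i = 0; i < n; ++i)
        {
            osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
            tex->setTextureSize(w, h);
            tex->setInternalFormat(GL_RGBA);

            osg::ref_ptr<osg::Camera> cam = new osg::Camera;
            cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
            cam->attach(osg::Camera::COLOR_BUFFER, tex.get());
            cam->setNodeMask(0); // only the current slot's camera renders

            cameras.push_back(cam);
            textures.push_back(tex);
        }
        cameras[current]->setNodeMask(~0u);
    }

    // call once per frame, before rendering, to switch to the next slot
    void advance()
    {
        cameras[current]->setNodeMask(0);
        current = (current + 1) % cameras.size();
        cameras[current]->setNodeMask(~0u);
    }

    // texture holding the frame rendered 'age' frames ago (1 <= age < N)
    osg::Texture2D* history(unsigned int age) const
    {
        unsigned int n = (unsigned int)textures.size();
        return textures[(current + n - age) % n].get();
    }
};

The cameras would all share the scene subgraph; the main pass then binds the 
N-1 history textures returned by history() as its inputs.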
Using osgPPU, I would suggest implementing a new class UnitHistoryBuffer, which 
would do this trick by collecting the inputs in some buffer, e.g. a 3D texture 
or a 2D texture array. The output of this unit could then be combined with the 
current rendering camera in the way described in the paper; a rough sketch of 
such a combination pass follows below. This shouldn't be a big trick, yeah, 
maybe I can do this for the upcoming v0.4 release ;)
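
Nothing of this exists yet, so take the following only as a rough sketch of how 
the combination pass could look, assuming UnitHistoryBuffer stacks the last N 
frames into a 2D texture array (layer 0 = newest). All names here (uHistory, 
uNumFrames, setupCombinePass) are invented for the example, and the exponential 
weights are just a placeholder, not the formula from the paper:

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Uniform>

// fragment shader: weighted sum over the history layers
static const char* combineFrag =
    "#version 120\n"
    "#extension GL_EXT_texture_array : enable\n"
    "uniform sampler2DArray uHistory;   // stacked previous frames\n"
    "uniform int            uNumFrames; // how many layers are valid\n"
    "void main()\n"
    "{\n"
    "    vec4  sum  = vec4(0.0);\n"
    "    float wsum = 0.0;\n"
    "    for (int i = 0; i < uNumFrames; ++i)\n"
    "    {\n"
    "        float w = exp(-0.5 * float(i)); // newer frames weigh more\n"
    "        sum  += w * texture2DArray(uHistory, vec3(gl_TexCoord[0].xy, float(i)));\n"
    "        wsum += w;\n"
    "    }\n"
    "    gl_FragColor = sum / wsum;\n"
    "}\n";

// attach to the state set of whatever draws the final full-screen quad
void setupCombinePass(osg::StateSet* ss, int numFrames)
{
    osg::ref_ptr<osg::Program> prog = new osg::Program;
    prog->addShader(new osg::Shader(osg::Shader::FRAGMENT, combineFrag));
    ss->setAttributeAndModes(prog.get());
    ss->addUniform(new osg::Uniform("uHistory", 0));  // texture unit 0
    ss->addUniform(new osg::Uniform("uNumFrames", numFrames));
}

The history texture array would be bound to texture unit 0 of that quad's 
state set.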

I wonder if one could use this technique for other screen-space effects too, 
not only for motion blur, but also for optical flow detection or something 
similar? Maybe even for something like a hybrid ray tracing approach, where 
some kind of heuristic function could use this information to retrace only the 
necessary rays.

cheers,
art
