Hello all

You should use LispSM or PSSM shadows: even for terrain this gives nice
looking shadows (with PSSM you can limit the max far distance; use, say,
250m and you will get nice shadows). Once we have the latest idea
implemented the shadow quality will surely get much better, but the
performance may be very bad. So I am looking for a high-performance and
robust technique.

I don't really want to implement this with osgPPU, because of the problem
Robert mentioned (see the submission discussion). Not that osgPPU is bad,
but it would be better to integrate this nicely into osg and osgShadow.
The history buffer could have a high impact not only on osgShadow and its
quality, but also on other shaders.

Then, for the default shaders in OpenSceneGraph, maybe we should think
about redesigning them. Currently we have shaders in osgShadow and so on,
but each of them re-writes the very basics. Maybe we could think about
GLSL shader modules: code modules we can put together. This would be
great. I have some ideas how this could be solved, but I have to think
more about this idea.

/adrian

2009/2/1 Simon Loic <[email protected]>

> It would be very interesting to see the resulting shadows of this method
> in osg. I'm currently using the basic shadow technique and the rendering
> is so aliased! Maybe I'm not using it in the right way.
>
>
> On Fri, Jan 30, 2009 at 4:44 PM, Art Tevs <[email protected]> wrote:
>
>> Hi Adrian,
>>
>> for all 2D effects there already exists a pipeline with a couple of
>> examples (Gaussian blur, depth of field, HDR and so on); however, you may
>> have heard about it already. Take a look at osgPPU.
>>
>> I took a quick look at the paper and saw that the main method is just to
>> combine N previous images in some sense to achieve nice results. As you
>> said, using the history buffer. Using pure osg components, one could for
>> example set up N cameras with corresponding textures and switch them
>> framewise. The current rendering camera then uses the input of the other
>> N-1 cameras to produce its new output. Using osgPPU, I would suggest
>> implementing a new class UnitHistoryBuffer, which would do this trick by
>> collecting the inputs in some buffer, e.g. a 3D texture or a 2D texture
>> array. The output of this unit can then be combined with the current
>> rendering camera as described in the paper. This shouldn't be a big
>> trick; yeah, maybe I can do this for the upcoming v0.4 release ;)
>>
>> I wonder if one could use this technique for some screen-space effects,
>> not only for motion blur, but also for optical flow detection or
>> something similar? Maybe for something like a hybrid ray tracing
>> approach, where some kind of heuristic function could use this
>> information to retrace only the necessary rays.
>>
>> cheers,
>> art
>>
>> ------------------
>> Read this topic online here:
>> http://osgforum.tevs.eu/viewtopic.php?p=5545#5545
>>
>>
>>
>>
>>
>> _______________________________________________
>> osg-users mailing list
>> [email protected]
>> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>>
>
>
>
> --
> Loïc Simon
>


-- 
********************************************
Adrian Egli
