Hi Allen,

Just for next time: if you have questions regarding osgPPU, please take a look 
at the documentation first. The osgPPU project can be found at 
http://projects.tevs.eu/osgppu and the doxygen documentation is at 
http://www.tevs.eu/doc/osgPPU/

osgPPU is pretty well documented, or at least commented in the code. I tried to 
write a lot of comments for beginners, so take a look at them.

Now to your questions:
 

allensaucier wrote:
> 
> 1. what line of code actually performs the glow effect?
>     a. I am a novice with shaders and osgPPU
>     b. I found how to increase the intensity of the glow within the shader 
> source code; it is the * 2 effect on glowColor
>     c. I can not find the line of code that actually performs the glow
> 

There is no single line or function that does it; it is a matter of the 
algorithm behind the effect. For the glow effect, the object which has to glow 
is rendered separately into a buffer. This buffer is then blurred (with shaders) 
and added onto the main view, also in a shader. So you first have to understand 
how the algorithm works, and then you will find the lines of code you actually 
need.
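
To make the data flow a bit more concrete, here is a rough sketch of that chain 
in code. The names are placeholders and the shaders are omitted; the real code 
is in the glow.cpp example.

#include <osgPPU/Unit.h>
#include <osgPPU/UnitInOut.h>
#include <osgPPU/UnitOut.h>

// Sketch of the glow chain. 'viewInput' and 'glowInput' are units whose output
// textures are the main view and the separately rendered glow objects (how
// those textures get into the pipeline is covered in questions 4-6 below).
void buildGlowChain(osgPPU::Unit* viewInput, osgPPU::Unit* glowInput)
{
    // 1. Blur the buffer which contains only the glowing objects.
    //    (glow.cpp does this with two shader passes, horizontal and vertical;
    //     the blur shaders themselves are omitted here.)
    osgPPU::UnitInOut* blur = new osgPPU::UnitInOut();
    glowInput->addChild(blur);

    // 2. Add the blurred glow onto the main view in a shader.
    //    The unit has two parents, hence it receives two input textures.
    osgPPU::UnitInOut* combine = new osgPPU::UnitInOut();
    viewInput->addChild(combine);
    blur->addChild(combine);

    // 3. Render the combined result to the frame buffer.
    osgPPU::UnitOut* output = new osgPPU::UnitOut();
    combine->addChild(output);
}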


> 
> 2.  Within the shader source code, the variables view and glow confuse me.  
> Nothing within glow.cpp refers directly to either of these variables and so I 
> do not know 
>         1) how the glow effect actually happens.
>         2) how the different input to shaderSrc is actually differentiated 
> between view and glow.
>       Would someone please tell me? :-*
> 
>      I have noticed that the resultShader receives input from unitCam1 and 
> from blury which gives the "view" and "glow" outputs, though I do not fully 
> understand how.
> 

See above: "view" is the texture with the main view, "glow" is the texture with 
the blurred glow objects. They are simply sampler uniforms of the result shader.
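
Roughly, the result shader and its inputs look like this. This is only a 
sketch: I am writing the shader inline and assuming the ShaderAttribute class 
and the setInputToUniform() call as they are used in the examples, so 
double-check the exact signatures in the doxygen documentation.

#include <osg/Shader>
#include <osgPPU/Unit.h>
#include <osgPPU/UnitInOut.h>
#include <osgPPU/ShaderAttribute.h>

// Fragment shader of the combining unit: "view" and "glow" are nothing special,
// they are just sampler uniforms which the C++ side fills with textures.
static const char* combineFragSrc =
    "uniform sampler2D view;  // main scene rendering\n"
    "uniform sampler2D glow;  // blurred glow buffer\n"
    "void main()\n"
    "{\n"
    "    vec4 viewColor = texture2D(view, gl_TexCoord[0].st);\n"
    "    vec4 glowColor = texture2D(glow, gl_TexCoord[0].st);\n"
    "    gl_FragColor = viewColor + glowColor * 2.0; // the '* 2' you found\n"
    "}\n";

void setupCombineShader(osgPPU::UnitInOut* combine,
                        osgPPU::Unit* viewBypass,   // its output = main view texture
                        osgPPU::Unit* blur)         // its output = blurred glow texture
{
    osgPPU::ShaderAttribute* shader = new osgPPU::ShaderAttribute();
    shader->addShader(new osg::Shader(osg::Shader::FRAGMENT, combineFragSrc));
    combine->getOrCreateStateSet()->setAttributeAndModes(shader);

    // Map each parent's output texture to the uniform name used in the shader.
    combine->setInputToUniform(viewBypass, "view", true);
    combine->setInputToUniform(blur, "glow", true);
}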


> 
> 3. I have continued to notice the use of processor as the beginning of the 
> pipeline.  When adding a child to a pipeline element, does that automatically 
> send output to the child?
> 

If your child is an osgPPU::Unit, then yes, the output of the parent is more or 
less automatically passed to the child as input. Think of the osgPPU pipeline 
as a graph whose nodes are your processing units: each one does something with 
its input and produces output for the next nodes. The Processor is the root 
node of the pipeline and always has to be used. It is needed so that osgPPU's 
own node visitors can run and cooperate flawlessly with OpenSceneGraph.
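
A minimal sketch of what such a pipeline skeleton looks like (placeholder 
names, details like viewport setup omitted):

#include <osg/Camera>
#include <osg/Group>
#include <osgPPU/Processor.h>
#include <osgPPU/UnitCameraAttachmentBypass.h>
#include <osgPPU/UnitInOut.h>
#include <osgPPU/UnitOut.h>

// Minimal pipeline skeleton: the Processor is the root, units are chained with
// addChild(), and every child uses the output texture of its parent as input.
void attachMinimalPipeline(osg::Group* sceneRoot, osg::Camera* camera)
{
    osgPPU::Processor* processor = new osgPPU::Processor();
    processor->setCamera(camera);   // camera is assumed to render into an FBO

    // Entry point: bring the camera's color attachment into the pipeline.
    osgPPU::UnitCameraAttachmentBypass* bypass = new osgPPU::UnitCameraAttachmentBypass();
    bypass->setBufferComponent(osg::Camera::COLOR_BUFFER);
    processor->addChild(bypass);

    // Some processing unit; its input is automatically the bypass' output.
    osgPPU::UnitInOut* effect = new osgPPU::UnitInOut();
    bypass->addChild(effect);

    // Final unit rendering the result to the frame buffer.
    osgPPU::UnitOut* output = new osgPPU::UnitOut();
    effect->addChild(output);

    // The Processor is an ordinary node, so it must be part of the scene graph;
    // only then are osgPPU's node visitors run during the OSG traversals.
    sceneRoot->addChild(processor);
}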


> 
> 4. I have noticed that there are 2 camera inputs into the pipeline. One is 
> the main camera directly attached to processor.  The other is a new camera, 
> slaveCamera, which is attached to unitSlaveCamera.  What stops the main 
> camera's output from going into the unitSlaveCamera?
> 

The main camera renders the whole scene; the slave camera renders only the 
objects which need to glow. The main camera's output enters the pipeline only 
through the bypass unit placed directly under the Processor, while the 
UnitCamera wraps the slave camera, so only the slave camera's attachments are 
routed through it (see questions 5 and 6).
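
One possible way to set that up, sketched with plain OSG calls (not necessarily 
exactly how glow.cpp does it), is to give the slave camera its own subgraph 
with only the glow objects and let it render into a texture:

#include <osg/Camera>
#include <osg/Texture2D>
#include <osgViewer/Viewer>

// Sketch: a slave camera which renders only the glowing objects into a texture.
osg::Camera* createGlowCamera(osgViewer::Viewer& viewer, osg::Node* glowObjects,
                              int width, int height)
{
    // Texture which receives the rendering of the glow objects.
    osg::Texture2D* glowTexture = new osg::Texture2D();
    glowTexture->setTextureSize(width, height);
    glowTexture->setInternalFormat(GL_RGBA);

    osg::Camera* glowCamera = new osg::Camera();
    glowCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    glowCamera->setRenderOrder(osg::Camera::PRE_RENDER);
    glowCamera->setViewport(0, 0, width, height);
    glowCamera->setClearColor(osg::Vec4(0.0f, 0.0f, 0.0f, 0.0f)); // black background
    glowCamera->attach(osg::Camera::COLOR_BUFFER, glowTexture);

    // Only the glow objects are children of this camera, so nothing else ends
    // up in its buffer.
    glowCamera->addChild(glowObjects);

    // Added as a slave it follows the main camera's view, but it does NOT use
    // the master's scene data (second argument = false).
    viewer.addSlave(glowCamera, false);

    return glowCamera;
}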


> 
> 5. what does osgPPU::UnitCameraAttachmentBypass mean?
> 6. what does osgPPU::UnitCamera mean?
> 

UnitCameraAttachmentBypass bypasses a camera's attachments (i.e. its render 
textures) into the osgPPU pipeline. As I said before, every unit is a node 
computing outputs from its inputs, and inputs are always textures. So in order 
to bring a texture from an OpenSceneGraph camera into the pipeline, we place 
this unit either directly under the Processor or under a UnitCamera. UnitCamera 
bypasses the OSG camera itself into the pipeline; it has no direct output, and 
hence a UnitCameraAttachmentBypass is required to take the camera's attachments 
and put them into the pipeline. Units placed under the attachment bypass will 
use the camera's texture as input.
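
In code the pairing looks roughly like this (again only a sketch; it produces 
the two input units used in the sketch for question 1):

#include <osg/Camera>
#include <osgPPU/Processor.h>
#include <osgPPU/Unit.h>
#include <osgPPU/UnitCamera.h>
#include <osgPPU/UnitCameraAttachmentBypass.h>

// Sketch: bringing the color textures of both cameras into the pipeline.
void attachCameraInputs(osgPPU::Processor* processor, osg::Camera* glowCamera,
                        osgPPU::Unit*& viewInput, osgPPU::Unit*& glowInput)
{
    // Main camera (the one set on the Processor): the attachment bypass sits
    // directly under the Processor and outputs that camera's color texture.
    osgPPU::UnitCameraAttachmentBypass* viewBypass = new osgPPU::UnitCameraAttachmentBypass();
    viewBypass->setBufferComponent(osg::Camera::COLOR_BUFFER);
    processor->addChild(viewBypass);

    // Slave camera: UnitCamera only tells the pipeline which camera we mean,
    // it has no output of its own ...
    osgPPU::UnitCamera* unitGlowCamera = new osgPPU::UnitCamera();
    unitGlowCamera->setCamera(glowCamera);
    processor->addChild(unitGlowCamera);

    // ... the attachment bypass below it outputs that camera's color texture,
    // so every unit added under it gets this texture as input.
    osgPPU::UnitCameraAttachmentBypass* glowBypass = new osgPPU::UnitCameraAttachmentBypass();
    glowBypass->setBufferComponent(osg::Camera::COLOR_BUFFER);
    unitGlowCamera->addChild(glowBypass);

    // These two units are what the blur/combine chain from question 1 uses as
    // 'viewInput' and 'glowInput'.
    viewInput = viewBypass;
    glowInput = glowBypass;
}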


So, I hope I was able to clarify this. Please take a look into the 
documentation. Based on the examples and the reference API documentation 
(doxygen) you should be able to understand what every unit is good for.

regards,
art
