>>can you give a more detailed explanation of the steps in your algorithm
>>(what you render with the respective shaders and store into textures) ?

Well, I set up an FBO with its camera at the light's point of view, then render
the scene at a low resolution (let's say 32x32), and my fragment shader writes
the world coordinates of the scene (already interpolated by the vertex shader)
into a texture. These coordinates aren't multiplied by any matrix.
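A minimal sketch of that first pass might look like this (the varying name `worldPos` is my own, and it assumes the incoming vertices are already in world space, with a float texture attached to the FBO so the coordinates aren't clamped):

```glsl
// Pass 1 vertex shader (sketch): pass the world-space position through.
varying vec3 worldPos;

void main()
{
    // Assuming gl_Vertex is already in world coordinates,
    // so no extra matrix is applied to it.
    worldPos    = gl_Vertex.xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Pass 1 fragment shader (sketch): store the interpolated world position.
varying vec3 worldPos;

void main()
{
    // Needs a floating-point render target so the
    // coordinates survive outside [0,1].
    gl_FragColor = vec4(worldPos, 1.0);
}
```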
Then I pass this texture to another shader (the one of the
PolygonForeground) and I want to use the coordinates as if they were the
positions of 32x32 light sources... but I also need the "current"
modelview matrix of the scene view from the mgr->camera to multiply the
coordinates, put them in eye coordinates, and then calculate the
lightDirection as normalize(lightPos - ecPos).
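That lookup-and-transform step could be sketched like this (the uniform names `lightPosTex` and `sceneModelView` are my own; `sceneModelView` would be the modelview matrix of the mgr->camera view passed in explicitly, since the PolygonForeground's own gl_ModelViewMatrix is not the scene's):

```glsl
// Pass 2 fragment shader (sketch): treat each texel of the 32x32
// texture as the world-space position of one light source.
uniform sampler2D lightPosTex;     // world coordinates from pass 1
uniform mat4      sceneModelView;  // modelview of the scene camera

varying vec3 ecPos;                // eye-space position of this fragment

vec3 lightDirFromTexel(vec2 texCoord)
{
    vec4 worldLightPos = texture2D(lightPosTex, texCoord);
    // Bring the stored world coordinates into eye coordinates.
    vec4 lightPos = sceneModelView * worldLightPos;
    return normalize(lightPos.xyz - ecPos);
}
```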
Maybe I can simply abandon the PolygonForeground and directly use the
texture, changing the SHLChunk of all objects in the scene...
It's simpler to use a PolygonForeground, but the shader in it has a
different modelview matrix...

Erik
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
