[osg-users] change vertex shader input dynamically

2008-10-06 Thread Fabian Bützow

Hi everyone,

I want to change a scale factor in a vertex program dynamically.
The scale factor is changed by the user via an input handler.

float scaleFactor = 1.0f;
ShaderInputHandler* input = new ShaderInputHandler(scaleFactor);
// add handler to view ...
geometryStateSet->addUniform(new Uniform("scaleFactor", scaleFactor));

I tried to add an update callback, but that didn't work.
Could you help me?
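
A minimal sketch of one way this could work (untested; ShaderInputHandler is your own class, so the details are guesses). The key point is that the handler should hold the osg::Uniform itself: the float above is copied into the uniform once at creation, so later changes to the float never reach the shader.

osg::ref_ptr<osg::Uniform> scale = new osg::Uniform("scaleFactor", 1.0f);
geometryStateSet->addUniform(scale.get());

class ShaderInputHandler : public osgGA::GUIEventHandler
{
public:
    ShaderInputHandler(osg::Uniform* u) : _scale(u) {}

    virtual bool handle(const osgGA::GUIEventAdapter& ea,
                        osgGA::GUIActionAdapter&)
    {
        if (ea.getEventType() != osgGA::GUIEventAdapter::KEYDOWN)
            return false;
        float f = 1.0f;
        _scale->get(f);                                  // current value
        if (ea.getKey() == '+') _scale->set(f + 0.1f);   // keys are made up
        if (ea.getKey() == '-') _scale->set(f - 0.1f);
        return false;
    }

private:
    osg::ref_ptr<osg::Uniform> _scale;
};

// add handler to the view as before:
// view->addEventHandler(new ShaderInputHandler(scale.get()));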
Cheers, Fabian.


Re: [osg-users] CompositeViewer, update traversal, prerender branch

2008-09-21 Thread Fabian Bützow

Hi Robert,
thanks for your help in this case, and thanks for all your other replies;
you have been a great source of motivation and advice.


However, my problem hasn't been completely solved yet. You were right:
with a single graphics context and window for all views, the textures
are now updated in each View. Cheers! The problem is that I want to
project the render result with 2 or 3 projectors onto an object, each
projector with an individual View. Is it possible to extend the one
graphics window over several screens/projectors? With that I could
disable the window decoration and fake two independent views. Otherwise I
probably have to use multiple graphics windows and share the context
(which I'd like to avoid, since you said that would cause even more problems).
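
A sketch of what I have in mind (untested; the sizes and placement are made up): one undecorated window stretched across both projector screens, with the two Views sharing the single GraphicsContext and splitting it via viewports:

osg::ref_ptr<osg::GraphicsContext::Traits> traits =
    new osg::GraphicsContext::Traits;
traits->x = 0;  traits->y = 0;
traits->width = 2048;  traits->height = 768;    // spans two 1024x768 projectors
traits->windowDecoration = false;
traits->doubleBuffer = true;

osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());

osgViewer::View* viewB = new osgViewer::View;
viewB->getCamera()->setGraphicsContext(gc.get());
viewB->getCamera()->setViewport(new osg::Viewport(0, 0, 1024, 768));      // left projector

osgViewer::View* viewC = new osgViewer::View;
viewC->getCamera()->setGraphicsContext(gc.get());
viewC->getCamera()->setViewport(new osg::Viewport(1024, 0, 1024, 768));   // right projector

compositeViewer->addView(viewB);
compositeViewer->addView(viewC);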


Cheers,
Fabian

PS: I'll try to summarise my (limited) knowledge of graphics
windows/contexts etc. Maybe others can contribute to that as well.
When you don't specify a graphics context for a View, osg generates a new
unique one for each View. A graphics context comprises basically
everything you need for standard rendering, buffers etc. The scene data
of the View is copied. Each graphics context is displayed in a new
graphics window. (One graphics context -> one window; multiple
Views -> multiple graphics contexts -> multiple graphics windows.)


An update traversal is done only once per scene graph(?), starting from
the View that is placed highest in the scene graph. After that, cull and
draw traversals are executed for each View.


[Prerender camera nodes under a View in the scene graph are rendered
once for each View; the results of this step are not shared among the
Views.]


Hmm, more assumptions than knowledge... I'd better stop ;)



Re: [osg-users] CompositeViewer, update traversal, prerender branch

2008-09-19 Thread Fabian Bützow

Ok, sorry Robert, now more specifically:

Hardware/software: WinXP, Opteron 2 GHz, 1 GB RAM, GeForce 8800 GS, OSG 2.4.

The prerender node has 5 camera nodes, with a textured rectangle attached
to each camera node. The cameras render to texture in a defined
RenderOrder. The textures of the rectangles are connected in a way that
the output of one RTT camera is the input of the next. The pointer to
the final texture is stored in a data class. The live-camera image is
retrieved via the CMU firewire API and updated in a StateAttribute::Callback
attached to the initial live-camera texture.


The Views are defined by the setUpViewInWindow() functions and added to
a CompositeViewer. The scene data of the overall View (A) is set to the
root node (that's the View that shows the updated texture); the scene
data of the other two Views (B, C) is set to subnodes in the scene
graph. The camera position of each View is set via the
getCamera()->setProjectionMatrix() / setViewMatrix() methods. Rectangles are
attached to the subnodes that should display the output of the prerender
step (they get the texture via the pointer in the data class), but the
textures don't get updated in the subviews B, C (they remain in their
initialised state). I call CompositeViewer->frame() in the render loop.
(No slave cameras or anything like that.)
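
For reference, a stripped-down sketch of that setup (root, subNodeB and the window coordinates are placeholders):

osgViewer::CompositeViewer viewer;

osgViewer::View* viewA = new osgViewer::View;   // overall view
viewA->setUpViewInWindow(50, 50, 640, 480);
viewA->setSceneData(root.get());                // contains the prerender branch

osgViewer::View* viewB = new osgViewer::View;
viewB->setUpViewInWindow(700, 50, 640, 480);
viewB->setSceneData(subNodeB.get());            // only the display rectangle

viewer.addView(viewA);
viewer.addView(viewB);

while (!viewer.done()) viewer.frame();

(A possible explanation, just a suspicion: each View only traverses the nodes below its own scene data, so the RTT cameras under the root are never visited from subNodeB; the textures would then only update while some View renders the root.)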


The strange thing is: when I set the scene data of View B or C to the
root node, the rendered textures do get updated. (Maybe then the
prerender branch is also traversed for this View, causing the loss of
fps; on the other hand, the Callback of the live-camera texture is
called only once per frame...)


Thanks for your interest,
cheers,
Fabian


[osg-users] CompositeViewer, update traversal, prerender branch

2008-09-18 Thread Fabian Bützow

Hi everybody,

I've got some problems with displaying the result of a prerender step in
different views.


The root node has two basic branches:
a prerender branch that renders and processes a live camera image, and a
main render branch that has two subnodes. The output texture of the
prerender step is applied to the subnodes in different ways. A View is
set to each subnode to display the specific view of the scene. A third
View is set to the root node to observe the overall behaviour of the
texture (applied to both subnodes).


The problem is that the texture gets updated in the overall view, but
not in the subnode views (they show the initialised static texture, not
even the updated camera image). When I set the scene data of the subnode
views to the root, the texture gets updated, but the framerate drops.
When I assign no View to the scene root, the prerender branch isn't
traversed at all. As far as I know, the update traversal is called
once per frame for a composite viewer, which should update the texture in
all nodes. Please help me with that one.


Cheers,
Fabian





[osg-users] rendering in FB and to texture, shadow mapping test

2008-09-17 Thread Fabian Bützow

Hi,

I have a scene with two cameras that render in different views (same
resolution).

Cam2 should render only those fragments that are seen by cam1.

Basically it's a simple shadow mapping procedure:
the depth test is done in the fragment shader of cam2; if the fragment
is "in the light" it is drawn, otherwise discarded. Since I need to
render the cam1 view anyway, I could create the shadow map on the fly.
Therefore I need to render to texture and to the normal framebuffer of
view1 simultaneously, but I don't know how to do this (without an
additional rendering pass).
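
One possible approach, if I understand the Camera attachments right (untested sketch; width/height are cam1's viewport size): keep cam1 rendering to the normal framebuffer, but also attach a depth texture to it. With the FRAME_BUFFER render-target implementation, OSG copies the framebuffer contents into the attached texture after the pass, so the shadow map would come along without a separate RTT pass:

osg::ref_ptr<osg::Texture2D> shadowMap = new osg::Texture2D;
shadowMap->setTextureSize(width, height);                  // same resolution as cam1
shadowMap->setInternalFormat(GL_DEPTH_COMPONENT);

cam1->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
cam1->attach(osg::Camera::DEPTH_BUFFER, shadowMap.get());  // copied after cam1's pass

// cam2's fragment shader then samples shadowMap for its depth test:
cam2->getOrCreateStateSet()->setTextureAttributeAndModes(
    1, shadowMap.get(), osg::StateAttribute::ON);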


Maybe you could help me,
cheers
Fabian




[osg-users] performance issues with RTT

2008-09-08 Thread Fabian Bützow

G'day,

I'm also working on RTT prerender stuff,
and I experienced the same FRAME_BUFFER_OBJECT-slows-down-the-system
effect as Viggo.

I'm also running a Windows system.

My goals are slightly different from yours, but I am suffering
from low frame rates as well. I want to do image pre-processing of a
live camera image. The camera image is attached to a screen-filling
rectangle and rendered with a fragment shader. The output texture is
the input of the next RTT step.


I'm wondering if the low framerates are caused by insufficient hardware, 
programming mistakes or osg itself ;)


There are 6 RTT steps (640x480), each with 9 texture look-ups per fragment
(a filter mask) and very simple noise processing. I'm running an Opteron
2.21 GHz, 1 GB RAM, GeForce 8400 GS. I achieve (incl. camera image
grabbing) <8 fps.
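
(Back-of-the-envelope, my own arithmetic: 640 x 480 pixels x 9 look-ups x 6
passes is roughly 16.6 million texture fetches per frame, i.e. about 133
million fetches per second at 8 fps, which sounds like far less than even a
low-end GPU should sustain; so maybe the bottleneck is the camera grabbing
or the per-frame image upload rather than the shaders themselves.)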


Do you think this is ok, or should the system be working faster?

cheers,
Fabian







[osg-users] z-up & camera position & osgthirdpersonview

2008-09-06 Thread Fabian Bützow

Hi,
let me summarise my thoughts about the osg/OpenGL coordinate-system issue
(and please comment on them):

Thinking in local coordinate systems, every geometry has its own coordinate system,
starting in the WCS (0,0,0). In each coordinate system, X is east, Y is north and Z is up.
When you add transformation nodes into the scene graph between root and geometry,
the transformations accumulate (top-down) into a model matrix that transforms the coordinate system.

(imagine: the geometry is drawn into that modified coordinate system)

After that, the view matrix of the camera is applied (camera coordinates
in osg are Z-up).
Basically, that means the local coordinates are transformed into eye
coordinates.

Still, Z is up?!
(virtual camera at the origin, looking along positive Y, right-hand system)


I found this quote from Robert:
"Once the scene is transformed into eye space by the View matrix of the
Camera the coordinate system of the eye space is standard OpenGL, +ve
Y up the screen, +ve X to the right, +Z out from the screen."

And now I'm getting a little confused...
Now an additional rotation (90° about X) should be applied,
to rotate the coordinate system from osg into X east, Y up, Z south.


That would mean the camera still looks along positive Y?!
That would be strange when it comes to the viewing volume and the perspective division...

(Y scaled non-linearly?? Something's wrong here...)
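
Writing the rotation out explicitly (my own attempt, so take it with a grain of salt): for the default camera at the world origin, looking along +Y with Z up, the rotational part of the view matrix is a rotation of -90° about X,

$$ R_x(-90^\circ) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \qquad R_x(-90^\circ)\,(x,\,y,\,z)^T = (x,\,z,\,-y)^T, $$

so world +Y (the viewing direction) lands on eye -Z, and world +Z (up) lands on eye +Y. If that's right, then after the view matrix the camera does look down the negative z-axis as in standard OpenGL, and the viewing volume and perspective division behave as usual (no non-linear Y).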

See, I'm confused ;)
I'm sure you can help me.

cheers
Fabian








[osg-users] z-up & camera position & osgthirdpersonview

2008-09-06 Thread Fabian Bützow

Hi everybody,

I played around a bit with the osgthirdpersonview example.

My goal was to draw an additional line for the up vector (Vec3(0.0, 0.0, 1.0)).

Hence I added the following code to the example:

camera->getViewMatrixAsLookAt(*eye, *center, *up); // gives the up vector

// draws a line from the origin to the up vector
(*v)[9].set(*up);
GLushort idxLoops2[2] = { 9, 0 };
geom->addPrimitiveSet( new osg::DrawElementsUShort(
    osg::PrimitiveSet::LINE_LOOP, 2, idxLoops2 ) );


This didn't bring the desired effect,
so I disabled the inverse view matrix transformation to see where the
original view frustum would be drawn.
Surprisingly (for me ;)) the camera looks along the negative z-axis, as
in standard OpenGL...

My up vector, however, still pointed to Z up (and not, as expected, to Y north).

What do I have to do to draw the up vector appropriately?
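
A guess at a fix (untested sketch, reusing the example's v and geom variables): the frustum geometry in osgthirdpersonview is specified in eye space and then transformed by the inverse view matrix, so the up vector should also be given in eye space, where it is simply +Y. getViewMatrixAsLookAt() returns it in world coordinates, which the inverse view matrix then transforms a second time:

(*v)[9].set( osg::Vec3(0.0f, 1.0f, 0.0f) );   // eye-space up: +Y
GLushort idxLine[2] = { 0, 9 };               // origin -> up
geom->addPrimitiveSet( new osg::DrawElementsUShort(
    osg::PrimitiveSet::LINES, 2, idxLine ) );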

cheers & a nice weekend
Fabian






[osg-users] uniform vec2 array with setElement for convolution

2008-09-01 Thread Fabian Bützow

Hello,
I want to pass a uniform vec2 array as a convolution mask to an
image-processing shader.
A former post says I should use setElement(). Here is my attempt (it
doesn't work ;)):


osg:
   Uniform* filter = new Uniform();
   filter->setName("kernel");
   filter->setElement(0, Vec2(-1, 1));
   ...
   filter->setElement(8, Vec2(1, -1));
   meanStateSet->addUniform(filter);

shader:
   uniform sampler2DRect inTex;
   uniform vec2 kernel[9];
   void main()
   {
       float sum = 0.0;
       for (int i = 0; i < 9; ++i)
           sum += texture2DRect(inTex, gl_TexCoord[0].st + kernel[i]).r;
       gl_FragColor = vec4(sum / 9.0);   // note: assigning a bare float to gl_FragColor is invalid GLSL
   }

When I change kernel[i] to (for instance) vec2(0.0, 0.0) it works...

Any ideas?
What am I forgetting to set?
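
A guess at the missing piece (untested): a Uniform built with the default constructor has no type and no element count, so setElement() may have nothing to write into. Declaring it as a FLOAT_VEC2 array of 9 elements up front should help:

   osg::ref_ptr<osg::Uniform> filter =
       new osg::Uniform(osg::Uniform::FLOAT_VEC2, "kernel", 9);
   filter->setElement(0, osg::Vec2(-1.0f,  1.0f));
   // ... elements 1 to 7 ...
   filter->setElement(8, osg::Vec2( 1.0f, -1.0f));
   meanStateSet->addUniform(filter.get());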

cheers,
Fabian


[osg-users] Show shader prerender results in different Views

2008-08-27 Thread Fabian Bützow

Hi,

my goal is to prerender a live camera image with several shaders and
display the result of each shader step.


I have a couple of prerender RTT cameras that have textured geometry
quads as children; the shaders are attached to the geometry. The output
texture of a camera is the input texture for the next rendered geometry
quad. All the cameras are attached to the root; a sketch of one link of
the chain is below.
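
One link of the chain, simplified (inTex, outTex, passNumber and the makeShaderQuad helper are placeholders for my actual code; FBO targets assumed):

osg::ref_ptr<osg::TextureRectangle> outTex = new osg::TextureRectangle;
outTex->setTextureSize(640, 480);

osg::ref_ptr<osg::Camera> rtt = new osg::Camera;
rtt->setRenderOrder(osg::Camera::PRE_RENDER, passNumber);   // orders the chain
rtt->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rtt->setViewport(0, 0, 640, 480);
rtt->attach(osg::Camera::COLOR_BUFFER, outTex.get());
rtt->addChild(makeShaderQuad(inTex.get()));   // quad carrying the previous pass's output + shader
root->addChild(rtt.get());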


To display the shader result, I created another geometry quad below the
root, with the final output texture applied to it.
A View is set to the root with a camera observing the result geometry
quad. That works fine!


But:
I want to display every shader step in a different window. Hence I
created a CompositeViewer.
(Since I don't know if cameras can do both RTT and normal rendering, I did
the following:)
For each shader output I created an additional geometry quad, textured
with the respective texture and attached to the root via a geode.


However, a View attached to that geode does not show the output texture
of the shader, but the initialised texture value from before the shader
wrote to the texture.

How can that be, when simultaneously the result geometry quad (mentioned
above, with no View attached directly) shows the correct output??


Cheers,  Fabian






[osg-users] Efficient live camera rendering on textureRectangle using callback?

2008-08-21 Thread Fabian Bützow

Hello,
I want to display a live camera stream on a texture rectangle which I
pass to a bunch of shaders.

I just want to ask if the way I'm doing it is the most efficient one.
The best solution would be to upload the live camera image once to the GPU
and pass the texture to the several shaders.

Is this accomplished by the following code?
(The picture quad's state attribute is linked to the shader.)

struct DrawableUpdateCallback : public osg::Drawable::UpdateCallback
{
    DrawableUpdateCallback(CameraBase* camera, TextureRectangle* texture)
        : camera(camera), texture(texture) {}

    virtual void update(osg::NodeVisitor*, osg::Drawable*)
    {
        // Point the image at the latest raw frame and mark it dirty so
        // OSG re-uploads it to the GPU once per frame.
        Image* img = texture->getImage();
        img->setImage(camera->getWidth(), camera->getHeight(), 1,
                      GL_LUMINANCE, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                      camera->getRawFrame(), Image::NO_DELETE, 1);
        img->dirty();
    }

    CameraBase* camera;
    TextureRectangle* texture;
};

// in my class:
ref_ptr<Image> camImg = new Image();
PixelBufferObject* pbo = new PixelBufferObject(camImg.get());
camImg->setPixelBufferObject(pbo);   // stream the upload through a PBO

ref_ptr<TextureRectangle> texture = new TextureRectangle(camImg.get());

StateSet* state = pictureQuad->getOrCreateStateSet();
state->setTextureAttributeAndModes(0, texture.get(), osg::StateAttribute::ON);
pictureQuad->setUpdateCallback(new DrawableUpdateCallback(camera, texture.get()));


cheers,
Fabian



[osg-users] render to pixelbuffer, compute average cam image

2008-08-07 Thread Fabian Bützow

Hello everybody,

1) I'm new to OSG, hello ;)

2) I want to compute an average image out of several camera images.

My plan is to use GLSL:
draw the cam image as a texture onto a screen-filling rectangle, with an
orthographic projection;
render the rectangle several times, and divide the pixels by the number
of render passes in the fragment shader.
The results of the render passes need to be saved & added up in a
pixel buffer.
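
To make the plan concrete, this is roughly what I imagine the render-to-texture part would look like (untested sketch; I'm assuming an FBO-backed prerender camera is the right tool, and the sizes and the screenQuadGeode name are made up). With two such target textures one could ping-pong, so each pass reads the running sum from one texture and writes the updated sum into the other:

osg::ref_ptr<osg::TextureRectangle> target = new osg::TextureRectangle;
target->setTextureSize(640, 480);          // camera image size (assumed)
target->setInternalFormat(GL_RGBA);

osg::ref_ptr<osg::Camera> rtt = new osg::Camera;
rtt->setRenderOrder(osg::Camera::PRE_RENDER);
rtt->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rtt->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
rtt->setProjectionMatrixAsOrtho2D(0.0, 1.0, 0.0, 1.0);   // orthographic, screen-filling
rtt->setViewMatrix(osg::Matrix::identity());
rtt->setViewport(0, 0, 640, 480);
rtt->attach(osg::Camera::COLOR_BUFFER, target.get());
rtt->addChild(screenQuadGeode.get());      // the textured rectangle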


Question: how can I render to the pixel buffer? (And is the plan OK? ;))
Is there even something like a pixel buffer?

I looked through several mailing-list posts but found nothing concrete,
and no tutorial at all concerning the buffer question.

So please help!

cheers,
Fabian
