Re: [osg-users] [build] Getting error C2988: unrecognizable template declaration/definition while compiling osgdb_vrml

2010-12-16 Thread Martin Naylor
Hi,
It sounds like a bug in the compiler:
http://support.microsoft.com/kb/240866.
See if the workaround fixes it.

Regards

Martin.


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Holger
Krumm
Sent: 16 December 2010 07:41
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] [build] Getting error C2988: unrecognizable template
declaration/definition while compiling osgdb_vrml

Hi everybody!

I'm trying to compile the osgdb_vrml reader plugin. So far I have managed to
build an openvrml.lib from the current OpenVRML 0.18.5 release (not via
Subversion; I downloaded the source directly from the website).
When I try to compile I get error C2988: unrecognizable template
declaration/definition in openvrml\local\float.h. It seems to have
something to do with the definition of OPENVRML_LOCAL below.


Code:

template <typename Float>
OPENVRML_LOCAL inline Float fabs(const Float f)
{
    return f < 0.0 ? -f : f;
}




Did anyone run into the same trouble?

Any pointer is appreciated; unfortunately my C++ knowledge is limited
(a little bit rusty, you know :) )


Thanks!

Cheers,
Holger

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34907#34907






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Setting camera Viewmatrix with TrackBallManipulator Matrix gives nothing but black screen

2010-12-16 Thread Trajce (Nick) Nikolov
if this is your code (with all the comments) then here is what you should
do:

- forget about your osg::Camera* camera = new osg::Camera; // there is already a
Camera attached to the View
- use view->getCamera()->setProjectionMatrixAsPerspective(45,1,1,1000);
- no need to attach any CameraManipulator if you want to set the view matrix
on your own (although that is exactly what a CameraManipulator is for -
changing the view matrix - so you may prefer that route once you are more
familiar with the code)
- in your loop call
viewer.getView(0)->getCamera()->setViewMatrixAsLookAt(eye,center,up) - see the sketch below
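
A minimal sketch of that approach (illustrative only, not taken from the original post; it assumes a single View on a CompositeViewer and drives the View's own default Camera):

#include <osg/Camera>
#include <osgDB/ReadFile>
#include <osgViewer/CompositeViewer>
#include <osgViewer/View>

int main(int, char**)
{
    osgViewer::CompositeViewer viewer;

    osgViewer::View* view = new osgViewer::View;
    view->setSceneData(osgDB::readNodeFile("cow.osg"));
    view->setUpViewOnSingleScreen(0);
    viewer.addView(view);

    // The View already owns a Camera - configure it directly;
    // no extra osg::Camera and no CameraManipulator needed.
    osg::Camera* cam = view->getCamera();
    cam->setProjectionMatrixAsPerspective(45.0, 1.0, 1.0, 1000.0);

    viewer.realize();
    while (!viewer.done())
    {
        // eye and center must not coincide, otherwise the look-at is degenerate.
        osg::Vec3d eye(0.0, -100.0, 0.0);
        osg::Vec3d center(0.0, 0.0, 0.0);
        osg::Vec3d up(0.0, 0.0, 1.0);
        cam->setViewMatrixAsLookAt(eye, center, up);
        viewer.frame();
    }
    return 0;
}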

-Nick


On Wed, Dec 15, 2010 at 7:02 PM, Bart Jan Schuit osgfo...@tevs.eu wrote:

 Hi,

 I'm trying to set up some cameras without a manipulator. When I assign a
 TrackballManipulator to the views, I get the cow projected on the screen. But
 as soon as I manually set up the cameras, I get a completely black screen.
 I extract eye, center and up from Tman (the TrackballManipulator) via
 Tman->getMatrix().getLookAt(eye, center, up).

 This gives some nice coordinates, but when I put these into a camera without
 a manipulator like Tman, I just get a black screen. What am I doing wrong
 here?


 Code:

int main( int argc, char **argv )
{
    // use an ArgumentParser object to manage the program arguments.
    osg::ArgumentParser arguments(argc,argv);

    osg::Group* scene = new osg::Group();
    osg::Node* groundNode = NULL;
    groundNode = osgDB::readNodeFile("cow.osg");

    scene->addChild(groundNode);

    osgViewer::CompositeViewer viewer(arguments);

    if (arguments.read("-2"))
    {
        // view one
        {
            osg::Vec3d eye = osg::Vec3d(0,0,250);
            osg::Vec3d center = osg::Vec3d(0,0,250);
            osg::Vec3d up = osg::Vec3d(0,0,-1);
            osg::Quat rotation;
            osg::Matrixd viewmat;

            osg::Camera* camera = new osg::Camera;
            osgViewer::View* view = new osgViewer::View;
            view->setName("View one");
            viewer.addView(view);
            //camera->setProjectionMatrix( osg::Matrix::ortho2D(0,512,0,512) ); //not doing anything
            //camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
            //camera->setViewMatrix( osg::Matrix::identity() );
            view->setCameraManipulator(Tman);
            //Tman->setAutoComputeHomePosition(false);
            view->setUpViewOnSingleScreen(0);
            view->setSceneData(scene);
            //view->setCamera(camera);
        }

        // view two
        {
            osg::Matrixd viewmat;
            osg::Camera* camera = new osg::Camera;
            osgViewer::View* view = new osgViewer::View;
            view->setName("View two");
            viewer.addView(view);
            view->setUpViewOnSingleScreen(1);
            view->setSceneData(scene);
            //view->setCamera(camera);
            view->setCameraManipulator(Tman);
            view->setName("right");
            osg::Vec3d eye = osg::Vec3d(0,0,25);
            osg::Vec3d center = osg::Vec3d(0,0,25);
            osg::Vec3d up = osg::Vec3d(0,0,-1);
        }
    }
    viewer.realize();

    while(!viewer.done())
    {
        osg::Vec3d eye = osg::Vec3d(0,0,50);
        osg::Vec3d center = osg::Vec3d(0,0,50);
        osg::Vec3d up = osg::Vec3d(0,0,-1);
        Tman->setHomePosition(eye,center,up); //not working. Doesn't matter how I set eye, center etc.
        viewer.frame();
    }
}




 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=34890#34890






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] osgPPU CUDA Example - slower than expected?

2010-12-16 Thread Thorsten Roth

Hi,

as I explained in some other mail to this list, I am currently working
on a graph-based image processing framework using CUDA. Basically, this
is independent of OSG, but I am using OSG for my example application :-)


For my first implemented postprocessing algorithm I need color and depth
data. As I want the depth to be linearized between 0 and 1, I use a
shader for that, and I render it in a separate pass from the color.
This data is then fetched from the GPU to the CPU by directly attaching
osg::Images to the cameras. This works perfectly, but is quite slow,
as you might already have suspected, because the data is also
processed in CUDA kernels later, which makes for quite a bit of back and forth ;-)


In fact, my application with three filter kernels based on CUDA (one
Gaussian blur with radius 21, one image subtract and one image pseudo-add
(about as elaborate as a simple add ;-)) yields about 15 fps at a
resolution of 1024 x 1024 (images for normal and absolute position
information are also rendered and transferred from GPU to CPU here).


So with these 15 frames, I thought it should perform FAR better when
avoiding that GPU -> CPU copying. That's when I came across the
osgPPU-cuda example. As far as I am aware, it uses direct mapping of
PixelBufferObjects into CUDA memory space. This should be fast! At least
that's what I thought, but running it at a resolution of 1024 x 1024
with a StatsHandler attached shows that it runs at just ~21 fps, and it
does not get much better even when the CUDA kernel execution is completely
disabled.


Now my question is: Is that a general (known) problem which cannot be
avoided? Does it have anything to do with the memory mapping functions?
How can it be optimized? I know that osgPPU uses the older CUDA
memory mapping functions, and that there are new ones as of CUDA 3. Is there a
difference in performance?
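
For reference, the two API generations being compared look roughly like this (a hedged sketch, not code from osgPPU; the buffer id, kernel launches and error handling are placeholders):

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Older, pre-CUDA-3.0 style mapping of an OpenGL buffer object:
void mapOldStyle(GLuint pbo)
{
    cudaGLRegisterBufferObject(pbo);
    void* devPtr = 0;
    cudaGLMapBufferObject(&devPtr, pbo);
    // ... launch kernels on devPtr ...
    cudaGLUnmapBufferObject(pbo);
    cudaGLUnregisterBufferObject(pbo);
}

// Newer graphics-resource API introduced with CUDA 3.0:
void mapNewStyle(GLuint pbo)
{
    cudaGraphicsResource* resource = 0;
    cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsNone);
    cudaGraphicsMapResources(1, &resource, 0);
    void* devPtr = 0;
    size_t size = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, resource);
    // ... launch kernels on devPtr ...
    cudaGraphicsUnmapResources(1, &resource, 0);
    cudaGraphicsUnregisterResource(resource);
}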


Any information on this is appreciated, because it will really help me
decide whether I should integrate buffer mapping or just keep
copying :-)


Best Regards
-Thorsten
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] PixelDataBufferObject functions incorrectly named

2010-12-16 Thread Michael Platings
Hi all,

In PixelDataBufferObject::bindBufferInWriteMode() I was surprised to see
glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, ...). According to the OpenGL spec,
"pack" refers to *reading* pixels from the GPU into main memory.
Is this a bug? The comment for the function explicitly says "note:
GL_PIXEL_PACK_BUFFER_ARB", so I'm guessing it's intentional, and maybe in
some particular situation it makes sense. However, in the general case the
naming is incorrect and has in fact led me to call the wrong function.
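
For reference, the direction of each target in plain OpenGL terms (a hedged sketch; the buffer ids, width and height are assumed to exist, and a GL context is current):

// GL_PIXEL_PACK_BUFFER: packing, i.e. reading pixels back from the GPU
// into the bound buffer (glReadPixels, glGetTexImage).
glBindBuffer(GL_PIXEL_PACK_BUFFER, readbackPbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0); // 0 = offset into the PBO

// GL_PIXEL_UNPACK_BUFFER: unpacking, i.e. sourcing pixel data from the
// bound buffer when uploading to the GPU (glTexImage*, glTexSubImage*).
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, uploadPbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);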

If this isn't a bug, then to prevent others being similarly misled I suggest
we do one of the following:
a) Rename the functions to bindBufferInPackMode/bindBufferInUnpackMode
b) Just have one bindBuffer function that takes the target as a parameter.
c) Just have one bindBuffer function that uses the target returned by
BufferObject::getTarget()

Cheers
-Michael
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgPPU CUDA Example - slower than expected?

2010-12-16 Thread Thorsten Roth
By the way: There are two CUDA-capable devices in the computer, but I 
have tried using the rendering device as well as the CUDA-only device 
- no difference!


-Thorsten


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgPPU CUDA Example - slower than expected?

2010-12-16 Thread Thorsten Roth
OK, I correct this: there is a difference of ~1 frame ;) ... now I will
stop replying to my own messages :D



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] How can I change the RGBA value of the shadow?

2010-12-16 Thread Sebastian Messerschmidt

Am 15.12.2010 08:57, schrieb Duan Linghao:

Hi,
I want to control the color of the shadow. How can I change the RGBA value of the
shadow?
...


You'll have to be a little more specific.
Which shadowing technique are you using?
In case you use standard shadow mapping you can set an emissive/ambient
color to modify the shadow color.
Take a look at the various osgShadow implementations; there you'll
most likely find the shader value to modify, or you can go with your own
fragment shader.
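
As one concrete, hedged example: if the ShadowMap technique is in use, its ambient bias is one way to control how dark the shadowed areas appear (the values and the shadowedScene variable below are illustrative):

#include <osgShadow/ShadowedScene>
#include <osgShadow/ShadowMap>

// Sketch only: assumes the scene already hangs under an osgShadow::ShadowedScene.
osg::ref_ptr<osgShadow::ShadowMap> sm = new osgShadow::ShadowMap;
// The two components weight the lit vs. shadowed contribution in the
// ShadowMap shader; tweaking them lightens or darkens the shadow.
sm->setAmbientBias(osg::Vec2(0.5f, 0.5f));
shadowedScene->setShadowTechnique(sm.get());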


cheers
Sebastian

Thank you!

Cheers,
Duan

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34875#34875








___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Change Background color on IPhone

2010-12-16 Thread Laith Dhawahir
Hi Guys,
Does anyone know how to change the background color in OSG for iPhone?
Using viewer.setClearColor doesn't work.
... 
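
(For reference, on the desktop builds the clear colour belongs to the camera rather than the viewer - a hedged sketch, in case the iPhone build behaves the same way:)

// Set the clear (background) colour on the viewer's master camera.
viewer.getCamera()->setClearColor(osg::Vec4(0.2f, 0.2f, 0.4f, 1.0f));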

Thank you!

Cheers,
Laith

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34916#34916





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [ANN] MS Kinect - official drivers available

2010-12-16 Thread Christian Richardt
I've used the OpenNI framework and it doesn't work with the Kinect out
of the box. The reason for that is that OpenNI targets PrimeSense's
reference platform, not the Kinect, which is a product based on it but
not actually produced by PrimeSense, as far as I know.

However, some people have already modified the drivers to recognise
the Kinect and use it with OpenNI [1]. To use it, first install that
modified driver and then the normal OpenNI. The sample applications of
OpenNI then recognise the Kinect. PrimeSense also distribute NITE,
which is a closed source package that extends OpenNI with a pretty
good skeleton tracker and a few other nice things.

Christian.

[1] https://github.com/avin2/SensorKinect/tree/master/Bin (Windows
only, but general build instructions inside)

On Thu, Dec 16, 2010 at 7:59 AM, Torben Dannhauer tor...@dannhauer.info wrote:
 Hi dimi,

 PrimeSense produces the MS Kinect product. In Germany there is a very famous
 publisher of IT journals (publisher 'Heise', www.heise.de, journals iX and c't).

 They reported a lot regarding MS Kinect and the open source drivers.
 They also reported the release of the OpenNI drivers. I haven't tested them yet
 because I'll buy a Kinect as a Christmas gift to myself next week (yes, sometimes
 Christmas is a great excuse :D )

 But given the quality of Heise's journalism, I'm absolutely sure it will
 work out of the box.

 I can give feedback in two weeks :)

 Best regards,
 Torben

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=34906#34906






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Blender osgExport nodemask

2010-12-16 Thread Riccardo Corsi
Hi Cedric and all,

I'm currently using Blender and osgExport, and I've noticed that the exporter
doesn't assign any nodemask to the exported scenegraph.
Setting nodemasks from Blender might be useful to identify/preprocess some
imported models in osg.

I'm a total noob in Python, but the attached modified version of osgobject.py
does the trick of setting a default nodemask.

Do you think it would be hard to retrieve the nodemask value from the config file,
or to expose a control in the exporter GUI?
That would be a nice add-on!

Thank you,
ricky


osgobject.py
Description: Binary data
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [ANN] MS Kinect - official drivers available

2010-12-16 Thread Torben Dannhauer
Hi Christian,

thanks for this report, I ordered my Kinect some hours ago :)

Best regards,
Torben

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34919#34919





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Blender osgExport nodemask

2010-12-16 Thread Cedric Pinson
Hi Ricky,

Sure, it could make sense to set up a default nodemask when exporting from
the GUI. I can't apply your patch in its current state; it would need to
be more general, e.g. exposed in the GUI or as an option ...

I have added an issue:
https://bitbucket.org/cedricpinson/osgexport/issue/3/add-default-nodemask

An alternative would be to set a specific name on your root node in
Blender, then apply a visitor that tags your graph with a specific
nodemask when loading it in osg - a rough sketch follows.
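
Such a visitor could look roughly like this (hedged - the class name and mask value are illustrative):

#include <osg/Node>
#include <osg/NodeVisitor>

// Tags every node in a loaded subgraph with a fixed node mask.
class SetNodeMaskVisitor : public osg::NodeVisitor
{
public:
    SetNodeMaskVisitor(osg::Node::NodeMask mask)
        : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN), _mask(mask) {}

    virtual void apply(osg::Node& node)
    {
        node.setNodeMask(_mask);
        traverse(node);
    }

private:
    osg::Node::NodeMask _mask;
};

// Usage after loading, e.g.:
//   SetNodeMaskVisitor snmv(0x4);
//   loadedModel->accept(snmv);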

Cedric


-- 
Provide OpenGL, WebGL and OpenSceneGraph services
+33 659 598 614 Cedric Pinson mailto:cedric.pin...@plopbyte.net
http://www.plopbyte.net


signature.asc
Description: This is a digitally signed message part
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [ANN] MS Kinect - official drivers available

2010-12-16 Thread dimi christop
Yes, that sounds reasonable.
After Torben's initial report I began to wonder how M$ had suddenly become so
generous and open-source friendly.
Thank you Christian for the news, and for reminding us that there is no such
thing as a free meal.

Dimi




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Fullscreen dual monitor spanning

2010-12-16 Thread Christina Werner
I added the event handler, but it doesn't work for me!

It does not matter how many times I press 'f', my application is only shown
on one screen.

Can somebody help?
Any other ideas? I think it should be possible to span it across two full screens.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34922#34922





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] background visible on point sprite spheres

2010-12-16 Thread Don Leich

Hi all,

I've got a problem that I haven't been able to find a solution
for and could use some help.  I'm using the standard texture file
OpenSceneGraph-Data/Images/sphere.gif as the source image for
point sprites.  The file is an image of a shaded sphere against
a fully transparent background.

I can set a state to properly render small 2-D sphere images with
GL_POINTS primitive type.  I needed to add sprites to my scene
graph after some other content that requires setting a different
state first.  The point sprites after this other content will show
the shaded sphere image correctly, but will now also render the
sphere image background even though it should be fully transparent.

Adding osg::StateAttribute::OVERRIDE to the blend function state
was a thought, but no help.

fn->setFunction(osg::BlendFunc::SRC_ALPHA,
    osg::BlendFunc::ONE_MINUS_SRC_ALPHA);

_state->setAttributeAndModes(fn,
    osg::StateAttribute::OVERRIDE|osg::StateAttribute::ON);

A dump and compare of .osg files didn't yield any insight.  Does
anyone have a suggestion for a possible fix here, or maybe a way
to debug the state with OSG internals?  What besides BlendFunc
should have an influence here?  Does it sound like I'm just not
applying the state where I think I am?

Thanks,
-Don Leich

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Fullscreen dual monitor spanning

2010-12-16 Thread David Glenn


Well, I'll tell you what I had to resort to, but it's a hack, not a fix.

First I render the view to a window (not the whole screen). There are many ways
to do that - the examples show you how.

Then, when I start things off, I resize the window frame to a bit beyond the extent
of the two screens. This can be done in code, but if all else fails, resize it with
your mouse after you start the program - I told you it was a hack!

Note: Make sure that resize of the window GUI is linked to your OSG resize for 
this to work!
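
In code, the window-spanning variant of that hack might look roughly like this (a hedged sketch; the 3840x1080 geometry is illustrative for two 1920x1080 screens side by side):

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main(int, char**)
{
    osgViewer::Viewer viewer;
    viewer.setSceneData(osgDB::readNodeFile("cow.osg"));
    // Open a window sized across both screens instead of going
    // fullscreen on one of them; resize further at runtime if needed.
    viewer.setUpViewInWindow(0, 0, 3840, 1080);
    return viewer.run();
}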

This is the best that I've been able to do in the Linux realm - might work in 
MS Windows. 

I use this to render a 3D projection with two polarized projectors and it
works, but as I said, it's a hack! I'm looking into a more practical solution as
time permits - this 3D projection stuff is more of a hobby for me right now.

This is one of those subjects that may have an easy answer, but it's either just
a bit too far outside the box to find it on this forum, or beyond the interest of
others to answer - or that's what I'm beginning to think!

For me, I've done some very weird stuff with this OSG code, much to the
puzzlement of some of the OSG authors here, and when you do that you sometimes
have to stick to your guns and find the answers in other ways and on other forums!

D Glenn


D Glenn (a.k.a David Glenn) - Moving Heaven and Earth!

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34924#34924





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] background visible on point sprite spheres

2010-12-16 Thread Yurii Monakov
Hi Don!

I think that you can try enabling GL_BLEND mode in your StateSet (if
it is not already enabled).
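
A hedged sketch of a StateSet that typically works for textured point sprites - note the alpha test, which neither message mentions; the bin choice and alpha threshold are illustrative:

#include <osg/AlphaFunc>
#include <osg/BlendFunc>
#include <osg/PointSprite>
#include <osg/StateSet>

// Sketch: state for rendering point sprites whose texture has a transparent background.
void setupSpriteState(osg::StateSet* ss)
{
    ss->setTextureAttributeAndModes(0, new osg::PointSprite, osg::StateAttribute::ON);
    ss->setAttributeAndModes(new osg::BlendFunc(osg::BlendFunc::SRC_ALPHA,
                                                osg::BlendFunc::ONE_MINUS_SRC_ALPHA),
                             osg::StateAttribute::ON);
    // Discard fully transparent texels outright, so they do not write depth
    // even if the blending state is overridden further down the graph.
    ss->setAttributeAndModes(new osg::AlphaFunc(osg::AlphaFunc::GREATER, 0.05f),
                             osg::StateAttribute::ON);
    ss->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
}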

Best regards,
Yurii Monakov


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] VertexBufferObject usage very slow...

2010-12-16 Thread Sean Spicer
Hi Everyone,

Working off the OSG trunk this afternoon, I tried some experiments
with VertexBufferObjects and our geometry (all on the fast path).  The
only deltas in our code are as follows...all timing as measured by OSG
stats:

geometry-setUseDisplayList(true)
geometry-setUseVertexBufferObjects(false)
=== Draw time = 2ms, FrameTime = 12ms

geometry-setUseDisplayList(false)
geometry-setUseVertexBufferObjects(false)
=== Draw time = 13ms, FrameTime= 19ms

geometry-setUseDisplayList(false)
geometry-setUseVertexBufferObjects(true)
=== Draw time = 67ms !!!  FrameTime = 109ms

What is going on here?  We are always on the fast path - however, our
vertex arrays are large (65535 verts).  VBOs *should* be way faster
than immediate mode... any ideas?

sean
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Blender osgExport nodemask

2010-12-16 Thread Riccardo Corsi
Hi Cedric,

of course the patch doesn't make sense as it is;
if I get a chance to work more on it I'll share the code.
Thanks,
ricky



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] background visible on point sprite spheres

2010-12-16 Thread Don Leich

Thanks Yurii,

I did have that, also with OVERRIDE | ON.

_state->setMode( GL_BLEND,
 osg::StateAttribute::OVERRIDE|osg::StateAttribute::ON );

_state->setRenderBinDetails(10, "DepthSortedBin",
  osg::StateSet::OVERRIDE_RENDERBIN_DETAILS );

Still stumped, but distracted by other things today.

-Don


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [ANN] MS Kinect - official drivers available

2010-12-16 Thread Torben Dannhauer
Hello Dimi,

the driver is NOT provided by Microsoft, but by PrimeSense, the producer.

In my opinion, Microsoft's attitude towards the Kinect drivers is just putting a good
face on the matter. Kinect was hacked, and they had to decide whether to
pursue the modifications or to allow them and open Kinect up for usage
beyond the Xbox.
I personally was very surprised that they decided to do the latter - Microsoft
isn't exactly an example of good open-source cooperation for me.


Best regards,
Torben

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34930#34930





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org