[osg-users] Texture Buffer Object Initialization Problem

2018-02-26 Thread Eon Strife
Hi,

I am trying to attach additional per-vertex information to each vertex. For 
each vertex, there are 3 sets of RGBA values I want to attach (called 
controlSetA, controlSetB, and controlSetC). The controlSet values come from an 
additional file in JSON format; they are not embedded in the original 3D 
model. So my plan is to load those controlSet values, keep each controlSet in 
a Texture Buffer Object, and sample them in the GLSL vertex shader.

However, I encounter a problem:

Say my 3D model is a sphere consisting of 382 vertices. The JSON file therefore 
has 1528 values per control set (382 vertices x 4 channels = 1528), or 
1528 x 3 sets = 4584 values in total. So, in "tboImage->allocateImage", I request 
a texture of dimension 382 x 1 x 1 with RGBA float texels. At runtime, inside this 
initialization function, I can see in the Visual Studio watch window that the image 
has the correct 382 x 1 x 1 dimension. However, when I check it with CodeXL, the 
texture's dimension is only 47 x 1 x 1 (if I am not mistaken). Moreover, if I 
visualize the values in the rendering, I can see that only about one quarter of the 
sphere's surface (from the bottom) is painted with them.
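
For clarity, this is a minimal sketch of what I expect the allocation to look like 
for one control set (numbers as above; s()/t()/r() and getTotalSizeInBytes() are 
just the standard osg::Image getters I use to double-check, so 382 RGBA float 
texels should come out as 382 x 4 channels x 4 bytes = 6112 bytes):

Code:

#include <osg/Image>
#include <osg/Notify>

// Sanity-check sketch: allocate one control set (382 vertices, one RGBA float texel each)
osg::ref_ptr<osg::Image> img = new osg::Image;
img->allocateImage(382, 1, 1, GL_RGBA, GL_FLOAT);

// Expected: 382 x 1 x 1, and 382 * 4 channels * 4 bytes = 6112 bytes
OSG_NOTICE << img->s() << " x " << img->t() << " x " << img->r() << ", "
           << img->getTotalSizeInBytes() << " bytes" << std::endl;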

I am using OpenSceneGraph 3.5.6, 64-bit, with Visual Studio 2013.

The following is the relevant code:


Code:


std::vector<std::string> controlSetNames = { "controlSetA", "controlSetB", "controlSetC" };
std::vector<int> controlSetIDs = { 3, 4, 6 };

...

int itrcontrolSet = 0;
//for each control set
{
    int vertexCount = 0;
    std::vector<float> values;

    //just pseudocode here
    //for each vertex
    //{
    //    vertexCount++;
    //    for each channel
    //    {
    //        values.push_back(channel->GetFloat());
    //    }
    //}

    //create the texture image: one RGBA float texel per vertex
    osg::ref_ptr<osg::Image> tboImage = new osg::Image;
    tboImage->allocateImage(vertexCount, 1, 1, GL_RGBA, GL_FLOAT);
    float *dataF = (float*)tboImage->data();

    //copy the json data into it
    //memcpy(dataF, values.data(), sizeof(float) * values.size());
    for (size_t i = 0; i < values.size(); i++)
        *(dataF++) = values[i];

    //create the texture buffer object
    osg::ref_ptr<osg::TextureBuffer> tbo = new osg::TextureBuffer;
    tbo->setImage(tboImage.get());
    tbo->setInternalFormat(GL_RGBA32F_ARB);
    tbo->setTextureWidth(vertexCount);
    tbo->dirtyTextureParameters();

    //bind the sampler uniform and the TBO to the same texture unit
    //(foundNode is the geometry node located earlier, omitted above)
    osg::ref_ptr<osg::StateSet> stateset = foundNode->getOrCreateStateSet();
    osg::Uniform* texControlSet =
        new osg::Uniform(controlSetNames[itrcontrolSet].c_str(),
                         controlSetIDs[itrcontrolSet]);
    stateset->addUniform(texControlSet);
    stateset->setTextureAttributeAndModes(controlSetIDs[itrcontrolSet],
                                          tbo.get());

    itrcontrolSet++;
}




And here is the shader code:

Code:

#version 330 compatibility
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_explicit_uniform_location : enable

layout( location = 3) uniform samplerBuffer controlSetA;
layout( location = 4) uniform samplerBuffer controlSetB;
layout( location = 6) uniform samplerBuffer controlSetC;

...

layout( location = 0 ) out vec4 out_vColor0;
layout( location = 1 ) out vec4 out_vColor1;
layout( location = 2 ) out vec4 out_vColor2;
layout( location = 3 ) out vec4 out_vColor3;

...

void main()
{
    ...
    vec4 in_vColor0 = gl_Color;

    // clamp() is the GLSL equivalent of saturate()
    out_vColor0 = clamp(in_vColor0, 0.0, 1.0);
    out_vColor1 = texelFetch(controlSetA, gl_VertexID);
    out_vColor2 = texelFetch(controlSetB, gl_VertexID);
    out_vColor3 = texelFetch(controlSetC, gl_VertexID);

    ...
}





Thank you!

Cheers,
Eon

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=72984#72984





[osg-users] Issues with Qt-based application and occlusion queries

2018-02-26 Thread Daniel Trstenjak
Hi,

The issue is that if the Qt application window is minimized, the occlusion
queries always return 0 as the number of passed pixels.

I don't think this is an OpenSceneGraph issue, but I'm hoping to get
some hints here about what the reason might be, because at the moment
I'm pretty much out of ideas.

We have a Qt-based application with a QGLWidget for the rendering area, and
we use an 'osgUtil::SceneView' to render the scene.

For the occlusion queries we don't use OpenSceneGraph functionality; we use the
OpenGL extensions GL_NV_occlusion_query or GL_ARB_occlusion_query directly.
Both extensions behave the same way in this case.
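
For reference, the standard per-query pattern for the extension looks roughly like
this (a minimal sketch using the ARB entry points; the NV variant is analogous, and
error handling and availability checks are omitted):

Code:

// Sketch of one occlusion query with GL_ARB_occlusion_query
GLuint query = 0;
glGenQueriesARB(1, &query);

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
// ... render the geometry whose visibility should be tested ...
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

// Read back the number of passed pixels; this is the value that is
// always 0 when the Qt window is minimized.
GLuint passedPixels = 0;
glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &passedPixels);

glDeleteQueriesARB(1, &query);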

An application session can be recorded into a script - which might also
include taking snapshots of the 3D scene - and the script can be
executed afterwards and should produce the same snapshots.

The application has a "no gui" mode, which runs the application with a
minimized Qt window and is mostly used for script execution. Everything
regarding snapshots works in the "no gui" mode, except for the occlusion
queries, which always return 0 as the number of passed pixels.

We also have a "no display" mode of the application, to be able to run
scripts on servers without graphics hardware by using the Mesa OpenGL
library, and in this case the occlusion queries work as expected. The
biggest difference on the Qt side in this case seems to be that the
instantiated QApplication is of type QApplication::Tty.
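
To make that concrete, here is a minimal sketch of what the application-type
difference refers to, assuming the Qt 4 three-argument QApplication constructor
(createApp is just an illustrative helper, not our actual startup code):

Code:

#include <QApplication>

// Sketch: how the QApplication differs between the modes (Qt 4 API)
QApplication* createApp(int& argc, char** argv, bool noDisplay)
{
    if (noDisplay)
        // "no display" mode: a TTY-type application, no GUI session needed
        return new QApplication(argc, argv, QApplication::Tty);

    // normal and "no gui" mode: a regular GUI application (window may be minimized)
    return new QApplication(argc, argv);
}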

It's also possible to run the Mesa-based version in "no gui" mode, in which
case I get yet another result: the occlusion query then returns some passed
pixels, but the count is wrong.

At the moment I'm pretty much perplexed about how to interpret these
different results, and I could reproduce them on different Linux machines
with different graphics cards and drivers.

At the moment we're using OpenSceneGraph-3.0.1 and Qt-4.6.3 (we're
porting to Qt-5.9.4, which gives the same results).

My local machine:
System: Ubuntu 16.04
Graphics Card: GeForce GTX 970
Driver: NVIDIA 375.39

I greatly appreciate any ideas about what the issue might be here.

Thanks!


Greetings,
Daniel