Re: [osg-users] 3D osg::Image allocation size problem

2016-07-22 Thread Sebastian Messerschmidt

Hi Josiah,



On 22.07.2016 at 13:00, Josiah Jideani wrote:

> Hi,
>
> I am developing a scientific visualization application using Qt and
> OpenSceneGraph. I am trying to create a 3D osg::Image to add to an
> osgVolume. I am having problems allocating the image data when I call
> the allocateImage member function (see the code snippet below).
>
> The allocation works for equal dimensions less than 640.
>
> When I try to allocate anything above 640x640x640 but less than
> 800x800x800, it seems to allocate successfully because image_s, image_t
> and image_r hold the correct sizes; however, when I try to write to the
> image data (the nested for loops), a segmentation fault is thrown at
> data[0] = 0.0f when s = 0, t = 0, and r = some random but valid number.
Putting the numbers together: 640^3 texels at 4*4 bytes (GL_FLOAT, RGBA)
comes to just under the 4 GiB limit.
I believe the image size used internally by OSG is an unsigned int
(which on most platforms is 32 bits), so that is probably what you're
hitting. [1] Could you check what the calculated image size returns when
the allocation fails?


If my theory is right, we would have to change the data type used for
the calculation to long/size_t. (It seems the size calculation and the
constructor are the only places that fail to use the correct type.)
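
To make the arithmetic concrete, here is a small standalone sketch (my
own illustration, not OSG code) of what a 32-bit size calculation does
for GL_RGBA + GL_FLOAT, i.e. 16 bytes per texel:

#include <cstddef>
#include <iostream>

int main()
{
    const int dims[] = { 640, 800, 1024 };
    for (int i = 0; i < 3; ++i)
    {
        const int d = dims[i];
        // 32-bit arithmetic (assuming a 32-bit unsigned int), as the
        // theory says the Image size calculation currently does:
        unsigned int wrapped =
            (unsigned int)d * (unsigned int)d * (unsigned int)d * 16u;
        // 64-bit arithmetic, as a size_t-based calculation would give
        // on a 64-bit Linux system:
        std::size_t exact = (std::size_t)d * d * d * 16u;
        std::cout << d << "^3: exact " << exact
                  << " bytes, 32-bit " << wrapped << " bytes\n";
    }
    return 0;
}

// Prints:
//   640^3:  exact 4194304000  / 32-bit 4194304000  (just fits below 2^32)
//   800^3:  exact 8192000000  / 32-bit 3897032704  (wraps: under-allocation)
//   1024^3: exact 17179869184 / 32-bit 0           (wraps to exactly zero)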



Cheers
Sebastian

[1] 
http://trac.openscenegraph.org/projects/osg//browser/OpenSceneGraph/trunk/src/osg/Image.cpp?rev=10890

unsigned int Image::getTotalSizeInBytesIncludingMipmaps() const


> I can allocate and write to the image data with sizes between
> 800x800x800 and 1024x1024x1024, but a segmentation fault is thrown from
> the object code after the call to the viewer's frame() method.
>
> And finally, for sizes above 1024 the allocation fails completely, as
> image_s, image_t and image_r all hold 0.
>
> Any clue on how to solve this? It was my understanding that the maximum
> size of the image is limited by the maximum 3D texture size of the
> graphics card, which for the Quadro K4200 that I'm using is
> 4096x4096x4096. So why am I only able to allocate a 640x640x640 image?
>
> These are the specifications of my system:
> Operating system: openSUSE Leap 42.1
> RAM: 128GB
> Graphics card: Quadro K4200
> Qt: Qt 4.7.1
> OSG version: 3.2.3

Are you trying to compensate for something? ;-)





> Thank you!
>
> Cheers,
> Josiah


> Code:
>
> osg::ref_ptr<osg::Image> image = new osg::Image;
> image->allocateImage(1024, 1024, 1024, GL_RGBA, GL_FLOAT);
>
> int image_s = image->s();
> int image_t = image->t();
> int image_r = image->r();
>
> for(int s = 0; s < image_s; s++)
> {
>     for(int t = 0; t < image_t; t++)
>     {
>         for(int r = 0; r < image_r; r++)
>         {
>             float* data = (float*) image->data(s,t,r);
>             data[0] = 0.0f;
>             data[1] = 0.0f;
>             data[2] = 1.0f;
>             data[3] = 0.1f;
>         }
>     }
> }




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] 3D osg::Image allocation size problem

2016-07-22 Thread Robert Osfield
Hi Josiah,

Without a stack trace it's very hard to know exactly what is amiss.
My best guess is a memory allocation error that isn't being handled
elegantly. Have a look at the Image::allocateImage() implementation
to see what is happening during the allocation.
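
As a cheap guard in application code (my suggestion, not something from
the thread), you can at least detect the total-failure case before
entering the write loops; note it will not catch a wrapped-but-nonzero
size, where s(), t() and r() still look correct:

image->allocateImage(1024, 1024, 1024, GL_RGBA, GL_FLOAT);
if (!image->data() || image->s() == 0 || image->t() == 0 || image->r() == 0)
{
    OSG_WARN << "allocateImage() failed" << std::endl;  // from <osg/Notify>
    return;
}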

Also check on paper how much memory your allocation will require on the
CPU and GPU. Please note that the driver keeps a copy of the texture
object in driver memory, so you'll need to double the memory costs in
main memory when doing your calculations. This calculation should give
you an idea of what you can do in main memory without paging.

For the settings you listed in your email the calculation would be:

   1024 x 1024 x 1024 x 4 x 4

which is 16GB just for the osg::Image. Double this for the driver copy
and you require 32GB in main memory, and the graphics card would need
more than 16GB of its own memory as well...

So you have to be realistic about what you can achieve.
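
Something like the following sketch (my own helper, not an OSG API)
captures that rule of thumb; the doubling for the driver-side copy is an
estimate, not an exact figure:

#include <cstddef>

bool volumeIsRealistic(int s, int t, int r,
                       std::size_t bytesPerTexel,  // 16 for GL_RGBA + GL_FLOAT
                       std::size_t mainMemoryBytes,
                       std::size_t gpuMemoryBytes)
{
    const std::size_t image = (std::size_t)s * t * r * bytesPerTexel;
    return 2 * image <= mainMemoryBytes   // osg::Image + driver copy
        && image <= gpuMemoryBytes;       // texture object on the GPU
}

// For the numbers in this thread, 1024^3 * 16 bytes = 16GB: 32GB out of
// 128GB of main memory is fine, but a Quadro K4200 (4GB of VRAM) cannot
// hold a 16GB texture, so the GPU side fails long before main memory does.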



Robert.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] 3D osg::Image allocation size problem

2016-07-22 Thread Josiah Jideani
Hi,

I am developing a scientific visualization application using Qt and
OpenSceneGraph. I am trying to create a 3D osg::Image to add to an
osgVolume. I am having problems allocating the image data when I call
the allocateImage member function (see the code snippet below).

The allocation works for equal dimensions less than 640.

When I try to allocate anything above 640x640x640 but less than
800x800x800, it seems to allocate successfully because image_s, image_t
and image_r hold the correct sizes; however, when I try to write to the
image data (the nested for loops), a segmentation fault is thrown at
data[0] = 0.0f when s = 0, t = 0, and r = some random but valid number.

I can allocate and write to the image data with sizes between 800x800x800
and 1024x1024x1024, but a segmentation fault is thrown from the object
code after the call to the viewer's frame() method.

And finally, for sizes above 1024 the allocation fails completely, as
image_s, image_t and image_r all hold 0.

Any clue on how to solve this? It was my understanding that the maximum
size of the image is limited by the maximum 3D texture size of the
graphics card, which for the Quadro K4200 that I'm using is
4096x4096x4096. So why am I only able to allocate a 640x640x640 image?

These are the specifications of my system:
Operating system: openSUSE Leap 42.1
RAM: 128GB
Graphics Card: Quadro K4200
Qt: Qt 4.7.1
OSG version: 3.2.3



Thank you!

Cheers,
Josiah


Code:

osg::ref_ptr<osg::Image> image = new osg::Image;
// 1024 x 1024 x 1024 RGBA floats = 16GB of image data
image->allocateImage(1024, 1024, 1024, GL_RGBA, GL_FLOAT);

int image_s = image->s();
int image_t = image->t();
int image_r = image->r();

for(int s = 0; s < image_s; s++)
{
    for(int t = 0; t < image_t; t++)
    {
        for(int r = 0; r < image_r; r++)
        {
            float* data = (float*) image->data(s,t,r);
            data[0] = 0.0f;
            data[1] = 0.0f;
            data[2] = 1.0f;
            data[3] = 0.1f;
        }
    }
}




--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68195#68195





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] floating point pbuffers - not supported by current PixelBufferWin32 implementation

2016-07-22 Thread Robert Osfield
Hi Christian,

I haven't looked into the topic, but my inclination would be to add an
option to osg::GraphicsContext::Traits for requesting the data type
(signed, unsigned, float, double) of the colour and depth buffers as
well as the existing number of bits, and then have the creation of the
graphics context attempt to honour this.
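
For illustration, a minimal sketch of the shape such an option could
take (hypothetical; osg::GraphicsContext::Traits has no such field as of
OSG 3.2):

// Hypothetical extension of osg::GraphicsContext::Traits -- not real OSG API.
struct TraitsSketch
{
    enum ComponentType
    {
        COMPONENT_UNSIGNED_INT,  // today's implicit default
        COMPONENT_SIGNED_INT,
        COMPONENT_FLOAT,
        COMPONENT_DOUBLE
    };

    unsigned int  red, green, blue, alpha;  // existing per-channel bit counts
    ComponentType componentType;            // proposed addition
};

// PixelBufferWin32 would then request WGL_TYPE_RGBA_FLOAT_ARB when
// componentType == COMPONENT_FLOAT, rather than inferring "float" from
// all channels being 32 bits.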

Robert.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] floating point pbuffers - not supported by current PixelBufferWin32 implementation

2016-07-22 Thread Christian Buchner
I am finding that with the following modification to PixelBufferWin32.cpp
I can get my floating-point pbuffer easily (no NVIDIA-specific extensions
required):

#define WGL_TYPE_RGBA_FLOAT_ARB 0x21A0

fAttribList.push_back(WGL_PIXEL_TYPE_ARB);
if (_traits->red == 32 && _traits->green == 32 && _traits->blue == 32)
    fAttribList.push_back(WGL_TYPE_RGBA_FLOAT_ARB);
else
    fAttribList.push_back(WGL_TYPE_RGBA_ARB);

Right now the presence of 32-bit color components in the context traits
triggers the use of the floating-point texture format.

My use case is fast readback of scientific results from a GLSL shader,
performing only off-screen rendering. I am basing this on the
osgscreencapture example.
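
For context, this is roughly the traits setup the patch keys off (a
sketch assuming the patched PixelBufferWin32; untested):

#include <osg/GraphicsContext>

// Request an off-screen pbuffer with 32 bits per channel; with the patch
// above this selects WGL_TYPE_RGBA_FLOAT_ARB. Error handling omitted.
osg::ref_ptr<osg::GraphicsContext> createFloatPBuffer(int w, int h)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->width  = w;
    traits->height = h;
    traits->red = traits->green = traits->blue = traits->alpha = 32;
    traits->pbuffer      = true;   // off-screen surface
    traits->doubleBuffer = false;  // single buffer is enough for readback
    return osg::GraphicsContext::createGraphicsContext(traits.get());
}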

Christian


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] floating point pbuffers - not supported by current PixelBufferWin32 implementation

2016-07-22 Thread Christian Buchner
Hi all,

I spent the last 3 hours trying to coerce OSG into giving me a
floating-point pbuffer. Just setting the required bits for the color
components to 32 bits in the GraphicsContext traits isn't working.

It turns out that on NVIDIA cards you also have to set the
WGL_FLOAT_COMPONENTS_NV attribute to "true" to get a valid pixel format
on Windows. The following code does this:

std::vector<int> fAttribList;

fAttribList.push_back(WGL_SUPPORT_OPENGL_ARB);
fAttribList.push_back(true);
fAttribList.push_back(WGL_PIXEL_TYPE_ARB);
fAttribList.push_back(WGL_TYPE_RGBA_ARB);

fAttribList.push_back(WGL_RED_BITS_ARB);
fAttribList.push_back(32);
fAttribList.push_back(WGL_GREEN_BITS_ARB);
fAttribList.push_back(32);
fAttribList.push_back(WGL_BLUE_BITS_ARB);
fAttribList.push_back(32);
fAttribList.push_back(WGL_ALPHA_BITS_ARB);
fAttribList.push_back(32);
fAttribList.push_back(WGL_STENCIL_BITS_ARB);
fAttribList.push_back(8);
fAttribList.push_back(WGL_DEPTH_BITS_ARB);
fAttribList.push_back(24);
fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);
fAttribList.push_back(true);
fAttribList.push_back(WGL_DRAW_TO_PBUFFER_ARB);
fAttribList.push_back(true);
fAttribList.push_back(WGL_DOUBLE_BUFFER_ARB);
fAttribList.push_back(false);

fAttribList.push_back(0);

unsigned int nformats = 0;
int format;
WGLExtensions* wgle = WGLExtensions::instance();
wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format,
&nformats);
std::cout << "Suitable pixel formats: " << nformats << std::endl;

On my GTX 970 card this returns exactly one suitable pixel format (3 if
you drop the DOUBLE_BUFFER_ARB requirement).

It seems that the implementation of PixelBufferWin32 cannot currently be
given any user-defined attributes for the wglChoosePixelFormatARB
function. Is this a capability we should consider adding? Or should we
automatically sneak in this vendor-specific flag when the color
components specified in the traits have 32 bits and a previous call to
wglChoosePixelFormatARB returned 0 matches?

I am leaving this up for debate.

Is there a vendor-neutral alternative to the WGL_FLOAT_COMPONENTS_NV flag?

For now, I can simply patch my local copy of the OSG libraries to support
floating-point pbuffers on NVIDIA cards.

Christian
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org