I have found that with the following modification to PixelBufferWin32.cpp I
can get my floating-point pbuffer easily (no NVIDIA-specific extensions
required):

    #ifndef WGL_TYPE_RGBA_FLOAT_ARB
    #define WGL_TYPE_RGBA_FLOAT_ARB 0x21A0
    #endif

    fAttribList.push_back(WGL_PIXEL_TYPE_ARB);
    if (_traits->red == 32 && _traits->green == 32 && _traits->blue == 32)
        fAttribList.push_back(WGL_TYPE_RGBA_FLOAT_ARB);
    else
        fAttribList.push_back(WGL_TYPE_RGBA_ARB);

Right now the presence of 32-bit color components in the context traits
triggers the use of the floating-point pixel format.
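
For reference, here is a minimal sketch of how such a context can be
requested once the patch above is applied (the 1024x1024 size and the
variable names are only placeholders):

    #include <osg/GraphicsContext>
    #include <osg/ref_ptr>

    // Request an off-screen pbuffer; with the patch above, 32-bit color
    // components select the floating point pixel format.
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->width  = 1024;                  // placeholder size
    traits->height = 1024;
    traits->red = traits->green = traits->blue = traits->alpha = 32;
    traits->depth = 24;
    traits->doubleBuffer = false;
    traits->pbuffer = true;                 // pbuffer instead of a window

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());
    if (!gc.valid())
    {
        // no matching pixel format was found
    }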

My use case is fast readback of scientific results from a GLSL shader,
with only off-screen rendering involved.  I am basing this on the
osgscreencapture example.
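
The readback itself can follow the osgscreencapture pattern, roughly like
this (again only a sketch; it reuses the gc from above and the same
placeholder size):

    #include <osg/Camera>
    #include <osg/Image>

    // Attach a float image to the camera so the shader results can be
    // read back to the CPU after each frame.
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage(1024, 1024, 1, GL_RGBA, GL_FLOAT);

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext(gc.get());
    camera->setViewport(0, 0, 1024, 1024);
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
    camera->attach(osg::Camera::COLOR_BUFFER, image.get());

    // After each viewer.frame() the floating point results are available
    // in image->data().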

Christian


2016-07-22 14:48 GMT+02:00 Christian Buchner <christian.buch...@gmail.com>:

> Hi all,
>
> I spent the last 3 hours trying to coerce OSG to give me a floating point
> pbuffer. Just setting the required bits for color components to 32 bits in
> the graphicscontext traits isn't working.
>
> Turns out, on nVidia cards you also have to give the
> WGL_FLOAT_COMPONENTS_NV flag as "true" to get a valid pixel format on
> Windows. The following code does this:
>
>     std::vector<int> fAttribList;
>
>     fAttribList.push_back(WGL_SUPPORT_OPENGL_ARB);
>     fAttribList.push_back(true);
>     fAttribList.push_back(WGL_PIXEL_TYPE_ARB);
>     fAttribList.push_back(WGL_TYPE_RGBA_ARB);
>
>     fAttribList.push_back(WGL_RED_BITS_ARB);
>     fAttribList.push_back(32);
>     fAttribList.push_back(WGL_GREEN_BITS_ARB);
>     fAttribList.push_back(32);
>     fAttribList.push_back(WGL_BLUE_BITS_ARB);
>     fAttribList.push_back(32);
>     fAttribList.push_back(WGL_ALPHA_BITS_ARB);
>     fAttribList.push_back(32);
>     fAttribList.push_back(WGL_STENCIL_BITS_ARB);
>     fAttribList.push_back(8);
>     fAttribList.push_back(WGL_DEPTH_BITS_ARB);
>     fAttribList.push_back(24);
>     fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);
>     fAttribList.push_back(true);
>     fAttribList.push_back(WGL_DRAW_TO_PBUFFER_ARB);
>     fAttribList.push_back(true);
>     fAttribList.push_back(WGL_DOUBLE_BUFFER_ARB);
>     fAttribList.push_back(false);
>
>     fAttribList.push_back(0);
>
>     unsigned int nformats = 0;
>     int format;
>     WGLExtensions* wgle = WGLExtensions::instance();
>     wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format, &nformats);
>     std::cout << "Suitable pixel formats: " << nformats << std::endl;
>
> On my GTX 970 card this returns exactly one suitable pixel format (three
> if you drop the DOUBLE_BUFFER_ARB requirement).
>
> It seems that the PixelBufferWin32 implementation currently offers no way
> to pass user-defined attributes to the wglChoosePixelFormatARB function.
> Is this a capability we should consider adding? Or should we automatically
> add this vendor-specific flag when the traits specify 32-bit color
> components and a previous call to wglChoosePixelFormatARB returned zero
> matches?
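>
> Something along these lines is what I have in mind (only a rough sketch,
> not tested against the actual PixelBufferWin32 code):
>
>     // First try the vendor-neutral request; fall back to the NVIDIA
>     // specific flag only if no pixel format matches 32-bit components.
>     unsigned int nformats = 0;
>     int format = 0;
>     wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format, &nformats);
>     if (nformats == 0 && _traits->red == 32 && _traits->green == 32 && _traits->blue == 32)
>     {
>         fAttribList.pop_back();                          // drop the terminating 0
>         fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);
>         fAttribList.push_back(true);
>         fAttribList.push_back(0);
>         wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format, &nformats);
>     }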
>
> I am leaving this up for debate.
>
> Is there a vendor-neutral alternative to the WGL_FLOAT_COMPONENTS_NV flag?
>
> For now, I can simply patch my local copy of the OSG libraries to support
> floating-point pbuffers on NVIDIA cards.
>
> Christian
>
>