Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread Ryan Pavlik
That reminds me - in my lab we've run into some cases where having
hyperthreading enabled has a negative performance impact on
high-throughput operations like heavy GPU work. I'm not sure of the
specifics, and I don't see why it would cause your delayed-onset
slowdown, but perhaps something scheduling-related interacts with
hyperthreading.

Ryan

On Thu, Jun 16, 2011 at 2:25 PM, Dardo D Kleiner - CONTRACTOR <
dklei...@cmf.nrl.navy.mil> wrote:

> Try pinning the process to particular core(s) (e.g. via
> pthread_setaffinity_np(3) or cpuset(7)).  We have a particular piece of
> hardware on which GL performance (in general, not necessarily glReadPixels)
> tanks when the Linux scheduler decides to place the process on core #7 (i.e.
> the last core of a dual-socket, quad-core, non-HT machine).  This behavior
> was observed on at least 7 identical machines.
>
> What fun that was to diagnose...
>
> - Dardo
>
>
> On 6/16/11 1:57 PM, John Vidar Larring wrote:
>
>> Hi Robert,
>>
>> Thanks for your help. See comments inline below:
>>
>> Robert Osfield wrote:
>>
>>>
>>> The OSG svn/trunk and recent 2.9.x dev series have support for setting
>>> a texture pool size that limits the amount of texture memory used; you
>>> can set this via the env var:
>>>
>>> set OSG_TEXTURE_POOL_SIZE=200000
>>>
>>> This sets the size to 200 thousand bytes. Set it to different values
>>> and study the reported available memory on the GPU.
>>>
>>> Whether reducing the amount of memory used on the GPU will have an
>>> effect I can't say - it all depends upon what is actually the
>>> bottleneck.
>>>
>>
>> Unfortunately, our application is still not fully converted to OSG, so
>> we're not able to take advantage of OSG_TEXTURE_POOL_SIZE straight off
>> the bat. But hopefully by this time next year we will. For the time
>> being we will have to live with a mix of old-school GL and OSG.
>>
>>  In terms of possible causes of a stall you could check to see if there
>>> are any processes on the client machine that are running periodically
>>> - virus checkers etc.
>>>
>>
>> Thanks, but already checked. We're running on Linux and we've monitored
>> all processes without finding any culprits here.
>>
>>  Also check with NVidia. They might have an idea.
>>>
>>
>> So far, no response from Nvidia. The osg-users list has by far
>> given us the best clues/help/support, so kudos to the community!!!
>>
>> Best regards,
>> John
>> ___
>> osg-users mailing list
>> osg-users@lists.openscenegraph.org
>> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>>
>>
>>  ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>



-- 
Ryan Pavlik
HCI Graduate Student
Virtual Reality Applications Center
Iowa State University

rpav...@iastate.edu
http://academic.cleardefinition.com
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread Dardo D Kleiner - CONTRACTOR
Try pinning the process to particular core(s) (e.g. via pthread_setaffinity_np(3)
or cpuset(7)). We have a particular piece of hardware on which GL performance
(in general, not necessarily glReadPixels) tanks when the Linux scheduler decides
to place the process on core #7 (i.e. the last core of a dual-socket, quad-core,
non-HT machine). This behavior was observed on at least 7 identical machines.
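
If anyone wants to try the same thing, below is a minimal sketch of pinning the
calling (render/GL) thread with pthread_setaffinity_np(3). The core index is only
an example - steer away from whatever core misbehaves on your hardware. From
outside the process, "taskset -c <core> ./app" achieves much the same.

// pin_core.cpp - minimal sketch of pinning the calling thread to one core on Linux.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static int pinToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    // Restrict the calling thread to 'core'; returns 0 on success,
    // otherwise an error number.
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main()
{
    if (pinToCore(0) != 0)   // e.g. steer away from the problematic core #7
        std::fprintf(stderr, "pthread_setaffinity_np failed\n");

    // ... create the GL context and run the render loop on this thread ...
    return 0;
}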


What fun that was to diagnose...

- Dardo

On 6/16/11 1:57 PM, John Vidar Larring wrote:

Hi Robert,

Thanks for your help. See comments inline below:

Robert Osfield wrote:


The OSG svn/trunk and recent 2.9.x dev series have support for setting
a texture pool size that limits the amount of texture memory used; you
can set this via the env var:

set OSG_TEXTURE_POOL_SIZE=200000

This sets the size to 200 thousand bytes. Set it to different values
and study the reported available memory on the GPU.

Whether reducing the amount of memory used on the GPU will have an
effect I can't say - it all depends upon what is actually the
bottleneck.


Unfortunately, our application is still not fully converted to OSG, so we're
not able to take advantage of OSG_TEXTURE_POOL_SIZE straight off the bat.
But hopefully by this time next year we will. For the time being we will
have to live with a mix of old-school GL and OSG.


In terms of possible causes of a stall you could check to see if there
are any processes on the client machine that are running periodically
- virus checkers etc.


Thanks, but already checked. We're running on Linux and we've monitored all
processes without finding any culprits here.


Also check with NVidia. They might have an idea.


So far, no response from Nvidia. The osg-users list has by far given us
the best clues/help/support, so kudos to the community!!!

Best regards,
John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread John Vidar Larring

Hi Robert,

Thanks for your help. See comments inline below:

Robert Osfield wrote:


The OSG svn/trunk and recent 2.9.x dev series have support for setting
a texture pool size that limits the amount of texture memory used; you
can set this via the env var:

  set OSG_TEXTURE_POOL_SIZE=200000

This sets the size to 200 thousand bytes. Set it to different values
and study the reported available memory on the GPU.

Whether reducing the amount of memory used on the GPU will have an
effect I can't say - it all depends upon what is actually the
bottleneck.


Unfortunately, our application is still not fully converted to OSG, so
we're not able to take advantage of OSG_TEXTURE_POOL_SIZE straight off
the bat. But hopefully by this time next year we will. For the time
being we will have to live with a mix of old-school GL and OSG.



In terms of possible causes of a stall you could check to see if there
are any processes on the client machine that are running periodically
- virus checkers etc.


Thanks, but already checked. We're running on Linux and we've monitored
all processes without finding any culprits here.



Also check with NVidia.  They might have an idea.


So far, no response from Nvidia. The osg-users list has by far
given us the best clues/help/support, so kudos to the community!!!


Best regards,
John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread Robert Osfield
Hi John,

On Thu, Jun 16, 2011 at 2:59 PM, John Vidar Larring wrote:
> Yepp, the system has 24GB main memory. With the current config the system is
> stable at about 6.8GB memory consumption, leaving swap untouched. Good
> question though. Just did a double check to be sure:-)
>
> Gfx RAM, on the other hand, is almost constantly exhausted. According to
> "watch nvidia-smi -q" there are only a few MB of gfx mem available throughout
> execution (it varies between 5-100MB free). However, this is also true for our
> own systems, which are not able to reproduce the issue.

The OSG svn/trunk and recent 2.9.x dev series have support for setting
a texture pool size that limits the amount of texture memory used; you
can set this via the env var:

  set OSG_TEXTURE_POOL_SIZE=200000

This sets the size to 200 thousand bytes. Set it to different values
and study the reported available memory on the GPU.

Whether reducing the amount of memory used on the GPU will have an
effect I can't say - it all depends upon what is actually the
bottleneck.
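
If you'd rather set it from code than via the environment, a minimal sketch of
the programmatic route is below (assuming the setter is
osg::DisplaySettings::setMaxTexturePoolSize(); the 200000 value simply mirrors
the env var example above):

// Sketch: programmatic equivalent of OSG_TEXTURE_POOL_SIZE.
#include <osg/DisplaySettings>
#include <osgViewer/Viewer>

int main(int, char**)
{
    // Limit the texture pool before any contexts are realized;
    // 200000 bytes mirrors the env var example above.
    osg::DisplaySettings::instance()->setMaxTexturePoolSize(200000);

    osgViewer::Viewer viewer;
    // ... viewer.setSceneData(...) etc. ...
    return viewer.run();
}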

In terms of possible causes of a stall you could check to see if there
are any processes on the client machine that are running periodically
- virus checkers etc.

Also check with NVidia.  They might have an idea.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread John Vidar Larring

Hi Ralf,

Thanks for your quick reply!

> Have you investigated the memory consumption when this happens,
> thinking you could be starting to hit page file?

Yepp, the system has 24GB main memory. With the current config the 
system is stable at about 6.8GB memory consumption, leaving swap 
untouched. Good question though. Just did a double check to be sure:-)


Gfx RAM, on the other hand, is almost constantly exhausted. According to
"watch nvidia-smi -q" there are only a few MB of gfx mem available
throughout execution (it varies between 5-100MB free). However, this is
also true for our own systems, which are not able to reproduce the issue.


Best regards,
John

Ralf Stokholm wrote:

Hi John

Have you investigated the memory consumption when this happens, thinking 
you could be starting to hit page file?


Brgs

Ralf

On 16 June 2011 15:06, John Vidar Larring wrote:


Hi Sergey,

Thanks for the async reading tip! It gives us a general speed-up,
but it does not fix the issue at hand. But thanks anyway for great
community support!

We used osg::Timer to time the execution of the following calls in
the code you suggested below: glReadPixels, glMapBuffer, and memcpy,
plus the total time of the whole code segment per frame. We got the
following results (all numbers are in milliseconds):

*** PXREAD  MAPBUF  MEMCPY  TOTAL ***
    0.038   0.004   7.274   7.316
    0.038   0.004   7.262   7.304
    0.039   0.004   7.275   7.318
    0.037   0.004   7.276   7.317
[...snip... about 20-30 minutes later ...]
    0.04    0.004   7.277   7.321
    0.037   0.005   7.293   7.335
    0.039   0.004   7.279   7.322
    0.038   0.004   7.274   7.316
    0.037   81.524  7.285   88.846  <--- ???
    0.039   802.79  7.253   810.082 <--- ???
    0.039   800.092 7.247   807.378
    0.037   802.858 7.269   810.164
    0.036   800.102 7.279   807.417
    0.036   803.398 7.25    810.684
    0.038   799.461 7.255   806.754
    0.037   802.716 7.247   810
    0.038   799.323 7.25    806.611
    0.037   799.439 7.265   806.741
    0.039   803.154 7.248   810.441
[...snip... stays like this until program is terminated]

No errors; performance just suddenly drops from excellent to a
virtual halt. This is a total mystery to me, and I haven't seen
anyone else on the web posting similar problems... sigh!

So we are trying to come up with theories as to what can potentially
hurt glReadPixels performance (either sync or async) so badly.

Drivers and HW are among the usual suspects, but the client has the
same issue on multiple machines, while we are having trouble
reproducing it on a range of similar HW with the same SW, drivers and
config.

Is there any odd combination of GL calls that can
interrupt/interfere with glReadPixels?

Again, any suggestions or ideas are greatly appreciated.

Best regards,
John


Sergey Kurdakov wrote:

Hi John,
I cannot say what the problem is,

but a possible solution is to use a PBO with glReadPixels so
that you read the previous frame - then all the reading is async
and does not slow the app at all

here is some code for you:

//somewhere after rendering
glReadBuffer(GL_FRONT);

// get the next buffer's index
unsigned int iNextBuffer = (_iCurrentBuffer + 1) % N_MAX_BUFFERS;

// kick off readback of the current front buffer into the next buffer
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iNextBuffer]);   // bind pbo1
glReadPixels(0, 0, m_width, m_height, GL_BGRA,
             GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));                 // readback

// map the current buffer containing the image read back the previous time around
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[_iCurrentBuffer]); // bind pbo2
_pPixels = static_cast<unsigned char*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));

if (_pPixels)
{
    // something like:
    memcpy(image, _pPixels, m_width*m_height*4);

    // unmap the buffer
    if (!glUnmapBuffer(GL_PIXEL_PACK_BUFFER))
    {
        std::cerr << "Couldn't unmap pixel buffer. Exiting\n";
        assert(false);
    }
}

// unbind the readback buffer so it does not interfere with
// any other (traditional) readbacks
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// make the next buffer the current buffer
_iCurrentBuffer = iNextBuffer;
glReadBuffer(GL_BACK);


make sure you have inited buffers

glGenBuffers(N_MAX_BUFFERS, _aPixelBuffer);
for (unsigned int iBuffer = 0; iBuffer < N_MAX_BUFFERS; ++iBuffer)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iBuffer]);
    glBufferData(GL_PIXEL_PACK_BUFFER, m_width*m_height*4, NULL, GL_STATIC_READ);
    errorcode = glGetError();
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread Ralf Stokholm
Hi John

Have you investigated the memory consumption when this happens, thinking you
could be starting to hit page file?

Brgs

Ralf

On 16 June 2011 15:06, John Vidar Larring  wrote:

> Hi Sergey,
>
> Thanks for the async reading tip! It gives us a general speed-up, but it
> does not fix the issue at hand. But thanks anyway for great community
> support!
>
> We used osg::Timer to time the execution of the following calls in the code
> you suggested below: glReadPixels, glMapBuffer, and memcpy, plus the total
> time of the whole code segment per frame. We got the following results (all
> numbers are in milliseconds):
>
> *** PXREAD  MAPBUF  MEMCPY  TOTAL ***
>    0.038   0.004   7.274   7.316
>    0.038   0.004   7.262   7.304
>    0.039   0.004   7.275   7.318
>    0.037   0.004   7.276   7.317
> [...snip... about 20-30 minutes later ...]
>    0.04    0.004   7.277   7.321
>    0.037   0.005   7.293   7.335
>    0.039   0.004   7.279   7.322
>    0.038   0.004   7.274   7.316
>    0.037   81.524  7.285   88.846  <--- ???
>    0.039   802.79  7.253   810.082 <--- ???
>    0.039   800.092 7.247   807.378
>    0.037   802.858 7.269   810.164
>    0.036   800.102 7.279   807.417
>    0.036   803.398 7.25    810.684
>    0.038   799.461 7.255   806.754
>    0.037   802.716 7.247   810
>    0.038   799.323 7.25    806.611
>    0.037   799.439 7.265   806.741
>    0.039   803.154 7.248   810.441
> [...snip... stays like this until program is terminated]
>
> No errors; performance just suddenly drops from excellent to a virtual
> halt. This is a total mystery to me, and I haven't seen anyone else on the
> web posting similar problems... sigh!
>
> So we are trying to come up with theories as to what can potentially hurt
> glReadPixels performance (either sync or async) so badly.
>
> Drivers and HW are among the usual suspects, but the client has the same
> issue on multiple machines, while we are having trouble reproducing it on a
> range of similar HW with the same SW, drivers and config.
>
> Is there any odd combination of GL calls that can interrupt/interfere with
> glReadPixels?
>
> Again, any suggestions or ideas are greatly appreciated.
>
> Best regards,
> John
>
>
> Sergey Kurdakov wrote:
>
>> Hi John,
>> I cannot say what the problem is,
>>
>> but a possible solution is to use a PBO with glReadPixels so that you read
>> the previous frame - then all the reading is async and does not slow the app at all
>>
>> here is some code for you:
>>
>> //somewhere after rendering
>> glReadBuffer(GL_FRONT);
>>
>> // get the next buffer's index
>> unsigned int iNextBuffer = (_iCurrentBuffer + 1) % N_MAX_BUFFERS;
>>
>> // kick off readback of the current front buffer into the next buffer
>> glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iNextBuffer]);   // bind pbo1
>> glReadPixels(0, 0, m_width, m_height, GL_BGRA,
>>              GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));                 // readback
>>
>> // map the current buffer containing the image read back the previous time around
>> glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[_iCurrentBuffer]); // bind pbo2
>> _pPixels = static_cast<unsigned char*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));
>>
>> if (_pPixels)
>> {
>>     // something like:
>>     memcpy(image, _pPixels, m_width*m_height*4);
>>
>>     // unmap the buffer
>>     if (!glUnmapBuffer(GL_PIXEL_PACK_BUFFER))
>>     {
>>         std::cerr << "Couldn't unmap pixel buffer. Exiting\n";
>>         assert(false);
>>     }
>> }
>>
>> // unbind the readback buffer so it does not interfere with
>> // any other (traditional) readbacks
>> glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
>>
>> // make the next buffer the current buffer
>> _iCurrentBuffer = iNextBuffer;
>> glReadBuffer(GL_BACK);
>>
>>
>> make sure you have inited buffers
>>
>> glGenBuffers(N_MAX_BUFFERS, _aPixelBuffer);
>> for (unsigned int iBuffer = 0; iBuffer < N_MAX_BUFFERS; ++iBuffer)
>> {
>>     glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iBuffer]);
>>     glBufferData(GL_PIXEL_PACK_BUFFER, m_width*m_height*4, NULL, GL_STATIC_READ);
>>     errorcode = glGetError();
>> }
>> glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
>>
>> and that you delete them after use
>>
>> glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);
>> glDeleteBuffersARB(N_MAX_BUFFERS, _aPixelBuffer);
>>
>>
>> Regards
>> Sergey
>>
>>  ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>



-- 
Ralf Stokholm
Director R&D
Email: a...@arenalogic.com
Phone: +45 28 30 83 52
Web: www.arenalogic.com

This transmission and any accompanying documents are intended only for
confidential use of the designated recipient(s) identified above.  This
message contains proprietary information belonging to Arenalogic Aps.  If
you have received this message by error, please notify the sender.

Re: [osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-16 Thread John Vidar Larring

Hi Sergey,

Thanks for the async reading tip! It gives us a general speed-up, but it 
does not fix the issue at hand. But thanks anyway for great community 
support!


We used osg::Timer to time the execution of the following calls in the
code you suggested below: glReadPixels, glMapBuffer, and memcpy, plus
the total time of the whole code segment per frame. We got the following
results (all numbers are in milliseconds):


*** PXREAD  MAPBUF  MEMCPY  TOTAL ***
    0.038   0.004   7.274   7.316
    0.038   0.004   7.262   7.304
    0.039   0.004   7.275   7.318
    0.037   0.004   7.276   7.317
[...snip... about 20-30 minutes later ...]
    0.04    0.004   7.277   7.321
    0.037   0.005   7.293   7.335
    0.039   0.004   7.279   7.322
    0.038   0.004   7.274   7.316
    0.037   81.524  7.285   88.846  <--- ???
    0.039   802.79  7.253   810.082 <--- ???
    0.039   800.092 7.247   807.378
    0.037   802.858 7.269   810.164
    0.036   800.102 7.279   807.417
    0.036   803.398 7.25    810.684
    0.038   799.461 7.255   806.754
    0.037   802.716 7.247   810
    0.038   799.323 7.25    806.611
    0.037   799.439 7.265   806.741
    0.039   803.154 7.248   810.441
[...snip... stays like this until program is terminated]
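
For reference, the numbers above were gathered with osg::Timer roughly along
the lines of the sketch below (the actual readback calls are only indicated as
comments, and the wrapping in our real code differs):

#include <osg/Timer>
#include <iostream>

void timeReadbackOneFrame()
{
    const osg::Timer* timer = osg::Timer::instance();

    osg::Timer_t t0 = timer->tick();
    // glReadPixels(...) into the currently bound PBO
    osg::Timer_t t1 = timer->tick();
    // _pPixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)
    osg::Timer_t t2 = timer->tick();
    // memcpy(image, _pPixels, m_width*m_height*4) and glUnmapBuffer(...)
    osg::Timer_t t3 = timer->tick();

    // delta_m() converts a pair of ticks into elapsed milliseconds.
    std::cout << timer->delta_m(t0, t1) << "\t"       // PXREAD
              << timer->delta_m(t1, t2) << "\t"       // MAPBUF
              << timer->delta_m(t2, t3) << "\t"       // MEMCPY
              << timer->delta_m(t0, t3) << std::endl; // TOTAL
}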

No errors; performance just suddenly drops from excellent to a virtual
halt. This is a total mystery to me, and I haven't seen anyone else on
the web posting similar problems... sigh!


So we are trying to come up with theories as to what can potentially
hurt glReadPixels performance (either sync or async) so badly.


Drivers and HW are among the usual suspects, but the client has the
same issue on multiple machines, while we are having trouble reproducing
it on a range of similar HW with the same SW, drivers and config.


Is there any odd combination of GL calls that can interrupt/interfere
with glReadPixels?


Again, any suggestions or ideas are greatly appreciated.

Best regards,
John

Sergey Kurdakov wrote:
Hi John, 


I cannot say what the problem is,

but a possible solution is to use a PBO with glReadPixels so that you
read the previous frame - then all the reading is async and does not slow the app at all


here is some code for you:

//somewhere after rendering
glReadBuffer(GL_FRONT);

// get the next buffer's index
unsigned int iNextBuffer = (_iCurrentBuffer + 1) % N_MAX_BUFFERS;

// kick off readback of the current front buffer into the next buffer
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iNextBuffer]);   // bind pbo1
glReadPixels(0, 0, m_width, m_height, GL_BGRA,
             GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));                 // readback

// map the current buffer containing the image read back the previous time around
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[_iCurrentBuffer]); // bind pbo2
_pPixels = static_cast<unsigned char*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));

if (_pPixels)
{
    // something like:
    memcpy(image, _pPixels, m_width*m_height*4);

    // unmap the buffer
    if (!glUnmapBuffer(GL_PIXEL_PACK_BUFFER))
    {
        std::cerr << "Couldn't unmap pixel buffer. Exiting\n";
        assert(false);
    }
}

// unbind the readback buffer so it does not interfere with
// any other (traditional) readbacks
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// make the next buffer the current buffer
_iCurrentBuffer = iNextBuffer;
glReadBuffer(GL_BACK);


make sure you have inited buffers

glGenBuffers(N_MAX_BUFFERS, _aPixelBuffer);
for (unsigned int iBuffer = 0; iBuffer < N_MAX_BUFFERS; ++iBuffer)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iBuffer]);
    glBufferData(GL_PIXEL_PACK_BUFFER, m_width*m_height*4, NULL, GL_STATIC_READ);
    errorcode = glGetError();
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);


and that you delete them after use

glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);
glDeleteBuffersARB(N_MAX_BUFFERS, _aPixelBuffer);


Regards
Sergey


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-15 Thread Sergey Kurdakov
Hi John,

I cannot say what the problem is,

but a possible solution is to use a PBO with glReadPixels so that you read
the previous frame - then all the reading is async and does not slow the app at all

here is some code for you:

//somewhere after rendering
glReadBuffer(GL_FRONT);

// get the next buffer's index
unsigned int iNextBuffer = (_iCurrentBuffer + 1) % N_MAX_BUFFERS;

// kick off readback of the current front buffer into the next buffer
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iNextBuffer]);   // bind pbo1
glReadPixels(0, 0, m_width, m_height, GL_BGRA,
             GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));                 // readback

// map the current buffer containing the image read back the previous time around
glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[_iCurrentBuffer]); // bind pbo2
_pPixels = static_cast<unsigned char*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));

if (_pPixels)
{
    // something like:
    memcpy(image, _pPixels, m_width*m_height*4);

    // unmap the buffer
    if (!glUnmapBuffer(GL_PIXEL_PACK_BUFFER))
    {
        std::cerr << "Couldn't unmap pixel buffer. Exiting\n";
        assert(false);
    }
}

// unbind the readback buffer so it does not interfere with
// any other (traditional) readbacks
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// make the next buffer the current buffer
_iCurrentBuffer = iNextBuffer;
glReadBuffer(GL_BACK);


make sure you have inited buffers

glGenBuffers(N_MAX_BUFFERS, _aPixelBuffer);
for (unsigned int iBuffer = 0; iBuffer < N_MAX_BUFFERS; ++iBuffer)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, _aPixelBuffer[iBuffer]);
    glBufferData(GL_PIXEL_PACK_BUFFER, m_width*m_height*4, NULL, GL_STATIC_READ);
    errorcode = glGetError();
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

and that you delete them after use

glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);
glDeleteBuffersARB(N_MAX_BUFFERS, _aPixelBuffer);


Regards
Sergey
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] OT: Sudden 100x performance drop in glReadPixels

2011-06-15 Thread John Vidar Larring

Hi all,

Warning, this is off-topic for the osg-users list, but in desperation I just
wanted to check whether any of the OpenGL gurus on this list have ever
experienced something similar:


Our application uses glReadPixels to read a 1920x1080 RGB frame from
the NVIDIA graphics card so that we can send it out over SDI on a
different video board. This has been working for us for years.
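
For context, the readback itself is essentially the plain synchronous form
sketched below (illustrative only - the buffer handling in our real code is
different and the exact pixel format may not match):

#include <GL/gl.h>
#include <vector>

// Read one 1920x1080 RGB frame from the current read buffer into client memory.
void readFrame(std::vector<unsigned char>& frame)
{
    const int width = 1920, height = 1080;
    frame.resize(width * height * 3);        // tightly packed RGB
    glReadBuffer(GL_BACK);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);     // rows are not padded to 4 bytes
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &frame[0]);
    // frame[] is then handed off to the SDI board.
}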


However, a client of ours is currently experiencing a significant
rendering slowdown after using the application for some time
(20-30 minutes). After lots of debugging we have found that on the client
machine it is the glReadPixels call that suddenly starts taking 800+
milliseconds to read a frame, while it normally takes only about 8 ms.


Restarting the application temporarily resolves the issue: glReadPixels
is back to an 8 ms average for about 20-30 minutes, then suddenly jumps to
800 ms and stays there until the application is restarted again. No GL
errors are reported in conjunction with the performance drop.


The catch is that we have not been able to reproduce this on our own
developer machines (same OS version, software, drivers, same application
setup and data). The one known difference between the client and dev
machines is the graphics card (they use a GT9800 and a GTX285, while we
have tested an 8800 and a GT460). Not being able to reproduce the issue
makes debugging very cumbersome :(


Has anyone seen a similar sudden performance drop with
glReadPixels? Any theories on what the cause could be and where to look?


Any help, hints or ideas are all more than welcome. You could answer me 
directly instead of bothering the osg-users list if you prefer. I'll 
post the solution if/when we find it.


Best regards,
John


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org