Yep, OK, that's nice!

Follow-up question(s):

I'm currently filling in my buffers by calling vaCreateBuffer with NULL as the 
data pointer argument so the server allocates the memory for me, and then I use 
vaMapBuffer to fill them in. This works very nicely and performance is good.
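
Roughly what I do now, as a sketch (dpy, ctx, slice_bits and slice_size are just 
placeholders for my actual variables):

    /* assuming <va/va.h> and <string.h> are included and dpy/ctx are set up */
    VABufferID slice_buf;
    void *ptr = NULL;

    /* NULL data pointer => the driver/server allocates the storage */
    vaCreateBuffer(dpy, ctx, VASliceDataBufferType,
                   slice_size, 1, NULL, &slice_buf);
    vaMapBuffer(dpy, slice_buf, &ptr);
    memcpy(ptr, slice_bits, slice_size);   /* fill in the bitstream */
    vaUnmapBuffer(dpy, slice_buf);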

However, I see that the performance drops a bit when I over-allocate my slice 
buffers. Currently I allocate 8 kB for each slice, fill it in, and then specify 
in the slice parameters exactly how much of the buffer I actually use. As I 
understand it, the over-allocation shouldn't affect performance, but it does. 
Are the buffers copied even though I allocate them in the server?
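
To be concrete, the pattern looks something like this (used_size is a 
placeholder for the number of bytes I actually wrote into the 8 kB buffer):

    #define SLICE_BUF_SIZE (8 * 1024)

    VABufferID slice_data_buf, slice_param_buf;
    VASliceParameterBufferMPEG2 sp = {0};
    void *ptr = NULL;

    /* fixed 8 kB allocation, usually only partly used */
    vaCreateBuffer(dpy, ctx, VASliceDataBufferType,
                   SLICE_BUF_SIZE, 1, NULL, &slice_data_buf);
    vaMapBuffer(dpy, slice_data_buf, &ptr);
    memcpy(ptr, slice_bits, used_size);
    vaUnmapBuffer(dpy, slice_data_buf);

    /* the slice parameters declare only the bytes actually used */
    sp.slice_data_size   = used_size;
    sp.slice_data_offset = 0;
    sp.slice_data_flag   = VA_SLICE_DATA_FLAG_ALL;
    vaCreateBuffer(dpy, ctx, VASliceParameterBufferType,
                   sizeof(sp), 1, &sp, &slice_param_buf);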

Moreover, the documentation says the buffers are automatically de-allocated 
once they've been sent to vaRenderPicture. Is there any way to re-use the 
buffers and avoid the automatic de-allocation? I really don't see why I should 
create/destroy all the buffers every frame; it seems like a complete waste of 
resources.
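
In other words, as I understand the current model, every frame ends up looking 
roughly like this (fill_slice_param_buffer/fill_slice_data_buffer are just 
stand-ins for my own code that calls vaCreateBuffer):

    vaBeginPicture(dpy, ctx, surface);
    for (int i = 0; i < num_slices; i++) {
        VABufferID bufs[2];
        /* fresh vaCreateBuffer calls for every single slice... */
        fill_slice_param_buffer(dpy, ctx, i, &bufs[0]);
        fill_slice_data_buffer(dpy, ctx, i, &bufs[1]);
        /* ...because the buffers are consumed (auto-destroyed) here */
        vaRenderPicture(dpy, ctx, bufs, 2);
    }
    vaEndPicture(dpy, ctx);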

Kind regards, Andreas Larsson


On 20 Mar 2013, at 01:50, "Xiang, Haihao" <haihao.xi...@intel.com> wrote:

> On Tue, 2013-03-19 at 08:02 +0000, Andreas Larsson wrote: 
>> Hi!
>> 
>> Do I have to perform bitstream parsing and vaRenderPicture in separate
>> threads to maintain the best performance? I.e. does vaRenderPicture block,
>> or are those calls buffered and handled asynchronously by the driver/chip,
>> like OpenGL?
> 
> VA runs in asynchronous mode.
> 
>> 
>> As it is, I generate MPEG2 data and call vaRenderPicture for each slice
>> before I generate the next one, so if vaRenderPicture blocks, it would
>> drain my performance completely.
>> 
>> Kind regards, Andreas Larsson
>> 
