On 02.09.2016 at 16:10, Leo Liu wrote:
On 09/02/2016 09:50 AM, Christian König wrote:
On 02.09.2016 at 15:27, Leo Liu wrote:
On 09/02/2016 02:11 AM, Christian König wrote:
On 02.09.2016 at 04:03, Michel Dänzer wrote:
On 02/09/16 10:17 AM, Michel Dänzer wrote:
On 02/09/16 12:58 AM, Leo Liu wrote:
On 09/01/2016 11:54 AM, Nayan Deshmukh wrote:
I saw the code in dri3_glx.c and I could somewhat relate some of the
basic code structure to vl_winsys_dri3.c. But I am new to this and
not familiar with the terminology you used about the buffers. Could
you please explain what needs to be done in more detail, or point me
to where I can read about it?
I believe it's in loader_dri3_helper.c with the "is_different_gpu"
condition true; that covers both the back buffer and front buffer
cases. You could try only the back buffer case for now.
From a high level, PRIME mainly affects presentation, not so
much the
video decoding / rendering. The important thing is that the
buffer used
for presentation via the Present extension is linear, not tiled.
I'm not
sure whether it makes more sense to allocate a separate linear
buffer
for this purpose, as is done for GLX, or for the vl code to make the
corresponding back (or front?) buffer linear in the first place.
A separate linear buffer is probably better, actually, since it will
also be pinned to system memory while it's being shared with
another GPU.
Yes, I agree. Nayan should also work on avoiding the extra copy
which currently occurs because we can't allocate output buffers
directly in the format needed for presentation.
The general idea should be to check during presentation whether the
format of the output surface is displayable directly.
Also, we have to consider the drawable resize case.
Actually, we don't. Take a look at the VDPAU spec: the output surface
should be sent for display without considering its size.
E.g. when the window is 256x256 pixels but the application allocated
an output surface of 1024x768, we should still send the whole surface
to the X server.
It's the job of the application to resize the output surfaces, not
that of the VDPAU state tracker.
I thought this gets done by the vl compositor during presentation,
scaling the output surface up or down to the back buffer based on the
resize.
No, that is incorrect. Take a look at the VDPAU spec:
Applications may choose to allow resizing of the presentation queue
target (which may be e.g. a regular Window when using an X11-based
implementation).
*clip_width* and *clip_height* may be used to limit the size of the
displayed region of a surface, in order to match the specific region
that was rendered to.
In turn, this allows the application to allocate over-sized (e.g.
screen-sized) surfaces, but render to a region that matches the
current size of the video window.
Using this technique, an application's response to window resizing may
simply be to render to, and display, a different region of the
surface, rather than de-/re-allocation of surfaces to match the
updated window size.
This means that we should send the original output surface size to X, no
matter what size it has or what size the window it is displayed in has.
That wasn't possible with DRI2; that's why we have the workaround with
the delayed rendering in the mixer.
But no worries, it's only a minor issue and a good task for Nayan to
get deeper into the graphics stack.
Regards,
Christian.
Regards,
Leo
Regards,
Christian.
Regards,
Leo
If that is the case, then the handle of that surface should be sent
directly to X.
If that isn't the case, we reallocate the backing buffer, copy the
content of the output surface into it, and then send the new handle
to X.
Regards,
Christian.
_______________________________________________
mesa-dev mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/mesa-dev