On 11/30/2017 12:06 PM, Lyude Paul wrote:
On Thu, 2017-11-30 at 13:20 -0500, Rob Clark wrote:
On Thu, Nov 30, 2017 at 12:59 AM, James Jones <[email protected]> wrote:
On 11/29/2017 04:09 PM, Miguel Angel Vico wrote:

On Wed, 29 Nov 2017 16:28:15 -0500
Rob Clark <[email protected]> wrote:

Do we need to define both in-place and copy transitions?  Ie. what if
the GPU is still reading a tiled or compressed texture (ie. sampling
from the previous frame for some reason), but we need to
untile/uncompress it for display.. or maybe there are some other cases
like that we should think about..

Maybe you already have some thoughts about that?


This is the next thing I'll be working on. I haven't given it much
thought myself so far, but I think James might have had some insights.
I'll read through some of his notes to double-check.


A couple of notes on usage transitions:

While chatting about transitions, a few assertions were made by others
that I've come to accept, despite the fact that they reduce the
generality of the allocator mechanisms:

-GPUs are the only things that actually need usage transitions as far
as I know thus far.  Other engines either share the GPU representations
of data, or use more limited representations; the latter being the
reason non-GPU usage transitions are a useful thing.

-It's reasonable to assume that a GPU is required to perform a usage
transition.  This follows from the above postulate.  If only GPUs are
using more advanced representations, you don't need any transitions
unless you have a GPU available.

This seems reasonable.  I can't think of any non-GPU related case
where you would need a transition, other than perhaps cache
flush/invalidate.

From that, I derived the rough API proposal for transitions presented
on my XDC 2017 slides.  Transition "metadata" is queried from the
allocator given a pair of usages (which may refer to more than one
device), but the realization of the transition is left to existing GPU
APIs.  I think I put Vulkan-like pseudo-code in the slides, but the GL
external objects extensions (GL_EXT_memory_object and
GL_EXT_semaphore) would work as well.
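
Roughly, the split looks like this. The allocator query below is a
hypothetical stand-in for whatever the final API exposes; the
realization part is plain Vulkan:

#include <vulkan/vulkan.h>

/* Hypothetical allocator types and entry point, illustrative only. */
typedef struct transition_metadata transition_metadata;
transition_metadata *allocator_get_transition(void *alloc_dev,
                                              const void *old_usage,
                                              const void *new_usage);

void record_transition(VkCommandBuffer cmd, VkImage image,
                       uint32_t gfx_queue_family, void *alloc_dev,
                       const void *old_usage, const void *new_usage)
{
    /* 1. Ask the allocator what this transition requires. */
    transition_metadata *meta =
        allocator_get_transition(alloc_dev, old_usage, new_usage);
    (void)meta; /* A real driver would map meta onto barrier parameters. */

    /* 2. Realize the transition with an existing GPU API; here, a
     *    Vulkan barrier on an image imported from the shared allocation. */
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_GENERAL,
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_EXTERNAL_KHR,
        .dstQueueFamilyIndex = gfx_queue_family,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);
}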

I haven't quite wrapped my head around how this would work in the
cross-device case.. I mean from the API standpoint for the user, it
seems straightforward enough.  Just not sure how to implement that and
what the driver interface would look like.

I guess we need a capability-conversion (?)..  I mean take for example
the fb compression capability from your slide #12[1].  If we knew
there was an available transition to go from "Dev2 FB compression" to
"normal", then we could have allowed the "Dev2 FB compression" valid
set?

[1] https://www.x.org/wiki/Events/XDC2017/jones_allocator.pdf
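
Something like this, maybe (all made-up names, just to sketch the
capability-conversion idea):

#include <stdbool.h>

/* Hypothetical allocator bits, illustrative only. */
typedef struct allocator allocator_t;
typedef int capability_t;
bool all_consumers_support(allocator_t *alloc, capability_t cap);
bool allocator_has_transition(allocator_t *alloc, capability_t from,
                              capability_t to);

/* Keep a capability in the valid set if every consumer supports it
 * directly, or if the allocator knows a transition away from it,
 * e.g. from "Dev2 FB compression" to "normal". */
bool capability_usable(allocator_t *alloc, capability_t cap,
                       capability_t fallback)
{
    if (all_consumers_support(alloc, cap))
        return true;

    return allocator_has_transition(alloc, cap, fallback);
}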

Regarding in-place vs. copy: To me a transition is something that
happens in-place, at least semantically.  If you need to make copies,
that's a format conversion blit, not a transition, and graphics APIs
are already capable of expressing that without any special transitions
or help from the allocator.  However, I understand some chipsets
perform transitions using something that looks kind of like a blit
using on-chip caches and constrained usage semantics.  There's
probably some work to do to see whether those need to be accommodated
as conversion blits or usage transitions.

I guess part of what I was thinking of is what happens if the
producing device is still reading from the buffer.  For example, the
viddec -> gpu use case, where the video decoder is also still hanging
on to the frame to use as a reference frame to decode future frames?

I guess if the transition from devA -> devB can be done in parallel
with devA still reading the buffer, it isn't a problem.  I guess that
limits (non-blit) transitions to decompression and cache ops?  Maybe
that is ok..

I don't know of a real case where it would be a problem.  Note you can
transition to multiple usages in the proposed API, so for the video
decoder example, you would transition from [video decode target] to
[video decode target, GPU sampler source] for simultaneous texturing
and reference frame usage.
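
In pseudo-code (made-up names again, not the actual prototype
interface), that would look something like:

/* Made-up names, illustrating a transition to multiple usages. */
typedef enum {
    USAGE_VIDEO_DECODE_TARGET,
    USAGE_GPU_SAMPLER_SOURCE,
} usage_t;

typedef struct transition_metadata transition_metadata;
transition_metadata *allocator_get_transition(void *dev,
                                              const usage_t *old_usages,
                                              int n_old,
                                              const usage_t *new_usages,
                                              int n_new);

void decode_to_texture_example(void *dev)
{
    usage_t old_usages[] = { USAGE_VIDEO_DECODE_TARGET };
    usage_t new_usages[] = { USAGE_VIDEO_DECODE_TARGET,   /* reference frame */
                             USAGE_GPU_SAMPLER_SOURCE };  /* GPU texturing */

    /* The decoder keeps reading: its usage stays in the new set, so
     * nothing is transitioned away from underneath it. */
    transition_metadata *meta =
        allocator_get_transition(dev, old_usages, 1, new_usages, 2);
    (void)meta;
}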

For our hardware's purposes, transitions are just various levels of
decompression or compression reconfiguration and potentially cache
flushing/invalidation, so our transition metadata will just be some
bits signaling which compression operation is needed, if any.  That's
the sort of operation I modeled the API around, so if things are much
more exotic than that for others, it will probably require some
adjustments.
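
Just to give a sense of scale (this is purely a sketch, not our actual
encoding), metadata of that shape could be as small as a few flag bits:

#include <stdint.h>

/* Illustrative bitmask encoding of transition metadata for hardware
 * where transitions are (de)compression plus cache maintenance. */
typedef uint32_t transition_ops_t;

#define TRANSITION_OP_NONE              0u
#define TRANSITION_OP_DECOMPRESS        (1u << 0) /* resolve FB compression */
#define TRANSITION_OP_RECONFIG_COMPRESS (1u << 1) /* switch compression mode */
#define TRANSITION_OP_CACHE_FLUSH       (1u << 2)
#define TRANSITION_OP_CACHE_INVALIDATE  (1u << 3)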



[snip]


Gralloc-on-$new_thing, as well as hwcomposer-on-$new_thing, is one of
my primary goals.  However, it's a pretty heavy thing to prototype.
If someone has the time though, I think it would be a great
experiment.  It would help flesh out the paltry list of usages,
constraints, and capabilities in the existing prototype codebase.  The
kmscube example really should have added at least a "render" usage,
but I got lazy and just re-used texture for now.  That won't actually
work on our HW in all cases, but it's good enough for kmscube.
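
For illustration, the sort of request kmscube arguably should be
making, with made-up usage names (the prototype's actual identifiers
may differ):

/* Made-up usage identifiers, purely illustrative. */
typedef enum {
    USAGE_GPU_RENDER_TARGET,  /* GL renders into the buffer */
    USAGE_GPU_TEXTURE,        /* shaders may sample from it */
    USAGE_DISPLAY_SCANOUT,    /* KMS scans the buffer out */
} usage_t;

/* What kmscube would request instead of re-using texture alone: */
static const usage_t kmscube_usages[] = {
    USAGE_GPU_RENDER_TARGET,
    USAGE_GPU_TEXTURE,
    USAGE_DISPLAY_SCANOUT,
};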


btw, I did start looking at it.. I guess this gets a bit into the
other side of this thread (ie. where/if GBM fits in).  So far I don't
think mesa has EGL_EXT_device_base, but I'm guessing that is part of
what you had in mind as alternative to GBM ;-)

BR,
-R

There is WIP from ajax to add support for this actually, although it
didn't do much correctly the last time I played with it:

https://cgit.freedesktop.org/~ajax/mesa/log/?h=egl-ext-device

I was also hoping to write a simple EGL device testing tool that lists
devices and that sort of stuff, as well as make a separate repo to
start holding glxinfo and eglinfo, and group said tool in with those.
Haven't actually written any code for this yet, though.

Yes, or there's also this:

https://github.com/KhronosGroup/EGL-Registry/pull/23

Which combined with:

https://www.khronos.org/registry/EGL/extensions/MESA/EGL_MESA_platform_surfaceless.txt

provides an alternative method to instantiate a platform-less
EGLDisplay on an EGLDevice.  It's functionally equivalent to
EGL_EXT_platform_device, but some people find it more palatable.  I'm
roughly indifferent between the two at this point, but I slightly
prefer EGL_EXT_platform_device just because we already have it
implemented and have a bunch of internal code and external customers
using it, so we have to maintain it anyway.
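
For reference, the EGL_EXT_platform_device path looks roughly like
this, assuming EGL_EXT_device_enumeration is also available:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stddef.h>

static EGLDisplay create_device_display(void)
{
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)
            eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)
            eglGetProcAddress("eglGetPlatformDisplayEXT");

    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    queryDevices(8, devices, &numDevices);

    /* No native display handle: the EGLDisplay is created directly on
     * the first enumerated device. */
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                        devices[0], NULL);
    eglInitialize(dpy, NULL, NULL);
    return dpy;
}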

And just a reminder to avoid opening old wounds: EGLDevice is in no way tied to EGLStreams. EGLDevice is basically just a slightly more detailed version of EGL_MESA_platform_surfaceless.

Thanks,
-James
