Jakob Bornecrantz wrote:
2010/6/3 Chia-I Wu <olva...@gmail.com>:
2010/6/3 Kristian Høgsberg <k...@bitplanet.net>:
2010/6/3 Chia-I Wu <olva...@gmail.com>:
2010/6/3 Kristian Høgsberg <k...@bitplanet.net>:
But it is less flexible IMHO. Also, I am not convinced that EGLImageKHR should be
queryable, a requirement that stems from using EGLImageKHR to represent pipe_resource.
Using an EGLImageKHR also implies that an implementation must support
EGLImage in EGL/GLES/VG, and the latter still seems to lack a way to render
into an EGLImage. Therefore, my idea is to use a pbuffer to represent
pipe_resource. This is in line with eglCreatePbufferFromClientBuffer. To be
precise,
No, EGLImage is the right abstraction for this. An EGLImage is a
two-dimensional pixel array, which is exactly what we need here, since
it corresponds directly with what a DRM buffer is. A pbuffer has more
state, such as depth and stencil buffers. The whole point of this
extension was to be able to use a DRM buffer as an FBO renderbuffer.
My eglkms example doesn't demonstrate the main use case I have in mind
with the EGLImage/DRM integration, which is page flipping and giving
the application control over buffers. The idea is that the
application can allocate two EGLImages for double buffering and
alternate between attaching the two EGLImages as render buffers for an
FBO.
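Roughly, that would look something like this (assuming GL_OES_EGL_image and two
EGLImages, images[0] and images[1], already created from DRM buffers; the names
here are only illustrative):

  GLuint rb[2], fbo;
  glGenRenderbuffers(2, rb);
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);

  for (int i = 0; i < 2; i++) {
     glBindRenderbuffer(GL_RENDERBUFFER, rb[i]);
     glEGLImageTargetRenderbufferStorageOES(GL_RENDERBUFFER,
                                            (GLeglImageOES) images[i]);
  }

  /* Each frame: render into one buffer while the other is scanned out,
   * then flip and swap the attachment. */
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_RENDERBUFFER, rb[frame & 1]);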
The extensions I proposed, and also the patches, allow creating EGLImages from
pbuffers. Pbuffers have more state, which allows them to be used with
eglMakeCurrent in addition to serving as FBO attachments.
The application is free to allocate more buffers for triple buffering
or to discard the back buffer and depth buffer if rendering goes idle.
The lifecycle of the EGLImage isn't tied to the lifecycle of the FBO
the way a pbuffer's color buffer lifecycle is tied to the pbuffer itself.
Creating an EGLImage from a pbuffer would extend the lifetime of the pbuffer
the way an EGLImage extends the lifetime of a native pixmap or a texture
object.
Also, using EGLConfig to describe the pixel format doesn't actually
describe the layout of the pixels. We need a precise description,
similar to the MESA_FORMAT_* tokens.
The idea is to use EGLConfig for creation, and to add queries for orthogonal
attributes of a pixel format such as the channel sizes, channel order, etc.
The precise description can be derived from the query results. This avoids
having to list each format, but I do not have a strong opinion here.
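For what it's worth, the channel sizes are already queryable with
eglGetConfigAttrib; only a channel-order query would have to be added
(EGL_CHANNEL_ORDER_MESA below is a hypothetical token, not an existing one):

  EGLint r, g, b, a, order;
  eglGetConfigAttrib(dpy, config, EGL_RED_SIZE,   &r);
  eglGetConfigAttrib(dpy, config, EGL_GREEN_SIZE, &g);
  eglGetConfigAttrib(dpy, config, EGL_BLUE_SIZE,  &b);
  eglGetConfigAttrib(dpy, config, EGL_ALPHA_SIZE, &a);
  eglGetConfigAttrib(dpy, config, EGL_CHANNEL_ORDER_MESA, &order); /* hypothetical */
  /* e.g. 8/8/8/8 with a BGRA ordering would map to MESA_FORMAT_ARGB8888 */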
There is really only a handful of formats that we care about and
having all this query machinery to go through just to get to the
equivalent of MESA_FORMAT_ARGB8888 seems awkward.
If VG is an important use case for you and the lack of EGLImage
integration is a problem, I suggest defining an extension to specify
EGLImage use in VG instead. Or we can add both, but just the pbuffer
approach isn't sufficient.
The pbuffer approach is like applying s/EGLImage/EGLSurface/ to the EGLImage
approach, plus allowing an EGLImage to be created from a pbuffer. That makes it
a superset.
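As a rough illustration of that route (eglCreatePbufferSurface, eglMakeCurrent
and eglCreateImageKHR exist today; the EGL_PBUFFER_IMAGE_MESA target is a
made-up placeholder for the proposed pbuffer-to-image path):

  EGLint pbuf_attribs[] = { EGL_WIDTH, 512, EGL_HEIGHT, 512, EGL_NONE };
  EGLSurface pbuf = eglCreatePbufferSurface(dpy, config, pbuf_attribs);

  /* usable directly with eglMakeCurrent, e.g. for VG rendering ... */
  eglMakeCurrent(dpy, pbuf, pbuf, ctx);

  /* ... and convertible to an EGLImage for sharing (hypothetical target) */
  EGLImageKHR img = eglCreateImageKHR(dpy, ctx, EGL_PBUFFER_IMAGE_MESA,
                                      (EGLClientBuffer) pbuf, NULL);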
I don't want to create a full EGLSurface when all I need is an
EGLImage. I know I can create an EGLImage from the surface and then
destroy the surface, but don't you think that's backwards? We can add
an extension that lets you create an EGL pbuffer surface from an
EGLImage if you need a surface for VG. What is your concern about
eglQueryImage exactly?
Existing EGLImage extensions create an EGLImage from an existing resource.
eglQueryImage is needed in the new extension because it does not follow this
practice. That bothers me.
There are other ways to achieve the same goal without breaking that practice.
A pbuffer is one example. An alternative entry point such as
img = eglCreateDRMImageKHR(dpy, &name, &stride, ...);
is another. I prefer either of these two approaches over eglQueryImage.
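Spelled out, one possible shape for such an entry point (purely hypothetical,
nothing like this is specified yet):

  /* creation returns the DRM-specific data directly, so no
   * eglQueryImage is needed afterwards */
  EGLImageKHR eglCreateDRMImageKHR(EGLDisplay dpy,
                                   EGLint *name,    /* out: buffer name for sharing */
                                   EGLint *stride,  /* out: pitch in bytes */
                                   const EGLint *attrib_list); /* width, height, format */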
Currently these extensions try to address two use cases: creating images to be
shared between processes, and creating resources to be used as a scanout
target. Leaving the sharing aside until later, let's look at some of the
problems associated with scanout.

Resources suitable for scanout can be used as a render target and as a
sampling source, so a renderbuffer and/or texture is a good representation of
one, and that is also how we want to operate on it, but both of those are GL
objects. An EGL image, however, can back both of these objects and more, and
is far more flexible than an EGL surface. So let's use the EGL image as the
sharing object.

The problem is that we can't just go from any texture that happens to be large
enough to a scanout buffer. Most hardware has stricter limitations on stride,
tiling, placement, etc. for scanout buffers than for textures/render targets,
or those limitations make certain uses slower or more memory-hungry than
normal textures/renderbuffers. So we can't create every texture and
renderbuffer as scanoutable just because we might sometimes scan it out; that
would be slower and use more memory. Right now there is no way in GL to say
that a texture/renderbuffer should be scanoutable. And creating an EGL image
from a texture/render target and then mutating it into a scanoutable layout is
bad: it would effectively orphan the original resource unless we do major
hacking in the driver, which I don't want to do. Enter MESA_create_image and
MESA_image_system_use to solve this problem. OK, now we have an EGL image that
can be used for scanout and shared with all the APIs we want to use to render
to it.
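To make that concrete, a sketch of how such a creation path might look (the
entry point and use-flag tokens below are placeholders guessed from the
extension names, not final spec language):

  EGLint attribs[] = {
     EGL_WIDTH, 1280,
     EGL_HEIGHT, 800,
     EGL_IMAGE_USE_MESA, EGL_IMAGE_USE_SCANOUT_MESA,  /* placeholder tokens */
     EGL_NONE
  };
  EGLImageKHR img = eglCreateImageMESA(dpy, attribs);  /* placeholder entry point */

  /* the image can then be attached as an FBO renderbuffer and rendered
   * to from GL while KMS scans it out, or shared with another API */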
What's the use-case for a scanout buffer that's used as a texture?
That seems like an unusual case.
-Brian