Re: 2024 X.Org Foundation Membership deadline for voting in the election
On Tue, 26 Mar 2024 11:42:48 -0400 Christopher Michael wrote: > The 2024 X.Org Foundation membership renewal period has been extended > one additional week and elections will start the following week on 01 > April 2024. > > Please note that only current members can vote in the upcoming election, > and that the deadline for new memberships or renewals to vote in the > upcoming election is 01 April 2024 at 23:59 UTC. > > If you are interested in joining the X.Org Foundation or in renewing > your membership, please visit the membership system site at: > https://members.x.org/ > > Christopher Michael, on behalf of the X.Org elections committee Hi everyone, given that the year's first email reminding everyone to renew their memberships was sent on Feb 7 when the renewal was NOT open yet, I wonder how many people thought they had already renewed and are now thinking they don't need to do anything? I fell for that: On Feb 7, I went to members.x.org to check my status, it said I was registered for "2023-2024" and there was no button to renew, so I closed the page confident that I was a member for 2024. After all, it said 2024. This was a mistake I realised only after being personally poked to renew. I know for sure of one other person falling for the same. Now, the members page for this year says "Application for the period: 02/2024-02/2025". Thanks to the people adding the month to reduce confusion. Thanks, pq pgpb8Gq24dJkq.pgp Description: OpenPGP digital signature
Re: [Mesa-dev] [PATCH 2/2] egl: add EGL_platform_device support
On Sat, 27 Apr 2019 09:38:27 -0400 Marek Olšák wrote: > Those are all valid reasons, but I don't wanna expose swrast for AMD's > customers. Hi Marek, is you objection that you will never want to see any software renderer in the list, or that you don't want to see a software renderer only as long as it doesn't actually work? Why do you not "wanna expose swrast for AMD's customers"? Are you talking about swrast specifically or all the software renderers in Mesa? I'm not an AMD customer if you mean someone paying for support rather than just buying their hardware, so I cannot speak for them. However, I would be very happy to have a software renderer available to be picked the same way as any other hardware renderer, so that I can use it in graphical test suites (a point from Anholt) testing also the EGL glue in addition to the pixels produced. The thing will be run on boxes with AMD GPUs, and in those cases there must be a way to *not* use the AMD GPU, and use the software renderer instead when wanted. Not by environment variables or anything hacky like that, but by deliberately choosing the software renderer in the program. It will need an EGLSurface too. How would this be made to work in the future? Thanks, pq > > Marek > > On Sat, Apr 27, 2019, 5:45 AM Mathias Fröhlich > wrote: > > > Hi Marek, > > > > On Wednesday, 24 April 2019 02:01:42 CEST Marek Olšák wrote: > > > Adam, did you notice my original suggestion "If there is at least 1 drm > > > device, swrast won't be in the list."? which means swrast would be in the > > > list for your "dumb" GPUs. > > > > Imagine a box with a low end drm capable hardware chip like you find > > sometimes > > in server type boxes (intel/matrox...). Otherwise the box is equipped with > > lots of cpu > > power. This is something that you will find a lot in that major > > engineering application > > environment. Your application will be glad to find the swrast renderer > > that is finally > > more capable than the 'GPU' mostly there to drive an administration > > console. > > You do not want to lock a swrast 'device' (or however you want to call it) > > out by > > a may be less capable 'console GPU'. > > > > Beside that having a second type of 'normalized renderer' like Eric was > > telling > > about is an other one. > > > > As well as sometimes it may make sense to utilize the GPU > > with one set of work and a second GPU with an other set of work in > > parallel. > > When you only find a single gpu device in one box, you may be glad to find > > a swrast device that you can make use of in parallel with the gpu without > > the need > > to put up different code paths in your application. > > > > May be I can come up with other cases, but thats the 5 minutes for now ... > > > > best > > > > Mathias > > > > > > pgpYCtED5brQZ.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Thoughts after hitting 100 merge requests?
On Sat, 12 Jan 2019 12:10:42 +0100 apinheiro wrote: > > On 11/1/19 18:05, Jason Ekstrand wrote: > > 5. There's no way with gitlab for Reviewed-by tags to get > > automatically applied as part of the merging process. This makes > > merging a bit more manual than it needs to be but is really no worse > > than it was before. > > > Well, I would say that it slightly worse. For small series, it is true > that I manually added the Rb when I got a review. But for big series, > when it got reviewed, I used patchwork to get back the series, but with > the Rb in place. Hi, I thought I'd share a couple of scripts I tend to use: $ git append-message origin/master..HEAD will open an editor and add anything you write to the end of all the commits in the given range on the current branch. Nice for adding R-b tags in a batch to the whole branch. $ git fetch-mr 77 $ git fetch-mr origin 77 both will fetch the MR number 77 from the remote "origin" as a new local branch "mr-origin-77", and if such named branch already exists, the old one gets renamed to "mr-origin-77-old". It will abort if the -old branch already exists as well. I do much of my patch review in gitk and my editor, so fetching the MR branch is useful. Since I have the -old branch, I can very easily see what changed compared to the last time I reviewed it. HTH, pq git-append-message Description: Binary data git-fetch-mr Description: Binary data pgpmeMh3lXVzL.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland: break double/tripple buffering feedback loops
On Tue, 18 Dec 2018 18:59:10 +0100 Lucas Stach wrote: > Am Dienstag, den 18.12.2018, 17:43 + schrieb Emil Velikov: > > > On Tue, 18 Dec 2018 at 11:16, Lucas Stach wrote: > > > > > > Currently we dispose any unneeded color buffers immediately if we detect > > > that > > > there are more unlocked buffers than we need. This can lead to feedback > > > loops > > > between the compositor and the application causing rapid toggling between > > > double and tripple buffering. Scenario: 2 buffers already qeued to the > > > compositor, egl/wayland allocates a new back buffer to avoid trottling, > > > slowing down the frame, this allows the compositor to catch up and unlock > > > both buffers, then EGL detects that there are more buffers than currently > > > need, freeing the buffer, restartig the loop shortly after. > > > > > > To avoid wasting CPU time on rapidly freeing and reallocating color > > > buffers > > > break those feedback loops by letting the unneeded buffers sit around for > > > a > > > short while before disposing them. > > > > > > > > Signed-off-by: Lucas Stach > > > --- > > > src/egl/drivers/dri2/platform_wayland.c | 13 + > > > 1 file changed, 9 insertions(+), 4 deletions(-) > > > > > > diff --git a/src/egl/drivers/dri2/platform_wayland.c > > > b/src/egl/drivers/dri2/platform_wayland.c > > > index 34e09d7ec167..3fa08c1639d1 100644 > > > --- a/src/egl/drivers/dri2/platform_wayland.c > > > +++ b/src/egl/drivers/dri2/platform_wayland.c ... > > > if (dri2_surf->back) > > > @@ -615,10 +617,13 @@ update_buffers(struct dri2_egl_surface *dri2_surf) > > > > > > /* If we have an extra unlocked buffer at this point, we had to do > > > triple > > > * buffering for a while, but now can go back to just double > > > buffering. > > > -* That means we can free any unlocked buffer now. */ > > > +* That means we can free any unlocked buffer now. To avoid toggling > > > between > > > +* going back to double buffering and needing to allocate third > > > buffer too > > > +* fast we let the unneeded buffer sit around for a short while. */ > > > for (int i = 0; i < ARRAY_SIZE(dri2_surf->color_buffers); i++) { > > > if (!dri2_surf->color_buffers[i].locked && > > > - dri2_surf->color_buffers[i].wl_buffer) { > > > + dri2_surf->color_buffers[i].wl_buffer && > > > + dri2_surf->color_buffers[i].age > 18) { > > > > The age check here seems strange - both number used and it's relation > > to double/triple buffering. > > Have you considered tracking/checking how many buffers we have? > > A hysteresis value of 18 is just something that worked well in > practice. It didn't appear to defer the buffer destruction for too long > while keeping the feedback loop well under control. Hi, it would be really nice if there was a code comment explaining where the magic 18 comes from. There is no way to deduce it from the code alone. Thanks, pq pgpExDUSlaVvu.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Lets talk about autotools
On Thu, 29 Nov 2018 17:44:47 + Emil Velikov wrote: > Hi all, > > I can see why people may opt to not use or maintain the autotools build. > Although I would kindly ask that we do not remove it just yet. > > In Mesa, we have different parts not used by different teams. As such > we tend to remove stuff when nobody is around to maintain it anymore. > > That said, I'm planning to continue maintaining it and would appreciate > if we keep it in-tree. > > As people may be concerned about bugreports and alike we can trivially > add a warning (as configure is invoked) to forwards any issues to my > email. Additionally (or alternatively) we can have an autotools bugzilla > category with me as the default assignee. > > What do you guys think? Hi Emil, could you explain your motivation behind keeping the autotools build around? It seems like the points you made earlier are mooted by now. Do you intend to remove autotools build from all CI, so that you alone will be the lone user of it, meaning that no other developer should care about its existence at all? Wanting to keep it in-tree does not sound like it, but I wanted to confirm what your expectations are from other developers. Thanks, pq pgpsamMcKzNtw.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] gallium/util: don't let children of fork & exec inherit our thread affinity
On Wed, 31 Oct 2018 16:41:47 -0400 Marek Olšák wrote: > On Wed, Oct 31, 2018 at 11:26 AM Michel Dänzer wrote: > > > On 2018-10-31 12:39 a.m., Gustaw Smolarczyk wrote: > > > śr., 31 paź 2018 o 00:23 Marek Olšák napisał(a): ... > > >> As far as we know, it hurts *only* Blender. > > > > Why are you still saying that in the light of > > https://bugs.freedesktop.org/108330 ? > > > > Another candidate for a drirc setting. Isn't GTK 4 going to use the GPU (OpenGL or Vulkan) for the UI rendering? If GTK 4 uses OpenGL, would you not need to blacklist all GTK 4 apps, because there would be a high chance that GTK initializes OpenGL before the app creates any worker threads? And these would be all kinds of GTK apps that divide work over multiple cores, not limited to what one would call GL apps or games. I don't know GTK, so I hope someone corrects or verifies this. Thanks, pq pgpw5N8ytpdOT.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/3] vulkan: Define new VK_MESA_query_timestamp extension [v2]
On Tue, 10 Jul 2018 11:02:23 -0700 "Keith Packard" wrote: > Pekka Paalanen writes: > > > On Sat, 23 Jun 2018 12:13:53 -0500 > > Jason Ekstrand wrote: > > > >> I haven't thought through this comment all that hard but would it make > >> sense to have three timestamps, CPU, GPU, CPU so that you have error bars > >> on the GPU timestamp? At the very least, two timestamps would be better > >> than one so that, when we pull it into the kernel, it can provide > >> something > >> more accurate than userspace trying to grab a snapshot. > > > > Hi, > > > > three timestamps sounds like a good idea to me, but you might want to > > reach out to media developers (e.g. gstreamer) who have experience in > > synchronizing different clocks and what that will actually take. > > Oh, I know that's really hard, and I don't want to solve that problem > here. I explicitly *don't* solve it though -- I simply expose the > ability to get correlated values in the two application-visible time > domains that the Vulkan API already exposes (surface time and GPU > time). How to synchronize the application as those two clocks drift > around is outside the domain of this extension. Hi Keith, I did not mean you would be solving that problem. I meant that it would be good to figure out what people actually want from the API to be able to solve the problem themselves. Thanks, pq pgpoWRFSEEnE6.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/3] vulkan: Define new VK_MESA_query_timestamp extension [v2]
On Sat, 23 Jun 2018 12:13:53 -0500 Jason Ekstrand wrote: > I haven't thought through this comment all that hard but would it make > sense to have three timestamps, CPU, GPU, CPU so that you have error bars > on the GPU timestamp? At the very least, two timestamps would be better > than one so that, when we pull it into the kernel, it can provide something > more accurate than userspace trying to grab a snapshot. Hi, three timestamps sounds like a good idea to me, but you might want to reach out to media developers (e.g. gstreamer) who have experience in synchronizing different clocks and what that will actually take. When reading the CPU timestamp, I believe it would be necessary for userspace to tell which clock it should be reading. I'm not sure all userspace is always using CLOCK_MONOTONIC. But if you have a better rationale, that would be interesting to record and document. Thanks, pq > On June 23, 2018 10:15:34 Keith Packard wrote: > > > This extension adds a single function to query the current GPU > > timestamp, just like glGetInteger64v(GL_TIMESTAMP, ). This > > function is needed to complete the implementation of > > GOOGLE_display_timing, which needs to be able to correlate GPU and CPU > > timestamps. > > > > v2: Adopt Jason Ekstrand's coding conventions > > > > Declare variables at first use, eliminate extra whitespace between > > types and names. Wrap lines to 80 columns. > > > > Add extension to list in alphabetical order > > > > Suggested-by: Jason Ekstrand > > > > Signed-off-by: Keith Packard > > --- > > include/vulkan/vk_mesa_query_timestamp.h | 41 > > src/vulkan/registry/vk.xml | 15 + > > 2 files changed, 56 insertions(+) > > create mode 100644 include/vulkan/vk_mesa_query_timestamp.h pgpyNg48v3npf.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] wayland/egl: initialize window surface size to window size
On Tue, 05 Jun 2018 19:22:47 +0200 "Juan A. Suarez Romero" wrote: > On Tue, 2018-06-05 at 12:41 +0100, Daniel Stone wrote: > > Hi Juan, > > > > On 5 June 2018 at 09:51, Juan A. Suarez Romero wrote: > > > > > On Mon, 2018-06-04 at 13:22 +0100, Daniel Stone wrote: > > > > Yes, that's correct, and I believe eglMakeCurrent() does perform that > > > > attachment. The NVIDIA/EGLStreams implementation is extremely broken > > > > in that regard, in a way which does break genuine real-world > > > > applications. We had a long discussion on wayland-devel@ when the > > > > EGLStreams implementation was first posted, detailing the ways in > > > > which their implementation causes genuine problems for real-world > > > > users, and can provoke client deadlocks. I am very opposed to > > > > supporting or codifying that behaviour. > > > > > > So I understand eglSwapBuffers() is required to do the attachment. > > > > It is required to do an attachment, yeah. > > > > Nice. The other failing test in CTS is not calling eglSwapBuffers() > before doing the first wl_gl_window_get_attached_size(), and hence > when we call it we get null values. > > So it seems the fix must be done in CTS, bycalling eglSwapBuffers() > to ensure the window has an attached buffer, before getting the size. Hi, I would like to try to clarify the mental model here. The term "attached" is somewhat of a misnomer. But this may also be confusing, because I am bringing in underlying details of Wayland which no other window system has AFAIK. It is true the Wayland request is called wl_surface_attach(wl_buffer). Alone it does essentially nothing, it requires a wl_surface_commit() to become effective. eglSwapBuffers() always does both before returning, so that is fine. Intuitively "attach" implies that one creates an association between two objects and then one can continue working on the attached object (buffer). This is where the misnomer becomes apparent: it would be better to talk about submitting, not attaching. eglSwapBuffers is submitting the current WIP buffer to the compositor to replace the contents of the Wayland wl_surface (window). One cannot resize or draw into the submitted buffer as long as it is held reserved by the Wayland compositor, to be released with wl_buffer.release event. A Wayland window (wl_surface) has no initial size. Only submitting (attach+commit) a wl_buffer will give the wl_surface a size, that is, calling eglSwapBuffers resizes the window. Only the client itself can resize the window e.g. by calling eglSwapBuffers, the compositor or any third party cannot resize the window, so there is no way to query the window size via Wayland. From an EGL application point of view, a window has two different concepts of size: the size that is currently being used (with e.g. input events) which is the size of the wl_surface and returned by wl_egl_window_get_attached_size(); and the size of your current unfinished drawing which will become the wl_surface size on the next call to eglSwapBuffers. wl_egl_window_get_attached_size() would more appropriately be described as returning the size of the last submitted buffer. When I talked about wl_surface size in the above, I was actually using a generally incorrect simplification. The wl_buffer size determines the wl_surface size (now I am using the specific Wayland terms here), but they may not be equal. Buffer scale and transform and wl_viewport are Wayland protocol features that change how the wl_surface size is computed from the wl_buffer size. However, all these are outside of EGL, in the EGL user instead, and EGL itself will be unaware of these. Therefore EGL Wayland can only concern itself with buffer dimensions, the actual wl_surface size will remain unknown to it. Therefore, when interpreting the EGL specifications, when it talks about "window size", on Wayland you can really only think about the WIP or submitted buffer size. I would say that for EGL purposes, the current native window size is the last submitted buffer size and it is zero initially. For the CTS, one may want to test wl_egl_window_get_attached_size and the WIDTH/HEIGHT queries separately to ensure both are as expected (I agree to Daniel's interpretation here). Thanks, pq pgpm3_heJi0F4.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Nouveau driver problem when using EGL_LINUX_DMA_BUF_EXT
On Mon, 9 Apr 2018 09:56:43 -0400 Ilia Mirkinwrote: > On Mon, Apr 9, 2018 at 6:57 AM, Volker Vogelhuber > wrote: > > I would have guessed, that the use case would be quite common, as having a > > Live video source rendered via OpenGL shouldn't be a very special topic and > > having a zero copy approach should be better than having to copy the data > > Having a live video source is a pretty special topic. (Esp having one > that you ever use.) Hi, FWIW, in Wayland architecture this would be the preferred path to play back video. A hardware decoder or a camera provides frames via V4L2 API, which the player app sends to a Wayland compositor as dmabuf fds, which then imports them via EGL to GL for GPU compositing or directly to KMS for hardware overlay plane. Weston the compositor does the EGL import, and should "soon" do the KMS import as well automatically when possible. It also has a test app weston-simple-dmabuf-v4l which uses a V4L device to provide dmabuf and send them to the compositor for importing. It has instructions on how to set up vivid for testing. Volker, did you try querying V4L2 for the right pitch, ensure you got bytes vs. pixels correct, and feed that into EGL_DMA_BUF_PLANE0_PITCH_EXT? If Nouveau cannot handle that correctly, it would hopefully refuse the import. Thanks, pq pgpzMNDE85Bfy.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH mesa] u_debug: do not compile asserts when they are disabled
On Thu, 30 Nov 2017 12:16:06 + Eric Engestromwrote: > Commit f0ba7d897d1c22202531a added this code to expose asserts to the > compiler in an attempt to hide 'unused variable' warnings, incorrectly > claiming it was a no-op. This has two bad effects: > - any assert with side-effects are executed when they should be > disabled Hi, I believe that is not true. Your patch has: > -#define debug_assert(expr) (void)(0 && (expr)) which, as I understand, means that (expr) is never evaluated because the 0 short-circuits the && operator. Thanks, pq > - the whole content of the assert must be understandable by the > compiler, which isn't true if variable or members are correctly > guarded by NDEBUG > > Fix this by effectively reverting f0ba7d897d1c22202531a. > > Unused variables warnings can be addressed by marking the variables as > MAYBE_UNUSED instead, as is done in the rest of Mesa. > > Fixes: f0ba7d897d1c22202531a "util: better fix for unused variable >warnings with asserts" > Cc: Keith Whitwell > Reported-by: Gert Wollny > Signed-off-by: Eric Engestrom > --- > src/gallium/auxiliary/util/u_debug.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/src/gallium/auxiliary/util/u_debug.h > b/src/gallium/auxiliary/util/u_debug.h > index d2ea89f59c10e1bc0944..88e9bb8a29d63826167e 100644 > --- a/src/gallium/auxiliary/util/u_debug.h > +++ b/src/gallium/auxiliary/util/u_debug.h > @@ -188,7 +188,7 @@ void _debug_assert_fail(const char *expr, > #ifndef NDEBUG > #define debug_assert(expr) ((expr) ? (void)0 : _debug_assert_fail(#expr, > __FILE__, __LINE__, __FUNCTION__)) > #else > -#define debug_assert(expr) (void)(0 && (expr)) > +#define debug_assert(expr) ((void)0) > #endif > > pgpJOge_1WwRL.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 7/9] wayland-drm: static inline wayland_drm_buffer_get
On Mon, 23 Oct 2017 18:05:12 +0100 Emil Velikovwrote: > On 23 October 2017 at 16:41, Daniel Stone wrote: > > Hi Emil, > > > > On 28 September 2017 at 13:36, Emil Velikov > > wrote: > >> On 20 September 2017 at 15:06, Daniel Stone wrote: > >>> On 19 September 2017 at 11:25, Emil Velikov > >>> wrote: > It looks a bit icky and my Wayland knowledge is limited to actually > review it. > > I still think that we're trying to do different things - me simplify > things, while you're focusing on fixing a bug. > >>> > >>> I like the idea of the simplification, but it's just that a) it can't > >>> actually be simplified that far, and b) whilst we still rely on > >>> consistent resolution of wl_buffer_interface, your proposed change may > >>> actually _introduce_ a bug. > >> > >> Right, I did not see that one. Can you share your train of thought? > > > > Happily! > > > > When a compositor creates a client buffer, the request is dispatched > > by libwayland-server.so into libEGL.so (libEGL explicitly links > > libwayland-server). This lands in > > src/egl/wayland/wayland-drm/wayland-drm.c's create_buffer(), which > > calls wl_resource_create() with 'wl_buffer_interface'. Since libEGL.so > > is explicitly linked against both libwayland-client and > > libwayland-server, and both of those DSOs provide a resolution of > > wl_buffer_interface, resource->interface could be > > libwayland-server.so::wl_buffer_interface, or > > libwayland-client.so::wl_buffer_interface. > > > > When a compositor wants to import a buffer into GBM, it calls > > gbm_bo_import(), which will call wl_drm_buffer_get(). > > wl_drm_buffer_get will test (resource->interface == > > _buffer_interface). Previously, both create_buffer() and > > wl_drm_buffer_get() at least came from the same compilation unit, but > > now they are built and linked separately. This seems to make an > > existing problem (ambiguity of 'wl_buffer_interface') worse (symbol > > resolution now performed in separate compilation units). > > > > I don't really see a solution to this, apart from no longer relying on > > having a single canonical resolution of wl_buffer_interface. The above > > patch implements that, by removing the interface-address-equal test > > and replacing it with the destroy-listener test. The patch I provided > > is equally valid both with and without your series. > > > Ouch, that's something I did not expect looking at the Mesa code alone. > > Generally, this type of issues are not specific to Mesa, but come due > to the implementation/use of wl_resource_instance_of. Is that correct? > > I'm itching to bring back my suggestions to fix this in its root, but > let's ignore that for a moment ;-) > > Having a look at the wl_resource_instance_of() implementation > > WL_EXPORT int > wl_resource_instance_of(struct wl_resource *resource, >const struct wl_interface *interface, >const void *implementation) > { >return wl_interface_equal(resource->object.interface, interface) && >resource->object.implementation == implementation; > } > > ... and wl_interface_equal() > > int > wl_interface_equal(const struct wl_interface *a, const struct wl_interface *b) > { > /* In most cases the pointer equality test is sufficient. > * However, in some cases, depending on how things are split > * across shared objects, we can end up with multiple > * instances of the interface metadata constants. So if the > * pointers match, the interfaces are equal, if they don't > * match we have to compare the interface names. > */ > return a == b || strcmp(a->name, b->name) == 0; > } > > Am I reading it incorrectly, or you forgot about the name check? > It seems like the name check was added explicitly to workaround the > issue you're talking about. > > Thus wl_resource_instance_of() should return the same result with or > without my patch? > Or perhaps the interface check is safe, but the implementation one isn't? Hi Emil, your analysis is correct, there should be no problem. The implementation check is safe as well, because the thing that is being compared there is the 'struct wl_buffer_interface' pointer. Looking at your original patch, it seems that the implementation check ensures the wl_buffer was created with the specific 'struct wl_drm' instance and therefore has no relation to any DSO identity (I assume 'struct wl_drm' is allocated dynamically). Thanks, pq pgpOn5G0YUl2i.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 7/9] wayland-drm: static inline wayland_drm_buffer_get
On Mon, 23 Oct 2017 16:41:14 +0100 Daniel Stonewrote: > Hi Emil, > > On 28 September 2017 at 13:36, Emil Velikov wrote: > > On 20 September 2017 at 15:06, Daniel Stone wrote: > >> On 19 September 2017 at 11:25, Emil Velikov > >> wrote: > >>> It looks a bit icky and my Wayland knowledge is limited to actually > >>> review it. > >>> > >>> I still think that we're trying to do different things - me simplify > >>> things, while you're focusing on fixing a bug. > >> > >> I like the idea of the simplification, but it's just that a) it can't > >> actually be simplified that far, and b) whilst we still rely on > >> consistent resolution of wl_buffer_interface, your proposed change may > >> actually _introduce_ a bug. > > > > Right, I did not see that one. Can you share your train of thought? > > Happily! > > When a compositor creates a client buffer, the request is dispatched > by libwayland-server.so into libEGL.so (libEGL explicitly links > libwayland-server). This lands in > src/egl/wayland/wayland-drm/wayland-drm.c's create_buffer(), which > calls wl_resource_create() with 'wl_buffer_interface'. Since libEGL.so > is explicitly linked against both libwayland-client and > libwayland-server, and both of those DSOs provide a resolution of > wl_buffer_interface, resource->interface could be > libwayland-server.so::wl_buffer_interface, or > libwayland-client.so::wl_buffer_interface. > > When a compositor wants to import a buffer into GBM, it calls > gbm_bo_import(), which will call wl_drm_buffer_get(). > wl_drm_buffer_get will test (resource->interface == > _buffer_interface). Previously, both create_buffer() and > wl_drm_buffer_get() at least came from the same compilation unit, but > now they are built and linked separately. This seems to make an > existing problem (ambiguity of 'wl_buffer_interface') worse (symbol > resolution now performed in separate compilation units). Hi, this is not the whole truth. At least, if wl_drm_buffer_get() uses wl_resource_instance_of() rather than open-coding a broken version of the same. > I don't really see a solution to this, apart from no longer relying on > having a single canonical resolution of wl_buffer_interface. The above > patch implements that, by removing the interface-address-equal test > and replacing it with the destroy-listener test. The patch I provided > is equally valid both with and without your series. There is no relying on having a single canonical resolution of wl_buffer_interface, and there has never been as long as any part of libwayland has been stable. See: https://cgit.freedesktop.org/wayland/wayland/tree/src/connection.c?id=0.99.0#n840 I suppose one might argue that libwayland-server and libwayland-client might be different versions in some wacky distribution, but that argument is moot because wl_buffer_interface can never change due to the multiple factory interfaces issue. Thanks, pq pgp8WroCrBCuk.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 4/9] configure.ac: define WL_HIDE_DEPRECATED at global scale
On Thu, 7 Sep 2017 19:05:01 +0100 Emil Velikov <emil.l.veli...@gmail.com> wrote: > From: Emil Velikov <emil.veli...@collabora.com> > > Due to GCC feature described in previous commit, the expected > deprecation warnings may be missing. > > Set the WL_HIDE_DEPRECATED macro which will omit the deprecated > functionality, resulting in more distinct build issues. > > Patch is UNTESTED, see the open question below. > > Cc: Pekka Paalanen <pekka.paala...@collabora.co.uk> > Suggested-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> > Signed-off-by: Emil Velikov <emil.veli...@collabora.com> > --- > Pekka, I'm not 100% sold that having the macro all together is a good > idea. Consider the following example: > > - Project X uses Wayland vY, set as min. requirement. > - Project X sets the 'hide' macro > - In nearly all cases, the check does not guard the upper version. > - Wayland vY+Z, deprecates (and hides) functionality A > - Project X fails to build against Wayland vY+Z Hi Emil, yes, that is exactly the problem of the macro: in reality, we cannot add more stuff under it. It was a one-shot macro, it has been used, and now the things it may hide have been carved in stone. If we want to hide more stuff in libwayland API in a similar way, we need a new macro per release or something. In any case, using WL_HIDE_DEPRECATED in Mesa and extending it to cover all code that deals with Wayland is a good thing, so: Acked-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> Thanks, pq > --- > configure.ac| 2 +- > src/egl/drivers/dri2/egl_dri2.c | 2 -- > 2 files changed, 1 insertion(+), 3 deletions(-) > > diff --git a/configure.ac b/configure.ac > index d0d4c0dfd1d..f0368570b12 100644 > --- a/configure.ac > +++ b/configure.ac > @@ -1713,7 +1713,7 @@ for plat in $platforms; do > if test "x$have_wayland_protocols" = xno; then > AC_MSG_ERROR([wayland-protocols >= > $WAYLAND_PROTOCOLS_REQUIRED is needed to compile the wayland platform]) > fi > -DEFINES="$DEFINES -DHAVE_WAYLAND_PLATFORM" > +DEFINES="$DEFINES -DHAVE_WAYLAND_PLATFORM -DWL_HIDE_DEPRECATED" > ;; > > x11) > diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c > index 2667aa5d647..60d5e1ffddf 100644 > --- a/src/egl/drivers/dri2/egl_dri2.c > +++ b/src/egl/drivers/dri2/egl_dri2.c > @@ -25,8 +25,6 @@ > *Kristian Høgsberg <k...@bitplanet.net> > */ > > -#define WL_HIDE_DEPRECATED > - > #include > #include > #include ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland: Ensure we get a back buffer
On Tue, 16 May 2017 11:02:22 +0100 Daniel Stone <dani...@collabora.com> wrote: > Commit 9ca6711faa03 changed the Wayland winsys to only block for the > frame callback inside SwapBuffers, rather than get_back_bo. get_back_bo > would perform a single non-blocking Wayland event dispatch, to try to > find any release events which we had pulled off the wire but not > actually processed. The blocking dispatch was moved to SwapBuffers. > > This removed a guarantee that we would've processed all events inside > get_back_bo(), and introduced a failure whereby the server could've sent > a buffer release event, but we wouldn't have read it. In clients > unconstrained by SwapInterval (rendering ~as fast as possible), which > were being displayed on a hardware overlay (buffer release delayed), Hi, hardware overlay, or as in the bug report, on the primary plane, i.e. the only composite-bypass case in pre-atomic Weston available without hacks. > this could lead to get_back_bo() failing because there were no free > buffers available to it. > > The drawing rightly failed, but this was papered over because of the > path in eglSwapBuffers() which attempts to guarantee a BO, in order to > support calling SwapBuffers twice in a row with no rendering actually > having been performed. > > Since eglSwapBuffers will perform a blocking dispatch of Wayland > events, a buffer release would have arrived by that point, and we > could then choose a buffer to post to the server. The effect was that > frames were displayed out-of-order, since we grabbed a frame with random > past content to display to the compositor. > > Ideally get_back_bo() failing should store a failure flag inside the > surface and cause the next SwapBuffers to fail, but for the meantime, > restore the correct behaviour such that get_back_bo() no longer fails. > > Signed-off-by: Daniel Stone <dani...@collabora.com> > Reported-by: Eero Tamminen <eero.t.tammi...@intel.com> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98833 > Fixes: 9ca6711faa03 ("Revert "wayland: Block for the frame callback in > get_back_bo not dri2_swap_buffers"") > --- > src/egl/drivers/dri2/platform_wayland.c | 10 +- > 1 file changed, 9 insertions(+), 1 deletion(-) > > diff --git a/src/egl/drivers/dri2/platform_wayland.c > b/src/egl/drivers/dri2/platform_wayland.c > index 52ab9f7579..7837790bdb 100644 > --- a/src/egl/drivers/dri2/platform_wayland.c > +++ b/src/egl/drivers/dri2/platform_wayland.c > @@ -353,7 +353,7 @@ get_back_bo(struct dri2_egl_surface *dri2_surf) > /* There might be a buffer release already queued that wasn't processed */ > wl_display_dispatch_queue_pending(dri2_dpy->wl_dpy, dri2_surf->wl_queue); > > - if (dri2_surf->back == NULL) { > + while (dri2_surf->back == NULL) { >for (i = 0; i < ARRAY_SIZE(dri2_surf->color_buffers); i++) { > /* Get an unlocked buffer, preferrably one with a dri_buffer >* already allocated. */ > @@ -364,6 +364,14 @@ get_back_bo(struct dri2_egl_surface *dri2_surf) > else if (dri2_surf->back->dri_image == NULL) > dri2_surf->back = _surf->color_buffers[i]; >} > + > + if (dri2_surf->back) > + continue; Would 'break' be more readable? > + > + /* If we don't have a buffer, then block on the server to release one > for > + * us, and try again. */ > + if (wl_display_dispatch_queue(dri2_dpy->wl_dpy, dri2_surf->wl_queue) < > 0) > + return -1; > } > > if (dri2_surf->back == NULL) The swrast path does not need fixing, because there buffer release can never be delayed due to composite bypass, right? Nothing looks suspicious to me, so: Acked-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> Thanks, pq pgpQ7NLV1j8uw.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] question about container_of
On Mon, 27 Feb 2017 13:26:11 + Emil Velikovwrote: > Hi Julien, > > On 27 February 2017 at 12:08, Julien Isorce wrote: > > Hi, > > > > Since 2012 commit ccff74971203b533bf16b46b49a9e61753f75e6c it is said: > > "sample must be initialized, or else the result is undefined" in the > > description of mesa/src/util/list.h::container_of . > > > > But I can find a few places where it is used without initializing that > > second parameter, i.e. like: > > > > struct A a; > > container_of(ptr, a, member); > > > > Then I can add the "= NULL" but should it be just > > container_of(ptr, struct A, member); > > like in the kernel and some other places in mesa ? > > > Strictly peaking these are toolchain (ASAN iirc) bugs, since there is > no pointer deref, as we're doing pointer arithmetic. Hi Emil, that's what people would usually think. It used to work with GCC. Then came Clang. > Afaict the general decision was to to merge the patch(es) since they > will make actual bugs stand out amongst the noise. In the long run, > it's better to fix the tool (ASAN/other) than trying to "fix" all the > cases in mesa and dozens of other projects. But until then patches are > safe enough ;-) > > That's my take on it, at least. It depends on how container_of() has been defined. For more details, see e.g.: https://cgit.freedesktop.org/wayland/wayland/commit/?id=a18e34417ba3fefeb81d891235e8ebf394a20a74 The comment in the code in Mesa is correct for the fallback implementation it has. Maybe things rely on the fallback implementation never being used when compiling with Clang. Julien, some alternatives for container_of use a pointer, others expect a type instead. I believe one must use the correct form. Thanks, pq pgphcLK2HUoB3.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] wayland-drm: update use of struct wl_resource to avoid deprecated struct definition
On Tue, 7 Feb 2017 17:29:53 -0500 Micah Fedke <micah.fe...@collabora.co.uk> wrote: > Signed-off-by: Micah Fedke <micah.fe...@collabora.co.uk> > --- > src/egl/wayland/wayland-drm/wayland-drm.c | 6 +++--- > 1 file changed, 3 insertions(+), 3 deletions(-) > > diff --git a/src/egl/wayland/wayland-drm/wayland-drm.c > b/src/egl/wayland/wayland-drm/wayland-drm.c > index 4fc12521d2..5c2bb0369e 100644 > --- a/src/egl/wayland/wayland-drm/wayland-drm.c > +++ b/src/egl/wayland/wayland-drm/wayland-drm.c > @@ -55,7 +55,7 @@ struct wl_drm { > static void > destroy_buffer(struct wl_resource *resource) > { > - struct wl_drm_buffer *buffer = resource->data; > + struct wl_drm_buffer *buffer = wl_resource_get_user_data(resource); > struct wl_drm *drm = buffer->drm; > > drm->callbacks->release_buffer(drm->user_data, buffer); > @@ -77,7 +77,7 @@ create_buffer(struct wl_client *client, struct wl_resource > *resource, >int32_t offset1, int32_t stride1, >int32_t offset2, int32_t stride2) > { > - struct wl_drm *drm = resource->data; > + struct wl_drm *drm = wl_resource_get_user_data(resource); > struct wl_drm_buffer *buffer; > > buffer = calloc(1, sizeof *buffer); > @@ -187,7 +187,7 @@ static void > drm_authenticate(struct wl_client *client, >struct wl_resource *resource, uint32_t id) > { > - struct wl_drm *drm = resource->data; > + struct wl_drm *drm = wl_resource_get_user_data(resource); > > if (drm->callbacks->authenticate(drm->user_data, id) < 0) > wl_resource_post_error(resource, Hi, this looks good to me: Reviewed-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> It would also be good to add #define WL_HIDE_DEPRECATED to all wayland-related .c files before #includes so that the compiler will catch any attempts to use the deprecated definitions again. Thanks, pq pgpx8DG7zXeQ6.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v2] gbm/drm: Pick the oldest available buffer in get_back_bo
On Wed, 23 Nov 2016 16:40:42 -0600 Derek Foreman <der...@osg.samsung.com> wrote: > Applications may query the back buffer age to efficiently perform > partial updates. Generally the application will keep a fixed length > damage history, and use this to calculate what needs to be redrawn > based on the age of the back buffer it's about to render to. > > If presented with a buffer that has an age greater than the > length of the damage history, the application will likely have > to completely repaint the buffer. > > Our current buffer selection strategy is to pick the first available > buffer without considering its age. If an application frequently > manages to fit within two buffers but occasionally requires a third, > this extra buffer will almost always be old enough to fall outside > of a reasonably long damage history, and require a full repaint. > > This patch changes the buffer selection behaviour to prefer the oldest > available buffer. > > By selecting the oldest available buffer, the application will likely > always be able to use its damage history, at a cost of having to > perform slightly more work every frame. This is an improvement if > the cost of a full repaint is heavy, and the surface damage between > frames is relatively small. > > It should be noted that since we don't currently trim our queue in > any way, an application that briefly needs a large number of buffers > will continue to receive older buffers than it would if it only ever > needed two buffers. > > Reviewed-by: Daniel Stone <dani...@collabora.com> > Signed-off-by: Derek Foreman <der...@osg.samsung.com> > --- > > The only changes are in the commit log, which hopefully better > expresses the rationale for the change. Hi, yes, I'm happy with the caveats being documented now. Therefore: Reviewed-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> Thanks, pq > src/egl/drivers/dri2/platform_drm.c | 6 -- > 1 file changed, 4 insertions(+), 2 deletions(-) > > diff --git a/src/egl/drivers/dri2/platform_drm.c > b/src/egl/drivers/dri2/platform_drm.c > index 2099314..f812ab5 100644 > --- a/src/egl/drivers/dri2/platform_drm.c > +++ b/src/egl/drivers/dri2/platform_drm.c > @@ -215,13 +215,15 @@ get_back_bo(struct dri2_egl_surface *dri2_surf) > struct dri2_egl_display *dri2_dpy = >dri2_egl_display(dri2_surf->base.Resource.Display); > struct gbm_dri_surface *surf = dri2_surf->gbm_surf; > + int age = 0; > unsigned i; > > if (dri2_surf->back == NULL) { >for (i = 0; i < ARRAY_SIZE(dri2_surf->color_buffers); i++) { > - if (!dri2_surf->color_buffers[i].locked) { > + if (!dri2_surf->color_buffers[i].locked && > + dri2_surf->color_buffers[i].age >= age) { > dri2_surf->back = _surf->color_buffers[i]; > - break; > + age = dri2_surf->color_buffers[i].age; >} >} > } pgpDBMnkzh9cV.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] gbm/drm: Pick the oldest available buffer in get_back_bo
On Thu, 10 Nov 2016 16:01:32 + Daniel Stone <dani...@collabora.com> wrote: > Hi, > > > > On Nov 10 2016, at 2:09 pm, Pekka Paalanen <ppaala...@gmail.com> wrote: > > > On Wed, 9 Nov 2016 14:57:22 -0600 > Derek Foreman <der...@osg.samsung.com> wrote: > > > > > > > We find the oldest backbuffer we can, on the grounds that clients are > > only going to keep a fixed history queue, so this gives them the > > greatest chance of being able to use that queue via making sure > > the age is ~always less than the depth of the swapchain > > > > > > right, that is one side of the story. > > > > > > On the other side, damage accumulates frame by frame. When you always > pick the oldest free buffer, you also maximize the accumulated damage > and need to repaint more than if you picked the youngest free buffer. > > > > > > Furthermore, if you normally can do with cycling just two buffers, and > a hickup causes you to allocate up to four buffers and afterwards you > come back to the normal steady flow, you are now always cycling through > all the four buffers and repainting damage for age=4 instead of age=2. > > > This would be a separate problem; the queue depth should really not be any > longer than need be. If there is a case where it ends up longer (say when a > surface switches from direct scanout to GPU composition), then Mesa should be > realising this and trimming the queue. Other stacks do this. > > > > If one was always picking the youngest free buffer, we could even have > heuristics to destroy buffers that have not been used for N swaps to > release some memory. > > > That is not strictly dependent on picking the youngest buffer; it could/should > be done regardless. > > > > There is a trade-off between repainting a little more than necessary in > the normal case to mitigate the impact on hickups and making the normal > case optimized while hickups cause a heavy hit (full repaint plus > possibly even allocating a new buffer). However, in a proper compositor > I fail to see how such hickups would even occur - so you wouldn't need > to mitigate hickups but also using the oldest would not be any worse > since you never allocate extra buffers. > > > Repainting more than necessary will only occur when the damage area is > changing literally frame-by-frame. This is kind of an odd case, since it means > that your eyes will have to be retargeting every 16ms. > > > > It would be nice to see more rationale in the commit message about why > picking the oldest is better than picking the youngest buffer. Age > greater than the length of the swapchain is only a trigger for full > repaint, just like a freshly allocated buffer is, except you skip the > cost of allocating. > > > > > > My counter-proposal is probably worse, but I'd like to see an > explanation because it's not obvious. > > > > Picking the youngest means that, as you say, predictability decreases in the > case where one buffer gets stuck for a very long time. This arguably shouldn't > happen generally, but provably does right now until Mesa gains the smarts to > trim the buffer queue. I can see how this would be provoked on a CPU-bound > system where we are doing direct scanout; by picking the youngest, we react to > this situation by generating more CPU load. > > > > Picking the oldest means that the queue remains balanced, which does imply a > complexity increase in the case that the surface damage area changes from > frame to frame. I do not believe that this is a common case, and even if it > were, the downside of this is not as bad as the downside of picking the > youngest. Hi, a very good explanation. I wish it was reflected in the commit message. Thanks, pq pgpgo_zvbSdEt.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] gbm/drm: Pick the oldest available buffer in get_back_bo
On Wed, 9 Nov 2016 14:57:22 -0600 Derek Foremanwrote: > We find the oldest backbuffer we can, on the grounds that clients are > only going to keep a fixed history queue, so this gives them the > greatest chance of being able to use that queue via making sure > the age is ~always less than the depth of the swapchain Hi, right, that is one side of the story. On the other side, damage accumulates frame by frame. When you always pick the oldest free buffer, you also maximize the accumulated damage and need to repaint more than if you picked the youngest free buffer. Furthermore, if you normally can do with cycling just two buffers, and a hickup causes you to allocate up to four buffers and afterwards you come back to the normal steady flow, you are now always cycling through all the four buffers and repainting damage for age=4 instead of age=2. If one was always picking the youngest free buffer, we could even have heuristics to destroy buffers that have not been used for N swaps to release some memory. There is a trade-off between repainting a little more than necessary in the normal case to mitigate the impact on hickups and making the normal case optimized while hickups cause a heavy hit (full repaint plus possibly even allocating a new buffer). However, in a proper compositor I fail to see how such hickups would even occur - so you wouldn't need to mitigate hickups but also using the oldest would not be any worse since you never allocate extra buffers. It would be nice to see more rationale in the commit message about why picking the oldest is better than picking the youngest buffer. Age greater than the length of the swapchain is only a trigger for full repaint, just like a freshly allocated buffer is, except you skip the cost of allocating. My counter-proposal is probably worse, but I'd like to see an explanation because it's not obvious. Thanks, pq > Reviewed-by: Daniel Stone > Signed-off-by: Derek Foreman > --- > src/egl/drivers/dri2/platform_drm.c | 6 -- > 1 file changed, 4 insertions(+), 2 deletions(-) > > diff --git a/src/egl/drivers/dri2/platform_drm.c > b/src/egl/drivers/dri2/platform_drm.c > index 2099314..f812ab5 100644 > --- a/src/egl/drivers/dri2/platform_drm.c > +++ b/src/egl/drivers/dri2/platform_drm.c > @@ -215,13 +215,15 @@ get_back_bo(struct dri2_egl_surface *dri2_surf) > struct dri2_egl_display *dri2_dpy = >dri2_egl_display(dri2_surf->base.Resource.Display); > struct gbm_dri_surface *surf = dri2_surf->gbm_surf; > + int age = 0; > unsigned i; > > if (dri2_surf->back == NULL) { >for (i = 0; i < ARRAY_SIZE(dri2_surf->color_buffers); i++) { > - if (!dri2_surf->color_buffers[i].locked) { > + if (!dri2_surf->color_buffers[i].locked && > + dri2_surf->color_buffers[i].age >= age) { > dri2_surf->back = _surf->color_buffers[i]; > - break; > + age = dri2_surf->color_buffers[i].age; >} >} > } pgpp9JF5Q7h7i.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland-egl: Fix for segfault in dri2_wl_destroy_surface.
On Wed, 24 Aug 2016 10:23:11 +0100 Emil Velikovwrote: > On 24 August 2016 at 08:48, Stencel, Joanna wrote: > > I couldn't find any clear requirement about order of destroys in EGL (or > > wayland) specification. > > Also, in EGL spec (3.7.3, about eglMakeCurrent) one can find: > > > > " If a native window underlying either draw or read is no longer valid, an > > EGL_BAD_NATIVE_WINDOW error is generated." > > "If a native window or pixmap underlying the draw or read surfaces is > > destroyed, rendering and readback are handled as above." > > > > So it seems that in general destroying native window underlying existing > > surface can be a case > > and should be handled. > > I agree that it's reasonable to call eglDestroySurface() first and probably > > most people do this. > > However, I think that different user's usage shouldn't cause a crash (or > > memory issues). > > > Completely agree - crashing (esp in the driver) isn't what we want. > Seems like we don't handle the case you mentioned. Care to spin a > patch ? > > > Could you explain what you call a memory leak here? Pointer which is > > nullified points to already > > free'd structure of wayland window. > > > Hmm you're right - it's the EGL implementation's back pointer that > gets nullified and not the original one used by the wayland-egl (as I > originally thought). > > With that said, I've rebased the patch on top of master added the tags > and pushed to master. Hi, I was about to scream that the patch changes the public stable ABI, but no, it does not and all seems fine. The reason for my mistake is that there are actually two different "native window" types at play here: wl_surface and wl_egl_surface. Of course, strictly from EGL point of view, there is only wl_egl_surface. Here is some background information in case you are interested for the other case. You are handling the premature destruction of wl_egl_surface which is very nice. Handling the premature destruction of wl_surface is practically impossible. The story for that can be found in the thread starting at: https://lists.freedesktop.org/archives/wayland-devel/2016-May/029134.html and continues in: https://lists.freedesktop.org/archives/wayland-devel/2016-June/029339.html Thanks, pq pgphdkXhXjciR.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] New dma_buf -> EGLImage EGL extension - Final spec published!
On Mon, 20 Jun 2016 09:45:26 -0400 Rob Clark <robdcl...@gmail.com> wrote: > On Mon, Jun 20, 2016 at 8:37 AM, Pekka Paalanen <ppaala...@gmail.com> wrote: > > On Fri, 17 Jun 2016 11:44:34 -0400 > > Rob Clark <robdcl...@gmail.com> wrote: > > > >> On Fri, Jun 17, 2016 at 9:31 AM, Pekka Paalanen <ppaala...@gmail.com> > >> wrote: > >> > On Fri, 17 Jun 2016 08:26:04 -0400 > >> > Rob Clark <robdcl...@gmail.com> wrote: > >> > > >> >> On Fri, Jun 17, 2016 at 3:59 AM, Pekka Paalanen <ppaala...@gmail.com> > >> >> wrote: > >> >> > On Thu, 16 Jun 2016 10:40:51 -0400 > >> >> > Rob Clark <robdcl...@gmail.com> wrote: > >> >> > > >> >> >> So, if we wanted to extend this to support the fourcc-modifiers that > >> >> >> we have on the kernel side for compressed/tiled/etc formats, what > >> >> >> would be the right approach? > >> >> >> > >> >> >> A new version of the existing extension or a new > >> >> >> EGL_EXT_image_dma_buf_import2 extension, or ?? > >> >> > > >> >> > Hi Rob, > >> >> > > >> >> > there are actually several things it might be nice to add: > >> >> > > >> >> > - a fourth plane, to match what DRM AddFB2 supports > >> >> > > >> >> > - the 64-bit fb modifiers > >> >> > > >> >> > - queries for which pixel formats are supported by EGL, so a display > >> >> > server can tell the applications that before the application goes > >> >> > and > >> >> > tries with a random bunch of them, shooting in the dark > >> >> > > >> >> > - queries for which modifiers are supported for each pixel format, > >> >> > ditto > >> >> > > >> >> > I discussed these with Emil in the past, and it seems an appropriate > >> >> > approach might be the following. > >> >> > > >> >> > Adding the 4th plane can be done as revising the existing > >> >> > EGL_EXT_image_dma_buf_import extension. The plane count is tied to > >> >> > pixel formats (and modifiers?), so the user does not need to know > >> >> > specifically whether the EGL implementation could handle a 4th plane > >> >> > or > >> >> > not. It is implied by the pixel format. > >> >> > > >> >> > Adding the fb modifiers needs to be a new extension, so that users can > >> >> > tell if they are supported or not. This is to avoid the following > >> >> > false > >> >> > failure: if user assumes modifiers are always supported, it will > >> >> > (may?) > >> >> > provide zero modifiers explicitly. If EGL implementation does not > >> >> > handle modifiers this would be rejected as unrecognized attributes, > >> >> > while if the zero modifiers were not given explicitly, everything > >> >> > would > >> >> > just work. > >> >> > >> >> hmm, if we design it as "not passing modifier" == "zero modifier", and > >> >> "never explicitly pass a zero modifier" then modifiers could be added > >> >> without a new extension. Although I agree that queries would need a > >> >> new extension.. so perhaps not worth being clever. > >> > > >> > Indeed. > >> > > >> >> > The queries obviously(?) need a new extension. It might make sense > >> >> > to bundle both modifier support and the queries in the same new > >> >> > extension. > >> >> > > >> >> > We have some rough old WIP code at > >> >> > https://git.collabora.com/cgit/user/lfrb/mesa.git/log/?h=T1410-modifiers > >> >> > https://git.collabora.com/cgit/user/lfrb/egl-specs.git/log/?h=T1410 > >> >> > > >> >> > > >> >> >> On Mon, Feb 25, 2013 at 6:54 AM, Tom Cooksey <tom.cook...@arm.com> > >> >> >> wrote: > >> >> >> > Hi All, > >> >> >> > > >> >> >> > The final spec has had enum values assigned and been published on > >> >> >
Re: [Mesa-dev] [RFC] New dma_buf -> EGLImage EGL extension - Final spec published!
On Fri, 17 Jun 2016 11:44:34 -0400 Rob Clark <robdcl...@gmail.com> wrote: > On Fri, Jun 17, 2016 at 9:31 AM, Pekka Paalanen <ppaala...@gmail.com> wrote: > > On Fri, 17 Jun 2016 08:26:04 -0400 > > Rob Clark <robdcl...@gmail.com> wrote: > > > >> On Fri, Jun 17, 2016 at 3:59 AM, Pekka Paalanen <ppaala...@gmail.com> > >> wrote: > >> > On Thu, 16 Jun 2016 10:40:51 -0400 > >> > Rob Clark <robdcl...@gmail.com> wrote: > >> > > >> >> So, if we wanted to extend this to support the fourcc-modifiers that > >> >> we have on the kernel side for compressed/tiled/etc formats, what > >> >> would be the right approach? > >> >> > >> >> A new version of the existing extension or a new > >> >> EGL_EXT_image_dma_buf_import2 extension, or ?? > >> > > >> > Hi Rob, > >> > > >> > there are actually several things it might be nice to add: > >> > > >> > - a fourth plane, to match what DRM AddFB2 supports > >> > > >> > - the 64-bit fb modifiers > >> > > >> > - queries for which pixel formats are supported by EGL, so a display > >> > server can tell the applications that before the application goes and > >> > tries with a random bunch of them, shooting in the dark > >> > > >> > - queries for which modifiers are supported for each pixel format, ditto > >> > > >> > I discussed these with Emil in the past, and it seems an appropriate > >> > approach might be the following. > >> > > >> > Adding the 4th plane can be done as revising the existing > >> > EGL_EXT_image_dma_buf_import extension. The plane count is tied to > >> > pixel formats (and modifiers?), so the user does not need to know > >> > specifically whether the EGL implementation could handle a 4th plane or > >> > not. It is implied by the pixel format. > >> > > >> > Adding the fb modifiers needs to be a new extension, so that users can > >> > tell if they are supported or not. This is to avoid the following false > >> > failure: if user assumes modifiers are always supported, it will (may?) > >> > provide zero modifiers explicitly. If EGL implementation does not > >> > handle modifiers this would be rejected as unrecognized attributes, > >> > while if the zero modifiers were not given explicitly, everything would > >> > just work. > >> > >> hmm, if we design it as "not passing modifier" == "zero modifier", and > >> "never explicitly pass a zero modifier" then modifiers could be added > >> without a new extension. Although I agree that queries would need a > >> new extension.. so perhaps not worth being clever. > > > > Indeed. > > > >> > The queries obviously(?) need a new extension. It might make sense > >> > to bundle both modifier support and the queries in the same new > >> > extension. > >> > > >> > We have some rough old WIP code at > >> > https://git.collabora.com/cgit/user/lfrb/mesa.git/log/?h=T1410-modifiers > >> > https://git.collabora.com/cgit/user/lfrb/egl-specs.git/log/?h=T1410 > >> > > >> > > >> >> On Mon, Feb 25, 2013 at 6:54 AM, Tom Cooksey <tom.cook...@arm.com> > >> >> wrote: > >> >> > Hi All, > >> >> > > >> >> > The final spec has had enum values assigned and been published on > >> >> > Khronos: > >> >> > > >> >> > http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt > >> >> > > >> >> > Thanks to all who've provided input. > >> > > >> > May I also pull your attention to a detail with the existing spec and > >> > Mesa behaviour I am asking about in > >> > https://lists.freedesktop.org/archives/mesa-dev/2016-June/120249.html > >> > "What is EGL_EXT_image_dma_buf_import image orientation as a GL texture?" > >> > Doing a dmabuf import seems to imply an y-flip AFAICT. > >> > >> I would have expected that *any* egl external image (dma-buf or > >> otherwise) should have native orientation rather than gl orientation. > >> It's somewhat useless otherwise. > > > > In that case importing dmabuf works differently than importing a > > wl_buffer
Re: [Mesa-dev] [RFC] New dma_buf -> EGLImage EGL extension - Final spec published!
On Fri, 17 Jun 2016 11:44:34 -0400 Rob Clark <robdcl...@gmail.com> wrote: > On Fri, Jun 17, 2016 at 9:31 AM, Pekka Paalanen <ppaala...@gmail.com> wrote: > > On Fri, 17 Jun 2016 08:26:04 -0400 > > Rob Clark <robdcl...@gmail.com> wrote: > > > >> On Fri, Jun 17, 2016 at 3:59 AM, Pekka Paalanen <ppaala...@gmail.com> > >> wrote: > >> > On Thu, 16 Jun 2016 10:40:51 -0400 > >> > Rob Clark <robdcl...@gmail.com> wrote: > >> > > >> >> On Mon, Feb 25, 2013 at 6:54 AM, Tom Cooksey <tom.cook...@arm.com> > >> >> wrote: > >> >> > Hi All, > >> >> > > >> >> > The final spec has had enum values assigned and been published on > >> >> > Khronos: > >> >> > > >> >> > http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt > >> >> > > >> >> > Thanks to all who've provided input. > >> > > >> > May I also pull your attention to a detail with the existing spec and > >> > Mesa behaviour I am asking about in > >> > https://lists.freedesktop.org/archives/mesa-dev/2016-June/120249.html > >> > "What is EGL_EXT_image_dma_buf_import image orientation as a GL texture?" > >> > Doing a dmabuf import seems to imply an y-flip AFAICT. > >> > >> I would have expected that *any* egl external image (dma-buf or > >> otherwise) should have native orientation rather than gl orientation. > >> It's somewhat useless otherwise. > > > > In that case importing dmabuf works differently than importing a > > wl_buffer (wl_drm), because for the latter, the y-invert flag is > > returned such that the orientation will match GL. And the direct > > scanout path goes through GBM since you have to import a wl_buffer, and > > I haven't looked what GBM does wrt. y-flip if anything. > > > >> I didn't read it carefully yet (would need caffeine first ;-)) but > >> EGL_KHR_image_base does say "This extension defines a new EGL resource > >> type that is suitable for sharing 2D arrays of image data between > >> client APIs" which to me implies native orientation. So that just > >> sounds like a mesa bug somehow? > > > > That specific sentence implies nothing about orientation to me. > > Furthermore, the paragraph continues: > > > > "Although the intended purpose is sharing 2D image data, the > > underlying interface makes no assumptions about the format or > > purpose of the resource being shared, leaving those decisions > > to the application and associated client APIs." > > > > Might "format" include orientation? > > > > How does "native orientation" connect with "GL texture coordinates"? > > The latter have explicitly defined orientation and origin. For use in > > GL, the right way up image is having the origin in the bottom-left > > corner. An image right way up is an image right way up, regardless > > which corner is the origin. The problem comes when you start using > > coordinates. > > > >> Do you just get that w/ i965? I know some linaro folks have been > >> using this extension to import buffers from video decoder with > >> freedreno/gallium and no one mentioned the video being upside down. > > > > Intel, yes, but since this happens *only* for the GL import path and > > direct scanout is fine without y-flipping, I bet people just flipped y > > and did not think twice, if there even was a problem. I just have a > > habit of asking "why". ;-) > > well, if possible, try with one of the gallium drivers? A good idea, I just need to do it at home with nouveau... which means I probably won't be getting there any time soon. > I'm honestly not 100% sure what it is supposed to be according to the > spec, but I do know some of the linaro folks were doing v4l2dec -> > glimagesink with dmabuf with both mali (I think some ST platform?) and > freedreno (snapdragon db410c), and no one complained to me that the > video was upside down for one or the other. So I guess at least > gallium and mali are doing the same thing. No idea if that is the > same thing that i965 does. Thanks, pq > > BR, > -R > > > After all, using GL with windows and FBOs and stuff you very often find > > yourself upside down, and I suspect people have got the habit of just > > flipping it if it does not look right the first time. See e.g. the > > row-order of data going into glTexImage2D... > > > > If the answer is "oops, well, dmabuf import is semantically y-flipping > > when it should not, but we cannot fix it because that would break > > everyone", I would be happy with that. I just want confirmation before > > flipping the flip flag. :-) > > > > > > Thanks, > > pq pgpdvZTYvek8E.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] New dma_buf -> EGLImage EGL extension - Final spec published!
On Fri, 17 Jun 2016 08:26:04 -0400 Rob Clark <robdcl...@gmail.com> wrote: > On Fri, Jun 17, 2016 at 3:59 AM, Pekka Paalanen <ppaala...@gmail.com> wrote: > > On Thu, 16 Jun 2016 10:40:51 -0400 > > Rob Clark <robdcl...@gmail.com> wrote: > > > >> So, if we wanted to extend this to support the fourcc-modifiers that > >> we have on the kernel side for compressed/tiled/etc formats, what > >> would be the right approach? > >> > >> A new version of the existing extension or a new > >> EGL_EXT_image_dma_buf_import2 extension, or ?? > > > > Hi Rob, > > > > there are actually several things it might be nice to add: > > > > - a fourth plane, to match what DRM AddFB2 supports > > > > - the 64-bit fb modifiers > > > > - queries for which pixel formats are supported by EGL, so a display > > server can tell the applications that before the application goes and > > tries with a random bunch of them, shooting in the dark > > > > - queries for which modifiers are supported for each pixel format, ditto > > > > I discussed these with Emil in the past, and it seems an appropriate > > approach might be the following. > > > > Adding the 4th plane can be done as revising the existing > > EGL_EXT_image_dma_buf_import extension. The plane count is tied to > > pixel formats (and modifiers?), so the user does not need to know > > specifically whether the EGL implementation could handle a 4th plane or > > not. It is implied by the pixel format. > > > > Adding the fb modifiers needs to be a new extension, so that users can > > tell if they are supported or not. This is to avoid the following false > > failure: if user assumes modifiers are always supported, it will (may?) > > provide zero modifiers explicitly. If EGL implementation does not > > handle modifiers this would be rejected as unrecognized attributes, > > while if the zero modifiers were not given explicitly, everything would > > just work. > > hmm, if we design it as "not passing modifier" == "zero modifier", and > "never explicitly pass a zero modifier" then modifiers could be added > without a new extension. Although I agree that queries would need a > new extension.. so perhaps not worth being clever. Indeed. > > The queries obviously(?) need a new extension. It might make sense > > to bundle both modifier support and the queries in the same new > > extension. > > > > We have some rough old WIP code at > > https://git.collabora.com/cgit/user/lfrb/mesa.git/log/?h=T1410-modifiers > > https://git.collabora.com/cgit/user/lfrb/egl-specs.git/log/?h=T1410 > > > > > >> On Mon, Feb 25, 2013 at 6:54 AM, Tom Cooksey <tom.cook...@arm.com> wrote: > >> > Hi All, > >> > > >> > The final spec has had enum values assigned and been published on > >> > Khronos: > >> > > >> > http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt > >> > > >> > Thanks to all who've provided input. > > > > May I also pull your attention to a detail with the existing spec and > > Mesa behaviour I am asking about in > > https://lists.freedesktop.org/archives/mesa-dev/2016-June/120249.html > > "What is EGL_EXT_image_dma_buf_import image orientation as a GL texture?" > > Doing a dmabuf import seems to imply an y-flip AFAICT. > > I would have expected that *any* egl external image (dma-buf or > otherwise) should have native orientation rather than gl orientation. > It's somewhat useless otherwise. In that case importing dmabuf works differently than importing a wl_buffer (wl_drm), because for the latter, the y-invert flag is returned such that the orientation will match GL. And the direct scanout path goes through GBM since you have to import a wl_buffer, and I haven't looked what GBM does wrt. y-flip if anything. > I didn't read it carefully yet (would need caffeine first ;-)) but > EGL_KHR_image_base does say "This extension defines a new EGL resource > type that is suitable for sharing 2D arrays of image data between > client APIs" which to me implies native orientation. So that just > sounds like a mesa bug somehow? That specific sentence implies nothing about orientation to me. Furthermore, the paragraph continues: "Although the intended purpose is sharing 2D image data, the underlying interface makes no assumptions about the format or purpose of the resource being shared, leaving those decisions to the application and associated client APIs." Might
Re: [Mesa-dev] [RFC] New dma_buf -> EGLImage EGL extension - Final spec published!
On Thu, 16 Jun 2016 10:40:51 -0400 Rob Clarkwrote: > So, if we wanted to extend this to support the fourcc-modifiers that > we have on the kernel side for compressed/tiled/etc formats, what > would be the right approach? > > A new version of the existing extension or a new > EGL_EXT_image_dma_buf_import2 extension, or ?? Hi Rob, there are actually several things it might be nice to add: - a fourth plane, to match what DRM AddFB2 supports - the 64-bit fb modifiers - queries for which pixel formats are supported by EGL, so a display server can tell the applications that before the application goes and tries with a random bunch of them, shooting in the dark - queries for which modifiers are supported for each pixel format, ditto I discussed these with Emil in the past, and it seems an appropriate approach might be the following. Adding the 4th plane can be done as revising the existing EGL_EXT_image_dma_buf_import extension. The plane count is tied to pixel formats (and modifiers?), so the user does not need to know specifically whether the EGL implementation could handle a 4th plane or not. It is implied by the pixel format. Adding the fb modifiers needs to be a new extension, so that users can tell if they are supported or not. This is to avoid the following false failure: if user assumes modifiers are always supported, it will (may?) provide zero modifiers explicitly. If EGL implementation does not handle modifiers this would be rejected as unrecognized attributes, while if the zero modifiers were not given explicitly, everything would just work. The queries obviously(?) need a new extension. It might make sense to bundle both modifier support and the queries in the same new extension. We have some rough old WIP code at https://git.collabora.com/cgit/user/lfrb/mesa.git/log/?h=T1410-modifiers https://git.collabora.com/cgit/user/lfrb/egl-specs.git/log/?h=T1410 > On Mon, Feb 25, 2013 at 6:54 AM, Tom Cooksey wrote: > > Hi All, > > > > The final spec has had enum values assigned and been published on Khronos: > > > > http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt > > > > Thanks to all who've provided input. May I also pull your attention to a detail with the existing spec and Mesa behaviour I am asking about in https://lists.freedesktop.org/archives/mesa-dev/2016-June/120249.html "What is EGL_EXT_image_dma_buf_import image orientation as a GL texture?" Doing a dmabuf import seems to imply an y-flip AFAICT. Thanks, pq pgpcTcsWCSSmv.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] What is EGL_EXT_image_dma_buf_import image orientation as a GL texture?
Hi all, I was working on Weston when I noticed that the weston-simple-dmabuf-intel demo shows the window content in different orientations depending on whether it is composited with GL or directly scanned out. Obviously something is a miss. Then I went out to find what the orientation should be according to specs, rather than just flipping the y-invert flag in Weston without thinking. The buffer is drawn with manual pixel-poking via mmap, and as a raw memory buffer, the image origin (orientation) is at top-left. When this is passed as a dmabuf to Weston and direct to scanout without any attempts in y-flipping (imported through GBM -> drmAddFB2), the image on the screen is on the same orientation as I would expect by reading the pixel-poking code. So that looks good to me. However, when the dmabuf is imported as an EGLImage and then composited from a GL texture, the orientation flips. The image on screen is y-inverted. Because Weston's GL-renderer works assuming the GL texture coordinate orientation, it y-inverts the texture coordinates when the image does *not* want y-flip. The dmabuf imported image is not flagged with y-invert, hence the problem. I could easily fix that by just toggling the flag, but I am not sure that's actually right. The above means, that the pixel 0,0 at top-left corner of the buffer ends up at GL texture coordinates 0,0. GL texture coordinates are specified to have origin at the bottom-left. This is the conflict. Importing the dmabuf semantically y-flips it. Is the Mesa implementation correct or not? I didn't find anything to answer that question. I skimmed through https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt https://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image_external.txt Did I miss something? Does it even matter if Mesa was incorrect, since it is now the de facto spec, and presumably everyone has adjusted their implementations and programs to match that already? For comparison, doing an import of a (wl_drm) wl_buffer as an EGLImage via EGL_WL_bind_wayland_display extension makes the image in GL texture coordinate orientation. That is, the EGL query for the y-invert flag returns the flag such, that the texture coordinate origin is at the bottom-left of the image. This matches the GL spec and the assumption in Weston's GL-renderer, so the image appears the right way up. I'm tracking the issue so far at: https://phabricator.freedesktop.org/T7475 Thanks, pq pgpwFBAPOxM9a.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland: Avoid race conditions when on non-main thread
On Wed, 4 May 2016 10:22:35 +0100 Daniel Stonewrote: > Hi Jonas, > > On 4 May 2016 at 09:53, Jonas Ådahl wrote: > > When EGL is used on some other thread than the thread that drives the > > main wl_display queue, the Wayland EGL dri2 implementation is > > vulnerable to a race condition related to display round trips and global > > object advertisements. > > > > The race that may happen is that after after a proxy is created, but > > before the queue is set, events meant to be emitted via the yet to be > > set queue may already have been queued on the wrong queue. > > > > In order to make it possible to avoid this race, wayland 1.11 > > introduced new API that allows creating a proxy wrapper that may be used > > as the factory proxy when creating new proxies via Wayland requests. The > > queue of a proxy wrapper can be changed without effecting what queue > > events emitted by the actual proxy will be queued on, while still > > effecting what default queue proxies created from it will have. > > This looks good to me, but I am worried about the hard dependency, > both at build and run-time. It would be great to make it optional at > build-time (#ifdef wrapper creation and destruction, just doing a > straight assignment otherwise to allow the other code to not be > #ifdefed), but even better if it could also use dlsym() to discover if > the symbol is available at runtime. Maybe the latter is overkill > though. But wouldn't that need #ifdefs in every place where the wrapper will be used, because if the wrapper is not really the wrapper, there needs to be a call to set the queue? Setting the queue is racy, yes, but it would have a chance of working which it wouldn't have otherwise, right? Potentially breaking also non-threaded apps. Jonas, is this patch missing one more wl_proxy_wrapper_destroy() for the display tear-down? I can only see the destroy in the initialize-failure path. This patch is also fixing an existing leak of wl_dpy in the case the connection was created by the EGL, I think. That should also be mentioned in the commit message. Thanks, pq pgpXcqHZgrKAm.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland: Try to use wl_surface.damage_buffer for SwapBuffersWithDamage
On Thu, 11 Feb 2016 10:34:10 -0600 Derek Foreman <der...@osg.samsung.com> wrote: > Since commit d1314de293e9e4a63c35f094c3893aaaed8580b4 we ignore > damage passed to SwapBuffersWithDamage. > > Wayland 1.10 now has functionality that allows us to properly > process those damage rectangles, and a way to query if it's > available. > > Now we can use wl_surface.damage_buffer and interpret the incoming > damage as being in buffer co-ordinates. > > Signed-off-by: Derek Foreman <der...@osg.samsung.com> > --- > src/egl/drivers/dri2/platform_wayland.c | 32 +--- > 1 file changed, 29 insertions(+), 3 deletions(-) > > diff --git a/src/egl/drivers/dri2/platform_wayland.c > b/src/egl/drivers/dri2/platform_wayland.c > index c2438f7..b5a5b59 100644 > --- a/src/egl/drivers/dri2/platform_wayland.c > +++ b/src/egl/drivers/dri2/platform_wayland.c > @@ -653,6 +653,30 @@ create_wl_buffer(struct dri2_egl_surface *dri2_surf) >_buffer_listener, dri2_surf); > } > > +static EGLBoolean > +try_damage_buffer(struct dri2_egl_surface *dri2_surf, > + const EGLint *rects, > + EGLint n_rects) > +{ > +#ifdef WL_SURFACE_DAMAGE_BUFFER_SINCE_VERSION > + int i; > + > + if (wl_proxy_get_version((struct wl_proxy *) dri2_surf->wl_win->surface) > + < WL_SURFACE_DAMAGE_BUFFER_SINCE_VERSION) > + return EGL_FALSE; > + > + for (i = 0; i < n_rects; i++) { > + const int *rect = [i * 4]; > + > + wl_surface_damage_buffer(dri2_surf->wl_win->surface, > + rect[0], > + dri2_surf->base.Height - rect[1] - rect[3], > + rect[2], rect[3]); > + } > + return EGL_TRUE; > +#endif > + return EGL_FALSE; > +} > /** > * Called via eglSwapBuffers(), drv->API.SwapBuffers(). > */ > @@ -703,10 +727,12 @@ dri2_wl_swap_buffers_with_damage(_EGLDriver *drv, > dri2_surf->dx = 0; > dri2_surf->dy = 0; > > - /* We deliberately ignore the damage region and post maximum damage, due > to > + /* If the compositor doesn't support damage_buffer, we deliberately > +* ignore the damage region and post maximum damage, due to > * https://bugs.freedesktop.org/78190 */ > - wl_surface_damage(dri2_surf->wl_win->surface, > - 0, 0, INT32_MAX, INT32_MAX); > + if (!n_rects || !try_damage_buffer(dri2_surf, rects, n_rects)) > + wl_surface_damage(dri2_surf->wl_win->surface, > +0, 0, INT32_MAX, INT32_MAX); > > if (dri2_dpy->is_different_gpu) { >_EGLContext *ctx = _eglGetCurrentContext(); Reviewed-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> But I also agree with Emil that having a comment on #ifdef WL_SURFACE_DAMAGE_BUFFER_SINCE_VERSION usage is good to add. Bumping the wayland-client requirement to >= 1.10 would be nice, but currently the requirement seems to be 1.2 so I wonder if there are other things to be cleaned up too. OTOH, with the #ifdef this patch could go to stable branches, couldn't it? How about landing this is as is, tagged for stable, and a follow-up if wanted to bump the wayland-client dependency on master? Would that be appropriate? Thanks, pq pgph0hauASsPL.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] question on GL_RED and gles
On Wed, 25 Nov 2015 15:24:07 + Julien Isorcewrote: > Hi, > > In EXT_texture_rg.txt it is mentioned of GL_RED_EXT on gles 2.0. > > In glformats.c::_mesa_es_error_check_format_and_type returns > GL_INVALID_VALUE if GL_RED_EXT(as it reaches default case) > so glTexImage2D(..., GL_RED_EXT, GL_UNSIGNED_BYTE, data) fails. > > Though GL_EXTENSIONS contains GL_EXT_texture_rg. > > So it seems that GL_RED_EXT is actually only possible with gles >= 3.0. > > What am I missing ? Hi, maybe it's a similar issue as https://bugs.freedesktop.org/show_bug.cgi?id=92265 perhaps? Thanks, pq pgpJCFGZ9jjjO.pgp Description: OpenPGP digital signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/wayland: Ignore rects from SwapBuffersWithDamage
On Sat, 7 Nov 2015 18:34:20 + Daniel Stone <dani...@collabora.com> wrote: > eglSwapBuffersWithDamage accepts damage-region rectangles to hint the > compositor that it only needs to redraw certain areas, which was passed > through the wl_surface_damage request, as designed. > > Wayland also offers a buffer transformation interface, e.g. to allow > users to render pre-rotated buffers. Unfortunately, there is no way to > query buffer transforms, and the damage region was provided in surface, > rather than buffer, co-ordinate space. > > Users could perhaps account for this themselves, but EGL also requires > co-ordinates to be passed in GL/mathematical co-ordinate space, with an > inversion to Wayland's natural/scanout co-orodinate space, so the > transformation is so convoluted that it's unreasonable to expect anyone > to do it. > > Pending creation and acceptance of a wl_surface.buffer_damage request, > which will accept co-ordinates in buffer co-ordinate space, pessimise to > always sending full-surface damage. > > bce64c6c provides the explanation for why we send maximum-range damage, > rather than the full size of the surface: in the presence of buffer > transformations, full-surface damage may not actually cover the entire > surface. > > Signed-off-by: Daniel Stone <dani...@collabora.com> > Cc: Jasper St. Pierre <jas...@mecheye.net> > --- > src/egl/drivers/dri2/platform_wayland.c | 14 ++ > 1 file changed, 2 insertions(+), 12 deletions(-) > > diff --git a/src/egl/drivers/dri2/platform_wayland.c > b/src/egl/drivers/dri2/platform_wayland.c > index 0d161f6..9e8e6ce 100644 > --- a/src/egl/drivers/dri2/platform_wayland.c > +++ b/src/egl/drivers/dri2/platform_wayland.c > @@ -703,18 +703,8 @@ dri2_wl_swap_buffers_with_damage(_EGLDriver *drv, > dri2_surf->dx = 0; > dri2_surf->dy = 0; > > - if (n_rects == 0) { > - wl_surface_damage(dri2_surf->wl_win->surface, > -0, 0, INT32_MAX, INT32_MAX); > - } else { > - for (i = 0; i < n_rects; i++) { > - const int *rect = [i * 4]; > - wl_surface_damage(dri2_surf->wl_win->surface, > - rect[0], > - dri2_surf->base.Height - rect[1] - rect[3], > - rect[2], rect[3]); > - } > - } > +wl_surface_damage(dri2_surf->wl_win->surface, > + 0, 0, INT32_MAX, INT32_MAX); > > if (dri2_dpy->is_different_gpu) { >_EGLContext *ctx = _eglGetCurrentContext(); One space too much of indent? Reviewed-by: Pekka Paalanen <pekka.paala...@collabora.co.uk> Perhaps a TODO comment referring to https://bugs.freedesktop.org/show_bug.cgi?id=78190 would be nice. Thanks, pq signature.asc Description: PGP signature ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH RFC] egl/dri2: Add dri3 support to x11 platform
On Sat, 4 Jul 2015 11:18:18 +0800 Boyan Ding boyan.j.d...@gmail.com wrote: [snip] +/* FIXME: Is this right? Seems problematic for WL_bind_wayland_display */ What seems to be the problem ? Afaik xcb_dri3_open_reply_fds should return an FD which is ok (be that a render_node device, or a master one with explicit auth). The problem is that WL_bind_wayland_display don't work under dri3 on x11. I only found it yesterday that to get it work, we'll need to add a mechanism to pass fd instead of name of dri device in wl_drm protocol. Previously, if a wayland client wants to use hardware accelerated EGL, it (with the help of libEGL in mesa) will bind to wl_drm object, and wl_drm will immediately send the name of dri device to the wayland client (actually also libEGL in mesa). After wayland platform code opens the device, it has to send the fd to the X server or drm to get authentication. Things are different with dri3, where a fd is directly sent to the client without the need to authenticate. I propose the following addition in wl_drm protocol: There are two kinds of wl_drm implementation. One is the current form. The other one, called dri3-capable (or whatever name), include wl_drm object built on dri3 directly or indirectly through wayland platform. If a client binds to a dri3-capable wl_drm object, it will send a device event to the client with NULL or empty string (so a client who knows nothing about it can safely fail). If the client knows about dri3-capable wl_drm object, it will send a request named get_fd and wl_drm will respond it with an fd acquired with dri3. If the wl_drm object is not dri3-capable it will raise an error if it receives a get_fd request, so will a dri3-capable wl_drm object if it receives authenticate request. Remember, that all protocol errors in Wayland are always fatal, and cause the whole client to be disconnected. Also, on Wayland EGL simply cannot have its private connection to the server, it *must* be shared with the whole application because otherwise EGL cannot operate on the app's wl_surfaces. If you really wanted to go this route (it seems you already have a better idea than this), just use the normal mechanism to extend the current wl_drm interface, or invent a whole new interface. Object flavours does not seem like a workable solution the way you describe, if I understood what you had in mind. Note, that the compositor and the app may be using different versions of Mesa, which means they may be using different definitions of wl_drm. Therefore, wl_drm must follow the same stable protocol rules as everything else. Thanks, pq So the following dri3_authenticate function is not needed. Let's not expose WL_bind_wayland_display for now, and its enablement should be separate patches. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] dlopen(libglapi) (Re: [PATCH 08/11] confiigure: drop unused variable GBM_BACKEND_DIRS)
On Sun, 21 Jun 2015 16:03:09 + Emil Velikov emil.l.veli...@gmail.com wrote: On 19/06/15 22:00, Eric Anholt wrote: I find 9/10 weird -- if the DRI drivers need libglapi, why aren't they just linking libglapi? Short version - currently they don't because of hysterical raisins, once we move to shared-glapi only libGL we'll add the link. Long version - the dri modules require glapi provider. Which could be either one of: 1) the xserver glapi (no longer there since ~1.14) 2) libGL with static glapi 3) the shared glapi Thanks to some interesting overlinking of libgbm, which itself has propagated to libEGL, people won't notice the issue if they're not using 1) or 2). So people working directly with EGL/gbm (wayland/weston folk, everyone writing their wayland compositor, Google with their gbm work) has noticed those and had the same workaround for a while. While most of us working with X and/or GLX did not see it. This of course is prominent, (only?) when using dlopen(libgbm/libEGL), rather than linking against them. Aaah, so this explains all the hassle with http://cgit.freedesktop.org/wayland/weston/tree/src/compositor-drm.c?id=1.8.0#n1382 When 1) and 2) are gone (second is on it's way out), we can add the link to the dri modules. Although neither helps in the case of using old dri module with new loader, thus the need for the explicit dlopen(). A nice reminder of that strange piece of code in Weston. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 7/9] egl/wayland: assume EGL_WINDOW_BIT
On Wed, 6 May 2015 11:00:13 +1000 Dave Airlie airl...@gmail.com wrote: On 2 May 2015 at 20:15, Axel Davy axel.d...@ens.fr wrote: Only EGL_WINDOW_BIT is supported. Remove tests related. Is this there no plans to support pixmap/pbuffer/ or any of the other bits? Seems like a step in the wrong direction if we really should be supporting other things than WINDOW_BIT in the future. EGL Wayland by definition does not have pixmaps: https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_platform_wayland.txt Pbuffers OTOH I suppose are possible. Thanks, pq Signed-off-by: Axel Davy axel.d...@ens.fr --- src/egl/drivers/dri2/platform_wayland.c | 38 +++-- 1 file changed, 13 insertions(+), 25 deletions(-) diff --git a/src/egl/drivers/dri2/platform_wayland.c b/src/egl/drivers/dri2/platform_wayland.c index 84482da..9603c32 100644 --- a/src/egl/drivers/dri2/platform_wayland.c +++ b/src/egl/drivers/dri2/platform_wayland.c @@ -120,7 +120,7 @@ resize_callback(struct wl_egl_window *wl_win, void *data) * Called via eglCreateWindowSurface(), drv-API.CreateWindowSurface(). */ static _EGLSurface * -dri2_wl_create_surface(_EGLDriver *drv, _EGLDisplay *disp, EGLint type, +dri2_wl_create_surface(_EGLDriver *drv, _EGLDisplay *disp, _EGLConfig *conf, void *native_window, const EGLint *attrib_list) { @@ -137,7 +137,7 @@ dri2_wl_create_surface(_EGLDriver *drv, _EGLDisplay *disp, EGLint type, return NULL; } - if (!_eglInitSurface(dri2_surf-base, disp, type, conf, attrib_list)) + if (!_eglInitSurface(dri2_surf-base, disp, EGL_WINDOW_BIT, conf, attrib_list)) goto cleanup_surf; if (conf-RedSize == 5) @@ -147,25 +147,17 @@ dri2_wl_create_surface(_EGLDriver *drv, _EGLDisplay *disp, EGLint type, else dri2_surf-format = WL_DRM_FORMAT_ARGB; - switch (type) { - case EGL_WINDOW_BIT: - dri2_surf-wl_win = window; + dri2_surf-wl_win = window; - dri2_surf-wl_win-private = dri2_surf; - dri2_surf-wl_win-resize_callback = resize_callback; + dri2_surf-wl_win-private = dri2_surf; + dri2_surf-wl_win-resize_callback = resize_callback; - dri2_surf-base.Width = -1; - dri2_surf-base.Height = -1; - break; - default: - goto cleanup_surf; - } + dri2_surf-base.Width = -1; + dri2_surf-base.Height = -1; dri2_surf-dri_drawable = (*dri2_dpy-dri2-createNewDrawable) (dri2_dpy-dri_screen, - type == EGL_WINDOW_BIT ? - dri2_conf-dri_double_config : - dri2_conf-dri_single_config, + dri2_conf-dri_double_config, dri2_surf); if (dri2_surf-dri_drawable == NULL) { _eglError(EGL_BAD_ALLOC, dri2-createNewDrawable); @@ -193,8 +185,7 @@ dri2_wl_create_window_surface(_EGLDriver *drv, _EGLDisplay *disp, struct dri2_egl_display *dri2_dpy = dri2_egl_display(disp); _EGLSurface *surf; - surf = dri2_wl_create_surface(drv, disp, EGL_WINDOW_BIT, conf, - native_window, attrib_list); + surf = dri2_wl_create_surface(drv, disp, conf, native_window, attrib_list); if (surf != NULL) dri2_wl_swap_interval(drv, disp, surf, dri2_dpy-default_swap_interval); @@ -253,10 +244,8 @@ dri2_wl_destroy_surface(_EGLDriver *drv, _EGLDisplay *disp, _EGLSurface *surf) if (dri2_surf-throttle_callback) wl_callback_destroy(dri2_surf-throttle_callback); - if (dri2_surf-base.Type == EGL_WINDOW_BIT) { - dri2_surf-wl_win-private = NULL; - dri2_surf-wl_win-resize_callback = NULL; - } + dri2_surf-wl_win-private = NULL; + dri2_surf-wl_win-resize_callback = NULL; free(surf); @@ -428,9 +417,8 @@ update_buffers(struct dri2_egl_surface *dri2_surf) dri2_egl_display(dri2_surf-base.Resource.Display); int i; - if (dri2_surf-base.Type == EGL_WINDOW_BIT - (dri2_surf-base.Width != dri2_surf-wl_win-width || -dri2_surf-base.Height != dri2_surf-wl_win-height)) { + if (dri2_surf-base.Width != dri2_surf-wl_win-width || + dri2_surf-base.Height != dri2_surf-wl_win-height) { dri2_wl_release_buffers(dri2_surf); -- 2.3.7 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org
Re: [Mesa-dev] [Mesa-stable] [PATCH] i965: Add XRGB8888 format to intel_screen_make_configs
On Thu, 16 Apr 2015 21:06:27 +0100 Daniel Stone dan...@fooishbar.org wrote: Hi, On 9 April 2015 at 17:20, Kristian Høgsberg k...@bitplanet.net wrote: On Wed, Apr 8, 2015 at 11:37 AM, Emil Velikov emil.l.veli...@gmail.com wrote: Hi all, Can we get a pair of eyes on this patch please ? Reviewed-by: Kristian Høgsberg k...@bitplanet.net Thanks for the reminder. Mind you, this does break 75b7e1df13, which explicitly removes them in order to 'fix' (scare quotes in original) a conformance test. That commit also made our lives harder with https://bugs.freedesktop.org/show_bug.cgi?id=67676 for which the suggested fix was a separate EGL_XXX_transparent_alpha extension, but between the two of these, it does seem like a more nuanced fix might be in order. Not being able to choose between XRGB and ARGB formats in the GBM backend does actually impair our ability to hoist ARGB content to planes, so at the very least, we'd need to duplicate ARGB driver_configs in the EGLConfig list: one for an ARGB native visual ID, and one for XRGB. Any thoughts? Dear Intel developers, humble ping? Any advice on what the proper course of action is? To me it seems like the issue still exists; not being able to run Weston/DRM on Intel would be a bit awkward. Thanks, pq Boyan For the future can you please include the CC mesa-stable line in the commit message. It will make things a bit more obvious as I'm pursing through the list :-) Thanks Emil On 25 March 2015 at 11:36, Boyan Ding boyan.j.d...@gmail.com wrote: Some application, such as drm backend of weston, uses XRGB config as default. i965 doesn't provide this format, but before commit 65c8965d, the drm platform of EGL takes ARGB as XRGB. Now that commit 65c8965d makes EGL recognize format correctly so weston won't start because it can't find XRGB. Add XRGB format to i965 just as other drivers do. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89689 Signed-off-by: Boyan Ding boyan.j.d...@gmail.com --- src/mesa/drivers/dri/i965/intel_screen.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/mesa/drivers/dri/i965/intel_screen.c b/src/mesa/drivers/dri/i965/intel_screen.c index 3640b67..2b82c33 100644 --- a/src/mesa/drivers/dri/i965/intel_screen.c +++ b/src/mesa/drivers/dri/i965/intel_screen.c @@ -1126,7 +1126,8 @@ intel_screen_make_configs(__DRIscreen *dri_screen) { static const mesa_format formats[] = { MESA_FORMAT_B5G6R5_UNORM, - MESA_FORMAT_B8G8R8A8_UNORM + MESA_FORMAT_B8G8R8A8_UNORM, + MESA_FORMAT_B8G8R8X8_UNORM }; /* GLX_SWAP_COPY_OML is not supported due to page flipping. */ -- 2.3.3 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] llvmpipe support for ARM?
On Wed, 15 Apr 2015 19:02:50 +0300 Marko Moberg marko.s.mob...@gmail.com wrote: Thanks for the replies. I tried to upgrade LLVM from 3.3 to 3.4 after I sent the e-mail but ended up having an internal compiler error from gcc linaro toolchain compiler. Nice. I could try a more up-to-date version of Mesa to see if it helps. Although I have learned that I can not take the most up-to-date version since something has gotten broken related to SW rendering/wayland/egl. Hi, for the record, I've been talking with Marko, and he cannot really upgrade Mesa to the very latest because of https://bugs.freedesktop.org/show_bug.cgi?id=86701 which means you cannot have Mesa software rendering on Wayland. IIRC, his target system is without X11. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] wayland-drm: add a description for every item.
=create planar drm buffer +Create a wayland buffer for the named planar DRM buffer. + +The DRM surface must have a name using the flink ioctl. + /description + arg name=id type=new_id interface=wl_buffer + summary=wl_buffer assigned to this drm buffer/ + arg name=name type=uint summary=unique buffer name/ Flink name. + arg name=width type=int summary=buffer width/ + arg name=height type=int summary=buffer height/ + arg name=format type=uint summary=see wl_drm_format/ + arg name=offset0 type=int summary=first plane offset/ + arg name=stride0 type=int summary=first plane stride/ + arg name=offset1 type=int summary=second plane offset/ + arg name=stride1 type=int summary=second plane stride/ + arg name=offset2 type=int summary=third plane offset/ + arg name=stride2 type=int summary=third plane stride/ Units? /request -!-- Notification of the path of the drm device which is used by - the server. The client should use this device for creating - local buffers. Only buffers created from this device should - be be passed to the server using this drm object's - create_buffer request. -- event name=device - arg name=name type=string/ + description summary=drm device of the server +Notification of the path of the drm device which is used by the +server. + +The client should use this device for creating local buffers. +Only buffers created from this device should be be passed to +the server using this drm object's create_buffer request. + /description + arg name=name type=string summary=path of the drm device/ Ok. /event event name=format - arg name=format type=uint/ + description summary=pixel format description +Informs the client about a valid pixel format that can be used +for buffers. + /description + arg name=format type=uint summary=pixel format/ /event -!-- Raised if the authenticate request succeeded -- -event name=authenticated/ +event name=authenticated + description summary=successful authentication +Raised if the authenticate request succeeded. + /description +/event Ok. enum name=capability since=2 - description summary=wl_drm capability bitmask -Bitmask of capabilities. + description summary=capability description +Lists the available capabilities the server can expose. You lost the most important word of the description: bitmask. It tells how to add new values to this enum. /description entry name=prime value=1 summary=wl_drm prime available/ /enum event name=capabilities - arg name=value type=uint/ + description summary=capabilities bitmask +Bitmask of capabilities supported by the server. + /description + arg name=value type=uint summary=capabilities/ Ok. Might add see wl_drm_capability. /event !-- Version 2 additions -- -!-- Create a wayland buffer for the prime fd. Use for regular and planar - buffers. Pass 0 for offset and stride for unused planes. -- request name=create_prime_buffer since=2 - arg name=id type=new_id interface=wl_buffer/ - arg name=name type=fd/ - arg name=width type=int/ - arg name=height type=int/ - arg name=format type=uint/ - arg name=offset0 type=int/ - arg name=stride0 type=int/ - arg name=offset1 type=int/ - arg name=stride1 type=int/ - arg name=offset2 type=int/ - arg name=stride2 type=int/ + description summary=create prime drm buffer +Create a wayland buffer for the prime fd. + +Use for regular and planar buffers. Pass 0 for offset and +stride for unused planes. + /description + arg name=id type=new_id interface=wl_buffer + summary=wl_buffer assigned to this drm buffer/ + arg name=name type=fd summary=prime fd/ I'm not really sure what a prime fd is, care to explain it in the description? Is it a dmabuf fd? Any additional semantics or constraints? + arg name=width type=int summary=buffer width/ + arg name=height type=int summary=buffer height/ + arg name=format type=uint summary=see wl_drm_format/ + arg name=offset0 type=int summary=first plane offset/ + arg name=stride0 type=int summary=first plane stride/ + arg name=offset1 type=int summary=second plane offset/ + arg name=stride1 type=int summary=second plane stride/ + arg name=offset2 type=int summary=third plane offset/ + arg name=stride2 type=int summary=third plane stride/ /request /interface I don't mind if you don't add the extra information I asked for. So, if you put the bitmask back, then this would be: Revieved-by: Pekka Paalanen
Re: [Mesa-dev] Performance regression on Tegra/GK20A since commit 363b53f00069
On Tue, 25 Nov 2014 17:04:36 +0900 Alexandre Courbot acour...@nvidia.com wrote: Hi Pekka, On 11/20/2014 08:41 PM, Pekka Paalanen wrote: At this point, I think it would be best to open a bug report against Mesa, and continue there. Such freezes obviously should not happen on either EGL driver. Please add me to CC on the bug. Thanks, I have opened a bug against Mesa and added you to the CC list: https://bugs.freedesktop.org/show_bug.cgi?id=86690 I will try to attach the WAYLAND_DEBUG trace you suggested to it. Cool, thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/2] configure.ac: remove enable flags for EGL and GBM Gallium state trackers
On Tue, 4 Nov 2014 23:42:44 +0100 Marek Olšák mar...@gmail.com wrote: From: Marek Olšák marek.ol...@amd.com Btw. would have been *really* nice if the commit message here explained why you do this and what all things it intentionally breaks, before pushing it. Thanks, pq --- configure.ac| 74 +++-- docs/egl.html | 7 - src/gallium/Makefile.am | 8 -- 3 files changed, 4 insertions(+), 85 deletions(-) diff --git a/configure.ac b/configure.ac index fc7d372..91e111b 100644 --- a/configure.ac +++ b/configure.ac @@ -697,20 +697,6 @@ AC_ARG_ENABLE([xlib-glx], [make GLX library Xlib-based instead of DRI-based @:@default=disabled@:@])], [enable_xlib_glx=$enableval], [enable_xlib_glx=no]) -AC_ARG_ENABLE([gallium-egl], -[AS_HELP_STRING([--enable-gallium-egl], -[enable optional EGL state tracker (not required - for EGL support in Gallium with OpenGL and OpenGL ES) - @:@default=disabled@:@])], -[enable_gallium_egl=$enableval], -[enable_gallium_egl=no]) -AC_ARG_ENABLE([gallium-gbm], -[AS_HELP_STRING([--enable-gallium-gbm], -[enable optional gbm state tracker (not required for - gbm support in Gallium) - @:@default=auto@:@])], -[enable_gallium_gbm=$enableval], -[enable_gallium_gbm=auto]) AC_ARG_ENABLE([r600-llvm-compiler], [AS_HELP_STRING([--enable-r600-llvm-compiler], @@ -1314,51 +1300,6 @@ AM_CONDITIONAL(HAVE_EGL, test x$enable_egl = xyes) AC_SUBST([EGL_LIB_DEPS]) dnl -dnl EGL Gallium configuration -dnl -if test x$enable_gallium_egl = xyes; then -if test -z $with_gallium_drivers; then -AC_MSG_ERROR([cannot enable egl_gallium without Gallium]) -fi -if test x$enable_egl = xno; then -AC_MSG_ERROR([cannot enable egl_gallium without EGL]) -fi -if test x$have_libdrm != xyes; then -AC_MSG_ERROR([egl_gallium requires libdrm = $LIBDRM_REQUIRED]) -fi -# XXX: Uncomment once converted to use static/shared pipe-drivers -#enable_gallium_loader=$enable_shared_pipe_drivers -fi -AM_CONDITIONAL(HAVE_GALLIUM_EGL, test x$enable_gallium_egl = xyes) - -dnl -dnl gbm Gallium configuration -dnl -if test x$enable_gallium_gbm = xauto; then -case $enable_gbm$enable_gallium_egl$enable_dri$with_egl_platforms in -yesyesyes*drm*) -enable_gallium_gbm=yes ;; - *) -enable_gallium_gbm=no ;; -esac -fi -if test x$enable_gallium_gbm = xyes; then -if test -z $with_gallium_drivers; then -AC_MSG_ERROR([cannot enable gbm_gallium without Gallium]) -fi -if test x$enable_gbm = xno; then -AC_MSG_ERROR([cannot enable gbm_gallium without gbm]) -fi - -if test x$enable_gallium_egl != xyes; then -AC_MSG_ERROR([gbm_gallium is only used by egl_gallium]) -fi - -enable_gallium_loader=$enable_shared_pipe_drivers -fi -AM_CONDITIONAL(HAVE_GALLIUM_GBM, test x$enable_gallium_gbm = xyes) - -dnl dnl XA configuration dnl if test x$enable_xa = xyes; then @@ -1386,9 +1327,9 @@ if test x$enable_openvg = xyes; then if test -z $with_gallium_drivers; then AC_MSG_ERROR([cannot enable OpenVG without Gallium]) fi -if test x$enable_gallium_egl = xno; then -AC_MSG_ERROR([cannot enable OpenVG without egl_gallium]) -fi + +AC_MSG_ERROR([Cannot enable OpenVG, because egl_gallium has been removed and + OpenVG hasn't been integrated into standard libEGL yet]) EGL_CLIENT_APIS=$EGL_CLIENT_APIS '$(VG_LIB)' VG_LIB_DEPS=$VG_LIB_DEPS $SELINUX_LIBS $PTHREAD_LIBS @@ -2170,8 +2111,6 @@ AC_CONFIG_FILES([Makefile src/gallium/drivers/vc4/kernel/Makefile src/gallium/state_trackers/clover/Makefile src/gallium/state_trackers/dri/Makefile - src/gallium/state_trackers/egl/Makefile - src/gallium/state_trackers/gbm/Makefile src/gallium/state_trackers/glx/xlib/Makefile src/gallium/state_trackers/omx/Makefile src/gallium/state_trackers/osmesa/Makefile @@ -2307,12 +2246,7 @@ if test $enable_egl = yes; then egl_drivers=$egl_drivers builtin:egl_dri2 fi -if test x$enable_gallium_egl = xyes; then -echo EGL drivers:${egl_drivers} egl_gallium -echo EGL Gallium STs:$EGL_CLIENT_APIS -else -echo EGL drivers:$egl_drivers -fi +echo EGL drivers:$egl_drivers fi echo diff --git a/docs/egl.html b/docs/egl.html index eebb8c7..e77c235 100644 --- a/docs/egl.html +++ b/docs/egl.html @@ -77,13 +77,6 @@ drivers will be installed to code${libdir}/egl/code./p /dd -dtcode--enable-gallium-egl/code/dt -dd - -pEnable the optional codeegl_gallium/code driver./p - -/dd -
Re: [Mesa-dev] [PATCH 1/2] configure.ac: remove enable flags for EGL and GBM Gallium state trackers
On Tue, 25 Nov 2014 16:22:25 +0100 Marek Olšák mar...@gmail.com wrote: Hi Pekka, I explained in the introductory thread [PATCH 0/2] Disable the EGL state tracker for Linux/DRI builds that it may break swrast (llvmpipe) on Wayland. I also explained all the reasons why I was about to do it. Does it break something not discussed before? Hello Marek, it's just that those explanations never ended up in git. A user was complaining about the Wayland breakage and the fact that the commit (he found it completely by himself, even!) gave absolutely no hint as to why. So I needed to seach the email archives for this thread, because I remembered it was discussed and the reason was here. I just wish you had copied it to the commit message. Cover letters do not end up in the git history. Not a problem anymore for me personally, since I have now bookmarked the thread so I can easily point people to it. The discussion about how to add the Wayland support back is now open at https://bugs.freedesktop.org/show_bug.cgi?id=86701 since there is an actual use case reported (testing GL apps inside qemu). Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Performance regression on Tegra/GK20A since commit 363b53f00069
On Thu, 20 Nov 2014 18:24:34 +0900 Alexandre Courbot acour...@nvidia.com wrote: Hi Pekka, On 11/19/2014 04:34 PM, Pekka Paalanen wrote: On Wed, 19 Nov 2014 15:32:38 +0900 Alexandre Courbot acour...@nvidia.com wrote: Some more information: CPU usage of the EGL app (glmark2 here) is much higher when this patch is applied, which I presume is what triggers the frame skips. On 11/19/2014 03:05 PM, Alexandre Courbot wrote: Hi guys, I am seeing a severe performance regression (lots frame drops when running EGL apps in Weston) on Tegra/GK20A since the following commit: commit 363b53f00069af718f64cf047f19ad5681a8bf6d Author: Marek Olšák marek.ol...@amd.com Date: Sat Nov 1 14:31:09 2014 +0100 egl: remove egl_gallium from the loader Reverting said commit on top of master brings the expected performance back. I am not knowledgeable enough about Mesa to speculate about the reason, but could we try to investigate why this happens and how we could fix this? Hi, that sounds like you used to get egl_gallium as the EGL driver, and after that patch you get egl_dri2. These two have separate Wayland platform implementations (and probably all other platforms as well?). I think that might be a lead for investigation. EGL debug environment variable (EGL_LOG_LEVEL=debug) should confirm which EGL driver gets loaded. You can force the EGL driver with e.g. EGL_DRIVER=egl_dri2. You are spot on, EGL_LOG_LEVEL revealed that I was using the egl_gallium driver while this patch makes Wayland applications use egl_dri2. If I set EGL_DRIVER=egl_gallium things go back to the expected behavior. Note, that there are two different EGL platforms in play: DRM/GBM for Weston, and Wayland for the app. Have you confirmed the problem is in the app side? That is, does Weston itself run smoothly? Weston seems to be fine, and since setting EGL_DRIVER=egl_gallium after starting it brings things back to the previous behavior I believe we can consider it is not part of this problem. Agreed. You say frame drops, how do you see that? Only on screen, or also in the app performance profile? How's the average framerate for the app? Looking at it again this is actually quite interesting. The misbehaving app is glmark2, and what happens is that despite working nicely otherwise, a given frame sometimes remain displayed for up to half a second. Now looking at the framerates reported by glmark2, I noticed that while they are capped at 60fps when using the gallium driver, the numbers are much higher when using dri2 (which is nice!). Excepted when these stuck frames happen, then the reported framerate drops dramatically, indicating that the app itself is also blocked by this. I have a hunch (wl_buffer.release not delivered in time, and client side EGL running out of available buffers), but confirming that would require a Wayland protocol dump up to such hickup. You could try to get one by setting the enviroment variable WAYLAND_DEBUG=client for glmark2. It will be a flood, especially if glmark2 succeeds in running at uncapped framerates. The trace will come to stderr, so you want to redirect that to file. You need to find out where in the trace the hickup happened. The timestamps are in milliseconds. I could then take a look (will need the whole trace). At this point, I think it would be best to open a bug report against Mesa, and continue there. Such freezes obviously should not happen on either EGL driver. Please add me to CC on the bug. Interestingly, if I run weston-simple-egl with dri2, the framerate is again capped at 60fps, so this may be something specific to glmark2. weston-simple-egl does not even try to exceed the monitor framerate. I'd expect glmark2 OTOH to set eglSwapInterval to 0 (unlimited), which means it should be limited only by Wayland roundtrips, when it replaces wl_surface's buffer and eventually needs to wait for one of the buffers to come back for re-use, so that it can continue rendering. Also, and I cannot explain why, but if there is other activity happening in Weston (e.g. another egl application, even another instance of glmark2 itself), the issue seems to not manifest itself. That's probably even more proof that my hunch might be right. But it would also mean the bug is in Weston rather than Mesa. Or in the combination of the two, plus a race. We can easily move the bug report from Mesa to Weston if that's true, anyway. Just for my education, is the egl_gallium driver going to be removed? That was recently discussed in the thread http://lists.freedesktop.org/archives/mesa-dev/2014-November/thread.html I'm not sure. What are egl_gallium and egl_dri2 doing differently? I don't know exactly, but egl_gallium is a Gallium3D state tracker and can support only Gallium3D drivers, while egl_dri2 can load all kinds of Mesa hardware drivers, AFAIU. I'm fairly sure the differences are in the internal
Re: [Mesa-dev] Performance regression on Tegra/GK20A since commit 363b53f00069
On Wed, 19 Nov 2014 15:32:38 +0900 Alexandre Courbot acour...@nvidia.com wrote: Some more information: CPU usage of the EGL app (glmark2 here) is much higher when this patch is applied, which I presume is what triggers the frame skips. On 11/19/2014 03:05 PM, Alexandre Courbot wrote: Hi guys, I am seeing a severe performance regression (lots frame drops when running EGL apps in Weston) on Tegra/GK20A since the following commit: commit 363b53f00069af718f64cf047f19ad5681a8bf6d Author: Marek Olšák marek.ol...@amd.com Date: Sat Nov 1 14:31:09 2014 +0100 egl: remove egl_gallium from the loader Reverting said commit on top of master brings the expected performance back. I am not knowledgeable enough about Mesa to speculate about the reason, but could we try to investigate why this happens and how we could fix this? Hi, that sounds like you used to get egl_gallium as the EGL driver, and after that patch you get egl_dri2. These two have separate Wayland platform implementations (and probably all other platforms as well?). I think that might be a lead for investigation. EGL debug environment variable (EGL_LOG_LEVEL=debug) should confirm which EGL driver gets loaded. You can force the EGL driver with e.g. EGL_DRIVER=egl_dri2. Note, that there are two different EGL platforms in play: DRM/GBM for Weston, and Wayland for the app. Have you confirmed the problem is in the app side? That is, does Weston itself run smoothly? FYI, I've been slowly working on https://github.com/ppaalanen/wesgr but the Weston patches are not upstream yet, and I'm in the process of updating them currently. I hope this tool might give some indication whether the skips/stalls happen in Weston or not. You say frame drops, how do you see that? Only on screen, or also in the app performance profile? How's the average framerate for the app? Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: Allow both 24 and 32 bit X visuals for RGBA configs
On Fri, 07 Nov 2014 11:32:04 -0800 Eric Anholt e...@anholt.net wrote: Pekka Paalanen ppaala...@gmail.com writes: On Thu, 06 Nov 2014 13:01:03 -0800 Ian Romanick i...@freedesktop.org wrote: I thought Eric and Chad already NAKed it in bugzilla. The problem is that applications ask for an RGBA visual for GL blending. They use the alpha channel to generate their images, but the final alpha values are, basically, random... and the composited result would be pure garbage. Reading https://bugs.freedesktop.org/show_bug.cgi?id=67676#c5 We should certainly be exposing EGLConfigs that match up to the rgba visual, though, so you can find it when you try. - Eric To me that sounds like Eric would accept having the visuals there in additional configs (as long as they are sorted after the otherwise equivalent xRGB configs?). Eric, would you like to confirm your current opinion? What I believe we want: Somebody just requesting RGBA with ChooseConfig doesn't end up forced into the depth-32 (blending) X visual. This is the most important. There is some mechanism for somebody that does want the depth 32 visual to get an EGL config to draw to it. This is important but secondary to not breaking everything else, and them having to jump through hoops is reasonable but probably avoidable. I think that is exactly what everybody already agrees on. The remaining question seems to be, should we add new configs with the blending X visual, or wait for an EGL extension to allow to request blending in generic terms (*and* add configs with the blending X visual, since that is essentially required for making, say, a new value for EGL_TRANSPARENT_TYPE to work on X11). Can you imagine other reasonable mechanisms? Btw. I noticed that EGL_TRANSPARENT_TYPE defaults to EGL_NONE for eglChooseConfig, not DONT_CARE (which is not even allowed in EGL 1.4). So if we add only configs now, and add the EGL_TRANSPARENT_TYPE=alpha extension later, apps using eglChooseConfig for initial config filtering and wanting a blending config will be broken yet again. I suppose one might even claim, that exposing bleding configs when EGL_TRANSPARENT_TYPE=NONE is a violation of the spirit of the spec. EGL 1.4 says that EGL_TRANSPARENT_TYPE is not a sorting key at all, but NATIVE_VISUAL_ID is with an implementation specified order. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: Allow both 24 and 32 bit X visuals for RGBA configs
On Thu, 06 Nov 2014 13:01:03 -0800 Ian Romanick i...@freedesktop.org wrote: I thought Eric and Chad already NAKed it in bugzilla. The problem is that applications ask for an RGBA visual for GL blending. They use the alpha channel to generate their images, but the final alpha values are, basically, random... and the composited result would be pure garbage. Reading https://bugs.freedesktop.org/show_bug.cgi?id=67676#c5 We should certainly be exposing EGLConfigs that match up to the rgba visual, though, so you can find it when you try. - Eric To me that sounds like Eric would accept having the visuals there in additional configs (as long as they are sorted after the otherwise equivalent xRGB configs?). Eric, would you like to confirm your current opinion? As Chad points out in comment #1, EGL just doesn't let applications do the thing the patch is trying to do. True in general, not exactly for X11. Surely the native visual is exposed in EGL configs for some reason? (Ok, yeah, not a good argument, EGL considered.) For the record, if a Wayland compositor uses the opaque region hint in Wayland core protocol, we do not see this problem on Wayland at all. In Wayland, so far the implementations always assume that if the alpha channel exists in the buffer, it will be used for window blending, except on regions marked as opaque. So we kind of already have a way to make it work in Wayland, and just X11 is broken here. Still, I won't even try to deny that an extension to EGL_TRANSPARENT_TYPE would be hugely useful and the superior solution over all the hacking. I do understand you would rather see that developed than a native visual based solution. More below... On 11/06/2014 05:12 AM, Emil Velikov wrote: Humble ping x2 On 14/10/14 15:25, Emil Velikov wrote: Humble ping. On 23/09/14 01:25, Emil Velikov wrote: From: Sjoerd Simons sjoerd.sim...@collabora.co.uk When using RGBA EGLConfigs allow both RGB and RGBA X visuals, such that application can decide whether they want to use RGBA (and have the compositor blend their windows). On my system with this change EGLConfigs with a 24 bit visual comes up first, as such applications blindly picking the first EGLConfig will still get an RGB X visual. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67676 --- Hello gents, This patch has been stuck in bugzilla since February this year. Bringing it around here to gather greater exposure and perhaps some comments/reviews. -Emil src/egl/drivers/dri2/egl_dri2.c | 5 + src/egl/drivers/dri2/platform_x11.c | 17 + 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c index 20a7243..2ed90a7 100644 --- a/src/egl/drivers/dri2/egl_dri2.c +++ b/src/egl/drivers/dri2/egl_dri2.c @@ -110,6 +110,11 @@ EGLint dri2_to_egl_attribute_map[] = { static EGLBoolean dri2_match_config(const _EGLConfig *conf, const _EGLConfig *criteria) { + + if (criteria-NativeVisualID != EGL_DONT_CARE +conf-NativeVisualID != criteria-NativeVisualID) + return EGL_FALSE; Does this directly affect the behaviour of eglChooseConfig? EGL 1.4 spec quite clearly says that NATIVE_VISUAL_ID is ignored if specified as matching criterion to eglChooseConfig, which is why apps are expected to match it manually rather than via eglChooseConfig. Doesn't that mean that apps that want alpha blending of the window already check the native visual? Maybe not if it worked by accident... Now, this magical alpha-blending visual thing used to work the past somehow for a long time, didn't it? Until someone noticed a performance problem (fd.o #59783), not even a correctness problem, or are there other bugs about correctness too? The patch that caused bug #59783 to be reported was an intel-driver specific commit, and then #59783 got fixed by a hardware agnostic change elsewhere, causing some existing apps to produce unwanted results rather than just affect performance, leading to fd.o #67676. To not break most of existing apps that need alpha for rendering but do not intend the alpha channel to go to the window system blending, we only need to make sure the configs with argb visual get sorted after the equivalent xrgb visual config? Reading the EGL 1.4 spec, that seems perfectly valid. I don't know if this patch guarantees the config ordering though. Thanks, pq + if (_eglCompareConfigs(conf, criteria, NULL, EGL_FALSE) != 0) return EGL_FALSE; diff --git a/src/egl/drivers/dri2/platform_x11.c b/src/egl/drivers/dri2/platform_x11.c index a7a7338..3395fb7 100644 --- a/src/egl/drivers/dri2/platform_x11.c +++ b/src/egl/drivers/dri2/platform_x11.c @@ -672,14 +672,15 @@ dri2_x11_add_configs_for_visuals(struct dri2_egl_display *dri2_dpy, dri2_add_config(disp, dri2_dpy-driver_configs[j], id++,
Re: [Mesa-dev] [PATCH 0/2] Disable the EGL state tracker for Linux/DRI builds
On Tue, 4 Nov 2014 23:42:43 +0100 Marek Olšák mar...@gmail.com wrote: Hi everybody, I'm about to address this long-standing issue: The EGL state tracker is redundant. It duplicates what st/dri does and it also duplicates what the common loader egl_dri2 does, which is used by all classic drivers and even works better with gallium drivers. Let's compare EGL extensions for both backends: st/egl: EGL version string: 1.4 (Gallium) EGL client APIs: OpenGL OpenGL_ES OpenGL_ES2 OpenVG EGL extensions string: EGL_WL_bind_wayland_display EGL_KHR_image_base EGL_KHR_image_pixmap EGL_KHR_image EGL_KHR_reusable_sync EGL_KHR_fence_sync EGL_KHR_surfaceless_context EGL_NOK_swap_region EGL_NV_post_sub_buffer egl_dri2: EGL version string: 1.4 (DRI2) EGL client APIs: OpenGL OpenGL_ES OpenGL_ES2 OpenGL_ES3 EGL extensions string: EGL_MESA_drm_image EGL_MESA_configless_context EGL_KHR_image_base EGL_KHR_image_pixmap EGL_KHR_image EGL_KHR_gl_texture_2D_image EGL_KHR_gl_texture_cubemap_image EGL_KHR_gl_renderbuffer_image EGL_KHR_surfaceless_context EGL_KHR_create_context EGL_NOK_swap_region EGL_NOK_texture_from_pixmap EGL_CHROMIUM_sync_control EGL_EXT_image_dma_buf_import EGL_NV_post_sub_buffer egl_dri2 also supports MSAA on the window framebuffer (through st/dri). It's really obvious which one is better. I am slightly surprised you do not get EGL_WL_bind_wayland_display on DRI2. Wonder why that is... Other things not in your dri2 that are in gallium are KHR_reusable_sync and KHR_fence_sync. Does that matter? I'm aware of 2 features that we will lose: - swrast on Wayland - I'm not sure about this. Perhaps kms-swrast has addressed this already. I don't think it does. Isn't kms-swrast about the EGL DRM/KMS platform but using dumb buffers with software rendering instead of real acceleratable buffers? That is, it could be used by Wayland display servers, but not Wayland apps. I don't think that has anything to do with the EGL Wayland platform that is used by applications on Wayland. I am not aware of anything else than egl_gallium.so implementing the glue for doing software rendering into wl_shm-based buffers, which do not need *any* special support from the display server. This means that the server does not need to use EGL at all while still supporting software-GL in apps. It also allows one to use a non-Wayland supporting EGL implementation to accelerate compositing in the server, while still supporting software rendered GL apps. In other words, I believe that removing egl_gallium.so will prevent running GL apps on Wayland with software-GL, like you suspected. I'm not sure how used that is, but every once in a while I explain to someone how to get software-GL going, so I would expect a few upset people from that. I have no clue what it would take to support swrast via wl_shm on EGL Wayland platform with egl_dri2. I tried to ask if the kms-swrast made that any easier, but I was left with the feeling that it doesn't. Swrast on wl_shm would be the sensible thing, but swrast on DRM buffers on Wayland platform would probably be useless, at least until we have a generic dmabuf protocol in place (which is a whole another saga with no end in sight), and it would still explicitly depend on DRM. Personally I like the idea of getting rid of egl_gallium.so, but indeed we do lose some things if that is done right now. Or maybe this would be the kick needed to have someone look at implementing wl_shm support for swrast in egl_dri2... Thanks, pq - OpenVG - It has never taken off. If people want this on Linux, it should use egl_dri2 and st/dri, like OpenGL does. This series removes st/egl and st/gbm support from the autoconf build (the latter depends on the former and is probably just as redundant). The next step is to remove all Linux-specific backends from st/egl. Windows, Android, and other platform backends will be kept intact, therefore st/egl won't be removed completely. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] Revert configure: ask vdpau.pc for the default location of the vdpau drivers
On Sun, 28 Sep 2014 13:00:47 -0400 Ilia Mirkin imir...@alum.mit.edu wrote: This reverts commit bbe6f7f865cd4316b5f885507ee0b128a20686eb. This change broke the usual assumption that setting a prefix will cause files to be installed into that prefix. Restore that assumption before people's system installs are accidentally overwritten, and everyone starts having to add a --with-libvdpau-dir as a hack to work around this. Signed-off-by: Ilia Mirkin imir...@alum.mit.edu Cc: Christian König deathsim...@vodafone.de Cc: Alexandre Demers alexandre.f.dem...@gmail.com Cc: Matt Turner matts...@gmail.com Cc: Emil Velikov emil.l.veli...@gmail.com --- Emil and I had a discussion about this on IRC but he didn't seem to agree. However I think that this behavior is as broken as opening up ld.so.conf, grabbing the first dir from that and using that as the installation libdir. (Which I think everyone can agree is ridiculous.) Setting a prefix should cause things to be installed into that prefix. This is how every other autoconf-using package behaves. As a side-note, using --with-module-dir as an installation destination is a little crazy, but... I don't have the energy to fight that one. configure.ac | 11 +++ 1 file changed, 3 insertions(+), 8 deletions(-) Hi, so do I understand correctly, that assuming there is a system libvdpau installed but not another one in the $prefix, then Emil wants the system libvdpau to automatically discover the backends from $prefix? (Not literally, but with an equivalent effect.) If so, I would find that strange. A system library finding modules automatically from inside a $prefix is certainly unexpected. One uses a separate $prefix to exactly avoid that. Yet, from the IRC and this discussion here, it sounds like the reason to make the original patch, except it's just the other way around: cannot make system libvdpau go look in arbitrary $prefixes, so install into the system paths instead. If that was not the reason, then I completely missed it, sorry. FWIW, I agree with Ilia: 'make install' should never attempt to write outside of the given $prefix (or DESTDIR, whatwasit). It didn't even cross my mind before that some build might actually try writing to system paths regardless (and then simply fail half-way through installing, leaving me with a broken $prefix). Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: fix EXT_image_dma_buf_import fds
On Wed, 13 Aug 2014 19:46:40 +0300 Pohjolainen, Topi topi.pohjolai...@intel.com wrote: On Fri, Aug 08, 2014 at 05:28:59PM +0300, Pekka Paalanen wrote: From: Pekka Paalanen pekka.paala...@collabora.co.uk The EGL_EXT_image_dma_buf_import specification was revised (according to its revision history) on Dec 5th, 2013, for EGL to not take ownership of the file descriptors. Do not close the file descriptors passed in to eglCreateImageKHR with EGL_LINUX_DMA_BUF_EXT target. It is assumed, that the drivers, which ultimately process the file descriptors, do not close or modify them in any way either. This avoids the need to dup(), as it seems we would only need to just close the dup'd file descriptors right after. Signed-off-by: Pekka Paalanen pekka.paala...@collabora.co.uk I wrote the current logic based on the older version, and at least to me this is the right thing to do. Thanks for fixing it as well as taking care of the piglit test. Reviewed-by: Topi Pohjolainen topi.pohjolai...@intel.com I would be happier though if someone else gave his/her approval as well. Thank you, I have added your R-b, and will wait some more. I think I want the piglit patch landed first before I try to push this, anyway. Thanks for the piglit review too, I sent a new version with your R-b and the comment fix. - pq --- Hi, the corresponding Piglit fix has already been sent to the piglit mailing list. Both this and that need to be applied to not regress Mesa' piglit run by one test (ext_image_dma_buf_import-ownership_transfer). This patch fixes my test case on heavily modified Weston. Thanks, pq --- src/egl/drivers/dri2/egl_dri2.c | 37 ++--- 1 file changed, 6 insertions(+), 31 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c index 5602ec3..cd85fd3 100644 --- a/src/egl/drivers/dri2/egl_dri2.c +++ b/src/egl/drivers/dri2/egl_dri2.c @@ -1678,36 +1678,13 @@ dri2_check_dma_buf_format(const _EGLImageAttribs *attrs) /** * The spec says: * - * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, - * the EGL takes ownership of the file descriptor and is responsible for - * closing it, which it may do at any time while the EGLDisplay is - * initialized. + * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, the + * EGL will take a reference to the dma_buf(s) which it will release at any + * time while the EGLDisplay is initialized. It is the responsibility of the + * application to close the dma_buf file descriptors. + * + * Therefore we must never close or otherwise modify the file descriptors. */ -static void -dri2_take_dma_buf_ownership(const int *fds, unsigned num_fds) -{ - int already_closed[num_fds]; - unsigned num_closed = 0; - unsigned i, j; - - for (i = 0; i num_fds; ++i) { - /** - * The same file descriptor can be referenced multiple times in case more - * than one plane is found in the same buffer, just with a different - * offset. - */ - for (j = 0; j num_closed; ++j) { - if (already_closed[j] == fds[i]) -break; - } - - if (j == num_closed) { - close(fds[i]); - already_closed[num_closed++] = fds[i]; - } - } -} - static _EGLImage * dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, EGLClientBuffer buffer, const EGLint *attr_list) @@ -1770,8 +1747,6 @@ dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, return EGL_NO_IMAGE_KHR; res = dri2_create_image_from_dri(disp, dri_image); - if (res) - dri2_take_dma_buf_ownership(fds, num_fds); return res; } -- 1.8.5.5 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] i965: fix compiler error in union initiliazer
From: Pekka Paalanen pekka.paala...@collabora.co.uk gcc 4.6.3 chokes with the following error: brw_vec4.cpp: In member function 'int brw::vec4_visitor::setup_uniforms(int)': brw_vec4.cpp:1496:37: error: expected primary-expression before '.' token Apparently C++ does not do named initializers for unions, except maybe as a gcc extension, which is not present here. As .f is the first element of the union, just drop it. Fixes the build error. Cc: Matt Turner matts...@gmail.com Signed-off-by: Pekka Paalanen pekka.paala...@collabora.co.uk --- src/mesa/drivers/dri/i965/brw_vec4.cpp | 2 +- src/mesa/drivers/dri/i965/brw_vec4_visitor.cpp | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/mesa/drivers/dri/i965/brw_vec4.cpp b/src/mesa/drivers/dri/i965/brw_vec4.cpp index 5f8f399..5d4a92c 100644 --- a/src/mesa/drivers/dri/i965/brw_vec4.cpp +++ b/src/mesa/drivers/dri/i965/brw_vec4.cpp @@ -1493,7 +1493,7 @@ vec4_visitor::setup_uniforms(int reg) reralloc(NULL, stage_prog_data-param, const gl_constant_value *, 4); for (unsigned int i = 0; i 4; i++) { unsigned int slot = this-uniforms * 4 + i; -static gl_constant_value zero = { .f = 0.0 }; +static gl_constant_value zero = { 0.0 }; stage_prog_data-param[slot] = zero; } diff --git a/src/mesa/drivers/dri/i965/brw_vec4_visitor.cpp b/src/mesa/drivers/dri/i965/brw_vec4_visitor.cpp index 4863cae..ce64b30 100644 --- a/src/mesa/drivers/dri/i965/brw_vec4_visitor.cpp +++ b/src/mesa/drivers/dri/i965/brw_vec4_visitor.cpp @@ -695,7 +695,7 @@ vec4_visitor::setup_uniform_values(ir_variable *ir) components++; } for (; i 4; i++) { -static gl_constant_value zero = { .f = 0.0 }; +static gl_constant_value zero = { 0.0 }; stage_prog_data-param[uniforms * 4 + i] = zero; } -- 1.8.5.5 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: fix EXT_image_dma_buf_import fds
On Thu, 14 Aug 2014 19:43:58 +0300 Pohjolainen, Topi topi.pohjolai...@intel.com wrote: On Thu, Aug 14, 2014 at 08:50:32AM -0700, Matt Turner wrote: On Thu, Aug 14, 2014 at 12:24 AM, Pekka Paalanen ppaala...@gmail.com wrote: On Wed, 13 Aug 2014 19:46:40 +0300 Pohjolainen, Topi topi.pohjolai...@intel.com wrote: On Fri, Aug 08, 2014 at 05:28:59PM +0300, Pekka Paalanen wrote: From: Pekka Paalanen pekka.paala...@collabora.co.uk The EGL_EXT_image_dma_buf_import specification was revised (according to its revision history) on Dec 5th, 2013, for EGL to not take ownership of the file descriptors. Do not close the file descriptors passed in to eglCreateImageKHR with EGL_LINUX_DMA_BUF_EXT target. It is assumed, that the drivers, which ultimately process the file descriptors, do not close or modify them in any way either. This avoids the need to dup(), as it seems we would only need to just close the dup'd file descriptors right after. Signed-off-by: Pekka Paalanen pekka.paala...@collabora.co.uk I wrote the current logic based on the older version, and at least to me this is the right thing to do. Thanks for fixing it as well as taking care of the piglit test. Reviewed-by: Topi Pohjolainen topi.pohjolai...@intel.com I would be happier though if someone else gave his/her approval as well. Thank you, I have added your R-b, and will wait some more. I think I want the piglit patch landed first before I try to push this, anyway. Thanks for the piglit review too, I sent a new version with your R-b and the comment fix. The plan is to make the 10.3 branch tomorrow, so don't wait too long. :) Just had a chat with Matt, better for you to commit the fix. I'll push your piglit patch. There is still time before next release if for some reason there is something amiss. Pushed to Mesa master only. No stable CCs per Matt's reqeust. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: fix EXT_image_dma_buf_import fds
On Wed, 13 Aug 2014 07:41:50 +0200 Gwenole Beauchesne gb.de...@gmail.com wrote: Hi, 2014-08-08 16:28 GMT+02:00 Pekka Paalanen ppaala...@gmail.com: From: Pekka Paalanen pekka.paala...@collabora.co.uk The EGL_EXT_image_dma_buf_import specification was revised (according to its revision history) on Dec 5th, 2013, for EGL to not take ownership of the file descriptors. Do not close the file descriptors passed in to eglCreateImageKHR with EGL_LINUX_DMA_BUF_EXT target. It is assumed, that the drivers, which ultimately process the file descriptors, do not close or modify them in any way either. This avoids the need to dup(), as it seems we would only need to just close the dup'd file descriptors right after. Since this was a so radical change, shouldn't have change the API string name instead to EXT_image_dma_buf_import2 for instance? It's like this in the Khronos registry, so how could I change the name? That is, I can write the code, but I need to know that is actually wanted and correct, and someone will do the same for the specs at Khronos. FWIW, when I wrote experimental dma_buf support in Weston, I was reading the Khronos spec. I just made a temporary hack to briefly be able to test on Mesa. And I know of a proprietary EGL implementation that implements this like the new spec, not like Mesa. I would like to consider this as just a Mesa bug, and a spec bug that was already fixed. Anyway, could you please point to the following bug in your patch? https://bugs.freedesktop.org/show_bug.cgi?id=76188 Done, thanks for the pointer. I also commented on the bug. After someone tells me whether or not I should add the stable CC's, I can send a new version. Thanks, pq Thanks. the corresponding Piglit fix has already been sent to the piglit mailing list. Both this and that need to be applied to not regress Mesa' piglit run by one test (ext_image_dma_buf_import-ownership_transfer). This patch fixes my test case on heavily modified Weston. Thanks, pq --- src/egl/drivers/dri2/egl_dri2.c | 37 ++--- 1 file changed, 6 insertions(+), 31 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c index 5602ec3..cd85fd3 100644 --- a/src/egl/drivers/dri2/egl_dri2.c +++ b/src/egl/drivers/dri2/egl_dri2.c @@ -1678,36 +1678,13 @@ dri2_check_dma_buf_format(const _EGLImageAttribs *attrs) /** * The spec says: * - * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, - * the EGL takes ownership of the file descriptor and is responsible for - * closing it, which it may do at any time while the EGLDisplay is - * initialized. + * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, the + * EGL will take a reference to the dma_buf(s) which it will release at any + * time while the EGLDisplay is initialized. It is the responsibility of the + * application to close the dma_buf file descriptors. + * + * Therefore we must never close or otherwise modify the file descriptors. */ -static void -dri2_take_dma_buf_ownership(const int *fds, unsigned num_fds) -{ - int already_closed[num_fds]; - unsigned num_closed = 0; - unsigned i, j; - - for (i = 0; i num_fds; ++i) { - /** - * The same file descriptor can be referenced multiple times in case more - * than one plane is found in the same buffer, just with a different - * offset. - */ - for (j = 0; j num_closed; ++j) { - if (already_closed[j] == fds[i]) -break; - } - - if (j == num_closed) { - close(fds[i]); - already_closed[num_closed++] = fds[i]; - } - } -} - static _EGLImage * dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, EGLClientBuffer buffer, const EGLint *attr_list) @@ -1770,8 +1747,6 @@ dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, return EGL_NO_IMAGE_KHR; res = dri2_create_image_from_dri(disp, dri_image); - if (res) - dri2_take_dma_buf_ownership(fds, num_fds); return res; } -- 1.8.5.5 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev Regards, ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] egl_dri2: fix EXT_image_dma_buf_import fds
From: Pekka Paalanen pekka.paala...@collabora.co.uk The EGL_EXT_image_dma_buf_import specification was revised (according to its revision history) on Dec 5th, 2013, for EGL to not take ownership of the file descriptors. Do not close the file descriptors passed in to eglCreateImageKHR with EGL_LINUX_DMA_BUF_EXT target. It is assumed, that the drivers, which ultimately process the file descriptors, do not close or modify them in any way either. This avoids the need to dup(), as it seems we would only need to just close the dup'd file descriptors right after. Signed-off-by: Pekka Paalanen pekka.paala...@collabora.co.uk --- Hi, the corresponding Piglit fix has already been sent to the piglit mailing list. Both this and that need to be applied to not regress Mesa' piglit run by one test (ext_image_dma_buf_import-ownership_transfer). This patch fixes my test case on heavily modified Weston. Thanks, pq --- src/egl/drivers/dri2/egl_dri2.c | 37 ++--- 1 file changed, 6 insertions(+), 31 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c index 5602ec3..cd85fd3 100644 --- a/src/egl/drivers/dri2/egl_dri2.c +++ b/src/egl/drivers/dri2/egl_dri2.c @@ -1678,36 +1678,13 @@ dri2_check_dma_buf_format(const _EGLImageAttribs *attrs) /** * The spec says: * - * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, - * the EGL takes ownership of the file descriptor and is responsible for - * closing it, which it may do at any time while the EGLDisplay is - * initialized. + * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, the + * EGL will take a reference to the dma_buf(s) which it will release at any + * time while the EGLDisplay is initialized. It is the responsibility of the + * application to close the dma_buf file descriptors. + * + * Therefore we must never close or otherwise modify the file descriptors. */ -static void -dri2_take_dma_buf_ownership(const int *fds, unsigned num_fds) -{ - int already_closed[num_fds]; - unsigned num_closed = 0; - unsigned i, j; - - for (i = 0; i num_fds; ++i) { - /** - * The same file descriptor can be referenced multiple times in case more - * than one plane is found in the same buffer, just with a different - * offset. - */ - for (j = 0; j num_closed; ++j) { - if (already_closed[j] == fds[i]) -break; - } - - if (j == num_closed) { - close(fds[i]); - already_closed[num_closed++] = fds[i]; - } - } -} - static _EGLImage * dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, EGLClientBuffer buffer, const EGLint *attr_list) @@ -1770,8 +1747,6 @@ dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, return EGL_NO_IMAGE_KHR; res = dri2_create_image_from_dri(disp, dri_image); - if (res) - dri2_take_dma_buf_ownership(fds, num_fds); return res; } -- 1.8.5.5 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl_dri2: fix EXT_image_dma_buf_import fds
On Fri, 8 Aug 2014 17:28:59 +0300 Pekka Paalanen ppaala...@gmail.com wrote: From: Pekka Paalanen pekka.paala...@collabora.co.uk The EGL_EXT_image_dma_buf_import specification was revised (according to its revision history) on Dec 5th, 2013, for EGL to not take ownership of the file descriptors. Do not close the file descriptors passed in to eglCreateImageKHR with EGL_LINUX_DMA_BUF_EXT target. It is assumed, that the drivers, which ultimately process the file descriptors, do not close or modify them in any way either. This avoids the need to dup(), as it seems we would only need to just close the dup'd file descriptors right after. Signed-off-by: Pekka Paalanen pekka.paala...@collabora.co.uk --- Hi, the corresponding Piglit fix has already been sent to the piglit mailing list. Both this and that need to be applied to not regress Mesa' piglit run by one test (ext_image_dma_buf_import-ownership_transfer). This patch fixes my test case on heavily modified Weston. Sorry, I forgot. This patch would apply also to the 10.0, 10.1, and 10.2 stable branches. Should it be a candidate there? Thanks, pq --- src/egl/drivers/dri2/egl_dri2.c | 37 ++--- 1 file changed, 6 insertions(+), 31 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c index 5602ec3..cd85fd3 100644 --- a/src/egl/drivers/dri2/egl_dri2.c +++ b/src/egl/drivers/dri2/egl_dri2.c @@ -1678,36 +1678,13 @@ dri2_check_dma_buf_format(const _EGLImageAttribs *attrs) /** * The spec says: * - * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, - * the EGL takes ownership of the file descriptor and is responsible for - * closing it, which it may do at any time while the EGLDisplay is - * initialized. + * If eglCreateImageKHR is successful for a EGL_LINUX_DMA_BUF_EXT target, the + * EGL will take a reference to the dma_buf(s) which it will release at any + * time while the EGLDisplay is initialized. It is the responsibility of the + * application to close the dma_buf file descriptors. + * + * Therefore we must never close or otherwise modify the file descriptors. */ -static void -dri2_take_dma_buf_ownership(const int *fds, unsigned num_fds) -{ - int already_closed[num_fds]; - unsigned num_closed = 0; - unsigned i, j; - - for (i = 0; i num_fds; ++i) { - /** - * The same file descriptor can be referenced multiple times in case more - * than one plane is found in the same buffer, just with a different - * offset. - */ - for (j = 0; j num_closed; ++j) { - if (already_closed[j] == fds[i]) -break; - } - - if (j == num_closed) { - close(fds[i]); - already_closed[num_closed++] = fds[i]; - } - } -} - static _EGLImage * dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, EGLClientBuffer buffer, const EGLint *attr_list) @@ -1770,8 +1747,6 @@ dri2_create_image_dma_buf(_EGLDisplay *disp, _EGLContext *ctx, return EGL_NO_IMAGE_KHR; res = dri2_create_image_from_dri(disp, dri_image); - if (res) - dri2_take_dma_buf_ownership(fds, num_fds); return res; } ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 3/6] Add support for swrast to the DRM EGL platform
On Thu, 24 Jul 2014 17:16:39 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: On 24/07/14 14:30, Pekka Paalanen wrote: On Thu, 24 Jul 2014 13:34:42 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: On 24/07/14 07:23, Pekka Paalanen wrote: On Thu, 24 Jul 2014 01:43:35 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: From: Giovanni Campagna gcampa...@src.gnome.org Turn GBM into a swrast loader (providing putimage/getimage backed by a dumb KMS buffer). This allows to run KMS+DRM GL applications (such as weston or mutter-wayland) unmodified on cards that don't have any client side HW acceleration component but that can do modeset (examples include simpledrm and qxl) [Emil Velikov] - Fix make check. - Split dri_open_driver() from dri_load_driver(). - Don't try to bind the swrast extensions when using dri. - Handle swrast-CreateNewScreen() failure. - strdup the driver_name, as it's free'd at destruction. - s/LIBGL_ALWAYS_SOFTWARE/GBM_ALWAYS_SOFTWARE/ - Move gbm_dri_bo_map/unmap to gbm_driiint.h. - Correct swrast fallback logic. Signed-off-by: Emil Velikov emil.l.veli...@gmail.com --- src/egl/drivers/dri2/platform_drm.c | 153 +++ src/gbm/backends/dri/gbm_dri.c | 203 +++- src/gbm/backends/dri/gbm_driint.h | 57 +- src/gbm/gbm-symbols-check | 1 + src/gbm/main/gbm.h | 3 + 5 files changed, 369 insertions(+), 48 deletions(-) ... is GBM_ALWAYS_SOFTWARE a new env var? Is it documented somewhere? None of the GBM env variables are. In a matter of fact there isn't even a single reference of gbm in the docs - env vars or otherwise. It's like gbm does not exist :'( Will need to get a new document out there similar to egl.html. Any objections if we do that as a follow up patch ? I would be happy to see any documentation at some point. :-) How does it interact with EGL_SOFTWARE? Does GBM_ALWAYS_SOFTWARE affect GBM's ability to import dmabufs somehow, or only the surfaces that will be passed to EGL? (Importing dmabufs to be passed directly to KMS for scanout.) Considering it's a variable that needs to be explicitly set, the behaviour depends on the current state of the software backend. AFAICS current swrast/kms_swrast does not allow/use dmabufs. Maybe that was a stupid question on my part, as I don't know whether generic DRM API even has a way to import dmabufs at all. Something like dumb buffer import... AFAICS one needs to use a device driver capable of handling dmabufs, otherwise the core drm will return EINVAL/ENOSYS. I don't see any software implementation (dumb buffer) that can be used here. IMHO all the above should be mentioned somewhere/documented rather than expecting everything to magically work exactly as you expected it to. I'm terribly confused with all the *SOFTWARE* variables, and it seems I'm not alone as someone just recently filed a bunch of Weston bug reports while trying to get software GL rendering with LIBGL_ALWAYS_SOFTWARE on DRM/KMS. I have the sneaking suspicion that most people see LIBGL_ALWAYS_SOFTWARE as the magic variable that reads your mind and makes things work as you would imagine them. Would be great to move from that to read the documentation and amend propose improvements of it when needed. Right. What about EGL_SOFTWARE? Why not use that on GBM platform too? A quick grep implies that X11 and Wayland platforms use it (but only with egl_gallium.so?)? A bit of history - when Chia-I was doing all the EGL work, he was considerable enough to make individual switches for people to toggle when needed. He also documented all of these :) GBM_ALWAYS_SOFTWARE sounds like a platform-specific variable, but should forcing software rendering be platform-specific? GBM is not EGL I fear. Also EGL is not the only user of GBM. Sure, but GBM does no rendering at all, does it? It's strange, GBM needs to load a driver but does not render, so forcing software rendering sounds funny. But obviously you want to be able to tell GBM to load a software driver instead of a hardware one. Certainly does not make this any easier to understand. Well, the same can be said about EGL... So, this thing is about GBM the driver loader, not GBM the EGL platform. Maybe that is where I got confused. (The subject does speak explicitly about EGL platform.) If we were to have zero users of libgbm outside of mesa(libegl) and then we can happily go with EGL_SOFTWARE and be gone with it. Perhaps the most reasonable compromise is to use the GBM env var and fallback to the EGL one. How does that sound ? You raise a good point there. GBM is initialized first, whether it is used as an EGL platform or stand-alone (just for importing dmabufs, I think... what about libva?). For example, Weston uses GBM as an EGL platform for GL-based
[Mesa-dev] Env var for forcing sw driver (Re: [PATCH 3/6] Add support for swrast to the DRM EGL platform)
On Fri, 25 Jul 2014 09:14:40 +0300 Pekka Paalanen ppaala...@gmail.com wrote: On Thu, 24 Jul 2014 17:16:39 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: On 24/07/14 14:30, Pekka Paalanen wrote: On Thu, 24 Jul 2014 13:34:42 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: On 24/07/14 07:23, Pekka Paalanen wrote: On Thu, 24 Jul 2014 01:43:35 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: From: Giovanni Campagna gcampa...@src.gnome.org Turn GBM into a swrast loader (providing putimage/getimage backed by a dumb KMS buffer). This allows to run KMS+DRM GL applications (such as weston or mutter-wayland) unmodified on cards that don't have any client side HW acceleration component but that can do modeset (examples include simpledrm and qxl) [Emil Velikov] - Fix make check. - Split dri_open_driver() from dri_load_driver(). - Don't try to bind the swrast extensions when using dri. - Handle swrast-CreateNewScreen() failure. - strdup the driver_name, as it's free'd at destruction. - s/LIBGL_ALWAYS_SOFTWARE/GBM_ALWAYS_SOFTWARE/ - Move gbm_dri_bo_map/unmap to gbm_driiint.h. - Correct swrast fallback logic. Signed-off-by: Emil Velikov emil.l.veli...@gmail.com --- src/egl/drivers/dri2/platform_drm.c | 153 +++ src/gbm/backends/dri/gbm_dri.c | 203 +++- src/gbm/backends/dri/gbm_driint.h | 57 +- src/gbm/gbm-symbols-check | 1 + src/gbm/main/gbm.h | 3 + 5 files changed, 369 insertions(+), 48 deletions(-) ... is GBM_ALWAYS_SOFTWARE a new env var? Is it documented somewhere? None of the GBM env variables are. In a matter of fact there isn't even a single reference of gbm in the docs - env vars or otherwise. It's like gbm does not exist :'( Will need to get a new document out there similar to egl.html. Any objections if we do that as a follow up patch ? I would be happy to see any documentation at some point. :-) How does it interact with EGL_SOFTWARE? Does GBM_ALWAYS_SOFTWARE affect GBM's ability to import dmabufs somehow, or only the surfaces that will be passed to EGL? (Importing dmabufs to be passed directly to KMS for scanout.) Considering it's a variable that needs to be explicitly set, the behaviour depends on the current state of the software backend. AFAICS current swrast/kms_swrast does not allow/use dmabufs. Maybe that was a stupid question on my part, as I don't know whether generic DRM API even has a way to import dmabufs at all. Something like dumb buffer import... AFAICS one needs to use a device driver capable of handling dmabufs, otherwise the core drm will return EINVAL/ENOSYS. I don't see any software implementation (dumb buffer) that can be used here. IMHO all the above should be mentioned somewhere/documented rather than expecting everything to magically work exactly as you expected it to. I'm terribly confused with all the *SOFTWARE* variables, and it seems I'm not alone as someone just recently filed a bunch of Weston bug reports while trying to get software GL rendering with LIBGL_ALWAYS_SOFTWARE on DRM/KMS. I have the sneaking suspicion that most people see LIBGL_ALWAYS_SOFTWARE as the magic variable that reads your mind and makes things work as you would imagine them. Would be great to move from that to read the documentation and amend propose improvements of it when needed. Right. What about EGL_SOFTWARE? Why not use that on GBM platform too? A quick grep implies that X11 and Wayland platforms use it (but only with egl_gallium.so?)? A bit of history - when Chia-I was doing all the EGL work, he was considerable enough to make individual switches for people to toggle when needed. He also documented all of these :) GBM_ALWAYS_SOFTWARE sounds like a platform-specific variable, but should forcing software rendering be platform-specific? GBM is not EGL I fear. Also EGL is not the only user of GBM. Sure, but GBM does no rendering at all, does it? It's strange, GBM needs to load a driver but does not render, so forcing software rendering sounds funny. But obviously you want to be able to tell GBM to load a software driver instead of a hardware one. Certainly does not make this any easier to understand. Well, the same can be said about EGL... So, this thing is about GBM the driver loader, not GBM the EGL platform. Maybe that is where I got confused. (The subject does speak explicitly about EGL platform.) If we were to have zero users of libgbm outside of mesa(libegl) and then we can happily go with EGL_SOFTWARE and be gone with it. Perhaps the most reasonable compromise is to use the GBM env var and fallback to the EGL one. How does that sound ? You raise
Re: [Mesa-dev] [PATCH 3/6] Add support for swrast to the DRM EGL platform
On Thu, 24 Jul 2014 01:43:35 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: From: Giovanni Campagna gcampa...@src.gnome.org Turn GBM into a swrast loader (providing putimage/getimage backed by a dumb KMS buffer). This allows to run KMS+DRM GL applications (such as weston or mutter-wayland) unmodified on cards that don't have any client side HW acceleration component but that can do modeset (examples include simpledrm and qxl) [Emil Velikov] - Fix make check. - Split dri_open_driver() from dri_load_driver(). - Don't try to bind the swrast extensions when using dri. - Handle swrast-CreateNewScreen() failure. - strdup the driver_name, as it's free'd at destruction. - s/LIBGL_ALWAYS_SOFTWARE/GBM_ALWAYS_SOFTWARE/ - Move gbm_dri_bo_map/unmap to gbm_driiint.h. - Correct swrast fallback logic. Signed-off-by: Emil Velikov emil.l.veli...@gmail.com --- src/egl/drivers/dri2/platform_drm.c | 153 +++ src/gbm/backends/dri/gbm_dri.c | 203 +++- src/gbm/backends/dri/gbm_driint.h | 57 +- src/gbm/gbm-symbols-check | 1 + src/gbm/main/gbm.h | 3 + 5 files changed, 369 insertions(+), 48 deletions(-) diff --git a/src/gbm/backends/dri/gbm_dri.c b/src/gbm/backends/dri/gbm_dri.c index 347bc99..1aca506 100644 --- a/src/gbm/backends/dri/gbm_dri.c +++ b/src/gbm/backends/dri/gbm_dri.c @@ -743,7 +886,7 @@ static struct gbm_device * dri_device_create(int fd) { struct gbm_dri_device *dri; - int ret; + int ret, force_sw; dri = calloc(1, sizeof *dri); if (!dri) @@ -763,7 +906,15 @@ dri_device_create(int fd) dri-base.type = GBM_DRM_DRIVER_TYPE_DRI; dri-base.base.name = drm; - ret = dri_screen_create(dri); + force_sw = getenv(GBM_ALWAYS_SOFTWARE) != NULL; + if (!force_sw) { + ret = dri_screen_create(dri); + if (ret) + ret = dri_screen_create_swrast(dri); + } else { + ret = dri_screen_create_swrast(dri); + } + if (ret) goto err_dri; Hi, is GBM_ALWAYS_SOFTWARE a new env var? Is it documented somewhere? How does it interact with EGL_SOFTWARE? Does GBM_ALWAYS_SOFTWARE affect GBM's ability to import dmabufs somehow, or only the surfaces that will be passed to EGL? (Importing dmabufs to be passed directly to KMS for scanout.) I'm terribly confused with all the *SOFTWARE* variables, and it seems I'm not alone as someone just recently filed a bunch of Weston bug reports while trying to get software GL rendering with LIBGL_ALWAYS_SOFTWARE on DRM/KMS. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 3/6] Add support for swrast to the DRM EGL platform
On Thu, 24 Jul 2014 13:34:42 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: On 24/07/14 07:23, Pekka Paalanen wrote: On Thu, 24 Jul 2014 01:43:35 +0100 Emil Velikov emil.l.veli...@gmail.com wrote: From: Giovanni Campagna gcampa...@src.gnome.org Turn GBM into a swrast loader (providing putimage/getimage backed by a dumb KMS buffer). This allows to run KMS+DRM GL applications (such as weston or mutter-wayland) unmodified on cards that don't have any client side HW acceleration component but that can do modeset (examples include simpledrm and qxl) [Emil Velikov] - Fix make check. - Split dri_open_driver() from dri_load_driver(). - Don't try to bind the swrast extensions when using dri. - Handle swrast-CreateNewScreen() failure. - strdup the driver_name, as it's free'd at destruction. - s/LIBGL_ALWAYS_SOFTWARE/GBM_ALWAYS_SOFTWARE/ - Move gbm_dri_bo_map/unmap to gbm_driiint.h. - Correct swrast fallback logic. Signed-off-by: Emil Velikov emil.l.veli...@gmail.com --- src/egl/drivers/dri2/platform_drm.c | 153 +++ src/gbm/backends/dri/gbm_dri.c | 203 +++- src/gbm/backends/dri/gbm_driint.h | 57 +- src/gbm/gbm-symbols-check | 1 + src/gbm/main/gbm.h | 3 + 5 files changed, 369 insertions(+), 48 deletions(-) diff --git a/src/gbm/backends/dri/gbm_dri.c b/src/gbm/backends/dri/gbm_dri.c index 347bc99..1aca506 100644 --- a/src/gbm/backends/dri/gbm_dri.c +++ b/src/gbm/backends/dri/gbm_dri.c @@ -743,7 +886,7 @@ static struct gbm_device * dri_device_create(int fd) { struct gbm_dri_device *dri; - int ret; + int ret, force_sw; dri = calloc(1, sizeof *dri); if (!dri) @@ -763,7 +906,15 @@ dri_device_create(int fd) dri-base.type = GBM_DRM_DRIVER_TYPE_DRI; dri-base.base.name = drm; - ret = dri_screen_create(dri); + force_sw = getenv(GBM_ALWAYS_SOFTWARE) != NULL; + if (!force_sw) { + ret = dri_screen_create(dri); + if (ret) + ret = dri_screen_create_swrast(dri); + } else { + ret = dri_screen_create_swrast(dri); + } + if (ret) goto err_dri; Hi, is GBM_ALWAYS_SOFTWARE a new env var? Is it documented somewhere? None of the GBM env variables are. In a matter of fact there isn't even a single reference of gbm in the docs - env vars or otherwise. It's like gbm does not exist :'( Will need to get a new document out there similar to egl.html. Any objections if we do that as a follow up patch ? I would be happy to see any documentation at some point. :-) How does it interact with EGL_SOFTWARE? Does GBM_ALWAYS_SOFTWARE affect GBM's ability to import dmabufs somehow, or only the surfaces that will be passed to EGL? (Importing dmabufs to be passed directly to KMS for scanout.) Considering it's a variable that needs to be explicitly set, the behaviour depends on the current state of the software backend. AFAICS current swrast/kms_swrast does not allow/use dmabufs. Maybe that was a stupid question on my part, as I don't know whether generic DRM API even has a way to import dmabufs at all. Something like dumb buffer import... I'm terribly confused with all the *SOFTWARE* variables, and it seems I'm not alone as someone just recently filed a bunch of Weston bug reports while trying to get software GL rendering with LIBGL_ALWAYS_SOFTWARE on DRM/KMS. I have the sneaking suspicion that most people see LIBGL_ALWAYS_SOFTWARE as the magic variable that reads your mind and makes things work as you would imagine them. Would be great to move from that to read the documentation and amend propose improvements of it when needed. Right. What about EGL_SOFTWARE? Why not use that on GBM platform too? A quick grep implies that X11 and Wayland platforms use it (but only with egl_gallium.so?)? GBM_ALWAYS_SOFTWARE sounds like a platform-specific variable, but should forcing software rendering be platform-specific? Btw. how do you force software rendering if using egl_dri2 driver on any platform? And not using libGL but, say, GLESv2? Uhh... :-) Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH demos 0/3] demos release plan and glxinfo: Print more limits
On Fri, 04 Jul 2014 08:45:00 +0100 Steven Newbury st...@snewbury.org.uk wrote: On Fri, 2014-07-04 at 03:40 -0400, Ilia Mirkin wrote: On Fri, Jul 4, 2014 at 3:37 AM, Steven Newbury wrote: On Thu, 2014-07-03 at 10:47 +0200, Andreas Boll wrote: 2014-07-03 7:39 GMT+02:00 Steven Newbury : On Wed, 2014-07-02 at 21:04 +0200, Andreas Boll wrote: I'd like to make a new demos release on Friday, July 4th. The last release was on February 24th, 2013. Additionally this release is needed to fix the build with mesa 10.2. (fdo#78101) Any objections? Also I'd like to get these 3 patches included in the new release. Andreas. Fredrik Höglund (3): glxinfo: Print XFB, TBO, and UBO limits glxinfo: Print GL_ARB_vertex_attrib_binding limits glxinfo: Print GL_EXT_texture_array limits src/xdemos/glinfo_common.c | 30 ++ 1 file changed, 30 insertions(+) What about extending eglinfo to have switches to enable printing glxinfo style output for each supported API? Sounds good, feel free to send a patch. Although I'm not planning to hold off this release. I can make further releases when required. I've been giving this some more thought, it occurs to me that since there's already es1_info/es2_info progs, what's really needed is an EGL version of glxinfo/wglinfo (egl_glinfo?) which should be able to share the glinfo_common code. While digging though the existing code I noticed the above mentioned es*_info program only works with X11 EGL, that should probably be improved too... Shall I see if I can code something up to get glinfo output through EGL? Would this be he best approach? I'm sure I'm missing something, but what's wrong with eglinfo? http://cgit.freedesktop.org/mesa/demos/tree/src/egl/opengl/eglinfo.c It doesn't share the glinfo_common code, but that should be easily arranged, I would think. Of course it's an option to extend eglinfo with switches to select API, but there is already es*_info, which is what made me thing it might be the better approach. I'm perfectly happy to hack on eglinfo instead! :-) For me personally, the problem with eglinfo has been that it does not create any GL context, so cannot report any GL info either. In the past, the problem was that you needed window system specific code to create the EGLDisplay and maybe the window too. EGL_DEFAULT_DISPLAY just won't work everywhere, and in Mesa it might pick a wrong window system, and the window system will affect at least EGL capabilities. Do we now have an acceptable plan of writing a multi-window-system capable *info program, or are we still stuck with *info programs written for specific window systems? I'd really like one that works e.g. on Wayland, and preferably does not do anything very Mesa specific. Would also be very cool to use the new extensions allowing to be explicit on which window system you are intending to use, rather than rely on the EGL implementation guessing based on your EGLNativeDisplay value. The other complication with writing a multi-API *info program is, that you need to link to a different library depending on the API you choose: libGLESv1_CM vs. libGLESv2 vs. libGL... Libepoxy to the rescue? https://github.com/anholt/libepoxy Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v3 0/3] Software rendering in EGL-DRM
On Thu, 3 Jul 2014 10:48:26 +0200 Boris BREZILLON boris.brezil...@free-electrons.com wrote: Hello Giovanni, I have recently been working on a DRM/KMS driver which does not support OpenGL rendering (it only provides plane composition functionalities): [1]. If I understand correctly you patch series might solve some of the issues I am facing. I'm trying to get wayland working with HW cursor and several planes, the problem is that it depends on GBM to provides drm plane and drm cursor support. Which compositor? All the dependencies are in the compositors, not Wayland per se. If it is Weston, have you tried --use-pixman to avoid EGL altogether? I see Weston still tries to use GBM for cursor fbs, while primary fbs are dumb buffers, but then again, I'm not sure if cursors are supposed to support dumb buffers. Weston's overlay planes code however totally depends on EGL to provide hw-capable buffers from clients. A software renderer support in EGL-DRM won't help in that. Thanks, pq I tried to get EGL working with my DRM device and it always ask for an atmel-hlcdc_dri.so module (I have applied this patch [2] to get to this point). First of all, am I mistaken in thinking this series could solve my issue ? If not, could you tell me on which branch (or which tag) you based your work ? I'm asking this because I tried to apply your patches on top of the master branch (a few days ago), and after fixing some conflict I got a segfault (sorry, I don't have the backtrace anymore :-(, but this was related to negative stride value which was implying faulty memory access). Yesterday I tried to rebase your patches on top of last master branch modifications, and it seems they've completely rework the gallium dri module architecture ([3]) and know I get an 'undefined symbol: dd_create_screen' error when mesa tries to load kms_swrast_dri.so. Sorry if my questions seem stupid to you, but I'm new in graphic related developments :-). Best Regards, Boris [1] https://lkml.org/lkml/2014/6/9/487 [2] http://thread.gmane.org/gmane.comp.video.mesa3d.devel/66385 [3] http://thread.gmane.org/gmane.comp.video.mesa3d.devel/78175 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v3 0/3] Software rendering in EGL-DRM
On Thu, 3 Jul 2014 12:10:36 +0200 Boris BREZILLON boris.brezil...@free-electrons.com wrote: On Thu, 3 Jul 2014 12:24:44 +0300 Pekka Paalanen ppaala...@gmail.com wrote: On Thu, 3 Jul 2014 10:48:26 +0200 Boris BREZILLON boris.brezil...@free-electrons.com wrote: Hello Giovanni, I have recently been working on a DRM/KMS driver which does not support OpenGL rendering (it only provides plane composition functionalities): [1]. If I understand correctly you patch series might solve some of the issues I am facing. I'm trying to get wayland working with HW cursor and several planes, the problem is that it depends on GBM to provides drm plane and drm cursor support. Which compositor? All the dependencies are in the compositors, not Wayland per se. Sorry, I meant weston not wayland. If it is Weston, have you tried --use-pixman to avoid EGL altogether? I see Weston still tries to use GBM for cursor fbs, while primary fbs are dumb buffers, but then again, I'm not sure if cursors are supposed to support dumb buffers. Yes weston works fine with --use-pixman, but then I can't use HW cursor and drm overlay planes (because it requires gbm). Weston's overlay planes code however totally depends on EGL to provide hw-capable buffers from clients. A software renderer support in EGL-DRM won't help in that. Okay, if I understand correctly, this means I'll have to implement an atmel-hlcdc_dri.so module (as suggested by Giovanni), right ? Well, uhh, I suppose... That means you need to get clients actually rendering into hw-capable buffers, so that usually means having a GL(ES) rendering working on EGL Wayland platform, too. Or, clients could use something like libva(?) to fill the buffers and use Mesa's internal wl_drm protocol to pass those to the compositor. If the compositor is able to initialize EGL_WL_bind_wayland_display extension, then with Mesa, the clients will have wl_drm available. Still probably requires working GLESv2 rendering for the EGL DRM/GBM platform, because the pixman renderer cannot handle anything else than wl_shm buffers. Or, you migh hack Weston to copy pixels from wl_shm buffers into dumb buffers, and put those into overlays (err, did dumb buffers support going on overlays, or were they primary plane only?). But if you have like less than 10 overlays in hw, that might be a net lose in performance. Or, you might go wild and have a hack on my hacky zlinux_dmabuf support in weston: http://cgit.collabora.com/git/user/pq/weston.git/log/?h=dmabuf-WIP It is missing pixman-renderer integration, and the test client is intel-only, but if you hack around those, you can have clients filling dmabufs, sending those to Weston, and Weston using GBM to import them to put them on overlays via DRM - unless the scenegraph forces them to go through pixman-renderer in which case you are in a slight pickle. Warning: that weston branch may get rewritten or deleted without notice. I guess the take-away from this is that DRM overlay planes have not really been considered for use with server nor client software rendering in Weston. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v3 0/3] Software rendering in EGL-DRM
On Thu, 3 Jul 2014 14:15:34 +0200 Boris BREZILLON boris.brezil...@free-electrons.com wrote: On Thu, 3 Jul 2014 13:49:06 +0300 Pekka Paalanen ppaala...@gmail.com wrote: On Thu, 3 Jul 2014 12:10:36 +0200 Boris BREZILLON boris.brezil...@free-electrons.com wrote: On Thu, 3 Jul 2014 12:24:44 +0300 Pekka Paalanen ppaala...@gmail.com wrote: Weston's overlay planes code however totally depends on EGL to provide hw-capable buffers from clients. A software renderer support in EGL-DRM won't help in that. Okay, if I understand correctly, this means I'll have to implement an atmel-hlcdc_dri.so module (as suggested by Giovanni), right ? Well, uhh, I suppose... That means you need to get clients actually rendering into hw-capable buffers, so that usually means having a GL(ES) rendering working on EGL Wayland platform, too. Or, clients could use something like libva(?) to fill the buffers and use Mesa's internal wl_drm protocol to pass those to the compositor. If the compositor is able to initialize EGL_WL_bind_wayland_display extension, then with Mesa, the clients will have wl_drm available. Still probably requires working GLESv2 rendering for the EGL DRM/GBM platform, because the pixman renderer cannot handle anything else than wl_shm buffers. Or, you migh hack Weston to copy pixels from wl_shm buffers into dumb buffers, and put those into overlays (err, did dumb buffers support going on overlays, or were they primary plane only?). But if you have like less than 10 overlays in hw, that might be a net lose in performance. I have, at most, 3 overlays (it depends on the HLCDC IP version), so this might be an acceptable solution. ITOH, I'd like to keep the implementation as clean as possible in order to be able to base future work on offical weston versions (and tell me if I'm wrong, but I'm not sure the proposed solution can ever make it to the official weston version). Or, you might go wild and have a hack on my hacky zlinux_dmabuf support in weston: http://cgit.collabora.com/git/user/pq/weston.git/log/?h=dmabuf-WIP It is missing pixman-renderer integration, and the test client is intel-only, but if you hack around those, you can have clients filling dmabufs, sending those to Weston, and Weston using GBM to import them to put them on overlays via DRM - unless the scenegraph forces them to go through pixman-renderer in which case you are in a slight pickle. That sounds interesting! I'll take a closer look at it. Note, that the protocol there does not address the problem of compatibility at all, and the implementation does not even advertise any pixel formats. It is all based on luck and clairvoyance, that the client just happens to create exactly the right kind of dmabufs that the compositor can use. If you fail that, the client gets kicked or you get a mess on the screen. Obviously not upstreamable material. ;-) Warning: that weston branch may get rewritten or deleted without notice. I guess the take-away from this is that DRM overlay planes have not really been considered for use with server nor client software rendering in Weston. Yes, I kinda realize that know. My main goal here is to provide a video player demo application where the primary plane (or an overlay plane) is used to display video player controls (play, pause, ...) and another plane is used to display video content (using gstreamer or any other alternative). This needs to be done using overlays in order to get acceptable performances (avoid software rendering for plane composition), and thus should use drm overlay planes. Oh, a video player! How do you get the video frames? Do you have hardware decoding? Can you perhaps decode straight into dmabufs? If yes, then you could use zlinux_dmabuf to throw those video frames to Weston zero-copy. Then the tricky part is to make Weston cope with those video frame buffers, as if they even attempt to go through the pixman-renderer, Weston will... hm, not sure what it does, but it won't work. So you have to somehow guarantee, that the surface which those buffers are targeting will *always* be assigned to an overlay. That may be a fair amount of hacking. Good luck! :-) Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] GLX: Add an env var to enable the support of GLX extensions needing both client and server support, in case of client support and direct rendering.
On Wed, 18 Jun 2014 22:55:16 -0700 Kenneth Graunke kenn...@whitecape.org wrote: On Wednesday, June 18, 2014 11:32:45 PM Axel Davy wrote: In the case of XWayland, there's no accelerated indirect rendering. For example GLX_ARB_create_context is not advertised by the server, and according to the spec, we are not allowed to advertise it to the application because of that. This env var makes Mesa ignore this restriction, and a GLX extension is advertised whenever the client supports it. Signed-off-by: Axel Davy axel.d...@ens.fr --- src/glx/glxextensions.c | 9 - 1 file changed, 8 insertions(+), 1 deletion(-) In this specific case, I think it might make sense to just advertise the extension and return a GLX error if asked for indirect rendering. Or, just lie and return a direct rendering context. That would make the common case that most people want (GLX_ARB_create_context for direct rendered 3.2+ core profile stuff) work out of the box, without the need for environment variables. It's technically out of spec, but for X on Wayland, I think it's not unreasonable. On the other hand, supporting AIGLX of sorts might be possible... With XWayland, there are really a couple layers to indirect rendering... 1. Doing it X client side (direct rendering) 2. Doing it in the XWayland X11 server/Wayland client (semi-indirect). 3. Doing it wherever Weston/etc are running (total indirect). It seems like XWayland could support AIGLX with model #2 - X clients would speak GLX protocol to XWayland, which could then do the GL. Model #3 seems like something we should avoid at all costs. Not only avoid, but model #3 is practically impossible (well, not sensible) anyway. There is no rendering protocol in Wayland, you'd have to invent that first, and then it would probably just get rejected. IMO we can just keep talking about direct (model #1) and indirect (model #2) rendering as usual. To me those are basically GLX concepts and have nothing to do with Wayland. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v3 0/3] Software rendering in EGL-DRM
On Mon, 16 Jun 2014 17:15:47 +0200 Giovanni Campagna scampa.giova...@gmail.com wrote: 2014-06-16 7:47 GMT+02:00 Pekka Paalanen ppaala...@gmail.com: On Sun, 15 Jun 2014 13:49:48 +0200 Giovanni Campagna scampa.giova...@gmail.com wrote: Hello all, This is the third attempt at swrast/llvmpipe support for DRM drivers that don't have userspace support (qxl, cirrus, simpledrm, etc.) I hope I addressed all of Emil's comments. Hi, this sounds cool work to me, sorry I can't really review it. Does this work also help in getting llvmpipe working with the egl_dri2 loader? If you mean in wayland, unfortunately no. Each egl platform has to implement buffer uploading differently, so the code paths are different. This patchset only tackles the DRM platform, which means mutter, weston and other wayland compositors can run with llvmpipe, but their clients will not have working egl. Yes, that was clear to me. AFAIU, currently on EGL-Wayland the only way to use llvmpipe is to use egl_gallium.so as the loader, and I don't really know what it would take to make egl_dri2 work there, apart from the Wayland-specific bits, so I was kind of hoping your work would make that easier to implement. In the end, the simple way is to implement swrastGetImage and swrastPutImage for the wayland platform, using an shm backed buffer. It would look similar to the platform_drm side (because the way drm and wayland do double buffering is quite similar, and because both are incapable of front-buffer rendering, single-buffer rendering or rendering to foreign windows), but no real code sharing. OTOH, in wayland buffer sharing exists, so it would be hugely inefficient to implement swrast support this way (it incures one extra copy, from the malloc backbuffer to the shm fake frontbuffer). It should be possible to design an swrast2 interface in terms of shm FDs, similar to prime/dma-buf FDs, and with similar semantics, just nobody is working on it right now. Ookay... I maybe got half of that, but no worries. :-) I was just under the impression, that some larger infrastructural work was needed before egl_dri2 could work with software renderers, and I was hoping your work do at least part of that infrastructure. Doesn't matter, I was just curious. :-) Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH v3 0/3] Software rendering in EGL-DRM
On Sun, 15 Jun 2014 13:49:48 +0200 Giovanni Campagna scampa.giova...@gmail.com wrote: Hello all, This is the third attempt at swrast/llvmpipe support for DRM drivers that don't have userspace support (qxl, cirrus, simpledrm, etc.) I hope I addressed all of Emil's comments. Hi, this sounds cool work to me, sorry I can't really review it. Does this work also help in getting llvmpipe working with the egl_dri2 loader? AFAIU, currently on EGL-Wayland the only way to use llvmpipe is to use egl_gallium.so as the loader, and I don't really know what it would take to make egl_dri2 work there, apart from the Wayland-specific bits, so I was kind of hoping your work would make that easier to implement. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH mesa] add EGL_TEXTURE_EXTERNAL_WL to WL_bind_wayland_display spec
On Thu, 16 Aug 2012 17:28:19 -0500 Rob Clark rob.cl...@linaro.org wrote: From: Rob Clark r...@ti.com Signed-off-by: Rob Clark r...@ti.com --- docs/WL_bind_wayland_display.spec |5 + include/EGL/eglmesaext.h |1 + 2 files changed, 6 insertions(+) diff --git a/docs/WL_bind_wayland_display.spec b/docs/WL_bind_wayland_display.spec index 02bd6ea..ce52e2d 100644 --- a/docs/WL_bind_wayland_display.spec +++ b/docs/WL_bind_wayland_display.spec @@ -75,6 +75,7 @@ New Tokens EGL_TEXTURE_Y_U_V_WL0x31D7 EGL_TEXTURE_Y_UV_WL 0x31D8 EGL_TEXTURE_Y_XUXV_WL 0x31D9 +EGL_TEXTURE_EXTERNAL_WL 0x31DA Additions to the EGL 1.4 Specification: @@ -143,6 +144,10 @@ Additions to the EGL 1.4 Specification: Two planes, samples Y from the first plane to r in the shader, U and V from the second plane to g and a. +EGL_TEXTURE_EXTERNAL_WL +Treated as a single plane texture, but sampled with +samplerExternalOES according to OES_EGL_image_external + After querying the wl_buffer layout, create EGLImages for the planes by calling eglCreateImageKHR with wl_buffer as EGLClientBuffer, EGL_WAYLAND_BUFFER_WL as the target, NULL diff --git a/include/EGL/eglmesaext.h b/include/EGL/eglmesaext.h index d476d18..2b91897 100644 --- a/include/EGL/eglmesaext.h +++ b/include/EGL/eglmesaext.h @@ -118,6 +118,7 @@ typedef EGLDisplay (EGLAPIENTRYP PFNEGLGETDRMDISPLAYMESA) (int fd); #define EGL_TEXTURE_Y_U_V_WL0x31D7 #define EGL_TEXTURE_Y_UV_WL 0x31D8 #define EGL_TEXTURE_Y_XUXV_WL 0x31D9 +#define EGL_TEXTURE_EXTERNAL_WL 0x31DA struct wl_display; struct wl_buffer; Hi all, it looks like this patch never made it into Mesa. Also the implementation apparently didn't make it into Mesa, as git pick-axe does not find any mention of EGL_TEXTURE_EXTERNAL_WL. Still, the Weston patch was merged on Aug 31st, 2012. Oops. :-) Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Debugging into Mesa Driver
On Wed, 4 Jun 2014 21:46:38 -0700 roshan chaudhari rgc...@gmail.com wrote: Thanks for reply. I have added CFLAGS='-Og -ggdb3' CXXFLAGS='-Og -ggdb3' into configure file and ran The advice was to add those to the configure command line, not into the configure file. Undo all your edits from Mesa and start from scratch. ./configure --enable-debug ; make; make install but still it did not step into driver. So you tried exactly what Ian suggested, and the breakpoint did not trigger? Also, are you perhaps trying to debug a 32-bit application on an otherwise 64-bit system? Thanks, pq On Wed, Jun 4, 2014 at 1:09 PM, Ian Romanick i...@freedesktop.org wrote: On 06/04/2014 11:14 AM, roshan chaudhari wrote: Hello, I just cloned the mesa driver from git repository. I am trying to debug the opengl application with mesa driver. I am not sure which flag to enable for debugging and where, I built a driver with -enable-debug in Makefile and added --DEBUG in CFLAGS in Makefile but still when I try to step into driver code it does not allow me. I am doing it with gdb in ubuntu. Modifying the CFLAGS in the Makefile is likely to cause problems. Instead, try CFLAGS='-Og -ggdb3' CXXFLAGS='-Og -ggdb3' ./configure --enable-debug your other configure options If your version of GCC is too old, you will need to use -O0 instead of -Og. That should be sufficient. You can verify this by doing gdb $(which glxgears) Then, at the gdb prompt, break _mesa_Clear It will ask Function _mesa_Clear not defined. Make breakpoint pending on future shared library load? (y or [n]) Answer 'y'. Then, run If it stops in _mesa_Clear, you're good to go. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Mesa build instructions
On Fri, 11 Apr 2014 11:50:50 -0700 Eric Anholt e...@anholt.net wrote: For anyone that's curious about how I work: My scripts setup looks a lot like Matt's. I don't do OOT builds, and instead I just have 6 main trees: ~/src/mesa (normal debug build) ~/src/mesa-release (non-debug build) ~/src/mesa-clean (usually HEAD~1 of ~/src/mesa) ~/src/32/mesa ~/src/32/mesa-release ~/src/32/mesa-clean The debug build also is with -O2 so that I can actually get mostly-representative performance results while developing. REALLY IMPORTANT: Don't forget -fno-omit-frame-pointer on 64-bit, otherwise you won't get backtraces from sysprof and perf (similarly, in your kernel, CONFIG_FRAME_POINTER=y). I think the most important feature of my setup is the clean builds. By having copies of unmodified mesa along with patched mesa, I can use http://anholt.net/compare-perf/ to easily produce stats to test whether my change is actually having the desired effect. Hi Eric, just curious, how have you set up git here? Do you have a .git/ completely separate in each directory, and if so, how do you keep them synced? Or do you use some GIT_DIR env magic to have a single git repo with multiple checkouts? Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 07/25] dri/nouveau: add GLX_MESA_query_renderer support
On Sat, 22 Feb 2014 03:03:57 + Emil Velikov emil.l.veli...@gmail.com wrote: v2: nv04-2x cards support upto opengl 1.3. v3: Include correct headers. Signed-off-by: Emil Velikov emil.l.veli...@gmail.com --- src/mesa/drivers/dri/nouveau/nouveau_screen.c | 83 ++- 1 file changed, 82 insertions(+), 1 deletion(-) ... I still think there is something funny here: +static int +nouveau_query_renderer_integer(__DRIscreen *psp, int param, +unsigned int *value) +{ + const struct nouveau_screen *const screen = + (struct nouveau_screen *) psp-driverPrivate; + + switch (param) { + case __DRI2_RENDERER_VENDOR_ID: + value[0] = 0x10de; + return 0; + case __DRI2_RENDERER_DEVICE_ID: { + struct drm_nouveau_getparam gp; + int *chip_id = 0, ret; chip_id is set to NULL. + + memset(gp, 0, sizeof(gp)); + gp.param = NOUVEAU_GETPARAM_PCI_DEVICE; + gp.value = (unsigned long) chip_id; gp.value is now 0 (NULL). + + ret = drmCommandWriteRead(psp-fd, DRM_NOUVEAU_GETPARAM, gp, sizeof(gp)); + if (ret) { + nouveau_error(Error retrieving NOUVEAU_GETPARAM_PCI_DEVICE.\n); + *chip_id = -1; Dereferencing NULL. + } + value[0] = *chip_id; Dereferencing NULL. Am I missing something? You didn't mention anything about this when Ilia pointed it out. Thanks, pq + return 0; + } ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] egl/glx: Remove egl_glx driver
On Thu, 30 Jan 2014 02:55:54 +0100 Marek Olšák mar...@gmail.com wrote: egl_gallium also seems to have backends for Wayland, DRM, etc. I guess this should really be in core Mesa though. From what I have tested, egl_gallium is also the only way to use software rendering (llvmpipe) on Wayland. egl_dri2 simply fails, I guess it cannot use wl_shm buffers. This is a bit inconvenient, because egl_gallium is currently the only way to support sw GL on Wayland clients (works also when the compositor itself does not use EGL at all!), but if you actually build egl_gallium, it will be preferred over egl_dri2, which means that both nouveau and radeon hardware drivers get used through egl_gallium by default. Just today someone came complaining about a bug due to egl_gallium with nouveau on Wayland. I suspect this is a PITA for distributions. Thanks, pq On Thu, Jan 30, 2014 at 2:39 AM, Matt Turner matts...@gmail.com wrote: On Wed, Jan 29, 2014 at 4:14 PM, Michel Dänzer mic...@daenzer.net wrote: On Mit, 2014-01-29 at 12:18 -0800, Matt Turner wrote: On Wed, Jan 29, 2014 at 9:25 AM, Chad Versace chad.vers...@linux.intel.com wrote: Mesa now has a real, feature-rich EGL implementation on X11 via xcb. Therefore I believe there is no longer a practical need for the egl_glx driver. Furthermore, egl_glx appears to be unmaintained. The most recent nontrivial commit driver was 6baa5f1 on 2011-11-25. Signed-off-by: Chad Versace chad.vers...@linux.intel.com --- In a similar vein -- is gallium_egl useful for anything these days? The only time I hear it mentioned is when telling others to not use it. In contrast to egl_dri2, egl_gallium supports OpenVG. I suppose I could ask whether OpenVG is useful for anything these days? :) I've heard on IRC that we could simply add OpenVG support to DRI if anyone actually cared about it. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] What use do swap interval 1 and OML_sync_control divisor and remainder have?
Hi Ian and Jason On Mon, 27 Jan 2014 12:26:23 -0700 Ian Romanick i...@freedesktop.org wrote: On 01/24/2014 04:32 AM, Pekka Paalanen wrote: Hi, I am investigating what kind of Wayland protocol extensions would be needed to support proper presentation timing. Looking at existing works, I am wondering about two things whether they have any real use. Where is swap interval (e.g. eglSwapInterval) greater than one useful? Intervals 0 and 1 I understand, and Mesa EGL Wayland already supports those. But when would you like to make your framerate a fraction of the display's? There are a number of theoretical uses, but I don't know that we've ever seen any in the wild. One is video playback. You likely want 30fps there. I would hope that no video player will use swap interval as a means of target timing, because the buffer queueing protocol I'm planning is supposed to be superior for accurately timed video presentation. The protocol will also be usable with EGL provided content, if the EGL implementation can cope with buffers being reserved by the display server for longer than usual. Imagine that you have a game that only needs 30fps to be playable. When you're running on battery, you may want the system to throttle you to a lower framerate to save battery. You could also have a game that can always hit at least 30fps, but sometimes it may go higher. Using a swap interval of 2 gives the the game a consistent framerate. Sometimes that is better. 120Hz monitors. These are good points. I can easily come up with a counter argument for at least the first and last, how something else would be better than the application itself setting swap interval. I can even see a compositor user option limit this window/application to a fraction of the refresh rate Hz which is already possible in Wayland without any protocol changes. Whether that would be a good UI is another question. After this and the discussion on #xorg-devel, I am now fairly confident, that I do not have to design for swap interval 1 support at this time. If such support really is needed, I see two possibilies already. - Use the buffer queueing protocol to target presentation at last realized presentation time plus two frame periods. EGL internally could keep on waiting for the usual frame callback (a Wayland protocol feature) like it does now for swap interval = 1. Enabling this occurred to me yesterday, and I have it in my buffer queue plans now. - Add a new request alike wl_surface.frame, which has a parameter of how many output refresh cycles should pass since the last presentation before this presentation is executed. When are the target-MSC related remainder and divisor parameters as defined in the GLX_OML_sync_control useful? Why does also X11 Present protocol include remainder and divisor? X11 Present has it to support GLX_OML_sync_control. I believe that GLX_OML_sync_control has it to support playback of content on monitors that aren't 60Hz. There used to be these things called CRTs, and some of them had wonkey refresh rates... like 72Hz. But the divisor and remainder apply only if the original target-MSC is missed, causing the presentation to be postponed to a later point in time determined by matching the modulus of MSC. I still don't really understand when that was useful, or how it was even used. Are you saying that these were used to pretend that the monitor refresh rate was something lower than what it really was? Did that really work better than just presenting the content update at the earliest possible refresh? On Fri, 24 Jan 2014 12:27:11 -0600 Jason Ekstrand ja...@jlekstrand.net wrote: On Jan 24, 2014 6:32 AM, Pekka Paalanen ppaala...@gmail.com wrote: GLX_OML_sync_control defines that for interlaced displays MSC is incremented for each field. With divisor and remainder you could then target only top or bottom fields. Is that useful, and do we care about interlaced displays anymore? I think we do. In particular, we should care about TV set-top boxes. Even though most TVs are LCD, DLP, or similar, hdmi does support interlacing and it is still used (particularly in HDTV). I have no idea what implications this has for a present extension, but I think we could still handle it in a sane way without going for MSC. Right, there was quite some discussion on #xorg-devel about interlacing. All that lead me to write down the following in my notes: Supporting interlaced material and displays is punted for a later extension. Presumably the protocol supporting interlaced content would be as simple as having an extra wl_surface-like request to say on which of the two fields the content should be displayed first. The field designation would be an additional restriction on when a content update should initially hit the screen. I.e. if both field and target timestamp are given, both conditions must pass. This means that giving a field may delay
[Mesa-dev] What use do swap interval 1 and OML_sync_control divisor and remainder have?
Hi, I am investigating what kind of Wayland protocol extensions would be needed to support proper presentation timing. Looking at existing works, I am wondering about two things whether they have any real use. Where is swap interval (e.g. eglSwapInterval) greater than one useful? Intervals 0 and 1 I understand, and Mesa EGL Wayland already supports those. But when would you like to make your framerate a fraction of the display's? When are the target-MSC related remainder and divisor parameters as defined in the GLX_OML_sync_control useful? Why does also X11 Present protocol include remainder and divisor? GLX_OML_sync_control defines that for interlaced displays MSC is incremented for each field. With divisor and remainder you could then target only top or bottom fields. Is that useful, and do we care about interlaced displays anymore? I am contemplating on not supporting these, because I am going to propose using an UST-like clock as the standard clock language in Wayland present extension. Supporting MSC-based timings would add complexity. Therefore I would like to know where and how the above mentioned are useful, because I cannot imagine it myself. Please, let me know of real actual use cases and existing software, where these features offer a visible benefit and what that benefit is exactly. I am not interested in what one might do in theory, I am interested in real-world examples where they have proved useful. Well, maybe also theories if they allow taking advantage of some new cool technology. Btw. if you think that using UST for presentation timing and feedback is nonsense, and MSC is the only right way, let me know and I can start another email thread about that detail after preparing my material. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Android build targets
On Tue, 14 Jan 2014 00:19:04 -0600 Tom Gall tom.g...@linaro.org wrote: Hi, Been experimenting with building mesa on android specifically for ARM. I was curious if there is a wiki page that covers building mesa on android somewhere. Didn't see anything with the source or on mesa3d.org. As a start I'm building with BOARD_GPU_DRIVERS := swrast libGLES_mesa.so is built but I don't seem to end up with like a libGLESv2.so for instance from mesa which I was expecting. Guessing it looks like libGLES_mesa.so goes into /system/lib/egl and then mesa would be put into egl.conf but that's a guess on my part. Hi, from my very vague memory of fiddling with android, yes, the Android itself provides the libs that apps link to, e.g. libGLESv2.so, and those are trampolines to the real implementations, which is what you would build from Mesa. Your guess sounds right to me, FWIW. The last time I looked, a year(?) ago, Android's libEGL multiplexes into at least two different gfx stacks: Android's own sw-based, and the vendor's hw stack. You should find the app-facing EGL and GLES libs in Android's (platform)/frameworks/base/opengl/libs or around there. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Move wayland-drm protocol from Mesa to wayland-core
On Thu, 28 Nov 2013 10:24:33 +0100 Benjamin Gaignard benjamin.gaign...@linaro.org wrote: Jason, From my point of view wl_drm isn't link to Mesa, it is only about exchange buffers by using a file descriptor and, for example, doesn't rely on EGL. I understand that other graphic stacks could have defined their own way to for zero-copy (and so other protocols). I don't aim to make gstreamer wayland sink works for all of them but at least with wl_drm protocol which is quite generic. Since dmabuf has been adopted in kernel we have the opportunity to rationalize also some code in userland and for that we need a common wayland protocol. Move wl_drm into wayland-core make it more easily accessible for all software even for those who don't use Mesa. Hi, if you aim particularly for dma-buf, then I would propose making a new Wayland protocol extension mesa_dma_buf or gbm_dma_buf or whatever name is appropriate, and drop all the unneeded parts compared to wl_drm. I think dma-buf with render nodes will not need authentication, and it certainly does not use flink names. Would that work? Note that still it would not be about a generic fd-based buffer passing but specifically dma-buf. If you want it really generic, then you would need something like Android's native_buffer_t (IIRC) which contains an arbitrary number of fds and integers, and then you are going to need platform-specific code to handle them right and whooops, the protocol itself is not meaningful anymore. You might be interested in http://lists.freedesktop.org/archives/wayland-devel/2013-October/011593.html which is David Herrmann's proof of concept for passing mmappable dmabufs over wl_shm, and not needing any changes to Weston or Mesa AFAIU. Also note that not all drivers support mmap'ing dmabufs, and mmap'ing in general is usually not a good thing for gfx buffers. I believe you would want to avoid mmap when possible. If you wanted to be truely portable, I guess you might create a library pair similar to libEGL, where the actual protocol extension would again be just an internal detail of the library. But the question there is how would that library bind to rendering in client side and compositing in the server side. Where do you want to draw the line? Thanks, pq 2013/11/27 Jasper St. Pierre jstpie...@mecheye.net: Wasn't EGLStreams supposed to solve the use case of passing hardware buffers around in a standard way? On Wed, Nov 27, 2013 at 1:22 PM, Jason Ekstrand ja...@jlekstrand.net wrote: On Nov 27, 2013 10:53 AM, Benjamin Gaignard benjamin.gaign...@linaro.org wrote: Hi all, I'm working for Linaro on enabling a zero copy path in GStreamer by using dmabuf. To make this possible I have patched gst wayland sink to use wayland drm protocol: https://bugzilla.gnome.org/show_bug.cgi?id=711155 Today wayland drm protocol is limited to Mesa so I have decided to move it into wayland-core. My hardware doesn't have gpu support yet so I have patched weston (pixman) to allow usage of wl_drm buffers. With this I able to share/use a buffer allocated by DRM in gstreamer pipeline even if I don't have gpu and EGL. What do you think about make wayland drm protocol available like this ? Benjamin, The problem here is that wl_drm is really a mesa extension. Well, more of an open-source linux graphics stack extension. The point is that there are other graphics stacks: Rhaspberry Pi, libhybris, other proprietary stacks, etc. and each of these graphics stacks has its own extension for passing hardware buffers around. This means that if you want your GStreamer sink to work on these other stacks with zero-copy, you will have to talk their protocols. Because wl_drm isn't global to all of wayland, it probably shouldn't go into the wayland core. This does not mean, however, that wl_drm can't be exposed in a more sensible way. As of 1.3, we are now exporting wayland.xml to /usr/share/wayland and we can put other extensions there as well. We could have, for instance, a /usr/share/mesa.xml file that provides the mesa extensions. Then projects wanting to talk directly to mesa can generate the header and C files from the system-installed xml file. This would solve the problem of keeping everything in sync. One last note (this one's been bugging me for a while). If we are going to export wl_drm in some way, we should probably rename it to something like mesa_drm. We really need to get out of the habit of using the wl_ namespace for everything that talks the wayland protocol. If we don't alter the details of wl_drm in the rename, it shouldn't be too hard to move everyone over from wl_drm to mesa_drm. Hope that's sensible/helpful. --Jason Ekstrand ___ wayland-devel mailing list wayland-de...@lists.freedesktop.org
Re: [Mesa-dev] [PATCH v3] wayland: Add support for eglSwapInterval
On Fri, 13 Sep 2013 17:04:38 +0100 Neil Roberts n...@linux.intel.com wrote: Here is another version of the patch which brings back the blocking when there are no buffers available to cope with the situation where the compositor isn't immediately releasing buffers. Maybe we could leave the decision about whether to increase the buffer count to 4 as a separate patch. Hi Neil, yes, I think this approach is how it should work. There are a few details commented below. -- 8 -- The Wayland EGL platform now respects the eglSwapInterval value. The value is clamped to either 0 or 1 because it is difficult (and probably not useful) to sync to more than 1 redraw. The main change is that if the swap interval is 0 then instead of installing a frame callback it will just call the display sync method and throttle itself to that. When the application is not running fullscreen the compositor is likely to release the previous buffer immediately so this gives the application the best chance of reusing the buffer. If there are no buffers available then instead of returning with an error, get_back_bo will now block until a buffer becomes available. This is necessary if the compositor is not releasing buffers immediately. As there are only three buffers, this could actually mean that the client ends up throttled to the vblank anyway because Weston can hold on to three buffers when the client is fullscreen. We could fix this by increasing the buffer count to 4 or changing Weston and KMS to allow cancelling a pending buffer swap, but for now this patch ignores that problem. A good explanation, cool. This also moves the vblank configuration defines from platform_x11.c to the common egl_dri2.h header so they can be shared by both platforms. This little part could be a separate patch, so it could go in already. --- src/egl/drivers/dri2/egl_dri2.h | 7 ++ src/egl/drivers/dri2/platform_wayland.c | 159 src/egl/drivers/dri2/platform_x11.c | 6 -- 3 files changed, 147 insertions(+), 25 deletions(-) diff --git a/src/egl/drivers/dri2/egl_dri2.h b/src/egl/drivers/dri2/egl_dri2.h index fba5f81..cc657ba 100644 --- a/src/egl/drivers/dri2/egl_dri2.h +++ b/src/egl/drivers/dri2/egl_dri2.h @@ -175,6 +175,7 @@ struct dri2_egl_surface intdx; intdy; struct wl_callback*frame_callback; + struct wl_callback*throttle_callback; int format; #endif @@ -221,6 +222,12 @@ struct dri2_egl_image __DRIimage *dri_image; }; +/* From xmlpool/options.h, user exposed so should be stable */ +#define DRI_CONF_VBLANK_NEVER 0 +#define DRI_CONF_VBLANK_DEF_INTERVAL_0 1 +#define DRI_CONF_VBLANK_DEF_INTERVAL_1 2 +#define DRI_CONF_VBLANK_ALWAYS_SYNC 3 + /* standard typecasts */ _EGL_DRIVER_STANDARD_TYPECASTS(dri2_egl) _EGL_DRIVER_TYPECAST(dri2_egl_image, _EGLImage, obj) diff --git a/src/egl/drivers/dri2/platform_wayland.c b/src/egl/drivers/dri2/platform_wayland.c index ffc5959..6ee6ffb 100644 --- a/src/egl/drivers/dri2/platform_wayland.c +++ b/src/egl/drivers/dri2/platform_wayland.c @@ -180,8 +180,16 @@ dri2_create_window_surface(_EGLDriver *drv, _EGLDisplay *disp, _EGLConfig *conf, EGLNativeWindowType window, const EGLint *attrib_list) { - return dri2_create_surface(drv, disp, EGL_WINDOW_BIT, conf, + struct dri2_egl_display *dri2_dpy = dri2_egl_display(disp); + _EGLSurface *surf; + + surf = dri2_create_surface(drv, disp, EGL_WINDOW_BIT, conf, window, attrib_list); + + if (surf != NULL) + drv-API.SwapInterval(drv, disp, surf, dri2_dpy-default_swap_interval); + + return surf; } /** @@ -216,6 +224,8 @@ dri2_destroy_surface(_EGLDriver *drv, _EGLDisplay *disp, _EGLSurface *surf) if (dri2_surf-frame_callback) wl_callback_destroy(dri2_surf-frame_callback); + if (dri2_surf-throttle_callback) + wl_callback_destroy(dri2_surf-throttle_callback); if (dri2_surf-base.Type == EGL_WINDOW_BIT) { dri2_surf-wl_win-private = NULL; @@ -261,24 +271,46 @@ get_back_bo(struct dri2_egl_surface *dri2_surf, __DRIbuffer *buffer) __DRIimage *image; int i, name, pitch; - /* There might be a buffer release already queued that wasn't processed */ - wl_display_dispatch_queue_pending(dri2_dpy-wl_dpy, dri2_dpy-wl_queue); + if (dri2_surf-throttle_callback == NULL) { + /* There might be a buffer release already queued that wasn't processed */ + wl_display_dispatch_queue_pending(dri2_dpy-wl_dpy, dri2_dpy-wl_queue); + } else { + /* If we aren't throttling to the frame callbacks then the compositor + * may have sent a release event after the last attach so we'll wait + * until the sync for the commit request completes in order to have the + * best chance
Re: [Mesa-dev] WL_bind_wayland_display
On Sun, 26 May 2013 17:38:41 +0200 Wladimir laa...@gmail.com wrote: Hello all, I was reading the WL_bind_wayland_display extension spec and noticed the following: EGL_TEXTURE_Y_U_V_WL Three planes, samples Y from the first plane to r in the shader, U from the second plane to r, and V from the third plane to r. EGL_TEXTURE_Y_UV_WL Two planes, samples Y from the first plane to r in the shader, U and V from the second plane to rg. EGL_TEXTURE_Y_XUXV_WL Two planes, samples Y from the first plane to r in the shader, U and V from the second plane to g and a. How does this work? EGL_TEXTURE_Y_U_V_WL puts all planes in r; EGL_TEXTURE_Y_UV_WL puts a plane in r and one in rg (thus overlapping). Hi Wladimir, it is used like this: http://cgit.freedesktop.org/wayland/weston/tree/src/gl-renderer.c?id=14e438c8a2bc7342489d248a3d66b9123245d552#n1220 You get a separate EGLImage for each plane. Cheers, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] non x11/xlib based EGL and software only renderer
On Wed, 8 May 2013 00:04:49 +0530 Divick Kishore divick.kish...@gmail.com wrote: What exactly is a null platform? Also could you please point me to the directory and potential starting points to support window as pbuffers? Modifying apps is a complete no no. What are you actually trying to do? Aren't your applications written to use a specific window system? If so, they will pass window system specific data to EGL functions, and the EGL platform in use must match that, so you cannot use a null platform. If your app was written for X11, you simply must use the X11 platform in EGL, for example. Also, where do you intend the rendering to end up? Are your applications reading it back themselves? Or do you want it discarded? I don't think the null platform will put the rendering result available anywhere, unless the application itself reads it back. Not modifying apps seems to conflict your goals, without knowing what you really want to do. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Haiku using the Wayland state_tracker?
On Wed, 27 Mar 2013 15:45:23 -0500 Alexander von Gluck IV kallis...@unixzen.com wrote: On 03/27/2013 2:37 am, Pekka Paalanen wrote: On Tue, 26 Mar 2013 16:47:15 -0500 Alexander von Gluck IV kallis...@unixzen.com wrote: I've been hitting several brick walls working on the Haiku GL stuff. (mostly due to things being too complex for the time I have available) Given all the recent publicity for Wayland, I decided to look into the wl_shm stuff. From my point of view wl_shm looks *extremely* simple. Does anyone see any reason *not* to use the wayland state_tracker for Haiku's GL rendering? The only possible down side I see is using EGL vs OpenGL. (I admit I don't fully understand the pros and cons of EGL) I'm not familiar with the wayland state_tracker, so I can only comment from the Wayland protocol perspective. I'm not sure what you intend to do with wl_shm, either. Are you implementing Wayland platform support? To be honest, we're not related to Wayland in any way. I'm looking for a simple interface between os and GL rendering. The Xorg interface to Mesa and Gallium always seemed like overkill for our needs, Ah, well, I think this is not exactly the interface you are looking for. The Wayland protocol interfaces, wl_shm and wl_drm, are only to facilitate buffer passing between processes using the minimum amount of information needed to be sent over the Wayland wire. Or rather to not push pixels through the wire but handles and metadata. To make that actually work, especially wl_drm, they rely on the OS magic to let it happen. In our case, it is the kernel DRM, DRI2, and probably some other acronyms I'm not completely familiar with. So what you see in the Wayland protocol is just the skin of buffer passing, all the crucial details are hidden below in the OS graphics stack, like how to turn a GEM flink name into a buffer you can actually do something with (or whatever). I'm guessing that there is no magic way to make things work, other than actually inventing your own magic to begin with, if it doesn't exist yet. Thanks, pq If you are only ever going to have software rendered GL, then I guess you might use wl_shm. If you have any reason to believe you might ever want hardware accelerated GL, then wl_shm won't work. (Actually, you probably want to choose between wl_shm and something else according to your renderer. Maybe.) I noticed that the native Wayland code chooses dri or shm based on the rendering needs. Thats fine for us (although we don't have any of the dri stuff ported or wrapped yet) wl_shm basically deals with mmappable files, i.e. directly CPU-accessible memory. Buffers suitable for hardware rendering or texturing are often not CPU-accessible, or extremely slow for that. Conversely, CPU-accessible memory is often not usable for GPU, or is slow. And you really don't want to have extra copies between CPU and GPU memory, especially just for buffer passing. Mesa contains another Wayland protocol interface used for hardware accelerated graphics buffers: wl_drm. Also, EGL vs. OpenGL is like comparing a bucket to paint. EGL is just one form of a bucket, that can give you OpenGL as the paint. There are other buckets, and other paints, and you cannot use a bucket as paint, nor paint as a bucket. Probably I just didn't understand what you are actually comparing here. (and sorry for a bad analogue :-p) This is actually a good example. Our (Haiku's) GL Rendering with Mesa swrast works fine at the moment. We also have an in-development Gallium driver (using llvmpipe or swpipe) *almost* working minus some on-screen stride issues. The problem i'm trying to tackle is that by calling private Mesa and Gallium functions externally.. the size of the paint can lid keeps changing and keeping up between Mesa versions is consuming a lot of resources. (small project trying to keep up with a large project, we have the same issue with Webkit) I have a blog post about Wayland, that is maybe not directly related to your question, but might give some insight, I hope: http://ppaalanen.blogspot.fi/2012/11/on-supporting-wayland-gl-clients-and.html I'll take a look. Thanks! -- Alex ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Haiku using the Wayland state_tracker?
On Tue, 26 Mar 2013 16:47:15 -0500 Alexander von Gluck IV kallis...@unixzen.com wrote: I've been hitting several brick walls working on the Haiku GL stuff. (mostly due to things being too complex for the time I have available) Given all the recent publicity for Wayland, I decided to look into the wl_shm stuff. From my point of view wl_shm looks *extremely* simple. Does anyone see any reason *not* to use the wayland state_tracker for Haiku's GL rendering? The only possible down side I see is using EGL vs OpenGL. (I admit I don't fully understand the pros and cons of EGL) I'm not familiar with the wayland state_tracker, so I can only comment from the Wayland protocol perspective. I'm not sure what you intend to do with wl_shm, either. Are you implementing Wayland platform support? If you are only ever going to have software rendered GL, then I guess you might use wl_shm. If you have any reason to believe you might ever want hardware accelerated GL, then wl_shm won't work. (Actually, you probably want to choose between wl_shm and something else according to your renderer. Maybe.) wl_shm basically deals with mmappable files, i.e. directly CPU-accessible memory. Buffers suitable for hardware rendering or texturing are often not CPU-accessible, or extremely slow for that. Conversely, CPU-accessible memory is often not usable for GPU, or is slow. And you really don't want to have extra copies between CPU and GPU memory, especially just for buffer passing. Mesa contains another Wayland protocol interface used for hardware accelerated graphics buffers: wl_drm. Also, EGL vs. OpenGL is like comparing a bucket to paint. EGL is just one form of a bucket, that can give you OpenGL as the paint. There are other buckets, and other paints, and you cannot use a bucket as paint, nor paint as a bucket. Probably I just didn't understand what you are actually comparing here. (and sorry for a bad analogue :-p) I have a blog post about Wayland, that is maybe not directly related to your question, but might give some insight, I hope: http://ppaalanen.blogspot.fi/2012/11/on-supporting-wayland-gl-clients-and.html Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] Very low framerate when recording desktop content in Weston using mesa git on Radeon 5770 (glReadPixels slow path)
On Tue, 26 Mar 2013 03:30:58 +0100 Rune Kjær Svendsen runesv...@gmail.com wrote: Marek, do you have an idea on where the currency bottleneck is? I just did a profiling with sysprof, zooming in on the desktop in Weston and moving the mouse wildly around, so that the buffer is completely changed for every frame. I got around 5 fps, which isn't *that* much, but still an order of magnitude better than without your patches. sysprof says there is 100% CPU usage, but unlike the previous 0.5-FPS recording, it's not in a single function, but spread out over several functions: 35% weston_recorder_frame_notify 11% __memcpy_ssse3 4.5% clear_page_c 4.3% output_run Although I'm not completely sure I'm reading the sysprof output right. weston_recorder_frame_notify, for example, has 35% CPU usage, but none of its child functions has any significant CPU usage. I presume the CPU usage in that function is from calling glReadPixels, although that's not apparent from sysprof: weston_recorder_frame_notify 39.15% 39.15% - - kernel - - 0.00% 0.01% ret_from_intr 0.00% 0.01% __irqentry_text_start 0.00% Well, if you look at weston_recorder_frame_notify function, it has a naive loop over each single pixel it processes. component_delta() may get inlined, and output_run() you saw in the profile. I think it's possible it actually is weston_recorder_frame_notify eating the CPU. Can you get more precise profiling, like hits per source line or instruction? Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] New EGL extension: EGL_EXT_platform_display
On Tue, 26 Feb 2013 12:53:49 -0800 Chad Versace chad.vers...@linux.intel.com wrote: On 02/25/2013 11:58 PM, Pekka Paalanen wrote: On Mon, 25 Feb 2013 09:09:22 -0800 Chad Versace chad.vers...@linux.intel.com wrote: Thank you for the reply. Indeed, that dance to check for the extension is non-trivial, and I think it might be good to explain somehow in the spec, maybe in the QA section? Yes, I'll add the topic to the Issues section. I quickly checked the EGL 1.4 spec, and could not spot any mention of whether eglGetProcAddress needs a valid context or display or anything. Does it require any initialization? Well, should not matter much I guess, since in any case you have to open a display first to get the extensions string. Btw. what is EGL_DEFAULT_DISPLAY in Mesa, what does it give us? Ah, that is complicated. In the Mesa configuration that nearly everyone uses, eglGetDisplay(EGL_DEFAULT_DISPLAY) calls XOpenDisplay(NULL) to acquire the display. If Mesa is configured as Wayland-centric, though, then eglGetDisplay(EGL_DEFAULT_DISPLAY) calls wayland_display_connect(NULL) to acquire the display. It's also possible to select to which platform EGL_DEFAULT_DISPLAY belongs with the env var EGL_PLATFORM, but, according to Kristian, that method is deprecated. Indeed, I think setting EGL_PLATFORM will override the whole display type detection, even if you did pass a specific display object instead of EGL_DEFAULT_DISPLAY. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [RFC] New EGL extension: EGL_EXT_platform_display
On Mon, 25 Feb 2013 09:09:22 -0800 Chad Versace chad.vers...@linux.intel.com wrote: On 02/24/2013 11:46 PM, Pekka Paalanen wrote: On Tue, 19 Feb 2013 08:20:51 -0800 Chad Versace chad.vers...@linux.intel.com wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I'm seeking feedback on an EGL extension that I'm drafting. The ideas have already been discussed at Khronos meetings to a good reception, but I want feedback from Mesa developers too. Summary - --- The extension, tentatively named EGL_EXT_platform_display, enables EGL clients to specify to which platform (X11, Wayland, gbm, etc) an EGL resource (EGLDisplay, EGLSurface, etc) belongs when the resource is derived from a platform-native type. As a corollary, the extension enables the creation of EGL resources from different platforms within a single process. Feedback - I'd like to hear feeback about the details below. Do you see any potential problems? Is it lacking a feature that you believe should be present? Details - --- The draft extension defines the following new functions: // This is the extenion's key function. // EGLDisplay eglGetPlatformDisplayEXT(EGLenum platform, void *native_display); // The two eglCreate functions below differ from their core counterparts // only in their signature. The EGLNative types are replaced with void*. // This makes the signature agnostic to which platform the native resource // belongs. EGLSurface eglCreatePlatformWindowSurfaceEXT(EGLDisplay dpy, EGLConfig config, void *native_window, const EGLint *attrib_list); EGLSurface eglCreatePlatformPixmapSurface(EGLDisplay dpy, EGLConfig config, void *native_pixmap, const EGLint *attrib_list); Valid values for `platform` are defined by layered extensions. For example, EGL_EXT_platform_x11 defines EGL_PLATFORM_X11, and EGL_EXT_platform_wayland defines EGL_PLATFORM_WAYLAND. Also, the layered extensions specify which native types should be passed as the native parameters. For example, EGL_EXT_platform_wayland specifies that, when calling eglCreatePlatformWindowSurfaceEXT with a display that was derived from a Wayland display, then the native_window parameter must be `struct wl_egl_window*`. Analogously, EGL_EXT_platform_x11 specifies that native_window must be `Window*`. Example Code for X11 - // The internal representation of the egl_dpy, created below, remembers that // it was derived from an Xlib display. Display *xlib_dpy = XOpenDisplay(NULL); EGLDisplay *egl_dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_X11, xlib_dpy); EGLConfig config; eglChooseConfig(egl_dpy, config, ...); // Since egl_dpy remembers that it was derived from an Xlib display, when // creating the EGLSurface below libEGL internally casts the // `(void*) xlib_win` to `Window*`. Window xlib_win = XCreateWindow(xlib_dpy, ...); EGLSurface egl_surface = eglCreatePlatformWindowSurfaceEXT(egl_dpy, config, (void*) xlib_win, NULL); Example Code for Wayland - // The internal representation of the egl_dpy, created below, remembers that // it was derived from a Wayland display. struct wl_display *wl_dpy = wl_display_connect(NULL); EGLDisplay *egl_dpy = eglGetPlatformDisplay(EGL_PLATFORM_WAYLAND, wl_dpy); EGLConfig config; eglChooseConfig(egl_dpy, config, ...); // Since egl_dpy remembers that it was derived from an Wayland display, when // creating the EGLSurface below libEGL internally casts the // `(void*) wl_win` to `struct wl_egl_window*`. struct wl_egl_window *wl_win = wl_egl_window_create(...); EGLSurface egl_surface = eglCreateWindowSurface(egl_dpy, config, (void*) wl_win, NULL); Hi, is it possible to build a binary, that will use this extension if it is present in whatever libEGL is available at runtime, and if it is not, fall back gracefully to using the core eglGetDisplay()? Or is this specifically not supported, since I cannot see a way for runtime detection of this extension? Or is the runtime detection left unspecified, so one could do it with e.g. getting eglGetPlatformDisplay via dlsym()? Just a question that popped in my mind, since there were no comments toward that topic. Thanks for asking this question. I neglected to address the topic of runtime detection, despite it coming it up in face-to-face conversation multiple times
Re: [Mesa-dev] [RFC] New EGL extension: EGL_EXT_platform_display
On Tue, 19 Feb 2013 08:20:51 -0800 Chad Versace chad.vers...@linux.intel.com wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I'm seeking feedback on an EGL extension that I'm drafting. The ideas have already been discussed at Khronos meetings to a good reception, but I want feedback from Mesa developers too. Summary - --- The extension, tentatively named EGL_EXT_platform_display, enables EGL clients to specify to which platform (X11, Wayland, gbm, etc) an EGL resource (EGLDisplay, EGLSurface, etc) belongs when the resource is derived from a platform-native type. As a corollary, the extension enables the creation of EGL resources from different platforms within a single process. Feedback - I'd like to hear feeback about the details below. Do you see any potential problems? Is it lacking a feature that you believe should be present? Details - --- The draft extension defines the following new functions: // This is the extenion's key function. // EGLDisplay eglGetPlatformDisplayEXT(EGLenum platform, void *native_display); // The two eglCreate functions below differ from their core counterparts // only in their signature. The EGLNative types are replaced with void*. // This makes the signature agnostic to which platform the native resource // belongs. EGLSurface eglCreatePlatformWindowSurfaceEXT(EGLDisplay dpy, EGLConfig config, void *native_window, const EGLint *attrib_list); EGLSurface eglCreatePlatformPixmapSurface(EGLDisplay dpy, EGLConfig config, void *native_pixmap, const EGLint *attrib_list); Valid values for `platform` are defined by layered extensions. For example, EGL_EXT_platform_x11 defines EGL_PLATFORM_X11, and EGL_EXT_platform_wayland defines EGL_PLATFORM_WAYLAND. Also, the layered extensions specify which native types should be passed as the native parameters. For example, EGL_EXT_platform_wayland specifies that, when calling eglCreatePlatformWindowSurfaceEXT with a display that was derived from a Wayland display, then the native_window parameter must be `struct wl_egl_window*`. Analogously, EGL_EXT_platform_x11 specifies that native_window must be `Window*`. Example Code for X11 - // The internal representation of the egl_dpy, created below, remembers that // it was derived from an Xlib display. Display *xlib_dpy = XOpenDisplay(NULL); EGLDisplay *egl_dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_X11, xlib_dpy); EGLConfig config; eglChooseConfig(egl_dpy, config, ...); // Since egl_dpy remembers that it was derived from an Xlib display, when // creating the EGLSurface below libEGL internally casts the // `(void*) xlib_win` to `Window*`. Window xlib_win = XCreateWindow(xlib_dpy, ...); EGLSurface egl_surface = eglCreatePlatformWindowSurfaceEXT(egl_dpy, config, (void*) xlib_win, NULL); Example Code for Wayland - // The internal representation of the egl_dpy, created below, remembers that // it was derived from a Wayland display. struct wl_display *wl_dpy = wl_display_connect(NULL); EGLDisplay *egl_dpy = eglGetPlatformDisplay(EGL_PLATFORM_WAYLAND, wl_dpy); EGLConfig config; eglChooseConfig(egl_dpy, config, ...); // Since egl_dpy remembers that it was derived from an Wayland display, when // creating the EGLSurface below libEGL internally casts the // `(void*) wl_win` to `struct wl_egl_window*`. struct wl_egl_window *wl_win = wl_egl_window_create(...); EGLSurface egl_surface = eglCreateWindowSurface(egl_dpy, config, (void*) wl_win, NULL); Hi, is it possible to build a binary, that will use this extension if it is present in whatever libEGL is available at runtime, and if it is not, fall back gracefully to using the core eglGetDisplay()? Or is this specifically not supported, since I cannot see a way for runtime detection of this extension? Or is the runtime detection left unspecified, so one could do it with e.g. getting eglGetPlatformDisplay via dlsym()? Just a question that popped in my mind, since there were no comments toward that topic. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 2/2] intel: Support new dri image interface v2
) case __DRI_IMAGE_ATTRIB_HEIGHT: *value = image-region-height; return true; + case __DRI_IMAGE_ATTRIB_COMPONENTS: + if (image-planar_format == NULL) + return false; + *value = image-planar_format-components; + return true; default: return false; } @@ -393,11 +451,15 @@ intel_dup_image(__DRIimage *orig_image, void *loaderPrivate) } image-internal_format = orig_image-internal_format; + image-planar_format = orig_image-planar_format; image-dri_format = orig_image-dri_format; image-format = orig_image-format; image-offset = orig_image-offset; image-data= loaderPrivate; - + + memcpy(image-strides, orig_image-strides, sizeof(image-strides)); + memcpy(image-offsets, orig_image-offsets, sizeof(image-offsets)); + return image; } @@ -413,16 +475,72 @@ intel_validate_usage(__DRIimage *image, unsigned int use) } static __DRIimage * -intel_create_sub_image(__DRIimage *parent, - int width, int height, int dri_format, - int offset, int pitch, void *loaderPrivate) +intel_create_image_from_names(__DRIscreen *screen, + int width, int height, int fourcc, + int *names, int num_names, + int *strides, int *offsets, + void *loaderPrivate) { +struct intel_image_format *f = NULL; __DRIimage *image; -int cpp; +int i, index; + +if (screen == NULL || names == NULL || num_names != 1) +return NULL; + +for (i = 0; i ARRAY_SIZE(intel_image_formats); i++) { +if (intel_image_formats[i].fourcc == fourcc) { + f = intel_image_formats[i]; +} +} + +if (f == NULL) +return NULL; + +image = intel_create_image_from_name(screen, width, height, + __DRI_IMAGE_FORMAT_NONE, + names[0], strides[0], + loaderPrivate); + +if (image == NULL) +return NULL; + +image-planar_format = f; +for (i = 0; i f-nplanes; i++) { +index = f-planes[i].buffer_index; +image-offsets[index] = offsets[index]; +image-strides[index] = strides[index]; +} + +return image; +} + +static __DRIimage * +intel_from_planar(__DRIimage *parent, int plane, void *loaderPrivate) +{ +int width, height, offset, stride, dri_format, cpp, index, pitch; +struct intel_image_format *f; uint32_t mask_x, mask_y; +__DRIimage *image; + +if (parent == NULL || parent-planar_format == NULL) +return NULL; + +f = parent-planar_format; + +if (plane = f-nplanes) +return NULL; + +width = parent-region-width f-planes[plane].width_shift; +height = parent-region-height f-planes[plane].height_shift; +dri_format = f-planes[plane].dri_format; +index = f-planes[plane].buffer_index; +offset = parent-offsets[index]; +stride = parent-strides[index]; image = intel_allocate_image(dri_format, loaderPrivate); -cpp = _mesa_get_format_bytes(image-format); +cpp = _mesa_get_format_bytes(image-format); /* safe since no none format */ +pitch = stride / cpp; if (offset + height * cpp * pitch parent-region-bo-size) { _mesa_warning(NULL, intel_create_sub_image: subimage out of bounds); FREE(image); @@ -463,7 +581,8 @@ static struct __DRIimageExtensionRec intelImageExtension = { intel_query_image, intel_dup_image, intel_validate_usage, -intel_create_sub_image +intel_create_image_from_names, +intel_from_planar }; static const __DRIextension *intelScreenExtensions[] = { Tested-by: Pekka Paalanen ppaala...@gmail.com Date: 2012-08-31 EEST [17:43:58.976] weston 0.95.0 http://wayland.freedesktop.org/ Bug reports to: https://bugs.freedesktop.org/enter_bug.cgi?product=weston Build: 0.95.0-139-g5418a90 toytoolkit: don't ignore resizes with negative width or height (2012-08-16 10:33:56 -0400) [17:43:58.976] OS: Linux, 3.1.7, #2 SMP PREEMPT Thu Jan 5 10:42:13 EET 2012, x86_64 [17:43:58.977] Loading module '/home/pq/local/lib/weston/drm-backend.so' [17:43:58.977] initializing drm backend [17:43:59.326] using /dev/dri/card0 Mesa: Initializing x86-64 optimizations [17:43:59.336] EGL version: 1.4 (DRI2) [17:43:59.337] EGL vendor: Mesa Project [17:43:59.337] EGL client APIs: OpenGL OpenGL_ES OpenGL_ES2 [17:43:59.337] EGL extensions: EGL_MESA_drm_image EGL_WL_bind_wayland_display EGL_KHR_image_base EGL_KHR_gl_renderbuffer_image EGL_KHR_surfaceless_context EGL_KHR_create_context [17:43:59.337] GL version: OpenGL ES 2.0 Mesa 9.0-devel (git-caec6ce) [17:43:59.337] GLSL version
Re: [Mesa-dev] [PATCH weston] compositor: add support for OES_EGL_image_external
On Thu, 16 Aug 2012 17:28:20 -0500 Rob Clark rob.cl...@linaro.org wrote: From: Rob Clark r...@ti.com In cases where the GPU can natively handle certain YUV formats, eglQueryWaylandBufferWL() can return the value EGL_TEXTURE_EXTERNAL_WL and the compositor will treat the buffer as a single egl-image-external. See: http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image_external.txt v1: original v2: rename EGL_TEXTURE_EXTERNAL_OES - EGL_TEXTURE_EXTERNAL_WL and query for the extension Signed-off-by: Rob Clark r...@ti.com Looks good to me now. The only thing I could still say is that maybe it would be nice to log a warning, if the extension is not detected, but querySurface still returns EGL_TEXTURE_EXTERNAL_WL. Hmm, that reminds me, I don't think we have any EGL or GL error reporting in Weston... but that's a whole different story. Thanks, pq --- src/compositor.c | 47 +-- src/compositor.h |2 ++ src/weston-egl-ext.h |1 + 3 files changed, 40 insertions(+), 10 deletions(-) diff --git a/src/compositor.c b/src/compositor.c index b2a3ae9..5c6782e 100644 --- a/src/compositor.c +++ b/src/compositor.c @@ -719,14 +719,14 @@ ensure_textures(struct weston_surface *es, int num_textures) for (i = es-num_textures; i num_textures; i++) { glGenTextures(1, es-textures[i]); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - glTexParameteri(GL_TEXTURE_2D, + glBindTexture(es-target, es-textures[i]); + glTexParameteri(es-target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); - glTexParameteri(GL_TEXTURE_2D, + glTexParameteri(es-target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); } es-num_textures = num_textures; - glBindTexture(GL_TEXTURE_2D, 0); + glBindTexture(es-target, 0); } static void @@ -771,6 +771,7 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) if (wl_buffer_is_shm(buffer)) { es-pitch = wl_shm_buffer_get_stride(buffer) / 4; es-shader = ec-texture_shader_rgba; + es-target = GL_TEXTURE_2D; ensure_textures(es, 1); glBindTexture(GL_TEXTURE_2D, es-textures[0]); @@ -786,7 +787,7 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) for (i = 0; i es-num_images; i++) ec-destroy_image(ec-egl_display, es-images[i]); es-num_images = 0; - + es-target = GL_TEXTURE_2D; switch (format) { case EGL_TEXTURE_RGB: case EGL_TEXTURE_RGBA: @@ -794,6 +795,11 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) num_planes = 1; es-shader = ec-texture_shader_rgba; break; + case EGL_TEXTURE_EXTERNAL_WL: + num_planes = 1; + es-target = GL_TEXTURE_EXTERNAL_OES; + es-shader = ec-texture_shader_egl_external; + break; case EGL_TEXTURE_Y_UV_WL: num_planes = 2; es-shader = ec-texture_shader_y_uv; @@ -824,8 +830,8 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) es-num_images++; glActiveTexture(GL_TEXTURE0 + i); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - ec-image_target_texture_2d(GL_TEXTURE_2D, + glBindTexture(es-target, es-textures[i]); + ec-image_target_texture_2d(es-target, es-images[i]); } @@ -942,9 +948,9 @@ weston_surface_draw(struct weston_surface *es, struct weston_output *output, for (i = 0; i es-num_textures; i++) { glUniform1i(es-shader-tex_uniforms[i], i); glActiveTexture(GL_TEXTURE0 + i); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter); - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter); + glBindTexture(es-target, es-textures[i]); + glTexParameteri(es-target, GL_TEXTURE_MIN_FILTER, filter); + glTexParameteri(es-target, GL_TEXTURE_MAG_FILTER, filter); } v = ec-vertices.data; @@ -2842,6 +2848,19 @@ static const char texture_fragment_shader_rgba[] = FRAGMENT_SHADER_EXIT }\n; +static const char texture_fragment_shader_egl_external[] = + #extension GL_OES_EGL_image_external : require\n + precision mediump float;\n + varying vec2 v_texcoord;\n + uniform
Re: [Mesa-dev] [PATCH weston 2/2] compositor: add support for OES_EGL_image_external
On Mon, 13 Aug 2012 18:59:33 -0500 Rob Clark r...@ti.com wrote: On Mon, Aug 13, 2012 at 5:39 PM, Rob Clark rob.cl...@linaro.org wrote: From: Rob Clark r...@ti.com In cases where the GPU can natively handle certain YUV formats, eglQueryWaylandBufferWL() can return the value EGL_TEXTURE_EXTERNAL_OES and the compositor will treat the buffer as a single egl-image-external. See: http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image_external.txt Signed-off-by: Rob Clark r...@ti.com --- src/compositor.c | 42 -- src/compositor.h |2 ++ src/weston-egl-ext.h |1 + 3 files changed, 35 insertions(+), 10 deletions(-) diff --git a/src/compositor.c b/src/compositor.c index b2a3ae9..08e575a 100644 --- a/src/compositor.c +++ b/src/compositor.c @@ -719,14 +719,14 @@ ensure_textures(struct weston_surface *es, int num_textures) for (i = es-num_textures; i num_textures; i++) { glGenTextures(1, es-textures[i]); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - glTexParameteri(GL_TEXTURE_2D, + glBindTexture(es-target, es-textures[i]); + glTexParameteri(es-target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); - glTexParameteri(GL_TEXTURE_2D, + glTexParameteri(es-target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); } es-num_textures = num_textures; - glBindTexture(GL_TEXTURE_2D, 0); + glBindTexture(es-target, 0); } static void @@ -771,6 +771,7 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) if (wl_buffer_is_shm(buffer)) { es-pitch = wl_shm_buffer_get_stride(buffer) / 4; es-shader = ec-texture_shader_rgba; + es-target = GL_TEXTURE_2D; ensure_textures(es, 1); glBindTexture(GL_TEXTURE_2D, es-textures[0]); @@ -786,7 +787,7 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) for (i = 0; i es-num_images; i++) ec-destroy_image(ec-egl_display, es-images[i]); es-num_images = 0; - + es-target = GL_TEXTURE_2D; switch (format) { case EGL_TEXTURE_RGB: case EGL_TEXTURE_RGBA: @@ -794,6 +795,11 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) num_planes = 1; es-shader = ec-texture_shader_rgba; break; + case EGL_TEXTURE_EXTERNAL_OES: Kristian pointed out to me on IRC that this should be called something like EGL_TEXTURE_EXTERNAL_WL (or EGL_TEXTURE_EXTERNAL_OES_WL?), so I'll resubmit these patches.. + num_planes = 1; + es-target = GL_TEXTURE_EXTERNAL_OES; + es-shader = ec-texture_shader_egl_external; + break; case EGL_TEXTURE_Y_UV_WL: num_planes = 2; es-shader = ec-texture_shader_y_uv; @@ -824,8 +830,8 @@ weston_surface_attach(struct wl_surface *surface, struct wl_buffer *buffer) es-num_images++; glActiveTexture(GL_TEXTURE0 + i); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - ec-image_target_texture_2d(GL_TEXTURE_2D, + glBindTexture(es-target, es-textures[i]); + ec-image_target_texture_2d(es-target, es-images[i]); } @@ -942,9 +948,9 @@ weston_surface_draw(struct weston_surface *es, struct weston_output *output, for (i = 0; i es-num_textures; i++) { glUniform1i(es-shader-tex_uniforms[i], i); glActiveTexture(GL_TEXTURE0 + i); - glBindTexture(GL_TEXTURE_2D, es-textures[i]); - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter); - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter); + glBindTexture(es-target, es-textures[i]); + glTexParameteri(es-target, GL_TEXTURE_MIN_FILTER, filter); + glTexParameteri(es-target, GL_TEXTURE_MAG_FILTER, filter); } v = ec-vertices.data; @@ -2842,6 +2848,19 @@ static const char texture_fragment_shader_rgba[] = FRAGMENT_SHADER_EXIT }\n; +static const char texture_fragment_shader_egl_external[] = + #extension GL_OES_EGL_image_external : require\n + precision mediump float;\n + varying vec2 v_texcoord;\n + uniform
Re: [Mesa-dev] [PATCH] gbm : Fix build for wayland include
On Mon, 30 Jul 2012 12:02:06 -0400 Kristian Høgsberg hoegsb...@gmail.com wrote: On Thu, Jul 19, 2012 at 01:54:05PM +0900, Elvis Lee wrote: backends/gbm_dri.c fails to find wayland-server.h. Thanks, pushed. Yeah thanks, the build works for me too, now. Signed-off-by: Elvis Lee kwangwoong@lge.com --- src/gbm/Makefile.am |1 + 1 file changed, 1 insertion(+) diff --git a/src/gbm/Makefile.am b/src/gbm/Makefile.am index 5ca2839..f079da1 100644 --- a/src/gbm/Makefile.am +++ b/src/gbm/Makefile.am @@ -22,6 +22,7 @@ libgbm_la_LIBADD = $(LIBUDEV_LIBS) $(DLOPEN_LIBS) if HAVE_EGL_PLATFORM_WAYLAND AM_CPPFLAGS = -DHAVE_WAYLAND_PLATFORM +AM_CFLAGS += $(WAYLAND_CFLAGS) endif ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] mesa: loosen small matrix determinant check
On Tue, 24 Jul 2012 11:31:59 -0600 Brian Paul bri...@vmware.com wrote: When computing a matrix inverse, if the determinant is too small we could hit a divide by zero. There's a check to prevent this (we basically give up on computing the inverse and return the identity matrix.) This patches loosens this test to fix a lighting bug reported by Lars Henning Wendt. NOTE: This is a candidate for the 8.0 branch. --- src/mesa/math/m_matrix.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/src/mesa/math/m_matrix.c b/src/mesa/math/m_matrix.c index 02aedba..ef377ee 100644 --- a/src/mesa/math/m_matrix.c +++ b/src/mesa/math/m_matrix.c @@ -513,7 +513,7 @@ static GLboolean invert_matrix_3d_general( GLmatrix *mat ) det = pos + neg; - if (det*det 1e-25) + if (det 1e-25) return GL_FALSE; det = 1.0F / det; Hi, just a fly-by question; doesn't that break if determinant is negative? I.e. reflection transformations. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] intel: Change vendor string to Intel® Open Source Technology Center.
On Thu, 31 May 2012 16:19:17 -0700 Kenneth Graunke kenn...@whitecape.org wrote: Tungsten Graphics has not existed for several years, and the majority of ongoing development and support is done by Intel. I chose to include Open Source Technology Center to distinguish it from, say, the closed source Windows OpenGL driver. The one downside to this patch is that applications that pattern match against Intel may start applying workarounds meant for the Windows driver. However, it does seem like the right thing to do. This does change oglconform behavior. Acked-by: Eric Anholt e...@anholt.net Acked-by: Ian Romanick ian.d.roman...@intel.com Cc: Eugeni Dodonov eug...@dodonov.net Signed-off-by: Kenneth Graunke kenn...@whitecape.org --- src/mesa/drivers/dri/intel/intel_context.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/mesa/drivers/dri/intel/intel_context.c b/src/mesa/drivers/dri/intel/intel_context.c index 9deb4ca..712acb8 100644 --- a/src/mesa/drivers/dri/intel/intel_context.c +++ b/src/mesa/drivers/dri/intel/intel_context.c @@ -72,7 +72,7 @@ intelGetString(struct gl_context * ctx, GLenum name) switch (name) { case GL_VENDOR: - return (GLubyte *) Tungsten Graphics, Inc; + return (GLubyte *) Intel® Open Source Technology Center; Hi, that is an utf-8 character there, right? Is that really appropriate, given the API and all? I don't know about character encoding stuff, but that looks surprising. Just wondering... Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] intel: Change vendor string to Intel® Open Source Technology Center.
On Fri, 1 Jun 2012 09:18:44 +0300 Pekka Paalanen ppaala...@gmail.com wrote: On Thu, 31 May 2012 16:19:17 -0700 Kenneth Graunke kenn...@whitecape.org wrote: Tungsten Graphics has not existed for several years, and the majority of ongoing development and support is done by Intel. I chose to include Open Source Technology Center to distinguish it from, say, the closed source Windows OpenGL driver. The one downside to this patch is that applications that pattern match against Intel may start applying workarounds meant for the Windows driver. However, it does seem like the right thing to do. This does change oglconform behavior. Acked-by: Eric Anholt e...@anholt.net Acked-by: Ian Romanick ian.d.roman...@intel.com Cc: Eugeni Dodonov eug...@dodonov.net Signed-off-by: Kenneth Graunke kenn...@whitecape.org --- src/mesa/drivers/dri/intel/intel_context.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/mesa/drivers/dri/intel/intel_context.c b/src/mesa/drivers/dri/intel/intel_context.c index 9deb4ca..712acb8 100644 --- a/src/mesa/drivers/dri/intel/intel_context.c +++ b/src/mesa/drivers/dri/intel/intel_context.c @@ -72,7 +72,7 @@ intelGetString(struct gl_context * ctx, GLenum name) switch (name) { case GL_VENDOR: - return (GLubyte *) Tungsten Graphics, Inc; + return (GLubyte *) Intel® Open Source Technology Center; Hi, that is an utf-8 character there, right? Is that really appropriate, given the API and all? I don't know about character encoding stuff, but that looks surprising. Just wondering... Oops, sorry, I missed the other replies. Let me rephrase: is it ok by the GL spec to return non-ASCII strings here? Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] wayland-drm: remove wl_buffer.damage
This is a related fix for the Wayland change: commit 83685c506e76212ae4e5cb722205d98d3b0603b9 Author: Kristian Høgsberg k...@bitplanet.net Date: Mon Mar 26 16:33:24 2012 -0400 Remove wl_buffer.damage and simplify shm implementation Apparently, this should also fix a memory leak. When wl_buffer.damage was removed from Wayland and Mesa was not fixed, wl_buffer.destroy ended up in the (empty) damage function instead of calling wl_resource_destroy(). Spotted during build as: CC wayland-drm-protocol.lo wayland-drm.c:80:2: warning: initialization from incompatible pointer type wayland-drm.c:82:1: warning: excess elements in struct initializer wayland-drm.c:82:1: warning: (near initialization for 'drm_buffer_interface') Signed-off-by: Pekka Paalanen ppaala...@gmail.com --- src/egl/wayland/wayland-drm/wayland-drm.c |7 --- 1 files changed, 0 insertions(+), 7 deletions(-) diff --git a/src/egl/wayland/wayland-drm/wayland-drm.c b/src/egl/wayland/wayland-drm/wayland-drm.c index 42e6788..101b2c4 100644 --- a/src/egl/wayland/wayland-drm/wayland-drm.c +++ b/src/egl/wayland/wayland-drm/wayland-drm.c @@ -54,12 +54,6 @@ struct wl_drm_buffer { }; static void -buffer_damage(struct wl_client *client, struct wl_resource *buffer, - int32_t x, int32_t y, int32_t width, int32_t height) -{ -} - -static void destroy_buffer(struct wl_resource *resource) { struct wl_drm_buffer *buffer = resource-data; @@ -77,7 +71,6 @@ buffer_destroy(struct wl_client *client, struct wl_resource *resource) } const static struct wl_buffer_interface drm_buffer_interface = { - buffer_damage, buffer_destroy }; -- 1.7.3.4 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH] linker: Fix memory leak in count_uniform_size::visit_field.
On Thu, 29 Mar 2012 14:16:10 -0700 Ian Romanick i...@freedesktop.org wrote: On 03/29/2012 10:41 AM, Kenneth Graunke wrote: On 03/28/2012 11:43 PM, Vinson Lee wrote: Fixes a Coverity resource leak defect. NOTE: This is a candidate for the 8.0 branch. Signed-off-by: Vinson Leev...@freedesktop.org --- src/glsl/link_uniforms.cpp | 1 + 1 file changed, 1 insertion(+) diff --git a/src/glsl/link_uniforms.cpp b/src/glsl/link_uniforms.cpp index 613c9b7..86717f9 100644 --- a/src/glsl/link_uniforms.cpp +++ b/src/glsl/link_uniforms.cpp @@ -175,6 +175,7 @@ private: char *key = strdup(name); this-map-put(this-num_active_uniforms, key); + free(key); /* Each leaf uniform occupies one entry in the list of active * uniforms. It looks like string_to_uint_map::put (hash_table.h:247) already calls strdup() on the key. So I think we should just do: this-map-put(this-num_active_uniforms, name); and not bother duplicating it twice. I think that's correct. I seem to recall a similar case being fixed a few months ago. It was exactly this case with a suggested solution as Kenneth's above, and no, it wasn't fixed. I could not produce a non-regressing piglit test soon enough, had to move on, and didn't push it. My piglit results were non-deterministic. Glad to see it's getting fixed now. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] patches (Re: undefined C++ GLSL symbol error)
On Wed, 1 Feb 2012 06:14:40 -0800 (PST) Jose Fonseca jfons...@vmware.com wrote: - Original Message - On Wed, 1 Feb 2012 05:03:21 -0800 (PST) Jose Fonseca jfons...@vmware.com wrote: I wonder if people have use any tools to facility applying patches posted to mesa, as for me locate, save, and applying them takes a lot of time, and more often than not fails due to white space munging. I barely can keep up with the review requests anyway. If a patch is malformatted to begin with, I can't help it. But, if your email client happens to support running external commands on mail files, I have a script that applies them to a given local git repo, launching a terminal window for it and optionally running it though the kernel checkpatch.pl. On any errors, it drops you to a shell. Personally I use claws-mail, and could share my setup, if anyone is interested in homebrewn solution for the save apply part. I use webmail (mostly due to namely calendar integration, quick search of all email, easy access anywhere, etc), which makes this difficult. So I probably would need to tag and/or move the mail messages w/ the patches to an IMAP folder, and then have the fetchmail + apply + etc on the script which I'd run from a git checkout. I wouldn't mind take a look to your script. Attached. The scripts rely on me selecting the emails in order, and then clicking a menu item that runs the UI script with the email file names as arguments. Thanks, pq claws-git-am.bash Description: Binary data claws-git-am-UI.bash Description: Binary data ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] patches (Re: undefined C++ GLSL symbol error)
On Wed, 1 Feb 2012 05:03:21 -0800 (PST) Jose Fonseca jfons...@vmware.com wrote: I wonder if people have use any tools to facility applying patches posted to mesa, as for me locate, save, and applying them takes a lot of time, and more often than not fails due to white space munging. I barely can keep up with the review requests anyway. If a patch is malformatted to begin with, I can't help it. But, if your email client happens to support running external commands on mail files, I have a script that applies them to a given local git repo, launching a terminal window for it and optionally running it though the kernel checkpatch.pl. On any errors, it drops you to a shell. Personally I use claws-mail, and could share my setup, if anyone is interested in homebrewn solution for the save apply part. Thanks, pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/2] linker: fix strdup memory leak
On Mon, 19 Dec 2011 13:24:20 -0800 Ian Romanick i...@freedesktop.org wrote: On 12/19/2011 05:23 AM, Pekka Paalanen wrote: On Fri, 16 Dec 2011 10:46:11 -0800 Ian Romanicki...@freedesktop.org wrote: On 12/14/2011 11:26 PM, Pekka Paalanen wrote: string_to_uint_map::put() already does a strdup() for the key argument, so we leak the memory allocated by strdup() in link_uniforms.cpp. Remove the extra strdup(), fixes a few Valgrind detected leaks. Have you run piglit on this? I seem to recall adding the extra strdup for a reason. The hash table keeps a copy of the key pointer passed to it, and the underlying object may be deleted before the hash table is deleted. This can happen if the back-end optimizes some uniforms away after the linker has generated the list of active uniforms. I'm pretty sure there were one or two test cases that hit this on i965. Sorry, I didn't. Finally got piglit running, though readPixSanity failed on stencil values. Anyway it's running all.tests now for the upstream Mesa master, and then I'll run it with my patches. Oh, this is Sandybridge, btw. --- a/src/glsl/link_uniforms.cpp +++ b/src/glsl/link_uniforms.cpp @@ -174,8 +174,7 @@ private: if (this-map-get(id, name)) return; - char *key = strdup(name); - this-map-put(this-num_active_uniforms, key); + this-map-put(this-num_active_uniforms, name); /* Each leaf uniform occupies one entry in the list of active * uniforms. The whole visit_field() function is this: virtual void visit_field(const glsl_type *type, const char *name) { assert(!type-is_record()); assert(!(type-is_array() type-fields.array-is_record())); /* Count the number of samplers regardless of whether the uniform is * already in the hash table. The hash table prevents adding the same * uniform for multiple shader targets, but in this case we want to * count it for each shader target. */ const unsigned values = values_for_type(type); if (type-contains_sampler()) { this-num_shader_samplers += type-is_array() ? type-array_size() : 1; } else { /* Accumulate the total number of uniform slots used by this shader. * Note that samplers do not count against this limit because they * don't use any storage on current hardware. */ this-num_shader_uniform_components += values; } /* If the uniform is already in the map, there's nothing more to do. */ unsigned id; if (this-map-get(id, name)) return; char *key = strdup(name); this-map-put(this-num_active_uniforms, key); /* Each leaf uniform occupies one entry in the list of active * uniforms. */ this-num_active_uniforms++; this-num_values += values; } The variable 'key' is not used anywhere else but passed to string_to_uint_map::put(). That function in its whole is: void put(unsigned value, const char *key) { /* The low-level hash table structure returns NULL if key is not in the * hash table. However, users of this map might want to store zero as a * valid value in the table. Bias the value by +1 so that a * user-specified zero is stored as 1. This enables ::get to tell the * difference between a user-specified zero (returned as 1 by * hash_table_find) and the key not in the table (returned as 0 by * hash_table_find). * * The net effect is that we can't store UINT_MAX in the table. This is * because UINT_MAX+1 = 0. */ assert(value != UINT_MAX); hash_table_replace(this-ht, (void *) (intptr_t) (value + 1), strdup(key)); } Therefore, as far as I see, 'key' is not used for anything else than passed again through strdup(), which would effectively be: Ah yes, I forgot that I added that in string_to_uint_map::put. You are quite correct. Okay, the original patch is: Reviewed-by: Ian Romanick ian.d.roman...@intel.com You were right, it does regress in piglit. This particular patch causes varray-disabled to fail in quick.tests. From 4 runs of rebuilt Mesa, 3 fail in varray-disabled, and the remaining one fails in EXT_TIMER_QUERY:time-elapsed instead. Some of the rebuilds were from clean checkout (git clean -dxf), some simply by adding the patch on top of a previous build that produced the reference piglit results. Unfortunately I have to move on for now, so I'm not pushing this patch. Thanks. hash_table_replace(..., ..., strdup(strdup(name))); I don't see any case where this could not be a leak, or prevent a bug. What do I miss? Is there somewhere another put
Re: [Mesa-dev] [PATCH] mesa: fix a leak in _mesa_delete_texture_image()
On Fri, 16 Dec 2011 08:42:01 -0700 Brian Paul bri...@vmware.com wrote: On 12/16/2011 07:17 AM, Pekka Paalanen wrote: Valgrind complains about a definitely lost block allocated in intelNewTextureImage(). This leak was apparently created by 6e0f9001fe3fb191c2928bd09aa9e9d05ddf4ea9, mesa: move gl_texture_image::Data, RowStride, ImageOffsets to swrast, as it removes the free() from _mesa_delete_texture_image(). Put the free() back, fixes a Valgrind error. Signed-off-by: Pekka Paalanenppaala...@gmail.com Cc: Brian Paulbri...@vmware.com --- src/mesa/main/teximage.c |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/src/mesa/main/teximage.c b/src/mesa/main/teximage.c index 8a002b6..6318cb1 100644 --- a/src/mesa/main/teximage.c +++ b/src/mesa/main/teximage.c @@ -604,6 +604,7 @@ _mesa_delete_texture_image(struct gl_context *ctx, */ ASSERT(ctx-Driver.FreeTextureImageBuffer); ctx-Driver.FreeTextureImageBuffer( ctx, texImage ); + free(texImage); } Reviewed-by: Brian Paul bri...@vmware.com Can you commit this? Verified with piglit quick.tests to be regression-free. Pushed, thanks. - pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/2] linker: fix strdup memory leak
On Fri, 16 Dec 2011 10:46:11 -0800 Ian Romanick i...@freedesktop.org wrote: On 12/14/2011 11:26 PM, Pekka Paalanen wrote: string_to_uint_map::put() already does a strdup() for the key argument, so we leak the memory allocated by strdup() in link_uniforms.cpp. Remove the extra strdup(), fixes a few Valgrind detected leaks. Have you run piglit on this? I seem to recall adding the extra strdup for a reason. The hash table keeps a copy of the key pointer passed to it, and the underlying object may be deleted before the hash table is deleted. This can happen if the back-end optimizes some uniforms away after the linker has generated the list of active uniforms. I'm pretty sure there were one or two test cases that hit this on i965. Sorry, I didn't. Finally got piglit running, though readPixSanity failed on stencil values. Anyway it's running all.tests now for the upstream Mesa master, and then I'll run it with my patches. Oh, this is Sandybridge, btw. --- a/src/glsl/link_uniforms.cpp +++ b/src/glsl/link_uniforms.cpp @@ -174,8 +174,7 @@ private: if (this-map-get(id, name)) return; - char *key = strdup(name); - this-map-put(this-num_active_uniforms, key); + this-map-put(this-num_active_uniforms, name); /* Each leaf uniform occupies one entry in the list of active * uniforms. The whole visit_field() function is this: virtual void visit_field(const glsl_type *type, const char *name) { assert(!type-is_record()); assert(!(type-is_array() type-fields.array-is_record())); /* Count the number of samplers regardless of whether the uniform is * already in the hash table. The hash table prevents adding the same * uniform for multiple shader targets, but in this case we want to * count it for each shader target. */ const unsigned values = values_for_type(type); if (type-contains_sampler()) { this-num_shader_samplers += type-is_array() ? type-array_size() : 1; } else { /* Accumulate the total number of uniform slots used by this shader. * Note that samplers do not count against this limit because they * don't use any storage on current hardware. */ this-num_shader_uniform_components += values; } /* If the uniform is already in the map, there's nothing more to do. */ unsigned id; if (this-map-get(id, name)) return; char *key = strdup(name); this-map-put(this-num_active_uniforms, key); /* Each leaf uniform occupies one entry in the list of active * uniforms. */ this-num_active_uniforms++; this-num_values += values; } The variable 'key' is not used anywhere else but passed to string_to_uint_map::put(). That function in its whole is: void put(unsigned value, const char *key) { /* The low-level hash table structure returns NULL if key is not in the * hash table. However, users of this map might want to store zero as a * valid value in the table. Bias the value by +1 so that a * user-specified zero is stored as 1. This enables ::get to tell the * difference between a user-specified zero (returned as 1 by * hash_table_find) and the key not in the table (returned as 0 by * hash_table_find). * * The net effect is that we can't store UINT_MAX in the table. This is * because UINT_MAX+1 = 0. */ assert(value != UINT_MAX); hash_table_replace(this-ht, (void *) (intptr_t) (value + 1), strdup(key)); } Therefore, as far as I see, 'key' is not used for anything else than passed again through strdup(), which would effectively be: hash_table_replace(..., ..., strdup(strdup(name))); I don't see any case where this could not be a leak, or prevent a bug. What do I miss? Is there somewhere another put() that overwrites or overloads this version of put()? Thanks, pq ps. As for the other patch, thanks for the advice on how to fix it properly. I don't think I can do that this year, though. ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] [PATCH 1/2] linker: fix strdup memory leak
On Mon, 19 Dec 2011 11:02:31 -0800 Kenneth Graunke kenn...@whitecape.org wrote: On 12/19/2011 05:23 AM, Pekka Paalanen wrote: On Fri, 16 Dec 2011 10:46:11 -0800 Ian Romanick i...@freedesktop.org wrote: On 12/14/2011 11:26 PM, Pekka Paalanen wrote: string_to_uint_map::put() already does a strdup() for the key argument, so we leak the memory allocated by strdup() in link_uniforms.cpp. Remove the extra strdup(), fixes a few Valgrind detected leaks. Have you run piglit on this? I seem to recall adding the extra strdup for a reason. The hash table keeps a copy of the key pointer passed to it, and the underlying object may be deleted before the hash table is deleted. This can happen if the back-end optimizes some uniforms away after the linker has generated the list of active uniforms. I'm pretty sure there were one or two test cases that hit this on i965. Sorry, I didn't. Finally got piglit running, though readPixSanity failed on stencil values. Anyway it's running all.tests now for the upstream Mesa master, and then I'll run it with my patches. For future reference, you probably want to use quick.tests. It skips valgrind and doesn't iterate over every single visual, so it takes a fraction of the time but still runs basically all the tests. Thanks, I killed the first run after 17 hours. From the README I gathered that quick.tests was somehow lacking and all.tests would be preferable. Also, I had no idea of possible running times or what significantly there meant. - pq ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH] mesa: fix a leak in _mesa_delete_texture_image()
Valgrind complains about a definitely lost block allocated in intelNewTextureImage(). This leak was apparently created by 6e0f9001fe3fb191c2928bd09aa9e9d05ddf4ea9, mesa: move gl_texture_image::Data, RowStride, ImageOffsets to swrast, as it removes the free() from _mesa_delete_texture_image(). Put the free() back, fixes a Valgrind error. Signed-off-by: Pekka Paalanen ppaala...@gmail.com Cc: Brian Paul bri...@vmware.com --- src/mesa/main/teximage.c |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/src/mesa/main/teximage.c b/src/mesa/main/teximage.c index 8a002b6..6318cb1 100644 --- a/src/mesa/main/teximage.c +++ b/src/mesa/main/teximage.c @@ -604,6 +604,7 @@ _mesa_delete_texture_image(struct gl_context *ctx, */ ASSERT(ctx-Driver.FreeTextureImageBuffer); ctx-Driver.FreeTextureImageBuffer( ctx, texImage ); + free(texImage); } -- 1.7.3.4 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 1/2] linker: fix strdup memory leak
string_to_uint_map::put() already does a strdup() for the key argument, so we leak the memory allocated by strdup() in link_uniforms.cpp. Remove the extra strdup(), fixes a few Valgrind detected leaks. Signed-off-by: Pekka Paalanen ppaala...@gmail.com --- src/glsl/link_uniforms.cpp |3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diff --git a/src/glsl/link_uniforms.cpp b/src/glsl/link_uniforms.cpp index c7de480..f6094d7 100644 --- a/src/glsl/link_uniforms.cpp +++ b/src/glsl/link_uniforms.cpp @@ -174,8 +174,7 @@ private: if (this-map-get(id, name)) return; - char *key = strdup(name); - this-map-put(this-num_active_uniforms, key); + this-map-put(this-num_active_uniforms, name); /* Each leaf uniform occupies one entry in the list of active * uniforms. -- 1.7.3.4 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
[Mesa-dev] [PATCH 2/2] mesa: free gl_uniform_storage::name
parcel_out_uniform_storage::visit_field() assigns a strdup()'d string into gl_uniform_storage::name, but it is never freed. Free gl_uniform_storage::name, fixes some Valgrind reported memory leaks. Signed-off-by: Pekka Paalanen ppaala...@gmail.com --- src/mesa/main/shaderobj.c |4 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/src/mesa/main/shaderobj.c b/src/mesa/main/shaderobj.c index 454007f..2275430 100644 --- a/src/mesa/main/shaderobj.c +++ b/src/mesa/main/shaderobj.c @@ -39,6 +39,7 @@ #include program/prog_parameter.h #include program/hash_table.h #include ralloc.h +#include ../glsl/ir_uniform.h /**/ /*** Shader object functions***/ @@ -276,6 +277,9 @@ _mesa_clear_shader_program_data(struct gl_context *ctx, struct gl_shader_program *shProg) { if (shProg-UniformStorage) { + unsigned i; + for (i = 0; i shProg-NumUserUniformStorage; ++i) + free(shProg-UniformStorage[i].name); ralloc_free(shProg-UniformStorage); shProg-NumUserUniformStorage = 0; shProg-UniformStorage = NULL; -- 1.7.3.4 ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev
Re: [Mesa-dev] RFC: ctx-Driver.Map/UnmapTextureImage() hooks
On Fri, 15 Jul 2011 12:22:41 -0600 Brian Paul bri...@vmware.com wrote: On 07/15/2011 10:07 AM, Dave Airlie wrote: On Fri, Jul 15, 2011 at 4:10 AM, Brian Paulbrian.e.p...@gmail.com wrote: The map-texture-image-v4 branch that I just pushed implements this change. It really cleaned things up for the better and will lead to a few more follow-on improvements. There's no obvious regressions with swrast or the gallium drivers. I updated the intel driver code and tested i915 and it seems OK too, but I didn't do a full piglit run on it. I also updated the legacy r600 driver in case anyone cares but didn't do any testing of it. I didn't update any of the other legacy DRI drivers. Would anyone object if we simply stop building mach64, r128, unichrome, sis, etc? I'd be happy to remove those drivers altogether for that matter. we could EOL those in 7.11, and if anyone wants to ship them, they can just build 7.11 for them. Sounds good to me. I think we'd only keep the swrast, intel and maybe r300/r600 drivers. Opinions? Um, don't kill nouveau_vieux, please. -- Pekka Paalanen http://www.iki.fi/pq/ ___ mesa-dev mailing list mesa-dev@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/mesa-dev