[Bug 94675] issue with mpv using OGL output on latest xf86-video-ati

2016-03-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=94675

--- Comment #5 from John  ---
I saw you reverted the revert, so I tried it.
The latest Mesa and xf86-video-ati work fine again; I'm guessing this is now properly fixed.

Thank you!



Re: Major/Minor Opcodes of failed requests

2016-03-30 Thread Ingo Bürk
Hi Lloyd,

Adam already decoded the opcode for you. Just a quick Google search of
request name + "BadAlloc" gives at least a few results. It might be
worth checking those out. I'm not familiar with GLX, unfortunately.


Regards
Ingo

On 03/30/2016 08:38 PM, Lloyd Brown wrote:
> Ingo,
>
> Thank you for this.
>
> Just for clarification, are we talking about system RAM or the video card's RAM?
>
> The reason I ask is this.  Since we're an HPC lab, we do limit system
> memory via memory cgroups, based on what the user's job requested.  But
> since seeing your email, I've gone as high as 64GB in my request,
> verified that the cgroup reflected that, and the problem still
> occurred.  If we're talking about the video card's RAM, we don't
> artificially limit it at all, and the card in question is a Tesla K80,
> which has 2 GPUs, and 12GB of video RAM per GPU.
>
> I wonder if there's some other limit going on that I'm not aware of.
>
> Maybe it makes more sense to contact the ParaView software community at
> this point.  They may have a better idea where this could be going wrong.
>
> Thanks for the info, though.  It was exactly the sort of thing I was
> hoping for.
>
> Lloyd
>
>
>
>
> On 03/30/2016 12:18 PM, Ingo Bürk wrote:
>> Hi Lloyd,
>>
>> see here: http://www.x.org/wiki/Development/Documentation/Protocol/OpCodes/
>>
>> In your case you are trying to allocate way too much memory. This can
>> happen, for example, if you accidentally try to create enormously large
>> pixmaps. Of course there are many things that can cause this. Decoding the
>> opcode will help you debug it.
>>
>>
>> Regards
>> Ingo
>>
>> On 03/30/2016 06:03 PM, Lloyd Brown wrote:
>>> Can anyone help me understand where the error messages, especially the
>>> major and minor opcodes, come from in an error like this one?  Are these
>>> defined by Xorg, by the driver (Nvidia, in this case), or somewhere else
>>> entirely?
>>>
>>>> X Error of failed request:  BadAlloc (insufficient resources for
>>>> operation)
>>>>   Major opcode of failed request:  135 (GLX)
>>>>   Minor opcode of failed request:  34 ()
>>>>   Serial number of failed request:  26
>>>>   Current serial number in output stream:  27

>>> So, here's the background.  I'm launching Xorg to manage the GLX context
>>> for some processing applications.  When I use things like glxgears,
>>> glxspheres64 (from the VirtualGL project), glxinfo, or glmark2,
>>> everything works well.  But when I use the actual user application
>>> (pvserver, part of ParaView), it gives me this error shortly after I
>>> connect my ParaView frontend to the pvserver backend.
>>>
>>> Running the pvserver inside gdb, with a "break exit", lets me backtrace
>>> it, but all it really tells me is that it's occurring when the
>>> application is trying to establish its context.
>>>
>>> I can continue to dink around with it, but if anyone can at least point
>>> me in the right direction, that would be helpful.
>>>
>>> Thanks,
>>>


Re: Major/Minor Opcodes of failed requests

2016-03-30 Thread Lloyd Brown
Ingo,

Thank you for this.

Just for clarification, are we talking about system RAM or the video card's RAM?

The reason I ask is this.  Since we're an HPC lab, we do limit system
memory via memory cgroups, based on what the user's job requested.  But
since seeing your email, I've gone as high as 64GB in my request,
verified that the cgroup reflected that, and the problem still
occurred.  If we're talking about the video card's RAM, we don't
artificially limit it at all, and the card in question is a Tesla K80,
which has 2 GPUs, and 12GB of video RAM per GPU.

I wonder if there's some other limit going on that I'm not aware of.

Maybe it makes more sense to contact the ParaView software community at
this point.  They may have a better idea where this could be going wrong.

Thanks for the info, though.  It was exactly the sort of thing I was
hoping for.

Lloyd




On 03/30/2016 12:18 PM, Ingo Bürk wrote:
> Hi Lloyd,
>
> see here: http://www.x.org/wiki/Development/Documentation/Protocol/OpCodes/
>
> In your case you are trying to allocate way too much memory. This can
> happen, for example, if you accidentally try to create enormously large
> pixmaps. Of course there are many things that can cause this. Decoding the
> opcode will help you debug it.
>
>
> Regards
> Ingo
>
> On 03/30/2016 06:03 PM, Lloyd Brown wrote:
>> Can anyone help me understand where the error messages, especially the
>> major and minor opcodes, come from in an error like this one?  Are these
>> defined by Xorg, by the driver (Nvidia, in this case), or somewhere else
>> entirely?
>>
>>> X Error of failed request:  BadAlloc (insufficient resources for
>>> operation)
>>>   Major opcode of failed request:  135 (GLX)
>>>   Minor opcode of failed request:  34 ()
>>>   Serial number of failed request:  26
>>>   Current serial number in output stream:  27
>>>
>> So, here's the background.  I'm launching Xorg to manage the GLX context
>> for some processing applications.  When I use things like glxgears,
>> glxspheres64 (from the VirtualGL project), glxinfo, or glmark2,
>> everything works well.  But when I use the actual user application
>> (pvserver, part of ParaView), it gives me this error shortly after I
>> connect my ParaView frontend to the pvserver backend.
>>
>> Running the pvserver inside gdb, with a "break exit", lets me backtrace
>> it, but all it really tells me is that it's occurring when the
>> application is trying to establish its context.
>>
>> I can continue to dink around with it, but if anyone can at least point
>> me in the right direction, that would be helpful.
>>
>> Thanks,
>>

-- 
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu


Re: Major/Minor Opcodes of failed requests

2016-03-30 Thread Ingo Bürk
Hi Lloyd,

see here: http://www.x.org/wiki/Development/Documentation/Protocol/OpCodes/

In your case you are trying to allocate way too much memory. This can
happen, for example, if you accidentally try to create enormously large
pixmaps. Of course there are many things that can cause this. Decoding the
opcode will help you debug it.
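
To make that concrete, here is a minimal sketch (illustrative only, not from
this thread) that typically provokes exactly this error by requesting an
absurdly large pixmap:

#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr;

    if (!dpy)
        return 1;
    scr = DefaultScreen(dpy);
    /* 65535x65535 at a 32-bit depth is roughly 16 GiB of pixel data;
     * most servers will answer with BadAlloc. */
    XCreatePixmap(dpy, RootWindow(dpy, scr), 65535, 65535,
                  DefaultDepth(dpy, scr));
    XSync(dpy, False);  /* flush, so the error report actually arrives */
    XCloseDisplay(dpy);
    return 0;
}

With the default Xlib error handler this prints a report in the same format
as the one quoted below.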


Regards
Ingo

On 03/30/2016 06:03 PM, Lloyd Brown wrote:
> Can anyone help me understand where the error messages, especially the
> major and minor opcodes, come from in an error like this one?  Are these
> defined by Xorg, by the driver (Nvidia, in this case), or somewhere else
> entirely?
>
>> X Error of failed request:  BadAlloc (insufficient resources for
>> operation)
>>   Major opcode of failed request:  135 (GLX)
>>   Minor opcode of failed request:  34 ()
>>   Serial number of failed request:  26
>>   Current serial number in output stream:  27
>>
> So, here's the background.  I'm launching Xorg to manage the GLX context
> for some processing applications.  When I use things like glxgears,
> glxspheres64 (from the VirtualGL project), glxinfo, or glmark2,
> everything works well.  But when I use the actual user application
> (pvserver, part of ParaView), it gives me this error shortly after I
> connect my ParaView frontend to the pvserver backend.
>
> Running the pvserver inside gdb, with a "break exit", lets me backtrace
> it, but all it really tells me is that it's occurring when the
> application is trying to establish its context.
>
> I can continue to dink around with it, but if anyone can at least point
> me in the right direction, that would be helpful.
>
> Thanks,
>


Re: Major/Minor Opcodes of failed requests

2016-03-30 Thread Adam Jackson
On Wed, 2016-03-30 at 10:03 -0600, Lloyd Brown wrote:
> Can anyone help me understand where the error messages, especially the
> major and minor opcodes, come from in an error like this one?  Are these
> defined by Xorg, by the driver (Nvidia, in this case), or somewhere else
> entirely?

The major opcode space is split into two ranges.  Major opcodes from 0 to 127
are reserved for requests in the core protocol.  Major opcodes from 128
to 255 are assigned dynamically to each extension as it is registered;
the minor opcode then determines which extension request it is.
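
A client can also capture these fields itself; a minimal sketch of an Xlib
error handler (illustrative, not from this thread):

#include <X11/Xlib.h>
#include <stdio.h>

/* Log the opcodes of a failed request instead of letting Xlib's default
 * handler print a report and exit. */
static int
log_x_error(Display *dpy, XErrorEvent *err)
{
    char text[128];

    XGetErrorText(dpy, err->error_code, text, sizeof(text));
    fprintf(stderr, "X error: %s (major %d, minor %d, serial %lu)\n",
            text, err->request_code, err->minor_code, err->serial);
    return 0;   /* continue instead of aborting */
}

Install it early with XSetErrorHandler(log_x_error), before issuing any
requests.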

> > X Error of failed request:  BadAlloc (insufficient resources for
> > operation)
> >   Major opcode of failed request:  135 (GLX)

Xlib is polite enough to map the major opcode to the extension here.
You can also see the assignments for a particular server by running
xdpyinfo -queryExtensions.

> >   Minor opcode of failed request:  34 ()

GLX request 34 is X_GLXCreateContextAttribsARB. The easiest way to look
this up in general is to grep for the request number in the appropriate
extension header file, usually in /usr/include/X11/extensions, but for
GLX it's in /usr/include/GL.
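
In this case, for instance, the relevant define in GL/glxproto.h reads
roughly as follows (excerpt from memory; the value matches the error above):

#define X_GLXCreateContextAttribsARB 34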

The question, then, is why you'd be getting BadAlloc back from
glXCreateContextAttribsARB().
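
For reference, the client-side call that generates this request looks
roughly like the sketch below; the context version attributes are
placeholders, not values taken from ParaView:

#include <GL/glx.h>
#include <GL/glxext.h>

/* Sketch: create a context with explicit attributes.  A BadAlloc protocol
 * error surfaces through the Xlib error handler, not as a NULL return. */
static GLXContext
create_context(Display *dpy, GLXFBConfig cfg)
{
    static const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,   /* placeholder version */
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        None
    };
    PFNGLXCREATECONTEXTATTRIBSARBPROC create =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC) glXGetProcAddress(
            (const GLubyte *) "glXCreateContextAttribsARB");

    return create ? create(dpy, cfg, NULL, True, attribs) : NULL;
}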

- ajax

Major/Minor Opcodes of failed requests

2016-03-30 Thread Lloyd Brown
Can anyone help me understand where the error messages, especially the
major and minor opcodes, come from in an error like this one?  Are these
defined by Xorg, by the driver (Nvidia, in this case), or somewhere else
entirely?

> X Error of failed request:  BadAlloc (insufficient resources for
> operation)
>   Major opcode of failed request:  135 (GLX)
>   Minor opcode of failed request:  34 ()
>   Serial number of failed request:  26
>   Current serial number in output stream:  27
>

So, here's the background.  I'm launching Xorg to manage the GLX context
for some processing applications.  When I use things like glxgears,
glxspheres64 (from the VirtualGL project), glxinfo, or glmark2,
everything works well.  But when I use the actual user application
(pvserver, part of ParaView), it gives me this error shortly after I
connect my ParaView frontend to the pvserver backend.

Running the pvserver inside gdb, with a "break exit", lets me backtrace
it, but all it really tells me is that it's occurring when the
application is trying to establish its context.

I can continue to dink around with it, but if anyone can at least point
me in the right direction, that would be helpful.

Thanks,

-- 
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu


[PATCH xserver] glx: Fix computation of GLX_X_RENDERABLE fbconfig attribute

2016-03-30 Thread Adam Jackson
From the GLX spec:

"GLX_X_RENDERABLE is a boolean indicating whether X can be used to
render into a drawable created with the GLXFBConfig. This attribute
is True if the GLXFBConfig supports GLX windows and/or pixmaps."

Every backend was setting this to true unconditionally, and then the
core ignored that value and sent true unconditionally on its own. This
is broken for ARB_fbconfig_float and EXT_fbconfig_packed_float, which
only apply to pbuffers, which are not renderable from non-GLX APIs.

Instead compute GLX_X_RENDERABLE from the supported drawable types. The
dri backends were getting _that_ wrong too, so fix that as well.

This is not a functional change, as there are no mesa drivers that claim
to support __DRI_ATTRIB_{UNSIGNED_,}FLOAT_BIT yet.

Signed-off-by: Adam Jackson 
---
 glx/glxcmds.c  |  5 +++-
 glx/glxdri2.c  |  6 ++--
 glx/glxdricommon.c | 62 +++---
 glx/glxdricommon.h |  3 +-
 glx/glxdriswrast.c |  6 ++--
 glx/glxscreens.h   |  1 -
 hw/xquartz/GL/glcontextmodes.c |  1 -
 hw/xquartz/GL/visualConfigs.c  |  1 -
 hw/xwin/glx/indirect.c |  2 --
 9 files changed, 31 insertions(+), 56 deletions(-)

diff --git a/glx/glxcmds.c b/glx/glxcmds.c
index 0f0b714..f2faf99 100644
--- a/glx/glxcmds.c
+++ b/glx/glxcmds.c
@@ -1114,7 +1114,10 @@ DoGetFBConfigs(__GLXclientState * cl, unsigned screen)
 
 WRITE_PAIR(GLX_VISUAL_ID, modes->visualID);
 WRITE_PAIR(GLX_FBCONFIG_ID, modes->fbconfigID);
-WRITE_PAIR(GLX_X_RENDERABLE, GL_TRUE);
+WRITE_PAIR(GLX_X_RENDERABLE,
+   (modes->drawableType & (GLX_WINDOW_BIT | GLX_PIXMAP_BIT)
+? GL_TRUE
+: GL_FALSE));
 
 WRITE_PAIR(GLX_RGBA,
(modes->renderType & GLX_RGBA_BIT) ? GL_TRUE : GL_FALSE);
diff --git a/glx/glxdri2.c b/glx/glxdri2.c
index d1fc3f9..85e13c6 100644
--- a/glx/glxdri2.c
+++ b/glx/glxdri2.c
@@ -991,10 +991,8 @@ __glXDRIscreenProbe(ScreenPtr pScreen)
 
 initializeExtensions(&screen->base);
 
-screen->base.fbconfigs = glxConvertConfigs(screen->core, screen->driConfigs,
-   GLX_WINDOW_BIT |
-   GLX_PIXMAP_BIT |
-   GLX_PBUFFER_BIT);
+screen->base.fbconfigs = glxConvertConfigs(screen->core,
+   screen->driConfigs);
 
 options = xnfalloc(sizeof(GLXOptions));
 memcpy(options, GLXOptions, sizeof(GLXOptions));
diff --git a/glx/glxdricommon.c b/glx/glxdricommon.c
index 62cce13..f6c6fcd 100644
--- a/glx/glxdricommon.c
+++ b/glx/glxdricommon.c
@@ -122,14 +122,28 @@ setScalar(__GLXconfig * config, unsigned int attrib, unsigned int value)
 }
 }
 
+static Bool
+render_type_is_pbuffer_only(unsigned renderType)
+{
+/* The GL_ARB_color_buffer_float spec says:
+ *
+ * "Note that floating point rendering is only supported for
+ * GLXPbuffer drawables.  The GLX_DRAWABLE_TYPE attribute of the
+ * GLXFBConfig must have the GLX_PBUFFER_BIT bit set and the
+ * GLX_RENDER_TYPE attribute must have the GLX_RGBA_FLOAT_BIT set."
+ */
+return !!(renderType & (__DRI_ATTRIB_UNSIGNED_FLOAT_BIT
+| __DRI_ATTRIB_FLOAT_BIT));
+}
+
 static __GLXconfig *
 createModeFromConfig(const __DRIcoreExtension * core,
  const __DRIconfig * driConfig,
- unsigned int visualType, unsigned int drawableType)
+ unsigned int visualType)
 {
 __GLXDRIconfig *config;
 GLint renderType = 0;
-unsigned int attrib, value;
+unsigned int attrib, value, drawableType = GLX_PBUFFER_BIT;
 int i;
 
 config = calloc(1, sizeof *config);
@@ -173,8 +187,10 @@ createModeFromConfig(const __DRIcoreExtension * core,
 }
 }
 
+if (!render_type_is_pbuffer_only(renderType))
+drawableType |= GLX_WINDOW_BIT | GLX_PIXMAP_BIT;
+
 config->config.next = NULL;
-config->config.xRenderable = GL_TRUE;
 config->config.visualType = visualType;
 config->config.renderType = renderType;
 config->config.drawableType = drawableType;
@@ -183,23 +199,9 @@ createModeFromConfig(const __DRIcoreExtension * core,
 return &config->config;
 }
 
-static Bool
-render_type_is_pbuffer_only(unsigned renderType)
-{
-/* The GL_ARB_color_buffer_float spec says:
- *
- * "Note that floating point rendering is only supported for
- * GLXPbuffer drawables.  The GLX_DRAWABLE_TYPE attribute of the
- * GLXFBConfig must have the GLX_PBUFFER_BIT bit set and the
- * GLX_RENDER_TYPE attribute must have the GLX_RGBA_FLOAT_BIT set."
- */
-return !!(renderType & (__DRI_ATTRIB_UNSIGNED_FLOAT_BIT
-| __DRI_ATTRIB_FLOAT_BIT));
-}
-
 __GLXconfig *
 

Re: [PATCH xserver] glx: Use __glXInitExtensionEnableBits in all backends (v2)

2016-03-30 Thread Adam Jackson
On Wed, 2016-03-30 at 18:48 +0100, Jon Turney wrote:
> On 30/03/2016 16:06, Adam Jackson wrote:
> > 
> > On xquartz this enables SGI_make_current_read, which is a mostly
> > harmless lie as CGL doesn't implement it, as well as SGIX_pbuffer,
> > which
> > is fine because no pbuffer-enabled configs are created.
> > 
> > On xwin this enables SGIX_pbuffer and ARB_multisample in all cases.
> > Again this is harmless if the backend doesn't support the features,
> > since no fbconfigs will be created to expose them.
> > 
> > It also adds SGIX_visual_select_group to both xquartz and xwin.
> > Amusingly, both were filling in the appropriate field in the
> > fbconfig
> > already.
> > 
> > v2: Warn about missing WGL extensions (Emil)
> Thanks for adding that.
> 
> Please apply the attached to fix compilation.

Oh god. I'm awful.

remote: I: patch #78777 updated using rev 0a69c1e2fa0ea63b02fff98e68d9f56a369e882b.
remote: I: 1 patch(es) updated to state Accepted.
To ssh://git.freedesktop.org/git/xorg/xserver
   b08526e..0a69c1e  master -> master

- ajax

Re: [PATCH xserver] glx: Use __glXInitExtensionEnableBits in all backends (v2)

2016-03-30 Thread Jon Turney

On 30/03/2016 16:06, Adam Jackson wrote:

On xquartz this enables SGI_make_current_read, which is a mostly
harmless lie as CGL doesn't implement it, as well as SGIX_pbuffer, which
is fine because no pbuffer-enabled configs are created.

On xwin this enables SGIX_pbuffer and ARB_multisample in all cases.
Again this is harmless if the backend doesn't support the features,
since no fbconfigs will be created to expose them.

It also adds SGIX_visual_select_group to both xquartz and xwin.
Amusingly, both were filling in the appropriate field in the fbconfig
already.

v2: Warn about missing WGL extensions (Emil)


Thanks for adding that.

Please apply the attached to fix compilation.

From 2457bf3695212c46ef494863fbf1c774b4da9573 Mon Sep 17 00:00:00 2001
From: Jon Turney 
Date: Wed, 30 Mar 2016 18:31:38 +0100
Subject: [PATCH xserver] xwin/glx: Build fix for warnings about missing WGL
 extensions

Signed-off-by: Jon Turney 
---
 hw/xwin/glx/indirect.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/xwin/glx/indirect.c b/hw/xwin/glx/indirect.c
index 5d7ebf5..26832e6 100644
--- a/hw/xwin/glx/indirect.c
+++ b/hw/xwin/glx/indirect.c
@@ -634,7 +634,7 @@ glxWinScreenProbe(ScreenPtr pScreen)
 if (strstr(wgl_extensions, "WGL_ARB_make_current_read"))
 screen->has_WGL_ARB_make_current_read = TRUE;
 else
-LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_make_current_read\n")
+LogMessage(X_WARNING, "AIGLX: missing 
WGL_ARB_make_current_read\n");
 
 if (strstr(gl_extensions, "GL_WIN_swap_hint")) {
 __glXEnableExtension(screen->base.glx_enable_bits,
@@ -659,12 +659,12 @@ glxWinScreenProbe(ScreenPtr pScreen)
 if (strstr(wgl_extensions, "WGL_ARB_pbuffer"))
 screen->has_WGL_ARB_pbuffer = TRUE;
 else
-LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_pbuffer\n")
+LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_pbuffer\n");
 
 if (strstr(wgl_extensions, "WGL_ARB_multisample"))
 screen->has_WGL_ARB_multisample = TRUE;
 else
-LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_multisample\n")
+LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_multisample\n");
 
 screen->base.destroy = glxWinScreenDestroy;
 screen->base.createContext = glxWinCreateContext;
-- 
2.7.4


Re: [PATCH xserver 3/3] present: Only requeue if target MSC is not reached after an unflip

2016-03-30 Thread Martin Peres



On 28/03/16 05:24, Michel Dänzer wrote:

On 01.03.2016 20:26, Martin Peres wrote:

On 25/02/16 17:28, Adam Jackson wrote:

On Thu, 2016-02-25 at 09:49 +, Chris Wilson wrote:

On Wed, Feb 24, 2016 at 04:52:59PM +0900, Michel Dänzer wrote:

From: Michel Dänzer 

While present_pixmap decrements target_msc by 1 for
present_queue_vblank,
it leaves the original vblank->target_msc intact. So incrementing the
latter for requeueing resulted in the requeued presentation being
executed too late.

My mistake. Yes, the local target_msc is decremented but after
vblank->target_msc is assigned.


Also, no need to requeue if the target MSC is already reached.

This further reduces stutter when a popup menu appears on top of a
flipping fullscreen window.

Signed-off-by: Michel Dänzer 

Reviewed-by: Chris Wilson 
-Chris

Series merged:

remote: I: patch #74919 updated using rev 1a9f8c4623c4e6b6955cb6d5f44d29c244dfd32a.
remote: I: patch #74915 updated using rev e7a35b9e16aa12970908f5d55371bb1b862f8f24.
remote: I: patch #74910 updated using rev b4ac7b142fa3c536e9b283cfd34b94d82c03aac6.
remote: I: 3 patch(es) updated to state Accepted.
To ssh://git.freedesktop.org/git/xorg/xserver
 0461bca..b4ac7b1  master -> master



For some reason, this patch prevents kde from starting.


Does this still happen with 3b385105 ("present: Only requeue for next
MSC after flip failure")?


Sorry again for the delay.

I verified that this particular commit fixes it. Now I can run the 
upstream HEAD on both modesetting and the intel ddx.


Thanks!

Re: [PATCH xserver 00/13] GLX 1.4 cleanup and GLX_EXT_libglvnd

2016-03-30 Thread Adam Jackson
On Wed, 2016-03-30 at 13:11 +0100, Emil Velikov wrote:

> There are a few small suggestions, of which the "print a warning if the
> WGL extension is missing" (patches 5 & 7) and Eric's "use strdup over
> malloc/memset" (patch 13) are somewhat serious. With those addressed
> the series is
> Reviewed-by: Emil Velikov 
> 
> Would be great to hear from Jon about patches 5 & 7 though. I suspect
> that without the warnings he'll have lots of 'fun' moments debugging.

Merged this series, with v2 of 7 and 13, and more elaborate commit
message for 6:

remote: I: patch #77728 updated using rev 3a21da59e59cf11a9113d71e3431c4bd394ff1e8.
remote: I: patch #78130 updated using rev 410aec82556def5395f51299bcefbeb7d0bda604.
remote: I: patch #78127 updated using rev f95645c6f70019316f8ad77b7beb84530fc0505f.
remote: I: patch #78129 updated using rev b2ef7df476af619903ef7f6b6962b371ae14306c.
remote: I: patch #78125 updated using rev 9b2fc6d98691966f1c9186edad956f78c31f3698.
remote: I: patch #78124 updated using rev 15af78fc56569dc3b6a7f2c5a6a49edb602111b7.
remote: I: patch #78758 updated using rev 77bdaa1313aa55191b49ec73c1e377928ca294fe.
remote: I: patch #78128 updated using rev 2a72789ee8e88f612dff48ebe2ebe9fecda7a95d.
remote: I: patch #78135 updated using rev 36bcbf76dcc7e88cac093f8fb656c525bfeaf65d.
remote: I: patch #78131 updated using rev 23cce73221c0b96e7778da34616f8c3f4d6aa819.
remote: E: failed to find patch for rev e21de4bf3c5ff8cbb9c5ea023d04162e5e56b3df.
remote: I: patch #78132 updated using rev 2e8781ead3067b195baec2e76a28091575679383.
remote: I: patch #78750 updated using rev b08526eecf1e165ed9ec2e6b571a5a616a9b696e.
remote: I: 12 patch(es) updated to state Accepted.
To ssh://git.freedesktop.org/git/xorg/xserver
   44e1c97..b08526e  master -> master

- ajax

[PATCH xserver] glx: Use __glXInitExtensionEnableBits in all backends (v2)

2016-03-30 Thread Adam Jackson
On xquartz this enables SGI_make_current_read, which is a mostly
harmless lie as CGL doesn't implement it, as well as SGIX_pbuffer, which
is fine because no pbuffer-enabled configs are created.

On xwin this enables SGIX_pbuffer and ARB_multisample in all cases.
Again this is harmless if the backend doesn't support the features,
since no fbconfigs will be created to expose them.

It also adds SGIX_visual_select_group to both xquartz and xwin.
Amusingly, both were filling in the appropriate field in the fbconfig
already.

v2: Warn about missing WGL extensions (Emil)

Reviewed-by: Eric Anholt 
Reviewed-by: Emil Velikov 
Signed-off-by: Adam Jackson 
---
 hw/xquartz/GL/indirect.c | 12 +---
 hw/xwin/glx/indirect.c   | 29 +
 2 files changed, 10 insertions(+), 31 deletions(-)

diff --git a/hw/xquartz/GL/indirect.c b/hw/xquartz/GL/indirect.c
index 544cb78..398cdf1 100644
--- a/hw/xquartz/GL/indirect.c
+++ b/hw/xquartz/GL/indirect.c
@@ -545,22 +545,12 @@ __glXAquaScreenProbe(ScreenPtr pScreen)
 screen->base.fbconfigs = __glXAquaCreateVisualConfigs(
&screen->base.numFBConfigs, pScreen->myNum);
 
+__glXInitExtensionEnableBits(screen->glx_enable_bits);
 __glXScreenInit(&screen->base, pScreen);
 
 screen->base.GLXmajor = 1;
 screen->base.GLXminor = 4;
 
-memset(screen->glx_enable_bits, 0, __GLX_EXT_BYTES);
-
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_info");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_rating");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_import_context");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_OML_swap_method");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_fbconfig");
-
-__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIS_multisample");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_multisample");
-
 //__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_create_context");
 //__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_create_context_profile");
 
diff --git a/hw/xwin/glx/indirect.c b/hw/xwin/glx/indirect.c
index 7828b6c..cbbc113 100644
--- a/hw/xwin/glx/indirect.c
+++ b/hw/xwin/glx/indirect.c
@@ -642,17 +642,12 @@ glxWinScreenProbe(ScreenPtr pScreen)
 // Based on the WGL extensions available, enable various GLX extensions
 // XXX: make this table-driven ?
 //
-memset(screen->glx_enable_bits, 0, __GLX_EXT_BYTES);
-
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_info");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_rating");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_import_context");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_OML_swap_method");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_fbconfig");
-__glXEnableExtension(screen->glx_enable_bits, "GLX_SGI_make_current_read");
+__glXInitExtensionEnableBits(screen->glx_enable_bits);
 
 if (strstr(wgl_extensions, "WGL_ARB_make_current_read"))
 screen->has_WGL_ARB_make_current_read = TRUE;
+else
+LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_make_current_read\n")
 
 if (strstr(gl_extensions, "GL_WIN_swap_hint")) {
 __glXEnableExtension(screen->glx_enable_bits,
@@ -674,21 +669,15 @@ glxWinScreenProbe(ScreenPtr pScreen)
 /*   screen->has_WGL_ARB_render_texture = TRUE; */
 /* } */
 
-if (strstr(wgl_extensions, "WGL_ARB_pbuffer")) {
-__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_pbuffer");
-LogMessage(X_INFO, "AIGLX: enabled GLX_SGIX_pbuffer\n");
+if (strstr(wgl_extensions, "WGL_ARB_pbuffer"))
 screen->has_WGL_ARB_pbuffer = TRUE;
-}
+else
+LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_pbuffer\n")
 
-if (strstr(wgl_extensions, "WGL_ARB_multisample")) {
-__glXEnableExtension(screen->glx_enable_bits,
- "GLX_ARB_multisample");
-__glXEnableExtension(screen->glx_enable_bits,
- "GLX_SGIS_multisample");
-LogMessage(X_INFO,
-   "AIGLX: enabled GLX_ARB_multisample and 
GLX_SGIS_multisample\n");
+if (strstr(wgl_extensions, "WGL_ARB_multisample"))
 screen->has_WGL_ARB_multisample = TRUE;
-}
+else
+LogMessage(X_WARNING, "AIGLX: missing WGL_ARB_multisample\n")
 
 screen->base.destroy = glxWinScreenDestroy;
 screen->base.createContext = glxWinCreateContext;
-- 
2.5.0


[PATCH xserver 13/13] glx: Implement GLX_EXT_libglvnd (v2)

2016-03-30 Thread Adam Jackson
For the dri2 backend, we depend on xfree86 already, so we can walk the
options for the screen looking for a vendor string from xorg.conf.  For
the swrast backend we don't have that luxury, so just say mesa.  This
extension isn't really meaningful on Windows or OSX yet (since libglvnd
isn't really functional there yet), so on those platforms we don't say
anything and return BadValue for the token from QueryServerString.
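
For illustration, enabling the override could look like this in xorg.conf;
the identifier and driver values here are placeholders, and only the option
name is the one added by this patch:

Section "Device"
    Identifier "Card0"
    Driver "modesetting"
    Option "GlxVendorLibrary" "mesa"
EndSection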

v2: Use xnf* allocators when parsing options (Eric and Emil)

Reviewed-by: Eric Anholt 
Reviewed-by: Emil Velikov 
Signed-off-by: Adam Jackson 
---
 glx/extension_string.c   |  1 +
 glx/extension_string.h   |  1 +
 glx/glxcmds.c| 10 ++
 glx/glxdri2.c| 22 ++
 glx/glxdriswrast.c   |  3 +++
 glx/glxscreens.c |  4 
 glx/glxscreens.h |  1 +
 hw/xfree86/man/xorg.conf.man |  6 ++
 8 files changed, 48 insertions(+)

diff --git a/glx/extension_string.c b/glx/extension_string.c
index 7e74090..d1da481 100644
--- a/glx/extension_string.c
+++ b/glx/extension_string.c
@@ -85,6 +85,7 @@ static const struct extension_info known_glx_extensions[] = {
 { GLX(EXT_fbconfig_packed_float),   VER(0,0), N, },
 { GLX(EXT_framebuffer_sRGB),VER(0,0), N, },
 { GLX(EXT_import_context),  VER(0,0), Y, },
+{ GLX(EXT_libglvnd),VER(0,0), N, },
 { GLX(EXT_stereo_tree), VER(0,0), N, },
 { GLX(EXT_texture_from_pixmap), VER(0,0), N, },
 { GLX(EXT_visual_info), VER(0,0), Y, },
diff --git a/glx/extension_string.h b/glx/extension_string.h
index 425a805..a10d710 100644
--- a/glx/extension_string.h
+++ b/glx/extension_string.h
@@ -47,6 +47,7 @@ enum {
 EXT_create_context_es2_profile_bit,
 EXT_fbconfig_packed_float_bit,
 EXT_import_context_bit,
+EXT_libglvnd_bit,
 EXT_stereo_tree_bit,
 EXT_texture_from_pixmap_bit,
 EXT_visual_info_bit,
diff --git a/glx/glxcmds.c b/glx/glxcmds.c
index 4693e68..0f0b714 100644
--- a/glx/glxcmds.c
+++ b/glx/glxcmds.c
@@ -2444,6 +2444,10 @@ __glXDisp_QueryExtensionsString(__GLXclientState * cl, GLbyte * pc)
 return Success;
 }
 
+#ifndef GLX_VENDOR_NAMES_EXT
+#define GLX_VENDOR_NAMES_EXT 0x20F6
+#endif
+
 int
 __glXDisp_QueryServerString(__GLXclientState * cl, GLbyte * pc)
 {
@@ -2471,6 +2475,12 @@ __glXDisp_QueryServerString(__GLXclientState * cl, GLbyte * pc)
 case GLX_EXTENSIONS:
 ptr = pGlxScreen->GLXextensions;
 break;
+case GLX_VENDOR_NAMES_EXT:
+if (pGlxScreen->glvnd) {
+ptr = pGlxScreen->glvnd;
+break;
+}
+/* else fall through */
 default:
 return BadValue;
 }
diff --git a/glx/glxdri2.c b/glx/glxdri2.c
index 15253d1..d1fc3f9 100644
--- a/glx/glxdri2.c
+++ b/glx/glxdri2.c
@@ -934,12 +934,23 @@ initializeExtensions(__GLXscreen * screen)
 /* white lie */
 extern glx_func_ptr glXGetProcAddressARB(const char *);
 
+enum {
+GLXOPT_VENDOR_LIBRARY,
+};
+
+static const OptionInfoRec GLXOptions[] = {
+{ GLXOPT_VENDOR_LIBRARY, "GlxVendorLibrary", OPTV_STRING, {0}, FALSE },
+{ -1, NULL, OPTV_NONE, {0}, FALSE },
+};
+
 static __GLXscreen *
 __glXDRIscreenProbe(ScreenPtr pScreen)
 {
 const char *driverName, *deviceName;
 __GLXDRIscreen *screen;
 ScrnInfoPtr pScrn = xf86ScreenToScrn(pScreen);
+const char *glvnd = NULL;
+OptionInfoPtr options;
 
 screen = calloc(1, sizeof *screen);
 if (screen == NULL)
@@ -985,6 +996,17 @@ __glXDRIscreenProbe(ScreenPtr pScreen)
GLX_PIXMAP_BIT |
GLX_PBUFFER_BIT);
 
+options = xnfalloc(sizeof(GLXOptions));
+memcpy(options, GLXOptions, sizeof(GLXOptions));
+xf86ProcessOptions(pScrn->scrnIndex, pScrn->options, options);
+glvnd = xf86GetOptValString(options, GLXOPT_VENDOR_LIBRARY);
+if (glvnd)
+screen->base.glvnd = xnfstrdup(glvnd);
+free(options);
+
+if (!screen->base.glvnd)
+screen->base.glvnd = strdup("mesa");
+
 __glXScreenInit(&screen->base, pScreen);
 
 screen->enterVT = pScrn->EnterVT;
diff --git a/glx/glxdriswrast.c b/glx/glxdriswrast.c
index 0b5122f..1e46d97 100644
--- a/glx/glxdriswrast.c
+++ b/glx/glxdriswrast.c
@@ -487,6 +487,9 @@ __glXDRIscreenProbe(ScreenPtr pScreen)
GLX_PIXMAP_BIT |
GLX_PBUFFER_BIT);
 
+#if !defined(XQUARTZ) && !defined(WIN32)
+screen->base.glvnd = strdup("mesa");
+#endif
 __glXScreenInit(&screen->base, pScreen);
 
 __glXsetGetProcAddress(glXGetProcAddressARB);
diff --git a/glx/glxscreens.c b/glx/glxscreens.c
index 7e083cf..536c0c4 100644
--- a/glx/glxscreens.c
+++ b/glx/glxscreens.c
@@ -384,6 +384,9 @@ __glXScreenInit(__GLXscreen * pGlxScreen, ScreenPtr pScreen)
 
 dixSetPrivate(&pScreen->devPrivates, 

Re: [PATCH xserver 06/13] glx: Enable GLX_SGI_make_current_read in the core

2016-03-30 Thread Emil Velikov
On 30 March 2016 at 14:58, Adam Jackson  wrote:
> On Wed, 2016-03-30 at 12:12 +0100, Emil Velikov wrote:
>> On 23 March 2016 at 22:46, Adam Jackson  wrote:
>> > diff --git a/glx/glxdri2.c b/glx/glxdri2.c
>> > index c56a376..71dab2a 100644
>> > --- a/glx/glxdri2.c
>> > +++ b/glx/glxdri2.c
>> > @@ -900,13 +900,6 @@ initializeExtensions(__GLXDRIscreen * screen)
>> >  }
>> >
>> >  for (i = 0; extensions[i]; i++) {
>> > -if (strcmp(extensions[i]->name, __DRI_READ_DRAWABLE) == 0) {
>> > -__glXEnableExtension(screen->glx_enable_bits,
>> > - "GLX_SGI_make_current_read");
>> > -
>> > -LogMessage(X_INFO, "AIGLX: enabled 
>> > GLX_SGI_make_current_read\n");
>> > -}
>> > -
>> Afaics we never had a DRI2 based dri module that provided this
>> extension. Which brings the question if this has ever been tested and
>> confirmed working. Can we have a small note about this in the commit
>> message ?
>
> Not quite. I am dismayed to report that piglit does not appear to
> contain any tests for separate drawable and readable, so whether this
> has been tested, who knows. And you are correct (and I'm somewhat
> surprised to discover) that Mesa currently does not expose this
> extension. But it used to:
>
> https://cgit.freedesktop.org/mesa/mesa/commit/?id=ad3221587164c10ae16d85db514484b717cabc6f
>
Must have butchered something... the local tree does not go that far in history.

> GLX 1.3 implies equivalent functionality, spelled glXMakeContextCurrent
> instead of glXMakeCurrentReadSGI. The dispatch for both the extension
> and 1.3 versions of the interface has remained wired up in xserver, and
> note that the bindContext call down to the driver _always_ takes both
> drawable and readable arguments regardless of which Make*Current*
> request triggered it.
>
> So I don't think this is making anything any worse.
>
Definitely - make_context_current (and bindContext) have been using
separate readable/drawable since forever. Can you please add some of
the above explanation in the commit history - for posterity.

Thanks
Emil

Re: [PATCH xserver 07/13] glx: Use __glXInitExtensionEnableBits in all backends

2016-03-30 Thread Adam Jackson
On Wed, 2016-03-30 at 12:48 +0100, Emil Velikov wrote:

> Similar to patch 5 - we assume that things will be fine if the base
> implementation lacks the WGL extensions. As this can cause the
> occasional wtf moment - wouldn't it be better to have a big fat
> warning for each of these (three so far) missing extensions alongside
> the proposed behaviour ?

Excluding make_current_read I don't think it can make much difference.
To trigger a failure mode here you'd need an application that already
works _without_ pbuffers or multisampling, that optionally supports
them, but that will fail fatally if the extension string is found but
not a matching fbconfig (instead of just falling back to the mode in
which it already works). Apps are dumb, but I've yet to find one _that_
dumb.

> Jon, any ideas how (un)common it is for WGL_ARB_make_current_read,
> WGL_ARB_pbuffer and/or WGL_ARB_multisample to be missing ?

I had this exact question! Some kind soul on #dri-devel (I forget who,
but thank you) pointed me to:

http://opengl.gpuinfo.org/gl_extensions.php

It's not the best search interface, but we can draw some conclusions
anyway. To a first approximation, all WGL implementations supply
WGL_ARB_extensions_string (as otherwise you can't query for extensions
at all); 1036 reports in that db support it, and 201 do not. Of those
201, there are two that are WGL (and not GLES) implementations anyway:
the GDI Generic implementation included with Windows (which isn't
accelerated, so we wouldn't get here anyway) and VirtualBox's chromium-
based driver. The same set is the set of drivers not supporting
WGL_ARB_pbuffer.

WGL_ARB_make_current_read is unsupported by 207 reports. Beyond the
above, the affected drivers are: vmware with what looks to be llvmpipe
(for shame!), Intel GMA5/600, PowerVR SGX545, Matrox M-series, and XGI
Volari.

WGL_ARB_multisample is unsupported by 252 reports. Beyond the non-
pbuffer set this seems to be true of R200-era radeons, geforce 4 and
below, and pretty much any Intel chip in compatibility contexts.

- ajax

Re: [PATCH xserver 06/13] glx: Enable GLX_SGI_make_current_read in the core

2016-03-30 Thread Adam Jackson
On Wed, 2016-03-30 at 12:12 +0100, Emil Velikov wrote:
> On 23 March 2016 at 22:46, Adam Jackson  wrote:
> > diff --git a/glx/glxdri2.c b/glx/glxdri2.c
> > index c56a376..71dab2a 100644
> > --- a/glx/glxdri2.c
> > +++ b/glx/glxdri2.c
> > @@ -900,13 +900,6 @@ initializeExtensions(__GLXDRIscreen * screen)
> >  }
> > 
> >  for (i = 0; extensions[i]; i++) {
> > -if (strcmp(extensions[i]->name, __DRI_READ_DRAWABLE) == 0) {
> > -__glXEnableExtension(screen->glx_enable_bits,
> > - "GLX_SGI_make_current_read");
> > -
> > -LogMessage(X_INFO, "AIGLX: enabled GLX_SGI_make_current_read\n");
> > -}
> > -
> Afaics we never had a DRI2 based dri module that provided this
> extension. Which brings the question if this has ever been tested and
> confirmed working. Can we have a small note about this in the commit
> message ?

Not quite. I am dismayed to report that piglit does not appear to
contain any tests for separate drawable and readable, so whether this
has been tested, who knows. And you are correct (and I'm somewhat
surprised to discover) that Mesa currently does not expose this
extension. But it used to:

https://cgit.freedesktop.org/mesa/mesa/commit/?id=ad3221587164c10ae16d85db514484b717cabc6f

GLX 1.3 implies equivalent functionality, spelled glXMakeContextCurrent
instead of glXMakeCurrentReadSGI. The dispatch for both the extension
and 1.3 versions of the interface has remained wired up in xserver, and
note that the bindContext call down to the driver _always_ takes both
drawable and readable arguments regardless of which Make*Current*
request triggered it.
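
As a client-side sketch of that equivalence (the variables stand in for an
existing display, drawables and context):

#include <GL/glx.h>

/* The GLX 1.3 spelling of separate draw/read drawables, equivalent in
 * effect to glXMakeCurrentReadSGI. */
static Bool
bind_with_separate_readable(Display *dpy, GLXDrawable draw,
                            GLXDrawable read, GLXContext ctx)
{
    /* The server-side bindContext sees both drawables no matter which
     * client entry point was used. */
    return glXMakeContextCurrent(dpy, draw, read, ctx);
}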

So I don't think this is making anything any worse.

- ajax

Re: [PATCH xf86-video-amdgpu] DRI3: Refuse to open DRM file descriptor for ssh clients (v2)

2016-03-30 Thread Alex Deucher
On Wed, Mar 30, 2016 at 5:36 AM, Michel Dänzer  wrote:
> From: Michel Dänzer 
>
> Fixes hangs when attempting to use DRI3 on display connections forwarded
> via SSH.
>
> Don't do this for Xorg > 1.18.99.1 since the corresponding xserver
> change has landed in Git master.
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93261
>
> (Ported from radeon commit 0b3aac1de9db42bfca545fa331e4985836682ec7)
>
> Signed-off-by: Michel Dänzer 

Reviewed-by: Alex Deucher 

> ---
>  src/amdgpu_dri3.c | 39 ++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
>
> diff --git a/src/amdgpu_dri3.c b/src/amdgpu_dri3.c
> index 06d0668..c3042e7 100644
> --- a/src/amdgpu_dri3.c
> +++ b/src/amdgpu_dri3.c
> @@ -38,6 +38,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>
>
>  static int
> @@ -87,6 +88,38 @@ amdgpu_dri3_open(ScreenPtr screen, RRProviderPtr provider, int *out)
> return Success;
>  }
>
> +#if DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0)
> +
> +static int
> +amdgpu_dri3_open_client(ClientPtr client, ScreenPtr screen,
> +   RRProviderPtr provider, int *out)
> +{
> +   const char *cmdname = GetClientCmdName(client);
> +   Bool is_ssh = FALSE;
> +
> +   /* If the executable name is "ssh", assume that this client connection
> +* is forwarded from another host via SSH
> +*/
> +   if (cmdname) {
> +   char *cmd = strdup(cmdname);
> +
> +   /* Cut off any colon and whatever comes after it, see
> +* https://lists.freedesktop.org/archives/xorg-devel/2015-December/048164.html
> +*/
> +   cmd = strtok(cmd, ":");
> +
> +   is_ssh = strcmp(basename(cmd), "ssh") == 0;
> +   free(cmd);
> +   }
> +
> +   if (!is_ssh)
> +   return amdgpu_dri3_open(screen, provider, out);
> +
> +   return BadAccess;
> +}
> +
> +#endif /* DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0) */
> +
>  static PixmapPtr amdgpu_dri3_pixmap_from_fd(ScreenPtr screen,
> int fd,
> CARD16 width,
> @@ -172,9 +205,13 @@ static int amdgpu_dri3_fd_from_pixmap(ScreenPtr screen,
>  }
>
>  static dri3_screen_info_rec amdgpu_dri3_screen_info = {
> +#if DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0)
> +   .version = 1,
> +   .open_client = amdgpu_dri3_open_client,
> +#else
> .version = 0,
> -
> .open = amdgpu_dri3_open,
> +#endif
> .pixmap_from_fd = amdgpu_dri3_pixmap_from_fd,
> .fd_from_pixmap = amdgpu_dri3_fd_from_pixmap
>  };
> --
> 2.8.0.rc3
>


Re: [PATCH xserver 07/13] glx: Use __glXInitExtensionEnableBits in all backends

2016-03-30 Thread Jon Turney

On 30/03/2016 12:48, Emil Velikov wrote:

On 23 March 2016 at 22:46, Adam Jackson wrote:

On xquartz this enables SGI_make_current_read, which is a mostly
harmless lie as CGL doesn't implement it, as well as SGIX_pbuffer, which
is fine because no pbuffer-enabled configs are created.

On xwin this enables SGIX_pbuffer and ARB_multisample in all cases.
Again this is harmless if the backend doesn't support the features,
since no fbconfigs will be created to expose them.

It also adds SGIX_visual_select_group to both xquartz and xwin.
Amusingly, both were filling in the appropriate field in the fbconfig
already.

[...]

Similar to patch 5 - we assume that things will be fine if the base
implementation lacks the WGL extensions. As this can cause the
occasional wtf moment - wouldn't it be better to have a big fat
warning for each of these (three so far) missing extensions alongside
the proposed behaviour ?

Jon, any ideas how (un)common it is for WGL_ARB_make_current_read,
WGL_ARB_pbuffer and/or WGL_ARB_multisample to be missing ?


Very unlikely, I think.

The only common case would be where the VGA driver is in use and the 
'GDI generic' software renderer is offered, which is explicitly 
blacklisted anyhow (so glxWinScreenProbe() fails and we fall back to 
swrast).


I guess some logging here that these expected extensions aren't present 
might be useful.


Re: [PATCH xserver 00/13] GLX 1.4 cleanup and GLX_EXT_libglvnd

2016-03-30 Thread Emil Velikov
Hi Adam,

On 23 March 2016 at 22:46, Adam Jackson  wrote:
> The previous series didn't quite do what I expected, because xwin and
> xquartz didn't set the extension enable bits the same way as DRI. This
> series moves the enable bits into the common GLX screen data, fixes the
> default enable state to apply to all backends, and moves more of the
> setup boilerplate code into the core.
>
> The last patch finishes support for GLX_EXT_libglvnd by returning a
> vendor library string in glXQueryServerString. This is to enable libglvnd
> to select the correct implementation for applications that address multiple
> GL screens (or displays). At the moment this extension is only exposed for
> non-OSX and non-Windows builds, as libglvnd really isn't functional there.
> The swrast backend simply hardcodes the vendor to mesa. The DRI2 backend
> allows you to override the vendor string with Option "GlxVendorLibrary"
> in xorg.conf, in either the Device or Screen sections.
>
There are a few small suggestions, of which the "print a warning if the
WGL extension is missing" (patches 5 & 7) and Eric's "use strdup over
malloc/memset" (patch 13) are somewhat serious. With those addressed
the series is
Reviewed-by: Emil Velikov 

Would be great to hear from Jon about patches 5 & 7 though. I suspect
that without the warnings he'll have lots of 'fun' moments debugging.

-Emil

Re: [PATCH xserver 01/13] glx: Remove default server glx extension string

2016-03-30 Thread Emil Velikov
On 30 March 2016 at 12:38, Emil Velikov  wrote:
> On 23 March 2016 at 22:46, Adam Jackson  wrote:
>
>> --- a/hw/xquartz/GL/indirect.c
>> +++ b/hw/xquartz/GL/indirect.c
>> @@ -566,8 +566,6 @@ __glXAquaScreenProbe(ScreenPtr pScreen)
>>  unsigned int buffer_size =
>>  __glXGetExtensionString(screen->glx_enable_bits, NULL);
>>  if (buffer_size > 0) {
>> -free(screen->base.GLXextensions);
>> -
>>  screen->base.GLXextensions = xnfalloc(buffer_size);
>>  __glXGetExtensionString(screen->glx_enable_bits,
>>  screen->base.GLXextensions);
>> diff --git a/hw/xwin/glx/indirect.c b/hw/xwin/glx/indirect.c
>> index e4be642..e515d18 100644
>> --- a/hw/xwin/glx/indirect.c
>> +++ b/hw/xwin/glx/indirect.c
>> @@ -743,8 +743,6 @@ glxWinScreenProbe(ScreenPtr pScreen)
>>  unsigned int buffer_size =
>>  __glXGetExtensionString(screen->glx_enable_bits, NULL);
>>  if (buffer_size > 0) {
>> -free(screen->base.GLXextensions);
>> -
>
> These two have a comment "(overrides that set by __glXScreenInit())"
> just above the hunk that is free to go now.
>
The whole hunk is getting removed by a later commit, so there isn't
much use of respinning things for such trivialities :-)

-Emil

Re: [PATCH xserver 12/13] glx: Compute the GLX extension string from __glXScreenInit

2016-03-30 Thread Emil Velikov
On 23 March 2016 at 22:46, Adam Jackson  wrote:

> --- a/glx/glxscreens.c
> +++ b/glx/glxscreens.c
> @@ -383,6 +383,14 @@ __glXScreenInit(__GLXscreen * pGlxScreen, ScreenPtr pScreen)
>  }
>
>  dixSetPrivate(&pScreen->devPrivates, glxScreenPrivateKey, pGlxScreen);
> +
> +i = __glXGetExtensionString(pGlxScreen->glx_enable_bits, NULL);
> +if (i > 0) {
> +pGlxScreen->GLXextensions = xnfalloc(i);
> +(void) __glXGetExtensionString(pGlxScreen->glx_enable_bits,
> +   pGlxScreen->GLXextensions);
> +}
> +
Better to keep this hunk just after the NULL initialization of
pGlxScreen->GLXextensions ?

>  }
>
>  void
> diff --git a/hw/xquartz/GL/indirect.c b/hw/xquartz/GL/indirect.c
> index 9eaeb94..2d88ef2 100644
> --- a/hw/xquartz/GL/indirect.c
> +++ b/hw/xquartz/GL/indirect.c
> @@ -542,20 +542,6 @@ __glXAquaScreenProbe(ScreenPtr pScreen)
>  __glXInitExtensionEnableBits(screen->base.glx_enable_bits);
>  __glXScreenInit(&screen->base, pScreen);
>
> -//__glXEnableExtension(screen->base.glx_enable_bits, "GLX_ARB_create_context");
> -//__glXEnableExtension(screen->base.glx_enable_bits, "GLX_ARB_create_context_profile");
> -
Not sure what the intent behind these was, so one might as well move
them before the __glXScreenInit() call. Just like the xwin backend.

-Emil

Re: [PATCH xserver 07/13] glx: Use __glXInitExtensionEnableBits in all backends

2016-03-30 Thread Emil Velikov
On 23 March 2016 at 22:46, Adam Jackson  wrote:
> On xquartz this enables SGI_make_current_read, which is a mostly
> harmless lie as CGL doesn't implement it, as well as SGIX_pbuffer, which
> is fine because no pbuffer-enabled configs are created.
>
> On xwin this enables SGIX_pbuffer and ARB_multisample in all cases.
> Again this is harmless if the backend doesn't support the features,
> since no fbconfigs will be created to expose them.
>
> It also adds SGIX_visual_select_group to both xquartz and xwin.
> Amusingly, both were filling in the appropriate field in the fbconfig
> already.
>
> Signed-off-by: Adam Jackson 
> ---
>  hw/xquartz/GL/indirect.c | 12 +---
>  hw/xwin/glx/indirect.c   | 23 +++
>  2 files changed, 4 insertions(+), 31 deletions(-)
>
> diff --git a/hw/xquartz/GL/indirect.c b/hw/xquartz/GL/indirect.c
> index 544cb78..398cdf1 100644
> --- a/hw/xquartz/GL/indirect.c
> +++ b/hw/xquartz/GL/indirect.c
> @@ -545,22 +545,12 @@ __glXAquaScreenProbe(ScreenPtr pScreen)
>  screen->base.fbconfigs = __glXAquaCreateVisualConfigs(
>  &screen->base.numFBConfigs, pScreen->myNum);
>
> +__glXInitExtensionEnableBits(screen->glx_enable_bits);
>  __glXScreenInit(&screen->base, pScreen);
>
>  screen->base.GLXmajor = 1;
>  screen->base.GLXminor = 4;
>
> -memset(screen->glx_enable_bits, 0, __GLX_EXT_BYTES);
> -
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_info");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_rating");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_import_context");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_OML_swap_method");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_fbconfig");
> -
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIS_multisample");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_multisample");
> -
>  //__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_create_context");
>  //__glXEnableExtension(screen->glx_enable_bits, "GLX_ARB_create_context_profile");
>
> diff --git a/hw/xwin/glx/indirect.c b/hw/xwin/glx/indirect.c
> index 7828b6c..cf51c9b 100644
> --- a/hw/xwin/glx/indirect.c
> +++ b/hw/xwin/glx/indirect.c
> @@ -642,14 +642,7 @@ glxWinScreenProbe(ScreenPtr pScreen)
>  // Based on the WGL extensions available, enable various GLX extensions
>  // XXX: make this table-driven ?
>  //
> -memset(screen->glx_enable_bits, 0, __GLX_EXT_BYTES);
> -
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_info");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_visual_rating");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_EXT_import_context");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_OML_swap_method");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_fbconfig");
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_SGI_make_current_read");
> +__glXInitExtensionEnableBits(screen->glx_enable_bits);
>
>  if (strstr(wgl_extensions, "WGL_ARB_make_current_read"))
>  screen->has_WGL_ARB_make_current_read = TRUE;
> @@ -674,21 +667,11 @@ glxWinScreenProbe(ScreenPtr pScreen)
>  /*   screen->has_WGL_ARB_render_texture = TRUE; */
>  /* } */
>
> -if (strstr(wgl_extensions, "WGL_ARB_pbuffer")) {
> -__glXEnableExtension(screen->glx_enable_bits, "GLX_SGIX_pbuffer");
> -LogMessage(X_INFO, "AIGLX: enabled GLX_SGIX_pbuffer\n");
> +if (strstr(wgl_extensions, "WGL_ARB_pbuffer"))
>  screen->has_WGL_ARB_pbuffer = TRUE;
> -}
>
> -if (strstr(wgl_extensions, "WGL_ARB_multisample")) {
> -__glXEnableExtension(screen->glx_enable_bits,
> - "GLX_ARB_multisample");
> -__glXEnableExtension(screen->glx_enable_bits,
> - "GLX_SGIS_multisample");
> -LogMessage(X_INFO,
> -   "AIGLX: enabled GLX_ARB_multisample and 
> GLX_SGIS_multisample\n");
> +if (strstr(wgl_extensions, "WGL_ARB_multisample"))
>  screen->has_WGL_ARB_multisample = TRUE;
> -}
Similar to patch 5 - we assume that things will be fine if the base
implementation lacks the WGL extensions. As this can cause the
occasional wtf moment - wouldn't it be better to have a big fat
warning for each of these (three so far) missing extensions alongside
the proposed behaviour ?

Jon, any ideas how (un)common it is for WGL_ARB_make_current_read,
WGL_ARB_pbuffer and/or WGL_ARB_multisample to be missing ?

-Emil

Re: [PATCH xserver 01/13] glx: Remove default server glx extension string

2016-03-30 Thread Emil Velikov
On 23 March 2016 at 22:46, Adam Jackson  wrote:

> --- a/hw/xquartz/GL/indirect.c
> +++ b/hw/xquartz/GL/indirect.c
> @@ -566,8 +566,6 @@ __glXAquaScreenProbe(ScreenPtr pScreen)
>  unsigned int buffer_size =
>  __glXGetExtensionString(screen->glx_enable_bits, NULL);
>  if (buffer_size > 0) {
> -free(screen->base.GLXextensions);
> -
>  screen->base.GLXextensions = xnfalloc(buffer_size);
>  __glXGetExtensionString(screen->glx_enable_bits,
>  screen->base.GLXextensions);
> diff --git a/hw/xwin/glx/indirect.c b/hw/xwin/glx/indirect.c
> index e4be642..e515d18 100644
> --- a/hw/xwin/glx/indirect.c
> +++ b/hw/xwin/glx/indirect.c
> @@ -743,8 +743,6 @@ glxWinScreenProbe(ScreenPtr pScreen)
>  unsigned int buffer_size =
>  __glXGetExtensionString(screen->glx_enable_bits, NULL);
>  if (buffer_size > 0) {
> -free(screen->base.GLXextensions);
> -

These two have a comment "(overrides that set by __glXScreenInit())"
just above the hunk that is free to go now.

-Emil

Re: [PATCH xserver 06/13] glx: Enable GLX_SGI_make_current_read in the core

2016-03-30 Thread Emil Velikov
On 23 March 2016 at 22:46, Adam Jackson  wrote:
> Signed-off-by: Adam Jackson 
> ---
>  glx/extension_string.c | 2 +-
>  glx/glxdri2.c  | 7 ---
>  glx/glxdriswrast.c | 1 -
>  3 files changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/glx/extension_string.c b/glx/extension_string.c
> index 30c3416..7e74090 100644
> --- a/glx/extension_string.c
> +++ b/glx/extension_string.c
> @@ -92,7 +92,7 @@ static const struct extension_info known_glx_extensions[] = {
>
>  { GLX(MESA_copy_sub_buffer),VER(0,0), N, },
>  { GLX(OML_swap_method), VER(0,0), Y, },
> -{ GLX(SGI_make_current_read),   VER(1,3), N, },
> +{ GLX(SGI_make_current_read),   VER(1,3), Y, },
>  { GLX(SGI_swap_control),VER(0,0), N, },
>  { GLX(SGIS_multisample),VER(0,0), Y, },
>  { GLX(SGIX_fbconfig),   VER(1,3), Y, },
> diff --git a/glx/glxdri2.c b/glx/glxdri2.c
> index c56a376..71dab2a 100644
> --- a/glx/glxdri2.c
> +++ b/glx/glxdri2.c
> @@ -900,13 +900,6 @@ initializeExtensions(__GLXDRIscreen * screen)
>  }
>
>  for (i = 0; extensions[i]; i++) {
> -if (strcmp(extensions[i]->name, __DRI_READ_DRAWABLE) == 0) {
> -__glXEnableExtension(screen->glx_enable_bits,
> - "GLX_SGI_make_current_read");
> -
> -LogMessage(X_INFO, "AIGLX: enabled GLX_SGI_make_current_read\n");
> -}
> -
Afaics we never had a DRI2 based dri module that provided this
extension. Which brings the question if this has ever been tested and
confirmed working. Can we have a small note about this in the commit
message ?

-Emil

Re: 1.18.3 call for patches

2016-03-30 Thread Michel Dänzer
On 30.03.2016 05:48, Adam Jackson wrote:
> I've pushed a few high-priority patches to the 1.18 branch. If you
> don't see your favorite there, or if you think any of them look
> suspect, please speak up soon. I plan push out a 1.18.3 release this
> Friday if I don't hear otherwise.

Looks good to me, thanks for doing this.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer

[PATCH xf86-video-amdgpu] DRI3: Refuse to open DRM file descriptor for ssh clients (v2)

2016-03-30 Thread Michel Dänzer
From: Michel Dänzer 

Fixes hangs when attempting to use DRI3 on display connections forwarded
via SSH.

Don't do this for Xorg > 1.18.99.1 since the corresponding xserver
change has landed in Git master.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93261

(Ported from radeon commit 0b3aac1de9db42bfca545fa331e4985836682ec7)

Signed-off-by: Michel Dänzer 
---
 src/amdgpu_dri3.c | 39 ++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/src/amdgpu_dri3.c b/src/amdgpu_dri3.c
index 06d0668..c3042e7 100644
--- a/src/amdgpu_dri3.c
+++ b/src/amdgpu_dri3.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 
 
 static int
@@ -87,6 +88,38 @@ amdgpu_dri3_open(ScreenPtr screen, RRProviderPtr provider, int *out)
return Success;
 }
 
+#if DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0)
+
+static int
+amdgpu_dri3_open_client(ClientPtr client, ScreenPtr screen,
+   RRProviderPtr provider, int *out)
+{
+   const char *cmdname = GetClientCmdName(client);
+   Bool is_ssh = FALSE;
+
+   /* If the executable name is "ssh", assume that this client connection
+* is forwarded from another host via SSH
+*/
+   if (cmdname) {
+   char *cmd = strdup(cmdname);
+
+   /* Cut off any colon and whatever comes after it, see
+* https://lists.freedesktop.org/archives/xorg-devel/2015-December/048164.html
+*/
+   cmd = strtok(cmd, ":");
+
+   is_ssh = strcmp(basename(cmd), "ssh") == 0;
+   free(cmd);
+   }
+
+   if (!is_ssh)
+   return amdgpu_dri3_open(screen, provider, out);
+
+   return BadAccess;
+}
+
+#endif /* DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0) */
+
 static PixmapPtr amdgpu_dri3_pixmap_from_fd(ScreenPtr screen,
int fd,
CARD16 width,
@@ -172,9 +205,13 @@ static int amdgpu_dri3_fd_from_pixmap(ScreenPtr screen,
 }
 
 static dri3_screen_info_rec amdgpu_dri3_screen_info = {
+#if DRI3_SCREEN_INFO_VERSION >= 1 && XORG_VERSION_CURRENT <= XORG_VERSION_NUMERIC(1,18,99,1,0)
+   .version = 1,
+   .open_client = amdgpu_dri3_open_client,
+#else
.version = 0,
-
.open = amdgpu_dri3_open,
+#endif
.pixmap_from_fd = amdgpu_dri3_pixmap_from_fd,
.fd_from_pixmap = amdgpu_dri3_fd_from_pixmap
 };
-- 
2.8.0.rc3
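
For reference, the client-name test above can be exercised in isolation.
A minimal standalone sketch (hypothetical test harness, not part of the
patch; note it keeps the pointer returned by strdup() so that free() is
handed the original allocation, which strtok() is not guaranteed to
return):

#include <libgen.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Mirrors the is_ssh test in amdgpu_dri3_open_client() above. */
static int cmdname_is_ssh(const char *cmdname)
{
    char *cmd = strdup(cmdname); /* writable copy for strtok()/basename() */
    char *tok;
    int is_ssh = 0;

    if (!cmd)
        return 0;

    /* Cut off any colon and whatever comes after it, e.g.
     * "/usr/bin/ssh: user@host" becomes "/usr/bin/ssh". */
    tok = strtok(cmd, ":");
    if (tok)
        is_ssh = strcmp(basename(tok), "ssh") == 0;

    free(cmd); /* free the original allocation, not strtok()'s return */
    return is_ssh;
}

int main(void)
{
    printf("%d\n", cmdname_is_ssh("/usr/bin/ssh: user@host")); /* 1 */
    printf("%d\n", cmdname_is_ssh("glxgears"));                /* 0 */
    return 0;
}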

___
xorg-driver-ati mailing list
xorg-driver-ati@lists.x.org
https://lists.x.org/mailman/listinfo/xorg-driver-ati


[PATCH xserver] os: Use strtok instead of xstrtokenize in ComputeLocalClient

2016-03-30 Thread Michel Dänzer
From: Michel Dänzer 

Fixes leaking the memory pointed to by the members of the array returned
by xstrtokenize.

Signed-off-by: Michel Dänzer 
---
 os/access.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/os/access.c b/os/access.c
index 58f95a9..8828e08 100644
--- a/os/access.c
+++ b/os/access.c
@@ -1131,19 +1131,20 @@ ComputeLocalClient(ClientPtr client)
  * is forwarded from another host via SSH
  */
 if (cmdname) {
-char **cmd;
+char *cmd = strdup(cmdname);
 Bool ret;
 
 /* Cut off any colon and whatever comes after it, see
  * https://lists.freedesktop.org/archives/xorg-devel/2015-December/048164.html
  */
-cmd = xstrtokenize(cmdname, ":");
+cmd = strtok(cmd, ":");
 
 #if !defined(WIN32) || defined(__CYGWIN__)
-cmd[0] = basename(cmd[0]);
+ret = strcmp(basename(cmd), "ssh") != 0;
+#else
+ret = strcmp(cmd, "ssh") != 0;
 #endif
 
-ret = strcmp(cmd[0], "ssh") != 0;
 free(cmd);
 
 return ret;
-- 
2.8.0.rc3
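
A note on the leak being fixed: xstrtokenize() returns a NULL-terminated,
malloc'd array in which every element is itself malloc'd, so freeing only
the array leaked each token. A sketch of the full teardown such an array
would need (free_token_list() is illustrative, not an existing server
helper):

#include <stdlib.h>

/* Free every element of a NULL-terminated array of malloc'd strings,
 * then the array itself. */
static void free_token_list(char **list)
{
    char **p;

    if (!list)
        return;
    for (p = list; *p; p++)
        free(*p);
    free(list);
}

Using strtok() on a single strdup'd copy, as the patch does, sidesteps
this entirely: there is only one allocation to release.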

___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: Missing letters after suspend

2016-03-30 Thread sam tygier
On 28/03/16 07:28, martin f krafft wrote:
> Hi,
> 
> I am not sure the following is an X.org issue, but I hope you'll let
> me start here.
> 
> Please see attached screenshot. This is on a laptop and occurs
> occasionally after I bring the system back from suspend. Everything
> works just fine, except certain letters (it's always a different set
> it seems) are just not displaying.
> 
> At first I thought this is limited to GTK apps (Firefox,
> Thunderbird, ssh-ask-pass, gscan2pdf), but I also see this with e.g.
> the Awesome window manager, which does not link with GTK.

I have also seen this, only twice, both in the past week or so. I suspend and 
resume 3 or 4 times per day, so this is fairly rare. I am on a Thinkpad X230 
with an i7-3520M and Intel HD Graphics 4000.
Fedora 23 x86-64
Xorg 1.18.1-3.fc23
mesa 11.1.0-2.20151218.fc23
xorg-x11-drv-intel 2.99.917-19.20151206

Also:
https://twitter.com/mjg59/status/708549476076679168

> Have you encountered this before? What is going on? Which software
> is at fault?
> 
> How can I fix this (without rebooting)?

Logging out and in again seemed to fix it.

Sam


___
xorg@lists.x.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: https://lists.x.org/mailman/listinfo/xorg

Re: [PATCH xserver RFC v2] glamor: fix wrong offset on composite rectangles

2016-03-30 Thread Olivier Fourdan
Hi all,

Anyone to give some feedback on this patch?

It fixes bug #94568 for me (there is a simple reproducer there) and I have not 
noticed any ill effect in my (limited) testing here.

I tried to see if rendercheck would detect such an issue, but apparently it 
doesn't, even with a patch to rendercheck to map its window farther from (0,0).

On the plus side, rendercheck doesn't seem to report any regression with 
this patch either (though, again, I am not sure I can trust rendercheck on that).

Cheers,
Olivier
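
For reference, a hypothetical reproducer sketch along the lines of the
simple one in bug #94568 (untested here; assumes libX11/libXrender, and
the window placement, size, and color are arbitrary; build with
cc repro.c -lX11 -lXrender):

#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    /* Place the window away from (0,0) so a wrongly reset offset shows. */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     300, 300, 200, 200, 0, 0,
                                     WhitePixel(dpy, scr));
    XRenderPictFormat *fmt =
        XRenderFindVisualFormat(dpy, DefaultVisual(dpy, scr));
    Picture pict;
    XRenderColor red = { 0xffff, 0, 0, 0xffff };

    XMapWindow(dpy, win);
    XSync(dpy, False);
    pict = XRenderCreatePicture(dpy, win, fmt, 0, NULL);
    /* PictOpSrc fill at (50,50) within the window; with the bug the
     * rectangle lands at the wrong offset. */
    XRenderFillRectangle(dpy, PictOpSrc, pict, &red, 50, 50, 100, 100);
    XFlush(dpy);
    sleep(5); /* keep the window up long enough to inspect */
    XCloseDisplay(dpy);
    return 0;
}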

- Original Message -
> When using PictOpSrc, the destination is wrongly shifted back to (0, 0).
> 
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94568
> 
> Signed-off-by: Olivier Fourdan 
> ---
>  v2: Clean up, move relevant code to where it's actually used;
>  Note: I am not entirely confident in this patch: it fixes the issue
>  for me, but I am definitely not certain it's correct...
> 
>  glamor/glamor_compositerects.c | 18 +-
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/glamor/glamor_compositerects.c b/glamor/glamor_compositerects.c
> index 885a6c0..199e627 100644
> --- a/glamor/glamor_compositerects.c
> +++ b/glamor/glamor_compositerects.c
> @@ -107,7 +107,6 @@ glamor_composite_rectangles(CARD8 op,
>  struct glamor_pixmap_private *priv;
>  pixman_region16_t region;
>  pixman_box16_t *boxes;
> -int dst_x, dst_y;
>  int num_boxes;
>  PicturePtr source = NULL;
>  Bool need_free_region = FALSE;
> @@ -225,17 +224,18 @@ glamor_composite_rectangles(CARD8 op,
> RegionExtents(&region)->x2, RegionExtents(&region)->y2,
> RegionNumRects(&region));
>  
> -glamor_get_drawable_deltas(dst->pDrawable, pixmap, &dst_x, &dst_y);
> -pixman_region_translate(&region, dst_x, dst_y);
> -
> -DEBUGF("%s: pixmap +(%d, %d) extents (%d, %d),(%d, %d)\n",
> -   __FUNCTION__, dst_x, dst_y,
> -   RegionExtents(&region)->x1, RegionExtents(&region)->y1,
> -   RegionExtents(&region)->x2, RegionExtents(&region)->y2);
> -
>  boxes = pixman_region_rectangles(&region, &num_boxes);
>  if (op == PictOpSrc || op == PictOpClear) {
>  CARD32 pixel;
> +int dst_x, dst_y;
> +
> +glamor_get_drawable_deltas(dst->pDrawable, pixmap, &dst_x, &dst_y);
> +pixman_region_translate(&region, dst_x, dst_y);
> +
> +DEBUGF("%s: pixmap +(%d, %d) extents (%d, %d),(%d, %d)\n",
> +   __FUNCTION__, dst_x, dst_y,
> +   RegionExtents(&region)->x1, RegionExtents(&region)->y1,
> +   RegionExtents(&region)->x2, RegionExtents(&region)->y2);
>  
>  if (op == PictOpClear)
>  pixel = 0;
> --
> 2.5.0
> 
> ___
> xorg-devel@lists.x.org: X.Org development
> Archives: http://lists.x.org/archives/xorg-devel
> Info: https://lists.x.org/mailman/listinfo/xorg-devel
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

[PATCH xserver v2] xfree86: Immediately handle failure to set HW cursor

2016-03-30 Thread Alexandre Courbot
There is currently no reliable way to report failure to set a HW
cursor. Still, such failures can happen if e.g. the MODE_CURSOR DRM
ioctl fails (which currently happens at least with modesetting on Tegra
for format incompatibility reasons).

As failures are currently handled by setting the HW cursor size to
(0,0), the fallback to SW cursor will not happen until the next time the
cursor changes and xf86CursorSetCursor() is called again. In the
meantime, the cursor will be invisible to the user.

This patch addresses that by adding _xf86CrtcFuncs::show_cursor_check and
_xf86CursorInfoRec::ShowCursorCheck hook variants that return booleans.
This allows errors to be propagated up to xf86CursorSetCursor(), which can
then fall back to using the SW cursor immediately.

Signed-off-by: Alexandre Courbot 
---
Changes since v1:
- Keep the old hooks unchanged to preserve compatibility
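
Illustrative only, not part of the diff below: a sketch of the
caller-side pattern the boolean return enables, with a made-up wrapper
name (the actual integration lives in xf86CursorSetCursor()):

/* set_cursor_with_fallback() is a hypothetical wrapper showing how the
 * result of xf86_show_cursors_check() can drive an immediate fallback. */
static void
set_cursor_with_fallback(ScrnInfoPtr scrn)
{
    if (!xf86_show_cursors_check(scrn)) {
        /* HW cursor was rejected (e.g. incompatible format): hide any
         * partially shown HW cursors and fall back to the SW cursor
         * right away instead of waiting for the next cursor change. */
        xf86_hide_cursors(scrn);
        /* ... hand the cursor over to the SW cursor layer ... */
    }
}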

 hw/xfree86/drivers/modesetting/drmmode_display.c | 15 +++-
 hw/xfree86/modes/xf86Crtc.h  |  4 
 hw/xfree86/modes/xf86Cursors.c   | 30 +---
 hw/xfree86/ramdac/xf86Cursor.h   |  1 +
 hw/xfree86/ramdac/xf86HWCurs.c   | 18 ++
 5 files changed, 50 insertions(+), 18 deletions(-)

diff --git a/hw/xfree86/drivers/modesetting/drmmode_display.c b/hw/xfree86/drivers/modesetting/drmmode_display.c
index f573a27f4985..28d663c3d22e 100644
--- a/hw/xfree86/drivers/modesetting/drmmode_display.c
+++ b/hw/xfree86/drivers/modesetting/drmmode_display.c
@@ -532,7 +532,7 @@ drmmode_set_cursor_position(xf86CrtcPtr crtc, int x, int y)
 drmModeMoveCursor(drmmode->fd, drmmode_crtc->mode_crtc->crtc_id, x, y);
 }
 
-static void
+static Bool
 drmmode_set_cursor(xf86CrtcPtr crtc)
 {
 drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
@@ -551,7 +551,7 @@ drmmode_set_cursor(xf86CrtcPtr crtc)
   handle, ms->cursor_width, ms->cursor_height,
   cursor->bits->xhot, cursor->bits->yhot);
 if (!ret)
-return;
+return TRUE;
 if (ret == -EINVAL)
 use_set_cursor2 = FALSE;
 }
@@ -566,7 +566,10 @@ drmmode_set_cursor(xf86CrtcPtr crtc)
 cursor_info->MaxWidth = cursor_info->MaxHeight = 0;
 drmmode_crtc->drmmode->sw_cursor = TRUE;
 /* fallback to swcursor */
+   return FALSE;
 }
+
+return TRUE;
 }
 
 static void
@@ -599,12 +602,12 @@ drmmode_hide_cursor(xf86CrtcPtr crtc)
  ms->cursor_width, ms->cursor_height);
 }
 
-static void
-drmmode_show_cursor(xf86CrtcPtr crtc)
+static Bool
+drmmode_show_cursor_check(xf86CrtcPtr crtc)
 {
 drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
 drmmode_crtc->cursor_up = TRUE;
-drmmode_set_cursor(crtc);
+return drmmode_set_cursor(crtc);
 }
 
 static void
@@ -844,7 +847,7 @@ static const xf86CrtcFuncsRec drmmode_crtc_funcs = {
 .set_mode_major = drmmode_set_mode_major,
 .set_cursor_colors = drmmode_set_cursor_colors,
 .set_cursor_position = drmmode_set_cursor_position,
-.show_cursor = drmmode_show_cursor,
+.show_cursor_check = drmmode_show_cursor_check,
 .hide_cursor = drmmode_hide_cursor,
 .load_cursor_argb = drmmode_load_cursor_argb,
 
diff --git a/hw/xfree86/modes/xf86Crtc.h b/hw/xfree86/modes/xf86Crtc.h
index 8b0160845a02..4bc4d6fc78fd 100644
--- a/hw/xfree86/modes/xf86Crtc.h
+++ b/hw/xfree86/modes/xf86Crtc.h
@@ -187,6 +187,8 @@ typedef struct _xf86CrtcFuncs {
  */
 void
  (*show_cursor) (xf86CrtcPtr crtc);
+Bool
+ (*show_cursor_check) (xf86CrtcPtr crtc);
 
 /**
  * Hide cursor
@@ -982,6 +984,8 @@ extern _X_EXPORT void
  */
 extern _X_EXPORT void
  xf86_show_cursors(ScrnInfoPtr scrn);
+extern _X_EXPORT Bool
+ xf86_show_cursors_check(ScrnInfoPtr scrn);
 
 /**
  * Called by the driver to turn cursors off
diff --git a/hw/xfree86/modes/xf86Cursors.c b/hw/xfree86/modes/xf86Cursors.c
index 5df1ab73a37e..ae581001c222 100644
--- a/hw/xfree86/modes/xf86Cursors.c
+++ b/hw/xfree86/modes/xf86Cursors.c
@@ -333,17 +333,23 @@ xf86_hide_cursors(ScrnInfoPtr scrn)
 }
 }
 
-static void
+static Bool
 xf86_crtc_show_cursor(xf86CrtcPtr crtc)
 {
 if (!crtc->cursor_shown && crtc->cursor_in_range) {
-crtc->funcs->show_cursor(crtc);
-crtc->cursor_shown = TRUE;
+if (crtc->funcs->show_cursor_check) {
+crtc->cursor_shown = crtc->funcs->show_cursor_check(crtc);
+} else {
+crtc->funcs->show_cursor(crtc);
+crtc->cursor_shown = TRUE;
+}
 }
+
+return crtc->cursor_shown;
 }
 
-void
-xf86_show_cursors(ScrnInfoPtr scrn)
+Bool
+xf86_show_cursors_check(ScrnInfoPtr scrn)
 {
 xf86CrtcConfigPtr xf86_config = XF86_CRTC_CONFIG_PTR(scrn);
 int c;
@@ -352,9 +358,17 @@ xf86_show_cursors(ScrnInfoPtr scrn)
 for (c = 0; c < xf86_config->num_crtc; c++) {
 xf86CrtcPtr crtc = xf86_config->crtc[c];