Re: WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast

2018-09-10 Thread Daniel Drake
On Fri, Sep 7, 2018 at 7:05 AM, Jasper St. Pierre  wrote:
> So this is a fun question and took me a day or two of random spelunking.
> Let's start with the last question, since it gives us a good starting point:
> why are the PCI IDs necessary?
>
> The answer is "DRI2 needs to figure out the driver to load if the user
> doesn't pass it into DRI2Connect".
> https://gitlab.freedesktop.org/xorg/xserver/blob/master/hw/xfree86/dri2/dri2.c#L1440

Thanks Jasper. Highly informative as usual!

> Let's now ask and answer three follow-up questions: 1. Why is the server
> using DRI2, 2. Why does the server need the driver name, and 3. Why doesn't
> mesa pass the driver name along?
>
> My best guess for why DRI2 is being used is that xf86-video-intel turns it
> off by default, because ickle didn't like the implicit synchronization that
> DRI3 had and refused to fix some bugs in it. So if you load
> xf86-video-intel, unless you configure it to turn on DRI3, you get DRI2.
> Yay.
>
> As for why mesa doesn't pass the driver name along, the answer just is that
> it doesn't. Maybe it should?
> https://github.com/mesa3d/mesa/blob/bd963f84302adb563136712c371023f15dadbea7/src/glx/dri2_glx.c#L1196
>
> DRI3 works a bit differently -- an FD is passed to the X server by mesa, and
> the DDX figures out how to interpret that FD. The full flow in rootless X is
> that logind picks an FD, passes that to the X server, and then the DDX
> driver (likely -modesetting) calls drmGetDeviceNameFromFd2, and all the
> logic is encapsulated in libdrm and mesa. But the generic DRI2 doesn't have
> an FD, really, so you have to get something.

I really appreciate all the info and explanations. This part (passing the
driver name and/or switching to DRI3) looks like it may be the most easily
actionable improvement; I'll look into it as time permits.

Daniel
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: https://lists.x.org/mailman/listinfo/xorg-devel

Re: WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast

2018-09-06 Thread Jasper St. Pierre
On Mon, Aug 27, 2018 at 1:07 AM Daniel Drake  wrote:

> Hi,
>
> I'm looking at a strange issue which has taken me across WebKit,
> glvnd, mesa and X, and has left me somewhat confused as to whether I've
> found any real bugs here or just expected behaviour (my graphics
> knowledge doesn't go far beyond the basics).
>
> The issue:
> Under xserver-1.18 + mesa-18.1, on Intel GeminiLake, the
> Webkit-powered GNOME online accounts UI shows a blank window (instead
> of the web service login UI). The logs show a webkit crash at the same
> time, because it doesn't handle a GLXBadFBConfig X error.
>
> On the Webkit side, it is failing to find an appropriate GLXFBConfig
> that corresponds to the X visual of the window, which is using a depth
> 32 RGBA visual. It then ends up passing a NULL config to
> glXCreateContextAttribsARB() which results in an error.
>
> Inspecting the available visuals and GLXFBConfigs with glxinfo, I
> observe that there is only one visual with depth 32 (the one being
> used here), but there isn't even a single GLXFBConfig with depth 32.
>
> Looking on the X server side, I observe the active code that first
> deals with the fbconfigs list is glxdriswrast.c __glXDRIscreenProbe,
> which is calling into mesa's driSWRastCreateNewScreen() and getting
> the available fbconfigs from there.
>
> I then spotted a log message:
>   (EE) modeset(0): [DRI2] No driver mapping found for PCI device 0x8086 /
> 0x3184
>
> and then I find hw/xfree86/dri2/pci_ids/i965_pci_ids.h, which (on this
> old X) is missing GeminiLake PCI IDs, so I add it there. Now I have my
> depth 32 fbconfig with the right visual assigned and webkit works.
>
>
> Questions:
>
> 1. What should webkit be doing in the event that it cannot find a
> GLXFBConfig that corresponds to the X visual of its window?
>
>
> 2. Why is swrast coming into the picture? Is swrast being used for
> rendering?
>
> I was surprised to see that appear in the traces. I had assumed that
> with a new enough mesa, I would be avoiding software rendering
> codepaths.
>
> I don't think it's using swrast for rendering, because I feel like I
> would have noticed the corresponding slow performance; also, even before
> my changes, glxinfo says:
>
>   direct rendering: Yes
>   Extended renderer info (GLX_MESA_query_renderer):
> Vendor: Intel Open Source Technology Center (0x8086)
> Device: Mesa DRI Intel(R) UHD Graphics 605 (Geminilake)  (0x3184)
> Version: 18.1.6
> Accelerated: yes
>
> If swrast is not being used for rendering, why is it being used to
> determine what the available fbconfigs are? Is that a bug?
>
>
> 3. Should swrast offer a depth 32 GLXFBConfig?
>
> If I were on a setup that really uses swrast for rendering (e.g. if
> mesa doesn't provide an accelerated graphics driver), I assume this
> webkit crash would be hit there too, due to not having a depth 32
> fbconfig.
>
> Should it have one?
>
> I didn't investigate in detail, but it looks like mesa's
> dri_fill_in_modes() (perhaps via its calls down to
> llvmpipe_is_format_supported()) declares that depth 32 is not
> supported in the swrast codepath.
>
>
> 4. Why is there still a list of PCI IDs in the X server?
>
> I was under the impression that these days, rendering stuff has been
> handed off to mesa, and display stuff has been handed off to KMS. Both
> the kernel and mesa have corresponding drivers for those functions
> (and their own lists of PCI IDs).
>
> I was then surprised to see the X server also maintaining a list of
> PCI IDs and it having a significant effect on which codepaths are
> followed.
>
>
> Thanks for any clarifications!
>

So this is a fun question and took me a day or two of random spelunking.
Let's start with the last question, since it gives us a good starting
point: why are the PCI IDs necessary?

The answer is "DRI2 needs to figure out the driver to load if the user
doesn't pass it into DRI2Connect".
https://gitlab.freedesktop.org/xorg/xserver/blob/master/hw/xfree86/dri2/dri2.c#L1440

Let's now ask and answer three follow-up questions: 1. Why is the server
using DRI2, 2. Why does the server need the driver name, and 3. Why doesn't
mesa pass the driver name along?

My best guess for why DRI2 is being used is that xf86-video-intel turns it
off by default, because ickle didn't like the implicit synchronization that
DRI3 had and refused to fix some bugs in it. So if you load
xf86-video-intel, unless you configure it to turn on DRI3, you get DRI2.
Yay.

As for why mesa doesn't pass the driver name along, the answer just is that
it doesn't. Maybe it should?
https://github.com/mesa3d/mesa/blob/bd963f84302adb563136712c371023f15dadbea7/src/glx/dri2_glx.c#L1196

DRI3 works a bit differently -- an FD is passed to the X server by mesa,
and the DDX figures out how to interpret that FD. The full flow in rootless
X is that logind picks an FD, passes that to the X server, and then the DDX
driver (likely -modesetting) calls drmGetDeviceNameFromFd2, and all the
logic is encapsulated in libdrm and mesa. But the generic DRI2 doesn't
have an FD, really, so you have to get something.

Re: WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast

2018-09-05 Thread Emil Velikov
Hi Daniel,

On 27 August 2018 at 09:07, Daniel Drake  wrote:

> Questions:
>
> 1. What should webkit be doing in the event that it cannot find a
> GLXFBConfig that corresponds to the X visual of its window?
>
>
Attempt another config that the user (webkit) knows how to work with?

> 2. Why is swrast coming into the picture? Is swrast being used for rendering?
>

You're using the modesetting DDX, so the 2D acceleration comes from
glamor/OpenGL.
When the driver name cannot be retrieved, glamor will use ... well, no
driver - aka swrast.

Possible solutions/workarounds:
 - try the intel DDX - not everyone is using it; it sees development
yet no releases :-\
 - audit the codepaths to see whether the driver name can be obtained
some other way

> 4. Why is there still a list of PCI IDs in the X server?
>
Nobody had the time/interest to fix that up. I did fold the 5 (IIRC)
different codepaths in Mesa down to 1.
ETIME and EWORK kind of got in the way of fixing up the X server.

I guess it could go up my priority list, if employer (Collabora) suggests it ;-)

HTH
Emil

WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast

2018-08-27 Thread Daniel Drake
Hi,

I'm looking at a strange issue which has taken me across WebKit,
glvnd, mesa and X, and has left me somewhat confused as to whether I've
found any real bugs here or just expected behaviour (my graphics
knowledge doesn't go far beyond the basics).

The issue:
Under xserver-1.18 + mesa-18.1, on Intel GeminiLake, the
Webkit-powered GNOME online accounts UI shows a blank window (instead
of the web service login UI). The logs show a webkit crash at the same
time, because it doesn't handle a GLXBadFBConfig X error.

On the Webkit side, it is failing to find an appropriate GLXFBConfig
that corresponds to the X visual of the window, which is using a depth
32 RGBA visual. It then ends up passing a NULL config to
glXCreateContextAttribsARB() which results in an error.

Inspecting the available visuals and GLXFBConfigs with glxinfo, I
observe that there is only one visual with depth 32 (the one being
used here), but there isn't even a single GLXFBConfig with depth 32.

Looking on the X server side, I observe the active code that first
deals with the fbconfigs list is glxdriswrast.c __glXDRIscreenProbe,
which is calling into mesa's driSWRastCreateNewScreen() and getting
the available fbconfigs from there.

I then spotted a log message:
  (EE) modeset(0): [DRI2] No driver mapping found for PCI device 0x8086 / 0x3184

and then I find hw/xfree86/dri2/pci_ids/i965_pci_ids.h, which (on this
old X) is missing GeminiLake PCI IDs, so I add it there. Now I have my
depth 32 fbconfig with the right visual assigned and webkit works.


Questions:

1. What should webkit be doing in the event that it cannot find a
GLXFBConfig that corresponds to the X visual of its window?


2. Why is swrast coming into the picture? Is swrast being used for rendering?

I was surprised to see that appear in the traces. I had assumed that
with a new enough mesa, I would be avoiding software rendering
codepaths.

I don't think it's using swrast for rendering, because I feel like I
would have noticed the corresponding slow performance; also, even before
my changes, glxinfo says:

  direct rendering: Yes
  Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) UHD Graphics 605 (Geminilake)  (0x3184)
Version: 18.1.6
Accelerated: yes

If swrast is not being used for rendering, why is it being used to
determine what the available fbconfigs are? Is that a bug?


3. Should swrast offer a depth 32 GLXFBConfig?

If I were on a setup that really uses swrast for rendering (e.g. if
mesa doesn't provide an accelerated graphics driver), I assume this
webkit crash would be hit there too, due to not having a depth 32
fbconfig.

Should it have one?

I didn't investigate in detail, but it looks like mesa's
dri_fill_in_modes() (perhaps via its calls down to
llvmpipe_is_format_supported()) declares that depth 32 is not
supported in the swrast codepath.


4. Why is there still a list of PCI IDs in the X server?

I was under the impression that these days, rendering stuff has been
handed off to mesa, and display stuff has been handed off to KMS. Both
the kernel and mesa have corresponding drivers for those functions
(and their own lists of PCI IDs).

I was then surprised to see the X server also maintaining a list of
PCI IDs and it having a significant effect on which codepaths are
followed.


Thanks for any clarifications!

Daniel