Re: WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast

2018-09-06 Thread Jasper St. Pierre
On Mon, Aug 27, 2018 at 1:07 AM Daniel Drake  wrote:

> Hi,
>
> I'm looking at a strange issue which has taken me across WebKit,
> glvnd, mesa and X, and has left me somewhat confused about whether I've
> found any real bugs here or just hit expected behaviour (my graphics
> knowledge doesn't go far beyond the basics).
>
> The issue:
> Under xserver-1.18 + mesa-18.1, on Intel GeminiLake, the
> Webkit-powered GNOME online accounts UI shows a blank window (instead
> of the web service login UI). The logs show a webkit crash at the same
> time, because it doesn't handle a GLXBadFBConfig X error.
>
> On the Webkit side, it is failing to find an appropriate GLXFBConfig
> that corresponds to the X visual of the window, which is using a depth
> 32 RGBA visual. It then ends up passing a NULL config to
> glXCreateContextAttribsARB() which results in an error.
>
> Inspecting the available visuals and GLXFBConfigs with glxinfo, I
> observe that there is only one visual with depth 32 (the one being
> used here), but there isn't even a single GLXFBConfig with depth 32.
>
> Looking on the X server side, I observe the active code that first
> deals with the fbconfigs list is glxdriswrast.c __glXDRIscreenProbe,
> which is calling into mesa's driSWRastCreateNewScreen() and getting
> the available fbconfigs from there.
>
> I then spotted a log message:
>   (EE) modeset(0): [DRI2] No driver mapping found for PCI device 0x8086 / 0x3184
>
> and then I find hw/xfree86/dri2/pci_ids/i965_pci_ids.h, which (on this
> old X) is missing GeminiLake PCI IDs, so I add it there. Now I have my
> depth 32 fbconfig with the right visual assigned and webkit works.
>
>
> Questions:
>
> 1. What should WebKit do in the event that it cannot find a
> GLXFBConfig that corresponds to the X visual of its window?
>
>
> 2. Why is swrast coming into the picture? Is swrast being used for
> rendering?
>
> I was surprised to see that appear in the traces. I had assumed that
> with a new enough mesa, I would be avoiding software rendering
> codepaths.
>
> I don't think it's using swrast for rendering, because I feel like I
> would have noticed correspondingly slow performance; also, even before my
> changes, glxinfo says:
>
>   direct rendering: Yes
>   Extended renderer info (GLX_MESA_query_renderer):
> Vendor: Intel Open Source Technology Center (0x8086)
> Device: Mesa DRI Intel(R) UHD Graphics 605 (Geminilake)  (0x3184)
> Version: 18.1.6
> Accelerated: yes
>
> If swrast is not being used for rendering, why is it being used to
> determine what the available fbconfigs are? Is that a bug?
>
>
> 3. Should swrast offer a depth 32 GLXFBConfig?
>
> If I were on a setup that really uses swrast for rendering (e.g. if
> mesa doesn't provide an accelerated graphics driver), I assume this
> webkit crash would be hit there too, due to not having a depth 32
> fbconfig.
>
> Should it have one?
>
> I didn't investigate in detail, but it looks like mesa's
> dri_fill_in_modes() (perhaps via its calls down to
> llvmpipe_is_format_supported()) declares that depth 32 is not
> supported in the swrast codepath.
>
>
> 4. Why is there still a list of PCI IDs in the X server?
>
> I was under the impression that these days, rendering stuff has been
> handed off to mesa, and display stuff has been handed off to KMS. Both
> the kernel and mesa have corresponding drivers for those functions
> (and their own lists of PCI IDs).
>
> I was then surprised to see the X server also maintaining a list of
> PCI IDs and it having a significant effect on which codepaths are
> followed.
>
>
> Thanks for any clarifications!
>

So this is a fun question and took me a day or two of random spelunking.
Let's start with the last question, since it gives us a good starting
point: why are the PCI IDs necessary?

The answer is "DRI2 needs to figure out the driver to load if the user
doesn't pass it into DRI2Connect".
https://gitlab.freedesktop.org/xorg/xserver/blob/master/hw/xfree86/dri2/dri2.c#L1440

Let's now ask and answer three more follow-up questions: 1. Why is the server
using DRI2? 2. Why does the server need the driver name? And 3. Why doesn't
mesa pass the driver name along?

My best guess for why DRI2 is being used is that xf86-video-intel turns it
off by default, because ickle didn't like the implicit synchronization that
DRI3 had and refused to fix some bugs in it. So if you load
xf86-video-intel, unless you configure it to turn on DRI3, you get DRI2.
Yay.

As for why mesa doesn't pass the driver name along, the answer just is that
it doesn't. Maybe it should?
https://github.com/mesa3d/mesa/blob/bd963f84302adb563136712c371023f15dadbea7/src/glx/dri2_glx.c#L1196

DRI3 works a bit differently -- an FD is passed to the X server by mesa,
and the DDX figures out how to interpret that FD. The full flow in rootless
X is that logind picks an FD, passes that to the X server, and then the DDX
driver (likely -modesetting) calls drmGetDeviceNameFromFd2, and all the
logic is 
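
Coming back to question 1: rather than handing glXCreateContextAttribsARB()
a NULL config, a client in WebKit's position can walk the GLXFBConfig list
and match on GLX_VISUAL_ID. A minimal sketch using the standard GLX 1.3 API
(illustrative only; error handling and config scoring are omitted):

#include <X11/Xlib.h>
#include <GL/glx.h>

/* Illustrative only: find the GLXFBConfig whose associated X visual
 * matches the visual of the window we intend to render into.  Returns
 * NULL if no fbconfig advertises that visual (the depth 32 situation
 * described above). */
static GLXFBConfig
find_fbconfig_for_visual(Display *dpy, int screen, VisualID visualid)
{
    int i, n = 0;
    GLXFBConfig match = NULL;
    GLXFBConfig *configs = glXGetFBConfigs(dpy, screen, &n);

    if (!configs)
        return NULL;

    for (i = 0; i < n; i++) {
        int value = 0;

        glXGetFBConfigAttrib(dpy, configs[i], GLX_VISUAL_ID, &value);
        if ((VisualID) value == visualid) {
            match = configs[i];
            break;
        }
    }

    XFree(configs);
    return match;
}

If this returns NULL, the sane options are to fall back to another visual
or to a non-GLX path, rather than passing a NULL config along and taking
the GLXBadFBConfig error.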

[PATCH xserver 3/3] glamor: add support for NV12 in Xv

2018-09-06 Thread Julien Isorce
Useful when video decoders only support NV12. Currently
glamor Xv only supports I420 and YV12.

Note that Intel's sna supports I420, YV12, YUY2, UYVY, NV12.

Test: xvinfo | grep NV12
Test: gst-launch-1.0 videotestsrc ! video/x-raw, format=NV12 ! xvimagesink

Signed-off-by: Julien Isorce 
---
 glamor/glamor_xv.c | 180 +
 1 file changed, 155 insertions(+), 25 deletions(-)

diff --git a/glamor/glamor_xv.c b/glamor/glamor_xv.c
index 62fc4ff..5631293 100644
--- a/glamor/glamor_xv.c
+++ b/glamor/glamor_xv.c
@@ -59,8 +59,40 @@ typedef struct tagREF_TRANSFORM {
 #define RTFContrast(a)   (1.0 + ((a)*1.0)/1000.0)
 #define RTFHue(a)   (((a)*3.1416)/1000.0)
 
-static const glamor_facet glamor_facet_xv_planar = {
-.name = "xv_planar",
+static const glamor_facet glamor_facet_xv_planar_2 = {
+.name = "xv_planar_2",
+
+.version = 120,
+
+.source_name = "v_texcoord0",
+.vs_vars = ("attribute vec2 position;\n"
+"attribute vec2 v_texcoord0;\n"
+"varying vec2 tcs;\n"),
+.vs_exec = (GLAMOR_POS(gl_Position, position)
+"tcs = v_texcoord0;\n"),
+
+.fs_vars = ("uniform sampler2D y_sampler;\n"
+"uniform sampler2D u_sampler;\n"
+"uniform vec4 offsetyco;\n"
+"uniform vec4 ucogamma;\n"
+"uniform vec4 vco;\n"
+"varying vec2 tcs;\n"),
+.fs_exec = (
+"float sample;\n"
+"vec4 temp1;\n"
+"sample = texture2D(y_sampler, tcs).w;\n"
+"temp1.xyz = offsetyco.www * vec3(sample) + offsetyco.xyz;\n"
+"sample = texture2D(u_sampler, tcs).x;\n"
+"temp1.xyz = ucogamma.xyz * vec3(sample) + temp1.xyz;\n"
+"sample = texture2D(u_sampler, tcs).y;\n"
+"temp1.xyz = clamp(vco.xyz * vec3(sample) + temp1.xyz, 0.0, 1.0);\n"
+"temp1.w = 1.0;\n"
+"gl_FragColor = temp1;\n"
+),
+};
+
+static const glamor_facet glamor_facet_xv_planar_3 = {
+.name = "xv_planar_3",
 
 .version = 120,
 
@@ -110,26 +142,50 @@ Atom glamorBrightness, glamorContrast, glamorSaturation, glamorHue,
 XvImageRec glamor_xv_images[] = {
 XVIMAGE_YV12,
 XVIMAGE_I420,
+XVIMAGE_NV12
 };
 int glamor_xv_num_images = ARRAY_SIZE(glamor_xv_images);
 
 static void
-glamor_init_xv_shader(ScreenPtr screen)
+glamor_init_xv_shader(ScreenPtr screen, int id)
 {
 glamor_screen_private *glamor_priv = glamor_get_screen_private(screen);
 GLint sampler_loc;
+const glamor_facet *glamor_facet_xv_planar = NULL;
+
+switch (id) {
+case FOURCC_YV12:
+case FOURCC_I420:
+glamor_facet_xv_planar = &glamor_facet_xv_planar_3;
+break;
+case FOURCC_NV12:
+glamor_facet_xv_planar = &glamor_facet_xv_planar_2;
+break;
+default:
+break;
+}
 
 glamor_build_program(screen,
 &glamor_priv->xv_prog,
- &glamor_facet_xv_planar, NULL, NULL, NULL);
+ glamor_facet_xv_planar, NULL, NULL, NULL);
 
 glUseProgram(glamor_priv->xv_prog.prog);
 sampler_loc = glGetUniformLocation(glamor_priv->xv_prog.prog, "y_sampler");
 glUniform1i(sampler_loc, 0);
 sampler_loc = glGetUniformLocation(glamor_priv->xv_prog.prog, "u_sampler");
 glUniform1i(sampler_loc, 1);
-sampler_loc = glGetUniformLocation(glamor_priv->xv_prog.prog, "v_sampler");
-glUniform1i(sampler_loc, 2);
+
+switch (id) {
+case FOURCC_YV12:
+case FOURCC_I420:
+sampler_loc = glGetUniformLocation(glamor_priv->xv_prog.prog, "v_sampler");
+glUniform1i(sampler_loc, 2);
+break;
+case FOURCC_NV12:
+break;
+default:
+break;
+}
 
 }
 
@@ -227,6 +283,21 @@ glamor_xv_query_image_attributes(int id,
 offsets[2] = size;
 size += tmp;
 break;
+case FOURCC_NV12:
+*w = ALIGN(*w, 2);
+*h = ALIGN(*h, 2);
+size = ALIGN(*w, 4);
+if (pitches)
+pitches[0] = size;
+size *= *h;
+if (offsets)
+offsets[1] = offsets[2] = size;
+tmp = ALIGN(*w, 4);
+if (pitches)
+pitches[1] = pitches[2] = tmp;
+tmp *= (*h >> 1);
+size += tmp;
+break;
 }
 return size;
 }
@@ -240,7 +311,7 @@ static REF_TRANSFORM trans[2] = {
 };
 
 void
-glamor_xv_render(glamor_port_private *port_priv)
+glamor_xv_render(glamor_port_private *port_priv, int id)
 {
 ScreenPtr screen = port_priv->pPixmap->drawable.pScreen;
 glamor_screen_private *glamor_priv = glamor_get_screen_private(screen);
@@ -264,7 +335,7 @@ glamor_xv_render(glamor_port_private *port_priv)
 int dst_box_index;
 
 if (!glamor_priv->xv_prog.prog)
-glamor_init_xv_shader(screen);
+
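
As a cross-check of the NV12 arithmetic in the glamor_xv_query_image_attributes
hunk above: the image is one full-resolution Y plane followed by one
half-height plane of interleaved CbCr samples. A small standalone sketch
(hypothetical helper, not part of the patch) that computes the same layout:

#include <stdio.h>

#define ALIGN(i, m) (((i) + (m) - 1) & ~((m) - 1))

/* Mirrors the NV12 case above: pitches[0]/offsets[0] describe the Y
 * plane, pitches[1]/offsets[1] the interleaved CbCr plane. */
static int
nv12_layout(int w, int h, int pitches[2], int offsets[2])
{
    int size, tmp;

    w = ALIGN(w, 2);
    h = ALIGN(h, 2);

    size = ALIGN(w, 4);          /* Y pitch */
    pitches[0] = size;
    offsets[0] = 0;
    size *= h;                   /* Y plane bytes */
    offsets[1] = size;

    tmp = ALIGN(w, 4);           /* CbCr pitch (Cb/Cr bytes interleaved) */
    pitches[1] = tmp;
    tmp *= (h >> 1);             /* CbCr plane is half height */
    size += tmp;

    return size;                 /* total image size in bytes */
}

int
main(void)
{
    int pitches[2], offsets[2];
    int size = nv12_layout(320, 240, pitches, offsets);

    /* For 320x240: Y plane 320*240 = 76800 bytes, CbCr plane
     * 320*120 = 38400 bytes, 115200 bytes total (12 bits per pixel). */
    printf("size=%d y_pitch=%d cbcr_offset=%d\n",
           size, pitches[0], offsets[1]);
    return 0;
}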

[PATCH xserver 0/3] Xv: add NV12 support in glamor

2018-09-06 Thread Julien Isorce
Some video decoders can only output NV12 and currently glamor Xv only
supports I420 and YV12.

Tested with xf86-video-ati, xf86-video-amdgpu and xf86-video-modesetting
on AMD graphics but should work on any setup that can use glamor.

Test: gst-launch-1.0 videotestsrc ! video/x-raw, format=NV12 ! xvimagesink

Julien Isorce (3):
  xfree86: define FOURCC_NV12 and XVIMAGE_NV12
  glamor: add support for GL_RG
  glamor: add support for NV12 in Xv

 glamor/glamor.c|   2 +
 glamor/glamor.h|   1 +
 glamor/glamor_priv.h   |   4 +-
 glamor/glamor_transfer.c   |  10 ++-
 glamor/glamor_utils.h  |   4 +
 glamor/glamor_xv.c | 180 ++---
 hw/xfree86/common/fourcc.h |  20 +
 7 files changed, 193 insertions(+), 28 deletions(-)

-- 
2.7.4


[PATCH xserver 1/3] xfree86: define FOURCC_NV12 and XVIMAGE_NV12

2018-09-06 Thread Julien Isorce
Useful for glamor.

Signed-off-by: Julien Isorce 
---
 hw/xfree86/common/fourcc.h | 20 
 1 file changed, 20 insertions(+)

diff --git a/hw/xfree86/common/fourcc.h b/hw/xfree86/common/fourcc.h
index e6126b7..a19e686 100644
--- a/hw/xfree86/common/fourcc.h
+++ b/hw/xfree86/common/fourcc.h
@@ -156,4 +156,24 @@
 XvTopToBottom \
}
 
+#define FOURCC_NV12 0x3231564e
+#define XVIMAGE_NV12 \
+   { \
+FOURCC_NV12, \
+XvYUV, \
+LSBFirst, \
+{'N','V','1','2', \
+  0x00,0x00,0x00,0x10,0x80,0x00,0x00,0xAA,0x00,0x38,0x9B,0x71}, \
+12, \
+XvPlanar, \
+2, \
+0, 0, 0, 0, \
+8, 8, 8, \
+1, 2, 2, \
+1, 2, 2, \
+{'Y','U','V', \
+  0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}, \
+XvTopToBottom \
+   }
+
 #endif  /* _XF86_FOURCC_H_ */
-- 
2.7.4


[PATCH xserver 2/3] glamor: add support for GL_RG

2018-09-06 Thread Julien Isorce
Allow uploading the CbCr plane of an NV12 image into a GL texture.

Signed-off-by: Julien Isorce 
---
 glamor/glamor.c  |  2 ++
 glamor/glamor.h  |  1 +
 glamor/glamor_priv.h |  4 +++-
 glamor/glamor_transfer.c | 10 --
 glamor/glamor_utils.h|  4 
 5 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/glamor/glamor.c b/glamor/glamor.c
index 9bf1707..f24cc9f 100644
--- a/glamor/glamor.c
+++ b/glamor/glamor.c
@@ -204,6 +204,8 @@ glamor_create_pixmap(ScreenPtr screen, int w, int h, int depth,
 
 pixmap_priv = glamor_get_pixmap_private(pixmap);
 
+pixmap_priv->is_cbcr = (usage == GLAMOR_CREATE_FORMAT_CBCR);
+
 format = gl_iformat_for_pixmap(pixmap);
 
 pitch = (((w * pixmap->drawable.bitsPerPixel + 7) / 8) + 3) & ~3;
diff --git a/glamor/glamor.h b/glamor/glamor.h
index 09e9c89..8d79597 100644
--- a/glamor/glamor.h
+++ b/glamor/glamor.h
@@ -126,6 +126,7 @@ extern _X_EXPORT Bool glamor_destroy_pixmap(PixmapPtr pixmap);
 #define GLAMOR_CREATE_FBO_NO_FBO        0x103
 #define GLAMOR_CREATE_NO_LARGE  0x105
 #define GLAMOR_CREATE_PIXMAP_NO_TEXTURE 0x106
+#define GLAMOR_CREATE_FORMAT_CBCR   0x107
 
 /* @glamor_egl_exchange_buffers: Exchange the underlying buffers(KHR image,fbo).
  *
diff --git a/glamor/glamor_priv.h b/glamor/glamor_priv.h
index 7d9a7d4..68cb248 100644
--- a/glamor/glamor_priv.h
+++ b/glamor/glamor_priv.h
@@ -378,6 +378,8 @@ typedef struct glamor_pixmap_private {
  * names.
  */
 glamor_pixmap_fbo **fbo_array;
+
+Bool is_cbcr;
 } glamor_pixmap_private;
 
 extern DevPrivateKeyRec glamor_pixmap_private_key;
@@ -899,7 +901,7 @@ int glamor_xv_put_image(glamor_port_private *port_priv,
 Bool sync,
 RegionPtr clipBoxes);
 void glamor_xv_core_init(ScreenPtr screen);
-void glamor_xv_render(glamor_port_private *port_priv);
+void glamor_xv_render(glamor_port_private *port_priv, int id);
 
 #include "glamor_utils.h"
 
diff --git a/glamor/glamor_transfer.c b/glamor/glamor_transfer.c
index ebb5101..421ed3a 100644
--- a/glamor/glamor_transfer.c
+++ b/glamor/glamor_transfer.c
@@ -27,6 +27,7 @@
 void
 glamor_format_for_pixmap(PixmapPtr pixmap, GLenum *format, GLenum *type)
 {
+glamor_pixmap_private   *priv = glamor_get_pixmap_private(pixmap);
 switch (pixmap->drawable.depth) {
 case 24:
 case 32:
@@ -38,8 +39,13 @@ glamor_format_for_pixmap(PixmapPtr pixmap, GLenum *format, GLenum *type)
 *type = GL_UNSIGNED_INT_2_10_10_10_REV;
 break;
 case 16:
-*format = GL_RGB;
-*type = GL_UNSIGNED_SHORT_5_6_5;
+if (priv->is_cbcr) {
+  *format = priv->fbo->format;
+  *type = GL_UNSIGNED_BYTE;
+} else {
+  *format = GL_RGB;
+  *type = GL_UNSIGNED_SHORT_5_6_5;
+}
 break;
 case 15:
 *format = GL_BGRA;
diff --git a/glamor/glamor_utils.h b/glamor/glamor_utils.h
index 0d5674d..1890c1f 100644
--- a/glamor/glamor_utils.h
+++ b/glamor/glamor_utils.h
@@ -613,11 +613,15 @@ gl_iformat_for_pixmap(PixmapPtr pixmap)
 {
 glamor_screen_private *glamor_priv =
 glamor_get_screen_private((pixmap)->drawable.pScreen);
+glamor_pixmap_private *pixmap_priv = glamor_get_pixmap_private(pixmap);
 
 if (glamor_priv->gl_flavor == GLAMOR_GL_DESKTOP &&
 ((pixmap)->drawable.depth == 1 || (pixmap)->drawable.depth == 8)) {
 return glamor_priv->one_channel_format;
 } else if (glamor_priv->gl_flavor == GLAMOR_GL_DESKTOP &&
+   (pixmap)->drawable.depth == 16 && pixmap_priv->is_cbcr) {
+return GL_RG;
+} else if (glamor_priv->gl_flavor == GLAMOR_GL_DESKTOP &&
(pixmap)->drawable.depth == 30) {
 return GL_RGB10_A2;
 } else {
-- 
2.7.4
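
For reference, the CbCr upload this series enables boils down to treating
the interleaved Cb/Cr bytes as a two-channel texture. A rough sketch in
plain desktop GL, assuming a context that exposes GL_ARB_texture_rg (this
is not the glamor code path itself):

#include <GL/gl.h>
#include <GL/glext.h>

/* Rough illustration: upload the CbCr plane of an NV12 image as a GL_RG
 * texture.  The plane is half the width and half the height of the Y
 * plane, with Cb landing in the red channel and Cr in the green channel,
 * which is what the xv_planar_2 shader samples as .x and .y. */
static void
upload_nv12_cbcr_plane(GLuint tex, int y_width, int y_height,
                       const unsigned char *cbcr)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8,
                 y_width / 2, y_height / 2, 0,
                 GL_RG, GL_UNSIGNED_BYTE, cbcr);
}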


Re: [PATCH xserver] glamor_egl: request GL2.1 when requesting Desktop GL context

2018-09-06 Thread Eric Anholt
Icenowy Zheng  writes:

> Some devices cannot support OpenGL 2.1, which is the minimum desktop GL
> version required by glamor. However, they may support OpenGL ES 2.0,
> which is the GLES version required by glamor. Usually in this situation
> the desktop GL version supported is 2.0 or 1.4.
>
> Currently, as no version requirement is passed when creating the desktop
> GL context, an OpenGL 1.4/2.0 context will be created, and glamor will
> complain that the context is not suitable, although the GPU supports a
> suitable GLES context.
>
> Add a version 2.1 requirement when requesting a non-core desktop GL
> context (a core context is at least 3.1), so glamor falls back to
> creating GLES contexts when the version requirement is not met.
>
> Tested on an Intel 945GMS integrated GPU, which supports GL 1.4 and GLES
> 2.0. Before this change, the X server fails to start when no configuration
> is present because glamor initialization fails; with it, glamor starts
> with GLES.

In commit:

commit 4218a1e066cf39bb980ebbc9f69536c85232da5c
Author: Olivier Fourdan 
Date:   Thu Feb 5 11:59:22 2015 +0100

glamor: check max native ALU instructions

a check was introduced on desktop GL to keep i915 from using glamor,
because it just falls back to swrast all the time.  Before enabling
glamor on i915 again after Mesa stopped exposing desktop GL 2.x on i915
(for similar reasons, but for Chromium), we would want to diagnose those
fallbacks and add some less-slow paths for glamor on i915.
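
For context, the request described in the patch amounts to passing an
explicit major/minor version when creating the desktop GL context and
falling back to GLES when that fails. A rough sketch, assuming the EGL 1.5
/ EGL_KHR_create_context attribute names rather than the actual glamor_egl
code:

#include <EGL/egl.h>

/* Rough illustration: ask for desktop GL 2.1; if the driver cannot
 * provide it, fall back to a GLES 2.0 context instead. */
static EGLContext
create_context_with_fallback(EGLDisplay dpy, EGLConfig cfg)
{
    static const EGLint gl21_attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, 2,
        EGL_CONTEXT_MINOR_VERSION, 1,
        EGL_NONE
    };
    static const EGLint gles2_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };
    EGLContext ctx;

    eglBindAPI(EGL_OPENGL_API);
    ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, gl21_attribs);
    if (ctx != EGL_NO_CONTEXT)
        return ctx;

    eglBindAPI(EGL_OPENGL_ES_API);
    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, gles2_attribs);
}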



[PATCH xserver] meson: Fix building with -Ddga=false

2018-09-06 Thread Lyude Paul
We forget to assign a value to xf86dgaproto_dep if -Ddga=false, which
causes the meson build to fail:

meson.build:448:0: ERROR:  Unknown variable "xf86dgaproto_dep".

A full log can be found at /home/lyudess/build/xserver/meson-logs/meson-log.txt
FAILED: build.ninja

So, just set it to an empty dependency to fix that.

Signed-off-by: Lyude Paul 
---
 meson.build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/meson.build b/meson.build
index 53cdbe2be..29794f083 100644
--- a/meson.build
+++ b/meson.build
@@ -407,6 +407,7 @@ if not build_xv
 endif
 
 build_dga = false
+xf86dgaproto_dep = dependency('', required: false)
 if get_option('dga') == 'auto'
xf86dgaproto_dep = dependency('xf86dgaproto', version: '>= 2.0.99.1', required: false)
 if xf86dgaproto_dep.found()
-- 
2.17.1


Re: vesa module's name for its exposed video port?

2018-09-06 Thread Adam Jackson
On Thu, 2018-09-06 at 16:39 +, m...@crcomp.net wrote:
> Greetings,
> 
> A grep of Xorg.0.log [1] seems to indicate that only the vesa driver is
> loaded and then used by X on my PC.
> On the other hand, when the intel driver loads on my laptop with an 
> external VGA port, the intel driver uses VGA as the name for the exposed 
> port. You can then use Identifier "VGA" [2] to override the EDID, which 
> the X probe returns.
> What's the name that the vesa module uses for its exposed video
> port? (It's the vesa port name that needs to appear in xorg.conf in
> order to force a mode.) 

This is misguided in a couple of ways. The automatic matching of
Monitor sections by output name is only done for RANDR-1.2-capable
drivers, which vesa is not. For vesa-like drivers, if there's only one
Monitor section in the config file, it will be used (iirc). Otherwise,
one names the monitor whatever one wants, and simply names it again in
the Device section to bind the two together, a la:

Section "Monitor"
Identifier "Spartacus"
# ...
EndSection

Section "Device"
Identifier "default"
Driver "vesa"
Monitor "Spartacus"
EndSection

That said: vesa, among its many other limitations, can't set arbitrary
modes. If the mode isn't pre-programmed into the video BIOS, it's not
available, and no amount of xorg.conf is going to save you. The
complete X log would give more clues about what modes are available and
why you're not getting the ones you want, but you're almost certainly
better off figuring out why the intel driver isn't working for you
(which the X log probably also has some hints about).

- ajax

vesa module's name for its exposed video port?

2018-09-06 Thread mail
Greetings,

A grep of Xorg.0.log [1] seems to indicate that only the vesa driver is
loaded and then used by X on my PC.
On the other hand, when the intel driver loads on my laptop with an 
external VGA port, the intel driver uses VGA as the name for the exposed 
port. You can then use Identifier "VGA" [2] to override the EDID, which 
the X probe returns.
What's the name that the vesa module uses for its exposed video
port? (It's the vesa port name that needs to appear in xorg.conf in
order to force a mode.) 

Note.

1.

$ grep oadModule Xorg.0.log
[23.960] (II) LoadModule: "glx"
[24.181] (II) LoadModule: "intel"
[24.195] (II) UnloadModule: "intel"
[24.195] (II) LoadModule: "modesetting"
[24.230] (II) LoadModule: "scfb"
[24.232] (II) LoadModule: "vesa"
[24.718] (II) UnloadModule: "modesetting"
[24.718] (II) LoadModule: "vbe"
[24.751] (II) LoadModule: "int10"
[24.830] (II) LoadModule: "ddc"
[25.515] (II) LoadModule: "shadow"
[25.540] (II) LoadModule: "fb"
[25.562] (II) UnloadModule: "scfb"
[25.562] (II) LoadModule: "int10"
[28.960] (II) LoadModule: "kbd"
[29.671] (II) LoadModule: "mouse"

2. 

$ cat xorg.conf
Section "Monitor"
 Identifier "VGA"
 Modeline "1920x1080" 148.50  1920 2008 2052 2200  1080 1084 1089 1125 +hsync +vsync
EndSection

Thank you,

-- 
Don

There was a young lady named Bright, Whose speed was far faster than light;
She set out one day, In a relative way, And returned on the previous night.


Re: Smooth gdm -> GNOME3 session transition broken with 1.20 modesetting driver

2018-09-06 Thread Hans de Goede

Hi,

On 05-09-18 20:35, Daniel Stone wrote:
> Hi Hans,
>
> On Wed, 5 Sep 2018 at 19:23, Hans de Goede  wrote:
>> Under Fedora 29 (xserver-1.20) the transition from GDM to
>> the GNOME3 session is no longer smooth, it seems that the
>> screen is cleared to black when the Xserver starts instead
>> of inheriting the framebuffer contents from GDM as before.
>>
>> Changing the DDX driver from modesetting to intel fixes this,
>> I think this may be caused by the new atomic support in the
>> modesetting driver.
>
> It's caused by support for modifiers: this allows Mesa to use a
> multi-planar framebuffer (auxiliary compression plane), which the new
> X server can't then read back from because drmModeGetFB only supports
> a single plane.


I do not think that that is the problem in my case. I'm seeing this
when transitioning from a GDM greeter session which is GNOME shell
as Wayland compositor to a GNOME3 on Xorg user session.

And replacing the DDX driver in the *user* session with the intel
ddx fixes this. So I think this is another issue.

Regards,

Hans
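
For background on the readback limitation Daniel mentions: drmModeGetFB
describes a framebuffer with exactly one handle and one pitch, so a
framebuffer that also carries an auxiliary compression plane cannot be
fully recovered through it. Roughly, using the libdrm API (illustrative
only, not xserver code):

#include <stdint.h>
#include <xf86drmMode.h>

/* Illustration: drmModeGetFB reports a single handle/pitch, so a
 * multi-planar framebuffer cannot be fully described (or read back)
 * through this interface. */
static int
probe_fb(int drm_fd, uint32_t fb_id)
{
    drmModeFBPtr fb = drmModeGetFB(drm_fd, fb_id);

    if (!fb)
        return -1;

    /* Only fb->width, fb->height, fb->pitch, fb->bpp, fb->depth and a
     * single fb->handle are available here. */
    drmModeFreeFB(fb);
    return 0;
}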



Re: When starting Evolution /usr/lib/gdm3/gdm-x-session[2309]: (II) modeset information is printed to syslog

2018-09-06 Thread Chris
On Thu, 2018-09-06 at 12:21 +0200, Michel Dänzer wrote:
> On 2018-09-05 9:16 p.m., Adam Jackson wrote:
> > On Sat, 2018-09-01 at 10:24 -0500, Chris wrote:
> > 
> > > When starting Evolution this is output to syslog and periodically
> > > after it's running:
> > > 
> > > https://pastebin.com/zndBukUG
> > 
> > Evolution, or something it provokes, is asking the server for the
> > list
> > of available video modes. It's doing so with
> > XRRGetScreenResources(),
> > apparently, which prompts the X server to go re-check every
> > available
> > output to see if anything has changed. This is silly, it should be
> > using XRRGetScreenResourcesCurrent() and relying on hotplug events
> > to
> > trigger re-polling. Now, maybe the X server shouldn't print the
> > modes
> > in the log when that happens, [...]
> 
> FWIW, it probably shouldn't indeed, at least not at the default log
> verbosity.
> 
AFAICT my rsyslog.conf is the default. I don't know whether uncommenting
these lines would help or not:

### Debugging ###
# $DebugFile /var/log/rsyslog-debug
# $DebugLevel 2

# syslog.* /var/log/syslog.debug;RSYSLOG_DebugFormat
# $DebugFile /var/log/syslog.debug
# $DebugLevel 2


-- 
Chris
KeyID 0xE372A7DA98E6705C
31.11972; -97.90167 (Elev. 1092 ft)
06:49:25 up 5 days, 20:07, 1 user, load average: 0.69, 0.53, 0.49
Description:Ubuntu 18.04.1 LTS, kernel 4.15.0-33-generic



Re: When starting Evolution /usr/lib/gdm3/gdm-x-session[2309]: (II) modeset information is printed to syslog

2018-09-06 Thread Michel Dänzer
On 2018-09-05 9:16 p.m., Adam Jackson wrote:
> On Sat, 2018-09-01 at 10:24 -0500, Chris wrote:
> 
>> When starting Evolution this is output to syslog and periodically after it's 
>> running:
>>
>> https://pastebin.com/zndBukUG
> 
> Evolution, or something it provokes, is asking the server for the list
> of available video modes. It's doing so with XRRGetScreenResources(),
> apparently, which prompts the X server to go re-check every available
> output to see if anything has changed. This is silly, it should be
> using XRRGetScreenResourcesCurrent() and relying on hotplug events to
> trigger re-polling. Now, maybe the X server shouldn't print the modes
> in the log when that happens, [...]

FWIW, it probably shouldn't indeed, at least not at the default log
verbosity.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
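
On the client side, the change Adam suggests is essentially a one-call
difference plus listening for RandR events. A rough Xlib/XRandR sketch
(illustrative only, not Evolution's actual code):

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Illustrative only: query the cached mode list without forcing the
 * server to re-probe every output, and subscribe to change events so a
 * re-query only happens when something actually changed. */
static void
query_modes_cheaply(Display *dpy)
{
    Window root = DefaultRootWindow(dpy);
    XRRScreenResources *res;

    /* Does not trigger a re-probe of all outputs. */
    res = XRRGetScreenResourcesCurrent(dpy, root);
    if (!res)
        return;

    /* Get notified on hotplug / mode changes instead of polling. */
    XRRSelectInput(dpy, root,
                   RRScreenChangeNotifyMask | RROutputChangeNotifyMask);

    /* ... walk res->modes / res->outputs as needed ... */
    XRRFreeScreenResources(res);
}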