Re: Nested procedures into X12
On Wed, 2015-06-17 at 14:05 -0700, Jasper St. Pierre wrote:

> And yet it's still fast enough. Eliminating roundtrips would be nice,
> but we should make sure we're eliminating the roundtrips that matter.

On a 2.3GHz Haswell I get about 100k roundtrips per second (measuring
with x11perf -pointer). 92 round trips is 0.92 milliseconds. That's
twice as long as the vblank interval with CVT-R timing. So unless you're
racing the scanline and/or scheduling slow work for the top of the
frame, you have about a 2% chance that mapping a window means missing a
frame. Maybe mutter gets that right, but I kind of doubt it has that
good of an internal cost model.

A 2.3GHz Haswell is a fast, modern machine. On a 1.7GHz Ivybridge I get
about 50k/sec. On an SGI Indy you could expect maybe 18k/sec; 92 round
trips would be 5ms, a third of a frame. Granted not many people make
150MHz MIPS machines these days, but they _are_ making Atoms.

- ajax
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel
Re: Nested procedures into X12
On Wed, 2015-06-17 at 16:52 -0700, Keith Packard wrote:

> Having the code shared is nice, but the big cost of GL these days is
> all of the rendering and shader compiler state. Streaming megabytes of
> data for every application. Memory is cheap these days; maybe we don't
> care anymore.

There's a reason I said vulkan, but sure. For that matter we could go
reinvent D11.

> If all we're talking about is ridiculous rendering, then it's actually
> easier than you fear -- the cost is in computing the operation mask,
> not the blt operation that follows from that. And the operation mask is
> independent of the source or destination operands. Imagine doing:
>
>     ProcPolyArc()
>     {
>         if (!arc_started) {
>             start_drawing_arcs_in_another_thread(stuff);
>             ClientSleep();
>         }
>         if (arc_finished) {
>             mask = mask_from_async_drawing(client);
>             PushPixels(src, dst, mask);
>         }
>     }
>
> You'd need to verify that the GC hadn't been changed between the two
> steps, but otherwise I think this would work fine.

Not just that the GC hasn't changed, that it's still valid for the
drawable; window resize will invalidate your composite clip. And you
need to redo all the object lookup since the drawable may have been
destroyed and another created with the same XID, or the security policy
may have changed. And if Xinerama is involved you'd need to build the
mask in the dispatch layer since otherwise you'd be breaking atomicity
across screens.

Which, if someone really wants to implement all that, I certainly won't
stop them.

- ajax
Re: xvfb: add randr support (v2)
On 06/08/2015 03:14 PM, Siim Põder wrote:
> Hi
>
> This was sent to xorg-devel a few years ago. It still applies and still
> appears to work. I'm resending this because it affects me. Comments or
> application to the tree would be greatly appreciated :)
>
> The motivation for getting this is chrome remote desktop, which runs
> under Xvfb and wants to use RANDR to adjust screen size according to
> the remote desktop client screen size. Apparently there are other use
> cases as well; the bug mentions gnome-settings-daemon testing.

Not that this patch hurts or anything, but is there any particular
reason this remote desktop thing is using Xvfb rather than Xorg +
xf86-video-dummy? I thought there was an effort to kill off the
redundant non-Xorg DDXes at some point.

I posted a patch series a while back to upgrade the dummy driver to be
able to support arbitrary resizing, including resizing to larger than
you started with:
http://lists.x.org/archives/xorg-devel/2015-January/045395.html

> Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=26391
> Signed-off-by: Lambros Lambrou lambroslamb...@google.com
> Signed-off-by: Mike Frysinger vap...@gentoo.org
> Signed-off-by: Michal Srb m...@suse.com
> Signed-off-by: Siim Põder s...@p6drad-teel.net
> ---
> Second version, modified according to Keith's suggestion. Tested by
> adding a second mode and switching - worked correctly.
>
> diff --git a/hw/vfb/InitOutput.c b/hw/vfb/InitOutput.c
> index 97eccfd..bfca068 100644
> --- a/hw/vfb/InitOutput.c
> +++ b/hw/vfb/InitOutput.c
> @@ -66,6 +66,7 @@ from The Open Group.
>  #include "dix.h"
>  #include "miline.h"
>  #include "glx_extinit.h"
> +#include "randrstr.h"
>
>  #define VFB_DEFAULT_WIDTH  1280
>  #define VFB_DEFAULT_HEIGHT 1024
> @@ -785,6 +786,125 @@ vfbCloseScreen(ScreenPtr pScreen)
>  }
>
>  static Bool
> +vfbRROutputValidateMode(ScreenPtr pScreen,
> +                        RROutputPtr output,
> +                        RRModePtr mode)
> +{
> +    rrScrPriv(pScreen);
> +
> +    if (pScrPriv->minWidth <= mode->mode.width &&
> +        pScrPriv->maxWidth >= mode->mode.width &&
> +        pScrPriv->minHeight <= mode->mode.height &&
> +        pScrPriv->maxHeight >= mode->mode.height)
> +        return TRUE;
> +    else
> +        return FALSE;
> +}
> +
> +static Bool
> +vfbRRScreenSetSize(ScreenPtr pScreen,
> +                   CARD16 width,
> +                   CARD16 height,
> +                   CARD32 mmWidth,
> +                   CARD32 mmHeight)
> +{
> +    // Prevent screen updates while we change things around
> +    SetRootClip(pScreen, FALSE);
> +
> +    pScreen->width = width;
> +    pScreen->height = height;
> +    pScreen->mmWidth = mmWidth;
> +    pScreen->mmHeight = mmHeight;
> +
> +    // Restore the ability to update screen, now with new dimensions
> +    SetRootClip(pScreen, TRUE);
> +
> +    RRScreenSizeNotify (pScreen);
> +    RRTellChanged(pScreen);
> +
> +    return TRUE;
> +}
> +
> +static Bool
> +vfbRRCrtcSet(ScreenPtr pScreen,
> +             RRCrtcPtr crtc,
> +             RRModePtr mode,
> +             int x,
> +             int y,
> +             Rotation rotation,
> +             int numOutput,
> +             RROutputPtr *outputs)
> +{
> +    return RRCrtcNotify(crtc, mode, x, y, rotation, NULL, numOutput, outputs);
> +}
> +
> +static Bool
> +vfbRRGetInfo(ScreenPtr pScreen, Rotation *rotations)
> +{
> +    return TRUE;
> +}
> +
> +static Bool
> +vfbRandRInit(ScreenPtr pScreen)
> +{
> +    rrScrPrivPtr pScrPriv;
> +#if RANDR_12_INTERFACE
> +    RRModePtr mode;
> +    RRCrtcPtr crtc;
> +    RROutputPtr output;
> +    xRRModeInfo modeInfo;
> +    char name[64];
> +#endif
> +
> +    if (!RRScreenInit (pScreen))
> +        return FALSE;
> +    pScrPriv = rrGetScrPriv(pScreen);
> +    pScrPriv->rrGetInfo = vfbRRGetInfo;
> +#if RANDR_12_INTERFACE
> +    pScrPriv->rrCrtcSet = vfbRRCrtcSet;
> +    pScrPriv->rrScreenSetSize = vfbRRScreenSetSize;
> +    pScrPriv->rrOutputSetProperty = NULL;
> +#if RANDR_13_INTERFACE
> +    pScrPriv->rrOutputGetProperty = NULL;
> +#endif
> +    pScrPriv->rrOutputValidateMode = vfbRROutputValidateMode;
> +    pScrPriv->rrModeDestroy = NULL;
> +
> +    RRScreenSetSizeRange (pScreen,
> +                          1, 1,
> +                          pScreen->width, pScreen->height);
> +
> +    sprintf (name, "%dx%d", pScreen->width, pScreen->height);
> +    memset (&modeInfo, '\0', sizeof (modeInfo));
> +    modeInfo.width = pScreen->width;
> +    modeInfo.height = pScreen->height;
> +    modeInfo.nameLength = strlen (name);
> +
> +    mode = RRModeGet (&modeInfo, name);
> +    if (!mode)
> +        return FALSE;
> +
> +    crtc = RRCrtcCreate (pScreen, NULL);
> +    if (!crtc)
> +        return FALSE;
> +
> +    output = RROutputCreate (pScreen, "screen", 6, NULL);
> +    if (!output)
> +        return FALSE;
> +    if (!RROutputSetClones (output, NULL, 0))
> +        return FALSE;
> +    if (!RROutputSetModes (output, mode, 1, 0))
> +        return FALSE;
> +    if (!RROutputSetCrtcs (output, crtc, 1))
> +        return FALSE;
> +    if (!RROutputSetConnection (output,
Re: [PATCH 1/2] glamor: add support for allocating linear buffers
On 15 June 2015 at 05:08, Michel Dänzer mic...@daenzer.net wrote:
> On 12.06.2015 08:48, Dave Airlie wrote:
>> We need this for doing USB offload scenarios using glamor and the
>> modesetting driver. Unfortunately only gbm in mesa 10.6 has support
>> for the linear API.
>>
>> Signed-off-by: Dave Airlie airl...@redhat.com
>> ---
>>  configure.ac                  |  5 +
>>  glamor/glamor.h               |  3 ++-
>>  glamor/glamor_egl.c           |  5 -
>>  glamor/glamor_egl_stubs.c     |  2 +-
>>  glamor/glamor_fbo.c           | 10 +-
>>  hw/xwayland/xwayland-glamor.c |  2 +-
>>  include/dix-config.h.in       |  3 +++
>>  7 files changed, 21 insertions(+), 9 deletions(-)
>>
>> diff --git a/configure.ac b/configure.ac
>> index f760730..d0908e5 100644
>> --- a/configure.ac
>> +++ b/configure.ac
>> @@ -2105,6 +2105,11 @@ if test "x$GLAMOR" = xyes; then
>>      if test "x$GBM" = xyes; then
>>          AC_DEFINE(GLAMOR_HAS_GBM, 1,
>>                    [Build glamor with GBM-based EGL support])
>> +        PKG_CHECK_MODULES(GBM_HAS_LINEAR, gbm >= 10.6.0, [GBM_HAS_LINEAR=yes], [GBM_HAS_LINEAR=no])
>> +        if test "x$GBM_HAS_LINEAR" = xyes; then
>> +            AC_DEFINE(GLAMOR_HAS_GBM_LINEAR, 1,
>> +                      [Build glamor/gbm has linear support])
>> +        fi
>
> It would be better to use AC_CHECK_DECL for this, as is done in
> xf86-video-amdgpu. That way it does the right thing even if somebody
> backports GBM_BO_USE_LINEAR to an older version of Mesa.

Fwiw I second this suggestion. As we know GBM is an API and others may
provide alternative implementations of it (for example CrOS has
minigbm). Imho using the mesa version in (m)any of the pc files is a
bug, which should be addressed, rather than relied upon.

Thanks
Emil
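For reference, the AC_CHECK_DECL approach Michel suggests would look roughly like this; a sketch, not the exact xf86-video-amdgpu text, keying off the declaration itself rather than the Mesa version number:

```
dnl Detect GBM_BO_USE_LINEAR by declaration, not by package version,
dnl so backports and alternative GBM implementations are handled.
AC_CHECK_DECL(GBM_BO_USE_LINEAR,
              [AC_DEFINE(GLAMOR_HAS_GBM_LINEAR, 1,
                         [Have GBM_BO_USE_LINEAR])],
              [],
              [#include <gbm.h>])
```

This keeps the configure result correct even when the gbm.pc version string says nothing about the flag's availability.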
Re: xvfb: add randr support (v2)
Hi

Quoting Aaron Plattner (2015-06-18 15:28:18)
> On 06/08/2015 03:14 PM, Siim Põder wrote:
>
> Not that this patch hurts or anything, but is there any particular
> reason this remote desktop thing is using Xvfb rather than Xorg +
> xf86-video-dummy? I thought there was an effort to kill off the
> redundant non-Xorg DDXes at some point.

Xorg + video-dummy appears to work just fine for chrome remote desktop
when convinced to. I tried it using the xpra.org Xdummy xorg.conf and it
was mostly OK out of the box (with some minor quirks). I guess I should
make sure there is a bug filed against chrome to start using Xorg +
video-dummy where possible.

> I posted a patch series a while back to upgrade the dummy driver to be
> able to support arbitrary resizing, including resizing to larger than
> you started with:
> http://lists.x.org/archives/xorg-devel/2015-January/045395.html

Very nice, I will try that out with chrome-remote-desktop.

> > Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=26391
> > Signed-off-by: Lambros Lambrou lambroslamb...@google.com
> > Signed-off-by: Mike Frysinger vap...@gentoo.org
> > Signed-off-by: Michal Srb m...@suse.com
> > Signed-off-by: Siim Põder s...@p6drad-teel.net
> > ---
> > Second version, modified according to Keith's suggestion. Tested by
> > adding a second mode and switching - worked correctly.
> > [...]
Re: [PATCH 1/2] glamor: add support for allocating linear buffers
On 19 June 2015 at 02:02, Emil Velikov emil.l.veli...@gmail.com wrote:
> On 15 June 2015 at 05:08, Michel Dänzer mic...@daenzer.net wrote:
>> On 12.06.2015 08:48, Dave Airlie wrote:
>>> We need this for doing USB offload scenarios using glamor and the
>>> modesetting driver. Unfortunately only gbm in mesa 10.6 has support
>>> for the linear API.
>>>
>>> Signed-off-by: Dave Airlie airl...@redhat.com
>>> [...]
>>
>> It would be better to use AC_CHECK_DECL for this, as is done in
>> xf86-video-amdgpu. That way it does the right thing even if somebody
>> backports GBM_BO_USE_LINEAR to an older version of Mesa.
>
> Fwiw I second this suggestion. As we know GBM is an API and others may
> provide alternative implementations of it (for example CrOS has
> minigbm). Imho using the mesa version in (m)any of the pc files is a
> bug, which should be addressed, rather than relied upon.

I'll change it because I prefer the suggestion, but GBM is both an API
and an ABI, so really the pc.in represents the API version. So any
compatible API should use the mesa version in its gbm.pc file until such
time as someone splits the gbm API out into a separate project, at which
point you'd want to continue the versioning from the mesa point to avoid
epochs.

So I don't take your argument: the API version is what we ship in the
gbm.pc file, and compatible implementations should make the same API
changes in their same versions.

Now the problem I don't solve here is how to know whether the GBM ABI
supports passing the linear flag in, since we don't bump the library
version often, and I've no idea if there is any nice way to make it
discoverable.

Dave.
Re: Nested procedures into X12
Adam Jackson a...@nwnk.net writes:

> Not just that the GC hasn't changed, that it's still valid for the
> drawable; window resize will invalidate your composite clip.

Nah, the expense is computing the mask values from the geometry; that's
completely independent of both source and dest, and depends only on the
dashing and line style bits. It'd be even easier for Render, where
trapezoids don't depend on the picture at all.

> Which, if someone really wants to implement all that, I certainly won't
> stop them.

Only entertaining as an intellectual exercise. Machines are 'fast
enough' these days that finding a useful drawing request that takes
enough time to notice is pretty darn hard.

--
-keith
[xrandr 1/2] Mark disabling an output as a change in its CRTC
When an output is disabled via the cmdline, we can use that information
to prevent assigning the current CRTC to the output and free it up for
reuse by other outputs in the first pass of picking CRTC.

Reported-and-tested-by: Nathan Schulte nmschu...@gmail.com
Signed-off-by: Chris Wilson ch...@chris-wilson.co.uk
---
 xrandr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xrandr.c b/xrandr.c
index fbfd93e..c0feac3 100644
--- a/xrandr.c
+++ b/xrandr.c
@@ -3029,7 +3029,7 @@ main (int argc, char **argv)
             if (!config_output)
                 argerr ("%s must be used after --output\n", argv[i]);
             set_name_xid (&config_output->mode, None);
             set_name_xid (&config_output->crtc, None);
-            config_output->changes |= changes_mode;
+            config_output->changes |= changes_mode | changes_crtc;
             continue;
         }
         if (!strcmp ("--fb", argv[i])) {
--
2.1.4
[xrandr 2/2] Mark all CRTC as currently unused for second picking CRTC pass
We perform two passes over the CRTCs in order to find the preferred CRTC
for each enabled output. In the first pass, we try to preserve the
existing output -> CRTC relationships (to avoid unnecessary flicker). If
that pass fails, we try again but with all outputs first disabled.
However, the logic to preserve an active CRTC was not disabled along
with the outputs - meaning that if one was active but its associated
output was disabled by the user, then that CRTC would remain unavailable
for other outputs. The result would be that we would try to assign more
CRTCs than available (i.e. if the user requests 3 new HDMI outputs on a
system with only 3 CRTCs, and wishes to switch off an active internal
panel, we would report "cannot find CRTC" even though that configuration
could be established).

Reported-and-tested-by: Nathan Schulte nmschu...@gmail.com
Signed-off-by: Chris Wilson ch...@chris-wilson.co.uk
---
 xrandr.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xrandr.c b/xrandr.c
index c0feac3..181c76e 100644
--- a/xrandr.c
+++ b/xrandr.c
@@ -2243,6 +2243,8 @@ static void
 pick_crtcs (void)
 {
     output_t *output;
+    int saved_crtc_noutput[num_crtcs];
+    int n;

     /*
      * First try to match up newly enabled outputs with spare crtcs
@@ -2274,7 +2276,18 @@ pick_crtcs (void)
      */
     for (output = all_outputs; output; output = output->next)
         output->current_crtc_info = output->crtc_info;
+
+    /* Mark all CRTC as currently unused */
+    for (n = 0; n < num_crtcs; n++) {
+        saved_crtc_noutput[n] = crtcs[n].crtc_info->noutput;
+        crtcs[n].crtc_info->noutput = 0;
+    }
+
     pick_crtcs_score (all_outputs);
+
+    for (n = 0; n < num_crtcs; n++)
+        crtcs[n].crtc_info->noutput = saved_crtc_noutput[n];
+
     for (output = all_outputs; output; output = output->next)
     {
         if (output->mode_info && !output->crtc_info)
--
2.1.4
___
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel