Re: enabling kms for i915 disables brightness control and xrandr
On Mon, 2009-03-30 at 14:34 -0700, Jesse Barnes wrote:
> On Mon, 30 Mar 2009 15:37:19 +0200 Soeren Sonnenburg <so...@debian.org> wrote:
> > On Sun, 2009-03-29 at 15:22 +0200, Soeren Sonnenburg wrote:
> > > On Sun, 2009-03-29 at 14:07 +0100, Sitsofe Wheeler wrote:
> > > > (CC'ing dri-devel, Eric Anholt and Jesse Barnes)
> > > >
> > > > On Sun, Mar 29, 2009 at 12:34:01PM +0200, Soeren Sonnenburg wrote:
> > > > > Dear all,
> > > > >
> > > > > I am not sure if this is just a user error / too-old-userspace
> > > > > problem,
> > > >
> > > > [User stated that the 2.6.3 intel driver is being used in another
> > > > email]
> >
> > Just to not cause any (further?) confusion: I was using 2.6.1 when I
> > wrote that email, and after compiling/upgrading to 2.6.3 a number of
> > problems vanished (Xv works, the screen resolution is correct, things
> > are *a lot* faster with UXA). However, brightness still only works via
> > setpci, and xrandr only shows the 1024x600 modeline.
> >
> > > > > but I recognized that when I enable kernel-based modesetting on
> > > > > an Intel 945 based Samsung NC10 netbook, I lose brightness
> > > > > control from within X, the X resolution is wrong (1024x1024
> > > > > instead of 1024x600), and xrandr no longer has all the modelines
> > > > > ranging from 1024x600 down to 640x350... Trying to change
> > > > > resolutions / set the brightness via xrandr results in error
> > > > > messages being printed. Furthermore, this same flag disables Xv
> > > > > support. However, screen switches between terminal and X are
> > > > > quite fast now (without any flicker), and suspend and everything
> > > > > else works stably.
> > > > >
> > > > > I recognized that I can set the brightness via
> > > > > setpci -s 00:02.1 F4.B=XX (XX ranging from 00 to FF) just fine...
> > > > >
> > > > > One more thing: connecting an external display, I could turn it
> > > > > on via xrandr --auto successfully. However, it was not possible
> > > > > to turn the internal display off (xrandr --output LVDS1 --off),
> > > > > and switching to the console with the external display connected
> > > > > I managed to freeze the machine. Turning the external display
> > > > > off worked somehow (the display then showed vertical stripes),
> > > > > and I could then switch to the console.
>
> Have you filed bugs for these problems?
>
> I think the brightness control issue is due to some missing bits on the
> 2D side (we don't enable backlight property control when KMS is active,
> but should); the other issues sound like bugs.

I did nothing besides asking here on lkml. Where should I report what?

Soeren
--
For the one fact about the future of which we can be certain is that it
will be utterly fantastic. -- Arthur C. Clarke, 1962

--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel
[Bug 20965] New: Dynamic lighting is broken in Mesa-7.4 release
http://bugs.freedesktop.org/show_bug.cgi?id=20965

           Summary: Dynamic lighting is broken in Mesa-7.4 release
           Product: Mesa
           Version: unspecified
          Platform: x86 (IA32)
        OS/Version: Linux (All)
            Status: NEW
          Severity: normal
          Priority: medium
         Component: Drivers/DRI/r200
        AssignedTo: dri-devel@lists.sourceforge.net
        ReportedBy: smoki00...@gmail.com

Created an attachment (id=24388)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24388)
Doom3-lights_regression

It seems the dynamic lighting effect is broken in Mesa-7.4 for r200 (at
least, I only tried on a 9250 card); it was working for many years
(Mesa <= 7.3). There are screenshots from Doom 3 and RTCW, but it can also
easily be noticed in every Quake 3 based game with that effect turned on.

--
Configure bugmail: http://bugs.freedesktop.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug.
[Bug 20965] Dynamic lighting is broken in Mesa-7.4 release
http://bugs.freedesktop.org/show_bug.cgi?id=20965

--- Comment #1 from smoki <smoki00...@gmail.com> 2009-03-31 00:00:57 PST ---
Created an attachment (id=24389)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24389)
RTCW-lights_regression
[Bug 20966] Weird fonts with mesa-7.4 on low resolutions
http://bugs.freedesktop.org/show_bug.cgi?id=20966

--- Comment #1 from smoki <smoki00...@gmail.com> 2009-03-31 00:22:06 PST ---
Created an attachment (id=24391)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24391)
r...@640x480-mesa-7.4-bad
[Bug 20966] New: Weird fonts with mesa-7.4 on low resolutions
http://bugs.freedesktop.org/show_bug.cgi?id=20966

           Summary: Weird fonts with mesa-7.4 on low resolutions
           Product: Mesa
           Version: unspecified
          Platform: x86 (IA32)
        OS/Version: Linux (All)
            Status: NEW
          Severity: normal
          Priority: medium
         Component: Drivers/DRI/r200
        AssignedTo: dri-devel@lists.sourceforge.net
        ReportedBy: smoki00...@gmail.com

Created an attachment (id=24390)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24390)
r...@640x480-mesa-7.3-good

Fonts regression on r200; they are good in Mesa <= 7.3. This can be seen
in Quake 3 based games and also in Doom 3. I only notice this at low
resolutions like 640x480 and 800x600, but not at 1024x768, which is my
desktop resolution. Again, with Mesa <= 7.3 the fonts are proper at every
resolution.
[Bug 20966] Weird fonts with mesa-7.4 on low resolutions
http://bugs.freedesktop.org/show_bug.cgi?id=20966

--- Comment #2 from smoki <smoki00...@gmail.com> 2009-03-31 00:23:33 PST ---
Created an attachment (id=24392)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24392)
r...@640x480-mesa-7.4-worst
[Bug 20954] mesa/drm(git): kernel panic with radeon driver (Radeon 9500 Pro )
http://bugs.freedesktop.org/show_bug.cgi?id=20954

Michel Dänzer <mic...@daenzer.net> changed:

           What      |Removed            |Added
   --------------------------------------------------------
   Component         |Drivers/DRI/Radeon |DRM/Radeon
   Product           |Mesa               |DRI
   Version           |CVS                |DRI CVS
Re: DRI2 + buffer creation
On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> So I've been playing a bit more with DRI2 and I'm having trouble finding
> out how buffer creation was meant to work for depth buffers.
>
> If my app uses a visual
>
>   0xbc 24 tc 0 24 0 r . . 8 8 8 0 0 16 0 0 0 0 0 0 0 None
>
> which is 24-bit color + 16-bit depth, I don't have enough information in
> the DDX to create a depth buffer with a cpp of 2: the DDX can only see
> the drawable information; it knows nothing about the visual. It goes and
> creates a set of 4 buffers to give back to Mesa, which then takes the
> cpp of the depth buffer as 4 when clearly it would need to be 2, and bad
> things happen.
>
> So should I just be creating the depth buffer in Mesa? Does the DDX need
> to know about it at all, really?

Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
know about it.

Alan.
[Bug 20673] crash in radeon driver
http://bugs.freedesktop.org/show_bug.cgi?id=20673

--- Comment #2 from up.whate...@gmail.com 2009-03-31 02:37:19 PST ---
I got the same problem on my Radeon 9700, using Compiz and EXA
acceleration. It first happened when I was browsing lots of photos in
Nautilus' thumbnail view, but today I found a really fast way to reproduce
it:

1. Get MPEG4 Modifier from http://moitah.net/ and run it with Mono.
2. Drag the lower right corner of the file-open dialog around to resize it
   (which causes a lot of redrawing).
3. After some seconds X will crash.

Backtrace:
0: /usr/X11R6/bin/X(xorg_backtrace+0x3b) [0x813516b]
1: /usr/X11R6/bin/X(xf86SigHandler+0x55) [0x80c7be5]
2: [0xb7f8c400]
3: /usr/lib/dri/r300_dri.so(_mesa_update_texture+0x2b1) [0x9f4aae21]
4: /usr/lib/dri/r300_dri.so(_mesa_update_state_locked+0x756) [0x9f494fb6]
5: /usr/lib/dri/r300_dri.so(_mesa_update_state+0x2a) [0x9f4951aa]
6: /usr/lib/dri/r300_dri.so(_mesa_GetIntegerv+0x278) [0x9f567148]
7: /usr/lib/xorg/modules/extensions//libglx.so [0xb7912132]
8: /usr/lib/xorg/modules/extensions//libglx.so [0xb79042e8]
9: /usr/lib/xorg/modules/extensions//libglx.so [0xb79031a7]
10: /usr/lib/xorg/modules/extensions//libglx.so [0xb7907d6a]
11: /usr/X11R6/bin/X(Dispatch+0x33f) [0x808d57f]
12: /usr/X11R6/bin/X(main+0x3bd) [0x80722ed]
13: /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7b4f775]
14: /usr/X11R6/bin/X [0x80717a1]

Saw signal 11. Server aborting.
[Bug 12778] suspend regression from 29rc5 to 29rc6
http://bugzilla.kernel.org/show_bug.cgi?id=12778

Martin Pitt <martin.p...@ubuntu.com> changed:

           What    |Removed |Added
   ------------------------------------------------
   CC              |        |martin.p...@ubuntu.com

--- Comment #9 from Martin Pitt <martin.p...@ubuntu.com> 2009-03-31 09:47:03 ---
I have also been suffering from this for a while now. I reported the
problem against xserver-xorg-video-intel back then (see
https://bugs.freedesktop.org/show_bug.cgi?id=20520 for all the logs), and
just got pointed to this bug. When the freeze happens (a few minutes after
resuming), I also get these kernel messages:

Mar 29 23:32:54 tick kernel: [14858.069290] [drm:i915_get_vblank_counter] *ERROR* trying to get vblank count for disabled pipe 1
Mar 29 23:32:54 tick kernel: [14858.074255] mtrr: no MTRR for d000,1000 found

I see nothing else in Xorg.log etc.; just the intel register dump shows
that the chip crashed (MI_MODE: 0x).

--
Configure bugmail: http://bugzilla.kernel.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are watching the assignee of the bug.
Re: DRI2 + buffer creation
On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane <al...@fairlite.co.uk> wrote:
> On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> > So I've been playing a bit more with DRI2 and I'm having trouble
> > finding out how buffer creation was meant to work for depth buffers.
> > [...]
> > So should I just be creating the depth buffer in Mesa? Does the DDX
> > need to know about it at all, really?
>
> Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
> know about it.

Creating the depth buffer and the other aux buffers through the X server
is more complicated, but it was done that way for a reason. Two different
clients can render to the same GLX drawable, and in that case they need to
share the aux buffers for the drawable. I considered letting the DRI
driver create the buffers, but in that case it needs to tell the X server
about them, and then you get extra roundtrips and races between DRI
clients to create the buffers. So creating them in the X server is the
simplest solution, given what we have to support.

As it is, the hw-specific part of X creates the buffers from the tokens
passed in by the DRI driver and can implement whichever convention the DRI
driver expects. For example, for intel, if both depth and stencil are
requested, the DDX driver knows to allocate only one BO for the two
buffers, and the DRI driver expects this. Likewise, if radeon expects a
16-bit depth buffer when there is no stencil, that's the behaviour the DDX
should implement.

If a situation arises where the combination of buffers required for a
visual doesn't uniquely imply which buffer sizes are expected (say, an
fbconfig without stencil could use either a 16-bit or a 32-bit depth
buffer), we need to introduce new DRI2 buffer tokens along the lines of
Depth16 and Depth32 so the DRI driver can communicate which one it wants.

cheers,
Kristian
[Bug 12166] [mi] EQ overflowing. The server is probably stuck in an infinite loop.
http://bugzilla.kernel.org/show_bug.cgi?id=12166

Daniel Klaffenbach <k...@abwesend.de> changed:

           What    |Removed |Added
   ------------------------------------------
   CC              |        |k...@abwesend.de

--- Comment #25 from Daniel Klaffenbach <k...@abwesend.de> 2009-03-31 13:06:31 ---
I have the same problem with a Radeon Xpress 200M on 2.6.29 and DRI. I am
not using any framebuffer:

(II) RADEON(0): Output: S-video, Detected Monitor Type: 0
[mi] EQ overflowing. The server is probably stuck in an infinite loop.
Backtrace:
0: /usr/bin/X(xorg_backtrace+0x37) [0x81353d7]
[mi] mieqEnequeue: out-of-order valuator event; dropping.
[mi] EQ overflowing. The server is probably stuck in an infinite loop.
[mi] mieqEnequeue: out-of-order valuator event; dropping.
[mi] EQ overflowing. The server is probably stuck in an infinite loop.
Re: DRI2 + buffer creation
2009/3/31 Kristian Høgsberg <k...@bitplanet.net>:
> On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane <al...@fairlite.co.uk> wrote:
> > [...]
> > Yep, go create the depth buffer in Mesa. The DDX doesn't really need
> > to know about it.
>
> Creating the depth buffer and the other aux buffers through the X server
> is more complicated, but it was done that way for a reason. [...]
> If a situation arises where the combination of buffers required for a
> visual doesn't uniquely imply which buffer sizes are expected (say, an
> fbconfig without stencil could use either a 16-bit or a 32-bit depth
> buffer), we need to introduce new DRI2 buffer tokens along the lines of
> Depth16 and Depth32 so the DRI driver can communicate which one it
> wants.

But you don't give the DDX enough information to make this decision: I get
the drawable, an attachments list and a count; I don't get the visual or
fbconfig.

Now, radeon isn't special; Intel has the same bug. Most GPUs have 3 cases
they can deal with: z24s8, z24, z16. However, if I pick a z16 visual there
is no way the DDX can differentiate this from a z24; all it gets is a
drawable. It then uses CreatePixmap, which creates a pixmap that is 2x too
large, wasting VRAM, and with the wrong bpp component.

What I've done now is intercept it on the Mesa side and hack the bpp down
to 2, but this still wastes memory and ignores the actual problem: the DDX
doesn't have the info to make the correct decision.

Dave.
Re: DRI2 + buffer creation
2009/3/31 Dave Airlie <airl...@gmail.com>:
> 2009/3/31 Kristian Høgsberg <k...@bitplanet.net>:
> > [...]
> > If a situation arises where the combination of buffers required for a
> > visual doesn't uniquely imply which buffer sizes are expected (say, an
> > fbconfig without stencil could use either a 16-bit or a 32-bit depth
> > buffer), we need to introduce new DRI2 buffer tokens along the lines
> > of Depth16 and Depth32 so the DRI driver can communicate which one it
> > wants.
>
> But you don't give the DDX enough information to make this decision: I
> get the drawable, an attachments list and a count; I don't get the
> visual or fbconfig.
>
> Now, radeon isn't special; Intel has the same bug. Most GPUs have 3
> cases they can deal with: z24s8, z24, z16. However, if I pick a z16
> visual there is no way the DDX can differentiate this from a z24; all it
> gets is a drawable. It then uses CreatePixmap, which creates a pixmap
> that is 2x too large, wasting VRAM, and with the wrong bpp component.
>
> What I've done now is intercept it on the Mesa side and hack the bpp
> down to 2, but this still wastes memory and ignores the actual problem:
> the DDX doesn't have the info to make the correct decision.

That's the case I was describing in the last sentence. When the DDX gets
the set of buffers to allocate, it doesn't know whether to allocate a 16
or 24 bit depth buffer. What I'm suggesting is that we add a new buffer
token to the DRI2 protocol, DRI2BufferDepth16, which the DRI driver can
use to indicate that it wants a 16-bit depth buffer even if the drawable
is 24 bpp. It requires a dri2proto bump, and the loader needs to tell the
DRI driver that the DRI2BufferDepth16 token is available.

cheers,
Kristian
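[Editor's note] A sketch of how the proposed token could resolve the cpp ambiguity on the DDX side. DRI2BufferDepth16 is only a proposal in this thread; the enum values and the helper below are hypothetical, not the real dri2proto:

```c
/* Hypothetical DRI2 attachment tokens: the existing generic depth
 * attachment plus the Depth16 variant proposed in this thread. */
enum dri2_attachment {
    DRI2_BUFFER_FRONT_LEFT,
    DRI2_BUFFER_BACK_LEFT,
    DRI2_BUFFER_DEPTH,    /* size must be guessed from the drawable */
    DRI2_BUFFER_STENCIL,
    DRI2_BUFFER_DEPTH16,  /* proposed: explicitly a 16-bit depth buffer */
};

/* Bytes per pixel the DDX would allocate for an attachment of a given
 * drawable.  Without the new token the DDX can only fall back on the
 * drawable's cpp, which is exactly the z16-vs-z24 ambiguity (and the
 * 2x VRAM waste) described above. */
static int attachment_cpp(enum dri2_attachment att, int drawable_cpp)
{
    switch (att) {
    case DRI2_BUFFER_DEPTH16:
        return 2;             /* explicit: z16 regardless of drawable */
    default:
        return drawable_cpp;  /* no better information available */
    }
}
```

With this, a z16 visual on a 24 bpp drawable gets a 2-byte-per-pixel buffer in one round trip, without the DDX ever seeing the fbconfig.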
[Bug 20954] mesa/drm(git): kernel panic with radeon driver (Radeon 9500 Pro )
http://bugs.freedesktop.org/show_bug.cgi?id=20954

--- Comment #1 from Robert Noland <rnol...@2hip.net> 2009-03-31 11:56:16 PST ---
I just moved the vblank_cleanup after lastclose; give that a try and see
if it is resolved.
Re: [PATCH 6/6] drm/i915: Fix lock order reversal in GEM relocation entry copying. -- makes X hang
On Mon, 2009-03-30 at 12:00 +0200, Florian Mickler wrote:
> Hi!
>
> On Wed, 25 Mar 2009 14:45:10 -0700 Eric Anholt <e...@anholt.net> wrote:
> > Signed-off-by: Eric Anholt <e...@anholt.net>
> > Reviewed-by: Keith Packard <kei...@keithp.com>
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 187 +++---
> >  1 files changed, 133 insertions(+), 54 deletions(-)
>
> I tested Linus' git tree @ 5d80f8e5a (merge net-2.6) and discovered that
> X hung after starting up gdm. When I start gdm, the screen is frozen and
> X hangs. I was able to bisect it down to
>
>   40a5f0decdf050785ebd62b36ad48c869ee4b384
>   drm/i915: Fix lock order reversal in GEM relocation entry copying.
>
> When hung, /proc/[xpid]/stack contained:
>
> [8024c13e] msleep_interruptible+0x2e/0x40
> [80557fdd] i915_wait_ring+0x17d/0x1d0
> [8056280d] i915_gem_execbuffer+0xd2d/0xf70
> [80546555] drm_ioctl+0x1f5/0x320
> [802d8ce5] vfs_ioctl+0x85/0xa0
> [802d8f0b] do_vfs_ioctl+0x20b/0x510
> [802d9297] sys_ioctl+0x87/0xa0
> [8020ba8b] system_call_fastpath+0x16/0x1b
> [] 0x
>
> I reverted that commit on top of 5d80f8e5a, and all's well. My X stack
> is from git at around the 20th of March. I will update it now, but as it
> hangs in a kernel syscall, it shouldn't matter?
>
> Sincerely,
> Florian
>
> p.s. no kms

Ouch. Could you file a bug report using
http://intellinuxgraphics.org/how_to_report_bug.html so I've got the
information I need to try to reproduce this?

--
Eric Anholt
e...@anholt.net   eric.anh...@intel.com
Re: [PATCH] libdrm: speed up connector mode fetching
On Fri, 2009-03-27 at 15:48 -0700, Jesse Barnes wrote:
> On Fri, 27 Mar 2009 22:53:00 +0100 Jakob Bornecrantz <wallbra...@gmail.com> wrote:
> > On Fri, Mar 27, 2009 at 8:58 PM, Jesse Barnes <jbar...@virtuousgeek.org> wrote:
> > > This patch speeds up drmModeGetConnector by pre-allocating mode and
> > > property info space before calling into the kernel. In many cases
> > > this pre-allocation will be sufficient to hold the returned values
> > > (it's easy enough to tweak if the common case becomes larger), which
> > > means we don't have to make the second call, which saves a lot of
> > > time.
> > >
> > > Any comments or problems with the patch?
> >
> > Looks good. I do wonder how much time we save by doing the
> > pre-allocation, just curious? Anyways, the patch is
> >
> > Acked-by: Jakob Bornecrantz <wallbra...@gmail.com>
> >
> > Cheers Jakob.
>
> Some of my testing showed it took about 0.3s for a drmModeGetConnector
> call, and my testing showed this decreased a lot with the
> pre-allocation (conveniently lost the test results though).
> Theoretically it should be about half the cost with pre-allocation,
> which adds up if you multiply by the number of outputs.

Seems like the ioctl needs a flag for "did the user want me to go reprobe
things?" like we did in the X server.

--
Eric Anholt
e...@anholt.net   eric.anh...@intel.com
Re: [PATCH] libdrm: speed up connector mode fetching
On Tue, 31 Mar 2009 12:39:17 -0700 Eric Anholt <e...@anholt.net> wrote:
> On Fri, 2009-03-27 at 15:48 -0700, Jesse Barnes wrote:
> > [...]
> > Some of my testing showed it took about 0.3s for a drmModeGetConnector
> > call, and my testing showed this decreased a lot with the
> > pre-allocation (conveniently lost the test results though).
> > Theoretically it should be about half the cost with pre-allocation,
> > which adds up if you multiply by the number of outputs.
>
> Seems like the ioctl needs a flag for "did the user want me to go
> reprobe things?" like we did in the X server.

Yeah, or a new ioctl... we'll need a new libdrm function for that at the
very least, for compatibility.

--
Jesse Barnes, Intel Open Source Technology Center
Re: [PATCH] libdrm: speed up connector mode fetching
2009/3/31 Eric Anholt <e...@anholt.net>:
> [...]
> Seems like the ioctl needs a flag for "did the user want me to go
> reprobe things?" like we did in the X server.

Yes, please. I've been pushing for that too.

cheers,
Kristian
Re: [PATCH] libdrm: speed up connector mode fetching
2009/3/31 Kristian Høgsberg <k...@bitplanet.net>:
> 2009/3/31 Eric Anholt <e...@anholt.net>:
> > [...]
> > Seems like the ioctl needs a flag for "did the user want me to go
> > reprobe things?" like we did in the X server.
>
> Yes, please. I've been pushing for that too.

Hmm, I'm thinking that should be done on get-resources and never on
get-connector. That way we can also reprobe for things like hotpluggable
connectors, yet keep the change to one function. I wonder if any code
expects things to be reprobed on get-connector?

Cheers Jakob.
Re: DRI2 + buffer creation
Kristian Høgsberg wrote:
> That's the case I was describing in the last sentence. When the DDX
> gets the set of buffers to allocate, it doesn't know whether to allocate
> a 16 or 24 bit depth buffer. What I'm suggesting is that we add a new
> buffer token to the DRI2 protocol, DRI2BufferDepth16, which the DRI
> driver can use to indicate that it wants a 16-bit depth buffer even if
> the drawable is 24 bpp. It requires a dri2proto bump, and the loader
> needs to tell the DRI driver that the DRI2BufferDepth16 token is
> available.

Err... this is a band-aid on a bigger problem. What about 24-bit depth
with 16-bit color? What about when we start to support multisampling?
Etc. If the DDX needs the fbconfig information, why not just give it the
fbconfig?

Right?
[Bug 12947] r128: system hangs when X is started with DRI enabled
http://bugzilla.kernel.org/show_bug.cgi?id=12947

Alex Villacis Lasso <avill...@ceibo.fiec.espol.edu.ec> changed:

           What    |Removed |Added
   ------------------------------------------------------
   CC              |        |avill...@ceibo.fiec.espol.edu.ec

--- Comment #1 from Alex Villacis Lasso <avill...@ceibo.fiec.espol.edu.ec> 2009-03-31 23:12:04 ---
This bug looks somewhat like bug 12920. If r128 also requires an mmap of
/dev/dri/cardN to work, it could be affected by the same bug. Can you
collect an strace of Xorg in the successful and failing cases and compare
the two, like it was done on bug #12920?
[Bug 20856] X hangs after idle time on GM45
http://bugs.freedesktop.org/show_bug.cgi?id=20856

--- Comment #1 from Jesse Barnes <jbar...@virtuousgeek.org> 2009-03-31 18:16:54 PST ---
Bugs like this have been fixed recently in libdrm, Mesa and the kernel.
But you can also work around the problem by creating a driconf file and
setting the vblank_mode (googling around should show you how).
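[Editor's note] The driconf workaround mentioned ends up as an entry in ~/.drirc. A sketch of what such a file could look like; the driver name ("i965" would cover a GM45) and screen number are assumptions for this reporter's machine, and vblank_mode="0" turns vblank synchronization off:

```xml
<driconf>
    <device screen="0" driver="i965">
        <application name="Default">
            <!-- 0 = never synchronize to vblank -->
            <option name="vblank_mode" value="0" />
        </application>
    </device>
</driconf>
```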
[Bug 20988] New: [UXA] performance degrade compared to EXA when run game celestia
http://bugs.freedesktop.org/show_bug.cgi?id=20988

           Summary: [UXA] performance degrade compared to EXA when run game
                    celestia
           Product: Mesa
           Version: unspecified
          Platform: Other
        OS/Version: Linux (All)
            Status: NEW
          Severity: normal
          Priority: medium
         Component: Drivers/DRI/i915
        AssignedTo: dri-devel@lists.sourceforge.net
        ReportedBy: haien@intel.com

Created an attachment (id=24423)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24423)
xorg.0.log

System Environment:
--------------------------
Host:             x-gm965
Arch:             x86_64
Platform:         GM965
Libdrm:           (master) cd5c66c659168cbe2e3229ebf8be79f764ed0ee1
Mesa:             (mesa_7_4_branch) de197cf991416f0cd65ad2e2d2ca9aa599b52075
Xserver:          (server-1.6-branch) 60c161545af80eb78eb790a05bde79409dfdf16e
Xf86_video_intel: (2.7) e2465249a90b9aefe6d7a96eb56a51fde54698a0
Kernel:           (for-airlied) a2e785c32b886dd7f0289d1cf15fc14e9c81bc01

Bug detailed description:
--------------------------
Start X with UXA, then run the game celestia. It looks like a series of
slides being shown; the performance is very low, but it works well using
EXA. The issue happens on both i915 and i965.

Reproduce steps:
----------------
1. xinit
2. run celestia
[Bug 20988] [UXA]performance degrade compared to EXA when run game celestia
http://bugs.freedesktop.org/show_bug.cgi?id=20988

--- Comment #1 from liuhaien <haien@intel.com> 2009-03-31 22:26:49 PST ---
Created an attachment (id=24424)
 --> (http://bugs.freedesktop.org/attachment.cgi?id=24424)
xorg conf file
[Bug 20988] [UXA]performance degrade compared to EXA when run game celestia
http://bugs.freedesktop.org/show_bug.cgi?id=20988

Gordon Jin <gordon@intel.com> changed:

           What    |Removed                         |Added
   ----------------------------------------------------------------
   AssignedTo      |dri-devel@lists.sourceforge.net |e...@anholt.net