[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117

--- Comment #6 from Michal 2011-10-24 15:42:28 PDT ---

In my tests I'm using fluxbox without any compositing. My card is a Radeon 9100 (a rebranded 8500) with 128 MB of VRAM. The libtxc library didn't change anything. RADEON_DEBUG=fall prints in a loop:

R200 begin tcl fallback Rasterization fallback
R200 begin rasterization fallback: 0x1 Texture mode
R200 end tcl fallback Rasterization fallback
R200 end tcl fallback
R200 end rasterization fallback: 0x1 Texture mode
R200 begin tcl fallback Rasterization fallback
R200 begin rasterization fallback: 0x1 Texture mode
R200 end tcl fallback Rasterization fallback
R200 end tcl fallback
R200 end rasterization fallback: 0x1 Texture mode

I forgot to add: in openarena I get about 2-4 fps in normal play, but when I look down at the ground the fps immediately jumps to about 80.

-- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug.
[Bug 41698] [r300g] Flickering user interface in WoW
https://bugs.freedesktop.org/show_bug.cgi?id=41698

--- Comment #4 from Chris Rankin 2011-10-24 15:24:33 PDT ---

This bug is still happening after this new commit:

commit 2717b8f034db16cf551e167aa5ce3a9be3bf730b
Author: Mathias Fröhlich
Date: Sat Oct 8 21:33:23 2011 +0200

    winsys/radeon: restore the old r600g winsys memory characteristics.

However, this new commit also means that the original commit can no longer be trivially reverted.
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

--- Comment #4 from Stefan 2011-10-24 14:55:59 PDT ---

(In reply to comment #3)
> Can you bisect?

The change took place between 7.11-rc1 and -rc2:

a8907c6005d7935b4520255e12184c139471b5b9 is the first bad commit
commit a8907c6005d7935b4520255e12184c139471b5b9
Author: Benjamin Franzke
Date: Sat Jul 2 13:41:35 2011 +0200

But: this is where the switch from the old driver to the Gallium driver takes place. The old driver was used in my configuration up to 7.11-rc1 because the udev-devel package was not installed. After installing it, 7.11-rc1 also uses the new driver, so currently I don't have a "good" reference for a bisection.

BTW: https://bugzilla.mozilla.org/show_bug.cgi?id=693056 (origin of the current problems).
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

--- Comment #3 from Alex Deucher 2011-10-24 14:44:38 PDT ---

Can you bisect?
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

--- Comment #2 from Stefan 2011-10-24 14:07:44 PDT ---

Created attachment 52713
--> https://bugs.freedesktop.org/attachment.cgi?id=52713
Lesson 3 screenshot
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

--- Comment #1 from Stefan 2011-10-24 14:06:39 PDT ---

Created attachment 52710
--> https://bugs.freedesktop.org/attachment.cgi?id=52710
glxgears screenshot
[Bug 42175] New: RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

Bug #: 42175
Summary: RV730: Display errors in glxgears & WebGL
Classification: Unclassified
Product: Mesa
Version: git
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r600
AssignedTo: dri-devel at lists.freedesktop.org
ReportedBy: kdevel at vogtner.de

Since Mesa 7.11-rc2 the Gallium driver is used on my machine and produces display errors. Attached are a glxgears screenshot and a screendump of http://learningwebgl.com/lessons/lesson03/index.html produced with the current Firefox. Versions up to and including 7.11-rc1 seemed to use the old driver, which does not produce rendering errors but crashes Firefox on http://learningwebgl.com/lessons/lesson16/index.html or in the Aquarium of http://code.google.com/p/webglsamples/.
[PATCH] DRM: bug: RADEON_DEBUGFS_MAX_{NUM_FILES => COMPONENTS}
Maybe you are looking at the wrong branch, but I see it in drm-next (it has been there since Oct 10):
http://cgit.freedesktop.org/~airlied/linux/commit/?h=drm-next&id=c245cb9e15055ed5dcf7eaf29232badb0059fdc1

On Mon, 24 Oct 2011, Michael Witten wrote:

> On Fri, Oct 7, 2011 at 19:20, Michael Witten wrote:
> Date: Fri, 16 Sep 2011 20:45:30 +
>
> The value of RADEON_DEBUGFS_MAX_NUM_FILES has been used to
> specify the size of an array, each element of which looks
> like this:
>
>   struct radeon_debugfs {
>           struct drm_info_list    *files;
>           unsigned                num_files;
>   };
>
> Consequently, the number of debugfs files may be much greater
> than RADEON_DEBUGFS_MAX_NUM_FILES, something that the current
> code ignores:
>
>   if ((_radeon_debugfs_count + nfiles) > RADEON_DEBUGFS_MAX_NUM_FILES) {
>           DRM_ERROR("Reached maximum number of debugfs files.\n");
>           DRM_ERROR("Report so we increase RADEON_DEBUGFS_MAX_NUM_FILES.\n");
>           return -EINVAL;
>   }
>
> This commit fixes this mistake, and accordingly renames:
>
>   RADEON_DEBUGFS_MAX_NUM_FILES
>
> to:
>
>   RADEON_DEBUGFS_MAX_COMPONENTS
>
> Signed-off-by: Michael Witten
> ---
>  drivers/gpu/drm/radeon/radeon.h        |    2 +-
>  drivers/gpu/drm/radeon/radeon_device.c |   13 -
>  2 files changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index c1e056b..dd7bab9 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -102,7 +102,7 @@ extern int radeon_pcie_gen2;
>  #define RADEON_FENCE_JIFFIES_TIMEOUT   (HZ / 2)
>  /* RADEON_IB_POOL_SIZE must be a power of 2 */
>  #define RADEON_IB_POOL_SIZE            16
> -#define RADEON_DEBUGFS_MAX_NUM_FILES   32
> +#define RADEON_DEBUGFS_MAX_COMPONENTS  32
>  #define RADEONFB_CONN_LIMIT            4
>  #define RADEON_BIOS_NUM_SCRATCH        8
>
> diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
> index b51e157..31b1f4b 100644
> --- a/drivers/gpu/drm/radeon/radeon_device.c
> +++ b/drivers/gpu/drm/radeon/radeon_device.c
> @@ -981,7 +981,7 @@ struct radeon_debugfs {
>         struct drm_info_list    *files;
>         unsigned                num_files;
>  };
> -static struct radeon_debugfs _radeon_debugfs[RADEON_DEBUGFS_MAX_NUM_FILES];
> +static struct radeon_debugfs _radeon_debugfs[RADEON_DEBUGFS_MAX_COMPONENTS];
>  static unsigned _radeon_debugfs_count = 0;
>
>  int radeon_debugfs_add_files(struct radeon_device *rdev,
> @@ -996,14 +996,17 @@ int radeon_debugfs_add_files(struct radeon_device *rdev,
>                         return 0;
>                 }
>         }
> -       if ((_radeon_debugfs_count + nfiles) > RADEON_DEBUGFS_MAX_NUM_FILES) {
> -               DRM_ERROR("Reached maximum number of debugfs files.\n");
> -               DRM_ERROR("Report so we increase RADEON_DEBUGFS_MAX_NUM_FILES.\n");
> +
> +       i = _radeon_debugfs_count + 1;
> +       if (i > RADEON_DEBUGFS_MAX_COMPONENTS) {
> +               DRM_ERROR("Reached maximum number of debugfs components.\n");
> +               DRM_ERROR("Report so we increase "
> +                         "RADEON_DEBUGFS_MAX_COMPONENTS.\n");
>                 return -EINVAL;
>         }
>         _radeon_debugfs[_radeon_debugfs_count].files = files;
>         _radeon_debugfs[_radeon_debugfs_count].num_files = nfiles;
> -       _radeon_debugfs_count++;
> +       _radeon_debugfs_count = i;
>  #if defined(CONFIG_DEBUG_FS)
>         drm_debugfs_create_files(files, nfiles,
>                                  rdev->ddev->control->debugfs_root,
> --
> 1.7.6.409.ge7a85
>
> This patch has not yet been applied. What's wrong?

Sincerely,
Michael Witten

___
dri-devel mailing list
dri-devel at lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
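The bound the rename clarifies is easier to see in a stripped-down userspace model of the registration path. Everything below (names included) is an illustrative sketch, not the actual kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of the radeon debugfs registration path: a fixed
 * table of "components", each of which can reference many files.  The
 * limit bounds the number of table slots, not the total file count,
 * which is why the rename to ..._MAX_COMPONENTS is the clearer name. */
#define MAX_COMPONENTS 32

struct component {
	const char **files;
	unsigned num_files;
};

static struct component table[MAX_COMPONENTS];
static unsigned component_count;

/* Returns 0 on success, -1 once all table slots are used. */
static int add_component(const char **files, unsigned nfiles)
{
	unsigned i = component_count + 1;

	if (i > MAX_COMPONENTS)
		return -1;
	table[component_count].files = files;
	table[component_count].num_files = nfiles;
	component_count = i;
	return 0;
}
```

Note the check compares used slots against MAX_COMPONENTS, not `component_count + nfiles`: a component with three files still consumes only one slot, mirroring the fix in the patch.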
[PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
On 10/24/2011 07:27 PM, Konrad Rzeszutek Wilk wrote:
> On Sat, Oct 22, 2011 at 11:40:54AM +0200, Thomas Hellstrom wrote:
>
>> Konrad,
>>
>> I was hoping that we could get rid of the dma_address shuffling into
>> core TTM, like I mentioned in the review. From what I can tell it's now
>> only used in the backend and core ttm doesn't care about it.
>>
>> Is there a particular reason we're still passing it around?
>>
> Yes - and I should have addressed that in the writeup but forgot, sorry
> about that.
>
> So initially I thought you meant this:
>
> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> index 360afb3..06ef048 100644
> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> @@ -662,8 +662,7 @@ out:
>
>  /* Put all pages in pages list to correct pool to wait for reuse */
>  static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
> -                           int flags, enum ttm_caching_state cstate,
> -                           dma_addr_t *dma_address)
> +                           int flags, enum ttm_caching_state cstate)
>  {
>         unsigned long irq_flags;
>         struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
> @@ -707,8 +706,7 @@ static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
>   * cached pages.
>   */
>  static int __ttm_get_pages(struct list_head *pages, int flags,
> -                          enum ttm_caching_state cstate, unsigned count,
> -                          dma_addr_t *dma_address)
> +                          enum ttm_caching_state cstate, unsigned count)
>  {
>         struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
>         struct page *p = NULL;
> @@ -864,7 +862,7 @@ int ttm_get_pages(struct ttm_tt *ttm, struct list_head *pages,
>         if (ttm->be && ttm->be->func && ttm->be->func->get_pages)
>                 return ttm->be->func->get_pages(ttm, pages, count, dma_address);
>         return __ttm_get_pages(pages, ttm->page_flags, ttm->caching_state,
> -                              count, dma_address);
> +                              count)
>  }
>  void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
>                    unsigned page_count, dma_addr_t *dma_address)
> @@ -873,5 +871,5 @@ void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
>                 ttm->be->func->put_pages(ttm, pages, page_count, dma_address);
>         else
>                 __ttm_put_pages(pages, page_count, ttm->page_flags,
> -                               ttm->caching_state, dma_address);
> +                               ttm->caching_state)
>  }
>
> which is trivial (though I have not compile tested it), but it should do it.
>
> But I think you mean eliminating the dma_address handling completely in
> ttm_page_alloc.c and ttm_tt.c.
>
> For that there are a couple of architectural issues I am not sure how to solve.
>
> There has to be some form of TTM<->[Radeon|Nouveau] lookup mechanism
> to say: "here is a 'struct page *', give me the bus address". Currently
> this is solved by keeping an array of DMA addresses along with the list
> of pages. And passing the list and DMA address up the stack (and down)
> from TTM up to the driver (when ttm->be->func->populate is called and they
> are handed off) does it. It does not break any API layering .. and the
> internal TTM pool (non-DMA) can just ignore the dma_address altogether
> (see patch above).
>
I actually had something simpler in mind, but when thinking a bit deeper into it, it seems more complicated than I initially thought. Namely that when we allocate pages from the ttm_backend, we actually populate it at the same time. be::populate would then not take a page array as an argument, and would actually be a no-op on many drivers. This makes us move towards struct ttm_tt consisting almost only of its backend, so that whole API should perhaps be looked at with new eyes.

So anyway, I'm fine with the high-level things as they are now, and the dma_addr issue can be looked at at a later time. If we could get a couple of extra eyes to review the code for style etc., that would be great, because I have very little time over the next couple of weeks.

/Thomas
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Alright then. Dave, if you are reading this, feel free not to include the two patches I sent you in the next pull request.

Marek

On Mon, Oct 24, 2011 at 7:28 PM, Thomas Hellstrom wrote:
> Marek,
>
> The problem is that the patch adds a lot of complicated code where it's not
> needed, and I don't want to end up reverting that code and re-implementing
> the new Radeon gem ioctl by myself.
>
> Having a list of two fence objects and waiting for either of them shouldn't
> be that complicated to implement, in particular when it's done in a
> driver-specific way and you have the benefit of assuming that they are
> ordered.
>
> Since the new functionality is a performance improvement, if time is an
> issue, I suggest we back this change out and go for the next merge window.
>
> /Thomas
>
> On 10/24/2011 07:10 PM, Marek Olšák wrote:
>> Hi Thomas,
>>
>> I have made no progress so far due to lack of time.
>>
>> Would you mind if I fixed the most important things first, which are:
>> - sync objects are not ordered, (A)
>> - every sync object must have its corresponding sync_obj_arg, (B)
>> and if I fixed (C) some time later?
>>
>> I planned on moving the two sync objects from ttm into radeon and not
>> using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it
>> does), but it looks more complicated to me than I had originally
>> thought.
>>
>> Marek
>>
>> On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom wrote:
>>> Marek,
>>> Any progress on this? The merge window is about to open soon I guess and
>>> we need a fix by then.
>>>
>>> /Thomas
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Marek,

The problem is that the patch adds a lot of complicated code where it's not needed, and I don't want to end up reverting that code and re-implementing the new Radeon gem ioctl by myself.

Having a list of two fence objects and waiting for either of them shouldn't be that complicated to implement, in particular when it's done in a driver-specific way and you have the benefit of assuming that they are ordered.

Since the new functionality is a performance improvement, if time is an issue, I suggest we back this change out and go for the next merge window.

/Thomas

On 10/24/2011 07:10 PM, Marek Olšák wrote:
> Hi Thomas,
>
> I have made no progress so far due to lack of time.
>
> Would you mind if I fixed the most important things first, which are:
> - sync objects are not ordered, (A)
> - every sync object must have its corresponding sync_obj_arg, (B)
> and if I fixed (C) some time later?
>
> I planned on moving the two sync objects from ttm into radeon and not
> using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it
> does), but it looks more complicated to me than I had originally
> thought.
>
> Marek
>
> On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom wrote:
>> Marek,
>> Any progress on this? The merge window is about to open soon I guess and
>> we need a fix by then.
>>
>> /Thomas
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Hi Thomas,

I have made no progress so far due to lack of time.

Would you mind if I fixed the most important things first, which are:
- sync objects are not ordered, (A)
- every sync object must have its corresponding sync_obj_arg, (B)
and if I fixed (C) some time later?

I planned on moving the two sync objects from ttm into radeon and not using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it does), but it looks more complicated to me than I had originally thought.

Marek

On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom wrote:
> Marek,
> Any progress on this? The merge window is about to open soon I guess and
> we need a fix by then.
>
> /Thomas
[Bug 42067] [r600g] Compiz emblem icons corrupted on cayman
https://bugs.freedesktop.org/show_bug.cgi?id=42067

--- Comment #4 from Harald Judt 2011-10-24 11:58:24 PDT ---

I think this might rather be a duplicate of bug 38173.
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On 10/24/2011 06:42 PM, Marek Olšák wrote:
> On Sat, Oct 8, 2011 at 1:32 PM, Thomas Hellstrom wrote:
>> On 10/08/2011 01:27 PM, Ville Syrjälä wrote:
>>> On Sat, Oct 08, 2011 at 01:10:13PM +0200, Thomas Hellstrom wrote:
>>>> On 10/08/2011 12:26 PM, Ville Syrjälä wrote:
>>>>> On Fri, Oct 07, 2011 at 10:58:13AM +0200, Thomas Hellstrom wrote:
>>>>>> Oh, and one more style comment below:
>>>>>>
>>>>>> On 08/07/2011 10:39 PM, Marek Olšák wrote:
>>>>>>> +enum ttm_buffer_usage {
>>>>>>> +	TTM_USAGE_READ = 1,
>>>>>>> +	TTM_USAGE_WRITE = 2,
>>>>>>> +	TTM_USAGE_READWRITE = TTM_USAGE_READ | TTM_USAGE_WRITE
>>>>>>> +};
>>>>>>
>>>>>> Please don't use enums for bit operations.
>>>>>
>>>>> Now I'm curious. Why not?
>>>>
>>>> Because it's inconsistent with how flags are defined in the rest of the
>>>> TTM module.
>>>
>>> Ah OK. I was wondering if there's some subtle technical issue involved.
>>> I've recently gotten into the habit of using enums for pretty much all
>>> constants. Just easier on the eye IMHO, and it avoids cpp output
>>> looking like number soup.
>>
>> Yes, there are a number of advantages, including symbolic debugger output.
>> If we had flag enums that enumerated 1, 2, 4, 8 etc. I'd feel motivated to
>> move all TTM definitions over.
>
> I don't think that how it is enumerated matters in any way. What I
> like about enums, besides what has already been mentioned, is that it
> adds self-documentation to the code. Compare:
>
> void ttm_set_bo_flags(unsigned flags);
>
> And:
>
> void ttm_set_bo_flags(enum ttm_bo_flags flags);
>
> The latter is way easier to understand for somebody who doesn't know
> the code and wants to implement his first patch. With the latter, it's
> clear at first glance what the valid values for "flags" are, because
> you can just search for "enum ttm_bo_flags".
>
> I will change the enum to defines for the sake of following your code
> style convention, but it's an unreasonable convention to say the least.
>
> Marek

I'm not going to argue against this, because you're probably right. The important thing is that we get the fix in, with or without enums.

Thanks,
Thomas
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117

--- Comment #5 from Marek Olšák 2011-10-24 11:46:25 PDT ---

I am leaning toward believing that tiling can make such a difference.
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On Sat, Oct 8, 2011 at 1:32 PM, Thomas Hellstrom wrote:
> On 10/08/2011 01:27 PM, Ville Syrjälä wrote:
>> On Sat, Oct 08, 2011 at 01:10:13PM +0200, Thomas Hellstrom wrote:
>>> On 10/08/2011 12:26 PM, Ville Syrjälä wrote:
>>>> On Fri, Oct 07, 2011 at 10:58:13AM +0200, Thomas Hellstrom wrote:
>>>>> Oh, and one more style comment below:
>>>>>
>>>>> On 08/07/2011 10:39 PM, Marek Olšák wrote:
>>>>>> +enum ttm_buffer_usage {
>>>>>> +	TTM_USAGE_READ = 1,
>>>>>> +	TTM_USAGE_WRITE = 2,
>>>>>> +	TTM_USAGE_READWRITE = TTM_USAGE_READ | TTM_USAGE_WRITE
>>>>>> +};
>>>>>
>>>>> Please don't use enums for bit operations.
>>>>
>>>> Now I'm curious. Why not?
>>>
>>> Because it's inconsistent with how flags are defined in the rest of the
>>> TTM module.
>>
>> Ah OK. I was wondering if there's some subtle technical issue involved.
>> I've recently gotten into the habit of using enums for pretty much all
>> constants. Just easier on the eye IMHO, and it avoids cpp output
>> looking like number soup.
>
> Yes, there are a number of advantages, including symbolic debugger output.
> If we had flag enums that enumerated 1, 2, 4, 8 etc. I'd feel motivated to
> move all TTM definitions over.

I don't think that how it is enumerated matters in any way. What I like about enums, besides what has already been mentioned, is that it adds self-documentation to the code. Compare:

void ttm_set_bo_flags(unsigned flags);

And:

void ttm_set_bo_flags(enum ttm_bo_flags flags);

The latter is way easier to understand for somebody who doesn't know the code and wants to implement his first patch. With the latter, it's clear at first glance what the valid values for "flags" are, because you can just search for "enum ttm_bo_flags".

I will change the enum to defines for the sake of following your code style convention, but it's an unreasonable convention to say the least.

Marek
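For readers following the style debate, the two conventions under discussion look like this side by side. This is a standalone sketch; the names are invented for illustration, not the actual TTM identifiers:

```c
#include <assert.h>

/* Flag style 1: preprocessor defines, TTM's existing convention. */
#define BO_USAGE_READ  (1u << 0)
#define BO_USAGE_WRITE (1u << 1)

/* Flag style 2: an enum, which names the valid values in one place and
 * shows up symbolically in debuggers.  One caveat: the result of OR-ing
 * two enumerators has type int, not the enum type, so a prototype that
 * takes "enum bo_usage" documents intent but does not enforce it. */
enum bo_usage {
	USAGE_READ = 1 << 0,
	USAGE_WRITE = 1 << 1,
	USAGE_READWRITE = USAGE_READ | USAGE_WRITE
};

/* Either style combines with bitwise OR and is tested with AND. */
static int usage_is_write(unsigned flags)
{
	return (flags & USAGE_WRITE) != 0;
}
```

Both spellings produce identical object code; the disagreement in the thread is purely about consistency and self-documentation.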
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117

--- Comment #4 from Roland Scheidegger 2011-10-24 11:30:14 PDT ---

Some performance difference is expected because kms does not support tiling on r200, though I would expect a 2x difference only if you had also enabled hyperz manually. That said, if performance jumps a lot higher with lower texture settings, this looks like a problem with bo placement/migration; the old code was simple and terrible in some cases (it could easily get texture thrashing), whereas the new code is different (but still not smart enough). In any case, if you don't have libtxc installed, try that, as openarena can use it and textures will then use much less VRAM. If you have a compositing manager running, try disabling it, as it will also use more memory. Though if you have a terribly memory-constrained chip, nothing might help much (you should really have at least 64 MB of VRAM).

As for the ums problem, it looks like the driver is hitting a software fallback. RADEON_DEBUG=fall might tell you why - not that there's any chance it will get fixed...
[Bug 42067] [r600g] Compiz emblem icons corrupted on cayman
https://bugs.freedesktop.org/show_bug.cgi?id=42067

--- Comment #3 from Harald Judt 2011-10-24 11:25:31 PDT ---

This was caused by enabling texture compression in core settings. Turning it off makes the icons look right again.
[PATCH 1/2] vmwgfx: Emulate depth 32 framebuffers
On Mon, Oct 24, 2011 at 03:01:23PM -0700, Jakob Bornecrantz wrote:
> - Original Message -
> > On Sat, Oct 22, 2011 at 10:29:33AM +0200, Thomas Hellstrom wrote:
> > > From: Jakob Bornecrantz
> > >
> > > Signed-off-by: Jakob Bornecrantz
> > > Signed-off-by: Thomas Hellstrom
> > > ---
> > >  drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 10 +-
> > >  1 files changed, 9 insertions(+), 1 deletions(-)
> > >
> > > +	/* Emulate RGBA support, contrary to svga_reg.h this is not
> > > +	 * supported by hosts. This is only a problem if we are reading
> >
> > Uh, what if it becomes supported at some point? Should there be some
> > check against the host version?
> >
> > (Thinking that some user might be running this older driver with a
> > newer host that does support 32 - won't that cause issues?)
>
> We can add a check then; from the point of view of userspace, 32-bit
> framebuffers work fine. The problem is with the readback ioctl, where
> the read-back pixels' alpha will be clobbered. We also don't support
> depth-30 R10G10B10X2.
>
> If we add support for depth-32 and/or depth-30 formats, we can add
> params to tell userspace about that. Right now there isn't really a
> point to them, since they will always return "not supported" and we
> need to do any workarounds in userspace anyway.

Sounds reasonable.

> Cheers, Jakob.
[PATCH] drm/radeon: avoid bouncing connector status btw disconnected & unknown
On Mon, Oct 24, 2011 at 6:16 PM, wrote:
> From: Jerome Glisse
>
> Since the force handling rework of d0d0a225e6ad43314c9aa7ea081f76adc5098ad4
> we could end up bouncing the connector status between disconnected and
> unknown. When the connector status changes, a call to output_poll_changed
> happens, which in turn asks again for detect, but with force set.
>
> So set the load detect flag whenever we report the connector as
> connected or unknown; this avoids bouncing between disconnected and unknown.

Looks good.

Reviewed-by: Alex Deucher

> Signed-off-by: Jerome Glisse
> ---
>  drivers/gpu/drm/radeon/radeon_connectors.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
> index dec6cbe..ff6a2e0 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -764,7 +764,7 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
>                 if (radeon_connector->dac_load_detect && encoder) {
>                         encoder_funcs = encoder->helper_private;
>                         ret = encoder_funcs->detect(encoder, connector);
> -                       if (ret == connector_status_connected)
> +                       if (ret != connector_status_disconnected)
>                                 radeon_connector->detected_by_load = true;
>                 }
>         }
> @@ -1005,8 +1005,9 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
>                                         ret = encoder_funcs->detect(encoder, connector);
>                                         if (ret == connector_status_connected) {
>                                                 radeon_connector->use_digital = false;
> -                                               radeon_connector->detected_by_load = true;
>                                         }
> +                                       if (ret != connector_status_disconnected)
> +                                               radeon_connector->detected_by_load = true;
>                                 }
>                                 break;
>                         }
> --
> 1.7.1
[PATCH] drm/radeon: avoid bouncing connector status btw disconnected & unknown
From: Jerome Glisse

Since the force handling rework of d0d0a225e6ad43314c9aa7ea081f76adc5098ad4 we could end up bouncing the connector status between disconnected and unknown. When the connector status changes, a call to output_poll_changed happens, which in turn asks again for detect, but with force set.

So set the load detect flag whenever we report the connector as connected or unknown; this avoids bouncing between disconnected and unknown.

Signed-off-by: Jerome Glisse
---
 drivers/gpu/drm/radeon/radeon_connectors.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index dec6cbe..ff6a2e0 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -764,7 +764,7 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
 		if (radeon_connector->dac_load_detect && encoder) {
 			encoder_funcs = encoder->helper_private;
 			ret = encoder_funcs->detect(encoder, connector);
-			if (ret == connector_status_connected)
+			if (ret != connector_status_disconnected)
 				radeon_connector->detected_by_load = true;
 		}
 	}
@@ -1005,8 +1005,9 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
 					ret = encoder_funcs->detect(encoder, connector);
 					if (ret == connector_status_connected) {
 						radeon_connector->use_digital = false;
-						radeon_connector->detected_by_load = true;
 					}
+					if (ret != connector_status_disconnected)
+						radeon_connector->detected_by_load = true;
 				}
 				break;
 			}
--
1.7.1
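The intent of the patch can be shown with a tiny standalone model. The enum below only mirrors the idea of DRM's connector status values, and the helper name is invented for illustration:

```c
#include <assert.h>

/* Simplified stand-in for drm's connector status values. */
enum status {
	STATUS_CONNECTED,
	STATUS_DISCONNECTED,
	STATUS_UNKNOWN
};

/* Before the fix, the load-detect flag was only recorded for
 * "connected", so a forced re-detect of an "unknown" connector could
 * flip its answer and bounce between disconnected and unknown.  The
 * fix records load detection for any non-disconnected result, making
 * repeated detects stable. */
static int mark_detected_by_load(enum status ret)
{
	return ret != STATUS_DISCONNECTED;
}
```

The key change is exactly the comparison: `!= disconnected` rather than `== connected`, so "unknown" is treated the same as "connected" for bookkeeping purposes.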
[PATCH 1/2] vmwgfx: Emulate depth 32 framebuffers
On Sat, Oct 22, 2011 at 10:29:33AM +0200, Thomas Hellstrom wrote:
> From: Jakob Bornecrantz
>
> Signed-off-by: Jakob Bornecrantz
> Signed-off-by: Thomas Hellstrom
> ---
>  drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 10 +-
>  1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> index 39b99db..00ec619 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> @@ -679,6 +679,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  			      struct vmw_private *dev_priv,
>  			      struct vmw_framebuffer *framebuffer)
>  {
> +	int depth = framebuffer->base.depth;
>  	size_t fifo_size;
>  	int ret;
>
> @@ -687,6 +688,13 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  		SVGAFifoCmdDefineGMRFB body;
>  	} *cmd;
>
> +	/* Emulate RGBA support, contrary to svga_reg.h this is not
> +	 * supported by hosts. This is only a problem if we are reading

Uh, what if it becomes supported at some point? Should there be some check against the host version?

(Thinking that some user might be running this older driver with a newer host that does support 32 - won't that cause issues?)

> +	 * this value later and expecting what we uploaded back.
> +	 */
> +	if (depth == 32)
> +		depth = 24;
> +
>  	fifo_size = sizeof(*cmd);
>  	cmd = kmalloc(fifo_size, GFP_KERNEL);
>  	if (unlikely(cmd == NULL)) {
> @@ -697,7 +705,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  	memset(cmd, 0, fifo_size);
>  	cmd->header = SVGA_CMD_DEFINE_GMRFB;
>  	cmd->body.format.bitsPerPixel = framebuffer->base.bits_per_pixel;
> -	cmd->body.format.colorDepth = framebuffer->base.depth;
> +	cmd->body.format.colorDepth = depth;
>  	cmd->body.format.reserved = 0;
>  	cmd->body.bytesPerLine = framebuffer->base.pitch;
>  	cmd->body.ptr.gmrId = framebuffer->user_handle;
> --
> 1.7.4.4
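The emulation in the patch reduces to a single clamp of the advertised color depth. As a standalone model (the helper name is invented, not vmwgfx API):

```c
#include <assert.h>

/* Model of the patch's GMRFB depth emulation: hosts don't accept a
 * colorDepth of 32, so a depth-32 framebuffer is reported as depth 24.
 * Per the thread, this only matters on readback, where the returned
 * alpha channel is clobbered; uploads are unaffected. */
static int gmrfb_color_depth(int fb_depth)
{
	return fb_depth == 32 ? 24 : fb_depth;
}
```

Any other depth (including the unsupported depth-30 case mentioned in the reply) passes through unchanged.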
[Bug 41698] [r300g] Flickering user interface in WoW
https://bugs.freedesktop.org/show_bug.cgi?id=41698

--- Comment #6 from Chris Rankin 2011-10-24 16:53:50 UTC ---

(In reply to comment #5)
> Can you try this patch?

Sorry, no change.
[PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On 10/08/2011 12:03 AM, Marek Olšák wrote:
> On Fri, Oct 7, 2011 at 10:00 AM, Thomas Hellstrom wrote:
>> OK. First I think we need to make a distinction: bo sync objects vs. driver fences. The bo sync obj API is there strictly to provide functionality that the ttm bo subsystem is using, and that follows a simple set of rules:
>>
>> 1) The bo subsystem never assumes sync objects are ordered. That means the bo subsystem needs to wait on a sync object before removing it from a buffer. Any other assumption is buggy and must be fixed. BUT, if that assumption takes place in the driver unknowingly to the ttm bo subsystem (which is usually the case), it's OK.
>>
>> 2) When the sync object(s) attached to the bo are signaled, the ttm bo subsystem is free to copy the bo contents and to unbind the bo.
>>
>> 3) The ttm bo system allows sync objects to be signaled in different ways opaque to the subsystem, using sync_obj_arg. The driver is responsible for setting up that argument.
>>
>> 4) Driver fences may be used for or expose other functionality or adaptations to APIs as long as the sync obj API exported to the bo subsystem follows the above rules.
>>
>> This means the following w.r.t. the patch:
>>
>> A) It violates 1). This is a bug that must be fixed. Assumptions that if one sync object is signaled, another sync object is also signaled must be made in the driver and not in the bo subsystem. Hence we need to explicitly wait for a fence to remove it from the bo.
>>
>> B) The sync_obj_arg carries *per-sync-obj* information on how it should be signaled. If we need to attach multiple sync objects to a buffer object, we also need multiple sync_obj_args. This is a bug and needs to be fixed.
>>
>> C) There is really only one reason that the ttm bo subsystem should care about multiple sync objects, and that is because the driver can't order them efficiently. One such example would be hardware with multiple pipes reading simultaneously from the same texture buffer. Currently we don't support this, so only the *last* sync object needs to be known by the bo subsystem. Keeping track of multiple fences generates a lot of completely unnecessary code in the ttm_bo_util file and the ttm_bo_vm file, and will be a nightmare if / when we truly support pipelined moves.
>>
>> As I understand it from your patches, you want to keep multiple fences around only to track rendering history. If we want to do that generically, I suggest doing it in the execbuf util code in the following way:
>>
>> struct ttm_eu_rendering_history {
>>     void *last_read_sync_obj;
>>     void *last_read_sync_obj_arg;
>>     void *last_write_sync_obj;
>>     void *last_write_sync_obj_arg;
>> };
>>
>> Embed this structure in the radeon_bo and build a small API around it, including *optionally* passing it to the existing execbuf utilities, and you should be done. The bo_util code and bo_vm code don't care about the rendering history, only that the bo is completely idle.
>>
>> Note also that when an accelerated bo move is scheduled, the driver needs to update this struct.
>>
> OK, sounds good. I'll fix what should be fixed and send a patch when it's ready. I am now not so sure whether doing this generically is a good idea. :)
>
> Marek

Marek,

Any progress on this? The merge window is about to open soon, I guess, and we need a fix by then.

/Thomas
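For readers following along, the rendering-history structure Thomas sketches can be mocked up in plain userspace C. This is only an illustration of the proposed bookkeeping: the field names follow his mail, but the helper function, the `int is_write` flag, and the use of arbitrary pointers as stand-in sync objects are all illustrative, not TTM API.

```c
#include <assert.h>
#include <stddef.h>

/* Field names taken from Thomas's mail; everything else is a sketch. */
struct ttm_eu_rendering_history {
	void *last_read_sync_obj;
	void *last_read_sync_obj_arg;
	void *last_write_sync_obj;
	void *last_write_sync_obj_arg;
};

/* Record the most recent fence for either the read or the write slot.
 * Only the *last* sync object per slot is kept, matching the point in
 * the mail that the bo subsystem never needs the full history. */
static void
history_add_fence(struct ttm_eu_rendering_history *h,
		  void *sync_obj, void *sync_obj_arg, int is_write)
{
	if (is_write) {
		h->last_write_sync_obj = sync_obj;
		h->last_write_sync_obj_arg = sync_obj_arg;
	} else {
		h->last_read_sync_obj = sync_obj;
		h->last_read_sync_obj_arg = sync_obj_arg;
	}
}
```

The driver would embed this in its buffer object (e.g. radeon_bo) and update it on command submission and on accelerated moves, as the mail describes.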
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #3 from Michal 2011-10-24 08:56:25 PDT --- Well, yes, UMS is about 2x faster than KMS. I think the problem is somewhere in textures. Lowering texture quality in openarena, fps jumps from 5 to 60. The same with etracer: with low-res textures copied from version 3.5, fps jumps from 3 to 30. -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug.
[PATCH 1/2] vmwgfx: Emulate depth 32 framebuffers
- Original Message -
> On Sat, Oct 22, 2011 at 10:29:33AM +0200, Thomas Hellstrom wrote:
> > From: Jakob Bornecrantz
> >
> > Signed-off-by: Jakob Bornecrantz
> > Signed-off-by: Thomas Hellstrom
> > ---
> >  drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 10 +-
> >  1 files changed, 9 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> > index 39b99db..00ec619 100644
> > --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> > +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> > @@ -679,6 +679,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
> >  				  struct vmw_private *dev_priv,
> >  				  struct vmw_framebuffer *framebuffer)
> >  {
> > +	int depth = framebuffer->base.depth;
> >  	size_t fifo_size;
> >  	int ret;
> >
> > @@ -687,6 +688,13 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
> >  		SVGAFifoCmdDefineGMRFB body;
> >  	} *cmd;
> >
> > +	/* Emulate RGBA support, contrary to svga_reg.h this is not
> > +	 * supported by hosts. This is only a problem if we are reading
>
> Uh, what if it becomes supported at some point? Should there be some check against the host version?
>
> (Thinking that some user might be running this older driver with a newer host that does support 32 - won't that cause issues?)

We can add a check then; from the point of view of userspace, 32-bit framebuffers work fine. The problem is with the readback ioctl, where the readback pixels' alpha will be clobbered. We also don't support depth 30 (R10G10B10X2). If we add support for depth 32 and/or depth 30 formats we can add params to tell userspace about that; right now there isn't really a point to them, since they would always return "not supported" and we need to do any workaround in userspace anyway.

Cheers, Jakob.
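A guess at the shape of the emulation being discussed, reduced to the depth decision alone: a depth-32 framebuffer is reported to the host as depth 24, since (per the mail) hosts do not actually honour the alpha channel despite what svga_reg.h says. The helper name and the exact mapping here are illustrative, not vmwgfx code.

```c
#include <assert.h>

/* Hypothetical stand-in for the depth choice inside
 * do_dmabuf_define_gmrfb(): treat depth 32 as depth 24 when defining
 * the GMRFB, leaving other depths untouched. Illustrative only. */
static int emulated_gmrfb_depth(int fb_depth)
{
	return (fb_depth == 32) ? 24 : fb_depth;
}
```

The cost mentioned in the thread follows from this mapping: writes look identical, but a readback cannot restore the alpha bits the host never stored.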
nouveau page_flip function implement not wait vblank, which cause screen garbage
Dear all,

I use an NVIDIA GeForce 7300GT graphics card in my PC, with Linux 3.1-rc4 kernel code and git drm 2.4.36. When I run the vbltest program, it prints "60HZ", which indicates that the implementation of drmWaitVBlank() and drm_vblank_wait() is correct. But when I run modetest with the option "-v -s 12:1280x1024", it prints a refresh rate as high as "150 HZ". I examined the code and found that no vblank wait is performed in the nouveau_crtc_page_flip() function. The screen produces lots of garbage and flickers badly.

int nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
			   struct drm_pending_vblank_event *event)
{
	..
}

I studied the i915 intel_crtc_page_flip implementation:

static int intel_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
				struct drm_pending_vblank_event *event)
{
	..
	ret = drm_vblank_get(dev, intel_crtc->pipe);
	if (ret)
		goto cleanup_objs;

	work->pending_flip_obj = obj;
	work->enable_stall_check = true;

	/* Block clients from rendering to the new back buffer until
	 * the flip occurs and the object is no longer visible.
	 */
	atomic_add(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);

	ret = dev_priv->display.queue_flip(dev, crtc, fb, obj);
	if (ret)
		goto cleanup_pending;
	..
}

After the vblank IRQ fires, the interrupt handler wakes up the wait queue:

static void do_intel_finish_page_flip(struct drm_device *dev,
				      struct drm_crtc *crtc)
{
	..
		list_add_tail(&e->base.link,
			      &e->base.file_priv->event_list);
		wake_up_interruptible(&e->base.file_priv->event_wait);
	}

	drm_vblank_put(dev, intel_crtc->pipe);

Has anyone using the same driver seen this issue, and can you tell me whether it is a bug? Thanks!
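The i915 pattern the report points to can be reduced to a toy model with counters standing in for the DRM calls: the flip path takes a vblank reference before queuing, and the finish handler (run from the vblank interrupt) releases it once the flip has really happened. All names with a `_sim` suffix are stand-ins, not driver code; the point is only the get/put pairing the reporter says nouveau lacks.

```c
#include <assert.h>

static int vblank_refs;	/* stands in for the drm vblank refcount */

static void drm_vblank_get_sim(void) { vblank_refs++; }
static void drm_vblank_put_sim(void) { vblank_refs--; }

/* Queue a flip: keep the vblank IRQ enabled until the flip lands. */
static void queue_flip_sim(void)
{
	drm_vblank_get_sim();
	/* ... program the new scanout address here ... */
}

/* Called from the vblank interrupt: the flip is now visible, so the
 * completion event can be sent and the vblank reference dropped. */
static void finish_flip_sim(void)
{
	drm_vblank_put_sim();
}
```

Without the pairing, nothing ties flip completion to the vblank period, which would explain modetest reporting a flip rate well above the refresh rate.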
[PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
>> For that there are a couple of architectural issues I am not sure how to solve.
>>
>> There has to be some form of TTM<->[Radeon|Nouveau] lookup mechanism to say: "here is a 'struct page *', give me the bus address". Currently this is solved by keeping an array of DMA addresses along with the list of pages. And passing the list and DMA address up the stack (and down) from TTM up to the driver (when ttm->be->func->populate is called and they are handed off) does it. It does not break any API layering, and the internal TTM pool (non-DMA) can just ignore the dma_address altogether (see patch above).
>
> I actually had something more simple in mind, but when thinking a bit deeper into it, it seems more complicated than I initially thought.
>
> Namely that when we allocate pages from the ttm_backend, we actually populate it at the same time. be::populate would then not take a page array as an argument, and would actually be a no-op on many drivers.

The programming of the gfx MMU would be done via a new API call? I think this needs a bit of whiteboarding for me to be sure I understand you.

> This makes us move towards struct ttm_tt consisting almost only of its backend, so that whole API should perhaps be looked at with new eyes.
>
> So anyway, I'm fine with high-level things as they are now, and the

Great!

> dma_addr issue can be looked at at a later time. If we could get a couple of extra eyes to review the code for style etc. that would be

Anybody in particular you can recommend that I can pester^H^H^H^H politely ask? :-)

> great, because I have very little time the next couple of weeks.

Understood.
[PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
On Sat, Oct 22, 2011 at 11:40:54AM +0200, Thomas Hellstrom wrote:
> Konrad,
>
> I was hoping that we could get rid of the dma_address shuffling in core TTM, like I mentioned in the review. From what I can tell it's now only used in the backend, and core ttm doesn't care about it.
>
> Is there a particular reason we're still passing it around?

Yes - and I should have addressed that in the writeup but forgot, sorry about that. So initially I thought you meant this:

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 360afb3..06ef048 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -662,8 +662,7 @@ out:
 /* Put all pages in pages list to correct pool to wait for reuse */
 static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
-			    int flags, enum ttm_caching_state cstate,
-			    dma_addr_t *dma_address)
+			    int flags, enum ttm_caching_state cstate)
 {
 	unsigned long irq_flags;
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -707,8 +706,7 @@ static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
  * cached pages.
  */
 static int __ttm_get_pages(struct list_head *pages, int flags,
-			   enum ttm_caching_state cstate, unsigned count,
-			   dma_addr_t *dma_address)
+			   enum ttm_caching_state cstate, unsigned count)
 {
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct page *p = NULL;
@@ -864,7 +862,7 @@ int ttm_get_pages(struct ttm_tt *ttm, struct list_head *pages,
 	if (ttm->be && ttm->be->func && ttm->be->func->get_pages)
 		return ttm->be->func->get_pages(ttm, pages, count, dma_address);
 	return __ttm_get_pages(pages, ttm->page_flags, ttm->caching_state,
-			       count, dma_address);
+			       count);
 }

 void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages, unsigned page_count, dma_addr_t *dma_address)
@@ -873,5 +871,5 @@ void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
 		ttm->be->func->put_pages(ttm, pages, page_count, dma_address);
 	else
 		__ttm_put_pages(pages, page_count, ttm->page_flags,
-				ttm->caching_state, dma_address);
+				ttm->caching_state);
 }

which is trivial (though I have not compile-tested it), but it should do it. But I think you mean eliminating the dma_address handling completely in ttm_page_alloc.c and ttm_tt.c. For that there are a couple of architectural issues I am not sure how to solve.

There has to be some form of TTM<->[Radeon|Nouveau] lookup mechanism to say: "here is a 'struct page *', give me the bus address". Currently this is solved by keeping an array of DMA addresses along with the list of pages. And passing the list and DMA address up the stack (and down) from TTM up to the driver (when ttm->be->func->populate is called and they are handed off) does it. It does not break any API layering, and the internal TTM pool (non-DMA) can just ignore the dma_address altogether (see patch above).

But if we wanted to rip all mention of dma_addr out of TTM, one immediate way that comes to my mind is:

1) Provide a new function in the ttm->be->func that would be called 'get_dma', of:

   (int)(*get_dma)(struct list_head *pages, unsigned page_count, dma_addr_t *dma_address)

which would call the TTM DMA code to search the internal list, find 'pages*' (which were just a microsecond ago allocated by calling ttm->be->func->get_pages), and stick the bus address in the 'dma_address' array.

2) The radeon|nouveau driver would both call this if they decided to use the TTM DMA API. They would need to provide the newly allocated dma_address for this call.

3) Not sure how to wrap this in macros though - it looks as if both drivers will be riddled with 'if (ttm->be->func->get_pages) { private->dma_addr=kzalloc(...) } else {}'. But that is more an implementation problem.

While this idea looks correct, I am struck that it looks like it is breaking the layering of APIs, where the driver is reaching behind the TTM API and calling this extra function.

Another idea is to transform the 'struct dma_addr *dma_addr' into a 'void *override_p' in the 'struct ttm_tt'. That means still keeping the TTM API layers separate and "passing" the array of DMA addresses through the 'override_p' array (which would be allocated by the TTM DMA code). Something along these lines (not tested) - I like this more, but I haven't actually tested it, so I am not sure it works right:

diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index e0d4474..8760a04 100644
---
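The lookup Konrad describes - "here is a 'struct page *', give me the bus address" - can be sketched as a toy page/bus-address table in userspace C. Every name here is a stand-in (`dma_pool_sim`, `pool_get_dma`, the fixed-size table); the real pool keeps the mapping alongside its page lists, but the contract of a `get_dma`-style call is the same.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long dma_addr_t_sim;	/* stand-in for dma_addr_t */

#define POOL_MAX 8

/* Toy pool: remembers each allocated page's DMA address. */
struct dma_pool_sim {
	void *page[POOL_MAX];
	dma_addr_t_sim dma[POOL_MAX];
	int count;
};

/* Called at allocation time, when the pool still knows both halves. */
static void pool_remember(struct dma_pool_sim *p, void *page,
			  dma_addr_t_sim dma)
{
	p->page[p->count] = page;
	p->dma[p->count] = dma;
	p->count++;
}

/* The get_dma-style lookup: recover the bus address from the page. */
static int pool_get_dma(struct dma_pool_sim *p, void *page,
			dma_addr_t_sim *out)
{
	for (int i = 0; i < p->count; i++) {
		if (p->page[i] == page) {
			*out = p->dma[i];
			return 0;
		}
	}
	return -1;	/* page was not allocated from this pool */
}
```

The layering worry in the mail is visible even in the sketch: the caller has to reach back into the pool's private table after allocation, rather than receiving the address through the normal populate path.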
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #2 from Alex Deucher 2011-10-24 06:22:51 PDT --- Is there some reason why you want to use UMS? It's not really supported any more. -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug.
[Intel-gfx] [PATCH 1/2] Give up on edid retries when i2c tells us that bus is not there
On Thu, Oct 20, 2011 at 10:33, Jean Delvare wrote:
> Just to clarify: by "connectivity is setup", do you mean code in the driver setting the GPIO pin direction etc., or a display being connected to the graphics card?
>
> I admit I am a little surprised. I2C buses should have their lines up by default, thanks to pull-up resistors, and masters and slaves should merely drive the lines low as needed. The absence of slaves should have no impact on the line behavior. If the Intel graphics chips (or the driver) rely on the display to hold the lines high, I'd say this is a seriously broken design.
>
> As a side note, I thought that HDMI had the capability of presence detection, so there may be a better way to know if a display is connected or not, and this information could be used to dynamically instantiate and delete the I2C buses? That way we could skip probing for EDID when there is no chance of success.

Yes, I think so too. I admit I haven't gotten to the root of the problem here. My test was really simple, borrowed from test_bus() in i2c-algo-bit.c - without the HDMI cable plugged in, I was getting the "SDA stuck high" messages; with the cable plugged in, I wasn't getting any of those. But in any case, I haven't investigated deeper in the hardware direction after figuring out that drm_get_edid would retry its attempts at retrieving the EDID 15 times, getting the -ENXIO error for all of them.

> Well, you could always do manual line testing of the I2C bus _before_ calling drm_do_probe_ddc_edid()? And skip the call if the test fails (i.e. the bus isn't ready for use.) As said before, I am willing to export bit_test if it helps any. I've attached a patch doing exactly this. Let me know if you want me to commit it.

Yes, surely, I can do this.
I did a similar test in the i915-specific patch, checking if we can talk to the i2c adapter prior to calling drm_do_probe_ddc_edid, but I thought that perhaps it would be easier for all the cards relying on drm_get_edid to have a faster return path in case of problems. I am fine with either solution; if this problem only happens to appear on i915 cards, we could do this in our driver. It is just that 15 attempts looks like overkill.

> Your proposed patch is better than I first thought. I had forgotten that i2c-algo-bit tries up to 3 times to get a first ack from a slave. So if i2c_transfer returns -ENXIO, this means that i2c-algo-bit has already attempted 3 times to contact the slave, with no reply, so there's probably no point going on. A communication problem with a present slave would have returned a different error code.
>
> Without your patch, a missing slave would cause 3 * 5 = 15 retries, which is definitely too much.
>
> That being said, even then the whole probe sequence shouldn't exceed 10 ms, which I wouldn't expect a user to notice. The user-reported 4-second delay when running xrandr can't be caused by this. 4 seconds for 15 attempts is 250 ms per attempt; this looks like a timeout due to a non-functional bus, not a nack. Note that 250 ms is 1 jiffy for HZ=250, which I guess was what the reporting user was running. So even with your patch, there's still 750 ms to wait, and I don't think this is acceptable. You really have to fix that I2C bus or skip probing it.

Yep, true. I've followed the easiest route so far - find out where the initial problem happens, and attempt to solve it. This change in drm_get_edid solves the delay at its origin, but we certainly could have additional delays propagated somewhere else. I couldn't reproduce the original reporter's huge delay, so I looked at what was within my reach.
In any case - looking for a faster way to leave drm_do_probe_ddc_edid, while also allowing a way to do more detailed probing - would it be an acceptable solution to add a:

MODULE_PARM(skip_unresponsive_edid, "Ignore outputs which do not provide edid on first attempt");

and update the patch to use this option?

--
Eugeni Dodonov <http://eugeni.dodonov.net/>
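The behaviour change under discussion can be simulated in a few lines: a retry loop over EDID attempts that gives up immediately on -ENXIO (the "nobody acked on the bus" case Jean describes) instead of burning all of its retries. The function names, the simulated errno value, and the retry count are illustrative, not the drm code.

```c
#include <assert.h>

#define ENXIO_SIM 6	/* stand-in for -ENXIO */
#define RETRIES 5	/* drm_do_probe_ddc_edid retries, per the thread */

/* Sample transfer outcomes for exercising the loop. */
static int xfer_ok(void)      { return 0; }		/* valid EDID */
static int xfer_enxio(void)   { return -ENXIO_SIM; }	/* no slave */
static int xfer_timeout(void) { return -1; }		/* other error */

/* Schematic retry loop: -ENXIO means i2c-algo-bit already tried the
 * slave 3 times with no ack, so further EDID retries are pointless. */
static int probe_edid_sim(int (*xfer)(void), int *attempts)
{
	*attempts = 0;
	for (int i = 0; i < RETRIES; i++) {
		int ret = xfer();
		(*attempts)++;
		if (ret == 0)
			return 0;	/* got a block */
		if (ret == -ENXIO_SIM)
			break;		/* bus says nobody is there */
	}
	return -1;
}
```

With this shape, a missing display costs one attempt instead of five, while transient errors on a present display still get the full retry budget.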
[PATCH 1/2] drm/kms: Make i2c buses faster
On Sat, Oct 22, 2011 at 12:38, Alex Deucher wrote:
> On Fri, Oct 21, 2011 at 3:29 PM, Jean Delvare wrote:
>> Hi Alex,
>>
>> On Friday 21 October 2011 08:05:48 pm Alex Deucher wrote:
>>> On Fri, Oct 21, 2011 at 10:16 AM, Jean Delvare wrote:
>>>> Does anyone know at which speed hardware I2C engines are running the DDC bus on various graphics cards?
>>>
>>> IIRC, we generally target the radeon hw i2c engines to run at 50 kHz.
>>
>> Then it doesn't seem unreasonable to try and achieve the same for bit-banged I2C. That's exactly what my patch is doing.
>
> Seems fine to me then. I don't know why we set it so low to begin with, but I'm certainly not an i2c expert.

Seems fine to me as well.

Acked-by: Eugeni Dodonov

--
Eugeni Dodonov <http://eugeni.dodonov.net/>
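The arithmetic behind matching bit-banged I2C to a 50 kHz target is simple: i2c-algo-bit's `udelay` field is (roughly) the half-period of SCL in microseconds, so the achievable clock is about 1 / (2 x udelay). This is back-of-the-envelope only, not the driver's exact timing model, and the helper name is made up.

```c
#include <assert.h>

/* udelay (in us) needed for a target SCL frequency in kHz:
 * 1e6 us/s / (2 * khz * 1000 cycles/s) = 500 / khz us per half-cycle. */
static int udelay_for_khz(int khz)
{
	return 500 / khz;
}
```

So hitting the 50 kHz the radeon hardware engines reportedly use means a 10 us half-period, while the very low speeds mentioned in the thread correspond to much larger delays per bit.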
[Bug 42090] [r300/compiler] [bisected] sauerbraten texture corruption
https://bugs.freedesktop.org/show_bug.cgi?id=42090 --- Comment #2 from Fabio Pedretti 2011-10-24 00:19:29 PDT --- (In reply to comment #1) > Created attachment 52640 [details] [review] > Possible fix > > Does this patch fix the problem? Yes (tested on current git, and also with 0dc97e7fd49a5b8db25b95a1020fc598dba5cf65 reverted: both types of corruption are fixed with the patch). -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug.
nouveau page_flip function implement not wait vblank, which cause screen garbage
Dear, I use NVidia Geforce 7300GT graphics card in my PC, and Linux 3.1rc4 kernel code, git drm 2.4.36. When I run the vbltest program, it prints 60HZ which indicated the implementation of drmWaitVBlank() and drm_vblank_wait() is correct. But when I run modetest with option -v -s 12:1280x1024 , it prints high fresh rate up to 150 HZ . I examing the code , and found that no waiting vblank operation is processed in nouveau_crtc_page_flip() function. The screen produced lots of garbage and blink very much. int nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb, struct drm_pending_vblank_event *event) { .. } I study the i915 intel_crtc_page_flip implementation. static int intel_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb, struct drm_pending_vblank_event *event) { .. ret = drm_vblank_get(dev, intel_crtc-pipe); if (ret) goto cleanup_objs; work-pending_flip_obj = obj; work-enable_stall_check = true; /* Block clients from rendering to the new back buffer until * the flip occurs and the object is no longer visible. */ atomic_add(1 intel_crtc-plane, work-old_fb_obj-pending_flip); ret = dev_priv-display.queue_flip(dev, crtc, fb, obj); if (ret) goto cleanup_pending; .. } after vblank irq acquired, the interrupt isr will wakup the runqueue. 6159 static void do_intel_finish_page_flip(struct drm_device *dev, 6160 struct drm_crtc *crtc) 6161 { .. 6211 list_add_tail(e-base.link, 6212 e-base.file_priv-event_list); 6213 wake_up_interruptible(e-base.file_priv-event_wait); 6214 } 6215 6216 drm_vblank_put(dev, intel_crtc-pipe); 6217 Is there anyone use the same driver and found this issues can tell me is it a bug? Thanks! ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
[Bug 42090] [r300/compiler] [bisected] sauerbraten texture corruption
https://bugs.freedesktop.org/show_bug.cgi?id=42090 --- Comment #2 from Fabio Pedretti fabio@libero.it 2011-10-24 00:19:29 PDT --- (In reply to comment #1) Created attachment 52640 [details] [review] Possible fix Does this patch fix the problem? Yes (tested on current git and also with 0dc97e7fd49a5b8db25b95a1020fc598dba5cf65 reverted: both type of corruptions are fixed with the patch). -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug. ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #2 from Alex Deucher ag...@yahoo.com 2011-10-24 06:22:51 PDT --- Is there some reason why you want to use UMS? It's not really supported any more. -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug. ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH 1/2] drm/kms: Make i2c buses faster
On Sat, Oct 22, 2011 at 12:38, Alex Deucher alexdeuc...@gmail.com wrote: On Fri, Oct 21, 2011 at 3:29 PM, Jean Delvare jdelv...@suse.de wrote: Hi Alex, On Friday 21 October 2011 08:05:48 pm Alex Deucher wrote: On Fri, Oct 21, 2011 at 10:16 AM, Jean Delvare jdelv...@suse.de Does anyone know at which speed hardware I2C engines are running the DDC bus on various graphics cards? IIRC, we generally target the radeon hw i2c engines to run at 50 khz. Then it doesn't seem unreasonable to try and achieve the same for bit- banged I2C. That's exactly what my patch is doing. Seems fine to me then. I don't know why we set it so low to begin with, but I'm certainly not an i2c expert. Seems fine for me as well. Acked-by: Eugeni Dodonov eugeni.dodo...@intel.com -- Eugeni Dodonov http://eugeni.dodonov.net/ ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [Intel-gfx] [PATCH 1/2] Give up on edid retries when i2c tells us that bus is not there
On Thu, Oct 20, 2011 at 10:33, Jean Delvare kh...@linux-fr.org wrote: Just to clarify: by connectivity is setup, do you mean code in the driver setting the GPIO pin direction etc., or a display being connected to the graphics card? I admit I am a little surprised. I2C buses should have their lines up by default, thanks to pull-up resistors, and masters and slaves should merely drive the lines low as needed. The absence of slaves should have no impact on the line behavior. If the Intel graphics chips (or the driver) rely on the display to hold the lines high, I'd say this is a seriously broken design. As a side note, I thought that HDMI had the capability of presence detection, so there may be a better way to know if a display is connected or not, and this information could used to dynamically instantiate and delete the I2C buses? That way we could skip probing for EDID when there is no chance of success. Yes, I think so too. I admit I haven't got to the root of the problem here. My test was really simple, borrowed from the test_bus() at i2c-algo-bit.c - without HDMI cable plugged in, I was getting the SDA stuck high messages; with the cable plugged in, I wasn't getting any of those. But in any case, I haven't investigated it deeper in the hardware direction after figuring out that drm_get_edid would retry its attempts for retreiving the edid for 15 times, getting the -ENXIO error for all of them. Well, you could always do manual line testing of the I2C bus _before_ calling drm_do_probe_ddc_edid()? And skip the call if the test fails (i.e. the bus isn't ready for use.) As said before, I am willing to export bit_test if it helps any. I've attached a patch doing exactly this. Let me know if you want me to commit it. Yes, surely, I can do this. 
I did a similar test in the i915-specific patch, checking if we can talk to i2c adapter pior to call drm_do_probe_ddc_edid, but I thought that perhaps it would be easier for all the cards relying on drm_get_edid to have a faster return path in case of problems. I am fine with any solution, if this problem is happening to appear on i915 cards only, we could do this in our driver. It is that 15 attempts looks a bit overkill. Your proposed patch is better than I first thought. I had forgotten that i2c-algo-bit tries up to 3 times to get a first ack from a slave. So if i2c_transfer returns -ENXIO, this means that i2c-algo-bit has already attempted 3 times to contact the slave, with no reply, so there's probably no point going on. A communication problem with a present slave would have returned a different error code. Without your patch, a missing slave would cause 3 * 5 = 15 retries, which is definitely too much. That being said, even then the whole probe sequence shouldn't exceed 10 ms, which I wouldn't expect a user to notice. The user-reported 4 second delay when running xrandr can't be caused by this. 4 seconds for 15 attempts is 250 ms per attempt, this looks like a timeout due to non-functional bus, not a nack. Not that 250 ms is 1 jiffy for HZ=250, which I guess was what the reporting user was running. So even with your patch, there's still 750 ms to wait, I don't think this is acceptable. You really have to fix that I2C bus or skip probing it. Yep, true. I've followed the easiest route so far - find out where the initial problem happens, and attempt at solving it. This change in drm_get_edid solves the delay at its origin, but we certainly could have additional delays propagated somewhere else. I couldn't reproduce the original reporter's huge delay, so I looked at what was within my reach. 
In any case - looking at a faster way to leave the drm_do_probe_ddc_edid, while also allowing a way for having a more detailed probing - would it be an acceptable solution to add a: MODULE_PARM(skip_unresponsive_edid, Ignore outputs which do not provide edid on first attempt); and update the patch to use this option? -- Eugeni Dodonov http://eugeni.dodonov.net/ ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On 10/08/2011 12:03 AM, Marek Olšák wrote: On Fri, Oct 7, 2011 at 10:00 AM, Thomas Hellstromtho...@shipmail.org wrote: OK. First I think we need to make a distinction: bo sync objects vs driver fences. The bo sync obj api is there to strictly provide functionality that the ttm bo subsystem is using, and that follows a simple set of rules: 1) the bo subsystem does never assume sync objects are ordered. That means the bo subsystem needs to wait on a sync object before removing it from a buffer. Any other assumption is buggy and must be fixed. BUT, if that assumption takes place in the driver unknowingly from the ttm bo subsystem (which is usually the case), it's OK. 2) When the sync object(s) attached to the bo are signaled the ttm bo subsystem is free to copy the bo contents and to unbind the bo. 3) The ttm bo system allows sync objects to be signaled in different ways opaque to the subsystem using sync_obj_arg. The driver is responsible for setting up that argument. 4) Driver fences may be used for or expose other functionality or adaptions to APIs as long as the sync obj api exported to the bo sybsystem follows the above rules. This means the following w r t the patch. A) it violates 1). This is a bug that must be fixed. Assumptions that if one sync object is singnaled, another sync object is also signaled must be done in the driver and not in the bo subsystem. Hence we need to explicitly wait for a fence to remove it from the bo. B) the sync_obj_arg carries *per-sync-obj* information on how it should be signaled. If we need to attach multiple sync objects to a buffer object, we also need multiple sync_obj_args. This is a bug and needs to be fixed. C) There is really only one reason that the ttm bo subsystem should care about multiple sync objects, and that is because the driver can't order them efficiently. A such example would be hardware with multiple pipes reading simultaneously from the same texture buffer. 
Currently we don't support this so only the *last* sync object needs to be know by the bo subsystem. Keeping track of multiple fences generates a lot of completely unnecessary code in the ttm_bo_util file, the ttm_bo_vm file, and will be a nightmare if / when we truly support pipelined moves. As I understand it from your patches, you want to keep multiple fences around only to track rendering history. If we want to do that generically, i suggest doing it in the execbuf util code in the following way: struct ttm_eu_rendering_history { void *last_read_sync_obj; void *last_read_sync_obj_arg; void *last_write_sync_obj; void *last_write_sync_obj_arg; } Embed this structure in the radeon_bo, and build a small api around it, including *optionally* passing it to the existing execbuf utilities, and you should be done. The bo_util code and bo_vm code doesn't care about the rendering history. Only that the bo is completely idle. Note also that when an accelerated bo move is scheduled, the driver needs to update this struct OK, sounds good. I'll fix what should be fixed and send a patch when it's ready. I am now not so sure whether doing this generically is a good idea. :) Marek Marek, Any progress on this. The merge window is about to open soon I guess and we need a fix by then. /Thomas ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #3 from Michal majkel...@interia.pl 2011-10-24 08:56:25 PDT --- Well, yes, ums is about 2x faster then kms. I think the problem is somewhere in textures. Lowering texture quality in openarena, fps jumps from 5 to 60. The same with etracer, with low res textures copied from version 3.5, fps jumps from 3 to 30. -- Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug. ___ dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On Sat, Oct 8, 2011 at 1:32 PM, Thomas Hellstrom <tho...@shipmail.org> wrote:
> On 10/08/2011 01:27 PM, Ville Syrjälä wrote:
>> On Sat, Oct 08, 2011 at 01:10:13PM +0200, Thomas Hellstrom wrote:
>>> On 10/08/2011 12:26 PM, Ville Syrjälä wrote:
>>>> On Fri, Oct 07, 2011 at 10:58:13AM +0200, Thomas Hellstrom wrote:
>>>>> Oh, and one more style comment below. On 08/07/2011 10:39 PM, Marek Olšák wrote:
>>>>>> +enum ttm_buffer_usage {
>>>>>> +	TTM_USAGE_READ = 1,
>>>>>> +	TTM_USAGE_WRITE = 2,
>>>>>> +	TTM_USAGE_READWRITE = TTM_USAGE_READ | TTM_USAGE_WRITE
>>>>>> +};
>>>>> Please don't use enums for bit operations.
>>>> Now I'm curious. Why not?
>>> Because it's inconsistent with how flags are defined in the rest of the TTM module.
>> Ah OK. I was wondering if there's some subtle technical issue involved. I've recently gotten into the habit of using enums for pretty much all constants. Just easier on the eye IMHO, and avoids cpp output from looking like number soup.
> Yes, there are a number of advantages, including symbolic debugger output. If we had flag enums that enumerated 1, 2, 4, 8 etc. I'd feel motivated to move all TTM definitions over.

I don't think how it is enumerated matters in any way. What I like about enums, besides what has already been mentioned, is that they add self-documentation to the code. Compare:

void ttm_set_bo_flags(unsigned flags);

And:

void ttm_set_bo_flags(enum ttm_bo_flags flags);

The latter is way easier to understand for somebody who doesn't know the code and wants to implement his first patch. With the latter, it's clear at first glance what the valid values for flags are, because you can just search for enum ttm_bo_flags. I will change the enum to defines for the sake of following your code style convention, but it's an unreasonable convention, to say the least.

Marek
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
On 10/24/2011 06:42 PM, Marek Olšák wrote:
> On Sat, Oct 8, 2011 at 1:32 PM, Thomas Hellstrom tho...@shipmail.org wrote:
>> On 10/08/2011 01:27 PM, Ville Syrjälä wrote:
>>> On Sat, Oct 08, 2011 at 01:10:13PM +0200, Thomas Hellstrom wrote:
>>>> On 10/08/2011 12:26 PM, Ville Syrjälä wrote:
>>>>> On Fri, Oct 07, 2011 at 10:58:13AM +0200, Thomas Hellstrom wrote:
>>>>>> Oh, and one more style comment below:
>>>>>>
>>>>>> On 08/07/2011 10:39 PM, Marek Olšák wrote:
>>>>>>> +enum ttm_buffer_usage {
>>>>>>> +	TTM_USAGE_READ = 1,
>>>>>>> +	TTM_USAGE_WRITE = 2,
>>>>>>> +	TTM_USAGE_READWRITE = TTM_USAGE_READ | TTM_USAGE_WRITE
>>>>>>> +};
>>>>>>
>>>>>> Please don't use enums for bit operations.
>>>>>
>>>>> Now I'm curious. Why not?
>>>>
>>>> Because it's inconsistent with how flags are defined in the rest of the TTM module.
>>>
>>> Ah OK. I was wondering if there's some subtle technical issue involved. I've recently gotten into the habit of using enums for pretty much all constants. Just easier on the eye IMHO, and it avoids cpp output from looking like number soup.
>>
>> Yes, there are a number of advantages, including symbolic debugger output. If we had flag enums that enumerated 1, 2, 4, 8 etc. I'd feel motivated to move all TTM definitions over.
>
> I don't think that how it is enumerated matters in any way. What I like about enums, besides what has already been mentioned, is that they add self-documentation to the code. Compare:
>
> void ttm_set_bo_flags(unsigned flags);
>
> And:
>
> void ttm_set_bo_flags(enum ttm_bo_flags flags);
>
> The latter is way easier to understand for somebody who doesn't know the code and wants to implement his first patch. With the latter, it's clear at first glance what the valid values for flags are, because you can just search for enum ttm_bo_flags.
>
> I will change the enum to defines for the sake of following your code style convention, but it's an unreasonable convention to say the least.
>
> Marek

I'm not going to argue against this, because you're probably right. The important thing is that we get the fix in with or without enums.

Thanks,
Thomas
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Hi Thomas,

I have made no progress so far due to lack of time. Would you mind if I fixed the most important things first, which are:

- sync objects are not ordered, (A)
- every sync object must have its corresponding sync_obj_arg, (B)

and if I fixed (C) some time later?

I planned on moving the two sync objects from ttm into radeon and not using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it does), but it looks more complicated to me than I had originally thought.

Marek

On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom tho...@shipmail.org wrote:
> Marek,
>
> Any progress on this? The merge window is about to open soon I guess and we need a fix by then.
>
> /Thomas
Re: [PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
On Sat, Oct 22, 2011 at 11:40:54AM +0200, Thomas Hellstrom wrote:
> Konrad,
>
> I was hoping that we could get rid of the dma_address shuffling into core TTM, like I mentioned in the review. From what I can tell it's now only used in the backend and core ttm doesn't care about it. Is there a particular reason we're still passing it around?

Yes - and I should have addressed that in the writeup but forgot, sorry about that. So initially I thought you meant this:

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 360afb3..06ef048 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -662,8 +662,7 @@ out:
 /* Put all pages in pages list to correct pool to wait for reuse */
 static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
-			    int flags, enum ttm_caching_state cstate,
-			    dma_addr_t *dma_address)
+			    int flags, enum ttm_caching_state cstate)
 {
 	unsigned long irq_flags;
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -707,8 +706,7 @@ static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
  * cached pages.
  */
 static int __ttm_get_pages(struct list_head *pages, int flags,
-			   enum ttm_caching_state cstate, unsigned count,
-			   dma_addr_t *dma_address)
+			   enum ttm_caching_state cstate, unsigned count)
 {
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct page *p = NULL;
@@ -864,7 +862,7 @@ int ttm_get_pages(struct ttm_tt *ttm, struct list_head *pages,
 	if (ttm->be && ttm->be->func && ttm->be->func->get_pages)
 		return ttm->be->func->get_pages(ttm, pages, count, dma_address);
 	return __ttm_get_pages(pages, ttm->page_flags, ttm->caching_state,
-			       count, dma_address);
+			       count);
 }
 void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
 		   unsigned page_count, dma_addr_t *dma_address)
@@ -873,5 +871,5 @@ void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
 		ttm->be->func->put_pages(ttm, pages, page_count, dma_address);
 	else
 		__ttm_put_pages(pages, page_count, ttm->page_flags,
-				ttm->caching_state, dma_address);
+				ttm->caching_state);
 }

which is trivial (though I have not compile-tested it), but it should do it. But I think you mean eliminating the dma_address handling completely in ttm_page_alloc.c and ttm_tt.c. For that there are a couple of architectural issues I am not sure how to solve. There has to be some form of TTM->[Radeon|Nouveau] lookup mechanism to say: here is a 'struct page *', give me the bus address. Currently this is solved by keeping an array of DMA addresses along with the list of pages, and passing the list and DMA addresses up the stack (and down) from TTM up to the driver (when ttm->be->func->populate is called and they are handed off). It does not break any API layering, and the internal TTM pool (non-DMA) can just ignore the dma_address altogether (see patch above). But if we wanted to rip all mention of dma_addr from TTM, one immediate way that comes to my mind is:

1). Provide a new function in ttm->be->func that would be called 'get_dma':

int (*get_dma)(struct list_head *pages, unsigned page_count, dma_addr_t *dma_address);

which would call the TTM DMA code to search the internal list, find 'pages*' (which were just a microsecond ago allocated by calling ttm->be->func->get_pages) and stick the bus addresses in the 'dma_address' array.

2). The radeon|nouveau drivers would both call this if they decided to use the TTM DMA API. They would need to provide the newly allocated dma_address array for this call.

3). Not sure how to wrap this in macros though - it looks as if both drivers will be riddled with 'if (ttm->be->func->get_pages) { private->dma_addr = kzalloc(...); } else { }'. But that is more an implementation problem.

.. While this idea looks correct, I am struck that it looks like it is breaking the layering of APIs, where the driver is reaching behind the TTM API and calling this extra function?

Another idea is to transform the 'struct dma_addr *dma_addr' to a 'void *override_p' in 'struct ttm_tt'. That means still keeping the TTM API layers separate, and passing the array of DMA addresses through the 'override_p' array (which would be allocated by the TTM DMA code). Something along these lines (not tested); I like this more, but I haven't actually tested it, so I am not sure it works right:

diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index e0d4474..8760a04 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Marek,

The problem is that the patch adds a lot of complicated code where it's not needed, and I don't want to end up reverting that code and re-implementing the new Radeon gem ioctl by myself. Having a list of two fence objects and waiting for either of them shouldn't be that complicated to implement, in particular when it's done in a driver-specific way and you have the benefit of assuming that they are ordered. Since the new functionality is a performance improvement, if time is an issue I suggest we back this change out and go for the next merge window.

/Thomas

On 10/24/2011 07:10 PM, Marek Olšák wrote:
> Hi Thomas,
>
> I have made no progress so far due to lack of time. Would you mind if I fixed the most important things first, which are:
> - sync objects are not ordered, (A)
> - every sync object must have its corresponding sync_obj_arg, (B)
> and if I fixed (C) some time later?
>
> I planned on moving the two sync objects from ttm into radeon and not using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it does), but it looks more complicated to me than I had originally thought.
>
> Marek
>
> On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom tho...@shipmail.org wrote:
>> Marek,
>>
>> Any progress on this? The merge window is about to open soon I guess and we need a fix by then.
>>
>> /Thomas
Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write
Alright then. Dave, if you are reading this, feel free not to include the two patches I sent you in the next pull request.

Marek

On Mon, Oct 24, 2011 at 7:28 PM, Thomas Hellstrom tho...@shipmail.org wrote:
> Marek,
>
> The problem is that the patch adds a lot of complicated code where it's not needed, and I don't want to end up reverting that code and re-implementing the new Radeon gem ioctl by myself. Having a list of two fence objects and waiting for either of them shouldn't be that complicated to implement, in particular when it's done in a driver-specific way and you have the benefit of assuming that they are ordered. Since the new functionality is a performance improvement, if time is an issue I suggest we back this change out and go for the next merge window.
>
> /Thomas
>
> On 10/24/2011 07:10 PM, Marek Olšák wrote:
>> Hi Thomas,
>>
>> I have made no progress so far due to lack of time. Would you mind if I fixed the most important things first, which are:
>> - sync objects are not ordered, (A)
>> - every sync object must have its corresponding sync_obj_arg, (B)
>> and if I fixed (C) some time later?
>>
>> I planned on moving the two sync objects from ttm into radeon and not using ttm_bo_wait from radeon (i.e. pretty much reimplementing what it does), but it looks more complicated to me than I had originally thought.
>>
>> Marek
>>
>> On Mon, Oct 24, 2011 at 4:48 PM, Thomas Hellstrom tho...@shipmail.org wrote:
>>> Marek,
>>>
>>> Any progress on this? The merge window is about to open soon I guess and we need a fix by then.
>>>
>>> /Thomas
Re: [PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
On 10/24/2011 07:27 PM, Konrad Rzeszutek Wilk wrote:
> On Sat, Oct 22, 2011 at 11:40:54AM +0200, Thomas Hellstrom wrote:
>> Konrad,
>>
>> I was hoping that we could get rid of the dma_address shuffling into core TTM, like I mentioned in the review. From what I can tell it's now only used in the backend and core ttm doesn't care about it. Is there a particular reason we're still passing it around?
>
> Yes - and I should have addressed that in the writeup but forgot, sorry about that. So initially I thought you meant this:
>
> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> index 360afb3..06ef048 100644
> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> @@ -662,8 +662,7 @@ out:
>  /* Put all pages in pages list to correct pool to wait for reuse */
>  static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
> -			    int flags, enum ttm_caching_state cstate,
> -			    dma_addr_t *dma_address)
> +			    int flags, enum ttm_caching_state cstate)
>  {
>  	unsigned long irq_flags;
>  	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
> @@ -707,8 +706,7 @@ static void __ttm_put_pages(struct list_head *pages, unsigned page_count,
>   * cached pages.
>   */
>  static int __ttm_get_pages(struct list_head *pages, int flags,
> -			   enum ttm_caching_state cstate, unsigned count,
> -			   dma_addr_t *dma_address)
> +			   enum ttm_caching_state cstate, unsigned count)
>  {
>  	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
>  	struct page *p = NULL;
> @@ -864,7 +862,7 @@ int ttm_get_pages(struct ttm_tt *ttm, struct list_head *pages,
>  	if (ttm->be && ttm->be->func && ttm->be->func->get_pages)
>  		return ttm->be->func->get_pages(ttm, pages, count, dma_address);
>  	return __ttm_get_pages(pages, ttm->page_flags, ttm->caching_state,
> -			       count, dma_address);
> +			       count);
>  }
>  void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
>  		   unsigned page_count, dma_addr_t *dma_address)
> @@ -873,5 +871,5 @@ void ttm_put_pages(struct ttm_tt *ttm, struct list_head *pages,
>  		ttm->be->func->put_pages(ttm, pages, page_count, dma_address);
>  	else
>  		__ttm_put_pages(pages, page_count, ttm->page_flags,
> -				ttm->caching_state, dma_address);
> +				ttm->caching_state);
>  }
>
> which is trivial (though I have not compile-tested it), but it should do it. But I think you mean eliminating the dma_address handling completely in ttm_page_alloc.c and ttm_tt.c. For that there are a couple of architectural issues I am not sure how to solve. There has to be some form of TTM->[Radeon|Nouveau] lookup mechanism to say: here is a 'struct page *', give me the bus address. Currently this is solved by keeping an array of DMA addresses along with the list of pages, and passing the list and DMA addresses up the stack (and down) from TTM up to the driver (when ttm->be->func->populate is called and they are handed off). It does not break any API layering, and the internal TTM pool (non-DMA) can just ignore the dma_address altogether (see patch above).

I actually had something simpler in mind, but when thinking a bit deeper into it, it seems more complicated than I initially thought. Namely that when we allocate pages from the ttm_backend, we actually populate it at the same time. be::populate would then not take a page array as an argument, and would actually be a no-op on many drivers. This makes us move towards struct ttm_tt consisting almost only of its backend, so that whole API should perhaps be looked at with new eyes.

So anyway, I'm fine with the high-level things as they are now, and the dma_addr issue can be looked at at a later time. If we could get a couple of extra eyes to review the code for style etc., that would be great, because I have very little time the next couple of weeks.

/Thomas
Re: [PATCH 06/11] ttm/driver: Expand ttm_backend_func to include two overrides for TTM page pool.
>> For that there are a couple of architectural issues I am not sure how to solve. There has to be some form of TTM->[Radeon|Nouveau] lookup mechanism to say: here is a 'struct page *', give me the bus address. Currently this is solved by keeping an array of DMA addresses along with the list of pages, and passing the list and DMA addresses up the stack (and down) from TTM up to the driver (when ttm->be->func->populate is called and they are handed off). It does not break any API layering, and the internal TTM pool (non-DMA) can just ignore the dma_address altogether (see patch above).
>
> I actually had something simpler in mind, but when thinking a bit deeper into it, it seems more complicated than I initially thought. Namely that when we allocate pages from the ttm_backend, we actually populate it at the same time. be::populate would then not take a page array as an argument, and would actually be a no-op on many drivers.

The programming of the gfx MMU would be done via a new API call? I think this needs a bit of whiteboarding for me to be sure I understand you.

> This makes us move towards struct ttm_tt consisting almost only of its backend, so that whole API should perhaps be looked at with new eyes. So anyway, I'm fine with the high-level things as they are now, and the dma_addr issue can be looked at at a later time.

Great!

> If we could get a couple of extra eyes to review the code for style etc., that would be great, because I have very little time the next couple of weeks.

Anybody in particular you can recommend that I can pester^H^H^H^H politely ask? :-)

*nods* Understood.
[Bug 42067] [r600g] Compiz emblem icons corrupted on cayman
https://bugs.freedesktop.org/show_bug.cgi?id=42067 --- Comment #3 from Harald Judt h.j...@gmx.at 2011-10-24 11:25:31 PDT --- This was caused by enabling texture compression in core settings. Turning it off makes the icons look right again.
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #4 from Roland Scheidegger srol...@vmware.com 2011-10-24 11:30:14 PDT --- Some performance difference is expected, since KMS doesn't support tiling on r200, though I would expect a 2x difference only if you also enabled hyperz manually. That said, if performance jumps a lot higher with lower texture settings, this looks like a problem with bo placement/migration: the old code was simple and terrible in some cases (it could easily get texture thrashing), whereas the new code is different (but still not smart enough). In any case, if you don't have libtxc installed, try that, as openarena can use it and its textures will then use much less vram. If you have a compositing manager running, try disabling it, as it will also use more memory. Though if you have some terribly memory-constrained chip, nothing might help much (you should really have at least 64MB vram). As for the UMS problem, it looks like the driver is hitting a software fallback. RADEON_DEBUG=fall might tell you why - not that there's any chance it will get fixed...
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #5 from Marek Olšák mar...@gmail.com 2011-10-24 11:46:25 PDT --- I am inclined to believe that tiling can make such a difference.
[Bug 42067] [r600g] Compiz emblem icons corrupted on cayman
https://bugs.freedesktop.org/show_bug.cgi?id=42067 --- Comment #4 from Harald Judt h.j...@gmx.at 2011-10-24 11:58:24 PDT --- I think this might rather be a duplicate of bug 38173.
[Bug 42175] New: RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175

Bug #: 42175
Summary: RV730: Display errors in glxgears & WebGL
Classification: Unclassified
Product: Mesa
Version: git
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r600
AssignedTo: dri-devel@lists.freedesktop.org
ReportedBy: kde...@vogtner.de

Since Mesa 7.11-rc2 the Gallium driver is used on my machine and produces display errors. Find attached a glxgears screenshot and a screen dump of http://learningwebgl.com/lessons/lesson03/index.html produced with the current Firefox. Up to and including 7.11-rc1, the old driver seemed to be used; it does not produce rendering errors, but it crashes Firefox in http://learningwebgl.com/lessons/lesson16/index.html or in the Aquarium of http://code.google.com/p/webglsamples/.
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175 --- Comment #1 from Stefan kde...@vogtner.de 2011-10-24 14:06:39 PDT --- Created attachment 52710 -- https://bugs.freedesktop.org/attachment.cgi?id=52710 glxgears screenshot
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175 --- Comment #2 from Stefan kde...@vogtner.de 2011-10-24 14:07:44 PDT --- Created attachment 52713 -- https://bugs.freedesktop.org/attachment.cgi?id=52713 Lesson 3 screenshot
Re: [PATCH 1/2] vmwgfx: Emulate depth 32 framebuffers
On Sat, Oct 22, 2011 at 10:29:33AM +0200, Thomas Hellstrom wrote:
> From: Jakob Bornecrantz ja...@vmware.com
>
> Signed-off-by: Jakob Bornecrantz ja...@vmware.com
> Signed-off-by: Thomas Hellstrom thellst...@vmware.com
> ---
>  drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 10 +++++++++-
>  1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> index 39b99db..00ec619 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> @@ -679,6 +679,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  				  struct vmw_private *dev_priv,
>  				  struct vmw_framebuffer *framebuffer)
>  {
> +	int depth = framebuffer->base.depth;
>  	size_t fifo_size;
>  	int ret;
> @@ -687,6 +688,13 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  		SVGAFifoCmdDefineGMRFB body;
>  	} *cmd;
> +	/* Emulate RGBA support, contrary to svga_reg.h this is not
> +	 * supported by hosts. This is only a problem if we are reading

Uh, what if it becomes supported at some point? Should there be some check against the host version? (Thinking that some user might be running this older driver with a newer host that does support 32 - won't that cause issues?)

> +	 * this value later and expecting what we uploaded back.
> +	 */
> +	if (depth == 32)
> +		depth = 24;
> +
>  	fifo_size = sizeof(*cmd);
>  	cmd = kmalloc(fifo_size, GFP_KERNEL);
>  	if (unlikely(cmd == NULL)) {
> @@ -697,7 +705,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>  	memset(cmd, 0, fifo_size);
>  	cmd->header = SVGA_CMD_DEFINE_GMRFB;
>  	cmd->body.format.bitsPerPixel = framebuffer->base.bits_per_pixel;
> -	cmd->body.format.colorDepth = framebuffer->base.depth;
> +	cmd->body.format.colorDepth = depth;
>  	cmd->body.format.reserved = 0;
>  	cmd->body.bytesPerLine = framebuffer->base.pitch;
>  	cmd->body.ptr.gmrId = framebuffer->user_handle;
> --
> 1.7.4.4
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175 --- Comment #3 from Alex Deucher ag...@yahoo.com 2011-10-24 14:44:38 PDT --- Can you bisect?
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175 --- Comment #4 from Stefan kde...@vogtner.de 2011-10-24 14:55:59 PDT ---

(In reply to comment #3)
> Can you bisect?

The change took place between 7.11-rc1 and -rc2:

a8907c6005d7935b4520255e12184c139471b5b9 is the first bad commit
commit a8907c6005d7935b4520255e12184c139471b5b9
Author: Benjamin Franzke benjaminfran...@googlemail.com
Date: Sat Jul 2 13:41:35 2011 +0200

But: this is where the switch from the old driver to the Gallium driver takes place. The old driver was used in my configuration up to 7.11-rc1 because the udev-devel package was not installed. After installing it, 7.11-rc1 eventually uses the new driver too. So currently I don't have a "good" reference for a bisection.

BTW: https://bugzilla.mozilla.org/show_bug.cgi?id=693056 (origin of the current problems).
Re: [PATCH] drm/radeon: avoid bouncing connector status btw disconnected & unknown
On Mon, Oct 24, 2011 at 6:16 PM, j.gli...@gmail.com wrote:
> From: Jerome Glisse jgli...@redhat.com
>
> Since the force-handling rework of d0d0a225e6ad43314c9aa7ea081f76adc5098ad4 we could end up bouncing the connector status between disconnected and unknown. When the connector status changes, a call to output_poll_changed happens, which in turn asks again for detect, but with force set. So set the load-detect flags whenever we report the connector as connected or unknown; this avoids bouncing between disconnected and unknown.

Looks good.

Reviewed-by: Alex Deucher alexander.deuc...@amd.com

> Signed-off-by: Jerome Glisse jgli...@redhat.com
> ---
>  drivers/gpu/drm/radeon/radeon_connectors.c | 5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
> index dec6cbe..ff6a2e0 100644
> --- a/drivers/gpu/drm/radeon/radeon_connectors.c
> +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
> @@ -764,7 +764,7 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
>  	if (radeon_connector->dac_load_detect && encoder) {
>  		encoder_funcs = encoder->helper_private;
>  		ret = encoder_funcs->detect(encoder, connector);
> -		if (ret == connector_status_connected)
> +		if (ret != connector_status_disconnected)
>  			radeon_connector->detected_by_load = true;
>  	}
>  }
> @@ -1005,8 +1005,9 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
>  			ret = encoder_funcs->detect(encoder, connector);
>  			if (ret == connector_status_connected) {
>  				radeon_connector->use_digital = false;
> -				radeon_connector->detected_by_load = true;
>  			}
> +			if (ret != connector_status_disconnected)
> +				radeon_connector->detected_by_load = true;
>  		}
>  		break;
>  	}
> --
> 1.7.1
Re: [PATCH 1/2] vmwgfx: Emulate depth 32 framebuffers
On Mon, Oct 24, 2011 at 03:01:23PM -0700, Jakob Bornecrantz wrote:
> ----- Original Message -----
>> On Sat, Oct 22, 2011 at 10:29:33AM +0200, Thomas Hellstrom wrote:
>>> From: Jakob Bornecrantz ja...@vmware.com
>>>
>>> Signed-off-by: Jakob Bornecrantz ja...@vmware.com
>>> Signed-off-by: Thomas Hellstrom thellst...@vmware.com
>>> ---
>>>  drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 10 +++++++++-
>>>  1 files changed, 9 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
>>> index 39b99db..00ec619 100644
>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
>>> @@ -679,6 +679,7 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>>>  				  struct vmw_private *dev_priv,
>>>  				  struct vmw_framebuffer *framebuffer)
>>>  {
>>> +	int depth = framebuffer->base.depth;
>>>  	size_t fifo_size;
>>>  	int ret;
>>> @@ -687,6 +688,13 @@ static int do_dmabuf_define_gmrfb(struct drm_file *file_priv,
>>>  		SVGAFifoCmdDefineGMRFB body;
>>>  	} *cmd;
>>> +	/* Emulate RGBA support, contrary to svga_reg.h this is not
>>> +	 * supported by hosts. This is only a problem if we are reading
>>
>> Uh, what if it becomes supported at some point? Should there be some check against the host version? (Thinking that some user might be running this older driver with a newer host that does support 32 - won't that cause issues?)
>
> We can add a check then; from the point of view of userspace, 32-bit framebuffers work fine. The problem is with the readback ioctl, where the read-back pixels' alpha will be clobbered. We also don't support depths of 30 (R10G10B10X2). If we add support for depth 32 and/or depth 30 formats, we can add params to tell userspace about that; right now there isn't really a point to them, since they would always return not supported and we need to do any workaround in userspace anyway.
>
> Cheers, Jakob.

*nods* Sounds reasonable.
[Bug 41698] [r300g] Flickering user interface in WoW
https://bugs.freedesktop.org/show_bug.cgi?id=41698 --- Comment #4 from Chris Rankin ranki...@googlemail.com 2011-10-24 15:24:33 PDT --- This bug is still happening after this new commit:

commit 2717b8f034db16cf551e167aa5ce3a9be3bf730b
Author: Mathias Fröhlich mathias.froehl...@gmx.net
Date: Sat Oct 8 21:33:23 2011 +0200

    winsys/radeon: restore the old r600g winsys memory characteristics.

However, this new commit also means that the original commit can no longer be trivially reverted.
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #6 from Michal majkel...@interia.pl 2011-10-24 15:42:28 PDT --- In tests I'm using fluxbox without any compositing. My card is a Radeon 9100 (rebranded 8500) with 128MB vram. The libtxc library didn't change anything. RADEON_DEBUG=fall prints in a loop:

R200 begin tcl fallback Rasterization fallback
R200 begin rasterization fallback: 0x1 Texture mode
R200 end tcl fallback Rasterization fallback
R200 end tcl fallback
R200 end rasterization fallback: 0x1 Texture mode
R200 begin tcl fallback Rasterization fallback
R200 begin rasterization fallback: 0x1 Texture mode
R200 end tcl fallback Rasterization fallback
R200 end tcl fallback
R200 end rasterization fallback: 0x1 Texture mode

I forgot to add: in openarena, I get about 2-4 fps in normal play, but when I look down at the ground, fps immediately jumps to about 80.
[Bug 41698] [r300g] Flickering user interface in WoW
https://bugs.freedesktop.org/show_bug.cgi?id=41698 --- Comment #5 from Marek Olšák mar...@gmail.com 2011-10-24 16:18:42 PDT --- Created attachment 52721 -- https://bugs.freedesktop.org/attachment.cgi?id=52721 possible fix Can you try this patch?
Re: [PATCH] DRM: bug: RADEON_DEBUGFS_MAX_{NUM_FILES => COMPONENTS}
On Fri, Oct 7, 2011 at 19:20, Michael Witten mfwit...@gmail.com wrote:

> Date: Fri, 16 Sep 2011 20:45:30 +0000
>
> The value of RADEON_DEBUGFS_MAX_NUM_FILES has been used to specify
> the size of an array, each element of which looks like this:
>
>     struct radeon_debugfs {
>         struct drm_info_list *files;
>         unsigned num_files;
>     };
>
> Consequently, the number of debugfs files may be much greater than
> RADEON_DEBUGFS_MAX_NUM_FILES, something that the current code ignores:
>
>     if ((_radeon_debugfs_count + nfiles) > RADEON_DEBUGFS_MAX_NUM_FILES) {
>         DRM_ERROR("Reached maximum number of debugfs files.\n");
>         DRM_ERROR("Report so we increase RADEON_DEBUGFS_MAX_NUM_FILES.\n");
>         return -EINVAL;
>     }
>
> This commit fixes this mistake, and accordingly renames:
>
>     RADEON_DEBUGFS_MAX_NUM_FILES
>
> to:
>
>     RADEON_DEBUGFS_MAX_COMPONENTS
>
> Signed-off-by: Michael Witten mfwit...@gmail.com
> ---
>  drivers/gpu/drm/radeon/radeon.h        |  2 +-
>  drivers/gpu/drm/radeon/radeon_device.c | 13 -
>  2 files changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index c1e056b..dd7bab9 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -102,7 +102,7 @@ extern int radeon_pcie_gen2;
>  #define RADEON_FENCE_JIFFIES_TIMEOUT		(HZ / 2)
>  /* RADEON_IB_POOL_SIZE must be a power of 2 */
>  #define RADEON_IB_POOL_SIZE			16
> -#define RADEON_DEBUGFS_MAX_NUM_FILES		32
> +#define RADEON_DEBUGFS_MAX_COMPONENTS		32
>  #define RADEONFB_CONN_LIMIT			4
>  #define RADEON_BIOS_NUM_SCRATCH		8
>
> diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
> index b51e157..31b1f4b 100644
> --- a/drivers/gpu/drm/radeon/radeon_device.c
> +++ b/drivers/gpu/drm/radeon/radeon_device.c
> @@ -981,7 +981,7 @@ struct radeon_debugfs {
>  	struct drm_info_list *files;
>  	unsigned num_files;
>  };
> -static struct radeon_debugfs _radeon_debugfs[RADEON_DEBUGFS_MAX_NUM_FILES];
> +static struct radeon_debugfs _radeon_debugfs[RADEON_DEBUGFS_MAX_COMPONENTS];
>  static unsigned _radeon_debugfs_count = 0;
>
>  int radeon_debugfs_add_files(struct radeon_device *rdev,
> @@ -996,14 +996,17 @@ int radeon_debugfs_add_files(struct radeon_device *rdev,
>  			return 0;
>  		}
>  	}
> -	if ((_radeon_debugfs_count + nfiles) > RADEON_DEBUGFS_MAX_NUM_FILES) {
> -		DRM_ERROR("Reached maximum number of debugfs files.\n");
> -		DRM_ERROR("Report so we increase RADEON_DEBUGFS_MAX_NUM_FILES.\n");
> +
> +	i = _radeon_debugfs_count + 1;
> +	if (i > RADEON_DEBUGFS_MAX_COMPONENTS) {
> +		DRM_ERROR("Reached maximum number of debugfs components.\n");
> +		DRM_ERROR("Report so we increase "
> +			  "RADEON_DEBUGFS_MAX_COMPONENTS.\n");
>  		return -EINVAL;
>  	}
>  	_radeon_debugfs[_radeon_debugfs_count].files = files;
>  	_radeon_debugfs[_radeon_debugfs_count].num_files = nfiles;
> -	_radeon_debugfs_count++;
> +	_radeon_debugfs_count = i;
>  #if defined(CONFIG_DEBUG_FS)
>  	drm_debugfs_create_files(files, nfiles,
>  				 rdev->ddev->control->debugfs_root,
> --
> 1.7.6.409.ge7a85

This patch has not yet been applied. What's wrong?

Sincerely,
Michael Witten
[PATCH] drm/radeon/kms: add a CS ioctl flag not to rewrite tiling flags in the CS
This adds a new optional chunk to the CS ioctl that specifies optional flags to the CS parser. Why this is useful is explained below. Note that some regs no longer need the NOP relocation packet if this feature is enabled. Tested on r300g and r600g with this flag disabled and enabled.

Assume there are two contexts sharing the same mipmapped tiled texture. One context wants to render into the first mipmap and the other one wants to render into the last mipmap. As you probably know, the hardware has a MACRO_SWITCH feature, which turns off macro tiling for small mipmaps, but that only applies to samplers. (At least on r300-r500, though later hardware likely behaves the same.)

So we want to just re-set the tiling flags before rendering (writing packets), right? ... No. The contexts run in parallel, so they may set the tiling flags simultaneously and then fire their command streams also simultaneously. The last one setting the flags wins, the other one loses.

Another problem is when one context wants to render into the first and the last mipmap in one CS. Impossible. It must flush before changing tiling flags and do the rendering into the smaller mipmaps in another CS.

Yet another problem is that writing copy_blit in userspace would be a mess involving re-setting tiling flags to please the kernel, and causing races with other contexts at the same time.

The only way out of this is to send tiling flags with each CS, ideally with each relocation. But we already do that through the registers. So let's just use what we have in the registers.
Signed-off-by: Marek Olšák mar...@gmail.com
---
 drivers/gpu/drm/radeon/evergreen_cs.c |   92 +---
 drivers/gpu/drm/radeon/r300.c         |   94 ++---
 drivers/gpu/drm/radeon/r600_cs.c      |   26 ++
 drivers/gpu/drm/radeon/radeon.h       |    3 +-
 drivers/gpu/drm/radeon/radeon_cs.c    |   11 -
 drivers/gpu/drm/radeon/radeon_drv.c   |    3 +-
 include/drm/radeon_drm.h              |    4 ++
 7 files changed, 135 insertions(+), 98 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
index a134790..86b060c 100644
--- a/drivers/gpu/drm/radeon/evergreen_cs.c
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -508,21 +508,23 @@ static inline int evergreen_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u3
 		}
 		break;
 	case DB_Z_INFO:
-		r = evergreen_cs_packet_next_reloc(p, &reloc);
-		if (r) {
-			dev_warn(p->dev, "bad SET_CONTEXT_REG "
-					"0x%04X\n", reg);
-			return -EINVAL;
-		}
 		track->db_z_info = radeon_get_ib_value(p, idx);
-		ib[idx] &= ~Z_ARRAY_MODE(0xf);
-		track->db_z_info &= ~Z_ARRAY_MODE(0xf);
-		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
-			ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
-			track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
-		} else {
-			ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
-			track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+		if (!p->keep_tiling_flags) {
+			r = evergreen_cs_packet_next_reloc(p, &reloc);
+			if (r) {
+				dev_warn(p->dev, "bad SET_CONTEXT_REG "
+						"0x%04X\n", reg);
+				return -EINVAL;
+			}
+			ib[idx] &= ~Z_ARRAY_MODE(0xf);
+			track->db_z_info &= ~Z_ARRAY_MODE(0xf);
+			if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
+				ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+				track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+			} else {
+				ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+				track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+			}
 		}
 		break;
 	case DB_STENCIL_INFO:
@@ -635,40 +637,44 @@ static inline int evergreen_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u3
 	case CB_COLOR5_INFO:
 	case CB_COLOR6_INFO:
 	case CB_COLOR7_INFO:
-		r = evergreen_cs_packet_next_reloc(p, &reloc);
-		if (r) {
-			dev_warn(p->dev, "bad SET_CONTEXT_REG "
-					"0x%04X\n", reg);
-			return -EINVAL;
-		}
 		tmp = (reg - CB_COLOR0_INFO) / 0x3c;
 		track->cb_color_info[tmp] = radeon_get_ib_value(p, idx);
-		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
-			ib[idx] |=
[Bug 41668] Screen locks up at random points when using a 3D compositing wm (gnome-shell) on an rv515 (radeon mobility x1300)
https://bugs.freedesktop.org/show_bug.cgi?id=41668 --- Comment #14 from dmotd inaudi...@simplesuperlativ.es 2011-10-24 16:46:17 PDT ---

(In reply to comment #13)
> Try the following options in the kernel command line in grub: pci=nomsi noapic irqpoll and see if any of them help.

I have been running my machine with all the above kernel flags appended and I haven't yet experienced an issue. I haven't had a chance to exhaustively test these settings, but my machine has been active for a few days now without a screen lock, so I thought I would report back that this seems to help.
[Bug 41668] Screen locks up at random points when using a 3D compositing wm (gnome-shell) on an rv515 (radeon mobility x1300)
https://bugs.freedesktop.org/show_bug.cgi?id=41668 --- Comment #15 from Alex Deucher ag...@yahoo.com 2011-10-24 16:47:39 PDT ---

(In reply to comment #14)
> (In reply to comment #13)
> > Try the following options in the kernel command line in grub: pci=nomsi noapic irqpoll and see if any of them help.
>
> I have been running my machine with all the above kernel flags appended and I haven't yet experienced an issue. I haven't had a chance to exhaustively test these settings, but my machine has been active for a few days now without a screen lock, so I thought I would report back that this seems to help.

Can you narrow down which specific one helps?
[Bug 41698] [r300g] Flickering user interface in WoW
https://bugs.freedesktop.org/show_bug.cgi?id=41698 --- Comment #6 from Chris Rankin ranki...@googlemail.com 2011-10-24 16:53:50 UTC ---

(In reply to comment #5)
> Can you try this patch?

Sorry, no change.
[Bug 42117] R200 driver performance, UMS, all mesa versions from 7.6
https://bugs.freedesktop.org/show_bug.cgi?id=42117 --- Comment #7 from Roland Scheidegger srol...@vmware.com 2011-10-24 17:03:22 PDT --- Yes, that's a fallback. Not sure why it would trigger the texture mode fallback. You could try attaching a debugger, see where r200Fallback gets that true mode bit, and work from there; it could be from several functions. But these UMS pieces are going to go away very soon, so the chances of someone fixing it are slim.
[Bug 42175] RV730: Display errors in glxgears & WebGL
https://bugs.freedesktop.org/show_bug.cgi?id=42175 --- Comment #5 from Stefan kde...@vogtner.de 2011-10-24 17:32:00 PDT --- Mesa 7.12-devel (git-faa16dc) works. No crash, no picture errors.
Re: [PATCH] DRM: bug: RADEON_DEBUGFS_MAX_{NUM_FILES = COMPONENTS}
Maybe you are looking at the wrong branch, but I see it in drm-next (it has been there since Oct 10):

http://cgit.freedesktop.org/~airlied/linux/commit/?h=drm-next&id=c245cb9e15055ed5dcf7eaf29232badb0059fdc1

On Mon, 24 Oct 2011, Michael Witten wrote:

> On Fri, Oct 7, 2011 at 19:20, Michael Witten mfwit...@gmail.com wrote:
>
> > Date: Fri, 16 Sep 2011 20:45:30 +0000
> >
> > The value of RADEON_DEBUGFS_MAX_NUM_FILES has been used to specify
> > the size of an array, each element of which looks like this:
> >
> >     struct radeon_debugfs {
> >         struct drm_info_list *files;
> >         unsigned num_files;
> >     };
> > [...]
>
> This patch has not yet been applied. What's wrong?
>
> Sincerely,
> Michael Witten
Re: nouveau page_flip function implement not wait vblank, which cause screen garbage
Can anyone give a suggestion: is wait-for-vblank fully implemented in page_flip() for the nouveau drm driver?

At 2011-10-24 14:30:55, chris wwzbw...@163.com wrote:

> Dear,
>
> I use an NVidia Geforce 7300GT graphics card in my PC, Linux 3.1rc4 kernel code, and git drm 2.4.36. When I run the vbltest program, it prints 60 Hz, which indicates the implementation of drmWaitVBlank() and drm_vblank_wait() is correct. But when I run modetest with the options "-v -s 12:1280x1024", it prints a refresh rate of up to 150 Hz. I examined the code and found that no vblank-waiting operation is performed in the nouveau_crtc_page_flip() function. The screen produces lots of garbage and blinks very much.
>
> int nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
>                            struct drm_pending_vblank_event *event)
> {
>         ..
> }
>
> I studied the i915 intel_crtc_page_flip implementation:
>
> static int intel_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
>                                 struct drm_pending_vblank_event *event)
> {
>         ..
>         ret = drm_vblank_get(dev, intel_crtc->pipe);
>         if (ret)
>                 goto cleanup_objs;
>
>         work->pending_flip_obj = obj;
>         work->enable_stall_check = true;
>
>         /* Block clients from rendering to the new back buffer until
>          * the flip occurs and the object is no longer visible.
>          */
>         atomic_add(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip);
>
>         ret = dev_priv->display.queue_flip(dev, crtc, fb, obj);
>         if (ret)
>                 goto cleanup_pending;
>         ..
> }
>
> After the vblank irq is acquired, the interrupt isr wakes up the runqueue:
>
> 6159 static void do_intel_finish_page_flip(struct drm_device *dev,
> 6160                                       struct drm_crtc *crtc)
> 6161 {
>         ..
> 6211         list_add_tail(&e->base.link,
> 6212                       &e->base.file_priv->event_list);
> 6213         wake_up_interruptible(&e->base.file_priv->event_wait);
> 6214     }
> 6215
> 6216     drm_vblank_put(dev, intel_crtc->pipe);
> 6217
>
> Is there anyone who uses the same driver and found this issue who can tell me whether it is a bug? Thanks!
Re: nouveau page_flip function implement not wait vblank, which cause screen garbage
2011/10/25 chris wwzbw...@163.com:
> Can anyone give a suggestion: is wait-for-vblank fully implemented in
> page_flip() for the nouveau drm driver?
> [...]

It seems to be; the actual page flipping is done by a software method (see nv04_graph_mthd_page_flip). There is one thing I'm unsure about, and that is that we wait for the rendering to be done to the current frontbuffer and not the current backbuffer (this is only done if the page flip channel is different than the rendering channel). Maybe someone else can comment on that.

--
Far away from the primal instinct, the song seems to fade away, the river get wider between your thoughts and the things we do and say.