Re: [Mesa-dev] [RFC] r600-r800 2D tiling

2012-01-16 Thread Simon Farnsworth
(resending due to my inability to work my e-mail client - I neither cc'd
Jerome, nor used the correct identity, so the original appears to be held in
moderation).

On Thursday 12 January 2012, Jerome Glisse j.gli...@gmail.com wrote:
> Hi,
>
> I don't cross-post, as I am pretty sure all interested people are reading
> this mailing list.
>
> Attached are kernel, libdrm, ddx, mesa/r600g patches to enable 2D tiling
> on r600 to cayman. I haven't yet done full regression testing, but 2D
> tiling seems to work OK. I would like to get feedback on 2 things:
>
> - the kernel API

I notice that you don't expose all the available Evergreen parameters to
user control (TILE_SPLIT_BYTES, NUM_BANKS are both currently fixed by the
kernel). Is this deliberate?

It looks like it's leftovers from a previous attempt to force Evergreen's
flexible 2D tiling to behave like R600's fixed-by-hardware 2D tiling.

> - using libdrm/radeon as common place for surface allocation
>
> The second question especially impacts the layering/abstraction between
> gallium and the winsys, as it makes the libdrm/radeon_surface API a part
> of the winsys. The ddx doesn't need as much knowledge as mesa (pretty much
> the whole mipmap tree is pointless to the ddx). So does anyone have strong
> feelings about moving the whole mipmap tree computation to this common code?

I'm in favour - it means that all the code relating to the details of how
modern Radeons tile surfaces is in one place.

I've looked at the API you introduce to handle this, and it should be very
easy to port to a non-libdrm platform - the only element of the API that's
currently tied to libdrm is radeon_surface_manager_new, so a new platform
shouldn't struggle to adapt it.

I do have one question: how are you intending to handle passing the tiling
parameters from the DDX to Mesa for GLX_EXT_texture_from_pixmap? Right now,
it works because the DDX uses the surface manager's defaults for tiling, as
does Mesa; I would expect Mesa to read out the parameters as set in the
kernel and use those.

At a future date, I can envisage the DDX wanting to choose a different
tiling layout for DRI2 buffers, or XComposite backing pixmaps (e.g. because
someone's benchmarked it and found that choosing something beyond the bare
minimum that meets constraints improves performance); it would be a shame if
we can't do this because Mesa's not flexible enough.

-- 
Simon Farnsworth
Software Engineer
ONELAN Limited
http://www.onelan.com/
___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Mesa-dev] [RFC] r600-r800 2D tiling

2012-01-16 Thread Jerome Glisse
On Mon, Jan 16, 2012 at 12:08:17PM +, Simon Farnsworth wrote:
> (resending due to my inability to work my e-mail client - I neither cc'd
> Jerome, nor used the correct identity, so the original appears to be held in
> moderation).
 
> On Thursday 12 January 2012, Jerome Glisse j.gli...@gmail.com wrote:
> > Hi,
> >
> > I don't cross-post, as I am pretty sure all interested people are reading
> > this mailing list.
> >
> > Attached are kernel, libdrm, ddx, mesa/r600g patches to enable 2D tiling
> > on r600 to cayman. I haven't yet done full regression testing, but 2D
> > tiling seems to work OK. I would like to get feedback on 2 things:
> >
> > - the kernel API
>
> I notice that you don't expose all the available Evergreen parameters to
> user control (TILE_SPLIT_BYTES, NUM_BANKS are both currently fixed by the
> kernel). Is this deliberate?
>
> It looks like it's leftovers from a previous attempt to force Evergreen's
> flexible 2D tiling to behave like R600's fixed-by-hardware 2D tiling.

I need to add tile split to the kernel API. Num banks is not a surface
parameter (well, it is, but it needs to be set to the same value as the
global one). I think it might only be useful in a multi-GPU case with
different GPUs (but that's just a wild guess).

 
> > - using libdrm/radeon as common place for surface allocation
> >
> > The second question especially impacts the layering/abstraction between
> > gallium and the winsys, as it makes the libdrm/radeon_surface API a part
> > of the winsys. The ddx doesn't need as much knowledge as mesa (pretty
> > much the whole mipmap tree is pointless to the ddx). So does anyone have
> > strong feelings about moving the whole mipmap tree computation to this
> > common code?
>
> I'm in favour - it means that all the code relating to the details of how
> modern Radeons tile surfaces is in one place.
>
> I've looked at the API you introduce to handle this, and it should be very
> easy to port to a non-libdrm platform - the only element of the API that's
> currently tied to libdrm is radeon_surface_manager_new, so a new platform
> shouldn't struggle to adapt it.

I am in the process of reworking the API a bit, but it will be very close,
and only the surface manager creator will have DRM-specific code.

> I do have one question: how are you intending to handle passing the tiling
> parameters from the DDX to Mesa for GLX_EXT_texture_from_pixmap? Right now,
> it works because the DDX uses the surface manager's defaults for tiling, as
> does Mesa; I would expect Mesa to read out the parameters as set in the
> kernel and use those.
>
> At a future date, I can envisage the DDX wanting to choose a different
> tiling layout for DRI2 buffers, or XComposite backing pixmaps (e.g. because
> someone's benchmarked it and found that choosing something beyond the bare
> minimum that meets constraints improves performance); it would be a shame if
> we can't do this because Mesa's not flexible enough.

We don't use DRI2 to communicate tiling info; we go through the kernel for
that. So the ddx calls the set_tiling ioctl and mesa calls get_tiling. I
haven't hooked up the mesa side to extract the various eg values yet; right
now it works because both the ddx and mesa use the same surface allocator
parameters, so they end up with the same values for the various eg fields.
Again, I am working on this; hopefully it should be completely done this week.

Cheers,
Jerome


Re: [Mesa-dev] [RFC] r600-r800 2D tiling

2012-01-16 Thread Marek Olšák
Hi Jerome,

I skimmed over the patches and how libdrm interacts with r600g and it
looks good.

We don't generally include DRM-specific headers in radeon_winsys.h,
because that header is supposed to be cross-platform, but I guess we
can make an exception in this case as long as we keep radeon_surface.h
small.

Marek

2012/1/13 Jerome Glisse j.gli...@gmail.com:
> On Fri, Jan 13, 2012 at 11:59:28AM +0100, Michel Dänzer wrote:
> > On Don, 2012-01-12 at 14:50 -0500, Jerome Glisse wrote:
> >
> > > Attached are kernel, libdrm, ddx, mesa/r600g patches to enable 2D
> > > tiling on r600 to cayman. I haven't yet done full regression testing,
> > > but 2D tiling seems to work OK. I would like to get feedback on 2
> > > things:
> > >
> > > - the kernel API
> > > - using libdrm/radeon as common place for surface allocation
> >
> > I generally like the idea of centralizing this in libdrm_radeon.
> >
> > > The second question especially impacts the layering/abstraction
> > > between gallium and the winsys, as it makes the libdrm/radeon_surface
> > > API a part of the winsys.
> >
> > That's unfortunate, but then again the Radeon Gallium drivers have never
> > been very clean in this regard. I guess the first one to want to use
> > them on a non-DRM platform gets to clean that up. :)
> >
> > > To test, you need to set ColorTiling2D to true in your xorg.conf. The
> > > plan is to get mesa 8.0 and newer with proper support for 2D tiling
> > > and, in 1 year, to move the ColorTiling2D default value from false to
> > > true. (The assumption is that by then we could assume that someone
> > > with a working ddx would also have a supported mesa.)
> >
> > Sounds good.
> >
> > Note that the Mesa and X driver changes need to either continue building
> > and working with older libdrm_radeon, or bump the libdrm_radeon version
> > requirement in configure.ac.
>
> The plan is to release an updated libdrm before committing to mesa, at
> which point I will try to dust off my configure.ac foo.
>
> I updated the patches; they are now at:
> http://people.freedesktop.org/~glisse/tiling/
>
> For them to work you need the ddx option, and for mesa you need to set
> R600_TILING=1 and R600_SURF=1. I will remove this once I am confident that
> it works across various GPUs without regression.
>
> Cheers,
> Jerome


Re: [Mesa-dev] [RFC] r600-r800 2D tiling

2012-01-13 Thread Jerome Glisse
On Fri, Jan 13, 2012 at 11:59:28AM +0100, Michel Dänzer wrote:
> On Don, 2012-01-12 at 14:50 -0500, Jerome Glisse wrote:
>
> > Attached are kernel, libdrm, ddx, mesa/r600g patches to enable 2D tiling
> > on r600 to cayman. I haven't yet done full regression testing, but 2D
> > tiling seems to work OK. I would like to get feedback on 2 things:
> >
> > - the kernel API
> > - using libdrm/radeon as common place for surface allocation
>
> I generally like the idea of centralizing this in libdrm_radeon.
>
> > The second question especially impacts the layering/abstraction between
> > gallium and the winsys, as it makes the libdrm/radeon_surface API a part
> > of the winsys.
>
> That's unfortunate, but then again the Radeon Gallium drivers have never
> been very clean in this regard. I guess the first one to want to use
> them on a non-DRM platform gets to clean that up. :)
>
> > To test, you need to set ColorTiling2D to true in your xorg.conf. The
> > plan is to get mesa 8.0 and newer with proper support for 2D tiling
> > and, in 1 year, to move the ColorTiling2D default value from false to
> > true. (The assumption is that by then we could assume that someone with
> > a working ddx would also have a supported mesa.)
>
> Sounds good.
>
> Note that the Mesa and X driver changes need to either continue building
> and working with older libdrm_radeon, or bump the libdrm_radeon version
> requirement in configure.ac.

The plan is to release an updated libdrm before committing to mesa, at which
point I will try to dust off my configure.ac foo.

I updated the patches; they are now at:
http://people.freedesktop.org/~glisse/tiling/

For them to work you need the ddx option, and for mesa you need to set
R600_TILING=1 and R600_SURF=1. I will remove this once I am confident that
it works across various GPUs without regression.
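For reference, the two opt-ins described above would look something like this (the Device section is the usual radeon ddx boilerplate; the identifier and the test application are arbitrary examples):

```
# /etc/X11/xorg.conf - ddx side
Section "Device"
    Identifier "card0"
    Driver     "radeon"
    Option     "ColorTiling2D" "true"
EndSection

# mesa side: run the GL app with the temporary opt-in variables set
R600_TILING=1 R600_SURF=1 glxgears
```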

Cheers,
Jerome