Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread José Fonseca
On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
 Hi,
 
 I am a bit puzzled: how should a pipe driver handle
 draw callback failure? On radeon (pretty sure nouveau
 or intel hit the same issue) we only know whether we
 can do the rendering or not once one of the draw_*
 context callbacks is called.
 
 The failure here is dictated by memory constraints, i.e.
 if the user binds a big texture, a big vbo ... we might not
 have enough GPU address space to bind all the desired
 objects (even for drawing a single triangle).
 
 What should we do? None of the draw callbacks can return
 a value. Maybe for a GL state tracker we should report
 GL_OUT_OF_MEMORY all the way up to the app? Anyway, bottom
 line is I think pipe drivers are missing something here. Any
 idea? Thoughts? Is there already a plan to address that? :)

Gallium draw calls had return codes before. They were used for the
failover driver IIRC and were recently deleted.

Either we put the return codes back, or we add a new
pipe_context::validate() that would ensure that all necessary conditions
to draw successfully are met.

Putting return codes on the bind calls won't work, because one can't set
all atoms simultaneously -- atoms are set one by one, so while setting
the state there are state combinations which may exceed the available
resources but which are never drawn with. E.g. you have finished drawing
with a huge VB, and then you switch to drawing with a small VB and a
huge texture, but in between it may happen that you have both bound
simultaneously.
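The transient-state problem can be replayed in a few lines (hypothetical numbers and names, not Gallium code): both draws fit the budget, yet the bound state between them does not, so per-bind error returns would report a spurious failure.

```c
#include <stdbool.h>

#define GPU_LIMIT 100u   /* pretend address-space budget, in "units" */

struct state {
   unsigned vb_bytes;
   unsigned tex_bytes;
};

static unsigned total(const struct state *s)
{
   return s->vb_bytes + s->tex_bytes;
}

/* Replays the scenario above; returns true when both draws fit the
 * budget even though the transient state in between exceeded it. */
static bool bind_order_example(void)
{
   struct state s = { 0, 0 };
   unsigned peak = 0;

   s.vb_bytes = 90;                        /* bind the huge VB         */
   if (total(&s) > peak) peak = total(&s);
   unsigned draw1 = total(&s);             /* draw #1: 90 <= 100, ok   */

   s.tex_bytes = 80;                       /* bind the huge texture... */
   if (total(&s) > peak) peak = total(&s); /* transient total: 170!    */
   s.vb_bytes = 10;                        /* ...then the small VB     */
   unsigned draw2 = total(&s);             /* draw #2: 90 <= 100, ok   */

   return draw1 <= GPU_LIMIT && draw2 <= GPU_LIMIT && peak > GPU_LIMIT;
}
```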

If ignoring is not an alternative, then I'd prefer a validate call.

Whether to fall back to software or not -- it seems to me it's really a
problem that must be decided case by case. Drivers are supposed to be
useful -- if hardware is so limited that it can't do anything useful
then falling back to software is sensible. I don't think that a driver
should support every imaginable thing -- apps should check errors, and
users should ensure they have enough hardware resources for the
workloads they want.

Personally I think state trackers shouldn't emulate anything with the
CPU beyond unsupported pixel formats. If hardware is so limited that it
needs CPU assistance, this should be taken care of transparently by the
pipe driver. Nevertheless we can and should provide auxiliary libraries
like draw to simplify the pipe driver implementation.

Jose


--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread José Fonseca
On Sun, 2010-02-28 at 21:35 -0800, Corbin Simpson wrote:
 On Sun, Feb 28, 2010 at 9:15 PM, Dave Airlie airl...@gmail.com wrote:
  On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt b...@zhasha.com wrote:
  On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote:
  Hi,
 
  I am a bit puzzled, how a pipe driver should handle
  draw callback failure ? On radeon (pretty sure nouveau
  or intel hit the same issue) we can only know when one
  of the draw_* context callback is call if we can do
  the rendering or not.
 
  The failure here is dictated by memory constraint, ie
  if user bind big texture, big vbo ... we might not have
  enough GPU address space to bind all the desired object
  (even for drawing a single triangle) ?
 
  What should we do ? None of the draw callback can return
  a value ? Maybe for a GL stack tracker we should report
  GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
  is i think pipe driver are missing something here. Any
  idea ? Thought ? Is there already a plan to address that ? :)
 
  Cheers,
  Jerome
 
  I think a vital point you're missing is: do we even care? If rendering
  fails because we simply can't render any more, do we even want to fall
  back? I can see a point in having a cap on how large a buffer can be
  rendered but apart from that, I'm not sure there even is a problem.
 
 
  Welcome to GL. If I have a 32MB graphics card, and I advertise
  a maximum texture size of 4096x4096 + cubemapping + 3D textures,
  there is no nice way for the app to get a clue about what it can legally
  ask me to do. Old DRI drivers used to either use texmem which would
  try and scale the limits etc to what it could legally fit in the
  memory available,
  or with bufmgr drivers they would check against a limit from the kernel,
  and in both cases sw fallback if necessary. Gallium seemingly can't do this,
  maybe it's okay to ignore it, but it wasn't an option when we did the
  old DRI drivers.
 
 GL_ATI_meminfo is unfortunately the best bet. :C
 
 Also Gallium's API is written so that drivers must never fail on
 render calls. This is *incredibly* lame but there's nothing that can
 be done. Every single driver is currently encouraged to just drop shit
 on the floor if e.g. u_trim_pipe_prim fails, and every driver is
 encouraged to call u_trim_pipe_prim, so we have stupidity like:
 if (!u_trim_pipe_prim(mode, count)) { return; }
 
 In EVERY SINGLE DRIVER. Most uncool. What's the point of a unified API
 if it can't do sanity checks? :T

I don't see what sanity checking has to do with the topic of failing
draw calls.

Would 

 if (!u_trim_pipe_prim(mode, count)) { return FALSE; }

make you any happier?

I think we all agree sanity checking should be done by the state
trackers.  You're confusing the result of the common practices of
cut'n'pasting code and of working around third-party problems with the
encouraged design principles.  I'm sure a patch to fix this would be
most welcome.
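Such a patch might hoist the check into one shared wrapper so drivers never see degenerate draws. A rough sketch: trim_prim() re-implements the idea of u_trim_pipe_prim locally so the example is self-contained, and every name here is illustrative rather than the real Mesa helper.

```c
#include <stdbool.h>

enum prim { PRIM_POINTS, PRIM_LINES, PRIM_TRIANGLES };

/* Minimum number of vertices needed for each primitive type. */
static unsigned min_verts(enum prim p)
{
   switch (p) {
   case PRIM_LINES:     return 2;
   case PRIM_TRIANGLES: return 3;
   default:             return 1;
   }
}

/* Stand-in for the u_trim_pipe_prim idea: false = nothing to draw. */
static bool trim_prim(enum prim p, unsigned count)
{
   return count >= min_verts(p);
}

/* One shared entry point; the sanity check lives here once instead of
 * being cut'n'pasted into every driver. */
static bool draw_checked(enum prim p, unsigned count)
{
   if (!trim_prim(p, count))
      return false;     /* caller can report the error centrally */
   /* ... hand off to the hardware driver ... */
   return true;
}
```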

Jose




Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Keith Whitwell
On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote:
 On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
  Hi,
  
  I am a bit puzzled, how a pipe driver should handle
  draw callback failure ? On radeon (pretty sure nouveau
  or intel hit the same issue) we can only know when one
  of the draw_* context callback is call if we can do
  the rendering or not.
  
  The failure here is dictated by memory constraint, ie
  if user bind big texture, big vbo ... we might not have
  enough GPU address space to bind all the desired object
  (even for drawing a single triangle) ?
  
  What should we do ? None of the draw callback can return
  a value ? Maybe for a GL stack tracker we should report
  GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
  is i think pipe driver are missing something here. Any
  idea ? Thought ? Is there already a plan to address that ? :)
 
 Gallium draw calls had return codes before. They were used for the
 fallover driver IIRC and were recently deleted.
 
 Either we put the return codes back, or we add a new
 pipe_context::validate() that would ensure that all necessary conditions
 to draw successfully are met.
 
 Putting return codes on bind time won't work, because one can't set all
 atoms simultaneously -- atoms are set one by one, so when one's setting
 the state there are state combinations which may exceed the available
 resources but that are never drawn with. E.g. you have a huge VB you
 finished drawing, and then you switch to drawing with a small VB with a
 huge texture, but in between it may happen that you have both bound
 simultaneously.
 
 If ignoring is not an alternative, then I'd prefer a validate call.
 
 Whether to fallback to software or not -- it seems to me it's really a
 problem that must be decided case by case. Drivers are supposed to be
 useful -- if hardware is so limited that it can't do anything useful
 then falling back to software is sensible. I don't think that a driver
 should support every imaginable thing -- apps should check errors, and
 users should ensure they have enough hardware resources for the
 workloads they want.
 
 Personally I think state trackers shouldn't emulate anything with CPU
 beyond unsupported pixel formats. If a hardware is so limited that in
 need CPU assistence this should taken care transparently by the pipe
 driver. Nevertheless we can and should provide auxiliary libraries like
 draw to simplify the pipe driver implementation.


My opinion on this is similar: the pipe driver is responsible for
getting the rendering done.  If it needs to pull in a fallback module to
achieve that, it is the pipe driver's responsibility to do so.

Understanding the limitations of hardware and the best ways to work
around those limitations is really something that the driver itself is
best positioned to handle.

The slight quirk of OpenGL is that there are some conditions where
theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or
similar) and not render.  This option isn't really available to gallium
drivers, mainly because we don't know inside gallium whether the API
permits this.  Unfortunately, even in OpenGL, very few applications
actually check the error conditions, or do anything sensible when they
fail.

I don't really like the idea of pipe drivers being able to fail render
calls, as it means that every state tracker and every bit of utility
code that issues a pipe->draw() call will have to check the return code
and hook in fallback code on failure.

One interesting thing would be to consider creating a layer that exposes
a pipe_context interface to the state tracker, but revives some of the
failover ideas internally - maybe as a first step just lifting the draw
module usage up to a layer above the actual hardware driver.
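The layering described above could be sketched roughly like this (purely hypothetical types and names; the real failover and draw modules are considerably more involved): the wrapper exposes one infallible draw entry point, tries the hardware path first, and falls over to a software path internally, so the state tracker never checks a return code.

```c
#include <stdbool.h>
#include <stddef.h>

struct draw_call { unsigned mode, start, count; };

typedef bool (*draw_fn)(void *ctx, const struct draw_call *call);

struct failover_layer {
   void   *hw_ctx;
   void   *sw_ctx;
   draw_fn hw_draw;   /* may fail, e.g. out of GPU address space */
   draw_fn sw_draw;   /* draw-module style path, assumed reliable */
};

/* The only draw entry point the state tracker ever sees. */
static void failover_draw(struct failover_layer *l,
                          const struct draw_call *call)
{
   if (!l->hw_draw(l->hw_ctx, call))
      l->sw_draw(l->sw_ctx, call);
}

/* Demo stubs: a hardware path that always fails and a software path
 * that counts how often it was used. */
static int sw_uses;
static bool failing_hw(void *ctx, const struct draw_call *c)
{ (void)ctx; (void)c; return false; }
static bool counting_sw(void *ctx, const struct draw_call *c)
{ (void)ctx; (void)c; sw_uses++; return true; }
```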

Keith




Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Jerome Glisse
On Mon, Mar 01, 2010 at 11:46:08AM +, Keith Whitwell wrote:
 On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote:
  On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
   Hi,
   
   I am a bit puzzled, how a pipe driver should handle
   draw callback failure ? On radeon (pretty sure nouveau
   or intel hit the same issue) we can only know when one
   of the draw_* context callback is call if we can do
   the rendering or not.
   
   The failure here is dictated by memory constraint, ie
   if user bind big texture, big vbo ... we might not have
   enough GPU address space to bind all the desired object
   (even for drawing a single triangle) ?
   
   What should we do ? None of the draw callback can return
   a value ? Maybe for a GL stack tracker we should report
   GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
   is i think pipe driver are missing something here. Any
   idea ? Thought ? Is there already a plan to address that ? :)
  
  Gallium draw calls had return codes before. They were used for the
  fallover driver IIRC and were recently deleted.
  
  Either we put the return codes back, or we add a new
  pipe_context::validate() that would ensure that all necessary conditions
  to draw successfully are met.
  
  Putting return codes on bind time won't work, because one can't set all
  atoms simultaneously -- atoms are set one by one, so when one's setting
  the state there are state combinations which may exceed the available
  resources but that are never drawn with. E.g. you have a huge VB you
  finished drawing, and then you switch to drawing with a small VB with a
  huge texture, but in between it may happen that you have both bound
  simultaneously.
  
  If ignoring is not an alternative, then I'd prefer a validate call.
  
  Whether to fallback to software or not -- it seems to me it's really a
  problem that must be decided case by case. Drivers are supposed to be
  useful -- if hardware is so limited that it can't do anything useful
  then falling back to software is sensible. I don't think that a driver
  should support every imaginable thing -- apps should check errors, and
  users should ensure they have enough hardware resources for the
  workloads they want.
  
  Personally I think state trackers shouldn't emulate anything with CPU
  beyond unsupported pixel formats. If a hardware is so limited that in
  need CPU assistence this should taken care transparently by the pipe
  driver. Nevertheless we can and should provide auxiliary libraries like
  draw to simplify the pipe driver implementation.
 
 
 My opinion on this is similar: the pipe driver is responsible for
 getting the rendering done.  If it needs to pull in a fallback module to
 achieve that, it is the pipe driver's responsibility to do so.
 
 Understanding the limitations of hardware and the best ways to work
 around those limitations is really something that the driver itself is
 best positioned to handle.
 
 The slight quirk of OpenGL is that there are some conditions where
 theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or
 similar) and not render.  This option isn't really available to gallium
 drivers, mainly because we don't know inside gallium whether the API
 permits this.  Unfortunately, even in OpenGL, very few applications
 actually check the error conditions, or do anything sensible when they
 fail.
 
 I don't really like the idea of pipe drivers being able to fail render
 calls, as it means that every state tracker and every bit of utility
 code that issues a pipe-draw() call will have to check the return code
 and hook in fallback code on failure.
 
 One interesting thing would be to consider creating a layer that exposes
 a pipe_context interface to the state tracker, but revives some of the
 failover ideas internally - maybe as a first step just lifting the draw
 module usage up to a layer above the actual hardware driver.
 
 Keith
 

So you don't like Jose's pipe_context::validate()? My taste
goes to pipe_context::validate(), with the state tracker
setting the proper flag according to the API it supports
(GL_OUT_OF_MEMORY for GL); this means just dropping
rendering commands that we can't do.

I am not really interested in doing software fallbacks. What
would be nice is someone testing with a closed-source driver
what happens when you try to draw something the GPU can't
handle. Maybe people from the closed-source world can even
give us a clue about what they do in such a situation :)

Cheers,
Jerome


Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Keith Whitwell
On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote:
 On Mon, Mar 01, 2010 at 11:46:08AM +, Keith Whitwell wrote:
  On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote:
   On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
Hi,

I am a bit puzzled, how a pipe driver should handle
draw callback failure ? On radeon (pretty sure nouveau
or intel hit the same issue) we can only know when one
of the draw_* context callback is call if we can do
the rendering or not.

The failure here is dictated by memory constraint, ie
if user bind big texture, big vbo ... we might not have
enough GPU address space to bind all the desired object
(even for drawing a single triangle) ?

What should we do ? None of the draw callback can return
a value ? Maybe for a GL stack tracker we should report
GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
is i think pipe driver are missing something here. Any
idea ? Thought ? Is there already a plan to address that ? :)
   
   Gallium draw calls had return codes before. They were used for the
   fallover driver IIRC and were recently deleted.
   
   Either we put the return codes back, or we add a new
   pipe_context::validate() that would ensure that all necessary conditions
   to draw successfully are met.
   
   Putting return codes on bind time won't work, because one can't set all
   atoms simultaneously -- atoms are set one by one, so when one's setting
   the state there are state combinations which may exceed the available
   resources but that are never drawn with. E.g. you have a huge VB you
   finished drawing, and then you switch to drawing with a small VB with a
   huge texture, but in between it may happen that you have both bound
   simultaneously.
   
   If ignoring is not an alternative, then I'd prefer a validate call.
   
   Whether to fallback to software or not -- it seems to me it's really a
   problem that must be decided case by case. Drivers are supposed to be
   useful -- if hardware is so limited that it can't do anything useful
   then falling back to software is sensible. I don't think that a driver
   should support every imaginable thing -- apps should check errors, and
   users should ensure they have enough hardware resources for the
   workloads they want.
   
   Personally I think state trackers shouldn't emulate anything with CPU
   beyond unsupported pixel formats. If a hardware is so limited that in
   need CPU assistence this should taken care transparently by the pipe
   driver. Nevertheless we can and should provide auxiliary libraries like
   draw to simplify the pipe driver implementation.
  
  
  My opinion on this is similar: the pipe driver is responsible for
  getting the rendering done.  If it needs to pull in a fallback module to
  achieve that, it is the pipe driver's responsibility to do so.
  
  Understanding the limitations of hardware and the best ways to work
  around those limitations is really something that the driver itself is
  best positioned to handle.
  
  The slight quirk of OpenGL is that there are some conditions where
  theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or
  similar) and not render.  This option isn't really available to gallium
  drivers, mainly because we don't know inside gallium whether the API
  permits this.  Unfortunately, even in OpenGL, very few applications
  actually check the error conditions, or do anything sensible when they
  fail.
  
  I don't really like the idea of pipe drivers being able to fail render
  calls, as it means that every state tracker and every bit of utility
  code that issues a pipe-draw() call will have to check the return code
  and hook in fallback code on failure.
  
  One interesting thing would be to consider creating a layer that exposes
  a pipe_context interface to the state tracker, but revives some of the
  failover ideas internally - maybe as a first step just lifting the draw
  module usage up to a layer above the actual hardware driver.
  
  Keith
  
 
 So you don't like the pipe_context::validate() of Jose ? My
 taste goes to the pipe_context::validate() and having state
 tracker setting the proper flag according to the API they
 support (GL_OUT_OF_MEMORY for GL), this means just drop
 rendering command that we can't do.

I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but
the pipe driver should:

a) not rely on validate() being called - i.e. it is just a query, not a
mandatory prepare-to-render notification.

b) make a best effort to render in subsequent draw() calls, even if
validate() has been called - i.e. it is just a query and does not modify
pipe driver behaviour.
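Under that contract, the split of responsibilities might look like this (hypothetical sketch; dev_validate() stands in for pipe_context::validate(), and the byte accounting is invented for illustration):

```c
#include <stdbool.h>

/* Driver side: validate() is a pure query over the currently bound
 * state; it has no side effects, and draw() never assumes it ran. */
struct dev {
   unsigned bound_bytes;      /* size of everything currently bound */
   unsigned limit;            /* available GPU address space        */
   unsigned draws_submitted;
};

static bool dev_validate(const struct dev *d)   /* (a): just a query */
{
   return d->bound_bytes <= d->limit;
}

static void dev_draw(struct dev *d)             /* (b): best effort  */
{
   d->draws_submitted++;
}

/* State-tracker side: the query only decides what the API reports
 * (e.g. GL_OUT_OF_MEMORY) and whether to submit this draw. */
static bool st_draw(struct dev *d)
{
   if (!dev_validate(d))
      return false;          /* record the API error, drop the draw */
   dev_draw(d);
   return true;
}
```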

 I am not really interested in doing software fallback. What
 would be nice is someone testing with closed source driver
 what happen when you try to draw somethings the GPU can't
 handle. Maybe even people from closed source world can give
 us a clue on what they are doing in front of such situation :)

Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Olivier Galibert
On Mon, Mar 01, 2010 at 12:55:09PM +0100, Jerome Glisse wrote:
 So you don't like the pipe_context::validate() of Jose ? My
 taste goes to the pipe_context::validate() and having state
 tracker setting the proper flag according to the API they
 support (GL_OUT_OF_MEMORY for GL), this means just drop
 rendering command that we can't do.

validate-then-do is a race condition waiting to happen.  Validate is
also a somewhat costly operation to do, and 99.999% of the time for
nothing.  You don't want to optimize for the error case, and that's
what validate *is*.

  OG.



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Keith Whitwell
On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote:
 On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote:
  On Mon, Mar 01, 2010 at 11:46:08AM +, Keith Whitwell wrote:
   On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote:
On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
 Hi,
 
 I am a bit puzzled, how a pipe driver should handle
 draw callback failure ? On radeon (pretty sure nouveau
 or intel hit the same issue) we can only know when one
 of the draw_* context callback is call if we can do
 the rendering or not.
 
 The failure here is dictated by memory constraint, ie
 if user bind big texture, big vbo ... we might not have
 enough GPU address space to bind all the desired object
 (even for drawing a single triangle) ?
 
 What should we do ? None of the draw callback can return
 a value ? Maybe for a GL stack tracker we should report
 GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
 is i think pipe driver are missing something here. Any
 idea ? Thought ? Is there already a plan to address that ? :)

Gallium draw calls had return codes before. They were used for the
fallover driver IIRC and were recently deleted.

Either we put the return codes back, or we add a new
pipe_context::validate() that would ensure that all necessary conditions
to draw successfully are met.

Putting return codes on bind time won't work, because one can't set all
atoms simultaneously -- atoms are set one by one, so when one's setting
the state there are state combinations which may exceed the available
resources but that are never drawn with. E.g. you have a huge VB you
finished drawing, and then you switch to drawing with a small VB with a
huge texture, but in between it may happen that you have both bound
simultaneously.

If ignoring is not an alternative, then I'd prefer a validate call.

Whether to fallback to software or not -- it seems to me it's really a
problem that must be decided case by case. Drivers are supposed to be
useful -- if hardware is so limited that it can't do anything useful
then falling back to software is sensible. I don't think that a driver
should support every imaginable thing -- apps should check errors, and
users should ensure they have enough hardware resources for the
workloads they want.

Personally I think state trackers shouldn't emulate anything with CPU
beyond unsupported pixel formats. If a hardware is so limited that in
need CPU assistence this should taken care transparently by the pipe
driver. Nevertheless we can and should provide auxiliary libraries like
draw to simplify the pipe driver implementation.
   
   
   My opinion on this is similar: the pipe driver is responsible for
   getting the rendering done.  If it needs to pull in a fallback module to
   achieve that, it is the pipe driver's responsibility to do so.
   
   Understanding the limitations of hardware and the best ways to work
   around those limitations is really something that the driver itself is
   best positioned to handle.
   
   The slight quirk of OpenGL is that there are some conditions where
   theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or
   similar) and not render.  This option isn't really available to gallium
   drivers, mainly because we don't know inside gallium whether the API
   permits this.  Unfortunately, even in OpenGL, very few applications
   actually check the error conditions, or do anything sensible when they
   fail.
   
   I don't really like the idea of pipe drivers being able to fail render
   calls, as it means that every state tracker and every bit of utility
   code that issues a pipe-draw() call will have to check the return code
   and hook in fallback code on failure.
   
   One interesting thing would be to consider creating a layer that exposes
   a pipe_context interface to the state tracker, but revives some of the
   failover ideas internally - maybe as a first step just lifting the draw
   module usage up to a layer above the actual hardware driver.
   
   Keith
   
  
  So you don't like the pipe_context::validate() of Jose ? My
  taste goes to the pipe_context::validate() and having state
  tracker setting the proper flag according to the API they
  support (GL_OUT_OF_MEMORY for GL), this means just drop
  rendering command that we can't do.
 
 I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but
 the pipe driver should:
 
 a) not rely on validate() being called - ie it is just a query, not a
 mandatory prepare-to-render notification.
 
 b) make a best effort to render in subsequent draw() calls, even if
 validate has been called - ie. it is just a query, does not modify pipe
 driver behaviour.
 
  I am not really interested in doing software fallback. What
  would be nice is someone testing with closed source 

Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Luca Barbieri
Falling back to CPU rendering, while respecting the OpenGL spec, is
likely going to be unusably slow in most cases and thus not really
better for real usage than just not rendering.

I think the only way to have a usable fallback mechanism is to do
fallbacks on the GPU, by automatically introducing multiple
rendering passes.
For instance, if you were to run each fragment shader instruction in a
separate pass (using floating point targets), then you would never
have more than two texture operands.

If the render targets are too large, you can also just split them in
multiple portions, and you can limit texture size so that 2 textures
plus a render target portion always fit in memory. Alternatively, you
can split textures too, try to statically deduce the referenced
portion and KIL if you guessed wrong, combined with occlusion queries
to check if you KILled.

Control flow complicates things, but you can probably just put the
execution mask in a stencil buffer or secondary render target/texture,
and use occlusion queries to find out if it is empty.

Of course, this requires writing and testing a very significant amount
of complex code (probably including a TGSI -> LLVM -> TGSI
infrastructure, since you likely need nontrivial compiler techniques to
do this optimally).

However, we may need part of this anyway to support multi-GPU
configurations, and it also allows emulating advanced shader
capabilities on less capable hardware (e.g. shaders with more
instructions or temporaries than the hardware limits, or
SM3+/GLSL shaders on SM2 hardware), with some hope of getting usable
performance.



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Jerome Glisse
On Mon, Mar 01, 2010 at 01:40:37PM +0100, Olivier Galibert wrote:
 On Mon, Mar 01, 2010 at 12:55:09PM +0100, Jerome Glisse wrote:
  So you don't like the pipe_context::validate() of Jose ? My
  taste goes to the pipe_context::validate() and having state
  tracker setting the proper flag according to the API they
  support (GL_OUT_OF_MEMORY for GL), this means just drop
  rendering command that we can't do.
 
 validate-then-do is a race condition waiting to happen.  Validate is
 also a somewhat costly operation to do, and 99.999% of the time for
 nothing.  You don't want to optimize for the error case, and that's
 what validate *is*.
 
   OG.
 

The validate function I have in mind has virtually zero cost (it will
boil down to a bunch of adds followed by a test), and whatever validate
does would have to be done by the draw operation anyway.
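Something like the following, presumably: sum the sizes of the bound objects and compare once against the budget. The structure is entirely illustrative; a real driver would account for actual resource placement rather than raw byte counts.

```c
#include <stdbool.h>

#define MAX_SAMPLERS 8

struct bound_state {
   unsigned vb_bytes;                   /* vertex buffer        */
   unsigned cb_bytes;                   /* colour buffer        */
   unsigned tex_bytes[MAX_SAMPLERS];    /* bound textures       */
   unsigned gpu_limit;                  /* address-space budget */
};

/* The cheap validate: a bunch of adds followed by a single test. */
static bool cheap_validate(const struct bound_state *s)
{
   unsigned total = s->vb_bytes + s->cb_bytes;
   for (unsigned i = 0; i < MAX_SAMPLERS; i++)
      total += s->tex_bytes[i];
   return total <= s->gpu_limit;
}
```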

Cheers,
Jerome



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Jerome Glisse
On Mon, Mar 01, 2010 at 12:24:19PM +, Keith Whitwell wrote:
 On Mon, 2010-03-01 at 04:07 -0800, Keith Whitwell wrote:
  On Mon, 2010-03-01 at 03:55 -0800, Jerome Glisse wrote:
   On Mon, Mar 01, 2010 at 11:46:08AM +, Keith Whitwell wrote:
On Mon, 2010-03-01 at 03:21 -0800, José Fonseca wrote:
 On Sun, 2010-02-28 at 11:25 -0800, Jerome Glisse wrote:
  Hi,
  
  I am a bit puzzled, how a pipe driver should handle
  draw callback failure ? On radeon (pretty sure nouveau
  or intel hit the same issue) we can only know when one
  of the draw_* context callback is call if we can do
  the rendering or not.
  
  The failure here is dictated by memory constraint, ie
  if user bind big texture, big vbo ... we might not have
  enough GPU address space to bind all the desired object
  (even for drawing a single triangle) ?
  
  What should we do ? None of the draw callback can return
  a value ? Maybe for a GL stack tracker we should report
  GL_OUT_OF_MEMORY all way up to app ? Anyway bottom line
  is i think pipe driver are missing something here. Any
  idea ? Thought ? Is there already a plan to address that ? :)
 
 Gallium draw calls had return codes before. They were used for the
 failover driver IIRC and were recently deleted.
 
 Either we put the return codes back, or we add a new
 pipe_context::validate() that would ensure that all necessary conditions
 to draw successfully are met.
 
 Putting return codes on bind time won't work, because one can't set all
 atoms simultaneously -- atoms are set one by one, so while one is setting
 the state there are state combinations which may exceed the available
 resources but that are never drawn with. E.g. you have a huge VB you
 finished drawing with, and then you switch to drawing with a small VB and a
 huge texture, but in between it may happen that you have both bound
 simultaneously.
 
 If ignoring is not an alternative, then I'd prefer a validate call.
 
 Whether to fall back to software or not -- it seems to me it's really a
 problem that must be decided case by case. Drivers are supposed to be
 useful -- if hardware is so limited that it can't do anything useful
 then falling back to software is sensible. I don't think that a driver
 should support every imaginable thing -- apps should check errors, and
 users should ensure they have enough hardware resources for the
 workloads they want.
 
 Personally I think state trackers shouldn't emulate anything with the CPU
 beyond unsupported pixel formats. If hardware is so limited that it
 needs CPU assistance, this should be taken care of transparently by the
 pipe driver. Nevertheless we can and should provide auxiliary libraries like
 draw to simplify the pipe driver implementation.


My opinion on this is similar: the pipe driver is responsible for
getting the rendering done.  If it needs to pull in a fallback module to
achieve that, it is the pipe driver's responsibility to do so.

Understanding the limitations of hardware and the best ways to work
around those limitations is really something that the driver itself is
best positioned to handle.

The slight quirk of OpenGL is that there are some conditions where
theoretically the driver is allowed to throw an OUT_OF_MEMORY error (or
similar) and not render.  This option isn't really available to gallium
drivers, mainly because we don't know inside gallium whether the API
permits this.  Unfortunately, even in OpenGL, very few applications
actually check the error conditions, or do anything sensible when they
fail.

I don't really like the idea of pipe drivers being able to fail render
calls, as it means that every state tracker and every bit of utility
code that issues a pipe->draw() call will have to check the return code
and hook in fallback code on failure.

One interesting thing would be to consider creating a layer that exposes
a pipe_context interface to the state tracker, but revives some of the
failover ideas internally - maybe as a first step just lifting the draw
module usage up to a layer above the actual hardware driver.

Keith

   
   So you don't like the pipe_context::validate() of Jose ? My
   taste goes to the pipe_context::validate() and having state
   tracker setting the proper flag according to the API they
   support (GL_OUT_OF_MEMORY for GL), this means just drop
   rendering command that we can't do.
  
  I think it's useful as a method for implementing GL_OUT_OF_MEMORY, but
  the pipe driver should:
  
  a) not rely on validate() being called - ie it is just a query, not a
  mandatory prepare-to-render notification.
  
  b) make a best effort to render in subsequent draw() calls, even if
  validate has 

Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Olivier Galibert
On Mon, Mar 01, 2010 at 02:57:08PM +0100, Jerome Glisse wrote:
 validate function i have in mind has virtually zero cost (it will
 boil down to a bunch of adds followed by a test) and what validate
 would do would be done by the draw operation anyway.

Not would, will.  You have no way to be sure nothing changed
between validate and draw, unless you're happy with an interface that
will always be unusable for multithreading.  So you'll do it twice for
something that will almost always say yes, except once in a blue moon.

And if you want to be sure that a passing validate implies draw will
work, it's often more than a bunch of adds.  Allocations can fail even
if the apparent free space is enough.  See fragmentation and
alignment, among others.

Moral: reduce the number of operations in the normal (often called
fast) path *first*, ask questions later.  Trying to predict failures
is both unreliable and costly.  Xorg/mesa is perceived as slow enough
as it is.

  OG.


--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Jerome Glisse
On Mon, Mar 01, 2010 at 03:24:51PM +0100, Olivier Galibert wrote:
 On Mon, Mar 01, 2010 at 02:57:08PM +0100, Jerome Glisse wrote:
  validate function i have in mind has virtually zero cost (it will
  boil down to a bunch of adds followed by a test) and what validate
  would do would be done by the draw operation anyway.
 
 Not would, will.  You have no way to be sure nothing changed
 between validate and draw, unless you're happy with an interface that
 will always be unusable for multithreading.  So you'll do it twice for
 something that will almost always say yes, except once in a blue moon.
 
 And if you want to be sure that a passing validate implies draw will
 work, it's often more than a bunch of adds.  Allocations can fail even
 if the apparent free space is enough.  See fragmentation and
 alignment, among others.
 
 Moral: reduce the number of operations in the normal (often called
 fast) path *first*, ask questions later.  Trying to predict failures
 is both unreliable and costly.  Xorg/mesa is perceived as slow enough
 as it is.
 
   OG.
 

Do you have a solution/proposal/idea on how to handle the situation
i am describing ?

Cheers,
Jerome



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Marek Olšák
On Mon, Mar 1, 2010 at 3:02 PM, Jerome Glisse gli...@freedesktop.orgwrote:

 [...]

Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Jerome Glisse
On Mon, Mar 01, 2010 at 04:21:45PM +0100, Marek Olšák wrote:
 [...]

Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Olivier Galibert
On Mon, Mar 01, 2010 at 04:08:32PM +0100, Jerome Glisse wrote:
 Do you have solution/proposal/idea on how to handle the situation
 i am describing ?

I've been looking at gallium from far away, but it seems to me you
have two independent issues:
- informing the caller of errors in atomic draw() calls
- deciding what to do when the error is due to resource exhaustion

For the first issue, if the api doesn't allow for returning errors,
then the api is crap and has to be fixed.  No two ways about it.

For the second issue, you can have a generic way, a per-driver (call
them state trackers if you want) specific way, both, or neither (also
known as the fuck it solution).

The generic way is, when you get an out-of-whatever error, to drop
down to software in the caller.  That requires having enough state
available to be able to apply software rendering to the specific
operations in the first place.  Potentially slow, but otoh all drivers
would benefit from it.  It would happen only on error, so outside of
the fast path.

The specific way is to handle all you can in the driver, for instance
splitting as you proposed, and punt with an error only if you really
can't do anything accelerated.

Both allow you, in case of punting, to still do the
requested render.  Belt and suspenders :-)

Neither just means ensuring errors go up all the way in the chain to
the application.  Personally I'd start with that, but that's just me.
Ensure that the application has enough information, even if
after-the-fact, to do its own tuning.  A polygon silently not drawing
is an atrocity to debug.  An out-of-resources error is something
obvious (debug-wise) you can throw money or code at.

Having neither, i.e. just correctness in error handling, does not prevent
you from playing with the generic or specific ways afterwards.  But I
suspect you'll find it more interesting to work on enabling access to
currently unavailable hardware features and tell people that if they
want 16 8192^3 textures they can go full software explicitly or buy a
card capable of it.  Reasonableness has limits.

  OG.

--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread José Fonseca
On Mon, 2010-03-01 at 06:24 -0800, Olivier Galibert wrote:
 On Mon, Mar 01, 2010 at 02:57:08PM +0100, Jerome Glisse wrote:
  validate function i have in mind has virtually zero cost (it will
  boil down to a bunch of adds followed by a test) and what validate
  would do would be done by the draw operation anyway.
 
 Not would, will.  You have no way to be sure nothing changed
 between validate and draw, 

pipe_contexts are not re-entrant.

 unless you're happy with an interface that
 will always be unusable for multithreading.  So you'll do it twice for
 something that will always tell yes except once in a blue moon.

The current procedure is:

   pipe->bind_this_state();
   pipe->bind_that_state();
   pipe->set_this_state();
   pipe->set_that_state();

   pipe->draw();

Making it

   pipe->bind_this_state();
   pipe->bind_that_state();
   pipe->set_this_state();
   pipe->set_that_state();

   if (pipe->validate() == PIPE_OUT_OF_MEMORY)
      return GL_OUT_OF_MEMORY;

   pipe->draw();

makes it no better, no worse in terms of race conditions.

Jose




Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Keith Whitwell
On Mon, 2010-03-01 at 07:33 -0800, Olivier Galibert wrote:
 On Mon, Mar 01, 2010 at 04:08:32PM +0100, Jerome Glisse wrote:
  Do you have solution/proposal/idea on how to handle the situation
  i am describing ?
 
 I've been looking at gallium from far away, but it seems to me you
 have two independant issues:
 - informing the caller of errors in atomic draw() calls
 - deciding what to do when the error is due to resource exhaustion
 
 For the first issue, if the api doesn't allow for returning errors,
 then the api is crap and has to be fixed.  No two ways about it.

Thanks for your comments.

To reiterate what has already been said, the approach we're taking is:
a) the driver makes a best effort to render under all circumstances
b) we'll add an error notification path to generate GL_OUT_OF_MEMORY,
but the state tracker will not be doing any fallbacks based on this.

Keith




Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-03-01 Thread Corbin Simpson
Wow, this really got a lot of discussion.

I don't really care *where* the sanity code is, but it just seems
horribly wrong that it's got to be duplicated per-hook,
per-driver in a library that purports to simplify drivers and reduce
LOCs. I suppose it's unavoidable to a degree as long as driver setup
is bare, though.

There are alternatives to every single bad draw case, but handling
them correctly needs to be required and documented, and that means we
probably have to agree on them. Examples:
- Oversized colorbufs are forbidden; if you absolutely need them, I
could cook up a u_shatter but it's going to be hilariously slow due to
CPU blits
- When not all textures fit into VRAM, find the biggest texture
and shrink it
- Too many verts or too many indices are handled by multiple draw calls
- If the bound pipeline is incomplete (at least one state bound to
NULL or unset), results are undefined

Async errors make sense, or at least more sense than no error reporting at all.

-- 
Only fools are easily impressed by what is only
barely beyond their reach. ~ Unknown

Corbin Simpson
mostawesomed...@gmail.com



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-02-28 Thread Joakim Sindholt
On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote:
 Hi,
 
 I am a bit puzzled, how should a pipe driver handle
 draw callback failure ? On radeon (pretty sure nouveau
 or intel hit the same issue) we can only know when one
 of the draw_* context callbacks is called whether we can
 do the rendering or not.
 
 The failure here is dictated by memory constraints, ie
 if the user binds a big texture, a big vbo ... we might not have
 enough GPU address space to bind all the desired objects
 (even for drawing a single triangle) ?
 
 What should we do ? None of the draw callbacks can return
 a value ? Maybe for a GL state tracker we should report
 GL_OUT_OF_MEMORY all the way up to the app ? Anyway the bottom line
 is i think pipe drivers are missing something here. Any
 idea ? Thought ? Is there already a plan to address that ? :)
 
 Cheers,
 Jerome

I think a vital point you're missing is: do we even care? If rendering
fails because we simply can't render any more, do we even want to fall
back? I can see a point in having a cap on how large a buffer can be
rendered but apart from that, I'm not sure there even is a problem.




Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-02-28 Thread Dave Airlie
On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt b...@zhasha.com wrote:
 [...]

 I think a vital point you're missing is: do we even care? If rendering
 fails because we simply can't render any more, do we even want to fall
 back? I can see a point in having a cap on how large a buffer can be
 rendered but apart from that, I'm not sure there even is a problem.


Welcome to GL. If I have a 32MB graphics card and I advertise
a maximum texture size of 4096x4096 + cubemapping + 3D textures,
there is no nice way for the app to get a clue about what it can legally
ask me to do. Old DRI drivers used to either use texmem, which would
try to scale the limits etc. to what it could legally fit in the
memory available, or with bufmgr drivers they would check against a
limit from the kernel, and in both cases sw fallback if necessary.
Gallium seemingly can't do this; maybe it's okay to ignore it, but it
wasn't an option when we did the old DRI drivers.

Dave.



Re: [Mesa3d-dev] Gallium software fallback/draw command failure

2010-02-28 Thread Corbin Simpson
On Sun, Feb 28, 2010 at 9:15 PM, Dave Airlie airl...@gmail.com wrote:
 [...]


 Welcome to GL. If I have a 32MB graphics card, and I advertise
 a maximum texture size of 4096x4096 + cubemapping + 3D textures,
 there is no nice way for the app to get a clue about what it can legally
 ask me to do. Old DRI drivers used to either use texmem which would
 try and scale the limits etc to what it could legally fit in the
 memory available,
 or with bufmgr drivers they would check against a limit from the kernel,
 and in both cases sw fallback if necessary. Gallium seemingly can't do this,
 maybe its okay to ignore it but it wasn't an option when we did the
 old DRI drivers.

GL_ATI_meminfo is unfortunately the best bet. :C

Also Gallium's API is written so that drivers must never fail on
render calls. This is *incredibly* lame but there's nothing that can
be done. Every single driver is currently encouraged to just drop shit
on the floor if e.g. u_trim_pipe_prim fails, and every driver is
encouraged to call u_trim_pipe_prim, so we have stupidity like:
   if (!u_trim_pipe_prim(mode, count)) { return; }

In EVERY SINGLE DRIVER. Most uncool. What's the point of a unified API
if it can't do sanity checks? :T

~ C.

-- 
Only fools are easily impressed by what is only
barely beyond their reach. ~ Unknown

Corbin Simpson
mostawesomed...@gmail.com
