On Mon, 19 May 2008 12:16:57 +0100 (IST)
Dave Airlie <[EMAIL PROTECTED]> wrote:
> > 
> > For radeon the plan was to return an error from superioctl, since during 
> > superioctl and validation I do know whether there is enough GART/VRAM to 
> > do the requested work. Then I think it's up to the upper level to 
> > properly handle such a failure from superioctl.
> 
> You really want to work this out in advance; by superioctl stage it is too 
> late. Have a look at the changes I made to the dri_bufmgr.c classic memory 
> manager case to deal with this for Intel hardware. If you got to superioctl 
> and failed, unwinding would be a real pain in the ass: you might have a 
> number of pieces of app state you can't reconstruct. I think DirectX 
> handled this with cut-points, where along with the buffer you passed the 
> kernel a set of places it could break the batch without too much effort. I 
> think we are better off just giving the mesa driver a limit, and when it 
> hits that limit it submits the buffer. The kernel can give it a new optimal 
> limit at any point, and the driver should use that as soon as possible. 
> Nothing can solve Ian's problem, where the app gives you a single working 
> set that is too large, at least with current GL. However, you have to deal 
> with the fact that a batchbuffer has many operations and the total working 
> set needs to fit in RAM to be relocated. I've added all the hooks in 
> dri_bufmgr.c for the non-TTM case; TTM shouldn't be a major effort to add.
> 
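
To make sure we mean the same thing by "giving the mesa driver a
limit", here is roughly the flow I picture on the userspace side.
This is only a sketch; every name in it is made up for illustration,
none of it is the real dri_bufmgr interface:

#include <stddef.h>

struct batch {
    size_t aperture_used;   /* size of buffers referenced so far */
    size_t aperture_limit;  /* last limit the kernel advised */
};

/* Hypothetical helpers, stand-ins for whatever mesa would provide. */
extern void flush_batch(struct batch *b);   /* submit, then reset batch */
extern size_t query_kernel_limit(void);     /* kernel's current optimal limit */

/* Called for every buffer a drawing command wants to reference. */
static void emit_buffer(struct batch *b, size_t buf_size)
{
    /* Pick up a new optimal limit whenever the kernel publishes one. */
    b->aperture_limit = query_kernel_limit();

    /* If this buffer would push the working set over the limit,
     * submit what we have so far and start a fresh batch. */
    if (b->aperture_used + buf_size > b->aperture_limit)
        flush_batch(b);

    b->aperture_used += buf_size;
    /* ... emit relocations/state that reference the buffer ... */
}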

Splitting the commands before they get submitted is the way to go. We could
likely ask the kernel for an estimate of available memory so that userspace
knows when to stop building the command stream, but this isn't easy; in any
case it would be a userspace problem. Even so, we will still have to be able
to fail in superioctl if, for instance, memory fragmentation gets in the
way; see the sketch below.
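
For that failure case I mean something like this at the ioctl wrapper
level. Again just a sketch: DRM_IOCTL_RADEON_SUPER and the args struct
are placeholders, not a real interface:

#include <errno.h>
#include <sys/ioctl.h>

/* Placeholder request code and args; the real superioctl interface
 * is still being designed. */
struct radeon_super_args;
#define DRM_IOCTL_RADEON_SUPER 0  /* hypothetical */

static int submit_super(int fd, struct radeon_super_args *args)
{
    int ret;

    do {
        ret = ioctl(fd, DRM_IOCTL_RADEON_SUPER, args);
    } while (ret == -1 && errno == EINTR);

    if (ret == -1 && errno == ENOMEM)
        /* Validation failed in the kernel, e.g. because vram/gart
         * fragmentation left no hole big enough.  The upper level
         * has to decide whether to split and retry, or give up. */
        return -ENOMEM;

    return ret == -1 ? -errno : 0;
}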

Cheers,
Jerome Glisse
