> > The problem remains how to avoid this situation completely. I guess the
> > drm driver can reserve a global "safe" aperture size, and communicate
> > that to the 3D client, but the current TTM drivers don't deal with this
> > situation.
> >
> > My first idea would probably be your first alternative. Flush and re-do
> > the state-emit if the combined buffer size is larger than the "safe"
> > aperture size.
>
> I think a dynamically sized safe aperture size that can be used per-batch
> submission is probably the best plan; this might also allow throttling in
> multi-app situations to help avoid thrashing, by reducing the per-app
> limits. For cards with per-process apertures we could make it the size of
> the per-process aperture.
>
> The case where an app manages to submit a working set for a single
> operation that is larger than the GPU can deal with should be considered
> a bug in the driver, I suppose.
The trouble with the safe limit is that it can change in a timeframe that
is inconvenient for the driver -- i.e., if it changes when a driver has
already constructed most of a scene, what happens? This is a lot like the
old cliprect problem, where driver choices can be invalidated later on,
leaving it in a difficult position.

Trying to chop an already-constructed command stream up after the fact is
unappealing, even on simple architectures like the i915 in classic mode.
Add zone rendering or some other wrinkle & it loses appeal fast.

What about two limits -- hard & soft? If the "hard" limit can avoid
changing, that makes things a lot nicer for the driver. When the soft one
changes, the driver can respect that next frame, but submit the current
command stream as-is.

Keith

_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel