On Thu, 2009-02-19 at 13:57 +0100, Nick Piggin wrote:
> On Thu, Feb 19, 2009 at 10:19:05AM +0100, Peter Zijlstra wrote:
> > On Wed, 2009-02-18 at 11:38 -0500, k...@bitplanet.net wrote:
> > > From: Kristian Høgsberg <k...@redhat.com>
> > >
> > > A number of GEM operations (and legacy drm ones) want to copy data to
> > > or from userspace while holding the struct_mutex lock.  However, the
> > > fault handler calls us with the mmap_sem held and thus enforces the
> > > opposite locking order.  This patch downs the mmap_sem up front for
> > > those operations that access userspace data under the struct_mutex
> > > lock to ensure the locking order is consistent.
> > >
> > > Signed-off-by: Kristian Høgsberg <k...@redhat.com>
> > > ---
> > >
> > > Here's a different and simpler attempt to fix the locking order
> > > problem.  We can just down_read() the mmap_sem pre-emptively up-front,
> > > and the locking order is respected.  It's simpler than the
> > > mutex_trylock() game, avoids introducing a new mutex.
>
> The "simple" way to fix this is to just allocate a temporary buffer
> to copy a snapshot of the data going to/from userspace.  Then do the
> real usercopy to/from that buffer outside the locks.
>
> You don't have any performance critical bulk copies (ie. that will
> blow the L1 cache), do you?
16kB is the most common size (batchbuffers).  32kB is popular on 915
(vertex), and it varies between 0 and 128kB on 965 (vertex).  The pwrite
path generally represents 10-30% of CPU consumption in CPU-bound apps.

-- 
Eric Anholt
e...@anholt.net   eric.anh...@intel.com
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel