On Sat, 28 Mar 2009 01:54:32 +0100
Peter Zijlstra <pet...@infradead.org> wrote:

> On Thu, 2009-03-26 at 17:43 -0700, Jesse Barnes wrote:
> > On Wed, 25 Mar 2009 14:45:05 -0700
> > Eric Anholt <e...@anholt.net> wrote:
> > 
> > > Since the pagefault path determines that the lock order we use
> > > has to be mmap_sem -> struct_mutex, we can't allow page faults to
> > > occur while the struct_mutex is held.  To fix this in pwrite, we
> > > first try optimistically to see if we can copy from user without
> > > faulting.  If it fails, fall back to using get_user_pages to pin
> > > the user's memory, and map those pages atomically when copying it
> > > to the GPU.
> > > 
> > > Signed-off-by: Eric Anholt <e...@anholt.net>
> > > ---
> > > +	/* Pin the user pages containing the data.  We can't fault while
> > > +	 * holding the struct mutex, and all of the pwrite implementations
> > > +	 * want to hold it while dereferencing the user data.
> > > +	 */
> > > +	first_data_page = data_ptr / PAGE_SIZE;
> > > +	last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE;
> > > +	num_pages = last_data_page - first_data_page + 1;
> > > +
> > > +	user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL);
> > > +	if (user_pages == NULL)
> > > +		return -ENOMEM;
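
For anyone skimming the thread, here's roughly what that fallback boils
down to, as I read the description above.  This is just my own sketch,
not Eric's code: the helper names are invented, and I'm assuming the
current get_user_pages()/kmap_atomic() signatures (needs <linux/mm.h>,
<linux/highmem.h>, <linux/pagemap.h>, <linux/uaccess.h>).  The point is
that the optimistic copy runs with page faults disabled, and the slow
path pins the page before struct_mutex is taken so the later copy can't
fault:

/* Fast path sketch: attempt the copy with page faults disabled.  If the
 * user page isn't resident this returns -EFAULT instead of faulting,
 * and the caller falls back to the pinned path below.
 */
static int
copy_chunk_no_fault(void *dst, const char __user *src, int len)
{
	unsigned long unwritten;

	pagefault_disable();
	unwritten = __copy_from_user_inatomic(dst, src, len);
	pagefault_enable();

	return unwritten ? -EFAULT : 0;
}

/* Slow path sketch: pin the single user page backing this chunk (caller
 * keeps chunks within one page), then copy through a kmap_atomic()
 * mapping.  get_user_pages() takes mmap_sem and may fault, so this must
 * run *before* struct_mutex is taken; the memcpy itself can then safely
 * run with struct_mutex held.
 */
static int
copy_chunk_pinned(void *dst, unsigned long user_addr, int len)
{
	struct page *page;
	char *vaddr;
	int pinned;

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm,
				user_addr & PAGE_MASK, 1,
				0 /* read-only */, 0, &page, NULL);
	up_read(&current->mm->mmap_sem);
	if (pinned < 1)
		return pinned < 0 ? pinned : -EFAULT;

	vaddr = kmap_atomic(page, KM_USER0);
	memcpy(dst, vaddr + (user_addr & ~PAGE_MASK), len);
	kunmap_atomic(vaddr, KM_USER0);

	page_cache_release(page);
	return 0;
}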
> > 
> > If kmalloc limits us to a 128k allocation (and maybe less under
> > pressure), then we'll be limited to 128k/8 page pointers on 64 bit,
> > or 64M per pwrite...  Is that ok?  Or do we need to handle multiple
> > passes here?
> 
> While officially supported, a 128k kmalloc is _very_ likely to fail;
> it would require an order-5 page allocation to back it, and that is
> well outside the comfort zone.

Yeah, my "and maybe less" could have been worded a tad more strongly. ;)
Do we have stats on which kmalloc buckets have available allocations
anywhere for machines under various workloads?  I know under heavy
pressure even 8k allocations can fail, but since this is a GFP_KERNEL
things should be a *little* better.
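
For reference on the numbers: 128k of pointer storage is 128k / 8 =
16384 page pointers on 64-bit, i.e. 16384 * 4k = 64M of user data per
call.  If that limit (or just the kcalloc failing under pressure) turns
out to matter, one way to handle multiple passes would be to cap the
pointer array at a single page and pin/copy the buffer chunk by chunk.
A rough, hand-waved sketch (names invented, the actual copy step
elided, 2.6.29 get_user_pages() signature assumed):

/* Sketch only: pin at most one page worth of struct page pointers at a
 * time and walk the user buffer in passes, so the kcalloc() stays a
 * plain order-0 allocation.
 */
#define PWRITE_CHUNK_PAGES	(PAGE_SIZE / sizeof(struct page *))

static int
pwrite_in_chunks(unsigned long data_ptr, size_t size)
{
	struct page **pages;
	int ret = 0;

	pages = kcalloc(PWRITE_CHUNK_PAGES, sizeof(struct page *), GFP_KERNEL);
	if (pages == NULL)
		return -ENOMEM;

	while (size > 0 && ret == 0) {
		unsigned long first = data_ptr / PAGE_SIZE;
		unsigned long last = (data_ptr + size - 1) / PAGE_SIZE;
		int num = min_t(unsigned long, last - first + 1,
				PWRITE_CHUNK_PAGES);
		size_t chunk = min_t(size_t, size,
				     num * PAGE_SIZE - (data_ptr & ~PAGE_MASK));
		int pinned, i;

		down_read(&current->mm->mmap_sem);
		pinned = get_user_pages(current, current->mm,
					data_ptr & PAGE_MASK, num,
					0 /* read-only */, 0, pages, NULL);
		up_read(&current->mm->mmap_sem);

		if (pinned < num) {
			ret = pinned < 0 ? pinned : -EFAULT;
		} else {
			/* ... take struct_mutex and copy this chunk via
			 * kmap_atomic(), as in the patch, then drop it ...
			 */
		}

		for (i = 0; i < pinned; i++)
			page_cache_release(pages[i]);

		data_ptr += chunk;
		size -= chunk;
	}

	kfree(pages);
	return ret;
}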

-- 
Jesse Barnes, Intel Open Source Technology Center
