On Fri, 13 Feb 2015, Joonsoo Kim wrote:
> > + *p++ = freelist;
> > + freelist = get_freepointer(s, freelist);
> > + allocated++;
> > + }
>
> Fetching all objects while holding the node lock could result in enormous
> lock contention. [...]
On Thu, 12 Feb 2015, Jesper Dangaard Brouer wrote:
> Measured on my laptop CPU i7-2620M CPU @ 2.70GHz:
>
> * 12.775 ns - "clean" spin_lock_unlock
> * 21.099 ns - irqsave variant spinlock
> * 22.808 ns - "manual" irqsave before spin_lock
> * 14.618 ns - "manual" local_irq_disable + spin_lock
>
On Wed, 11 Feb 2015 16:06:50 -0600 (CST)
Christoph Lameter wrote:
> On Thu, 12 Feb 2015, Jesper Dangaard Brouer wrote:
>
> > > > This is quite an expensive lock with irqsave.
[...]
> > > We can require that interrupts are off when the functions are called. Then
> > > we can avoid the "save" part?
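The suggestion amounts to moving the irq-disable out of the function and documenting it as a precondition, so the cheaper non-saving lock form suffices. A kernel-style sketch of the two variants (not compilable on its own; list_lock usage as in the patch under discussion):

```c
/* Variant A: callable from any context; pays for saving/restoring flags. */
unsigned long flags;

spin_lock_irqsave(&n->list_lock, flags);
/* ... detach partial pages ... */
spin_unlock_irqrestore(&n->list_lock, flags);

/* Variant B: the caller guarantees interrupts are already disabled,
 * so the "save" part is avoided (the proposal in this thread). */
spin_lock(&n->list_lock);
/* ... detach partial pages ... */
spin_unlock(&n->list_lock);
```

Per the numbers above, that is roughly the difference between the 21.1 ns irqsave variant and the 12.8 ns plain lock/unlock pair.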
On Thu, 12 Feb 2015, Jesper Dangaard Brouer wrote:
> > > This is quite an expensive lock with irqsave.
> >
> > Yes but we take it for all partial pages.
>
> Sure, that is good, but this might be a contention point. In a micro
> benchmark, this contention should be visible, but in real use-cases [...]
On Wed, 11 Feb 2015, Jesper Dangaard Brouer wrote:
> > +
> > +
> > + spin_lock_irqsave(&n->list_lock, flags);
>
> This is quite an expensive lock with irqsave.
Yes but we take it for all partial pages.
> Yet another lock cost.
Yup, the page access is shared but there is one per page. Contention [...]
The major portions are there but there is no support yet for
directly allocating per cpu objects. There could also be more
sophisticated code to exploit the batch freeing.
Signed-off-by: Christoph Lameter
Index: linux/include/linux/slub_def.h
===================================================================