Re: [RFC PATCH] mm: page_alloc: High-order per-cpu page allocator

2016-11-23 Thread Mel Gorman
On Thu, Nov 24, 2016 at 08:26:39AM +0100, Vlastimil Babka wrote:
> On 11/23/2016 05:33 PM, Mel Gorman wrote:
> > > > +
> > > > +static inline unsigned int pindex_to_order(unsigned int pindex)
> > > > +{
> > > > +	return pindex < MIGRATE_PCPTYPES ? 0 : pindex - MIGRATE_PCPTYPES + 1;

Re: [RFC PATCH] mm: page_alloc: High-order per-cpu page allocator

2016-11-23 Thread Vlastimil Babka
On 11/23/2016 05:33 PM, Mel Gorman wrote:
+
+static inline unsigned int pindex_to_order(unsigned int pindex)
+{
+	return pindex < MIGRATE_PCPTYPES ? 0 : pindex - MIGRATE_PCPTYPES + 1;
+}
+
+static inline unsigned int order_to_pindex(int migratetype, unsigned int order)
+{
+	return

Re: [RFC PATCH] mm: page_alloc: High-order per-cpu page allocator

2016-11-23 Thread Mel Gorman
On Wed, Nov 23, 2016 at 04:37:06PM +0100, Vlastimil Babka wrote:
> On 11/21/2016 04:55 PM, Mel Gorman wrote:
> ...
> > hackbench was also tested with both sockets and pipes and both processes
> > and threads and the results are interesting in terms of how variability
> > is impacted

Re: [RFC PATCH] mm: page_alloc: High-order per-cpu page allocator

2016-11-23 Thread Vlastimil Babka
On 11/21/2016 04:55 PM, Mel Gorman wrote:
...
hackbench was also tested with both sockets and pipes and both processes and threads and the results are interesting in terms of how variability is impacted

1-socket machine -- pipes and processes
4.9.0-rc5

[RFC PATCH] mm: page_alloc: High-order per-cpu page allocator

2016-11-21 Thread Mel Gorman
SLUB has been the default small kernel object allocator for quite some time, but it is not universally used due to performance concerns and a reliance on high-order pages. The high-order concerns have two major components -- high-order pages are not always available and high-order page allocations
