On Tue, 30 Aug 2016, Mel Gorman wrote:
> > Userspace mapped pages can be hugepages as well as giant pages and that
> > has been there for a long time. Intermediate sizes would be useful too in
> > order to avoid having to keep lists of 4k pages around and continually
> > scan them.
> >
>
>
On Thu, Aug 25, 2016 at 02:55:43PM -0500, Christoph Lameter wrote:
> On Thu, 25 Aug 2016, Mel Gorman wrote:
>
> > Flipping the lid aside, there will always be a need for fast management
> > of 4K pages. The primary use case is networking that sometimes uses
> > high-order pages to avoid allocator
On Thu, 25 Aug 2016, Mel Gorman wrote:
> Flipping the lid aside, there will always be a need for fast management
> of 4K pages. The primary use case is networking that sometimes uses
> high-order pages to avoid allocator overhead and amortise DMA setup.
> Userspace-mapped pages will always be 4K
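For concreteness, the amortisation pattern Mel refers to looks roughly like the sketch below (an illustration assuming the stock netdev_alloc_frag() helper, not code from this thread): many small receive buffers are carved out of one shared higher-order page, so the allocator and DMA-setup cost is paid once per page rather than once per buffer.

#include <linux/skbuff.h>

/* Illustration only: grab one small RX buffer from the per-CPU page
 * fragment cache, which refills itself from a higher-order page. */
static void *rx_frag_example(unsigned int bufsz)
{
	void *buf = netdev_alloc_frag(bufsz);	/* shares a high-order page */

	if (!buf)
		return NULL;
	/* ... dma_map_single() the buffer, post it to the NIC RX ring ... */
	return buf;
}

The buffer is later released with skb_free_frag(); the backing high-order page goes back to the allocator only once its last fragment is gone.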
On Wed, Aug 24, 2016 at 11:01:43PM -0500, Christoph Lameter wrote:
> On Wed, 24 Aug 2016, Mel Gorman wrote:
> > If/when I get back to the page allocator, the priority would be a bulk
> > API for faster allocs of batches of order-0 pages instead of allocating
> > a large page and splitting.
> >
>
On Wed, 24 Aug 2016, Mel Gorman wrote:
> If/when I get back to the page allocator, the priority would be a bulk
> API for faster allocs of batches of order-0 pages instead of allocating
> a large page and splitting.
>
OMG. Do we really want to continue this? There are billions of Linux
devices
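To make the two alternatives Mel contrasts concrete, here is a hedged sketch in kernel style (the helper names and the batch size of eight are illustrative; the bulk API itself is only a proposal in this thread and is not shown): a caller needing several order-0 pages today either allocates one high-order block and split_page()s it, or pays a full trip through the allocator for every page.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Pattern 1: one order-3 allocation split into eight order-0 pages. */
static int get_eight_pages_by_split(gfp_t gfp, struct page **pages)
{
	struct page *page = alloc_pages(gfp, 3);
	int i;

	if (!page)
		return -ENOMEM;
	split_page(page, 3);		/* eight independently freeable pages */
	for (i = 0; i < 8; i++)
		pages[i] = page + i;
	return 0;
}

/* Pattern 2: eight separate order-0 allocations, each one a full trip
 * through the page allocator. */
static int get_eight_pages_one_by_one(gfp_t gfp, struct page **pages)
{
	int i;

	for (i = 0; i < 8; i++) {
		pages[i] = alloc_page(gfp);
		if (!pages[i])
			return -ENOMEM;	/* caller frees pages[0..i-1] */
	}
	return 0;
}

The bulk interface Mel describes would presumably keep the second calling convention but fill the whole array in one pass through the allocator's locks and per-cpu lists.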
On Tue, Aug 23, 2016 at 05:38:08PM +0200, Michal Hocko wrote:
> Do we have any documentation/study about which particular workloads
> benefit from which allocator? It seems that most users will use whatever
> the default or what their distribution uses. E.g. SLES kernel use SLAB
> because this is
On Wed 24-08-16 10:15:02, Joonsoo Kim wrote:
> On Tue, Aug 23, 2016 at 05:38:08PM +0200, Michal Hocko wrote:
> > On Tue 23-08-16 11:13:03, Joonsoo Kim wrote:
> > > On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
> > [...]
> > > > I am not opposing the patch (to be honest it is quite
On Tue, Aug 23, 2016 at 05:38:08PM +0200, Michal Hocko wrote:
> On Tue 23-08-16 11:13:03, Joonsoo Kim wrote:
> > On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
> [...]
> > > I am not opposing the patch (to be honest it is quite neat) but this
> > > is buggering me for quite some
On Tue 23-08-16 11:13:03, Joonsoo Kim wrote:
> On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
[...]
> > I am not opposing the patch (to be honest it is quite neat) but this
> > is buggering me for quite some time. Sorry for hijacking this email
> > thread but I couldn't resist. Why
On Thu, Aug 18, 2016 at 01:52:19PM +0200, Michal Hocko wrote:
> On Wed 17-08-16 11:20:50, Aruna Ramakrishna wrote:
> > On large systems, when some slab caches grow to millions of objects (and
> > many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> > During this time,
On 08/18/2016 04:52 AM, Michal Hocko wrote:
I am not opposing the patch (to be honest it is quite neat) but this
is buggering me for quite some time. Sorry for hijacking this email
thread but I couldn't resist. Why are we trying to optimize SLAB and
slowly converge it to SLUB feature-wise. I
On Wed 17-08-16 11:20:50, Aruna Ramakrishna wrote:
> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full,
On 08/17/2016 12:03 PM, Eric Dumazet wrote:
On Wed, 2016-08-17 at 11:20 -0700, Aruna Ramakrishna wrote:
]
-	list_for_each_entry(page, &n->slabs_full, lru) {
-		if (page->active != cachep->num && !error)
-			error = "slabs_full
On Wed, 2016-08-17 at 11:20 -0700, Aruna Ramakrishna wrote:
]
> -	list_for_each_entry(page, &n->slabs_full, lru) {
> -		if (page->active != cachep->num && !error)
> -			error = "slabs_full accounting error";
> -		active_objs
On large systems, when some slab caches grow to millions of objects (and
many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
During this time, interrupts are disabled while walking the slab lists
(slabs_full, slabs_partial, and slabs_free) for each node, and this
sometimes
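The optimisation direction the description points at can be sketched as follows (the struct and helpers below are hypothetical, invented for illustration, and are not the actual mm/slab.c patch): maintain per-node counters as slabs and objects change state, so that reporting reads a few integers instead of walking every page with interrupts disabled.

#include <linux/types.h>

/* Hypothetical per-node bookkeeping, updated under the node lock. */
struct slabinfo_counters {
	unsigned long total_slabs;	/* slabs on all three lists        */
	unsigned long free_objects;	/* objects currently not allocated */
};

/* Call sites: wherever a slab page is added to the node's lists,
 * adjust the counters instead of recounting later. */
static inline void counters_slab_added(struct slabinfo_counters *c,
				       unsigned int objs_per_slab)
{
	c->total_slabs++;
	c->free_objects += objs_per_slab;
}

static inline void counters_obj_allocated(struct slabinfo_counters *c)
{
	c->free_objects--;
}

/* /proc/slabinfo output then needs O(1) work per node rather than a
 * list walk over millions of pages with IRQs off. */
static inline unsigned long counters_active_objs(const struct slabinfo_counters *c,
						 unsigned int objs_per_slab)
{
	return c->total_slabs * objs_per_slab - c->free_objects;
}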