On Thu, 10 January 2008 11:49:25 -0600, Matt Mackall wrote:
>
> b) grouping objects of the same -type- (not size) together should mean
> they have similar lifetimes and thereby keep fragmentation low
>
> (b) is known to be false, you just have to look at our dcache and icache
> pinning.
(b) is h
On Thu, 10 Jan 2008, Andi Kleen wrote:
> I did essentially that for my GBpages hugetlbfs patchkit. GB pages are already
> beyond MAX_ORDER and increasing MAX_ORDER didn't seem attractive because
> it would require aligning the zones all to 1GB, which would be quite nasty.
I am very very interested in
On Thu, 10 Jan 2008, Matt Mackall wrote:
> Well, I think we'd still have the same page size, in the sense that we'd
> have a struct page for every hardware page and we'd still have hardware
> page-sized pages in the page cache. We'd just change how we allocated
> them. Right now we've got a stack
> - huge pages (superpages for those crazy db people)
>
>Just a simple linked list of these things is fine, we'd never care
>about coalescing large pages together anyway.
I did essentially that for my GBpages hugetlbfs patchkit. GB pages are already
beyond MAX_ORDER and increasing MAX_O
On Thu, 10 Jan 2008, Linus Torvalds wrote:
> It's not even clear that a buddy allocator even for the high-order pages
> is at all the right choice. Almost nobody actually wants >64kB blocks, and
> the ones that *do* want bigger allocations tend to want *much* bigger
> ones, so it's quite possib
On Thu, 2008-01-10 at 11:24 -0800, Christoph Lameter wrote:
> On Thu, 10 Jan 2008, Matt Mackall wrote:
>
> > One idea I've been kicking around is pushing the boundary for the buddy
> > allocator back a bit (to 64k, say) and using SL*B under that. The page
> > allocators would call into buddy for
On Thu, 10 Jan 2008, Matt Mackall wrote:
>
> One idea I've been kicking around is pushing the boundary for the buddy
> allocator back a bit (to 64k, say) and using SL*B under that. The page
> allocators would call into buddy for larger than 64k (rare!) and SL*B
> otherwise. This would let us gre
On Thu, 10 Jan 2008, Matt Mackall wrote:
> > I agree. Crap too. We removed the destructors. The constructors are needed
> > so that objects in slab pages always have a definite state. That is f.e.
> > necessary for slab defragmentation because it has to be able to inspect an
> > object at an arb
On Thu, 10 Jan 2008, Matt Mackall wrote:
> One idea I've been kicking around is pushing the boundary for the buddy
> allocator back a bit (to 64k, say) and using SL*B under that. The page
> allocators would call into buddy for larger than 64k (rare!) and SL*B
> otherwise. This would let us greatly
On Thu, 2008-01-10 at 11:16 -0800, Christoph Lameter wrote:
> On Thu, 10 Jan 2008, Matt Mackall wrote:
>
> > Here I'm going to differ with you. The premises of the SLAB concept
> > (from the original paper) are:
> >
> > a) fragmentation of conventional allocators gets worse over time
>
> Even
On Thu, 10 Jan 2008, Matt Mackall wrote:
> Here I'm going to differ with you. The premises of the SLAB concept
> (from the original paper) are:
>
> a) fragmentation of conventional allocators gets worse over time
Even fragmentation of SLAB/SLUB gets worse over time. That is why we need
a defragmentation mechanism.
On Thu, 2008-01-10 at 10:28 -0800, Linus Torvalds wrote:
>
> On Thu, 10 Jan 2008, Matt Mackall wrote:
> > >
> > > (I'm not a fan of slabs per se - I think all the constructor/destructor
> > > crap is just that: total crap - but the size/type binning is a big deal,
> > > and I think SLOB was na
On Thu, 10 Jan 2008, Matt Mackall wrote:
> >
> > (I'm not a fan of slabs per se - I think all the constructor/destructor
> > crap is just that: total crap - but the size/type binning is a big deal,
> > and I think SLOB was naïve to think a pure first-fit makes any sense. Now
> > you guys are
> I would suggest that if you guys are really serious about memory use, try
> to do a size-based heap thing, and do best-fit in that heap. Or just some
iirc best fit usually also has some nasty long-term fragmentation behaviour.
That is why it is usually not used.
-Andi
On Thu, 2008-01-10 at 08:13 -0800, Linus Torvalds wrote:
>
> On Thu, 10 Jan 2008, Pekka J Enberg wrote:
> >
> > We probably don't have the same version of GCC which perhaps affects
> > memory layout (struct sizes) and thus allocation patterns?
>
> No, struct sizes will not change with compiler
On Thu, 10 Jan 2008, Pekka J Enberg wrote:
>
> We probably don't have the same version of GCC which perhaps affects
> memory layout (struct sizes) and thus allocation patterns?
No, struct sizes will not change with compiler versions - that would
create binary incompatibilities for libraries e
On Thu, 2008-01-10 at 12:54 +0200, Pekka J Enberg wrote:
> Hi Matt,
>
> On Thu, 10 Jan 2008, Pekka J Enberg wrote:
> > I'll double check the results for SLUB next but it seems obvious that your
> > patches are a net gain for SLOB and should be applied. One problem though
> > with SLOB seems to
Hi Matt,
On Thu, 10 Jan 2008, Pekka J Enberg wrote:
> I'll double check the results for SLUB next but it seems obvious that your
> patches are a net gain for SLOB and should be applied. One problem though
> with SLOB seems to be that its memory efficiency is not so stable. Any
> ideas why that
On Wed, 9 Jan 2008, Matt Mackall wrote:
>
> slob: split free list by size
>
[snip]
Reviewed-by: Pekka Enberg <[EMAIL PROTECTED]>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vg
Hi Matt,
On Wed, 9 Jan 2008, Matt Mackall wrote:
> Huh, that's a fairly negligible change on your system. Is that with or
> without the earlier patch? That doesn't appear to change much here.
> Guess I'll have to clean up my stats patch and send it to you.
Ok, if I apply both of the patches, I ge
On Thu, 2008-01-10 at 00:43 +0200, Pekka J Enberg wrote:
> Hi Matt,
>
> On Wed, 9 Jan 2008, Matt Mackall wrote:
> > I kicked this around for a while, slept on it, and then came up with
> > this little hack first thing this morning:
> >
> >
> > slob: split free list by size
> >
>
>
Hi Matt,
On Wed, 9 Jan 2008, Matt Mackall wrote:
> I kicked this around for a while, slept on it, and then came up with
> this little hack first thing this morning:
>
>
> slob: split free list by size
>
[snip]
> And the results are fairly miraculous, so please double-check them on
On Mon, 2008-01-07 at 20:06 +0200, Pekka J Enberg wrote:
> Hi Matt,
>
> On Sun, 6 Jan 2008, Matt Mackall wrote:
> > I don't have any particular "terrible" workloads for SLUB. But my
> > attempts to simply boot with all three allocators to init=/bin/bash in,
> > say, lguest show a fair margin for