Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-25 Thread Matt Mackall
On Tue, Jun 26, 2007 at 03:00:56PM +1000, Nick Piggin wrote:
> Matt Mackall wrote:
> >On Tue, Jun 26, 2007 at 02:06:15PM +1000, Nick Piggin wrote:
> >
> >>Yoshinori Sato wrote:
> >>
> >>>At Fri, 22 Jun 2007 09:56:35 -0500,
> >>>Matt Mackall wrote:
> >>>
> >>>
> >>>>On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:
> >>>>
> >>>>
> >>>>>Because the pages the SLOB allocator obtains do not have PG_slab set,
> >>>>
> >>>>This is for a NOMMU system?
> >>>
> >>>
> >>>Yes.
> >>>
> >>>
> >>>
> >>>>You're using an old kernel with an old version of SLOB. SLOB in newer
> >>>>kernels actually sets per-page flags. Nick, can you see any reason not
> >>>>to s/PG_active/PG_slab/ in the current code?
> >>
> >>The problem with this is that PG_private is used only for the SLOB
> >>part of the allocator and not the bigblock part.
> >
> >
> >That's fine, at least for the purposes of kobjsize. We only mark
> >actual SLOB-managed pages, kobjsize assumes the rest are alloc_pages
> >and that's indeed what they are.
> 
> OK, but that only makes it work in this case. I think we should
> either call PG_slab part of the kmem/kmalloc API and implement
> that, or say it isn't and make nommu do something else?
>
> >>We _could_ just bite the bullet and have SLOB set PG_slab, however
> >>that would encourage more users of this flag, which we should hope
> >>to get rid of one day.
> >>
> >>The real problem is that nommu wants to get the size of either
> >>kmalloc or alloc_pages objects and it needs to differentiate
> >>between them. So I would rather nommu took its own page flag
> >>(it could overload PG_swapcache, perhaps?), set that flag on
> >>pages it allocates directly, and then use that to determine whether
> >>to call ksize or not.
> >
> >
> >I think we already established on the last go-round that the kobjsize
> >scheme was rather hopelessly broken anyway. 
> 
> I can't remember, but that would be another good reason to confine
> it to nommu.c wouldn't it?

(jogs brain)

When I last looked, we could tell statically whether pointers passed
to kobjsize were to alloc_pages or kmalloc or kmem_cache_alloc just
based on context.

But in some cases, we could actually pass in pointers to static data
structures (eg bits of init_task) and things that were in ROM and
being used for XIP or things that lived outside of the kernel's
address space. SLAB would deal with this kind of affront by checking
page flags and saying "sorry, not mine".

Beating some sense into nommu here is doable, but non-trivial.

Since we're actually fiddling with page flags at this point and
hijacking an arguably less-appropriate bit, I'm strongly tempted to
just use the SLAB bit.
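
For reference, the page-flag check only helps once we know the pointer
maps to a page the kernel actually owns. The guard that "sorry, not mine"
implies at the top of kobjsize() would look roughly like this (a sketch
only, not the real mm/nommu.c code; it assumes virt_addr_valid() does the
right thing on the nommu targets involved):

        /* kobjsize(), before trusting any page flags */
        if (!objp || !virt_addr_valid(objp))
                return 0;       /* static data, ROM/XIP, foreign memory: not ours */

        page = virt_to_page(objp);
        /* ... then the usual PageSlab()/alloc_pages dispatch as quoted
           from mm/nommu.c elsewhere in this thread ... */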

-- 
Mathematics is the supreme nostalgia of our time.


Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-25 Thread Nick Piggin

Matt Mackall wrote:

On Tue, Jun 26, 2007 at 02:06:15PM +1000, Nick Piggin wrote:


Yoshinori Sato wrote:


At Fri, 22 Jun 2007 09:56:35 -0500,
Matt Mackall wrote:



On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:



Because the pages the SLOB allocator obtains do not have PG_slab set,


This is for a NOMMU system?



Yes.




You're using an old kernel with an old version of SLOB. SLOB in newer
kernels actually sets per-page flags. Nick, can you see any reason not
to s/PG_active/PG_slab/ in the current code?


The problem with this is that PG_private is used only for the SLOB
part of the allocator and not the bigblock part.



That's fine, at least for the purposes of kobjsize. We only mark
actual SLOB-managed pages, kobjsize assumes the rest are alloc_pages
and that's indeed what they are.


OK, but that only makes it work in this case. I think we should
either call PG_slab part of the kmem/kmalloc API and implement
that, or say it isn't and make nommu do something else?
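
For the old SLOB in Yoshinori's patch, "implement that" would presumably
also mean marking the arena pages slob_alloc() pulls in with
__get_free_page(), not just the bigblock pages. A sketch of that missing
piece, paraphrasing the old slob_alloc() rather than quoting it, and
reusing the set_slabflags() helper from the patch:

        /* inside slob_alloc(), where the arena has to grow */
        cur = (slob_t *)__get_free_page(gfp);
        if (!cur)
                return 0;
        set_slabflags(cur, 0);          /* order-0 arena page now carries PG_slab */
        slob_free(cur, PAGE_SIZE);      /* hand it to the free list as before */

The old SLOB never hands arena pages back to the page allocator, so no
matching clear_slabflags() should be needed on that path.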



We _could_ just bite the bullet and have SLOB set PG_slab, however
that would encourage more users of this flag, which we should hope
to get rid of one day.

The real problem is that nommu wants to get the size of either
kmalloc or alloc_pages objects and it needs to differentiate
between them. So I would rather nommu took its own page flag
(it could overload PG_swapcache, perhaps?), set that flag on
pages it allocates directly, and then use that to determine whether
to call ksize or not.



I think we already established on the last go-round that the kobjsize
scheme was rather hopelessly broken anyway. 


I can't remember, but that would be another good reason to confine
it to nommu.c wouldn't it?

--
SUSE Labs, Novell Inc.


Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-25 Thread Matt Mackall
On Tue, Jun 26, 2007 at 02:06:15PM +1000, Nick Piggin wrote:
> Yoshinori Sato wrote:
> >At Fri, 22 Jun 2007 09:56:35 -0500,
> >Matt Mackall wrote:
> >
> >>On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:
> >>
> >>>Because the pages the SLOB allocator obtains do not have PG_slab set,
> >>
> >>This is for a NOMMU system?
> >
> >
> >Yes.
> > 
> >
> >>You're using an old kernel with an old version of SLOB. SLOB in newer
> >>kernels actually sets per-page flags. Nick, can you see any reason not
> >>to s/PG_active/PG_slab/ in the current code?
> 
> The problem with this is that PG_private is used only for the SLOB
> part of the allocator and not the bigblock part.

That's fine, at least for the purposes of kobjsize. We only mark
actual SLOB-managed pages, kobjsize assumes the rest are alloc_pages
and that's indeed what they are.
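
In other words, the whole dispatch kobjsize() relies on is just this
(a sketch based on the mm/nommu.c excerpt quoted elsewhere in this
thread):

        struct page *page = virt_to_page(objp);

        if (PageSlab(page))             /* page marked by the allocator */
                return ksize(objp);     /* let kmalloc/kmem_cache size it */

        /* unmarked: treat as a direct alloc_pages() block, with
           page->index holding the allocation order */
        return PAGE_SIZE << page->index;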

> We _could_ just bite the bullet and have SLOB set PG_slab, however
> that would encourage more users of this flag, which we should hope
> to get rid of one day.
> 
> The real problem is that nommu wants to get the size of either
> kmalloc or alloc_pages objects and it needs to differentiate
> between them. So I would rather nommu took its own page flag
> (it could overload PG_swapcache, perhaps?), set that flag on
> pages it allocates directly, and then use that to determine whether
> to call ksize or not.

I think we already established on the last go-round that the kobjsize
scheme was rather hopelessly broken anyway. 

-- 
Mathematics is the supreme nostalgia of our time.


Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-25 Thread Nick Piggin

Yoshinori Sato wrote:

At Fri, 22 Jun 2007 09:56:35 -0500,
Matt Mackall wrote:


On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:


Because the pages the SLOB allocator obtains do not have PG_slab set,


This is for a NOMMU system?



Yes.
 


You're using an old kernel with an old version of SLOB. SLOB in newer
kernels actually sets per-page flags. Nick, can you see any reason not
to s/PG_active/PG_slab/ in the current code?


The problem with this is that PG_private is used only for the SLOB
part of the allocator and not the bigblock part.

We _could_ just bite the bullet and have SLOB set PG_slab, however
that would encourage more users of this flag, which we should hope
to get rid of one day.

The real problem is that nommu wants to get the size of either
kmalloc or alloc_pages objects and it needs to differentiate
between them. So I would rather nommu took its own page flag
(it could overload PG_swapcache, perhaps?), set that flag on
pages it allocates directly, and then use that to determine whether
to call ksize or not.
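
A minimal sketch of that scheme (purely illustrative: the PG_nommu_direct
name is made up here, and overloading PG_swapcache only works because a
nommu kernel has no swap cache to collide with):

        /* nommu-private marker for pages taken straight from the page
         * allocator; reuse an otherwise unused flag bit */
        #define PG_nommu_direct PG_swapcache

        /* set wherever nommu itself calls alloc_pages()/__get_free_pages() */
        __set_bit(PG_nommu_direct, &page->flags);

        unsigned int kobjsize(const void *objp)
        {
                struct page *page = virt_to_page(objp);

                if (test_bit(PG_nommu_direct, &page->flags))
                        return PAGE_SIZE << page->index;  /* direct allocation */
                return ksize(objp);     /* otherwise assume kmalloc/kmem_cache */
        }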

--
SUSE Labs, Novell Inc.


Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-25 Thread Yoshinori Sato
At Fri, 22 Jun 2007 09:56:35 -0500,
Matt Mackall wrote:
> 
> On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:
> > Because the pages the SLOB allocator obtains do not have PG_slab set,
> 
> This is for a NOMMU system?

Yes.
 
> You're using an old kernel with an old version of SLOB. SLOB in newer
> kernels actually sets per-page flags. Nick, can you see any reason not
> to s/PG_active/PG_slab/ in the current code?

in mm/nommu.c
   109  unsigned int kobjsize(const void *objp)
   110  {
:
   116  if (PageSlab(page))
   117  return ksize(objp);
:
   122  return (PAGE_SIZE << page->index);
   123  }

PG_slab is verified here.


> > Signed-off-by: Yoshinori Sato <[EMAIL PROTECTED]>
> > 
> > diff --git a/mm/slob.c b/mm/slob.c
> > index 71976c5..d10bcda 100644
> > --- a/mm/slob.c
> > +++ b/mm/slob.c
> > @@ -73,6 +73,21 @@ static DEFINE_SPINLOCK(block_lock);
> >  static void slob_free(void *b, int size);
> >  static void slob_timer_cbk(void);
> >  
> > +static inline void set_slabflags(const void *ptr, int order)
> > +{
> > +   int i;
> > +   struct page *page = virt_to_page(ptr);
> > +   for (i = 0; i < (1 << order); i++)
> > +   __SetPageSlab(page++);
> > +}
> > +
> > +static inline void clear_slabflags(const void *ptr, int order)
> > +{
> > +   int i;
> > +   struct page *page = virt_to_page(ptr);
> > +   for (i = 0; i < (1 << order); i++)
> > +   __ClearPageSlab(page++);
> > +}
> >  
> >  static void *slob_alloc(size_t size, gfp_t gfp, int align)
> >  {
> > @@ -180,6 +195,7 @@ void *__kmalloc(size_t size, gfp_t gfp)
> > bb->pages = (void *)__get_free_pages(gfp, bb->order);
> >  
> > if (bb->pages) {
> > +   set_slabflags(bb->pages, bb->order);
> > spin_lock_irqsave(&block_lock, flags);
> > bb->next = bigblocks;
> > bigblocks = bb;
> > @@ -240,6 +256,7 @@ void kfree(const void *block)
> > if (bb->pages == block) {
> > *last = bb->next;
> > spin_unlock_irqrestore(&block_lock, flags);
> > +   clear_slabflags(block, bb->order);
> > free_pages((unsigned long)block, bb->order);
> > slob_free(bb, sizeof(bigblock_t));
> > return;
> > @@ -323,9 +340,11 @@ void *kmem_cache_alloc(struct kmem_cache *c, gfp_t flags)
> >  
> > if (c->size < PAGE_SIZE)
> > b = slob_alloc(c->size, flags, c->align);
> > -   else
> > +   else {
> > b = (void *)__get_free_pages(flags, get_order(c->size));
> > -
> > +   if (b)
> > +   set_slabflags(b, get_order(c->size));
> > +   }
> > if (c->ctor)
> > c->ctor(b, c, 0);
> >  
> > @@ -347,8 +366,10 @@ static void __kmem_cache_free(void *b, int size)
> >  {
> > if (size < PAGE_SIZE)
> > slob_free(b, size);
> > -   else
> > +   else {
> > +   clear_slabflags(b, get_order(size));
> > free_pages((unsigned long)b, get_order(size));
> > +   }
> >  }
> >  
> >  static void kmem_rcu_free(struct rcu_head *head)
> 
> -- 
> Mathematics is the supreme nostalgia of our time.


Re: [PATCH] SLOB allocator incompatible with SLAB

2007-06-22 Thread Matt Mackall
On Fri, Jun 22, 2007 at 05:08:07PM +0900, Yoshinori Sato wrote:
> Because the pages the SLOB allocator obtains do not have PG_slab set,

This is for a NOMMU system?

You're using an old kernel with an old version of SLOB. SLOB in newer
kernels actually sets per-page flags. Nick, can you see any reason not
to s/PG_active/PG_slab/ in the current code?
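
Schematically that substitution is just the following (the surrounding
code is paraphrased here, not the actual mm/slob.c helpers):

        /* wherever the newer SLOB marks a page it manages ... */
-       __set_bit(PG_active, &page->flags);
+       __set_bit(PG_slab, &page->flags);

        /* ... and the matching test/clear sites change the same way */
-       test_bit(PG_active, &page->flags);
+       test_bit(PG_slab, &page->flags);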

> Signed-off-by: Yoshinori Sato <[EMAIL PROTECTED]>
> 
> diff --git a/mm/slob.c b/mm/slob.c
> index 71976c5..d10bcda 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -73,6 +73,21 @@ static DEFINE_SPINLOCK(block_lock);
>  static void slob_free(void *b, int size);
>  static void slob_timer_cbk(void);
>  
> +static inline void set_slabflags(const void *ptr, int order)
> +{
> + int i;
> + struct page *page = virt_to_page(ptr);
> + for (i = 0; i < (1 << order); i++)
> + __SetPageSlab(page++);
> +}
> +
> +static inline void clear_slabflags(const void *ptr, int order)
> +{
> + int i;
> + struct page *page = virt_to_page(ptr);
> + for (i = 0; i < (1 << order); i++)
> + __ClearPageSlab(page++);
> +}
>  
>  static void *slob_alloc(size_t size, gfp_t gfp, int align)
>  {
> @@ -180,6 +195,7 @@ void *__kmalloc(size_t size, gfp_t gfp)
>   bb->pages = (void *)__get_free_pages(gfp, bb->order);
>  
>   if (bb->pages) {
> + set_slabflags(bb->pages, bb->order);
>   spin_lock_irqsave(&block_lock, flags);
>   bb->next = bigblocks;
>   bigblocks = bb;
> @@ -240,6 +256,7 @@ void kfree(const void *block)
>   if (bb->pages == block) {
>   *last = bb->next;
>   spin_unlock_irqrestore(&block_lock, flags);
> + clear_slabflags(block, bb->order);
>   free_pages((unsigned long)block, bb->order);
>   slob_free(bb, sizeof(bigblock_t));
>   return;
> @@ -323,9 +340,11 @@ void *kmem_cache_alloc(struct kmem_cache *c, gfp_t flags)
>  
>   if (c->size < PAGE_SIZE)
>   b = slob_alloc(c->size, flags, c->align);
> - else
> + else {
>   b = (void *)__get_free_pages(flags, get_order(c->size));
> -
> + if (b)
> + set_slabflags(b, get_order(c->size));
> + }
>   if (c->ctor)
>   c->ctor(b, c, 0);
>  
> @@ -347,8 +366,10 @@ static void __kmem_cache_free(void *b, int size)
>  {
>   if (size < PAGE_SIZE)
>   slob_free(b, size);
> - else
> + else {
> + clear_slabflags(b, get_order(size));
>   free_pages((unsigned long)b, get_order(size));
> + }
>  }
>  
>  static void kmem_rcu_free(struct rcu_head *head)

-- 
Mathematics is the supreme nostalgia of our time.


[PATCH] SLOB allocator incompatible with SLAB

2007-06-22 Thread Yoshinori Sato
Because the pages the SLOB allocator obtains do not have PG_slab set,
kobjsize() returns incorrect results for them.

Don't we need to set PG_slab on the allocated pages?

Signed-off-by: Yoshinori Sato <[EMAIL PROTECTED]>

diff --git a/mm/slob.c b/mm/slob.c
index 71976c5..d10bcda 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -73,6 +73,21 @@ static DEFINE_SPINLOCK(block_lock);
 static void slob_free(void *b, int size);
 static void slob_timer_cbk(void);
 
+static inline void set_slabflags(const void *ptr, int order)
+{
+   int i;
+   struct page *page = virt_to_page(ptr);
+   for (i = 0; i < (1 << order); i++)
+   __SetPageSlab(page++);
+}
+
+static inline void clear_slabflags(const void *ptr, int order)
+{
+   int i;
+   struct page *page = virt_to_page(ptr);
+   for (i = 0; i < (1 << order); i++)
+   __ClearPageSlab(page++);
+}
 
 static void *slob_alloc(size_t size, gfp_t gfp, int align)
 {
@@ -180,6 +195,7 @@ void *__kmalloc(size_t size, gfp_t gfp)
bb->pages = (void *)__get_free_pages(gfp, bb->order);
 
if (bb->pages) {
+   set_slabflags(bb->pages, bb->order);
spin_lock_irqsave(&block_lock, flags);
bb->next = bigblocks;
bigblocks = bb;
@@ -240,6 +256,7 @@ void kfree(const void *block)
if (bb->pages == block) {
*last = bb->next;
spin_unlock_irqrestore(&block_lock, flags);
+   clear_slabflags(block, bb->order);
free_pages((unsigned long)block, bb->order);
slob_free(bb, sizeof(bigblock_t));
return;
@@ -323,9 +340,11 @@ void *kmem_cache_alloc(struct kmem_cache *c, gfp_t flags)
 
if (c->size < PAGE_SIZE)
b = slob_alloc(c->size, flags, c->align);
-   else
+   else {
b = (void *)__get_free_pages(flags, get_order(c->size));
-
+   if (b)
+   set_slabflags(b, get_order(c->size));
+   }
if (c->ctor)
c->ctor(b, c, 0);
 
@@ -347,8 +366,10 @@ static void __kmem_cache_free(void *b, int size)
 {
if (size < PAGE_SIZE)
slob_free(b, size);
-   else
+   else {
+   clear_slabflags(b, get_order(size));
free_pages((unsigned long)b, get_order(size));
+   }
 }
 
 static void kmem_rcu_free(struct rcu_head *head)

-- 
Yoshinori Sato
<[EMAIL PROTECTED]>

