Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-18 Thread Johannes Weiner
On Tue, Mar 18, 2014 at 05:23:37PM -0700, Andy Lutomirski wrote:
> On Tue, Mar 18, 2014 at 5:18 PM, Minchan Kim  wrote:
> > Hello,
> >
> > On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
> >> On 03/13/2014 11:37 PM, Minchan Kim wrote:
> >> > This patch is an attempt to support MADV_FREE for Linux.
> >> >
> >> > The rationale is as follows.
> >> >
> >> > Allocators call munmap(2) when the user calls free(3) if the pointer
> >> > is in an mmapped area. But munmap isn't cheap: it has to clean up
> >> > all pte entries, unlink the vma, and return the freed pages to the
> >> > buddy allocator, so its overhead grows linearly with the mmapped
> >> > area's size. So allocators prefer madvise_dontneed to munmap.
> >> >
> >> > "dontneed" takes only the read side of mmap_sem, so other threads of
> >> > the process can proceed with concurrent page faults, which makes it
> >> > better than munmap as long as address space isn't scarce.
> >> > But the problem is that most allocators reuse that address space
> >> > soon afterwards, so applications pay a page fault, a page
> >> > allocation, and page zeroing if the allocator already called
> >> > madvise_dontneed on the address space.
> >> >
> >> > To avoid those overheads, other OSes support MADV_FREE. The idea is
> >> > simply to mark pages as lazyfree when madvise is called and purge
> >> > them under memory pressure. Otherwise, the VM doesn't detach the
> >> > pages from the address space, so the application can reuse that
> >> > memory without the overheads above.
> >>
> >> I must be missing something.
> >>
> >> If the application issues MADV_FREE and then writes to the MADV_FREEd
> >> range, the kernel needs to know that the pages are no longer safe to
> >> lazily free.  This would presumably happen via a page fault on write.
> >> For that to happen reliably, the kernel has to write protect the pages
> >> when MADV_FREE is called, which in turn requires flushing the TLBs.
> >
> > It could be done by checking the pte dirty bit. Of course, on architectures
> > that don't support a dirty bit in hardware, pte_mkdirty would make it CoW
> > as you said.
> 
> If the page already has dirty PTEs, then you need to clear the dirty
> bits and flush TLBs so that other CPUs notice that the PTEs are clean,
> I think.
> 
> Also, this has very odd semantics wrt reading the page after MADV_FREE
> -- is reading the page guaranteed to un-free it?

MADV_FREE simply invalidates content.  Sure, you can read at a given
address repeatedly after it.  You might see a different page every
time you do it, but it doesn't matter; the content is undefined.

It's no different than doing malloc() and looking at the memory before
writing anything in it.  After MADV_FREE, the memory is like a freshly
malloc'd chunk: the first access may result in page faults and the
content is undefined until you write it.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-18 Thread Andy Lutomirski
On Tue, Mar 18, 2014 at 5:18 PM, Minchan Kim  wrote:
> Hello,
>
> On Tue, Mar 18, 2014 at 10:55:24AM -0700, Andy Lutomirski wrote:
>> On 03/13/2014 11:37 PM, Minchan Kim wrote:
>> > This patch is an attempt to support MADV_FREE for Linux.
>> >
>> > The rationale is as follows.
>> >
>> > Allocators call munmap(2) when the user calls free(3) if the pointer
>> > is in an mmapped area. But munmap isn't cheap: it has to clean up
>> > all pte entries, unlink the vma, and return the freed pages to the
>> > buddy allocator, so its overhead grows linearly with the mmapped
>> > area's size. So allocators prefer madvise_dontneed to munmap.
>> >
>> > "dontneed" takes only the read side of mmap_sem, so other threads of
>> > the process can proceed with concurrent page faults, which makes it
>> > better than munmap as long as address space isn't scarce.
>> > But the problem is that most allocators reuse that address space
>> > soon afterwards, so applications pay a page fault, a page
>> > allocation, and page zeroing if the allocator already called
>> > madvise_dontneed on the address space.
>> >
>> > To avoid those overheads, other OSes support MADV_FREE. The idea is
>> > simply to mark pages as lazyfree when madvise is called and purge
>> > them under memory pressure. Otherwise, the VM doesn't detach the
>> > pages from the address space, so the application can reuse that
>> > memory without the overheads above.
>>
>> I must be missing something.
>>
>> If the application issues MADV_FREE and then writes to the MADV_FREEd
>> range, the kernel needs to know that the pages are no longer safe to
>> lazily free.  This would presumably happen via a page fault on write.
>> For that to happen reliably, the kernel has to write protect the pages
>> when MADV_FREE is called, which in turn requires flushing the TLBs.
>
> It could be done by checking the pte dirty bit. Of course, on architectures
> that don't support a dirty bit in hardware, pte_mkdirty would make it CoW
> as you said.

If the page already has dirty PTEs, then you need to clear the dirty
bits and flush TLBs so that other CPUs notice that the PTEs are clean,
I think.

Also, this has very odd semantics wrt reading the page after MADV_FREE
-- is reading the page guaranteed to un-free it?

>>
>> How does this end up being faster than munmap?
>
> Unlike MADV_DONTNEED, MADV_FREE doesn't need to return the pages to the
> page allocator, and that overhead is not small when I measured it on my
> machine. (Roughly, MADV_FREE costs half of DONTNEED by avoiding the
> page allocator.)
>
> But I'd like to clarify that MADV_FREE's goal is not that the syscall
> itself be faster than MADV_DONTNEED; the major goal is to avoid the
> unnecessary page fault + page allocation + page zeroing +
> garbage swapout.

This sounds like it might be better solved by trying to make munmap or
MADV_DONTNEED faster.  Maybe those functions should lazily give pages
back to the buddy allocator.

--Andy


Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-18 Thread Andy Lutomirski
On 03/13/2014 11:37 PM, Minchan Kim wrote:
> This patch is an attempt to support MADV_FREE for Linux.
> 
> The rationale is as follows.
>
> Allocators call munmap(2) when the user calls free(3) if the pointer
> is in an mmapped area. But munmap isn't cheap: it has to clean up
> all pte entries, unlink the vma, and return the freed pages to the
> buddy allocator, so its overhead grows linearly with the mmapped
> area's size. So allocators prefer madvise_dontneed to munmap.
>
> "dontneed" takes only the read side of mmap_sem, so other threads of
> the process can proceed with concurrent page faults, which makes it
> better than munmap as long as address space isn't scarce.
> But the problem is that most allocators reuse that address space
> soon afterwards, so applications pay a page fault, a page
> allocation, and page zeroing if the allocator already called
> madvise_dontneed on the address space.
>
> To avoid those overheads, other OSes support MADV_FREE. The idea is
> simply to mark pages as lazyfree when madvise is called and purge
> them under memory pressure. Otherwise, the VM doesn't detach the
> pages from the address space, so the application can reuse that
> memory without the overheads above.

I must be missing something.

If the application issues MADV_FREE and then writes to the MADV_FREEd
range, the kernel needs to know that the pages are no longer safe to
lazily free.  This would presumably happen via a page fault on write.
For that to happen reliably, the kernel has to write protect the pages
when MADV_FREE is called, which in turn requires flushing the TLBs.

How does this end up being faster than munmap?

--Andy


Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Minchan Kim
Hello Zhang,

On Fri, Mar 14, 2014 at 03:37:28PM +0800, Zhang Yanfei wrote:
> Hello Minchan
> 
> On 03/14/2014 02:37 PM, Minchan Kim wrote:
> > This patch is an attempt to support MADV_FREE for Linux.
> > 
> > The rationale is as follows.
> >
> > Allocators call munmap(2) when the user calls free(3) if the pointer
> > is in an mmapped area. But munmap isn't cheap: it has to clean up
> > all pte entries, unlink the vma, and return the freed pages to the
> > buddy allocator, so its overhead grows linearly with the mmapped
> > area's size. So allocators prefer madvise_dontneed to munmap.
> >
> > "dontneed" takes only the read side of mmap_sem, so other threads of
> > the process can proceed with concurrent page faults, which makes it
> > better than munmap as long as address space isn't scarce.
> > But the problem is that most allocators reuse that address space
> > soon afterwards, so applications pay a page fault, a page
> > allocation, and page zeroing if the allocator already called
> > madvise_dontneed on the address space.
> >
> > To avoid those overheads, other OSes support MADV_FREE. The idea is
> > simply to mark pages as lazyfree when madvise is called and purge
> > them under memory pressure. Otherwise, the VM doesn't detach the
> > pages from the address space, so the application can reuse that
> > memory without the overheads above.
> 
> I didn't look into the code. Does this mean we just keep the vma,
> the pte entries, and the page itself for later possible reuse? If so,

Just clear the pte access and dirty bits so the VM can notice whether the
user has dirtied a page since calling madvise(MADV_FREE). If so, the VM
must not purge that page. Otherwise, the VM can purge the page instead of
swapping it out, and the user will later see zeroed pages.

> how can we reuse the vma? Would the kernel mark the vma as special
> so that it can be reused rather than unmapped? Do you have

I don't get it. Could you elaborate a bit?

> an example of this reuse?

As I said, jemalloc and tcmalloc have supported it on other OSes.

> 
> Another thing: when I searched for MADV_FREE on the internet, I saw that
> Rik posted a similar patch in 2007, but that patch didn't
> go into the upstream kernel.  And some explanation from Andrew:
> 
> --
>  lazy-freeing-of-memory-through-madv_free.patch
>  lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch
>  restore-madv_dontneed-to-its-original-linux-behaviour.patch
> 
> I think the MADV_FREE changes need more work:
> 
> We need crystal-clear statements regarding the present functionality, the
> new functionality, and how these relate to the spec and to implementations
> in other OSes.  Once we have that info we are in a position to work out
> whether the code can be merged as-is, or if additional changes are needed.
> 
> Because right now, I don't know where we are with respect to these things,
> and I doubt if many of our users know either.  How can Michael write a
> manpage for this if we don't tell him what it all does?
> --

True. I need more documentation and will write it if everybody agrees on
this new feature.

Thanks.

> 
> Thanks
> Zhang Yanfei
> 
> > 
> > I tweaked jemalloc to use MADV_FREE for the testing.
> > 
> > diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
> > index 8a42e75..20e31af 100644
> > --- a/src/chunk_mmap.c
> > +++ b/src/chunk_mmap.c
> > @@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
> >  #  else
> >  #error "No method defined for purging unused dirty pages."
> >  #  endif
> > -   int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
> > +   int err = madvise(addr, length, 5);
> > unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
> >  #  undef JEMALLOC_MADV_PURGE
> >  #  undef JEMALLOC_MADV_ZEROS
> > 
> > 
> > RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)
> > 
> > (1.1) stands for 1 process and 1 thread so, for example,
> > (1.4) is 1 process and 4 threads.
> > 
> > vanilla jemalloc patched jemalloc
> > 
> > 1.1   1.1
> > records:  5  records:  5
> > avg:  7404.60avg:  14059.80
> > std:  116.67(1.58%)  std:  93.92(0.67%)
> > max:  7564.00max:  14152.00
> > min:  7288.00min:  13893.00
> > 1.4   1.4
> > records:  5  records:  5
> > avg:  16160.80   avg:  30173.00
> > std:  509.80(3.15%)  std:  3050.72(10.11%)
> > max:  16728.00   max:  33989.00
> > min:  15216.00   min:  25173.00
> > 1.8   1.8
> > records:  5  records:  5
> > avg:  16003.00   avg:  30080.20
> > std:  290.40(1.81%)  std:  2063.57(6.86%)
> > max:  16537.00   max:  32735.00
> > min:  15727.00   min:  27381.00
> > 4.1   4.1
> > records:  5  records:  5
> > avg:  4003.60avg:  8064.80
> > std:  65.33(1.63%)   std:  

Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Zhang Yanfei
Hello Minchan

On 03/14/2014 02:37 PM, Minchan Kim wrote:
> This patch is an attempt to support MADV_FREE for Linux.
> 
> The rationale is as follows.
>
> Allocators call munmap(2) when the user calls free(3) if the pointer
> is in an mmapped area. But munmap isn't cheap: it has to clean up
> all pte entries, unlink the vma, and return the freed pages to the
> buddy allocator, so its overhead grows linearly with the mmapped
> area's size. So allocators prefer madvise_dontneed to munmap.
>
> "dontneed" takes only the read side of mmap_sem, so other threads of
> the process can proceed with concurrent page faults, which makes it
> better than munmap as long as address space isn't scarce.
> But the problem is that most allocators reuse that address space
> soon afterwards, so applications pay a page fault, a page
> allocation, and page zeroing if the allocator already called
> madvise_dontneed on the address space.
>
> To avoid those overheads, other OSes support MADV_FREE. The idea is
> simply to mark pages as lazyfree when madvise is called and purge
> them under memory pressure. Otherwise, the VM doesn't detach the
> pages from the address space, so the application can reuse that
> memory without the overheads above.

I didn't look into the code. Does this mean we just keep the vma, the pte
entries, and the page itself for later possible reuse? If so, how can we
reuse the vma? Would the kernel mark the vma as special so that it can be
reused rather than unmapped? Do you have an example of this reuse?

Another thing: when I searched for MADV_FREE on the internet, I saw that
Rik posted a similar patch in 2007, but that patch didn't
go into the upstream kernel.  And some explanation from Andrew:

--
 lazy-freeing-of-memory-through-madv_free.patch
 lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch
 restore-madv_dontneed-to-its-original-linux-behaviour.patch

I think the MADV_FREE changes need more work:

We need crystal-clear statements regarding the present functionality, the
new functionality, and how these relate to the spec and to implementations
in other OSes.  Once we have that info we are in a position to work out
whether the code can be merged as-is, or if additional changes are needed.

Because right now, I don't know where we are with respect to these things,
and I doubt if many of our users know either.  How can Michael write a
manpage for this if we don't tell him what it all does?
--

Thanks
Zhang Yanfei

> 
> I tweaked jemalloc to use MADV_FREE for the testing.
> 
> diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
> index 8a42e75..20e31af 100644
> --- a/src/chunk_mmap.c
> +++ b/src/chunk_mmap.c
> @@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
>  #  else
>  #error "No method defined for purging unused dirty pages."
>  #  endif
> -   int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
> +   int err = madvise(addr, length, 5);
> unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
>  #  undef JEMALLOC_MADV_PURGE
>  #  undef JEMALLOC_MADV_ZEROS
> 
> 
> RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)
> 
> (1.1) stands for 1 process and 1 thread so, for example,
> (1.4) is 1 process and 4 threads.
> 
> vanilla jemalloc   patched jemalloc
> 
> 1.1   1.1
> records:  5  records:  5
> avg:  7404.60avg:  14059.80
> std:  116.67(1.58%)  std:  93.92(0.67%)
> max:  7564.00max:  14152.00
> min:  7288.00min:  13893.00
> 1.4   1.4
> records:  5  records:  5
> avg:  16160.80   avg:  30173.00
> std:  509.80(3.15%)  std:  3050.72(10.11%)
> max:  16728.00   max:  33989.00
> min:  15216.00   min:  25173.00
> 1.8   1.8
> records:  5  records:  5
> avg:  16003.00   avg:  30080.20
> std:  290.40(1.81%)  std:  2063.57(6.86%)
> max:  16537.00   max:  32735.00
> min:  15727.00   min:  27381.00
> 4.1   4.1
> records:  5  records:  5
> avg:  4003.60avg:  8064.80
> std:  65.33(1.63%)   std:  143.89(1.78%)
> max:  4118.00max:  8319.00
> min:  3921.00min:  7888.00
> 4.4   4.4
> records:  5  records:  5
> avg:  3907.40avg:  7199.80
> std:  48.68(1.25%)   std:  80.21(1.11%)
> max:  3997.00max:  7320.00
> min:  3863.00min:  7113.00
> 4.8   4.8
> records:  5  records:  5
> avg:  3893.00avg:  7195.20
> std:  19.11(0.49%)   std:  101.55(1.41%)
> max:  3927.00max:  7309.00
> min:  3869.00min:  7012.00
> 8.1   8.1
> records:  5  records:  5
> avg:  1942.00avg:  3602.80
> std:  34.60(1.78%)   std:  22.97(0.64%)
> max:  2010.00max:  3632.00
> min:  

[RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Minchan Kim
This patch is an attempt to support MADV_FREE for Linux.

The rationale is as follows.

Allocators call munmap(2) when the user calls free(3) if the pointer is
in an mmapped area. But munmap isn't cheap: it has to clean up all pte
entries, unlink the vma, and return the freed pages to the buddy
allocator, so its overhead grows linearly with the mmapped area's size.
So allocators prefer madvise_dontneed to munmap.

"dontneed" takes only the read side of mmap_sem, so other threads of the
process can proceed with concurrent page faults, which makes it better
than munmap as long as address space isn't scarce.
But the problem is that most allocators reuse that address space soon
afterwards, so applications pay a page fault, a page allocation, and
page zeroing if the allocator already called madvise_dontneed on the
address space.

To avoid those overheads, other OSes support MADV_FREE. The idea is
simply to mark pages as lazyfree when madvise is called and purge them
under memory pressure. Otherwise, the VM doesn't detach the pages from
the address space, so the application can reuse that memory without the
overheads above.

I tweaked jemalloc to use MADV_FREE for the testing.

diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
index 8a42e75..20e31af 100644
--- a/src/chunk_mmap.c
+++ b/src/chunk_mmap.c
@@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
 #  else
 #error "No method defined for purging unused dirty pages."
 #  endif
-   int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
+   int err = madvise(addr, length, 5);
unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
 #  undef JEMALLOC_MADV_PURGE
 #  undef JEMALLOC_MADV_ZEROS


RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)

(1.1) stands for 1 process and 1 thread so, for example,
(1.4) is 1 process and 4 threads.

vanilla jemalloc patched jemalloc

1.1   1.1
records:  5  records:  5
avg:  7404.60avg:  14059.80
std:  116.67(1.58%)  std:  93.92(0.67%)
max:  7564.00max:  14152.00
min:  7288.00min:  13893.00
1.4   1.4
records:  5  records:  5
avg:  16160.80   avg:  30173.00
std:  509.80(3.15%)  std:  3050.72(10.11%)
max:  16728.00   max:  33989.00
min:  15216.00   min:  25173.00
1.8   1.8
records:  5  records:  5
avg:  16003.00   avg:  30080.20
std:  290.40(1.81%)  std:  2063.57(6.86%)
max:  16537.00   max:  32735.00
min:  15727.00   min:  27381.00
4.1   4.1
records:  5  records:  5
avg:  4003.60avg:  8064.80
std:  65.33(1.63%)   std:  143.89(1.78%)
max:  4118.00max:  8319.00
min:  3921.00min:  7888.00
4.4   4.4
records:  5  records:  5
avg:  3907.40avg:  7199.80
std:  48.68(1.25%)   std:  80.21(1.11%)
max:  3997.00max:  7320.00
min:  3863.00min:  7113.00
4.8   4.8
records:  5  records:  5
avg:  3893.00avg:  7195.20
std:  19.11(0.49%)   std:  101.55(1.41%)
max:  3927.00max:  7309.00
min:  3869.00min:  7012.00
8.1   8.1
records:  5  records:  5
avg:  1942.00avg:  3602.80
std:  34.60(1.78%)   std:  22.97(0.64%)
max:  2010.00max:  3632.00
min:  1913.00min:  3563.00
8.4   8.4
records:  5  records:  5
avg:  1938.00avg:  3405.60
std:  32.77(1.69%)   std:  36.25(1.06%)
max:  1998.00max:  3468.00
min:  1905.00min:  3374.00
8.8   8.8
records:  5  records:  5
avg:  1977.80avg:  3434.20
std:  25.75(1.30%)   std:  57.95(1.69%)
max:  2011.00max:  3533.00
min:  1937.00min:  3363.00

So MADV_FREE is about 2x faster than MADV_DONTNEED in
every case.

I haven't tested much yet, but it's enough to show the concept and
direction before LSF/MM.

Patchset is based on 3.14-rc6.

Welcome any comment!

Minchan Kim (6):
  mm: clean up PAGE_MAPPING_FLAGS
  mm: work deactivate_page with anon pages
  mm: support madvise(MADV_FREE)
  mm: add stat about lazyfree pages
  mm: reclaim lazyfree pages in swapless system
  mm: ksm: don't merge lazyfree page

 include/asm-generic/tlb.h  |  9 
 include/linux/mm.h | 39 +-
 include/linux/mm_inline.h  |  9 
 include/linux/mmzone.h |  1 +
 include/linux/rmap.h   |  1 +
 include/linux/swap.h   | 15 +
 include/linux/vm_event_item.h  |  1 +
 include/uapi/asm-generic/mman-common.h |  1 +
 mm/ksm.c   | 18 +++-
 mm/madvise.c   | 17 +--
 mm/memory.c| 12 ++-
 mm/page_alloc.c|  5 -
 

[RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Minchan Kim
This patch is an attempt to support MADV_FREE for Linux.

Rationale is following as.

Allocators call munmap(2) when user call free(3) if ptr is
in mmaped area. But munmap isn't cheap because it have to clean up
all pte entries, unlinking a vma and returns free pages to buddy
so overhead would be increased linearly by mmaped area's size.
So they like madvise_dontneed rather than munmap.

dontneed holds read-side lock of mmap_sem so other threads
of the process could go with concurrent page faults so it is
better than munmap if it's not lack of address space.
But the problem is that most of allocator reuses that address
space soonish so applications see page fault, page allocation,
page zeroing if allocator already called madvise_dontneed
on the address space.

For avoidng that overheads, other OS have supported MADV_FREE.
The idea is just mark pages as lazyfree when madvise called
and purge them if memory pressure happens. Otherwise, VM doesn't
detach pages on the address space so application could use
that memory space without above overheads.

I tweaked jamalloc to use MADV_FREE for the testing.

diff --git a/src/chunk_mmap.c b/src/chunk_mmap.c
index 8a42e75..20e31af 100644
--- a/src/chunk_mmap.c
+++ b/src/chunk_mmap.c
@@ -131,7 +131,7 @@ pages_purge(void *addr, size_t length)
 #  else
 #error No method defined for purging unused dirty pages.
 #  endif
-   int err = madvise(addr, length, JEMALLOC_MADV_PURGE);
+   int err = madvise(addr, length, 5);
unzeroed = (JEMALLOC_MADV_ZEROS == false || err != 0);
 #  undef JEMALLOC_MADV_PURGE
 #  undef JEMALLOC_MADV_ZEROS


RAM 2G, CPU 4, ebizzy benchmark(./ebizzy -S 30 -n 512)

(1.1) stands for 1 process and 1 thread so for exmaple,
(1.4) is 1 process and 4 thread.

vanilla jemalloc patched jemalloc

1.1   1.1
records:  5  records:  5
avg:  7404.60avg:  14059.80
std:  116.67(1.58%)  std:  93.92(0.67%)
max:  7564.00max:  14152.00
min:  7288.00min:  13893.00
1.4   1.4
records:  5  records:  5
avg:  16160.80   avg:  30173.00
std:  509.80(3.15%)  std:  3050.72(10.11%)
max:  16728.00   max:  33989.00
min:  15216.00   min:  25173.00
1.8   1.8
records:  5  records:  5
avg:  16003.00   avg:  30080.20
std:  290.40(1.81%)  std:  2063.57(6.86%)
max:  16537.00   max:  32735.00
min:  15727.00   min:  27381.00
4.1   4.1
records:  5  records:  5
avg:  4003.60avg:  8064.80
std:  65.33(1.63%)   std:  143.89(1.78%)
max:  4118.00max:  8319.00
min:  3921.00min:  7888.00
4.4   4.4
records:  5  records:  5
avg:  3907.40avg:  7199.80
std:  48.68(1.25%)   std:  80.21(1.11%)
max:  3997.00max:  7320.00
min:  3863.00min:  7113.00
4.8   4.8
records:  5  records:  5
avg:  3893.00avg:  7195.20
std:  19.11(0.49%)   std:  101.55(1.41%)
max:  3927.00max:  7309.00
min:  3869.00min:  7012.00
8.1   8.1
records:  5  records:  5
avg:  1942.00avg:  3602.80
std:  34.60(1.78%)   std:  22.97(0.64%)
max:  2010.00max:  3632.00
min:  1913.00min:  3563.00
8.4   8.4
records:  5  records:  5
avg:  1938.00avg:  3405.60
std:  32.77(1.69%)   std:  36.25(1.06%)
max:  1998.00max:  3468.00
min:  1905.00min:  3374.00
8.8   8.8
records:  5  records:  5
avg:  1977.80avg:  3434.20
std:  25.75(1.30%)   std:  57.95(1.69%)
max:  2011.00max:  3533.00
min:  1937.00min:  3363.00

So MADV_FREE is about twice as fast as MADV_DONTNEED in
every case.

I didn't test extensively, but it's enough to show the concept and
direction before LSF/MM.

Patchset is based on 3.14-rc6.

Welcome any comment!

Minchan Kim (6):
  mm: clean up PAGE_MAPPING_FLAGS
  mm: work deactivate_page with anon pages
  mm: support madvise(MADV_FREE)
  mm: add stat about lazyfree pages
  mm: reclaim lazyfree pages in swapless system
  mm: ksm: don't merge lazyfree page

 include/asm-generic/tlb.h  |  9 
 include/linux/mm.h | 39 +-
 include/linux/mm_inline.h  |  9 
 include/linux/mmzone.h |  1 +
 include/linux/rmap.h   |  1 +
 include/linux/swap.h   | 15 +
 include/linux/vm_event_item.h  |  1 +
 include/uapi/asm-generic/mman-common.h |  1 +
 mm/ksm.c   | 18 +++-
 mm/madvise.c   | 17 +--
 mm/memory.c| 12 ++-
 mm/page_alloc.c|  5 -
 

Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Zhang Yanfei
Hello Minchan

On 03/14/2014 02:37 PM, Minchan Kim wrote:
 This patch is an attempt to support MADV_FREE for Linux.
 
 Rationale is following as.
 
 Allocators call munmap(2) when user call free(3) if ptr is
 in mmaped area. But munmap isn't cheap because it have to clean up
 all pte entries, unlinking a vma and returns free pages to buddy
 so overhead would be increased linearly by mmaped area's size.
 So they like madvise_dontneed rather than munmap.
 
 dontneed holds read-side lock of mmap_sem so other threads
 of the process could go with concurrent page faults so it is
 better than munmap if it's not lack of address space.
 But the problem is that most of allocator reuses that address
 space soonish so applications see page fault, page allocation,
 page zeroing if allocator already called madvise_dontneed
 on the address space.
 
 For avoidng that overheads, other OS have supported MADV_FREE.
 The idea is just mark pages as lazyfree when madvise called
 and purge them if memory pressure happens. Otherwise, VM doesn't
 detach pages on the address space so application could use
 that memory space without above overheads.

I didn't look into the code. Does this mean we just keep the vma,
the pte entries, and the page itself for later possible reuse? If so,
how can we reuse the vma? Would the kernel mark the vma as special
in some way so that it can be reused rather than unmapped? Do you
have an example of this reuse?

Another thing: when I searched for MADV_FREE on the internet, I saw
that Rik posted a similar patch in 2007, but that patch didn't
go into the upstream kernel.  And some explanation from Andrew:

--
 lazy-freeing-of-memory-through-madv_free.patch

 
lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch

 restore-madv_dontneed-to-its-original-linux-behaviour.patch



I think the MADV_FREE changes need more work:

We need crystal-clear statements regarding the present functionality,
the new functionality and how these relate to the spec and to
implementations in other OSes.  Once we have that info we are in a
position to work out whether the code can be merged as-is, or if
additional changes are needed.

Because right now, I don't know where we are with respect to these
things and I doubt if many of our users know either.  How can Michael
write a manpage for this if we don't tell him what it all does?
--

Thanks
Zhang Yanfei

 

Re: [RFC 0/6] mm: support madvise(MADV_FREE)

2014-03-14 Thread Minchan Kim
Hello Zhang,

On Fri, Mar 14, 2014 at 03:37:28PM +0800, Zhang Yanfei wrote:
 Hello Minchan
 
 On 03/14/2014 02:37 PM, Minchan Kim wrote:
  This patch is an attempt to support MADV_FREE for Linux.
  
  Rationale is following as.
  
  Allocators call munmap(2) when user call free(3) if ptr is
  in mmaped area. But munmap isn't cheap because it have to clean up
  all pte entries, unlinking a vma and returns free pages to buddy
  so overhead would be increased linearly by mmaped area's size.
  So they like madvise_dontneed rather than munmap.
  
  dontneed holds read-side lock of mmap_sem so other threads
  of the process could go with concurrent page faults so it is
  better than munmap if it's not lack of address space.
  But the problem is that most of allocator reuses that address
  space soonish so applications see page fault, page allocation,
  page zeroing if allocator already called madvise_dontneed
  on the address space.
  
  For avoidng that overheads, other OS have supported MADV_FREE.
  The idea is just mark pages as lazyfree when madvise called
  and purge them if memory pressure happens. Otherwise, VM doesn't
  detach pages on the address space so application could use
  that memory space without above overheads.
 
 I didn't look into the code. Does this mean we just keep the vma,
 the pte entries, and page itself for later possible reuse? If so,

We just clear the pte access bit and dirty bit, so the VM can notice
whether the user has dirtied the page since calling madvise(MADV_FREE).
If the page was dirtied, the VM must not purge it. Otherwise, the VM
can purge the page instead of swapping it out, and the user may later
see zeroed pages.

 how can we reuse the vma? The kernel would mark the vma kinds of
 special so that it can be reused other than unmapped? Do you have

I don't get it. Could you elaborate it a bit?

 an example about this reuse?

As I said, jemalloc and tcmalloc already support it on other OSes.

 
 Another thing is when I search MADV_FREE in the internet, I see that
 Rik posted the similar patch in 2007 but that patch didn't
 go into the upstream kernel.  And some explanation from Andrew:
 
 --
  lazy-freeing-of-memory-through-madv_free.patch
 
  
 lazy-freeing-of-memory-through-madv_free-vs-mm-madvise-avoid-exclusive-mmap_sem.patch
 
  restore-madv_dontneed-to-its-original-linux-behaviour.patch
 
 
 
 I think the MADV_FREE changes need more work:
 
 We need crystal-clear statements regarding the present functionality,
 the new functionality and how these relate to the spec and to
 implementations in other OSes.  Once we have that info we are in a
 position to work out whether the code can be merged as-is, or if
 additional changes are needed.
 
 Because right now, I don't know where we are with respect to these
 things and I doubt if many of our users know either.  How can Michael
 write a manpage for this if we don't tell him what it all does?
 --

True. I need more documentation and will write it if everybody agrees
on this new feature.

Thanks.

 
 Thanks
 Zhang Yanfei
 
  