Paul Mackerras wrote:
Rik van Riel writes:
I guess we'll need to call tlb_remove_tlb_entry() inside the
MADV_FREE code to keep powerpc happy.
I don't see why; once ptep_test_and_clear_young has returned, the
entry in the hash table has already been removed.
OK, so this one won't be
Rik van Riel writes:
> I guess we'll need to call tlb_remove_tlb_entry() inside the
> MADV_FREE code to keep powerpc happy.
I don't see why; once ptep_test_and_clear_young has returned, the
entry in the hash table has already been removed. Adding the
tlb_remove_tlb_entry call certainly won't do
On Mon, 23 Apr 2007 22:53:49 -0400 Rik van Riel <[EMAIL PROTECTED]> wrote:
> I don't see why we need the attached, but in case you find
> a good reason, here's my signed-off-by line for Andrew :)
Andrew is in a defensive crouch trying to work his way through all the bugs
he's been sent. After
Nick Piggin wrote:
What the tlb flush used to be able to assume is that the page
has been removed from the pagetables when they are put in the
tlb flush batch.
I think this is still the case, to a degree. There should be
no harm in removing the TLB entries after the page table has
been
Rik van Riel wrote:
This should fix the MADV_FREE code for PPC's hashed tlb.
Signed-off-by: Rik van Riel <[EMAIL PROTECTED]>
---
Nick Piggin wrote:
Nick Piggin wrote:
3) because of this, we can treat any such accesses as
happening simultaneously with the MADV_FREE and
as illegal, aka undefined behaviour
Rik van Riel wrote:
Use TLB batching for MADV_FREE. Adds another 10-15% extra performance
to the MySQL sysbench results on my quad core system.
Signed-off-by: Rik van Riel <[EMAIL PROTECTED]>
---
Nick Piggin wrote:
3) because of this, we can treat any such accesses as
happening
Rik van Riel wrote:
First some ebizzy runs...
This is interesting. Ginormous speedups in ebizzy[1] on my quad core
test system. The following numbers are the average of 10 runs, since
ebizzy shows some variability.
You can see a big influence from the tlb batching and from Nick's
madv_sem
On Mon, Apr 23, 2007 at 08:21:37PM +1000, Nick Piggin wrote:
> I guess it is a good idea to batch these things. But can you
> do that on all architectures? What happens if your tlb flush
> happens after another thread already accesses it again, or
> after it subsequently gets removed from the
Rik van Riel wrote:
Nick Piggin wrote:
It looks like the tlb flushes (and IPIs) from zap_pte_range()
could have been the problem. They're gone now.
I guess it is a good idea to batch these things. But can you
do that on all architectures? What happens if your tlb flush
happens after
Nick Piggin wrote:
It looks like the tlb flushes (and IPIs) from zap_pte_range()
could have been the problem. They're gone now.
I guess it is a good idea to batch these things. But can you
do that on all architectures? What happens if your tlb flush
happens after another thread already
Rik van Riel wrote:
I've added a 5th column, with just your mmap_sem patch and
without my
Nick Piggin wrote:
I haven't tested your MADV_FREE patch yet.
Good. It turned out that one behaved a bit strange without tlb batching
anyway.
I'm now running ebizzy across the whole set of kernels I tested before,
and will post the results in a bit.
--
Politics is the struggle between
Nick Piggin wrote:
Rik van Riel wrote:
I've added a 5th column, with just your mmap_sem patch and
without my madv_free patch. It is run with the glibc patch,
which should make it fall back to MADV_DONTNEED after the
first MADV_FREE call fails.
Thanks! (I edited slightly so it doesn't wrap)
Jakub Jelinek wrote:
On Fri, Apr 20, 2007 at 07:52:44PM -0400, Rik van Riel wrote:
It turns out that Nick's patch does not improve peak
performance much, but it does prevent the decline when
running with 16 threads on my quad core CPU!
We _definitely_ want both patches, there's a huge benefit
Nick Piggin wrote:
So where is the down_write coming from in this workload, I wonder?
Heap management? What syscalls?
Trying to answer this question, I straced the mysql threads that
showed up in top when running a single threaded sysbench workload.
There were no mmap, munmap, brk, mprotect
Rik van Riel wrote:
I've added a 5th column, with just your mmap_sem patch and
without my madv_free patch. It is run with the glibc patch,
which should make it fall back to MADV_DONTNEED after the
first MADV_FREE call fails.
Thanks! (I edited slightly so it doesn't wrap)
vanilla new
Rik van Riel wrote:
Nick Piggin wrote:
Rik van Riel wrote:
Nick Piggin wrote:
Rik van Riel wrote:
Here are the transactions/seconds for each combination:
I've added a 5th column, with just your mmap_sem patch and
without my madv_free patch. It is run with the glibc patch,
which should
Nick Piggin wrote:
Rik van Riel wrote:
Nick Piggin wrote:
Rik van Riel wrote:
Here are the transactions/seconds for each combination:
I've added a 5th column, with just your mmap_sem patch and
without my madv_free patch. It is run with the glibc patch,
which should make it fall back to
Rik van Riel wrote:
Nick Piggin wrote:
Rik van Riel wrote:
Here are the transactions/seconds for each combination:
threads  vanilla  new glibc  madv_free kernel  madv_free + mmap_sem
1        610      609        596               545
2        1032     1136       1196
On 4/22/07, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
Why isn't MADV_FREE defined to 5 for linux? It's our first free madv
value? Also the behaviour should better match the one in solaris or BSD,
the last thing we need is slightly different behaviour from operating
systems supporting this
On Sun, Apr 22, 2007 at 01:18:10AM -0700, Andrew Morton wrote:
> On Tue, 17 Apr 2007 03:15:51 -0400 Rik van Riel <[EMAIL PROTECTED]> wrote:
>
> > Make it possible for applications to have the kernel free memory
> > lazily. This reduces a repeated free/malloc cycle from freeing
> > pages and
On Tue, 17 Apr 2007 03:15:51 -0400 Rik van Riel <[EMAIL PROTECTED]> wrote:
> Make it possible for applications to have the kernel free memory
> lazily. This reduces a repeated free/malloc cycle from freeing
> pages and allocating them, to just marking them freeable. If the
> application wants
Nick Piggin wrote:
Rik van Riel wrote:
Andrew Morton wrote:
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that your
Rik van Riel wrote:
Andrew Morton wrote:
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that your
patch
no longer
Hugh Dickins wrote:
On Fri, 20 Apr 2007, Rik van Riel wrote:
Andrew Morton wrote:
I do go on about that. But we're adding page flags at about one per
year, and when we run out we're screwed - we'll need to grow the
pageframe.
If you want, I can take a look at folding this into the
On 4/21/07, Hugh Dickins <[EMAIL PROTECTED]> wrote:
But the Linux MADV_DONTNEED does throw away
data from a PROT_WRITE,MAP_PRIVATE mapping (or brk or stack) - those
changes are discarded, and a subsequent access will revert to zeroes
or the underlying mapped file. Been like that since before
On Fri, 20 Apr 2007, Ulrich Drepper wrote:
>
> Just for reference: the MADV_CURRENT behavior is to throw away data in
> the range.
Not exactly. The Linux MADV_DONTNEED never throws away data from a
PROT_WRITE,MAP_SHARED mapping (or shm) - it propagates the dirty bit,
the page will eventually
Eric Dumazet wrote:
Rik van Riel wrote:
Andrew Morton wrote:
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that
Rik van Riel wrote:
Andrew Morton wrote:
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that your
patch
no longer
Andrew Morton wrote:
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that your patch
no longer offers a 2x speedup when
On Fri, 20 Apr 2007 17:38:06 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
> Andrew Morton wrote:
>
> > I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
> >
> > - Nick's patch also will help this problem. It could be that your patch
> > no longer offers a 2x speedup when
Andrew Morton wrote:
I've also merged Nick's "mm: madvise avoid exclusive mmap_sem".
- Nick's patch also will help this problem. It could be that your patch
no longer offers a 2x speedup when combined with Nick's patch.
It could well be that the combination of the two is even better, but
On 4/20/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
OK, we need to flesh this out a lot please. People often get confused
about what our MADV_DONTNEED behaviour is.
Well, there's not really much to flesh out. The current MADV_DONTNEED
is useful in some situations. The behavior cannot be
On Thu, 19 Apr 2007 17:15:28 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
> Restore MADV_DONTNEED to its original Linux behaviour. This is still
> not the same behaviour as POSIX, but applications may be depending on
> the Linux behaviour already. Besides, glibc catches POSIX_MADV_DONTNEED
>
On Tue, 17 Apr 2007 03:15:51 -0400
Rik van Riel <[EMAIL PROTECTED]> wrote:
> Make it possible for applications to have the kernel free memory
> lazily. This reduces a repeated free/malloc cycle from freeing
> pages and allocating them, to just marking them freeable. If the
> application wants
Restore MADV_DONTNEED to its original Linux behaviour. This is still
not the same behaviour as POSIX, but applications may be depending on
the Linux behaviour already. Besides, glibc catches POSIX_MADV_DONTNEED
and makes sure nothing is done...
Signed-off-by: Rik van Riel <[EMAIL PROTECTED]>
Make it possible for applications to have the kernel free memory
lazily. This reduces a repeated free/malloc cycle from freeing
pages and allocating them, to just marking them freeable. If the
application wants to reuse them before the kernel needs the memory,
not even a page fault will happen.