lly cause compaction to skip a range of
pages.
Only assign zone->compact_cached_free_pfn if we actually
isolated free pages for compaction.
Split out the calculation to get the start of the last pageblock
in a zone into its own, commented function.
Signed-off-by: Rik van Riel
---
include
On 07/12/2012 02:50 PM, Andrea Arcangeli wrote:
On Mon, Jul 02, 2012 at 12:24:36AM -0400, Rik van Riel wrote:
On 06/28/2012 08:56 AM, Andrea Arcangeli wrote:
If any of the ptes that khugepaged is collapsing was a pte_numa, the
resulting trans huge pmd will be a pmd_numa too.
Why?
If some of
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
At LSF/MM, there was a presentation comparing Peter's
NUMA code with Andrea's NUMA code. I believe this is
the main reason why Andrea's code performed better in
that particular test...
+ if (sched_feat(NUMA_BALANCE_FILTER)) {
+
that it's an exclusive write-lock in
| that case - suggested by Rik van Riel.
But that commit renames only anon_vma_lock()
Signed-off-by: Konstantin Khlebnikov
Cc: Ingo Molnar
Cc: Rik van Riel
Reviewed-by: Rik van Riel
--
All rights reversed
change, provided it works for Luigi :)
Acked-by: Rik van Riel
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
not solve the PAE OOM issue.)
Paul Szabo p...@maths.usyd.edu.au http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics, University of Sydney, Australia
Reported-by: Paul Szabo
Reference: http://bugs.debian.org/695182
Signed-off-by: Paul Szabo
Acked-by: Rik van Riel
On 01/22/2013 06:13 PM, Michel Lespinasse wrote:
Because of these limitations, the MCS queue spinlock implementation does
not always compare favorably to ticket spinlocks under moderate contention.
This alternative queue spinlock implementation has some nice properties:
- One single atomic ope
On 02/13/2013 11:20 AM, Linus Torvalds wrote:
On Wed, Feb 13, 2013 at 4:06 AM, tip-bot for Rik van Riel
wrote:
x86/smp: Move waiting on contended ticket lock out of line
Moving the wait loop for contended locks to its own function
allows us to add things to that wait loop, without growing
On 02/13/2013 02:36 PM, Linus Torvalds wrote:
On Wed, Feb 13, 2013 at 11:08 AM, Rik van Riel wrote:
The spinlock backoff code prevents these last cases from
experiencing large performance regressions when the hardware
is upgraded.
I still want *numbers*.
There are real cases where backoff
On 02/13/2013 05:40 PM, Linus Torvalds wrote:
On Wed, Feb 13, 2013 at 2:21 PM, Rik van Riel wrote:
What kind of numbers would you like?
Numbers showing that the common case is not affected by this
code?
Or numbers showing that performance of something is improved
with this code?
Of course
for the problem merged upstream.
-Original Message-
From: linux-kernel-ow...@vger.kernel.org
[mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of dormando
Sent: Monday, February 11, 2013 9:01 PM
To: Rik van Riel
Cc: Randy Dunlap; Satoru Moriya; linux-kernel@vger.kernel.org;
linux
On 10/11/2012 01:59 PM, Andrea Arcangeli wrote:
On Thu, Oct 11, 2012 at 01:34:12PM -0400, Rik van Riel wrote:
That is indeed a future optimization I have suggested
in the past. Allocation of this struct could be deferred
until the first time knuma_scand unmaps pages from the
process to
possible that more pages than necessary are isolated but the check
still fails and I missed that this fix was not picked up before RC1. This
same problem has been identified in 3.7-RC1 by Tony Prisk and should be
addressed by the following patch.
Signed-off-by: Mel Gorman
Tested-by: Tony Prisk
On 11/20/2012 08:54 PM, Andrew Theurer wrote:
I can confirm single JVM JBB is working well for me. I see a 30%
improvement over autoNUMA. What I can't make sense of is some perf
stats (taken at 80 warehouses on 4 x WST-EX, 512GB memory):
AutoNUMA does not have native THP migration, that may
On 11/21/2012 12:02 PM, Linus Torvalds wrote:
The same is true of all your arguments about Mel's numbers wrt THP
etc. Your arguments are misleading - either intentionally, or because
you yourself didn't think things through. For schednuma, it's not
enough to be par with mainline with THP off - t
On 11/21/2012 02:15 PM, Mel Gorman wrote:
On Wed, Nov 21, 2012 at 07:25:37PM +0100, Ingo Molnar wrote:
As mentioned in my other mail, this patch of yours looks very
similar to the numa/core commit attached below, mostly written
by Peter:
30f93abc6cb3 sched, numa, mm: Add the scanning page
On 11/22/2012 10:53 AM, Fengguang Wu wrote:
Ah it's more likely caused by this logic:
if (is_active_lru(lru)) {
if (inactive_list_is_low(mz, file))
shrink_active_list(nr_to_scan, mz, sc, priority, file);
The active file list won't be scanned a
On 11/25/2012 05:44 PM, Johannes Weiner wrote:
On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:
On Sun, 25 Nov 2012 17:57:28 +0100
Johannes Hirte wrote:
With kernel 3.7-rc6 I've still problems with kswapd0 on my laptop
And this is most of the time. I've only obs
On Sun, 25 Nov 2012 17:44:33 -0500
Johannes Weiner wrote:
> On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:
> > Could you try this patch?
>
> It's not quite enough because it's not reaching the conditions you
> changed, see analysis in https://
On 01/10/2013 12:36 PM, Raghavendra K T wrote:
* Rafael Aquini [2013-01-10 00:27:23]:
On Wed, Jan 09, 2013 at 06:20:35PM +0530, Raghavendra K T wrote:
I ran kernbench on 32 core (mx3850) machine with 3.8-rc2 base.
x base_3.8rc2
+ rik_backoff
N       Min       Max       Median
On 01/14/2013 01:24 PM, Andrew Clayton wrote:
On Mon, 14 Jan 2013 15:27:36 +0200, Gleb Natapov wrote:
On Sun, Jan 13, 2013 at 10:29:58PM +, Andrew Clayton wrote:
When running qemu-kvm under 64bit Fedora 16 under current 3.8, it
just hangs at start up. Doing a ps -ef hangs the process at the
an 4k pages. We would potentially
need to be able to handle all the page sizes that we use for
the kernel linear mapping (4k, 2M, 1G).
Acked-by: Rik van Riel
kernel paging request at
I eventually traced it down to the KVM async pagefault code.
This can be worked around by disabling that code either at
compile-time, or on the kernel command-line.
Acked-by: Rik van Riel
. Fix that.
Reported-by: Andrew Clayton
Reported-by: Zlatko Calusic
Tested-by: Andrew Clayton
Signed-off-by: Jiri Kosina
Reviewed-by: Rik van Riel
On 12/12/2012 04:43 PM, Johannes Weiner wrote:
dc0422c "mm: vmscan: only evict file pages when we have plenty" makes
a point of not going for anonymous memory while there is still enough
inactive cache around.
The check was added only for global reclaim, but it is just as useful
for memory cgrou
claim: anonymous pages are already force-scanned when there is only
very little file cache left, and there very likely isn't when the
reclaimer enters this final cycle.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
busy work trying to isolate and reclaim pages
that are not there.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
for the last reclaim cycle.
Signed-off-by: Johannes Weiner
Nice cleanup!
Reviewed-by: Rik van Riel
On 12/12/2012 04:43 PM, Johannes Weiner wrote:
Fix comment style and elaborate on why anonymous memory is
force-scanned when file cache runs low.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
declarations/definitions in order.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
the zone level and restart reclaim for all memory
cgroups in a zone when compaction requires more free pages from it.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
in the KSM copy code.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
On 12/14/2012 03:37 AM, Michal Hocko wrote:
I can answer the later. Because memsw comes with its price and
swappiness is much cheaper. On the other hand it makes sense that
swappiness==0 doesn't swap at all. Or do you think we should get back to
_almost_ doesn't swap at all?
swappiness==0 will
: Sivaram Nair
Reviewed-by: Rik van Riel
2.86%)
THP faults 237.20 ( +0.00%) 242.40 ( +2.18%)
THP collapse 241.20 ( +0.00%) 248.50 ( +3.01%)
THP splits 157.30 ( +0.00%) 161.40 ( +2.59%)
Signed-off-by: Johannes Weiner
Acked-by: Michal Hocko
Acked-by: Rik van Riel
organized. At least make it apparent in the code flow
and document the conditions. It will make it easier to come up with
sane semantics later.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
measured speed of memory access inside of KVM guests with memory pinned
to one of nodes with this benchmark:
Acked-by: Rik van Riel
On Sun, 24 Feb 2008 04:08:38 +0100
"J.C. Pizarro" <[EMAIL PROTECTED]> wrote:
> We will need 64 bit counters of the slow context switches,
> one counter for each new created task (e.g. u64 ctxt_switch_counts;)
Please send a patch ...
> I will explain your later why of it.
... and explain exact
On Sun, 24 Feb 2008 05:08:46 +0100
"J.C. Pizarro" <[EMAIL PROTECTED]> wrote:
OK, one last reply on the (overly optimistic?) assumption that you are not a
troll.
> +++ linux-2.6_git-20080224/include/linux/sched.h2008-02-24
> 04:50:18.0 +0100
> @@ -1007,6 +1007,12 @@
> stru
Signed-off-by: Ingo Molnar
Acked-by: Rik van Riel
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
Reasonable idea, but we need something else than a blind
unmap and add to swap space, which requires people to run
with gigantic amounts of swap space they will likely never
use.
I suspect that Andrea's _PAGE_NUMA stuff could be implemented
using _PA
On 03/23/2012 07:50 AM, Mel Gorman wrote:
On Fri, Mar 16, 2012 at 03:40:31PM +0100, Peter Zijlstra wrote:
From: Lee Schermerhorn
This patch adds another mbind() flag to request "lazy migration".
The flag, MPOL_MF_LAZY, modifies MPOL_MF_MOVE* such that the selected
pages are simply unmapped from
On 07/06/2012 04:04 PM, Lee Schermerhorn wrote:
On Fri, 2012-07-06 at 12:38 -0400, Rik van Riel wrote:
4. Putting a lot of pages in the swap cache ends up allocating
swap space. This means this NUMA migration scheme will only
work on systems that have a substantial amount of memory
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
+/*
+ * Assumes symmetric NUMA -- that is, each node is of equal size.
+ */
+static void set_max_mem_load(unsigned long load)
+{
+ unsigned long old_load;
+
+ spin_lock(&max_mem_load.lock);
+ old_load = max_mem_load.load;
+ if
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
+static u64 process_cpu_runtime(struct numa_entity *ne)
+{
+ struct task_struct *p, *t;
+ u64 runtime = 0;
+
+ rcu_read_lock();
+ t = p = ne_owner(ne);
+ if (p) do {
+ runtime += t->se.sum_exec_runtime; //
On 03/16/2012 10:40 AM, Peter Zijlstra wrote:
+static bool can_move_ne(struct numa_entity *ne)
+{
+ /*
+* XXX: consider mems_allowed, stinking cpusets has mems_allowed
+* per task and it can actually differ over a whole process, la-la-la.
+*/
+ return true;
+}
in the dozens-of-millisecond timeframes.
And in some workloads, TLB flush overhead is very heavy. In my simple
multithread app with a lot of swap to several pcie SSD, removing the tlb flush
gives about 20% ~ 30% swapout speedup.
Signed-off-by: Shaohua Li
Reviewed-by: Rik van Riel
On 01/08/2013 12:03 AM, H. Peter Anvin wrote:
On 01/07/2013 08:55 PM, Shaohua Li wrote:
I searched a little bit, the change (doing TLB flush to clear access bit) is
made between 2.6.7 - 2.6.8, I can't find the changelog, but I found a patch:
http://www.kernel.org/pub/linux/kernel/people/akpm/pa
On 01/08/2013 12:09 AM, H. Peter Anvin wrote:
On 01/07/2013 09:08 PM, Rik van Riel wrote:
On 01/08/2013 12:03 AM, H. Peter Anvin wrote:
On 01/07/2013 08:55 PM, Shaohua Li wrote:
I searched a little bit, the change (doing TLB flush to clear access
bit) is
made between 2.6.7 - 2.6.8, I can
Many spinlocks are embedded in data structures; having many CPUs
pounce on the cache line the lock is in will slow down the lock
holder, and can cause system performance to fall off a cliff.
The paper "Non-scalable locks are dangerous" is a good reference:
http://pdos.csail.mit.edu/papers
Mbits/s with same bench, so an increase
of 45 % instead of a 13 % regression.
Signed-off-by: Eric Dumazet
Signed-off-by: Rik van Riel
---
arch/x86/kernel/smp.c | 22 +++---
1 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kerne
automatically tunes the delay value.
Signed-off-by: Rik van Riel
Signed-off-by: Michel Lespinasse
---
arch/x86/kernel/smp.c | 23 ---
1 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 20da354..aa743e9 100644
this form.
Not-signed-off-by: Rik van Riel
Not-signed-off-by: Eric Dumazet
---
arch/x86/kernel/smp.c | 8
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 1877890..d80aee7 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch
Moving the wait loop for contended locks to its own function allows
us to add things to that wait loop, without growing the size of the
kernel text appreciably.
Signed-off-by: Rik van Riel
Reviewed-by: Steven Rostedt
Reviewed-by: Michel Lespinasse
Reviewed-by: Rafael Aquini
---
v2: clean up
n.
Signed-off-by: Rik van Riel
---
v3: use fixed-point math for the delay calculations, suggested by Michel
Lespinasse
arch/x86/kernel/smp.c | 43 +++
1 files changed, 39 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/ke
On 01/08/2013 05:50 PM, Eric Dumazet wrote:
On Tue, 2013-01-08 at 17:32 -0500, Rik van Riel wrote:
Subject: x86,smp: proportional backoff for ticket spinlocks
Simple fixed value proportional backoff for ticket spinlocks.
By pounding on the cacheline with the spin lock less often,
bus traffic
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the parisc arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the alpha arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the frv arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the ia64 arch_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the ia64 hugetlb_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
. Next one will convert that function to use the
vm_unmapped_area() infrastructure and regain the performance.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
n
Cc: Trond Myklebust
Cc: linux-...@vger.kernel.org
Cc: Rik van Riel
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Update the powerpc slice_get_unmapped_area function to make use of
vm_unmapped_area() instead of implementing a brute force search.
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/08/2013 08:28 PM, Michel Lespinasse wrote:
Since all architectures have been converted to use vm_unmapped_area(),
there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse
Yay
Acked-by: Rik van Riel
up in the future if
somebody changes the algorithm and forgets to update one of the
copies :-)
All right, does the following look more palatable then ?
(didn't re-test it, though)
Looks equivalent. I have also not tested :)
Signed-off-by: Michel Lespinasse
Acked-by: Rik van Riel
On 01/10/2013 08:01 AM, Michel Lespinasse wrote:
On Tue, Jan 8, 2013 at 2:31 PM, Rik van Riel wrote:
From: Eric Dumazet
Eric Dumazet found a regression with the first version of the spinlock
backoff code, in a workload where multiple spinlocks were contended,
each having a different wait
On 01/10/2013 10:19 AM, Mike Galbraith wrote:
On Tue, 2013-01-08 at 17:26 -0500, Rik van Riel wrote:
Please let me know if you manage to break this code in any way,
so I can fix it...
I didn't break it, but did let it play with rq->lock contention. Using
cyclictest -Smp99 -i 100 -d
Hi,
this will be my last email to linux-kernel for a while since
davem and matti are using DUL on vger.kernel.org
If you need to know something, don't count on me posting
anything here. For memory management things, please use
[EMAIL PROTECTED] instead.
Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...
On Sun, 8 Apr 2001, Matti Aarnio wrote:
> The incentive behind the DUL is to force users not to post
> straight out to the world, but to use their ISP's servers
> for outbound email --- normal M$ users do that, after all.
> Only spammers - and UNIX powerusers - want to pos
On Sun, 8 Apr 2001, David S. Miller wrote:
> Rik van Riel writes:
> > Anyway, since linux-kernel has chosen to not receive email from me
>
> Funny how this posting went through then...
>
> If it is specifically when you are sending mail from some other place,
> stat
On Tue, 10 Apr 2001, Matti Aarnio wrote:
> Dave said "remove DUL", I did that.
> VGER uses now RBL and RSS, no others.
Thanks !
To come back to the spamfilter promise I made some time ago,
people can now get a CVS tree with spam regular expressions
and a script to generate a majo
On Mon, 9 Apr 2001, Joseph Carter wrote:
> On Tue, Apr 10, 2001 at 01:00:08AM +0300, Matti Aarnio wrote:
> > Dave said "remove DUL", I did that.
> >
> > VGER uses now RBL and RSS, no others.
>
> Thank you, I don't believe there is anyone on this list who is likely
> to object to these
On 10 Apr 2001, Richard Russon wrote:
> VM: Undead swap entry 000bb300
> VM: Undead swap entry 00abb300
> VM: Undead swap entry 016fb300
Known bug ... unknown cause ;(
http://www.linux-mm.org/bugzilla.shtml has it already listed
regards,
Rik
How
On Mon, 9 Apr 2001, george anzinger wrote:
> SodaPop wrote:
> >
> > I too have noticed that nicing processes does not work nearly as
> > effectively as I'd like it to. I run on an underpowered machine,
> > and have had to stop running things such as seti because it steals too
> > much cpu time,
On Tue, 10 Apr 2001, Alan Cox wrote:
> > Any time I start injecting lots of mail into the qmail queue, *one* of the
> > two processors gets pegged at 99%, and it takes forever for anything typed
> > at the console to actually appear (just as you describe). But I don't see
>
> Yes I've seen this
On Tue, 10 Apr 2001, Rik van Riel wrote:
> I'll try to come up with a recalculation change that will make
> this thing behave better, while still retaining the short time
> slices for multiple normal-priority tasks and the cache footprint
> schedule() and friends currently hav
On Wed, 11 Apr 2001, Rik van Riel wrote:
> OK, here it is. It's nothing like montavista's singing-dancing
> scheduler patch that does all, just a really minimal change that
> should stretch the nice levels to yield the following CPU usage:
>
> Nice    0    5    10    15
On Wed, 11 Apr 2001, Miles Lane wrote:
> Matti Aarnio wrote:
> > Proper place to do this discussion is [EMAIL PROTECTED]
>
> It sounds good in theory. In practice, though, almost all of the
> design discussions have been occurring in private e-mail.
Actually, I tried to setup a mailing list
On Wed, 11 Apr 2001, Jon Eisenstein wrote:
> (2) Every so often, I get a non-fatal error on my screen about a
> kernel paging request error.
If it's usually the same address, we're probably dealing with
a kernel bug. If you always get different addresses, chances
are your RAM is broken (you can
On Thu, 12 Apr 2001, Ed Tomlinson wrote:
> I have been playing around with patches that fix this problem. What
> seems to happen is that the VM code is pretty efficient at avoiding the
> calls to shrink the caches. When they do get called it's a case of too
> little too late. This is especially bad
On Thu, 12 Apr 2001, Alexander Viro wrote:
> On Thu, 12 Apr 2001, Jan Harkes wrote:
>
> > But the VM pressure on the dcache and icache only comes into play once
> > the system still has a free_shortage _after_ other attempts of freeing
> > up memory in do_try_to_free_pages.
>
> I don't think tha
On Thu, 12 Apr 2001, Alexander Viro wrote:
> IOW. keeping dcache/icache size low is not a good thing, unless you
> have a memory pressure that requires it. More aggressive kupdate _is_
> a good thing, though - possibly kupdate sans flushing buffers, so that
> it would just keep the icache clean an
On Thu, 12 Apr 2001, Alan Cox wrote:
> > 2.4.3-pre6 quietly made a very significant change there:
> > it used to say "if (!order) goto try_again;" and now just
> > says "goto try_again;". Which seems very sensible since
> > __GFP_WAIT is set, but I do wonder if it was a safe change.
> > We have
On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
> This should fix it
>
> --- mm/page_alloc.c.orig Thu Apr 12 13:47:53 2001
> +++ mm/page_alloc.c    Thu Apr 12 13:48:06 2001
> @@ -454,7 +454,7 @@
> if (gfp_mask & __GFP_WAIT) {
> memory_pressure++;
>
On Thu, 12 Apr 2001, Szabolcs Szakacsits wrote:
> On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
>
> > This patch is broken, ignore it.
> > Just removing wakeup_bdflush() is indeed correct.
> > We already wakeup bdflush at try_to_free_buffers() anyway.
>
> I still feel a bit unconfortable about pro
On Thu, 12 Apr 2001, Szabolcs Szakacsits wrote:
> You mean without dropping out_of_memory() test in kswapd and calling
> oom_kill() in page fault [i.e. without additional patch]?
No. I think it's ok for __alloc_pages() to call oom_kill()
IF we turn out to be out of memory, but that should not e
On Thu, 12 Apr 2001, Adam J. Richter wrote:
> I have attached the patch below. I have also adjusted the
> comment describing the code. Please let me know if this hand waving
> explanation is sufficient. I'm trying to be lazy and not do a
> measurement project to justify this relatively si
On Fri, 13 Apr 2001, Linus Torvalds wrote:
> On Sat, 14 Apr 2001, Rik van Riel wrote:
> >
> > Also, have you managed to find a real difference with this?
>
> It actually makes a noticeable difference on lmbench, so I think adam is
> 100% right.
>
> > If it turn
On Sat, 14 Apr 2001, Linus Torvalds wrote:
> On Sat, 14 Apr 2001, Adam J. Richter wrote:
> >
> > [...]
> > >If it turns out to be beneficial to run the child first (you
> > >can measure this), why not leave everything the same as it is
> > >now but have do_fork() "switch threads" internally ?
> >
On Sat, 14 Apr 2001, Marcelo Tosatti wrote:
> There is a nasty race between shmem_getpage_locked() and
> swapin_readahead() with the new shmem code (introduced in
> 2.4.3-ac3 and merged in the main tree in 2.4.4-pre3):
> I don't see any clean fix for this one.
> Suggestions ?
As we discussed wi
On Sat, 14 Apr 2001, George Bonser wrote:
> 2.4.4pre3 works, sorta, but is very "pumpy". The load avg will go up to
> about 60, then drop, then climb again, then drop. It will vary from very
> sluggish performance to snappy and back again to sluggish.
So it's stable ;))
> With 2.2 kernels I see
On Thu, 12 Apr 2001, Pavel Machek wrote:
> > One rule of optimization is to move any code you can outside the loop.
> > Why isn't the nice_to_ticks calculation done when nice is changed
> > instead of EVERY recalc.? I guess another way to ask this is, who needs
>
> This way change is localized
On Mon, 16 Apr 2001, gis88530 wrote:
> Does linux kernel swap data out to disk?
> or It just reside in the physical memory.
The Linux kernel always resides in physical memory.
Rik
Hi,
2.4.3-ac4 seems to work great on my test box (UP K6-2 with SCSI
disk), but 2.4.3-ac6 and 2.4.3-ac7 hang pretty hard when I try
to access any of the logical volumes on my test box.
The following changelog entry in Linus' changelog suggests me
whom to bother: ;)
- Jens Axboe: LVM and loop f
On Tue, 17 Apr 2001, Dave Zarzycki wrote:
> On Tue, 17 Apr 2001 [EMAIL PROTECTED] wrote:
> ^^
>
> Arrggg!!! Mumble... grumble... F*cking spammer using my hostname as the
> from address for sending spam...
Funny, I saw a "From: [EMAIL PROTECTED]" ...
regards,
On Wed, 18 Apr 2001, Laurent Chavet wrote:
> Try this (my example I've 2GB of ram)
>
> turn all your swap off
>
> dd about 15% of the size of your RAM:
> dd if=/dev/zero of=/local/test count=300 bs=100
>
> Run this program with SIZE about 95% of your RAM:
>
> #include
> #include
> #include
On Thu, 19 Apr 2001, Daniel Phillips wrote:
> OK, now I know what's happening, the next question is, what should be
> dones about it. If anything.
[ discovered by alexey on #kernelnewbies ]
One thing we should do is make sure the buffer cache code sets
the referenced bit on pages, so we don't
On Thu, 19 Apr 2001, Daniel Phillips wrote:
> Jan Harkes wrote:
> > On Thu, Apr 19, 2001 at 02:27:48AM +0200, Daniel Phillips wrote:
> > > more memory. If you have enough memory, the inode cache won't thrash,
> > > and even when it does, it does so gracefully - performance falls off
> > > nice an
On Wed, 18 Apr 2001, Alexander Viro wrote:
> Sorry, but that's just plain wrong. We shouldn't keep inode table in
> buffer-cache at all.
Then tell me, how exactly DO you plan to do write clustering
of inodes when you want to flush them to disk ?
If you don't keep them in the buffer cache for a