On Mon, Jan 07, 2008 at 12:04:06PM -0800, Christoph Lameter wrote:
> Here is the cleaned version of the patch. Dhaval is testing it.
>
>
> quicklists: Only consider memory that can be used with GFP_KERNEL
>
> The quicklist code calculates the size of the quicklists based on the
> number of free pages. This must be the number of free pages that can be
> allocated with GFP_KERNEL.
On Thu, 3 Jan 2008, Dhaval Giani wrote:
> Yes, no oom even after 20 mins of running (which is double the normal
> time for the oom to occur), also no changes in free lowmem.

Ahhh.. Good. Then let's redo the patchset the right way (the patch so far
does not address the ZONE_MOVABLE issues).
On Thu, Jan 03, 2008 at 09:29:42AM +0530, Dhaval Giani wrote:
> On Wed, Jan 02, 2008 at 01:54:12PM -0800, Christoph Lameter wrote:
> > Just traced it again on my system: It is okay for the number of pages on
> > the quicklist to reach the high count that we see (although the 16-bit
> > limits are weird. You have around 4GB of memory in the system?). Up to
> > 1/16th of the free memory of a node can be allocated for quicklists.
On Sun, 30 Dec 2007, Ingo Molnar wrote:
> so we still don't seem to understand the failure mode well enough. This
> also looks like a quite dangerous change so late in the v2.6.24 cycle.
> Does it really fix the OOM? If yes, why exactly?

Not exactly sure. I suspect that there is some memory
On Fri, 28 Dec 2007, Dhaval Giani wrote:
> we managed to get your required information. The last 10,000 lines are
> attached (the uncompressed file comes to 500 kB).
>
> Hope it helps.

Somehow the nr_pages field is truncated to 16 bits, and it
seems that there are sign issues there? We are wrapping
On Sun, Dec 30, 2007 at 03:01:16PM +0100, Ingo Molnar wrote:
> * Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> > Index: linux-2.6/arch/x86/mm/pgtable_32.c
> > ===================================================================
> > --- linux-2.6.orig/arch/x86/mm/pgtable_32.c	2007-12-26 12:55:10.0 -0800
> > +++ linux-2.6/arch/x86/mm/pgtable_32.c
On Thu, Dec 27, 2007 at 11:22:34AM -0800, Christoph Lameter wrote:
> On Thu, 27 Dec 2007, Dhaval Giani wrote:
>
> > anything specific you are looking for? I still hit the oom.
>
> Weird. WTH is this? You run an unmodified upstream tree? Can you add a
> printk in quicklist_trim that shows
On Fri, 21 Dec 2007, Dhaval Giani wrote:
> No, it does not stop the oom I am seeing here.

Duh. Disregard that patch. It looks like check_pgt_cache() is not called.
This could happen if tlb_flush_mmu is never called during the
fork/terminate sequences in your script. pgd_free is called *after* a
> > It was just
> >
> > while echo ; do cat /sys/kernel/some file ; done
> >
> > it's all in the email threads somewhere..
>
> The patch that was posted in the thread that I mentioned earlier is here.
> I ran the test for 15 minutes and things are still fine.
>
> quicklist: Set tlb->need_flush if
On Fri, Dec 14, 2007 at 10:00:30PM -0800, Andrew Morton wrote:
> On Sat, 15 Dec 2007 09:22:00 +0530 Dhaval Giani <[EMAIL PROTECTED]> wrote:
>
> > > Is it really the case that the bug only turns up when you run tests like
> > >
> > > while echo; do cat /sys/kernel/kexec_crash_loaded; done
> > > and
> > > while echo; do cat /sys/kernel/uevent_seqnum ; done;
> > >
> > > or will any fork-intensive workload also do it? Say,
> > >
> > > while echo ; do true ; done
Dhaval Giani wrote:
> XXX sysfs_page_cnt=1

Hmm.. so, sysfs r/w buffer wasn't the culprit. I'm curious what eats up
all your low memory. Please do the following.

1. Right after boot, record /proc/meminfo and slabinfo.
2. After or near OOM, record /proc/meminfo and slabinfo. This can be
   tricky
> OK, so it ooms there as well. I am attaching its config and part of the
> dmesg (whatever I could capture).

I can't reproduce it here either. Please apply the attached patch and
reproduce the problem. It will report the number of allocated buffer
pages every 10 sec. After oom occurs,
> Hi Greg, Tejun,
>
> The following script causes oomkiller to be invoked on my system here.
>
> while echo; do cat /sys/kernel/kexec_crash_loaded; done

while echo; do cat /sys/kernel/uevent_seqnum ; done;

causes oomkiller to be invoked on 2.6.22-stable, 2.6.23-stable and
2.6.24-rc5 as well. It seems not to be particularly related to any single
file in sysfs.

Thanks,
--
regards,
Dhaval
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Hi Greg, Tejun,

The following script causes oomkiller to be invoked on my system here.

while echo; do cat /sys/kernel/kexec_crash_loaded; done

It gets invoked within 10 mins.

[EMAIL PROTECTED] ~]# cat /proc/cpuinfo
processor  : 0
vendor_id  : GenuineIntel
cpu family : 15
model
And on 2.6.24-rc5-mm1 as well.
--
regards,
Dhaval