James Lewis Nance <[EMAIL PROTECTED]> writes:
> On an unrelated note, is it possible for a process in 2.4 to see more
> than 2G of address space? They seem to be limited to 2G for me. I
> was hoping that the HIMEM stuff had removed that limit.
You have 3GB of user address space; 1 GB is reserved for the kernel.
On Fri, Oct 27, 2000 at 07:03:29AM -0400, James Lewis Nance wrote:
> I left a single large job running when I left yesterday afternoon
> (size=1651M, RSS=1.5G). When I got in this morning I wanted to see if
> it was still running so I typed "top" in an Xterm. When I hit return I
> thought
Rik van Riel <[EMAIL PROTECTED]> writes:
> Hmmm, could you help me with drawing up a selection algorithm
> on how to choose which SHM segment to destroy when we run OOM?
>
> The criteria would be about the same as with normal programs:
>
> 1) minimise the amount of work lost
> 2) try to protect
[replying to a really old email now that I've started work
on integrating the OOM handler]
On 25 Sep 2000, Christoph Rohland wrote:
> Rik van Riel <[EMAIL PROTECTED]> writes:
>
> > > Because as you said the machine can lockup when you run out of memory.
> >
> > The fix for this is to kill a user process
On Mon, 2 Oct 2000, Linus Torvalds wrote:
> Why do you apparently ignore the fact that page-out write-back
> performance is horribly crappy because it always starts out
> doing synchronous writes?
Because it is fixed in the patch I mailed yesterday?
regards,
Rik
--
"What you're running that
-program that shows 1MB/s writeout speeds due to it) completely.
The whole _point_ of the new VM was performance. Without that, the new VM
is pointless, and discussing TODO features is equally pointless.
Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
[MM TODO list, updated for october 2000]
---
Here is the TODO list for the new VM. The only thing
really needed for 2.4 is the OOM handler and a fix
for the highmem deadlock.
The page->mapping->flush() callback is really wanted
by the journaling filesystem folks.
The rest are mostly extras that would be nice.
On Wed, Sep 27, 2000 at 09:42:45AM +0200, Ingo Molnar wrote:
> such screwups by checking for NULL and trying to handle it. I suggest to
> rather fix those screwups.
How do you know which is the minimal amount of RAM that allows you not to be in
the screwed-up state?
We for sure need a kind of
On Wed, Sep 27, 2000 at 09:42:45AM +0200, Ingo Molnar wrote:
>
> On Tue, 26 Sep 2000, Pavel Machek wrote:
> of the VM allocation issues. Returning NULL in kmalloc() is just a way to
> say: 'oops, we screwed up somewhere'. And i'd suggest to not work around
That is not at all how it is currently
On Tue, 26 Sep 2000, Pavel Machek wrote:
> Okay, I'm user on small machine and I'm doing stupid thing: I've got
> 6MB ram, and I keep inserting modules. I insert module_1mb.o. Then I
> insert module_1mb.o. Repeat. How does it end? I think that
> kmalloc(GFP_KERNEL) *has* to return NULL at some point.
On Tue, Sep 26, 2000 at 09:10:16PM +0200, Pavel Machek wrote:
> Hi!
> > > i talked about GFP_KERNEL, not GFP_USER. Even in the case of GFP_USER i
> >
> > My bad, you're right I was talking about GFP_USER indeed.
> >
> > But even GFP_KERNEL allocations like the init of a module or any other
Hi Rik,
Rik van Riel <[EMAIL PROTECTED]> writes:
> > Because as you said the machine can lockup when you run out of memory.
>
> The fix for this is to kill a user process when you're OOM
> (you need to do this anyway).
>
> The last few allocations of the "condemned" process can come
> from the
Hi!
i talked about GFP_KERNEL, not GFP_USER. Even in the case of GFP_USER i
My bad, you're right I was talking about GFP_USER indeed.
But even GFP_KERNEL allocations like the init of a module or any other thing
that is static sized during production just checking the retval
looks to be ok.
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> On Mon, Sep 25, 2000 at 04:27:24PM +0200, Ingo Molnar wrote:
> > i think an application should not fail due to other applications
> > allocating too much RAM. OOM behavior should be a central thing and based
>
> At least Linus's point is that doing perfect accounting (at least on
> the userspace allocation side) may cause you to waste resources,
> failing even if you could still run, and I tend to agree with him.
On Mon, Sep 25, 2000 at 04:40:44PM +0100, Stephen C. Tweedie wrote:
> Allowing GFP_ATOMIC to eat PF_MEMALLOC's last-chance pages is the
> wrong thing to do if we want to guarantee swapper progress under
> extreme load.
You're definitely right. We at least need the guarantee of the memory to allocate
On Mon, Sep 25, 2000 at 05:16:06PM +0200, Ingo Molnar wrote:
> situation is just 1% RAM away from the 'root cannot log in', situation.
The "root cannot log in" case is a little different: there you only need to
press SYSRQ+E (or, at worst, +I).
If all tasks in the
On Mon, Sep 25, 2000 at 05:26:59PM +0200, Ingo Molnar wrote:
>
> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>
> > > i think the GFP_USER case should do the oom logic within __alloc_pages(),
> >
> > What's the difference of implementing the logic outside alloc_pages?
> > Putting the logic
On Mon, 25 Sep 2000, Alan Cox wrote:
> Unless Im missing something here think about this case
>
> 2 active processes, no swap
>
>        #1               #2
>        kmalloc 32K      kmalloc 16K
>        OK               OK
>        kmalloc 16K
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> > i think the GFP_USER case should do the oom logic within __alloc_pages(),
>
> What's the difference of implementing the logic outside alloc_pages?
> Putting the logic inside does not look like clean design to me.
it gives consistency and simplicity. The
Ingo Molnar wrote:
> this is fixed in 2.4. The 2.2 RAID code is frozen, and has known
> limitations (ie. due to the above RAID1 cannot be used as a swap-device).
Eh, just to be clear about this: does this apply to the RAID 0.90 code
as commonly patched in by RedHat? Should I instead use a swap file
for a machine that should be
On Mon, Sep 25, 2000 at 04:43:44PM +0200, Ingo Molnar wrote:
> i talked about GFP_KERNEL, not GFP_USER. Even in the case of GFP_USER i
My bad, you're right I was talking about GFP_USER indeed.
But even GFP_KERNEL allocations like the init of a module or any other thing
that is static sized
> > Because as you said the machine can lockup when you run out of memory.
>
> well, i think all kernel-space allocations have to be limited carefully,
> denying succeeding allocations is not a solution against over-allocation,
> especially in a multi-user environment.
GFP_KERNEL has to be able
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> On Mon, Sep 25, 2000 at 03:02:58PM +0200, Ingo Molnar wrote:
> > On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> >
> > > Sorry I totally disagree. If GFP_KERNEL is guaranteed to succeed
> > > that is a showstopper bug. [...]
> >
> > why?
>
>
On Mon, Sep 25, 2000 at 11:26:48AM -0300, Marcelo Tosatti wrote:
> This thread keeps freeing pages from the inactive clean list when needed
> (when zone->free_pages < zone->pages_low), making them available for
> atomic allocations.
This is flawed. It's the irq that has to shrink the memory itself.
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> At least Linus's point is that doing perfect accounting (at least on
> the userspace allocation side) may cause you to waste resources,
> failing even if you could still run and I tend to agree with him.
> We're lazy on that side and that's global
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> I talked with Alexey about this and it seems the best way is to have a
> per-socket reservation of clean cache in function of the receive window. So we
> don't need a huge atomic pool but we can have a special lru with an irq
> spinlock that is
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> I'm not sure if we should restrict the limiting only to the cases that
> needs them. For example do_anonymous_page looks a place that could
> rely on the GFP retval.
i think an application should not fail due to other applications
allocating too much RAM. OOM behavior should be a central thing and based
On Mon, Sep 25, 2000 at 04:04:14PM +0200, Ingo Molnar wrote:
> exactly, and this is why if a higher level lets through a GFP_KERNEL, then
> it *must* succeed. Otherwise either the higher level code is buggy, or the
> VM balance is buggy, but we want to have clear signs of it.
I'm not sure if we
On Mon, Sep 25, 2000 at 03:39:51PM +0200, Ingo Molnar wrote:
> Andrea, if you really mean this then you should not be let near the VM
> balancing code :-)
What I mean is that the VM balancing is in the lower layer, which shouldn't know
anything about the per-socket gigabit ethernet skb limits; the limit
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> Again: the bean counting and all the limit happens at the higher
> layer. I shouldn't know anything about it when I play with the lower
> layer GFP memory balancing code.
exactly, and this is why if a higher level lets through a GFP_KERNEL, then
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> > yes. every RAID1-bh has a bound lifetime. (bound by worst-case IO
> > latencies)
>
> Very good! Many thanks Ingo.
this was actually coded/fixed by Neil Brown - so the kudos go to him!
Ingo
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> And if the careful limit avoids the deadlock in the layer above
> alloc_pages, then it will also avoid alloc_pages returning NULL and
> you won't need an infinite loop in the first place (unless the memory
> balancing is buggy).
yes i like this property
On Mon, Sep 25, 2000 at 03:21:01PM +0200, Ingo Molnar wrote:
> yes. every RAID1-bh has a bound lifetime. (bound by worst-case IO
> latencies)
Very good! Many thanks Ingo.
Andrea
On Mon, Sep 25, 2000 at 03:12:58PM +0200, Ingo Molnar wrote:
> well, i think all kernel-space allocations have to be limited carefully,
When a machine without a gigabit ethernet runs OOM, it's userspace that
allocated the memory via page faults, not the kernel.
And if the careful limit avoids the
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> Is it safe to sleep on the waitqueue in the kmalloc fail path in
> raid1?
yes. every RAID1-bh has a bound lifetime. (bound by worst-case IO
latencies)
Ingo
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> > huh, what do you mean?
>
> I mean this:
>
> while (!( /* FIXME: now we are rather fault tolerant than nice */
this is fixed in 2.4. The 2.2 RAID code is frozen, and has known
limitations (ie. due to the above RAID1 cannot be used as a swap-device).
On Mon, Sep 25, 2000 at 03:04:10PM +0200, Ingo Molnar wrote:
>
> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>
> > Please fix raid1 instead of making things worse.
>
> huh, what do you mean?
I mean this:
while (!( /* FIXME: now we are rather fault tolerant than nice */
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> > > Sorry I totally disagree. If GFP_KERNEL is guaranteed to succeed
> > > that is a showstopper bug. [...]
> >
> > why?
>
> Because as you said the machine can lockup when you run out of memory.
well, i think all kernel-space allocations have to be limited carefully,
On Mon, Sep 25, 2000 at 03:02:58PM +0200, Ingo Molnar wrote:
>
> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>
> > Sorry I totally disagree. If GFP_KERNEL is guaranteed to succeed
> > that is a showstopper bug. [...]
>
> why?
Because as you said the machine can lockup when you run out of memory.
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> Please fix raid1 instead of making things worse.
huh, what do you mean?
Ingo
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> Sorry I totally disagree. If GFP_KERNEL is guaranteed to succeed
> that is a showstopper bug. [...]
why?
> machine power for simulations runs out of memory all the time. If you
> put this kind of obvious deadlock into the main kernel allocator
On Mon, Sep 25, 2000 at 12:42:09PM +0200, Ingo Molnar wrote:
> believe could simplify unrelated kernel code significantly. Eg. no need to
> check for NULL pointers on most allocations, a GFP_KERNEL allocation
> always succeeds, end of story. This behavior also has the 'nice'
Sorry I totally disagree. If GFP_KERNEL is guaranteed to succeed, that is a
showstopper bug.
i suspect that this is because the new VM does
much less 'guessing' and blind list-walking.
- i'm also happy that __alloc_pages() now 'guarantees' allocation. This i
believe could simplify unrelated kernel code significantly. Eg. no need to
check for NULL pointers on most allocations, a GFP_KERNEL allocation
always succeeds, end of story. This behavior also has the 'nice'
last chance to post anything for the next few days ;(
If this patch proves stable for everyone and makes 8MB machines
workable again with the new VM, please include this in the next
pre-patch. If it doesn't work correctly, I'll somehow find it
in my email while at Linux Kongress and I'll prepare a new
Hi,
Here is the TODO list for the new VM. The only thing
really needed for 2.4 is the OOM handler and the
page->mapping->flush() callback is really wanted by
the journaling filesystem folks.
The rest are mostly extras that would be nice; these
things won't be pushed for inclusion except
On Fri, 15 Sep 2000, James Lewis Nance wrote:
> On Fri, Sep 15, 2000 at 10:09:57PM -0300, Rik van Riel wrote:
> > Hi,
> >
> > today I released a new VM patch with 4 small improvements:
>
> Are these 4 improvements in the code test9-pre1 patch that Linus
> just released?
Oh well, I may as well give