Hi Chris,
It seems there is a possible deadlock condition with your patch which
changes flush_dirty_buffers() to use ->writepage (something which we
_definitely_ want for 2.5). Take a look:
mark_buffer_dirty -> balance_dirty -> wakeup_bdflush -> flush_dirty_buffers ->
On Thu, 11 Jan 2001, Ulrich Schwarz wrote:
2.4.0 (final i586) patched with reiserfs 3.6.25 produced the following BUG:
the console report (ksymoopsed):
kernel BUG at vmscan.c:452!
invalid operand:
Does the reiserfs patch change vmscan.c?
If so, what's on line 452 of mm/vmscan.c of
Hi,
While taking a look at page_launder()...
/* And re-start the thing.. */
spin_lock(&pagemap_lru_lock);
if (result != 1)
continue;
/* writepage refused to do anything */
On Fri, 12 Jan 2001, Vlad Bolkhovitine wrote:
After upgrading from 2.4.0-test7 to 2.4.0, while running tiotest v0.3.1 I found
the two following problems.
There have been quite a lot of things changed from 2.4.0-test7 to 2.4.0,
so I'm not sure what caused the slowdown.
Anyway, important VM changes
On Sun, 14 Jan 2001, Mark Orr wrote:
I've been running 2.4.0-ac9 for a day and a half now.
I have pretty low-end hardware (Pentium 1/ 100MHz, 16Mb RAM,
17Mb swap) and it really seems to bog down with anything
heavy in memory. Netscape seems to really drag, and any
Java applets I
On Mon, 15 Jan 2001, Jure Pecar wrote:
Hi all,
I was running 2.4.0test10pre5 happily for months and wanted to see how
things stand in the 'latest stuff'. Here's what I found:
I compiled 2.4.0-ac8 with nearly the same .config as test10pre5 (with
latest gcc on rh7). Then I booted it and
On Tue, 16 Jan 2001, Rainer Mager wrote:
Attached is my oops.txt and the result sent through ksymoops. The results
don't look particularly useful to me so perhaps I'm doing something wrong.
PLEASE tell me if I should parse this differently. Likewise, if there is
anything else I can
On Tue, 16 Jan 2001, Rainer Mager wrote:
I knew that, I was just testing you all. ;-)
EIP; f889e044 END_OF_CODE+385bfe34/ =
Trace; f889d966 END_OF_CODE+385bf756/
Trace; c0140c10 <vfs_readdir+90/ec>
Trace; c0140e7c <filldir+0/d8>
Trace; c0140f9e <sys_getdents+4a/98>
Trace; c0140e7c
On Tue, 16 Jan 2001, Rainer Mager wrote:
Ok, now we're making progress. I did as you said and have attached (really!)
the new parsed output. Now we have some useful information (I hope). I still
got lots of warnings on symbols (which I have edited out of the parsed file
for the sake of
Hi,
On my dbench runs I've noted a slowdown between pre4 and pre8 with 48
threads. (128MB, 2 CPU's machine)
pre4:
Throughput 7.05841 MB/sec (NB=8.82301 MB/sec 70.5841 MBit/sec)
70.94user 232.54system 15:17.39elapsed 33%CPU (0avgtext+0avgdata
0maxresident)k 0inputs+0outputs
On Thu, 18 Jan 2001, Rik van Riel wrote:
On Fri, 12 Jan 2001, Vlad Bolkhovitine wrote:
You can see, mmap() read performance dropped significantly while
read() performance rose. Plus, "interactivity" of the 2.4.0 system
was much worse during the mmap'ed test than when using read()
(everything was
On Thu, 18 Jan 2001, Steven Cole wrote:
On Thu, 18 Jan 2001, Marcelo Tosatti wrote:
On my dbench runs I've noted a slowdown between pre4 and pre8 with 48
threads. (128MB, 2 CPU's machine)
I ran dbench 48 four times in succession for 2.4.0 and 2.4.1-pre8.
The change in performance
On Fri, 19 Jan 2001, Andrea Arcangeli wrote:
Marcelo can you give a try with `high_queued_sectors = total_ram / 3' and
low_queued_sectors = high_queued_sectors / 2 and drop the big ram machine
check?
Andrea,
With the changes you suggested I got almost the same results with
pre8.
On 15 Jan 2001, Linus Torvalds wrote:
In article [EMAIL PROTECTED],
Jeff Garzik [EMAIL PROTECTED] wrote:
$!@#@! pre6 is already out :)
Yes, and for heavens sake don't use it, because the reiserfs merge got
some dirty inode logic wrong. pre7 fixes just that one line and should
be ok
On Fri, 19 Jan 2001, Marcelo Tosatti wrote:
The swapin readahead code tries to allocate (1 << page_cluster) pages at
each swapin.
This means there's a big chance of having (1 << page_cluster)
"self-swap-out"'s at each swapin if we're under low memory.
Nasty.
Actually its
Hi,
I'm starting to implement a generic write clustering scheme and I would
like to receive comments and suggestions.
The write clustering issue has already been discussed (mainly at Miami)
and the agreement, AFAIK, was to implement the write clustering at the
per-address-space writepage()
On Sat, 20 Jan 2001, Rik van Riel wrote:
Is there ever a reason NOT to do the best possible IO
clustering at write time ?
Remember that disk writes do not cost memory and have
no influence on the resident set ... completely unlike
read clustering, which does need to be limited.
You don't
On Sat, 20 Jan 2001, Christoph Hellwig wrote:
snip
I think there is a big disadvantage of this approach:
To find out which pages are clusterable, we need to do bmap/get_block,
that means we have to go through the block-allocation functions, which
is rather expensive, and then we have to
On Sat, 20 Jan 2001, Christoph Hellwig wrote:
On Sat, Jan 20, 2001 at 01:24:40PM -0200, Marcelo Tosatti wrote:
In case the metadata was not already cached before ->cluster() (in this
case there is no disk IO at all), ->cluster() will cache it, avoiding
further disk accesses by writepage
On Sat, 20 Jan 2001, Christoph Hellwig wrote:
On Sat, Jan 20, 2001 at 02:00:24PM -0200, Marcelo Tosatti wrote:
True. But you have to go through ext2_get_branch (under the big kernel
lock) - if we can do only one logical-to-physical block translation,
why do it multiple times
On Fri, 19 Jan 2001, Rik van Riel wrote:
On Thu, 18 Jan 2001, Marcelo Tosatti wrote:
On Thu, 18 Jan 2001, Rik van Riel wrote:
On Fri, 12 Jan 2001, Vlad Bolkhovitine wrote:
You can see, mmap() read performance dropped significantly while
read() performance rose. Plus
Jens,
Steven is a seeing a slowdown in his results, too.
On Mon, 22 Jan 2001, Steven Cole wrote:
On Thursday 18 January 2001 14:49, Marcelo Tosatti wrote:
Steven,
The issue is the difference between pre4 and pre8.
Could you please try pre4 and report results ?
Thanks
Ok
Any technical reason why the background page aging fix was not applied?
On Mon, 22 Jan 2001, Linus Torvalds wrote:
The ChangeLog may not be 100% complete. The physically big things are the
PPC and ACPI updates, even if most people won't notice.
-
To unsubscribe from this list: send the line
On 23 Jan 2001, Yann Dupont wrote:
I remember seeing that those errors were due to improperly written
drivers. Is the buslogic driver or the tape driver to blame here? Or
maybe this is a VM balancing issue?
Could you please send the output of "Alt+SysRq+m" (kernel must be compiled
with
On Tue, 23 Jan 2001, Andre Hedrick wrote:
Just my nickel on the issue.
Andre,
This patch I'm talking about is for a different issue from what was
discussed in the IO clustering thread.
On Wed, 24 Jan 2001, V Ganesh wrote:
now that we have inode->i_mapping->dirty_pages, what do we need
inode->i_dirty_buffers for? I understand the latter was added for the O_SYNC
changes before dirty_pages came into the picture. but now both seem to be
doing more or less the same thing.
On Thu, 25 Jan 2001, Stephen C. Tweedie wrote:
Hi,
On Thu, Jan 25, 2001 at 04:17:30PM +0530, V Ganesh wrote:
so i_dirty_buffers contains buffer_heads of pages coming from write() as
well as metadata buffers from mark_buffer_dirty_inode(). a dirty MAP_SHARED
page which has been
On Thu, 25 Jan 2001, Daniel Phillips wrote:
"Stephen C. Tweedie" wrote:
We also maintain the
per-page buffer lists as caches of the virtual-to-physical mapping to
avoid redundant bmap()ping.
Could you clarify that one, please?
Daniel,
With "physical mapping" Stephen means on-disk
On Sat, 27 Jan 2001, David Ford wrote:
Since the testN series and up through ac12, I experience total loss of
control when memory is nearly exhausted.
I start with 256M and eat it up with programs until there is only about
7 megs left, no swap. From that point all user processes stall
On Sat, 27 Jan 2001, David Ford wrote:
I have Marcelo's patch. It isn't applicable because I am purposely not enabling any
swap. The problem is the system gets down to about 7 megs of buffers free and within
three seconds has become functionally dead. Zero response on any user
On Sun, 28 Jan 2001, Jens Axboe wrote:
On Sat, Jan 27 2001, Linus Torvalds wrote:
What was the trace of this? Just curious, the below case outlined by
Linus should be pretty generic, but I'd still like to know what
can lead to this condition.
It was posted on linux-kernel - I
(ugh, sorry about last mail)
On 27 Jan 2001, Linus Torvalds wrote:
In article [EMAIL PROTECTED], David Ford [EMAIL PROTECTED] wrote:
Unfortunately klogd reads /procerg.
So the following is a painstakingly slow hand translation, I'll only print
the D state entries unless someone asks
On Sat, 27 Jan 2001, Linus Torvalds wrote:
On Sun, 28 Jan 2001, Marcelo Tosatti wrote:
This is the smoking gun here, I bet, but I'd like to make sure I see the
whole thing. I don't see _why_ we'd have deadlocked on __wait_on_page(),
but I think this is the thread that hangs
On Sun, 28 Jan 2001, Marcelo Tosatti wrote:
On Sat, 27 Jan 2001, Linus Torvalds wrote:
On Sun, 28 Jan 2001, Marcelo Tosatti wrote:
This is the smoking gun here, I bet, but I'd like to make sure I see the
whole thing. I don't see _why_ we'd have deadlocked
On Tue, 30 Jan 2001, Rik van Riel wrote:
Hi Linus,
the patch below contains 3 small changes to mm/filemap.c:
1. replace the aging in __find_page_nolock() with setting
PageReferenced(), otherwise a large number of small
reads from (or writes to) a page can drive up the page
Hi,
The current swapin readahead code reads a number of pages (1 <<
page_cluster) which are physically contiguous on disk with reference to
the page which needs to be faulted in.
However, the pages which are contiguous on swap are not necessarily
contiguous in the virtual memory area where the
On Wed, 31 Jan 2001, Stephen C. Tweedie wrote:
Hi,
On Wed, Jan 31, 2001 at 01:05:02AM -0200, Marcelo Tosatti wrote:
However, the pages which are contiguous on swap are not necessarily
contiguous in the virtual memory area where the fault happened. That means
the swapin readahead
On Wed, 31 Jan 2001, Timo Jantunen wrote:
Heip!
While I was looking unused partitions to be used for ReiserFS testing
Haven't you forgotten to mention which kernel version you are using?
On Thu, 10 May 2001, Mark Hemment wrote:
On Wed, 9 May 2001, Marcelo Tosatti wrote:
On Wed, 9 May 2001, Mark Hemment wrote:
Could introduce another allocation flag (__GFP_FAIL?) which is or'ed
with a __GFP_WAIT to limit the looping?
__GFP_FAIL is in the -ac tree already
On Thu, 10 May 2001, Stephen C. Tweedie wrote:
Hi,
On Thu, May 10, 2001 at 01:43:46PM -0300, Marcelo Tosatti wrote:
No. __GFP_FAIL can try to reclaim pages from the inactive clean list.
We just want to keep __GFP_FAIL allocations from going to
try_to_free_pages().
Why? __GFP_FAIL
On Thu, 10 May 2001, Stephen C. Tweedie wrote:
Hi,
On Thu, May 10, 2001 at 03:22:57PM -0300, Marcelo Tosatti wrote:
Initially I thought about __GFP_FAIL to be used by writeout routines which
want to cluster pages until they can allocate memory without causing any
pressure
Hi,
The following patch addresses two issues:
- Buffer cache pages in the inactive lists are not getting their age
increased if they get touched by getblk (which will set the referenced bit
on the page). page_launder() simply cleans the referenced bit on such
pages and moves them to the
On Fri, 11 May 2001, Marcelo Tosatti wrote:
Hi,
The following patch addresses two issues:
- Buffer cache pages in the inactive lists are not getting their age
increased if they get touched by getblk (which will set the referenced bit
on the page). page_launder() simply cleans
On Wed, 9 May 2001, Marcelo Tosatti wrote:
Locked for the not-written-out case (I will fix my patch now, thanks)
I just found out that there are filesystems (eg reiserfs) which write out
data even if an error occurred, which means the unlocking must be done by
the filesystems, always
Well,
Here is the updated version of the patch to add the priority argument to
writepage(). All implementations have been fixed.
No referenced bit changes, as I still think it's not worth passing this
information down to writepage().
Note: I've removed ramfs_writepage(). If there is no
On Thu, 10 May 2001, Chris Mason wrote:
On Wednesday, May 09, 2001 10:51:17 PM -0300 Marcelo Tosatti
[EMAIL PROTECTED] wrote:
On Wed, 9 May 2001, Marcelo Tosatti wrote:
Locked for the not-written-out case (I will fix my patch now, thanks)
I just found out
On Wed, 9 May 2001, David S. Miller wrote:
Marcelo Tosatti writes:
You want writepage() to check/clean the referenced bit and move the page
to the active list itself ?
Well, that's the other part of what my patch was doing.
Let me state it a different way, how is the new
On Thu, 10 May 2001, Andrew Morton wrote:
Marcelo Tosatti wrote:
Well,
Here is the updated version of the patch to add the priority argument to
writepage().
It appears that a -EIO return from block_write_full_page() will
result in an unlock of an unlocked page in page_launder
On Sun, 13 May 2001, Linus Torvalds wrote:
On Sun, 13 May 2001, Rik van Riel wrote:
This means that the swapin path (and the same path for
other pagecache pages) doesn't take the page lock and
the page lock doesn't protect us from other people using
the page while we have it
On Mon, 14 May 2001, Rik van Riel wrote:
Hi Linus,
the following patch does:
snip
pg_data_t *pgdat = pgdat_list;
int sum = 0;
int freeable = nr_free_pages() + nr_inactive_clean_pages();
+ /* XXX: dynamic free target is complicated and may be wrong... */
On Mon, 14 May 2001, Ben LaHaise wrote:
Hey folks,
Hi.
The patch below consists of 3 separate fixes for helping remove the
deadlocks present in current kernels with respect to highmem systems.
Each fix is to a separate file, so please accept/reject as such.
snip
The third patch (to
Hi Linus,
There is no reason why bdflush should call page_launder().
It's pretty obvious that bdflush's job is to only write out _buffers_.
Under my tests this patch makes things faster.
Guess why? Because bdflush is writing out buffers when it should instead
be blocking inside
Two seconds after I sent the message Benjamin told me on IRC that
PAGE_ACCESSED is included in the default page protections... duh.
On Thu, 17 May 2001, Marcelo Tosatti wrote:
Linus,
I was looking at mm/memory.c (2.4), and I've noticed that we don't call
pte_mkyoung() on newly created
Linus,
I was looking at mm/memory.c (2.4), and I've noticed that we don't call
pte_mkyoung() on newly created pte's for most of the fault paths.
break_cow(), for example:
establish_pte(vma, address, page_table, pte_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot))));
Is there any
On Wed, 9 May 2001, David S. Miller wrote:
Marcelo Tosatti writes:
Let me state it a different way, how is the new writepage() framework
going to do things like ignore the referenced bit during page_launder
for dead swap pages?
It's not able to ignore the referenced bit
On Sun, 20 May 2001, Mike Galbraith wrote:
Also in all recent kernels, if the machine is swapping, swap cache
grows without limits and is hard to recycle, but then again that is
a known problem.
This one bugs me. I do not see that and can't understand why.
To throw away dirty and
On Sat, 19 May 2001, Mike Galbraith wrote:
@@ -1054,7 +1033,7 @@
if (!zone->size)
continue;
- while (zone->free_pages > zone->pages_low) {
+ while (zone->free_pages
Hi,
I just noticed a bad effect of write drop behind yesterday during some
tests.
The problem is that we deactivate written pages, thus making the inactive
list become pretty big (full of unfreeable pages) under write intensive IO
workloads.
So what happens is that we don't do _any_ aging
On Wed, 23 May 2001, Daniel Phillips wrote:
On Wednesday 23 May 2001 09:33, Marcelo Tosatti wrote:
Hi,
I just noticed a bad effect of write drop behind yesterday during
some tests.
The problem is that we deactivate written pages, thus making the
inactive list become pretty big
On Mon, 28 May 2001, Jens Axboe wrote:
Hi,
One minor bug found that would possibly oops if the SCSI pool ran out of
memory for the sg table and had to revert to a single segment request.
This should never happen, as the pool is sized after number of devices
and queue depth -- but it
On Tue, 29 May 2001, André Dahlqvist wrote:
André Dahlqvist [EMAIL PROTECTED] wrote:
I agree. Kernels after 2.4.4 use a *lot* more swap for me, which I guess
might be part of the reason for the slowdown.
Following up on myself, here are some numbers:
Freshly booted 2.4.4 with X
On Wed, 30 May 2001, Jonathan Morton wrote:
The page aging logic does seem fragile as heck. You never know how
many folks are aging pages or at what rate. If aging happens too fast,
it defeats the garbage identification logic and you rape your cache. If
aging happens too slowly..
On Wed, 30 May 2001, Steve Whitehouse wrote:
Hi,
Attached is a patch I came up with recently to add zerocopy support to
NBD for writes. I'm not intending that this should go into the kernel
before at least 2.5, I'm just sending it here in case it is useful to anyone.
I wrote it is
On Wed, 30 May 2001, Steve Whitehouse wrote:
Hi,
On Wed, 30 May 2001, Steve Whitehouse wrote:
[info about NBD patch deleted]
Cool.
Are you seeing performance improvements with the patch ?
Yes, but my testing is not in any way complete yet. The only network device
On Wed, 30 May 2001, Rik van Riel wrote:
On Wed, 30 May 2001, Marcelo Tosatti wrote:
The problem is that we allow _every_ task to age pages on the system
at the same time --- this is one of the things which is fucking up.
This should not have any effect on the ratio of cache
On Wed, 30 May 2001, Mike Galbraith wrote:
On Wed, 30 May 2001, Rik van Riel wrote:
On Wed, 30 May 2001, Marcelo Tosatti wrote:
The problem is that we allow _every_ task to age pages on the system
at the same time --- this is one of the things which is fucking up.
This should
On Thu, 31 May 2001, J . A . Magallon wrote:
On 05.30 Marcelo Tosatti wrote:
Its at
http://bazar.conectiva.com.br/~marcelo/patches/v2.4/2.4.5ac4/reapswap.patch
Please test.
Which kind of test, something like the gcc thing I posted recently?
I don't remember
On Sat, 26 May 2001, Marcelo Tosatti wrote:
You're trying to fix the symptoms, by attacking the final end. And what
I've been trying to say is that this problem likely has a higher-level
_cause_, and I want that _cause_ fixed. Not the symptoms.
You are not going to fix the problem
Zlatko,
I've read your patch to remove nr_async_pages limit while reading an
archive on the web. (I have to figure out why lkml is not being delivered
correctly to me...)
Quoting your message:
That artificial limit hurts both the swap-out and swap-in paths as it
introduces synchronization
On Tue, 5 Sep 2000 [EMAIL PROTECTED] wrote:
# make all
make[1]: Entering directory `/usr/src/pcmcia-cs-3.1.20/modules'
cc -MD -O2 -Wall -Wstrict-prototypes -pipe -I../include
-I/usr/src/linux/include -D__KERNEL__ -DMODULE -c cs.c
In file included from
On Thu, 7 Sep 2000, Matthew Hawkins wrote:
I'd like to advocate the inclusion of the majority of these patches of
Andrea's. I've been patching most of them in for a while now simply
because I've found my SMP system much more stable and useable.
Andrea's VM patches will be included in
On Thu, 7 Sep 2000, Urban Widmark wrote:
On Thu, 7 Sep 2000, G. Hugh Song wrote:
if [ "$CONFIG_JOLIET" = "y" -o "$CONFIG_FAT_FS" != "n" \
-o "$CONFIG_NTFS_FS" != "n" -o "$CONFIG_NCPFS_NLS" = "y" \
-o "$CONFIG_SMB_FS" != n ]; then
n vs "n" is my error.
However
On Fri, 8 Sep 2000, Urban Widmark wrote:
On Thu, 7 Sep 2000, Marcelo Tosatti wrote:
oldconfig always asks about CONFIG_SMB_NLS_REMOTE in case
CONFIG_SMB_NLS_REMOTE="" was set in the previous config.
Is this expected?
It's certainly annoying, especially for peopl
On Fri, 8 Sep 2000, Torben Mathiasen wrote:
On Fri, Sep 08 2000, Arnaldo Carvalho de Melo wrote:
Hi,
Please take a look and consider applying. Some of them are small cleanups; if
they're deemed unnecessary, lemme know and I'll back them off. I think that
there are some more unchecked
On Sat, 9 Sep 2000, Rasmus Andersen wrote:
snip
Code: 0f b6 0c 03 89 4c 24 14 51 68 8e e5 17 c0 e8 de a4 00 00 83
EIP; c0107f27 <show_registers+237/268> =
Trace; c300 END_OF_CODE+2e30398/
Trace; c0107f85 <die+2d/38>
That's not the first oops yet, and as Keith told you, it's
On Sun, 17 Sep 2000, Andrea Arcangeli wrote:
snip
If nobody does that before me I will try this "remember last position of the
head" idea in my blkdev tree (there are many other pending elevator fixes in
it) as soon as I finish with 2.2.18pre9aa1 LFS nfsv3 and as soon as I finish
the fix
On Fri, 22 Sep 2000, Michael R. Jinks wrote:
I'm trying to bond all four interfaces of a D-Link DFE-570TX ethernet card.
Not sure who maintains the bonding module, so writing directly to the main
list. Tips on better people to bother are welcome.
Note on my kernel version: I'm using
On Sat, 23 Sep 2000, David Ford wrote:
Keith Owens wrote:
That would take my 2.4.0 bzImage to 893864, it does not leave much room
out of a 1.4Mb floppy for LILO files. We could have multiple make
targets, with and without appended config/map but that just complicates
the build
On Sun, 24 Sep 2000, Linus Torvalds wrote:
On Sun, 24 Sep 2000, Andrea Arcangeli wrote:
On Sun, Sep 24, 2000 at 10:26:11PM +0200, Ingo Molnar wrote:
where will it deadlock?
ext2_new_block (or whatever that runs getblk with the superblock lock
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
snip
kmem_cache_reap shrinks the slabs at _very_ low frequency. It's worthless to
keep lots of dentries and icache in the slab internal queues until
kmem_cache_reap kicks in again; if we free such memory immediately instead
we'll run
On Mon, 25 Sep 2000, Martin Diehl wrote:
On Mon, 25 Sep 2000, Martin Diehl wrote:
PS: vmfixes-2.4.0-test9-B2 not yet tested - will do later.
Hi - done now:
using 2.4.0-t9p6 + vmfixes-2.4.0-test9-B2 I ended up with the box
deadlocked again! Was "make bzImage" on UP booted with
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
snip
I talked with Alexey about this and it seems the best way is to have a
per-socket reservation of clean cache as a function of the receive window. So
we don't need a huge atomic pool but we can have a special lru with an irq
spinlock that is
On Tue, 26 Sep 2000, Ingo Molnar wrote:
On 26 Sep 2000, Juan J. Quintela wrote:
Ingo, either I am very wrong, or vmfixes-B2_deadlock is not included in
test9-pre7.
well, the __GFP_IO part is included (in a different way). The slab.c part
is not included.
Actually the __GFP_IO
Alan,
I think adding a document about MCE in the kernel would be very useful.
Or at least a pointer to Intel's documentation about it.
On 26 Sep 2000, H. Peter Anvin wrote:
Followup to: [EMAIL PROTECTED]
By author:"Martin Bene" [EMAIL PROTECTED]
In newsgroup: linux.dev.kernel
On Wed, 27 Sep 2000, Ingo Molnar wrote:
On Tue, 26 Sep 2000, Marcelo Tosatti wrote:
well, the __GFP_IO part is included (in a different way). The slab.c part
is not included.
Actually the __GFP_IO check is now only inside ext2.
no, it isnt. It's in the VFS. In fact
On Fri, 29 Sep 2000, Eyal Lebedinsky wrote:
Alan Cox wrote:
2.2.18pre11
I should mention that using an almost-all-modularised config I get this
for the last few patchlevels:
depmod: *** Unresolved symbols in /lib/modules/2.2.18pre11/misc/rio.o
This patch fixes this:
---
On Fri, 29 Sep 2000 [EMAIL PROTECTED] wrote:
Can you assist?
Sep 29 11:46:06 plato kernel: Unable to handle kernel paging request at
virtual address 40ab06c8
Sep 29 11:46:06 plato kernel: current->tss.cr3 = 00101000, %cr3 = 00101000
Sep 29 11:46:06 plato kernel: *pde =
Sep 29
On Tue, 3 Oct 2000, Simon Richter wrote:
Hi,
I'm running 2.2.17 with the rtl8139 fix from 2.2.18pre, and after about
two hours of normal operation (no crashes, no fs corruption -- Thanks
Jeff) the network suddenly stops responding. Calling "ifconfig" (just
looking at the stats) sometimes
On Mon, 11 Sep 2000, Marcelo Tosatti wrote:
On Mon, 11 Sep 2000, octave klaba wrote:
Hello,
upgrading from 2.2.16 to 2.2.17 a raid-soft config (adaptec 5x36Go)
/sbin/lilo gave a D process ( :) )
root 14823 0.0 0.1 1184 496 ?DSep10 0:00 /sbin/lilo
On Mon, 11 Sep 2000, octave klaba wrote:
Hello,
upgrading from 2.2.16 to 2.2.17 a raid-soft config (adaptec 5x36Go)
/sbin/lilo gave a D process ( :) )
root 14823 0.0 0.1 1184 496 ?DSep10 0:00 /sbin/lilo
one question:
reboot or not to reboot ?
I have no floppy
On Fri, 6 Oct 2000, David S. Miller wrote:
Date: Fri, 6 Oct 2000 19:25:38 -0300 (BRST)
From: Rik van Riel [EMAIL PROTECTED]
Is this an actual bug, or am I overlooking something?
It is a bug and I'll change TCP's sendmsg to use sk->allocation as it
should. Thanks for pointing
2.2.18pre15 defines udelay as follows (in include/asm-i386/delay.h):
...
extern void __bad_udelay(void);
...
#define udelay(n) (__builtin_constant_p(n) ? \
((n) > 20000 ? __bad_udelay() : __const_udelay((n) * 0x10c6ul)) : \
__udelay(n))
...
It seems __bad_udelay is not
On Wed, 11 Oct 2000, Mike Elmore wrote:
All,
Had a crash this morning for the first time in a while...
2.2.17 Locked up Cold.
Machine is a SMP 2xPII450 w/ 128M RAM on a Tyan Tiger100
BX board.
Here's the kernel output:
Oct 11 08:29:30
On Thu, 12 Oct 2000, Andrea Arcangeli wrote:
On Wed, Oct 11, 2000 at 03:35:40PM -0200, Marcelo Tosatti wrote:
Now I'm not sure if this can be caused by a memory problem.
It can.
Ok, thanks.
Mike, could you try to run memtest86 (you can find it at freshmeat) to
find out if your
On Thu, 19 Oct 2000, Alan Cox wrote:
This is just to give folks something to sync against. Test it by all means
however.
Must fix stuff left to do for 2.2.18final
- Merge the S/390 stuff and make S/390 build again
- Fix the megaraid (revert if need be)
- Fix the ps/2
On Thu, 19 Oct 2000, Alan Cox wrote:
- Get to the bottom of the VM mystery if possible
The RAID problem (which is caused by VM changes) is the same deadlock
found in drbd and nbd.
It was not a problem with kernel 2.2.17 because there was no write
throttling in shrink_mmap.
I'm
Octave,
Andrea fixed a corruption problem which looks exactly like what you're
hitting.
Please try
ftp://ftp.kernel.org/pub/people/andrea/patches/v2.2/2.2.18pre17/VM-global-2.2.18pre17-7.bz2
On Wed, 25 Oct 2000, octave klaba wrote:
Hi,
We are testing an SMP server (bi-PIII) and we have
some problems
On Fri, 23 Feb 2001, Shawn Starr wrote:
Feb 23 21:17:47 coredump kernel: __alloc_pages: 3-order allocation failed.
Feb 23 21:17:47 coredump kernel: __alloc_pages: 2-order allocation failed.
Feb 23 21:17:47 coredump kernel: __alloc_pages: 1-order allocation failed.
Feb 23 21:17:47 coredump
On Sun, 25 Feb 2001, Mike Galbraith wrote:
The way sg_low_malloc() tries to allocate, failure messages are
pretty much guaranteed. It tries high order allocations (which
are unreliable even when not stressed) and backs off until it
succeeds.
In other words, the messages are a red
On Mon, 26 Feb 2001, Alan Cox wrote:
We can add an allocation flag (__GFP_NO_CRITICAL?) which can be used by
sg_low_malloc() (and other non-critical allocations) to fail early
and not print the message.
It is just for debugging. The message can go. If anything it would be more