[PATCH] md: Remove unnecessary printk when raid5 gets an unaligned read.

2007-01-24 Thread NeilBrown
One more... (sorry about the dribs-and-drabs approach) NeilBrown ### Comments for Changeset raid5_mergeable_bvec tries to ensure that raid5 never sees a read request that does not fit within just one chunk. However as we must always accept a single-page read, that is not always possible. So wh

[PATCH] md: Fix potential memalloc deadlock in md

2007-01-24 Thread NeilBrown
Another md patch suitable for 2.6.20. Thanks, NeilBrown ### Comments for Changeset If a GFP_KERNEL allocation is attempted in md while the mddev_lock is held, it is possible for a deadlock to eventuate. This happens if the array was marked 'clean', and the memalloc triggers a write-out to the m

Re: There is an advice

2007-01-24 Thread Neil Brown
On Friday January 5, [EMAIL PROTECTED] wrote: > In the stop function of any level raid device driver such as faulty, > multipath, and raidx > It would be better to set mddev->private to NULL before freeing the conf structure. We set it to NULL immediately after the free, and it really doesn't make any d

Re: change strip_cache_size freeze the whole raid

2007-01-24 Thread Neil Brown
On Wednesday January 24, [EMAIL PROTECTED] wrote: > > Okay-- thanks for the explanation and I will await a future patch.. > This would be that patch. It doesn't seem to break anything, but I haven't reproduced the bug yet (I think I need to reduce the amount of memory I have available) so I have

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Bill Cizek
Justin Piszcz wrote: On Mon, 22 Jan 2007, Andrew Morton wrote: On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]> wrote: Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke the OOM killer and kill all of my processes? Running with PREEMP

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Justin Piszcz
On Thu, 25 Jan 2007, Pavel Machek wrote: > Hi! > > > > Is it highmem-related? Can you try it with mem=256M? > > > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > > use an onboard graphics

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Nick Piggin
Justin Piszcz wrote: On Mon, 22 Jan 2007, Andrew Morton wrote: After the oom-killing, please see if you can free up the ZONE_NORMAL memory via a few `echo 3 > /proc/sys/vm/drop_caches' commands. See if you can work out what happened to the missing couple-of-hundred MB from ZONE_NORMAL. Ru
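The debugging step Andrew Morton suggests above can be scripted; a minimal sketch, assuming root and a /proc-based kernel, and note that dropping caches only discards *clean* pagecache and slab, so genuinely leaked ZONE_NORMAL memory will not come back:

```shell
#!/bin/sh
# Sketch: sync dirty data, drop clean pagecache/dentries/inodes, then
# compare ZONE_NORMAL usage before and after. If memory is truly leaking,
# the "after" numbers will not recover. Requires root.
show_normal() {
    grep -A3 'zone *Normal' /proc/zoneinfo 2>/dev/null | head -4
}
if [ -w /proc/sys/vm/drop_caches ]; then
    show_normal                                            # before
    sync                                                   # write back dirty pages first
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true  # 3 = pagecache + dentries/inodes
    show_normal                                            # after
else
    echo "need root to write /proc/sys/vm/drop_caches; nothing done"
fi
```

Running this a few times in a row, as suggested in the thread, and watching the free/present counts for ZONE_NORMAL shows whether the missing memory is reclaimable cache or a real leak.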

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Pavel Machek
Hi! > > Is it highmem-related? Can you try it with mem=256M? > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I > use an onboard graphics controller that has 128MB of RAM allocated to it > and I

Re: change strip_cache_size freeze the whole raid

2007-01-24 Thread Justin Piszcz
On Thu, 25 Jan 2007, Neil Brown wrote: > On Wednesday January 24, [EMAIL PROTECTED] wrote: > > Here you go Neil: > > > > p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size > > p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size > > p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size >

Re: change strip_cache_size freeze the whole raid

2007-01-24 Thread Neil Brown
On Wednesday January 24, [EMAIL PROTECTED] wrote: > Here you go Neil: > > p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size > p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size > p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size > p34:~# echo 4096 > /sys/block/md3/md/stripe_cache_size
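The quoted sysfs session can be wrapped with a sanity check; a minimal sketch, assuming an array named md3 as in the thread, and keeping in mind that stripe_cache_size is counted in pages (4 KiB each) per member device, so large values pin real memory:

```shell
#!/bin/sh
# Sketch: grow the raid5 stripe cache step by step, as in the quoted
# session, and read the value back after each write. 4096 pages on an
# N-disk array pins roughly 4096 * 4 KiB * N of kernel memory.
f=/sys/block/md3/md/stripe_cache_size
if [ -w "$f" ]; then
    for size in 512 1024 2048 4096; do
        echo "$size" > "$f"
        printf 'stripe_cache_size now %s\n' "$(cat "$f")"
    done
else
    echo "md3 not present or not writable; nothing done"
fi
```

Reading the value back after each write confirms the kernel accepted it, which is also a quick way to notice the hang discussed in this thread.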

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Justin Piszcz
On Mon, 22 Jan 2007, Andrew Morton wrote: > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]> > > wrote: > > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke > > the OOM killer and kill all of my processes? > > What's that? Software raid

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Justin Piszcz
And FYI yes I used mem=256M just as you said, not mem=256. Justin. On Wed, 24 Jan 2007, Justin Piszcz wrote: > > Is it highmem-related? Can you try it with mem=256M? > > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot > 2.6.20-rc5-6 single to get back into my machine, ve

Re: 2.6.20-rc5: cp 18gb 18gb.2 = OOM killer, reproducible just like 2.16.19.2

2007-01-24 Thread Justin Piszcz
> Is it highmem-related? Can you try it with mem=256M? Bad idea, the kernel crashes & burns when I use mem=256, I had to boot 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I use an onboard graphics controller that has 128MB of RAM allocated to it and I believe the ICH8

Re: Kernel 2.6.19.2 New RAID 5 Bug (oops when writing Samba -> RAID5)

2007-01-24 Thread Justin Piszcz
On Mon, 22 Jan 2007, Chuck Ebbert wrote: > Justin Piszcz wrote: > > My .config is attached, please let me know if any other information is > > needed and please CC (lkml) as I am not on the list, thanks! > > > > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to > > the R

Re: Raid-1 to Raid-5 conversion possible?

2007-01-24 Thread Neil Brown
On Wednesday January 24, [EMAIL PROTECTED] wrote: > I have for a long time been wondering if it is possible to convert a 2 > disk Raid-1 array to a 3 disk Raid-5 array using mdadm? > > Tonight I stumbled upon this article: > > http://www.n8gray.org/blog/2006/09/05/stupid-raid-tricks-with-evms-an

Raid-1 to Raid-5 conversion possible?

2007-01-24 Thread Vidar Sonerud
I have for a long time been wondering if it is possible to convert a 2 disk Raid-1 array to a 3 disk Raid-5 array using mdadm? Tonight I stumbled upon this article: http://www.n8gray.org/blog/2006/09/05/stupid-raid-tricks-with-evms-and-mdadm/ Is this safe / ok to do? Any comments from you mdad
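For reference, later mdadm releases (3.x and newer) grew native support for this reshape, so the EVMS tricks in the linked article are no longer required. A hedged sketch, assuming /dev/md0 is the 2-disk RAID-1 and /dev/sdc1 is the new third disk (both placeholders), and assuming a recent mdadm; back up first, since an interrupted reshape can be destructive:

```shell
#!/bin/sh
# Sketch only: convert a 2-disk raid1 into a 3-disk raid5 with modern
# mdadm. /dev/md0 and /dev/sdc1 are hypothetical example devices.
if [ -b /dev/md0 ] && [ -b /dev/sdc1 ] && command -v mdadm >/dev/null 2>&1; then
    mdadm --grow /dev/md0 --level=5          # raid1 -> raid5, still 2 devices
    mdadm /dev/md0 --add /dev/sdc1           # add the third disk as a spare
    mdadm --grow /dev/md0 --raid-devices=3   # reshape onto all three disks
    cat /proc/mdstat                         # watch the reshape progress
else
    echo "example devices or mdadm not present; nothing done"
fi
```

The level change itself is instantaneous (a 2-disk raid5 has the same on-disk layout as a 2-disk raid1 with one parity-equivalent mirror); only the final --raid-devices grow triggers a long-running reshape.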

Re: Ooops on read-only raid5 while unmounting as xfs

2007-01-24 Thread Nix
On 23 Jan 2007, Neil Brown said: > On Tuesday January 23, [EMAIL PROTECTED] wrote: >> >> My question is then : what prevents the upper layer to open the array >> read-write, submit a write and make the md code BUG_ON() ? > > The theory is that when you tell an md array to become read-only, it > t

2.6.20-rc5: known unfixed regressions (v3) (part 1)

2007-01-24 Thread Adrian Bunk
This email lists some known regressions in 2.6.20-rc5 compared to 2.6.19 that are not yet fixed in Linus' tree. If you find your name in the Cc header, you are either submitter of one of the bugs, maintainer of an affected subsystem or driver, a patch of yours caused a breakage or I'm considering

Always ask me to run fsck...Why?

2007-01-24 Thread Yu-Chen Wu
Hi all, My kernel is 2.6.18, and I use mdadm 2.5.3 to manage my SW-RAID. I always run "sync" and "umount" before resetting, but Linux always asks me to run "fsck" when I mount again. Why? THX
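A likely cause behind a question like this is that the umount silently fails (filesystem busy), so the dirty bit never gets cleared before the reset. A minimal sketch of checking this, assuming an ext2/ext3 filesystem on a hypothetical /dev/md0:

```shell
#!/bin/sh
# Sketch: verify the filesystem really unmounted cleanly. An ext2/ext3
# volume that was cleanly unmounted reports state "clean" in its
# superblock; a busy umount leaves it "not clean", which forces fsck.
dev=/dev/md0   # hypothetical example device; substitute your md array
if [ -b "$dev" ] && command -v dumpe2fs >/dev/null 2>&1; then
    sync
    umount "$dev" || { echo "umount failed: filesystem busy?"; exit 0; }
    dumpe2fs -h "$dev" 2>/dev/null | grep 'Filesystem state'   # expect: clean
else
    echo "example device or dumpe2fs not present; nothing done"
fi
```

If umount reports the device busy, `fuser -m` (or `lsof`) on the mount point shows which process is holding it open.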

Nice Sideeffect of growing (was: RAID5 Resize Experience)

2007-01-24 Thread Benjamin Schieder
On 20.01.2007 09:56:21, Benjamin Schieder wrote: > md3 : active raid5 hda6[0] hdf6[3] hdb6[2] hdc6[1] > 12000192 blocks level 5, 64k chunk, algorithm 2 [4/4] [] I just noticed something. Before I grew the md3 array, it did NOT assemble with # mdadm -As --auto=yes --symlink=yes I