Re: [PATCH 000 of 4] md: Introduction

2006-08-24 Thread Andrew Morton
On Thu, 24 Aug 2006 17:40:56 +1000 NeilBrown <[EMAIL PROTECTED]> wrote: > > Following are 4 patches against 2.6.18-rc4-mm2 > > The first 2 are bug fixes which should go in 2.6.18, and apply > equally well to that tree as to -mm. > > The latter two should stay in -mm until after 2.6.18. > > The

Re: Linux: Why software RAID?

2006-08-24 Thread Richard Scobie
Jeff Garzik wrote: Richard Scobie wrote: Jeff, on a slightly related note, is the driver status for the NVIDIA as reflected on your site, correct for the new nForce 590/570 AM2 chipset? Unfortunately I rarely have an idea about how marketing names correlate to chipsets. Do you have a PC

Re: md: only binds to one mirror after reboot

2006-08-24 Thread Rob Bray
> hello, > > after reboot, md only binds to one mirror (/dev/hdb1). > raid1: raid set md0 active with 1 out of 2 mirrors > > After adding /dev/hda1 manually 'mdadm --add /dev/md0 /dev/hda1', the > raid seems to work well: > > isp:/var/log# cat /proc/mdstat > Personalities : [raid1] > md0 : active r
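If boot-time autodetection is what is assembling this array, a common cause is a missing md superblock or partition type on the half that gets skipped. A minimal diagnostic sketch, with device names taken from the quoted report and the usual "Linux raid autodetect" (type fd) setup assumed:
  fdisk -l /dev/hda            # check that hda1 is type fd (Linux raid autodetect)
  mdadm --examine /dev/hda1    # confirm an md superblock exists and note its event count
  mdadm --examine /dev/hdb1    # compare against the member that did assemble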

md: only binds to one mirror after reboot

2006-08-24 Thread Andreas Pelzner
hello, after reboot, md only binds to one mirror (/dev/hdb1). raid1: raid set md0 active with 1 out of 2 mirrors After adding /dev/hda1 manually 'mdadm --add /dev/md0 /dev/hda1', the raid seems to work well: isp:/var/log# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 hda1[0] hdb1[
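One way to make the assembly survive reboots, sketched under the assumption that the distribution assembles arrays from mdadm.conf at boot (the path varies, e.g. /etc/mdadm/mdadm.conf on Debian):
  mdadm --detail --scan                              # prints an ARRAY line for /dev/md0
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # record it so both mirrors are assembled at boot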

Re: strange raid6 assembly problem

2006-08-24 Thread H. Peter Anvin
Mickael Marchand wrote: so basically I don't really know what to do with my sdf3 at the moment and fear to reboot again :o) maybe a --re-add /dev/sdf3 could work here ? but will it survive a reboot ? At this point, for whatever reason, your kernel doesn't see /dev/sdf3 as part of the array.
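The re-add suggested above would look roughly like this, with the array and partition names taken from the thread; whether it survives a reboot depends on why the kernel dropped sdf3 in the first place:
  mdadm --examine /dev/sdf3            # inspect the superblock and event count before touching anything
  mdadm /dev/md2 --re-add /dev/sdf3    # ask md to take the device back into the array
  cat /proc/mdstat                     # watch the recovery progress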

Re: Linux: Why software RAID?

2006-08-24 Thread Alan Cox
On Thu, 2006-08-24 at 07:31 -0700, Marc Perkel wrote: > So - the bottom line answer to my question is that unless you are > running raid 5 and you have a high-powered raid card with cache and > battery backup, there is no significant speed increase from using > hardware raid. For raid 0 ther

Re: Linux: Why software RAID?

2006-08-24 Thread Mark Lord
Adam Kropelin wrote: On Thu, Aug 24, 2006 at 02:20:50PM +0100, Alan Cox wrote: Generally speaking the channels on onboard ATA are independent with any vaguely modern card. Ahh, I did not know that. Does this apply to master/slave connections on the same PATA cable as well? No, it doesn't.

Re: Linux: Why software RAID?

2006-08-24 Thread Gordon Henderson
On Thu, 24 Aug 2006, Adam Kropelin wrote: > > Generally speaking the channels on onboard ATA are independent with any > > vaguely modern card. > > Ahh, I did not know that. Does this apply to master/slave connections on > the same PATA cable as well? I know zero about PATA, but I assumed from > th

Re: Linux: Why software RAID?

2006-08-24 Thread Adam Kropelin
On Thu, Aug 24, 2006 at 02:20:50PM +0100, Alan Cox wrote: > On Thu, 2006-08-24 at 09:07 -0400, Adam Kropelin wrote: > > Jeff Garzik <[EMAIL PROTECTED]> wrote: > > with sw RAID of course if the builder is careful to use multiple PCI > > cards, etc. Sw RAID over your motherboard's onboard con

Re: Linux: Why software RAID?

2006-08-24 Thread Alan Cox
On Thu, 2006-08-24 at 09:07 -0400, Adam Kropelin wrote: > Jeff Garzik <[EMAIL PROTECTED]> wrote: > with sw RAID of course if the builder is careful to use multiple PCI > cards, etc. Sw RAID over your motherboard's onboard controllers leaves > you vulnerable. Generally speaking the channels

Re: Linux: Why software RAID?

2006-08-24 Thread Adam Kropelin
Jeff Garzik <[EMAIL PROTECTED]> wrote: > But anyway, to help answer the question of hardware vs. software RAID, I > wrote up a page: > > http://linux.yyz.us/why-software-raid.html > > Generally, you want software RAID unless your PCI bus (or more rarely, > your CPU) is getting saturated.
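The PCI-saturation point is easy to sanity-check with back-of-envelope numbers (illustrative figures only: a classic 32-bit/33 MHz PCI bus tops out around 133 MB/s theoretical, and a disk of that era sustains very roughly 60 MB/s):
  echo $((4 * 60))    # four drives want ~240 MB/s, well past a ~133 MB/s shared PCI bus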

strange raid6 assembly problem

2006-08-24 Thread Mickael Marchand
Hi, I am having a little fun with a raid6 array these days. kernel : 2.6.17.10-if.1 #1 SMP Wed Aug 23 11:25:03 CEST 2006 i686 GNU/Linux Debian sarge and backported mdadm 2.4.1 here is the initial shape of the array : /dev/md2 /dev/sda2,/dev/sdb2(F),/dev/sdc3,/dev/sdd3,/dev/sdf3 so sdf3 is a new

Re: RAID over Firewire

2006-08-24 Thread Francois Barre
2006/8/23, Richard Scobie <[EMAIL PROTECTED]>: Has anyone had any experience or comment regarding linux RAID over ieee1394? I've been successfully running a 4x250GB RAID5 over ieee1394 with XFS on top. The 4 drives are sharing the same ieee1394 bus, so the bandwidth is awful, because they have
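The shared-bus penalty can be quantified roughly (illustrative figures, assuming FireWire 400 with no protocol overhead):
  echo $((400 / 8))   # ~50 MB/s theoretical for the entire FireWire 400 bus
  echo $((50 / 4))    # ~12 MB/s ceiling per drive when four drives share that one bus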

Re: Linux: Why software RAID?

2006-08-24 Thread Jeff Garzik
Richard Scobie wrote: Jeff Garzik wrote: Mark Perkel wrote: Running Linux on an AMD AM2 nVidia chipset that supports Raid 0 striping on the motherboard. Just wondering if hardware raid (SATA2) is going to be faster than software raid and why? Jeff, on a slightly related note, is the driv

[PATCH 001 of 4] md: Fix recent breakage of md/raid1 array checking

2006-08-24 Thread NeilBrown
A recent patch broke the ability to do a user-requested check of a raid1. This patch fixes the breakage and also moves a comment that was dislocated by the same patch. Signed-off-by: Neil Brown <[EMAIL PROTECTED]> ### Diffstat output ./drivers/md/raid1.c |7 --- 1 file changed, 4 inserti
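The user-requested check referred to here is normally driven through sysfs; a minimal sketch, with the md device name assumed:
  echo check > /sys/block/md0/md/sync_action   # start a read-and-compare pass over the raid1
  cat /sys/block/md0/md/mismatch_cnt           # non-zero afterwards indicates inconsistent blocks were found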

[PATCH 003 of 4] md: new sysfs interface for setting bits in the write-intent-bitmap

2006-08-24 Thread NeilBrown
From: Paul Clements <[EMAIL PROTECTED]> This patch (tested against 2.6.18-rc1-mm1) adds a new sysfs interface that allows the bitmap of an array to be dirtied. The interface is write-only, and is used as follows: echo "1000" > /sys/block/md2/md/bitmap (dirty the bit for chunk 1000 [offset 0]
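Usage, following the example quoted in the patch description (array name /dev/md2 from the text; the component device used for inspection is an assumption):
  echo 1000 > /sys/block/md2/md/bitmap   # dirty the write-intent bit covering chunk 1000
  mdadm --examine-bitmap /dev/sda1       # pre-existing mdadm option for inspecting an internal bitmap on a member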

[PATCH 004 of 4] md: Remove unnecessary variable x in stripe_to_pdidx().

2006-08-24 Thread NeilBrown
From: Coywolf Qi Hunt <[EMAIL PROTECTED]> Signed-off-by: Coywolf Qi Hunt <[EMAIL PROTECTED]> Signed-off-by: Neil Brown <[EMAIL PROTECTED]> ### Diffstat output ./drivers/md/raid5.c |5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff .prev/drivers/md/raid5.c ./drivers/md/raid5

[PATCH 002 of 4] md: Fix issues with referencing rdev in md/raid1.

2006-08-24 Thread NeilBrown
We need to be careful when referencing mirrors[i].rdev, as it can disappear under us at various times. So: fix a couple of problem places, comment a couple of non-problem places, and move an 'atomic_add' which dereferences rdev down a little way to somewhere it is sure not to be NULL.

[PATCH 000 of 4] md: Introduction

2006-08-24 Thread NeilBrown
Following are 4 patches against 2.6.18-rc4-mm2 The first 2 are bug fixes which should go in 2.6.18, and apply equally well to that tree as to -mm. The latter two should stay in -mm until after 2.6.18. The second patch is maybe bigger than it absolutely needs to be as a bugfix. If you like I can

[PATCH 001 of 2] md: Avoid backward event updates in md superblock when degraded.

2006-08-24 Thread NeilBrown
If we - shut down a clean array, - restart with one (or more) drive(s) missing - make some changes - pause, so that the array gets marked 'clean', the event count on the superblock of included drives will be the same as that of the removed drives. So adding the removed drive back in will
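The event-count comparison this fix hinges on is visible from userspace; a sketch, with member device names assumed:
  mdadm --examine /dev/sda1 | grep -i events   # event counter on a member that stayed in the array
  mdadm --examine /dev/sdb1 | grep -i events   # event counter on the drive that was removed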

[PATCH 002 of 2] md: replace magic numbers in sb_dirty with well defined bit flags

2006-08-24 Thread NeilBrown
From: NeilBrown <[EMAIL PROTECTED]> Instead of magic numbers (0,1,2,3) in sb_dirty, we now have some flags: MD_CHANGE_DEVS Some device state has changed requiring superblock update on all devices. MD_CHANGE_CLEAN The array has transitioned from 'clean' to 'dirty' or back, requiring

[PATCH 000 of 2] md: Fix a bug with backward event updates.

2006-08-24 Thread NeilBrown
Hi Andrew, There is a bug in 2.6.18-rc which needs to be fixed before -final, but there is a patch in -mm which is not scheduled for 2.6.18 which touches the same code - so that patch has to change. So: wind back to md-replace-magic-numbers-in-sb_dirty-with-well-defined-bit-flags.patch dis