[PATCH 009 of 12] md: Require CAP_SYS_ADMIN for (re-)configuring md devices via sysfs.

2006-06-27 Thread NeilBrown
The ioctl requires CAP_SYS_ADMIN, so sysfs should too. Note that we don't require CAP_SYS_ADMIN for reading attributes even though the ioctl does. There is no reason to limit read access, and much of the information is already available via /proc/mdstat. cc: Chris Wright [EMAIL PROTECTED]
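
A minimal sketch of the kind of check the patch describes, assuming a sysfs store handler with the usual kobject signature; the handler name and error code below are illustrative, not necessarily the actual md.c hunk:

    /* Hedged sketch: writes to md sysfs attributes require CAP_SYS_ADMIN,
     * reads do not.  capable() and CAP_SYS_ADMIN are real kernel APIs;
     * the handler name and arguments are illustrative. */
    #include <linux/capability.h>
    #include <linux/sysfs.h>

    static ssize_t md_attr_store(struct kobject *kobj, struct attribute *attr,
                                 const char *page, size_t length)
    {
            if (!capable(CAP_SYS_ADMIN))    /* writes reconfigure the array */
                    return -EACCES;
            /* ... attribute-specific store ... */
            return length;
    }

The show/read path is deliberately left without a capability check, matching the description above.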

[PATCH 000 of 12] md: Introduction

2006-06-27 Thread NeilBrown
Following are 12 assorted small patches for md. They are against 2.6.17-mm2 and are suitable for inclusion in 2.6.18. They are primarily small bug fixes, many fixing possible races, some of which have been seen in the wild, some not. Thanks, NeilBrown [PATCH 001 of 12] md: Possible fix for

[PATCH 006 of 12] md: Fix some small races in bitmap plugging in raid5.

2006-06-27 Thread NeilBrown
The comment gives more details, but I didn't quite have the sequencing right, so there was room for races to leave bits unset in the on-disk bitmap for short periods of time. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c | 30

[PATCH 008 of 12] md: Unify usage of symbolic names for perms.

2006-06-27 Thread NeilBrown
Some places we use numbers (0660), some places names (S_IRUGO). Change all numbers to names, and change 0655 to be what it should be. Also make some formatting more consistent. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c | 56
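
To illustrate the change: 0644 and S_IRUGO|S_IWUSR are the same mode, and the patch standardises on the symbolic form. The entry type and attribute names below are assumptions for illustration, not a hunk from the patch:

    /* Illustration only: numeric vs. symbolic permissions for a sysfs entry.
     * Before (numeric):
     *     __ATTR(example, 0644, example_show, example_store)
     * After (symbolic names, same mode):
     */
    #include <linux/stat.h>

    static struct md_sysfs_entry md_example =
            __ATTR(example, S_IRUGO|S_IWUSR, example_show, example_store);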

[PATCH 001 of 12] md: Possible fix for unplug problem

2006-06-27 Thread NeilBrown
I have reports of a problem with raid5 which turns out to be because the raid5 device gets stuck in a 'plugged' state. This shouldn't be able to happen, as 3msec after it gets plugged it should get unplugged. However it happens nonetheless. This patch fixes the problem and is a reasonable

[PATCH 003 of 12] md: Delay starting md threads until array is completely setup.

2006-06-27 Thread NeilBrown
When an array is started we start one or two threads (two if there is a reshape or recovery that needs to be completed). We currently start these *before* the array is completely set up and in particular before queue->queuedata is set. If the thread actually starts very quickly on another CPU,
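
To make the race concrete, a simplified sketch of the intended ordering in the array-start path; the function and variable names are approximate and this is not the actual md.c hunk:

    /* Simplified ordering sketch, not the real do_md_run() code. */
    mddev->queue->queuedata = mddev;   /* finish initialising the queue first */
    /* ... remaining array setup ... */
    mddev->thread = md_register_thread(raid_main_fn, mddev, "%s_raid");
    /* If the thread were registered any earlier, it could run immediately
     * on another CPU and dereference ->queuedata while it is still NULL. */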

[PATCH 005 of 12] md: Fix a plug/unplug race in raid5

2006-06-27 Thread NeilBrown
When a device is unplugged, requests are moved from one or two (depending on whether a bitmap is in use) queues to the main request queue. So whenever requests are put on either of those queues, we should make sure the raid5 array is 'plugged'. However we don't. We currently plug the raid5

[PATCH 012 of 12] md: Include sector number in messages about corrected read errors.

2006-06-27 Thread NeilBrown
This is generally useful, but particularly helps to see whether it is the same sector that always needs correcting, or different ones. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid1.c |3 +++ ./drivers/md/raid10.c |4 ./drivers/md/raid5.c | 30
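
As a hedged sketch of the shape of the change (the message text and variable names are illustrative, not the patch's), the existing "read error corrected" printk simply gains the sector number:

    /* Illustrative only: corrected-read-error message carrying the sector. */
    char b[BDEVNAME_SIZE];
    printk(KERN_INFO
           "raid1: %s: read error corrected (%d sectors at %llu)\n",
           bdevname(rdev->bdev, b), sectors,
           (unsigned long long)sect);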

[PATCH 007 of 12] md: Fix usage of wrong variable in raid1

2006-06-27 Thread NeilBrown
Though it rarely matters, we should be using 's' rather than r1_bio->sector here. ### Diffstat output ./drivers/md/raid1.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff .prev/drivers/md/raid1.c ./drivers/md/raid1.c --- .prev/drivers/md/raid1.c 2006-06-27 12:15:16.0

[PATCH 004 of 12] md: Fix resync speed calculation for restarted resyncs.

2006-06-27 Thread NeilBrown
We introduced 'io_sectors' recently so we could count the sectors that cause IO during resync separately from sectors which didn't cause IO - there can be a difference if a bitmap is being used to accelerate resync. However when a speed is reported, we find the number of sectors processed
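
In other words, the reported speed is the sector delta since a recent mark divided by the time since that mark, and the fix is to take that delta from io_sectors consistently. A minimal sketch of the arithmetic, with illustrative names:

    /* Sketch of the speed calculation only, not md.c code.  Sectors are
     * 512 bytes, so sectors/2 gives KiB. */
    unsigned long long delta = io_sectors - mark_io_sectors;       /* since mark */
    unsigned long dt_ms = jiffies_to_msecs(jiffies - mark_jiffies);
    unsigned long speed_kib_s =
            dt_ms ? (unsigned long)(delta / 2 * 1000 / dt_ms) : 0;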

[PATCH 010 of 12] md: Remove a variable that is now unused.

2006-06-27 Thread NeilBrown
Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c |1 - 1 file changed, 1 deletion(-) diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c --- .prev/drivers/md/raid5.c 2006-06-27 12:17:33.0 +1000 +++ ./drivers/md/raid5.c 2006-06-27

[PATCH 002 of 12] md: Set desc_nr correctly for version-1 superblocks.

2006-06-27 Thread NeilBrown
This has to be done in ->load_super, not ->validate_super. Without this, hot-adding devices to an array doesn't always work right, though there is a workaround in mdadm-2.5.2 to make this less of an issue. ### Diffstat output ./drivers/md/md.c |6 +- 1 file changed, 5 insertions(+), 1
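
A hedged sketch of where the assignment ends up: the version-1 ->load_super path reads the device number straight out of the on-disk superblock (field names follow the v1 superblock format; the surrounding code is omitted):

    /* Simplified: inside the version-1 load_super path, not validate_super,
     * so desc_nr is already correct when a device is hot-added. */
    rdev->desc_nr = le32_to_cpu(sb->dev_number);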

[PATCH 011 of 12] md: Fix Will Configure message when interpreting md= kernel parameter.

2006-06-27 Thread NeilBrown
If a partitionable array is used, we should say e.g. "Will configure md_d0 (super-block) from" rather than "Will configure md0 (super-block) from", which implies non-partitionable. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./usr/kinit/do_mounts_md.c |4 ++--
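
A minimal sketch of the message fix in the kinit do_mounts_md.c style; the variable names here are assumptions, not the file's actual ones:

    /* Sketch: choose the right name depending on whether the md= parameter
     * describes a partitionable (md_dN) or non-partitionable (mdN) array. */
    printf("md: Will configure md%s%d (super-block) from %s, ...\n",
           partitioned ? "_d" : "", minor, devname);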

Re: recover data from linear raid

2006-06-27 Thread Andreas Dilger
On Jun 27, 2006 11:23 +1000, Neil Brown wrote: You could try hunting for ext3 superblocks on the device. There might be an easier way, but: od -x /dev/hdb | grep '^.60 ef53 ' Compile the findsuper tool from e2fsprogs - it was made for this. Also possible to use
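
If findsuper isn't to hand, the same hunt can be done with a small standalone program: the ext2/ext3 superblock magic is 0xEF53, stored little-endian at byte offset 56 of the superblock, and the primary superblock sits 1024 bytes into the filesystem. A rough sketch (device path and scan granularity are up to the user):

    /* ext3 superblock hunter - rough sketch; scans a device for the 0xEF53
     * magic at offset 56 within each 1024-byte-aligned candidate block. */
    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char **argv)
    {
        FILE *f;
        unsigned char buf[1024];
        unsigned long long off = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s /dev/hdb\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        while (fread(buf, 1, sizeof(buf), f) == sizeof(buf)) {
            /* s_magic lives at offset 56 of the superblock, little-endian */
            uint16_t magic = buf[56] | (buf[57] << 8);
            if (magic == 0xEF53)
                printf("possible superblock at byte offset %llu\n", off);
            off += sizeof(buf);
        }
        fclose(f);
        return 0;
    }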

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-27 Thread Nix
On 26 Jun 2006, Neil Brown said: On Tuesday June 20, [EMAIL PROTECTED] wrote: For some time, mdadm's been dumping core on me in my uClibc-built initramfs. As you might imagine this is somewhat frustrating, not least since my root filesystem's in LVM on RAID. Half an hour ago I got around to

Re: Multiple raids on one machine?

2006-06-27 Thread Nix
On 25 Jun 2006, Chris Allen uttered the following: Back to my 12 terabyte fileserver, I have decided to split the storage into four partitions each of 3TB. This way I can choose between XFS and EXT3 later on. So now, my options are between the following: 1. Single 12TB /dev/md0,

Re: [PATCH*2] mdadm works with uClibc from SVN

2006-06-27 Thread Nix
On Tue, 27 Jun 2006, Neil Brown prattled cheerily: On Tuesday June 27, [EMAIL PROTECTED] wrote: ,[ config.c:load_partitions() ] | name = map_dev(major, minor, 1); | | d = malloc(sizeof(*d)); | d->devname = strdup(name); ` Ahh.. uhmmm... Oh yes. I've fixed that since, but

Re: Multiple raids on one machine?

2006-06-27 Thread Chris Allen
Nix wrote: On 25 Jun 2006, Chris Allen uttered the following: Back to my 12 terabyte fileserver, I have decided to split the storage into four partitions each of 3TB. This way I can choose between XFS and EXT3 later on. So now, my options are between the following: 1. Single 12TB /dev/md0,

ANNOUNCE: mdadm 2.5.2 - A tool for managing Soft RAID under Linux

2006-06-27 Thread Neil Brown
I am pleased to announce the availability of mdadm version 2.5.2. It is available at the usual places: http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and (with countrycode=xx) http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/ and via git at git://neil.brown.name/mdadm

Re: Large single raid... - XFS over NFS woes

2006-06-27 Thread Dexter Filmore
On Friday, 23 June 2006 14:50, you wrote: Strange that whatever the filesystem you get equal numbers of people saying that they have never lost a single byte to those who have had horrible corruption and would never touch it again. We stopped using XFS about a year ago because we were

Re: recover data from linear raid

2006-06-27 Thread Dimitris Zilaskos
On Tue, 27 Jun 2006, Neil Brown wrote: On Monday June 26, [EMAIL PROTECTED] wrote: This is what I get now, after creating with fdisk /dev/hdb1 and /dev/hdc1 as linux raid autodetect partitions So I'm totally confused now. You said it was 'linear', but the boot log showed 'raid0'.

Re: Multiple raids on one machine?

2006-06-27 Thread Nix
On Tue, 27 Jun 2006, Chris Allen wondered: Nix wrote: There is a third alternative which can be useful if you have a mess of drives of widely-differing capacities: make several RAID arrays so as to tessellate space across all the drives, and then pile an LVM on top of all of them to

When will the Areca RAID driver be merged into mainline?

2006-06-27 Thread Justin Piszcz
Anyone have an ETA on this? I heard soon but was wondering how soon..? kernel-version-2.6.x kernel-version-2.6.x/arcmsr kernel-version-2.6.x/arcmsr/arcmsr.c kernel-version-2.6.x/arcmsr/arcmsr.h kernel-version-2.6.x/arcmsr/Makefile kernel-version-2.6.x/readme.txt The driver is quite small and

Re: When will the Areca RAID driver be merged into mainline?

2006-06-27 Thread Randy.Dunlap
On Tue, 27 Jun 2006 14:05:54 -0400 (EDT) Justin Piszcz wrote: Anyone have an ETA on this? I heard soon but was wondering how soon..? kernel-version-2.6.x kernel-version-2.6.x/arcmsr kernel-version-2.6.x/arcmsr/arcmsr.c kernel-version-2.6.x/arcmsr/arcmsr.h

mdadm 2.5.2 - Static built, Interesting warnings when

2006-06-27 Thread Mr. James W. Laferriere
Hello All, What change in Glibc makes this necessary? Is there a method available to include the getpwnam/getgrnam structures so that a full static build will work? Tia, JimL gcc -Wall -Werror -Wstrict-prototypes -ggdb -DSendmail=\/usr/sbin/sendmail -t\

Re: Ok to go ahead with this setup?

2006-06-27 Thread Mike Dresser
On Fri, 23 Jun 2006, Molle Bestefich wrote: Christian Pernegger wrote: Anything specific wrong with the Maxtors? I'd watch out regarding the Western Digital disks; apparently they have a bad habit of turning themselves off when used in RAID mode, for some reason:

I need a PCI V2.1 4 port SATA card

2006-06-27 Thread Guy
Hello group, I am upgrading my disks from old 18 Gig SCSI disks to 300 Gig SATA disks. I need a good SATA controller. My system is old and has PCI V2.1. I need a 4-port card, or two 2-port cards. My system has multiple PCI buses, so 2 cards may give me better performance, but I don't need

Re: I need a PCI V2.1 4 port SATA card

2006-06-27 Thread Brad Campbell
Guy wrote: Hello group, I am upgrading my disks from old 18 Gig SCSI disks to 300 Gig SATA disks. I need a good SATA controller. My system is old and has PCI V2.1. I need a 4-port card, or two 2-port cards. My system has multiple PCI buses, so 2 cards may give me better performance, but