[PATCH md 001 of 5] drivers/md/raid1.c: make a function static

2005-09-02 Thread NeilBrown
This patch makes a needlessly global function static. Signed-off-by: Adrian Bunk [EMAIL PROTECTED] Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid1.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff ./drivers/md/raid1.c~current~

[PATCH md 005 of 5] Report spare drives in /proc/mdstat

2005-09-02 Thread NeilBrown
Just like failed drives have (F), so spare drives now have (S). Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff ./drivers/md/md.c~current~ ./drivers/md/md.c --- ./drivers/md/md.c~current~
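(Illustration, not part of the original message; device names and sizes are hypothetical.) With this patch a degraded array that also carries a spare might show up in /proc/mdstat roughly like this, the spare marked (S) alongside the failed member's (F):

    md0 : active raid1 sdc1[2](S) sdb1[1](F) sda1[0]
          488383936 blocks [2/1] [U_]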

[PATCH md 004 of 5] Add information about superblock version to /proc/mdstat

2005-09-02 Thread NeilBrown
Leave it unchanged if the original (0.90) is used, in case it might be a compatibility problem. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |9 + 1 file changed, 9 insertions(+) diff ./drivers/md/md.c~current~ ./drivers/md/md.c ---
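(Illustration only, devices and sizes made up.) A version-1 array would now report its superblock format in the /proc/mdstat detail line, while 0.90 arrays keep the old output unchanged:

    md1 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0]
          1465151808 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]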

[PATCH md 002 of 5] Choose better default offset for bitmap.

2005-09-02 Thread NeilBrown
On reflection, a better default location for hot-adding bitmaps with version-1 superblocks is immediately after the superblock. There might not be much room there, but there is usually at least 3k, and that is a good start. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output
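(Not from the original message; device name hypothetical.) The default offset matters when an internal bitmap is hot-added to an already-running array, typically with something like:

    mdadm --grow --bitmap=internal /dev/md0

With a version-1 superblock, the bitmap then lands in the gap immediately following the superblock rather than at a fixed offset.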

[PATCH md 003 of 5] Use queue_hardsect_size instead of block_size for md superblock size calc.

2005-09-02 Thread NeilBrown
Doh. I want the physical hard-sector-size, not the current block size... Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff ./drivers/md/md.c~current~ ./drivers/md/md.c --- ./drivers/md/md.c~current~
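(Illustrative aside, not part of the patch.) The two quantities the patch distinguishes can be inspected from userspace, and need not match; rounding the superblock write size to the block size rather than the hardware sector size is what the fix avoids:

    blockdev --getss /dev/sda     # hardware sector size, e.g. 512
    blockdev --getbsz /dev/sda    # current block size, e.g. 4096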

Re: question regarding multipath Linux 2.6

2005-09-02 Thread Luca Berra
On Thu, Sep 01, 2005 at 02:51:44PM -0400, Jim Faulkner wrote: Hello, Recently my department had a SAN installed, and I am in the process of setting up one of the first Linux machines connected to it. The machine is running Red Hat Enterprise AS4 (x86_64), which uses Linux kernel version

Re: MD or MDADM bug?

2005-09-02 Thread Claas Hilbrecht
--On Thursday, 1 September 2005 17:26 -0400, David M. Strang [EMAIL PROTECTED] wrote: The problem is: my array is now 26 of 28 disks -- /dev/sdm *IS* bad; it [...] What can I do? I don't believe this is working as intended. I think the posts: 08.08.2005: How to recover a multiple

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Thursday September 1, [EMAIL PROTECTED] wrote: This is somewhat of a crosspost from my thread yesterday; but I think it deserves its own thread atm. Some time ago, I had a device fail -- with the help of Neil, Tyler and others on the mailing list, and a few patches to mdadm -- I was able to

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: mdadm 2.0 had a fix for assembling version-1 arrays that would particularly affect raid5. Try using that instead of -devel-3. No luck -- -([EMAIL PROTECTED])-(~)- # mdadm -A /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: Can you run that with '-v' for me? mdadm: looking for devices for /dev/md0 mdadm: /dev/sda is identified as a member of /dev/md0, slot 0. mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1. mdadm: /dev/sdc is identified as a

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Can you run that with '-v' for me? mdadm: looking for devices for /dev/md0 mdadm: /dev/sda is identified as a member of /dev/md0, slot 0. mdadm: /dev/sdb is

Re: MD or MDADM bug?

2005-09-02 Thread David M. Strang
Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Can you run that with '-v' for me? mdadm: looking for devices for /dev/md0 mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: Does this mean I'm going to lose all my data? No. At least, you shouldn't, and doing the --create won't make anything worse. So do the --create with the 'missing', and don't add any spares. Do a 'fsck' or whatever to check that everything is
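(Sketch only, based on the parameters quoted later in this thread: 28 devices, raid5, 128k chunk, version-1 superblock; the device list is abbreviated, with 'missing' standing in for the failed /dev/sdm.)

    mdadm --create /dev/md0 -e1 -l5 -n28 -c128 \
          /dev/sda /dev/sdb ... missing ... /dev/sdab

Because one slot is 'missing', the array starts degraded and no resync is triggered, so an fsck can confirm the data is intact before a replacement disk is brought in.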

[EMAIL PROTECTED]: [suse-beta-e] RFC: time_adj for HZ==250 in kernel/timer.c ?]

2005-09-02 Thread Harald Koenig
Hi Ingo and linux-kernel, as I'm not subscribed to the linux-kernel list, please send answers with a CC to me. Thanks! Playing with the openSUSE 10.0 beta-test I stumbled over problems with procinfo etc., and noticed that at least SUSE/Novell has again changed the scheduler frequency HZ from 1000 to 250.

[PATCH md ] Make sure the new 'sb_size' is set properly for a device added without a pre-existing superblock.

2005-09-02 Thread NeilBrown
Looks like I should run my test suite with both mdadm-1.12 and mdadm-2.0, as this slipped through my testing. (The bug is in code that didn't reach 2.6.13. Only -mm is affected). Thanks, NeilBrown ### Comments for Changeset There are two ways to add devices to an md/raid array. It can

RE: question regarding multipath Linux 2.6

2005-09-02 Thread Jim Faulkner
Yes, a copy of the whitepaper would be most useful. If you could e-mail it to me or make it available on a website for download, that would be great. thanks, Jim Faulkner On Fri, 2 Sep 2005, Callahan, Tom wrote: You're running into the problem of the active/passive link as you stated. If

Re: Where is the performance bottleneck?

2005-09-02 Thread Al Boldi
Holger Kiehl wrote: top - 08:39:11 up 2:03, 2 users, load average: 23.01, 21.48, 15.64 Tasks: 102 total, 2 running, 100 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0% us, 17.7% sy, 0.0% ni, 0.0% id, 78.9% wa, 0.2% hi, 3.1% si Mem: 8124184k total, 8093068k used, 31116k free,

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread Christopher Smith
Daniel Pittman wrote: Christopher Smith [EMAIL PROTECTED] writes: [...] The components are 12x400GB drives attached to a 3ware 9500s-12 controller. They are configured as single disks on the controller, ie: no hardware RAID is involved. A quick question for you, because I have a client

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread Brad Dameron
On Fri, 2005-09-02 at 20:38 +1000, Daniel Pittman wrote: Christopher Smith [EMAIL PROTECTED] writes: [...] The components are 12x400GB drives attached to a 3ware 9500s-12 controller. They are configured as single disks on the controller, ie: no hardware RAID is involved. A quick

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread berk walker
Brad Dameron wrote: On Fri, 2005-09-02 at 20:38 +1000, Daniel Pittman wrote: Christopher Smith [EMAIL PROTECTED] writes: [...] The components are 12x400GB drives attached to a 3ware 9500s-12 controller. They are configured as single disks on the controller, ie: no hardware RAID is

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread Brad Dameron
On Thu, 2005-09-01 at 13:50 -0400, berk walker wrote: I guess if we were all wholesalers with a nice long lead time, that would be great, Brad. But where, and for how much might one purchase these? b- http://www.topmicrousa.com/controllers--tekram.html

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread Ming Zhang
On Fri, 2005-09-02 at 11:09 -0700, Brad Dameron wrote: On Thu, 2005-09-01 at 13:50 -0400, berk walker wrote: I guess if we were all wholesalers with a nice long lead time, that would be great, Brad. But where, and for how much might one purchase these? b-

Re: 3ware RAID (was Re: RAID resync stalled at 99.7% ?)

2005-09-02 Thread Joshua Baker-LePain
On Fri, 2 Sep 2005 at 11:09am, Brad Dameron wrote On Thu, 2005-09-01 at 13:50 -0400, berk walker wrote: I guess if we were all wholesalers with a nice long lead time, that would be great, Brad. But where, and for how much might one purchase these? b-

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Does this mean I'm going to loose all my data? No. At least, you shouldn't, and doing the --create won't make anything worse. So do the --create with the

Re: MD or MDADM bug?

2005-09-02 Thread David M. Strang
Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Friday September 2, [EMAIL PROTECTED] wrote: Does this mean I'm going to loose all my data? No. At least, you shouldn't, and doing the --create won't make anything worse. So do the

Re: MD or MDADM bug?

2005-09-02 Thread Neil Brown
On Friday September 2, [EMAIL PROTECTED] wrote: Sorry. Add -e 1 Well, I'm quite happy to report --- that worked! Excellent! So, once I get the bad drive replaced; and the array re-synced -- will I want to stop the array, and execute: mdadm -C /dev/md0 -e1 -l5 -n28 -c 128
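(The reply is truncated here; the following is not from the message itself. The conventional way to bring a replacement disk into the degraded array, hypothetical device name, is a simple hot-add, with md handling the resync -- no second --create should be needed:)

    mdadm /dev/md0 --add /dev/sdm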