Re: RAID5 drive failure, please verify my commands

2005-01-16 Thread Robin Bowes
Gerd Knops wrote:
Hello all,
One of the dreaded Maxtor SATA drives in my RAID5 failed, after just 3 
months of light use. Anyhow I neither have the disk capacity nor the 
money to buy it to make a backup. To make sure I do it correctly, could 
you folks please double-check my intended course of action? I would 
really appreciate that.
Gerd,
If you've got a credit card you can get Maxtor to send out a replacement 
 without having to pull the failed drive.

I've done this several times :) (by Maxtor drives!)
R.
--
http://robinbowes.com


Re: RAID5 drive failure, please verify my commands

2005-01-17 Thread Robin Bowes
Mike Hardy wrote:

Gerd Knops wrote:
Hello all,
One of the dreaded Maxtor SATA drives in my RAID5 failed, after just 3 
months of light use. Anyhow I neither have the disk capacity nor the 
money to buy it to make a backup. To make sure I do it correctly, 
could you folks please double-check my intended course of action? I 
would really appreciate that.

Failed how? I have tons and tons of Maxtor drives in service, and only 
one actually had a complete failure (verified by their utility, which is 
present on some bootable CD I got called UltimateBootDisk).
Mike,
When one of my drives fails I test it with the Maxtor Powermax tool - 
it's this tool that confirms the drive(s) are dying.

The only saving grace is the three-year warranty.
R.
--
http://robinbowes.com


Re: migrating raid-1 to different drive geometry ?

2005-01-24 Thread Robin Bowes
Neil Brown wrote:
If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
array with
   mdadm --grow /dev/mdX --size=max
Neil,
Is this just for RAID1, or will it work for RAID5 too?
R.
--
http://robinbowes.com


Re: Broken harddisk

2005-02-01 Thread Robin Bowes
Luca Berra wrote:
On Tue, Feb 01, 2005 at 03:02:54PM +, Robin Bowes wrote:
True enough. However, until SMART support makes it into linux SATA 
drivers I'm pretty much stuck with dd!

http://www.kernel.org/pub/linux/kernel/people/jgarzik/libata/
I avoid patching kernels, preferring to use the stock Fedora releases.
Perhaps I should re-phrase the above statement to read "until the Fedora 
Core 3 kernel includes SMART support in libata" ... :)

Actually, I'm running 2.6.10 - how can I tell if SMART support is 
included in libata?
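One quick check, assuming smartmontools is installed, is simply to ask one
of the SATA disks for its SMART data; if the running kernel's libata lacks
the SMART passthrough, the command fails rather than printing attributes:

smartctl -d ata -a /dev/sda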

R.
--
http://robinbowes.com


Joys of spare disks!

2005-02-28 Thread Robin Bowes
Hi,
I run a RAID5 array built from six 250GB Maxtor Maxline II SATA disks. 
After having several problems with Maxtor disks I decided to use a spare 
disk, i.e. 5 active + 1 spare.

Well, *another* disk failed last week. The spare disk was brought into 
play seamlessly:

[EMAIL PROTECTED] ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 00.90.01
  Creation Time : Thu Jul 29 21:41:38 2004
     Raid Level : raid5
     Array Size : 974566400 (929.42 GiB 997.96 GB)
    Device Size : 243641600 (232.35 GiB 249.49 GB)
   Raid Devices : 5
  Total Devices : 6
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Mon Feb 28 14:00:54 2005
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 128K
           UUID : a4bbcd09:5e178c5b:3bf8bd45:8c31d2a1
         Events : 0.6941488
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       82        3      active sync   /dev/sdf2
       4       8       66        4      active sync   /dev/sde2
       5       8       50        -      faulty        /dev/sdd2
I've done a quick test of /dev/sdd2:
[EMAIL PROTECTED] ~]# dd if=/dev/sdd2 of=/dev/null bs=64k
dd: reading `/dev/sdd2': Input/output error
50921+1 records in
50921+1 records out
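For a fuller picture of how many sectors are actually unreadable, a
read-only badblocks pass over the same partition should do the job (a
sketch, using the stock badblocks from e2fsprogs):

badblocks -sv /dev/sdd2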
So, I guess it's time to raise another return with Maxtor *sigh*.
/dev/sdd1 is used in /dev/md0. So, just to confirm, is this what I need 
to do to remove the bad disk and add the new one:

Remove the faulty partition from the RAID5 array:
mdadm --manage /dev/md5 --remove /dev/sdd2
Fail and remove the (still good) partition from the RAID1 array:
mdadm --manage /dev/md0 --fail /dev/sdd1
mdadm --manage /dev/md0 --remove /dev/sdd1
[pull out bad disk, install replacement]
Partition the new disk (it will be /dev/sdd; all six disks are partitioned 
the same):

fdisk -l /dev/sda | fdisk /dev/sdd
(I seem to remember having a problem with this when I did it last time. 
Something about a bug in fdisk that won't partition brand-new disks 
correctly? Or was it sfdisk?)
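If fdisk won't take that pipe, sfdisk can dump one disk's partition table
in a format it can re-read onto another disk - a sketch, assuming the
replacement drive is the same size:

sfdisk -d /dev/sda | sfdisk /dev/sdd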

Add new partitions to arrays:
mdadm --manage /dev/md0 --add /dev/sdd1
mdadm --manage /dev/md5 --add /dev/sdd2
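Once the partitions are added back, I assume I can watch the rebuild with:

cat /proc/mdstat
mdadm --detail /dev/md5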
Thanks,
R.
--
http://robinbowes.com


Re: non-optimal RAID 5 performance with 8 drive array

2005-03-01 Thread Robin Bowes
Nicola Fankhauser wrote:
see [3] for a description of what I did and more details.
Hi Nicola,
I read your description with interest.
I thought I'd try some speed tests myself but dd doesn't seem to work 
the same for me (on FC3). Here's what I get:

[EMAIL PROTECTED] test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096 
count=10
10+0 records in
10+0 records out

Notice there is no timing information.
For the read test:
[EMAIL PROTECTED] test]# dd of=/dev/null if=/home/test/test.tmp bs=4096
10+0 records in
10+0 records out
Again, no timing information.
Anyone know if this is a quirk of the FC3 version of dd?
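In the meantime I can at least get timings by wrapping the command in
time, or, for a longer-running copy, by sending the running dd a USR1
signal, which GNU dd answers by printing its I/O statistics:

time dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=10
kill -USR1 $(pidof dd)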
R.
--
http://robinbowes.com


Re: new array not starting

2006-11-07 Thread Robin Bowes
Robin Bowes wrote:
 
 If I try to start the array manually:
 
 # mdadm --assemble --auto=yes  /dev/md2 /dev/hdc /dev/hdd /dev/hde
 /dev/hdf /dev/hdg /dev/hdh /dev/hdi /dev/hdj
 mdadm: cannot open device /dev/hdc: No such file or directory
 mdadm: /dev/hdc has no superblock - assembly aborted
 
 What's going on here? No superblock? Doesn't that get written when the
 array is created?
 
 Am I doing this right?

SATA disks? hdc? Duh!

This worked:

# mdadm --assemble --auto=yes  /dev/md2 /dev/sdc /dev/sdd /dev/sde
/dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
mdadm: /dev/md2 has been started with 8 drives.

However, I'm not sure why it didn't start automatically at boot. Do I
need to put it in /etc/mdadm.conf for it to start automatically? I
thought md started all the arrays it found at startup?
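If it does need to be in mdadm.conf, I gather the ARRAY lines can be
generated straight from the superblocks (output worth checking before
appending to /etc/mdadm.conf):

mdadm --examine --scan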

R.



Re: new array not starting

2006-11-07 Thread Robin Bowes
Robin Bowes wrote:
 This worked:
 
 # mdadm --assemble --auto=yes  /dev/md2 /dev/sdc /dev/sdd /dev/sde
 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
 mdadm: /dev/md2 has been started with 8 drives.
 
 However, I'm not sure why it didn't start automatically at boot. Do I
 need to put it in /etc/mdadm.conf for it to start automatically? I
 thought md started all the arrays it found at startup?

OK, I put /dev/md2 in /etc/mdadm.conf and it didn't make any difference.

This is mdadm.conf (uuids are on same line as ARRAY):

DEVICE partitions
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=300c1309:53d26470:64ac883f:2e3de671
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=89649359:d89365a6:0192407d:e0e399a3
ARRAY /dev/md2 level=raid6 num-devices=8 UUID=68c2ea69:a30c3cb0:9af9f0b8:1300276b

I saw an error fly by as the server was booting saying /dev/md2 not found.

Do I need to create this device manually?
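If the node really is missing, my understanding is that md arrays use
block major 9 with the minor equal to the array number, so it could be
created by hand - a sketch:

mknod /dev/md2 b 9 2

though --auto=yes at assembly time, or an ARRAY line plus mdadm -As,
should normally take care of that.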

R.



Re: new array not starting

2006-11-07 Thread Robin Bowes
Robin Bowes wrote:
 Robin Bowes wrote:
 This worked:

 # mdadm --assemble --auto=yes  /dev/md2 /dev/sdc /dev/sdd /dev/sde
 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
 mdadm: /dev/md2 has been started with 8 drives.

 However, I'm not sure why it didn't start automatically at boot. Do I
 need to put it in /etc/mdadm.conf for it to start automatically? I
 thought md started all the arrays it found at startup?
 
 OK, I put /dev/md2 in /etc/mdadm.conf and it didn't make any difference.
 
 This is mdadm.conf (uuids are on same line as ARRAY):
 
 DEVICE partitions
 ARRAY /dev/md1 level=raid1 num-devices=2 uuid=300c1309:53d26470:64ac883f:2e3de671
 ARRAY /dev/md0 level=raid1 num-devices=2 uuid=89649359:d89365a6:0192407d:e0e399a3
 ARRAY /dev/md2 level=raid6 num-devices=8 UUID=68c2ea69:a30c3cb0:9af9f0b8:1300276b
 
 I saw an error fly by as the server was booting saying /dev/md2 not found.
 
 Do I need to create this device manually?

Well, at the risk of having a complete conversation with myself, I've
created partitions of type fd on each disk and re-created the array
out of the partitions instead of the whole disk.

mdadm --create /dev/md2 --auto=yes --raid-devices=8 --level=6 /dev/sdc1
/dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1

I'm hoping this will enable the array to be auto-detected and started at
boot.
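For the record, the partition type can also be set non-interactively, and
mdadm --examine should confirm each partition now carries a superblock
where autodetection expects it - a sketch, assuming an sfdisk that still
supports --change-id:

sfdisk --change-id /dev/sdc 1 fd
mdadm --examine /dev/sdc1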

R.



Re: new array not starting

2006-11-08 Thread Robin Bowes
Robin Bowes wrote:
 
 Well, at the risk of having a complete conversation with myself, I've
 created partitions of type fd on each disk and re-created the array
 out of the partitions instead of the whole disk.
 
 mdadm --create /dev/md2 --auto=yes --raid-devices=8 --level=6 /dev/sdc1
 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1
 
 I'm hoping this will enable the array to be auto-detected and started at
 boot.

That seemed to do the trick.

I left the array to build overnight and rebooted this morning, and
/dev/md2 started normally.

R.



Re: raid5 software vs hardware: parity calculations?

2007-01-13 Thread Robin Bowes
Bill Davidsen wrote:

 There have been several recent threads on the list regarding software
 RAID-5 performance. The reference might be updated to reflect the poor
 write performance of RAID-5 until/unless significant tuning is done.
 Read that as tuning obscure parameters and throwing a lot of memory into
 stripe cache. The reasons for hardware RAID should include performance
 of RAID-5 writes is usually much better than software RAID-5 with
 default tuning.

Could you point me at a source of documentation describing how to
perform such tuning?

Specifically, I have 8x500GB WD SATA drives on a Supermicro PCI-X 8-port
SATA card configured as a single RAID6 array (~3TB available space).

Thanks,

R.



Re: raid5 software vs hardware: parity calculations?

2007-01-15 Thread Robin Bowes
Bill Davidsen wrote:
 Robin Bowes wrote:
 Bill Davidsen wrote:
  
 There have been several recent threads on the list regarding software
 RAID-5 performance. The reference might be updated to reflect the poor
 write performance of RAID-5 until/unless significant tuning is done.
 Read that as tuning obscure parameters and throwing a lot of memory into
 stripe cache. The reasons for hardware RAID should include performance
 of RAID-5 writes is usually much better than software RAID-5 with
 default tuning.
 

 Could you point me at a source of documentation describing how to
 perform such tuning?
   
 No. There has been a lot of discussion of this topic on this list, and a
 trip through the archives of the last 60 days or so will let you pull
 out a number of tuning tips which allow very good performance. My
 concern was writing large blocks of data, 1MB per write, to RAID-5, and
 didn't involve the overhead of small blocks at all, which leads through
 other code and behavior.

Actually Bill, I'm running RAID6 (my mistake for not mentioning it
explicitly before) - I found some material relating to RAID5 but nothing
on RAID6.

Are the concepts similar, or is RAID6 a different beast altogether?
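From what I've dug up so far, the main knobs seem to be the md stripe
cache and the device readahead, both settable at runtime - illustrative
values only, assuming the array is /dev/md2:

echo 8192 > /sys/block/md2/md/stripe_cache_size
blockdev --setra 65536 /dev/md2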

 Specifically, I have 8x500GB WD SATA drives on a Supermicro PCI-X 8-port
 SATA card configured as a single RAID6 array (~3TB available space).
   
 No hot spare(s)?

I'm running RAID6 instead of RAID5+1 - I've had a couple of instances
where a drive has failed in a RAID5+1 array and a second has failed
during the rebuild after the hot-spare had kicked in.

R.