Re: RAID 5 performance issue.

2007-10-04 Thread Steve Cousins

Andrew Clayton wrote:

On Thu, 4 Oct 2007 10:39:09 -0400 (EDT), Justin Piszcz wrote:


  

What type (make/model) of the drives?



The drives are 250 GB Hitachi Deskstar 7K250 series ATA-6 UDMA/100


A couple of things:

   1. I thought you had SATA drives
   2. ATA-6 would be UDMA/133

The SATA-1 versions of the 7K250 did not have NCQ; the SATA-2 versions
do. If you do have SATA drives, are they SATA-1 or SATA-2?
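
One quick way to check from Linux, assuming hdparm is recent enough to
decode the drive's identify data (a sketch; the device name is just an
example, and output varies by drive and kernel):

hdparm -I /dev/sda | egrep -i 'model|sata|queue'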


Steve


Re: RAID 5 performance issue.

2007-10-04 Thread Steve Cousins

Steve Cousins wrote:

A couple of things:

   1. I thought you had SATA drives
   2. ATA-6 would be UDMA/133


Number 2 is not correct: ATA-6 tops out at UDMA/100, and UDMA/133 came in with ATA-7. Sorry about that.

Steve


Re: Linux Software RAID a bit of a weakness?

2007-02-23 Thread Steve Cousins

Colin Simpson wrote:
Hi, 


We had a small server here that was configured with a RAID 1 mirror,
using two IDE disks. 


Last week one of the drives failed in this. So we replaced the drive and
set the array to rebuild. The "good" disk then found a bad block and the
mirror failed.

Now I presume that the "good" disk must have had an underlying bad block
in either unallocated space or a file I never access. As RAID works
at the block level, you only ever see this on an array rebuild, when it's
often catastrophic. Is this a bit of a flaw?


I know there is the definite probability of two drives failing within a
short period of time. But this is a bit different as it's the
probability of two drives failing but over a much larger time scale if
one of the flaws is hidden in unallocated space (maybe a dirt particle
finds its way onto the surface or something). This would make RAID buy
you a lot less in reliability, I'd have thought. 


I seem to remember seeing in the log file for a Dell PERC something
about scavenging for bad blocks. Do hardware RAID systems have a
mechanism that, at times of low activity, searches the disks for bad blocks
to help guard against this sort of failure (so a disk error is reported
early)?

On software RAID, I was thinking, apart from a three-way mirror (which I
don't think is at present supported), is there any merit in, say, cat'ing
the whole disk devices to /dev/null every so often to check that the
whole surface is readable? (I presume just reading the raw device won't
upset things; don't worry, I don't plan on trying it on a production
system.)


Any thoughts? I presume people have thought of this before and I must
be missing something.


Yes, this is an important thing to keep on top of, both for hardware 
RAID and software RAID.  For md:


echo check > /sys/block/md0/md/sync_action

This should be done regularly. I have cron do it once a week.
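
For example, root crontab entries along these lines (just a sketch; the
schedule and /dev/md0 are placeholders, and a check runs for hours):

# kick off a scrub early Sunday morning
30 1 * * 0   echo check > /sys/block/md0/md/sync_action
# a day later, mail out whatever mismatch count the check found
30 1 * * 1   cat /sys/block/md0/md/mismatch_cnt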

Check out: http://neil.brown.name/blog/20050727141521-002

Good luck,

Steve
--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: Changing chunk size

2007-02-16 Thread Steve Cousins



Bill Davidsen wrote:


I'm sure "slow" is a relative term, compared to backing up TBs of data 
and trying to restore them. Not to mention the lack of inexpensive TB 
size backup media. That's totally unavailable at the moment, I'll live 
with what I have, thanks.


You don't back up your RAID arrays?  Yikes! For certain data this would
be fine (data that you can recreate easily), but it sounds like that
isn't the case for you; otherwise you'd just wipe the array and recreate
the data. There are other modes of failure than just the drives
themselves (file system corruption, for instance), so it is wise to do
backups, even on "redundant" systems.


Good luck,

Steve
--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: Raid on USB flash disk

2007-02-10 Thread Steve Cousins
Arne Jansen wrote:
> 
> The main reason why I'm trying this weird setup is that the USB
> drive is always enumerated last in my kernel, and I want to boot
> from it. That means every time I add a disk or remove one I have
> to edit grub.conf and fstab. Very inconvenient. So my idea was
> to create a single device md on it and leave it to the autodetection
> to find the device. So I never have to edit /etc/fstab again for
> a simple hardware change and I'm independent of any enumeration
> changes in future kernel releases.
> 
> But unfortunately it doesn't work :-(

Sorry not to answer your main question, but why not put labels on your
USB device's partitions and use the labels in your fstab? That way it
doesn't matter which device node the drive ends up with.
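
For ext2/ext3 that would look something like this (a sketch; the label
name, device and mount point are made up):

e2label /dev/sdc1 usbroot

and then in /etc/fstab:

LABEL=usbroot   /mnt/usb   ext3   defaults   0 0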

Steve


Re: change strip_cache_size freeze the whole raid

2007-01-22 Thread Steve Cousins



Justin Piszcz wrote:
Yes, I noticed this bug too, if you change it too many times or change it 
at the 'wrong' time, it hangs up when you echo a number > 
/proc/stripe_cache_size.


Basically don't run it more than once and don't run it at the 'wrong' time 
and it works.  Not sure where the bug lies, but yeah I've seen that on 3 
different machines!


Can you tell us when the "right" time is or maybe what the "wrong" time 
is?  Also, is this kernel specific?  Does it (increasing 
stripe_cache_size) work with RAID6 too?
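
For what it's worth, the tunable I know of lives in sysfs rather than
/proc (a sketch; 8192 is just an example value, and the memory used is
roughly that value times the page size times the number of devices):

cat /sys/block/md0/md/stripe_cache_size
echo 8192 > /sys/block/md0/md/stripe_cache_size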


Thanks,

Steve
--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: bad performance on RAID 5

2007-01-18 Thread Steve Cousins

Sevrin Robstad wrote:

I'm suffering from bad performance on my RAID5.

a "echo check >/sys/block/md0/md/sync_action"

gives a speed at only about 5000K/sec , and HIGH load average :


What do you get when you try something like:

time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024

where /mount-point is where /dev/md0 is mounted.

This will create a 1 GiB file and it will tell you how long it takes to 
create it.  Also, I'd try running Bonnie++ on it to see what the 
different performance values are.
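
To time reads of the same file without the page cache flattering the
numbers, something like this should work (a sketch; drop_caches needs a
2.6.16 or later kernel):

echo 3 > /proc/sys/vm/drop_caches
time dd if=/mount-point/test.dat of=/dev/null bs=1024k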


I don't know a lot about the md sync process but I remember having my 
sync action stuck at a low value at one point and it didn't have 
anything to do with the performance of the RAID array in general.
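
Also worth a look are the md resync/check rate limits (a sketch; values
are in KB/sec, and raising the minimum trades foreground I/O for a
faster check):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 10000 > /proc/sys/dev/raid/speed_limit_min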


Steve


# uptime
20:03:55 up 8 days, 19:55,  1 user,  load average: 11.70, 4.04, 1.52

kernel is 2.6.18.1.2257.fc5
mdadm is v2.5.5

the system consists of a 1.2 GHz Athlon XP and two Sil3114 4-port SATA PCI
cards with a total of six 250 GB SATA drives connected.


[EMAIL PROTECTED] ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Tue Dec  5 00:33:01 2006
     Raid Level : raid5
     Array Size : 1218931200 (1162.46 GiB 1248.19 GB)
    Device Size : 243786240 (232.49 GiB 249.64 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jan 17 23:14:39 2007
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 27dce477:6f45d11b:77377d08:732fa0e6
         Events : 0.58

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
[EMAIL PROTECTED] ~]#


Sevrin


--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: Correct way to create multiple RAID volumes with hot-spare?

2006-09-14 Thread Steve Cousins



Ruth Ivimey-Cook wrote:
Steve, 



The recent "Messed up creating new array..." thread has 
someone who started by using the whole drives but she now 
wants to use partitions because the array is not starting 
automatically on boot (I think that was the symptom).  I'm 
guessing this is because there is no partition ID of "fd" 
since there isn't even a partition.



Yes, that's right.


Thanks Ruth.


Neil (or others), what is the recommended way to have the array start up 
if you use whole drives instead of partitions?  Do you put mdadm -A etc. 
in rc.local?
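
One approach I can imagine (a sketch only, not something confirmed in
this thread) is to describe the arrays by UUID in /etc/mdadm.conf and
assemble them from an init script:

# /etc/mdadm.conf (the UUID below is a placeholder)
DEVICE /dev/sd[abcdefghijk]
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

# rc.local, or wherever the distribution assembles md devices
mdadm --assemble --scan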


Thanks,

Steve




Re: access array from knoppix

2006-09-14 Thread Steve Cousins



Dexter Filmore wrote:

When running Knoppix on my file server, I can't mount /dev/md0 simply because 
it isn't there. 
Am I guessing right that I need to recreate the array?

How do I gather the necessary parameters?


From the man page:

mdadm -Ac partitions -m 0 /dev/md0

This will "assemble" the array, as opposed to "create".  It says to look 
in /proc/partitions for viable partitions and then assemble the array 
with devices that have a superblock minor number of 0.  Once done, 
/dev/md0 will exist.  Tuomas was not being a "wisecracker".  His advice 
was valid.
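
To gather the parameters themselves, this should also work from the
Knoppix side (a sketch, assuming mdadm is available on the live system):

mdadm --examine --scan

which prints an ARRAY line (level, number of devices, UUID) for each
superblock it finds.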


Steve





Re: Correct way to create multiple RAID volumes with hot-spare?

2006-09-12 Thread Steve Cousins



Neil Brown wrote:

On Tuesday August 22, [EMAIL PROTECTED] wrote:

Maybe I shouldn't be splitting the drives up into partitions.  I did 
this due to issues with volumes greater than 2TB.  Maybe this isn't an 
issue anymore and I should just rebuild the array from scratch with 
single partitions.  Or should there even be partitions? Should I just 
use /dev/sd[abcdefghijk] ?





I tend to just use whole drives, but your setup should work fine.
md/raid isn't limited to 2TB, but some filesystems might have size
issues (though I think even ext2 goes to at least 8 TB these days).


The recent "Messed up creating new array..." thread has someone who 
started by using the whole drives but she now wants to use partitions 
because the array is not starting automatically on boot (I think that 
was the symptom).  I'm guessing this is because there is no partition ID 
of "fd" since there isn't even a partition.


I'm on the verge of re-doing this array with 11 full drives (/dev/sd? as 
opposed to /dev/sd?1 and /dev/sd?2).  Will I have the same problems with 
booting?  I like the idea of not having to partition the drives but not 
if it is going to cause hassles.  I realize that there could be a 
potential problem if I need to replace a drive with a slightly different 
model that is slightly smaller.


Thanks,

Steve
--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: raidhotadd works, mdadm --add doesn't

2006-09-11 Thread Steve Cousins



Leon Avery wrote:

I've been using RAID for a long time, but have been using the old 
raidtools.  Having just discovered mdadm, I want to switch, but I'm 
having trouble.  I'm trying to figure out how to use mdadm to replace a 
failed disk.  Here is my /proc/mdstat:


Personalities : [linear] [raid1]
read_ahead 1024 sectors
md5 : active linear md3[1] md4[0]
  1024504832 blocks 64k rounding

md4 : active raid1 hdf5[0] hdh5[1]
  731808832 blocks [2/2] [UU]

md3 : active raid1 hde5[0] hdg5[1]
  292696128 blocks [2/2] [UU]

md2 : active raid1 hda5[0] hdc5[1]
  48339456 blocks [2/2] [UU]

md0 : active raid1 hda3[0] hdc3[1]
  9765376 blocks [2/2] [UU]

unused devices: <none>

The relevant parts are md0 and md2.  Physical disk hda failed, which 
left md0 and md2 running in degraded mode.  Having an old spare used 
disk sitting on the shelf, I plugged it in, repartitioned it, and said


mdadm --add /dev/md0 /dev/hda3



I think the thing to do is to list the md device before the --add :

mdadm /dev/md0 --add /dev/hda3

I use the -a form and do:

mdadm /dev/md0 -a /dev/hda3
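
After the add, the rebuild can be followed with something like (a sketch):

watch -n 30 cat /proc/mdstat
mdadm --detail /dev/md0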


Steve





Re: Care and feeding of RAID?

2006-09-05 Thread Steve Cousins



Rev. Jeffrey Paul wrote:

On Tue, Sep 05, 2006 at 11:03:45AM -0400, Steve Cousins wrote:

These are SATA drives, and except for the one machine that has a 3Ware 
8506 card in it, I haven't been able to get SMART programs to do anything 
with them.  How do others deal with this? 




I use the tw_cli program to check up on my 3ware stuff.


Hi Jeffrey,

Thanks.  I use tw_cli too, and I have scripted a check to see if the
array degrades, but this doesn't help with catching disk problems before
they happen, which is what SMART should help with.  As it happens,
smartctl does work with the drives on the 3Ware card.  It is my other
SATA drives that I'm unable to monitor.


Steve




It took me quite a bit of time to figure that one out.  I don't
have any automated monitoring set up, but it'd be simple enough to
script.  I check on the array every so often and run a verify every few
months to see if it kicks a disk out (it hasn't yet).

0 [EMAIL PROTECTED]:~# tw_cli 
//datavibe> info


Ctl   Model        Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    8006-2LP     2       2        1       0        2       -       -


//datavibe> info c0

Unit  UnitType  Status  %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
-----------------------------------------------------------------------
u0    RAID-1    OK      -      -       232.885   ON     -        -


Port   Status   Unit   Size        Blocks      Serial
------------------------------------------------------
p0     OK       u0     232.88 GB   488397168   WD-WMAL718611
p1     OK       u0     232.88 GB   488397168   WD-WMAL718619

//datavibe>

-j



--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302




Re: Care and feeding of RAID?

2006-09-05 Thread Steve Cousins



Benjamin Schieder wrote:

On 05.09.2006 11:03:45, Steve Cousins wrote:

Would people be willing to list their setups, including such things as
the mdadm.conf file, crontab -l, scripts used to check the SMART data
and the array, mdadm daemon parameters, and anything else relevant to
checking and maintaining an array?



Personally, I use this script from cron:
http://shellscripts.org/project/hdtest


Hi Benjamin,

I am checking this out, and I see that you are the author of the script.
I'm getting errors at lines 76 and 86-90 about the arithmetic operators.
This is on a Fedora Core 5 system with bash version 3.1.7(1).  I pulled
the smartctl command out of the script and tried it manually, with no
luck on my SATA /dev/sd? drives.


What do you (or others) recommend for SATA drives?
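
For SATA disks on the 2.6 libata drivers, the first thing I would try is
forcing the ATA pass-through device type (a sketch; /dev/sda is just an
example, and newer smartmontools also accept -d sat):

smartctl -d ata -a /dev/sda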

Thanks,

Steve




Re: Care and feeding of RAID?

2006-09-05 Thread Steve Cousins

Gordon Henderson wrote:


On Tue, 5 Sep 2006, Paul Waldo wrote:

 


Hi all,

I have a RAID6 array and I wondering about care and feeding instructions :-)

Here is what I currently do:
   - daily incremental and weekly full backups to a separate machine
   - run smartd tests (short once a day, long once a week)
   - check the raid for bad blocks every week

What else can I do make sure the array keeps humming?  Thanks in advance!
   



Stop fiddling with it :)

I run similar stuff, but don't forget running mdadm in daemon mode to send
you an email should a drive fail. I also check each device individually,
rather than the array, although I don't know the value of doing this over
the SMART tests on modern drives...
 



Would people be willing to list their setups, including such things as
the mdadm.conf file, crontab -l, scripts used to check the SMART data
and the array, mdadm daemon parameters, and anything else relevant to
checking and maintaining an array?

I'm running the mdmonitor script at startup and a sample mdadm.conf  
(one of 3 machines) looks like:


MAILADDR [EMAIL PROTECTED]
ARRAY /dev/md0 level=raid5 num-devices=3
   UUID=39d07542:f3c97e69:fbb63d9d:64a052d3
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
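
As far as I know, the mdmonitor script just runs mdadm in monitor mode,
roughly like this (a sketch; the delay and mail address are placeholders,
and MAILADDR from mdadm.conf works too):

mdadm --monitor --scan --daemonise --delay=1800 --mail=root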


These are SATA drives, and except for the one machine that has a 3Ware 
8506 card in it, I haven't been able to get SMART programs to do anything 
with them.  How do others deal with this? 


Thanks,

Steve


Re: Correct way to create multiple RAID volumes with hot-spare?

2006-08-25 Thread Steve Cousins


Neil Brown wrote:

> On Tuesday August 22, [EMAIL PROTECTED] wrote:
> > Hi,
> >
> > I have a set of 11 500 GB drives. Currently each has two 250 GB
> > partitions (/dev/sd?1 and /dev/sd?2).  I have two RAID6 arrays set up,
> > each with 10 drives and then I wanted the 11th drive to be a hot-spare.
> >   When I originally created the array I used mdadm and only specified
> > the use of 10 drives since the 11th one wasn't even a thought at the
> > time (I didn't think I could get an 11th drive in the case).  Now I can
> > manually add in the 11th drive partitions into each of the arrays and
> > they show up as spares but on reboot they aren't part of the set
> > anymore.  I have added them into /etc/mdadm.conf and the partition type
> > is set to be  Software RAID (fd).
>
> Can you show us exactly what /etc/mdadm.conf contains?
> And what kernel messages do you get when it assembled the array but
> leaves off the spare?
>

Here is mdadm.conf:

DEVICE /dev/sd[abcdefghijk]*
ARRAY /dev/md0 level=raid6 num-devices=10 spares=1
   UUID=70c02805:0a324ae8:679fc224:3112a95f
   devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1,/dev/sdi1,/dev/sdj1,/dev/sdk1

ARRAY /dev/md1 level=raid6 num-devices=10 spares=1
   UUID=87692745:1a99d67a:462b8426:4e181b2e
   devices=/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2,/dev/sde2,/dev/sdf2,/dev/sdg2,/dev/sdh2,/dev/sdi2,/dev/sdj2,/dev/sdk2

Below is the info from /var/log/messages. This is a listing from when two
partitions from each array were left off. It is also an example of when it
doesn't list the spare. If you want a newer listing, from when the array
builds correctly but still doesn't have a spare, let me know.

What I hadn't really looked at before are the lines that say:

sdk1 has different UUID to sdk2

etc.  Of course they have different UUIDs; sdk1 and sdk2 belong to
different arrays.  Maybe this isn't part of the problem.

Aug 21 18:56:09 juno mdmonitor: mdadm shutdown succeeded
Aug 21 19:01:03 juno mdmonitor: mdadm startup succeeded
Aug 21 19:01:04 juno mdmonitor: mdadm succeeded
Aug 21 19:01:06 juno kernel: md: md driver 0.90.3 MAX_MD_DEVS=256,
MD_SB_DISKS=27
Aug 21 19:01:06 juno kernel: md: bitmap version 4.39
Aug 21 19:01:06 juno kernel: md: raid6 personality registered for level 6
Aug 21 19:01:06 juno kernel: md: Autodetecting RAID arrays.
Aug 21 19:01:06 juno kernel: md: autorun ...
Aug 21 19:01:06 juno kernel: md: considering sdk2 ...
Aug 21 19:01:06 juno kernel: md:  adding sdk2 ...
Aug 21 19:01:06 juno kernel: md: sdk1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sdj2 ...
Aug 21 19:01:06 juno kernel: md: sdj1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sdi2 ...
Aug 21 19:01:06 juno kernel: md: sdi1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sdh2 ...
Aug 21 19:01:06 juno kernel: md: sdh1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sdg2 ...
Aug 21 19:01:06 juno kernel: md: sdg1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sdf2 ...
Aug 21 19:01:06 juno kernel: md: sdf1 has different UUID to sdk2
Aug 21 19:01:06 juno kernel: md:  adding sde2 ...
Aug 21 19:01:06 juno kernel: md: sde1 has different UUID to sdk2
Aug 21 19:01:07 juno kernel: md:  adding sdd2 ...
Aug 21 19:01:07 juno kernel: md: sdd1 has different UUID to sdk2
Aug 21 19:01:07 juno kernel: md:  adding sdc2 ...
Aug 21 19:01:07 juno kernel: md: sdc1 has different UUID to sdk2
Aug 21 19:01:07 juno kernel: md:  adding sdb2 ...
Aug 21 19:01:07 juno kernel: md: sdb1 has different UUID to sdk2
Aug 21 19:01:07 juno kernel: md:  adding sda2 ...
Aug 21 19:01:07 juno kernel: md: sda1 has different UUID to sdk2
Aug 21 19:01:07 juno kernel: md: created md1
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: bind
Aug 21 19:01:07 juno kernel: md: export_rdev(sdk2)
Aug 21 19:01:07 juno kernel: md: running:

Aug 21 19:01:07 juno kernel: md: kicking non-fresh sdi2 from array!
Aug 21 19:01:07 juno kernel: md: unbind<sdi2>
Aug 21 19:01:07 juno kernel: md: export_rdev(sdi2)
Aug 21 19:01:07 juno kernel: md: kicking non-fresh sdb2 from array!
Aug 21 19:01:07 juno kernel: md: unbind<sdb2>
Aug 21 19:01:07 juno kernel: md: export_rdev(sdb2)
Aug 21 19:01:07 juno kernel: raid6: allocated 10568kB for md1
Aug 21 19:01:07 juno kernel: raid6: raid level 6 set md1 active with 8 out of
10 devices, algorithm 2
Aug 21 19:01:07 juno kernel: md: considering sdk1 ...
Aug 21 19:01:07 juno kernel: md:  adding sdk1 ...
Aug 21 19:01:07 juno kernel: md:  adding sdj1 ...
Aug 21 19:01:07 juno kernel: md:  adding sdi1 ...
Aug 21 19:01:07 juno kernel: md:  adding sdh1 ...
Aug 21 19:01:07 juno kernel: md:  adding sdg1 ...
Aug 21 19:01:07 juno kernel: md:  adding 

Re: Correct way to create multiple RAID volumes with hot-spare?

2006-08-25 Thread Steve Cousins


Joshua Baker-LePain wrote:

> On Wed, 23 Aug 2006 at 6:34am, Justin Piszcz wrote
>
> > On Tue, 22 Aug 2006, Steve Cousins wrote:
> >
> >> As for system information, it is (was) a Dual Opteron with CentOS 4.3 (now
> >> I'm putting FC5 on it as I write) with a 3Ware 8506-12 SATA RAID card that
> >> I am using in JBOD mode so I could do software RAID6.
> >>
> >
> > If you are using SW RAID, there is no 2TB limit.
>
> Ditto for more recent HW RAIDs.  *However*, there are other considerations
> when dealing with >2TiB devices.  E.g. you can't boot from them.
>

You are both correct. When I said "due to issues with volumes greater than
2TB", I should have indicated that they were more to do with my own issues
and/or our environment.  I probably wasn't persistent enough with it.  I now
have a DAS box with >2TB volumes and it is working fine, so I should probably
start over using no partitions and see how it goes.

Thanks,

Steve




Correct way to create multiple RAID volumes with hot-spare?

2006-08-22 Thread Steve Cousins

Hi,

I have a set of 11 500 GB drives. Currently each has two 250 GB 
partitions (/dev/sd?1 and /dev/sd?2).  I have two RAID6 arrays set up, 
each with 10 drives and then I wanted the 11th drive to be a hot-spare. 
 When I originally created the array I used mdadm and only specified 
the use of 10 drives since the 11th one wasn't even a thought at the 
time (I didn't think I could get an 11th drive in the case).  Now I can 
manually add in the 11th drive partitions into each of the arrays and 
they show up as spares, but on reboot they aren't part of the set 
anymore.  I have added them into /etc/mdadm.conf and the partition type 
is set to be  Software RAID (fd).
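
For reference, the commands involved are roughly (a sketch matching the
layout above; whether the spare sticks across a reboot still depends on
how the arrays are assembled at boot):

mdadm /dev/md0 -a /dev/sdk1
mdadm /dev/md1 -a /dev/sdk2
mdadm --detail --scan     # prints ARRAY lines (including spares=1) that
                          # can be kept in /etc/mdadm.conf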


Maybe I shouldn't be splitting the drives up into partitions.  I did 
this due to issues with volumes greater than 2TB.  Maybe this isn't an 
issue anymore and I should just rebuild the array from scratch with 
single partitions.  Or should there even be partitions? Should I just 
use /dev/sd[abcdefghijk] ?


On a side note, maybe for another thread: the arrays work great until a
reboot (using 'shutdown' or 'reboot', and they seem to be shutting down
the md system correctly).  Sometimes one or even two (yikes!) partitions
in each array go offline and I have to mdadm /dev/md0 -a /dev/sdx1 them
back in.  Do others experience this regularly with RAID6?  Is RAID6 not
ready for prime time?


As for system information, it is (was) a Dual Opteron with CentOS 4.3 
(now I'm putting FC5 on it as I write) with a 3Ware 8506-12 SATA RAID 
card that I am using in JBOD mode so I could do software RAID6.


Thanks for your help.

Steve
--
______________________________________________________________________
 Steve Cousins, Ocean Modeling Group    Email: [EMAIL PROTECTED]
 Marine Sciences, 452 Aubert Hall       http://rocky.umeoce.maine.edu
 Univ. of Maine, Orono, ME 04469        Phone: (207) 581-4302

