Re: Software based SATA RAID-5 expandable arrays?
Ahh... guess it's time to upgrade again. My plan was to start off with 3 drives in a RAID5, and slowly grow it up to maybe 6 or 7 drives before converting it over to a RAID6, and then topping it out at 12 drives (all I can fit in the case). The performance hit isn't going to bother me too much - it's mainly going to be for video for my media server for the house...

So... I can expand a RAID6 now, which is good. But can I change from RAID5 to RAID6 whilst online?

Graham

----- Original Message -----
From: Daniel Korstad [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 11 July, 2007 11:03:34 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?

That was true up to kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived. I have reshaped (added additional drives to) my RAID 6 twice now with no problems in the past few months.

You mentioned that as the only disadvantage. There are other things to consider. The overhead for parity, of course. You can't have a RAID 6 with only three drives unless you build it with a missing drive and run degraded. Also (my opinion) it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road. When you have an array with several drives, it is more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency increases [(N-2)/N]. And with more drives, the odds of getting hit with a bit error after you lose a drive, while you are trying to rebuild, go up. Also, there is a very slight performance drop for write speeds on RAID6, since you are calculating both p and q parity. But for what I use my system for - family digital photos, file storage and media serving - I mostly read data, and I'm not bothered by the slight performance hit on writes. I have been using RAID6 with 10 disks for over a year and it has saved me at least once.
As far as converting the RAID6 to RAID5 or RAID4... Never had a need to do this, but no, probably not.

Dan.

----- Inline Message Follows -----
To: Daniel Korstad; Michael
Cc: linux-raid@vger.kernel.org
From: jahammonds prost
Subject: Re: Software based SATA RAID-5 expandable arrays?

Why do I use RAID6? For the extra redundancy. I've been thinking about RAID6 too, having been bitten a couple of times. The only disadvantage that I can see at the moment is that you can't convert and grow it... i.e. I can't convert from a 4 drive RAID5 array to a 5 drive RAID6 one when I add an additional drive... I also don't think that you can grow a RAID6 array at the moment - I'd want to add additional drives over a few months as they come on sale. Or am I wrong on both counts?

Graham
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
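[Archive note: Dan's reshape procedure and parity arithmetic above can be sketched as below. This is a hedged sketch, not from the original thread; /dev/md0 and /dev/sdg1 are placeholder device names, and a reshape on a live array is at your own risk.]

```shell
# RAID6 reshape as described above (commented out - destructive):
# 1. Add the new disk as a spare:
#      mdadm /dev/md0 --add /dev/sdg1
# 2. Grow the array onto it (here from 6 to 7 devices):
#      mdadm --grow /dev/md0 --raid-devices=7
# 3. Watch the reshape progress:
#      cat /proc/mdstat

# The parity-overhead arithmetic from the message, checked for N=10:
N=10
overhead=$(awk -v n=$N 'BEGIN { printf "%.0f", (2/n)*100 }')        # (2/N)*100
efficiency=$(awk -v n=$N 'BEGIN { printf "%.0f", ((n-2)/n)*100 }')  # (N-2)/N
echo "N=$N: ${overhead}% lost to parity, ${efficiency}% usable"
```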
Re: Software based SATA RAID-5 expandable arrays?
Yeah... I kinda suspected that it would need to be a new drive being added - which is fine by me. I'm in the planning stages for building my next home server... One way to do it (with what we have at the moment) would be to have enough drives set up for RAID5, and build an empty RAID6 array. Move the data over, then destroy the old array, and grow out the new one with the recovered disks. Ick... but I think that works.

Graham

----- Original Message -----
From: Daniel Korstad [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 11 July, 2007 3:26:51 PM
Subject: RE: Software based SATA RAID-5 expandable arrays?

And if I were a betting man, I would guess you will need to add a physical drive to execute a RAID5 to RAID6 conversion, for the additional parity, even if your current RAID5 is not full of data. So if your case only holds 12 drives, I would not grow your RAID5 to 12 drives and expect to be able to convert to RAID6 with the same 12 drives, even if they are not full of data. But that is just my guess on a feature that does not even exist yet...

Dan.

----- Original Message -----
From: [EMAIL PROTECTED] on behalf of Daniel Korstad
Sent: Wed, 7/11/2007 2:14pm
To: jahammonds prost
Cc: linux-raid@vger.kernel.org
Subject: RE: Software based SATA RAID-5 expandable arrays?

Currently, no, you can't. However, it is on the TODO list. http://neil.brown.name/blog/20050727143147-003 Maybe by the end of the year; Neil hit his goal on the RAID6 grow for kernel 2.6.21... But Neil states the RAID5 to RAID6 conversion is more complex to implement...

Dan.

----- Original Message -----
From: jahammonds prost
Sent: Wed, 7/11/2007 12:26pm
To: Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?

Ahh...
RAID5 Expansion
I've just finished adding some extra disks to my server, and I've got a couple of questions I can't quite work out... I've just added 3 more 320Gb drives into the server, and I want to expand my existing md0 with /dev/sd1 and /dev/hde1. Do I need to do these individually, or can they both be done at the same time? Also, can you change the stripe size (and is there any point in doing so)? Most of the files on this disk will be 350Mb+ video files.

Also, when I started building this server, I was using a mix of 300Gb and 320Gb drives, so for the 320Gb drives, I made /dev/hda2 with the remaining 20Gb. As time has gone on, I've replaced all the 300Gb drives with 320Gb ones. Is there any easy way to pull these extra 20Gb partitions into the main array? I've got them set as /dev/md1 at the moment, and I use LVM, so I could pull them into my filesystem that way. Killing /dev/md1 is an option (I can back up the data elsewhere). Killing /dev/md0 isn't... All the data is backed up, but it would be a full weekend of swapping DVDs in and out to restore it.

Thanks.

Graham
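[Archive note: for what it's worth, mdadm can take several new members in a single reshape pass, which answers the "individually or at the same time" question. A hedged sketch, not from the original thread; /dev/md0 and the partition names are placeholder assumptions.]

```shell
# Sketch only (destructive on a real array): add both new partitions
# as spares, then grow the member count once.
mdadm /dev/md0 --add /dev/sdd1 /dev/hde1
mdadm --grow /dev/md0 --raid-devices=5
# A single reshape over both disks avoids re-striping the array twice.
cat /proc/mdstat    # watch the reshape progress
```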
Re: Software based SATA RAID-5 expandable arrays?
From: Brad Campbell [EMAIL PROTECTED]

I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's not rocket science.

Where did you find reasonably priced cases to hold so many drives? Each of my home servers tops out at 8 data drives - plus a 20Gb one to boot from.

Graham

----- Original Message -----
From: Brad Campbell [EMAIL PROTECTED]
To: greenjelly [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 20 June, 2007 4:52:38 PM
Subject: Re: Software based SATA RAID-5 expandable arrays?

greenjelly wrote:
> The options I seek are to be able to start with a 6 drive RAID-5 array, then, as my demand for more space increases in the future, to be able to plug in more drives and incorporate them into the array without the need to back up the data. Basically I need the software to add the drive/drives to the array, then rebuild the array incorporating the new drives while preserving the data on the original array.

I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's not rocket science. Put a lot of drives in a box, make sure you have enough SATA ports and power to go around (really watch your peak 12V consumption on spin-up) and use Linux md. Easy... Oh, but make sure the drives stay cool!

For a cheap-o home server (which is what I have) I'd certainly not bother with a dedicated RAID card. You are not even going to need gigabit ethernet really... I've got 15 drives on a single PCI bus; it's as slow as a wet week in May (in the southern hemisphere), but I'm streaming to 3 head units which total a combined 5MB/s if I'm lucky... Rebuilds can take up to 10 hours though.

QUESTIONS:
> Since this is a media server, and would only be used to serve movies and video to my two machines, it wouldn't have to be powered up full time (my music consumes less space and will be contained on two separate machines).
> Is there a way to considerably lower the power consumption of this server the 90% of the time it's not in use?

Yes: don't poll for SMART, and spin down the drives when idle (man hdparm). Use S3 sleep and WOL if you are really clever. (I'm not; my boxes live in a dedicated server room with its own AC, but that's because I'm nuts.) I also have over 25k hours on the drives because I don't spin them down. I figure the extra power is a trade-off for drive life. They've got less than 50 spin cycles on them in over 25k hours...

> Can Linux support drive arrays of significant sizes (4-8 terabytes)?

Yes, easily (6TB here).

> Can Linux software RAID support RAID-5 expandability, allowing me to increase the number of disks in the array without the need to back up the media, recreate the array from scratch and then copy the backup to the machine (something I will be unable to do)?

Yes, but get a cheap UPS at least (it's cheap insurance).

> I know this is a Linux forum, but I figure many of you guys work with Windows Server. If so, does Windows 2003 provide the same support for the requirements above?

Why would you even _ask_?? Read the man page for mdadm, then read it again (and a third time). Then google for "RAID-5 two drive failure linux" just to familiarise yourself with the background. What you are doing has been done before many, many times. There are some well written sites out there relating to building exactly what you want to build, in great detail. If you are serious about using Windows, I pity you... Linux (actually a combination of the kernel md layer and mdadm) makes it so easy you'd be nuts to beat your head against the wall with the alternative.

Brad
--
Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.
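[Archive note: Brad's spin-down advice can be sketched with hdparm. A hedged sketch, not from the original thread; the device name and timeout are assumptions. hdparm -S values 1-240 encode the idle timeout in 5-second units.]

```shell
# Spin a drive down after 20 minutes idle: -S 240 = 240 * 5s = 1200s.
hdparm -S 240 /dev/sda
# Or force an immediate standby (spin-down) right now:
hdparm -y /dev/sda
# And keep smartd from waking the disks: the "-n standby" directive in
# smartd.conf skips SMART checks while a drive is spun down.
```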
-- Douglas Adams
Re: trouble creating array
So anyway, I created my RAID device, waited about 4 hours for it to sync, and all was happy with the world, so I went to bed. This morning, I made an ext3 file system on it, set up some directories, set the ACLs, added it to my smb.conf file, mapped a drive, and after about 4Gb had been copied onto it, I got an error. Checking in the logs, there was an error about trying to access past the physical extent. I did a quick check of mdstat, and one of the array disks was down, and a dm device had shown up in the array configuration. A quick dmsetup -C info again shows that mpath0p1 has returned. Does anyone know why it keeps appearing, and more importantly, is there any way of persuading it not to...?

Thanks again for any help and pointers people can give...

Graham

----- Original Message -----
From: jahammonds prost [EMAIL PROTECTED]
To: Neil Brown [EMAIL PROTECTED]
Cc: Justin Piszcz [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Tuesday, 27 February, 2007 8:10:36 PM
Subject: Re: trouble creating array

Just an update for everyone on this (and for the archives). Did some digging, and had a look at the device mapper config:

# dmsetup -C info
Name                 Maj Min Stat Open Targ Event UUID
mpath0               253   0 L--w    1    1     0
VolGroup00-LogVol01  253   3 L--w    1    1     0  LVM-2sahQifzg9s9k0xSkpnzpGCYd7JNVXzUof6XmQ0fhzJ6jIbQLciAbm6lJLwK0Aex
VolGroup00-LogVol00  253   2 L--w    1    1     0  LVM-2sahQifzg9s9k0xSkpnzpGCYd7JNVXzUt3sJ3nwvJF8cr6ebv5AdnY0j4Ajl6U0I
mpath0p1             253   1 L--w    0    1     0

Those 2 multipath devices shouldn't have been there, and indeed mpath0/mpath0p1 were sitting on my 2 recalcitrant disks.

# dmsetup -C remove mpath0p1
# dmsetup -C remove mpath0

And they were both gone (you need to start at the bottom of the tree when removing them). A quick fdisk of my disks to create sdb1 and sdc1 partitions, and my array will be synched in 204 mins... :)

Thanks for everyone's help in pointing me in the right direction... It was a trip I would have preferred not to have made, but like all trips like that, it's been illuminating. Thanks again.
Graham

----- Original Message -----
From: Neil Brown [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: Justin Piszcz [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Sunday, 25 February, 2007 11:40:26 PM
Subject: Re: trouble creating array

On Sunday February 25, [EMAIL PROTECTED] wrote:
> Any ideas how to find out what has it open? I can happily write all over the disk with dd... I can create and delete the partition, and it's all good... I will try deleting the sd{b,c}1 partitions, reboot, and see what happens.

ls -l /sys/block/*/holders/* ??

NeilBrown
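[Archive note: on persuading multipath not to claim local disks again - a hedged sketch, not from the original thread. The blacklist stanza is standard multipath-tools configuration, but the devnode regex here is only an example for these particular disks.]

```shell
# Sketch: stop multipathd from grabbing local SATA disks by
# blacklisting them in /etc/multipath.conf (example regex - adjust
# to your own device names, or blacklist by wwid for robustness):
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "^sd[bc]$"
}
EOF
# Then restart multipathd (and rebuild the initrd on FC6) so the
# mpath0/mpath0p1 maps stop reappearing at boot.
```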
Re: trouble creating array
Ah ha!

# ls -l /sys/block/*/holders/*
lrwxrwxrwx 1 root root 0 Feb 26 06:28 /sys/block/sdb/holders/dm-0 -> ../../../block/dm-0
lrwxrwxrwx 1 root root 0 Feb 26 06:28 /sys/block/sdc/holders/dm-0 -> ../../../block/dm-0

which I am assuming is dmraid? I did a quick check, and:

# dmraid -r
No RAID disks

Doesn't look like there is anything defined. I know that both sdb and sdc (as well as sda) are connected to SATA cards which support RAID under Windows... However, I know there are no arrays defined on them on the cards (at least not by me)... I will bounce the server again and check.

Graham

----- Original Message -----
From: Neil Brown [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: Justin Piszcz [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Sunday, 25 February, 2007 11:40:26 PM
Subject: Re: trouble creating array

On Sunday February 25, [EMAIL PROTECTED] wrote:
> Any ideas how to find out what has it open? I can happily write all over the disk with dd... I can create and delete the partition, and it's all good... I will try deleting the sd{b,c}1 partitions, reboot, and see what happens.

ls -l /sys/block/*/holders/* ??

NeilBrown
Re: trouble creating array
Rebooted and checked that there were no arrays defined. I have 2 cards in the server - one is a VIA based card, with a single Maxtor, the other is a Promise based card, with 2 Maxtor (by Seagate) drives on it. I've been into the config utilities on both cards, and neither of them has arrays defined.

----- Original Message -----
From: Neil Brown [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: Justin Piszcz [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Sunday, 25 February, 2007 11:40:26 PM
Subject: Re: trouble creating array

On Sunday February 25, [EMAIL PROTECTED] wrote:
> Any ideas how to find out what has it open? I can happily write all over the disk with dd... I can create and delete the partition, and it's all good... I will try deleting the sd{b,c}1 partitions, reboot, and see what happens.

ls -l /sys/block/*/holders/* ??

NeilBrown
trouble creating array
Just built a new FC6 machine, with 5x 320Gb drives and 1x 300Gb drive. Made a 300Gb partition on all the drives /dev/hd{c,d,e} and /dev/sd{a,b,c}... Trying to create an array gave me an error, since it thought there was already an array on some of the disks (and there was an old one). I decided to clear off the superblock on the drives with mdadm --zero-superblock. It worked fine on all drives, except for /dev/sd{b,c}1, which return an error:

mdadm: Couldn't open /dev/sdb1 for write - not zeroing

There doesn't seem to be a problem with the drive, as I've run a non-destructive badblocks on it, and also done a dd if=/dev/zero of=/dev/sdb1 on it, and I've written out 300Gb onto the partition. When I try and create an array using these 2 partitions, I get an error:

mdadm: Cannot open /dev/sdb1: Device or resource busy
mdadm: Cannot open /dev/sdc1: Device or resource busy

and it aborts. I've double checked that the drives aren't mounted anywhere. There's nothing in /var/log/messages either... Any suggestions where to check next?

Graham
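[Archive note: when mdadm reports "Device or resource busy" like this, a few places to look for whatever holds the device - a hedged sketch, not from the original thread; device names are examples.]

```shell
# Who claims the partition? The kernel tracks holders in sysfs:
ls -l /sys/block/sdb/holders/
# Open file handles on the device node:
lsof /dev/sdb1
# Device-mapper (multipath/dmraid/LVM) mappings that may have grabbed it:
dmsetup table
```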
Re: trouble creating array
> Do you have an active md array? mdadm -S /dev/md0

Nothing was showing up in /proc/mdstat, but when I try and stop md0, I get this:

# mdadm -S /dev/md0
mdadm: stopped /dev/md0

> lsof | egrep '(sdb|sdc)'

I had tried that before, and nothing is showing. A reboot didn't help, but something is definitely keeping it open. I tried an mkfs:

# mkfs.ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
/dev/sdb1 is apparently in use by the system; will not make a filesystem here!

Any ideas how to find out what has it open? I can happily write all over the disk with dd... I can create and delete the partition, and it's all good... I will try deleting the sd{b,c}1 partitions, reboot, and see what happens.

Graham

----- Original Message -----
From: Justin Piszcz [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Sunday, 25 February, 2007 5:35:33 PM
Subject: Re: trouble creating array

On Sun, 25 Feb 2007, jahammonds prost wrote:
> Just built a new FC6 machine, with 5x 320Gb drives and 1x 300Gb drive. Made a 300Gb partition on all the drives /dev/hd{c,d,e} and /dev/sd{a,b,c}... Trying to create an array gave me an error, since it thought there was already an array on some of the disks (and there was an old one). I decided to clear off the superblock on the drives with mdadm --zero-superblock. It worked fine on all drives, except for /dev/sd{b,c}1, which return an error "mdadm: Couldn't open /dev/sdb1 for write - not zeroing". There doesn't seem to be a problem with the drive, as I've run a non-destructive badblocks on it, and also done a dd if=/dev/zero of=/dev/sdb1 on it, and I've written out 300Gb onto the partition. When I try and create an array using these 2 partitions, I get an error "mdadm: Cannot open /dev/sdb1: Device or resource busy; mdadm: Cannot open /dev/sdc1: Device or resource busy" and it aborts. I've double checked that the drives aren't mounted anywhere. There's nothing in /var/log/messages either...
> Any suggestions where to check next?
>
> Graham

Do you have an active md array?

mdadm -S /dev/md0
mdadm -S /dev/md1
mdadm -S /dev/md2
... etc

lsof | egrep '(sdb|sdc)'

Something thinks it's in use; that is why you cannot format it or make it part of a new array. A reboot would also fix the problem.

Justin.
Re: mdadm RAID5 array failure
> mdadm -Af /dev/md0 should get it back for you.

It did indeed... Thank you.

> But you really want to find out why it died.

Well, it looks like I have a bad section on hde, which got tickled as I was copying files onto it... As the rebuild progressed and hit around 6%, it hit the same spot on the disk again, and locked the box up solid. I ended up setting speed_limit_min and speed_limit_max to 0 so that the rebuild didn't happen, activated my LVM volume groups, and mounted the first of the logical volumes. I've just copied off all the files on that LV, and tomorrow I'll get the other 2 done.

I do have a spare drive in the array... any idea why it wasn't being activated when hde went offline?

> What kernel version are you running?

Kernel is 2.6.17-1.2142.FC4, and mdadm is v1.11.0, 11 April 2005. I am assuming that the underlying RAID doesn't do any bad block handling?

Once again, thank you for your help.

Graham

----- Original Message -----
From: Neil Brown [EMAIL PROTECTED]
To: jahammonds prost [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 7 February, 2007 10:57:47 PM
Subject: Re: mdadm RAID5 array failure

On Thursday February 8, [EMAIL PROTECTED] wrote:
> I'm running an FC4 system. I was copying some files on to the server this weekend, and the server locked up hard, and I had to power off. I rebooted the server, and the array came up fine, but when I tried to fsck the filesystem, fsck just locked up at about 40%. I left it sitting there for 12 hours, hoping it was going to come back, but I had to power off the server again. When I now reboot the server, it is failing to mount my RAID5 array:
>
> mdadm: /dev/md0 assembled from 3 drives and 1 spare - not enough to start the array.

mdadm -Af /dev/md0 should get it back for you. But you really want to find out why it died. Were there any kernel messages at the time of the first failure? What kernel version are you running?

> I've added the output from the various files/commands at the bottom...
> I am a little confused by the output... According to /dev/hd[cgh], there is only 1 failed disk in the array, so why does it think that there are 3 failed disks in the array?

You need to look at the 'Events' count. md will look for the device with the highest event count and reject anything with an event count 2 or more less than that.

NeilBrown
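[Archive note: Neil's event-count check can be done per member, alongside the rebuild-throttle knobs Graham mentions above - a hedged sketch, not from the original thread; the device glob is assumed from the mdadm.conf shown later.]

```shell
# Compare event counters across members; stale members lag behind:
for d in /dev/hd[b-h]; do
    echo -n "$d: "
    mdadm -E "$d" | grep Events
done
# Throttle the resync to (effectively) a standstill, as in the post:
echo 0 > /proc/sys/dev/raid/speed_limit_min
echo 0 > /proc/sys/dev/raid/speed_limit_max
```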
mdadm RAID5 array failure
I'm running an FC4 system. I was copying some files on to the server this weekend, and the server locked up hard, and I had to power off. I rebooted the server, and the array came up fine, but when I tried to fsck the filesystem, fsck just locked up at about 40%. I left it sitting there for 12 hours, hoping it was going to come back, but I had to power off the server again. When I now reboot the server, it is failing to mount my RAID5 array:

mdadm: /dev/md0 assembled from 3 drives and 1 spare - not enough to start the array.

I've added the output from the various files/commands at the bottom... I am a little confused by the output. According to /dev/hd[cgh], there is only 1 failed disk in the array, so why does it think that there are 3 failed disks? It looks like there is only 1 failed disk. I got an error from SMARTD about it when I got the server back into multiuser mode, so I know there is an issue with the disk (Device: /dev/hde, 8 Offline uncorrectable sectors), but there are still enough disks to bring up the array, and for the spare disk to start rebuilding. I've spent the last couple of days googling around, and I can't seem to find much on how to recover a failed md array. Is there any way to get the array back and working? Unfortunately I don't have a backup of this array, and I'd really like to try and get the data back (there are 3 LVM logical volumes on it).

Thanks very much for any help.

Graham

My /etc/mdadm.conf looks like this:

# cat /etc/mdadm.conf
DEVICE /dev/hd*[a-z]
ARRAY /dev/md0 level=raid5 num-devices=6 UUID=96c7d78a:2113ea58:9dc237f1:79a60ddf
   devices=/dev/hdh,/dev/hdg,/dev/hdf,/dev/hde,/dev/hdd,/dev/hdc,/dev/hdb

Looking at /proc/mdstat, I am getting this output:

# cat /proc/mdstat
Personalities : [raid5] [raid4]
md0 : inactive hdc[0] hdb[6] hdh[5] hdg[4] hdf[3] hde[2] hdd[1]
      137832 blocks super non-persistent

Here's the output when run on the device that some think has failed.
# mdadm -E /dev/hde
/dev/hde:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : 96c7d78a:2113ea58:9dc237f1:79a60ddf
  Creation Time : Wed Feb  1 17:10:39 2006
     Raid Level : raid5
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 0
    Update Time : Sun Feb  4 17:29:53 2007
          State : active
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1
       Checksum : dcab70d - correct
         Events : 0.840944
         Layout : left-symmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     2      33        0        2      active sync   /dev/hde
   0     0      22        0        0      active sync   /dev/hdc
   1     1      22       64        1      active sync   /dev/hdd
   2     2      33        0        2      active sync   /dev/hde
   3     3      33       64        3      active sync   /dev/hdf
   4     4      34        0        4      active sync   /dev/hdg
   5     5      34       64        5      active sync   /dev/hdh
   6     6       3       64        6      spare         /dev/hdb

Running an mdadm -E on /dev/hd[bcgh] gives this:

      Number   Major   Minor   RaidDevice State
this     6       3       64        6      spare         /dev/hdb
   0     0      22        0        0      active sync   /dev/hdc
   1     1      22       64        1      active sync   /dev/hdd
   2     2       0        0        2      faulty removed
   3     3      33       64        3      active sync   /dev/hdf
   4     4      34        0        4      active sync   /dev/hdg
   5     5      34       64        5      active sync   /dev/hdh
   6     6       3       64        6      spare         /dev/hdb

And running mdadm -E on /dev/hd[def]:

      Number   Major   Minor   RaidDevice State
this     3      33       64        3      active sync   /dev/hdf
   0     0      22        0        0      active sync   /dev/hdc
   1     1      22       64        1      active sync   /dev/hdd
   2     2      33        0        2      active sync   /dev/hde
   3     3      33       64        3      active sync   /dev/hdf
   4     4      34        0        4      active sync   /dev/hdg
   5     5      34       64        5      active sync   /dev/hdh
   6     6       3       64        6      spare         /dev/hdb

Looking at /var/log/messages shows the following:

Feb  6 12:36:42 file01bert kernel: md: bind<hdd>
Feb  6 12:36:42 file01bert kernel: md: bind<hde>
Feb  6 12:36:42 file01bert kernel: md: bind<hdf>
Feb  6 12:36:42 file01bert kernel: md: bind<hdg>
Feb  6 12:36:42 file01bert kernel: md: bind<hdh>
Feb  6 12:36:42 file01bert kernel: md: bind<hdb>
Feb  6 12:36:42 file01bert kernel: md: bind<hdc>
Feb  6 12:36:42 file01bert kernel: md: kicking non-fresh hdf from array!
Feb  6 12:36:42 file01bert kernel: md: unbind<hdf>
Feb  6 12:36:42 file01bert kernel: md: export_rdev(hdf)
Feb  6 12:36:42 file01bert kernel: md: kicking non-fresh hde from array!
Feb  6 12:36:42
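[Archive note: given superblocks like the above, the recovery Neil suggests earlier in the thread boils down to a forced assembly - a hedged sketch, not from the original thread; the device list is assumed from the mdadm.conf shown.]

```shell
# Force-assemble from the freshest superblocks: mdadm picks the members
# with the highest event counts and updates the stale ones.
mdadm --assemble --force /dev/md0 /dev/hd[bcdefgh]
# Then check which members were accepted before trusting the data:
cat /proc/mdstat
mdadm --detail /dev/md0
```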