Re: mdadm create to existing raid5
The mdadm --create with "missing" instead of a drive is a good idea. Do you actually say "missing", or just leave out a drive? However, doesn't it do a sync every time you create? So wouldn't you run the risk of corrupting another drive each time? Or does it not sync because of saying "missing"? Too bad I am intent on learning things the hard way.

/etc/mdadm.conf from before I recreated:

ARRAY /dev/md2 level=raid5 num-devices=4 spares=1 UUID=4f935928:2b7a1633:71d575d6:dab4d6bc

/etc/mdadm.conf after I recreated:

ARRAY /dev/md1 level=raid5 num-devices=4 UUID=81bdd737:901c0a8f:af38cb94:41c4e3da

Well, before I heard back from you guys, I noticed this problem, and in my fountain of infinite wisdom I ran mdadm --zero-superblock on all my RAID drives and created them again, thinking that if I got it to look the same it would just fix it. Well, they do look the same now. I am at work or I would give you the new mdadm.conf. I really need to learn patience :(

David Greaves wrote:
> David Greaves wrote:
>> For a simple 4 device array there are 24 permutations - doable by hand;
>> if you have 5 devices then it's 120, 6 is 720 - getting tricky ;)
> Oh, wait, for 4 devices there are 24 permutations - and you need to do it
> 4 times, substituting 'missing' for each device - so 96 trials. 4320
> trials for a 6 device array. Hmm. I've got a 7 device RAID 6 - I think
> I'll go and make a note of how it's put together... <grin>
> Have a look at this section and the linked script. I can't test it until
> later:
> http://linux-raid.osdl.org/index.php/RAID_Recovery
> http://linux-raid.osdl.org/index.php/Permute_array.pl
> David

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
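[For the archive: yes, you literally write the word "missing" in the device list, and an array created with a missing member comes up degraded, so there is no initial parity resync to overwrite a member. A rough sketch of the trial count David describes, plus one example trial - device names are placeholders, not taken from this thread:]

```shell
# Trials to brute-force an unknown device order, per David's arithmetic:
# n! orderings, each tried n times with 'missing' substituted for a
# different member slot.
trials() {
    local n=$1 f=1 i
    for ((i = 2; i <= n; i++)); do
        f=$((f * i))
    done
    echo $((f * n))
}

trials 4    # prints 96
trials 6    # prints 4320

# One such trial (placeholder device names) - note the literal word
# "missing" as the fourth member; the degraded array does not resync,
# so member data is not rewritten:
# mdadm --create /dev/md2 --level=5 --raid-devices=4 \
#       /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
```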
Re: 3ware 9650 tips
Wouldn't RAID 6 be slower than RAID 5 because of the extra fault tolerance?

http://www.enterprisenetworksandservers.com/monthly/art.php?1754 - a 20% drop according to this article.

His 500GB WD drives are 7200 RPM compared to the Raptors' 10K, so his numbers will be slower. Justin, what filesystem do you have running on the Raptors? I think that's an interesting point made by Joshua.

Justin Piszcz wrote:
> On Fri, 13 Jul 2007, Joshua Baker-LePain wrote:
>> My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD
>> drives. The controller is set up as a RAID6 w/ a hot spare. OS is
>> CentOS 5 x86_64. It's all running on a couple of Xeon 5130s on a
>> Supermicro X7DBE motherboard w/ 4GB of RAM.
>> Trying to stick with a supported config as much as possible, I need to
>> run ext3. As per usual, though, initial ext3 numbers are less than
>> impressive. Using bonnie++ to get a baseline, I get (after doing
>> 'blockdev --setra 65536' on the device):
>> Write: 136MB/s
>> Read: 384MB/s
>> Proving it's not the hardware, with XFS the numbers look like:
>> Write: 333MB/s
>> Read: 465MB/s
>> How many folks are using these? Any tuning tips? Thanks.
>> --
>> Joshua Baker-LePain
>> Department of Biomedical Engineering
>> Duke University
> Let's try that again with the right address :) You are using HW RAID
> then? Those numbers seem pretty awful for that setup. Including
> linux-raid@ - even though it appears you're running HW RAID, this is
> rather peculiar. To give you an example, I get 464MB/s write and
> 627MB/s read with a 10 disk Raptor software RAID5.
> Justin.

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
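[A sketch of the usual first tuning step for ext3 on a wide stripe: tell the filesystem the stripe geometry at mkfs time. The numbers below are assumptions, not values from the thread - 64KiB chunk size, 4KiB filesystem blocks, and 24 disks = 1 hot spare + 2 RAID6 parity, leaving 21 data disks. The stripe-width extended option needs a reasonably recent e2fsprogs (1.40+).]

```shell
# Compute ext3 stride/stripe-width for a wide RAID6 stripe.
# Assumed (not stated in the thread): 64KiB chunk, 4KiB blocks,
# 21 data disks (24 total - 1 hot spare - 2 parity).
chunk_kb=64 block_kb=4 data_disks=21
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "mkfs.ext3 -E stride=$stride,stripe-width=$stripe_width /dev/sdX1"
# prints: mkfs.ext3 -E stride=16,stripe-width=336 /dev/sdX1

# Then the usual knobs before rerunning the baseline:
#   blockdev --setra 65536 /dev/sdX      # readahead (Joshua already did this)
#   mount -o noatime /dev/sdX1 /mnt
#   bonnie++ -d /mnt
```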