Re: raid problem: after every reboot /dev/sdb1 is removed?

2008-02-01 Thread Greg Cormier
I had the same problem. Re-doing the partition type from ext2 to Linux RAID autodetect fixed my problem, but I see you're already using that partition type. Maybe it was the act of re-partitioning in general that fixed my problem? You could try deleting and re-creating that partition, letting it sync, and rebooting.
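
A rough sketch of what that might look like, assuming the disk is /dev/sdb, the partition is /dev/sdb1, and the array is /dev/md0 (all hypothetical names for this example):

    # Take the partition out of the array first
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

    # Delete and re-create the partition, setting its type to
    # "fd" (Linux raid autodetect): in fdisk, d / n / t fd / w
    fdisk /dev/sdb

    # Add it back and watch the resync finish before rebooting
    mdadm /dev/md0 --add /dev/sdb1
    cat /proc/mdstat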

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-18 Thread Greg Cormier
Also, don't use ext*; XFS can be up to 2-3x faster in many of the benchmarks. I'm going to swap file systems and give it a shot right now! :) How is the stability of XFS? I heard recovery is easier with ext2/3 due to more people using it, more tools available, etc.? Greg
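
If anyone wants to try the same swap, a minimal sketch, assuming the array is /dev/md0, it mounts at /data, and xfsprogs is installed (all assumptions, not from the thread):

    # mkfs.xfs picks up the md chunk size and stripe width on its own;
    # they can also be forced with -d su=...,sw=...
    mkfs.xfs /dev/md0
    mount /dev/md0 /data

    # XFS's recovery tool is xfs_repair (run it on an unmounted fs),
    # roughly the counterpart of e2fsck on ext2/3
    xfs_repair /dev/md0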

Re: Raid over 48 disks ... for real now

2008-01-18 Thread Greg Cormier
I wonder how long it would take to run an fsck on one large filesystem? :) I would imagine you'd have time to order a new system, build it, and restore the backups before the fsck was done!

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-18 Thread Greg Cormier
Justin, thanks for the script. Here are my results. I ran it a few times with different tests, hence the small number of results you see here; I slowly trimmed out the obviously not-ideal sizes.

System
---
Athlon64 3500
2GB RAM
4x500GB WD RAID Editions, RAID 5. SDE is the old 4-platter version
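
For anyone wanting to reproduce this, the general shape of such a chunk-size sweep is roughly the following (a sketch only, not Justin's actual script; device names and sizes are invented):

    # Build the RAID 5 with each candidate chunk size and time a big write
    for chunk in 64 128 256 512 1024; do
        mdadm --create /dev/md0 --level=5 --raid-devices=4 \
              --chunk=$chunk /dev/sd[b-e]1
        # (let the initial resync finish before timing anything)
        mkfs.xfs -f /dev/md0
        mount /dev/md0 /mnt/test
        time dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240
        umount /mnt/test
        mdadm --stop /dev/md0
    done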

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Greg Cormier
What sort of tools are you using to get these benchmarks, and can I use them for ext3? I'm very interested in running this on my server. Thanks, Greg On Jan 16, 2008 11:13 AM, Justin Piszcz [EMAIL PROTECTED] wrote: For these benchmarks I timed how long it takes to extract a standard 4.4 GiB
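
That kind of timed-extraction test works the same on any filesystem; a sketch, assuming a large tarball at /tmp/big.tar.bz2 and the filesystem mounted at /mnt/test (both hypothetical paths):

    cd /mnt/test
    time tar xjf /tmp/big.tar.bz2

    # Flush caches between runs so each run starts cold
    sync
    echo 3 > /proc/sys/vm/drop_caches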

Change Stripe size?

2007-12-31 Thread Greg Cormier
So I've been slowly expanding my knowledge of mdadm/Linux RAID. I've got a 1 terabyte array which stores mostly large media files, and from my reading, increasing the stripe size should really help my performance. Is there any way to do this to an existing array, or will I need to back up the
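
Whether this can be done in place depends on the mdadm and kernel versions; newer mdadm can reshape the chunk size with --grow, which is slow and wants a backup file on a device outside the array. A sketch, with hypothetical names:

    # Check the current chunk size
    mdadm --detail /dev/md0 | grep 'Chunk Size'

    # Reshape to a 256K chunk; keep the backup file off the array
    mdadm --grow /dev/md0 --chunk=256 --backup-file=/root/md0-grow.backup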

Re: Superblocks

2007-11-02 Thread Greg Cormier
Any reason 0.9 is the default? Should I be worried about using 1.0 superblocks? And can I upgrade my array from 0.9 to 1.0 superblocks? Thanks, Greg On 11/1/07, Neil Brown [EMAIL PROTECTED] wrote: On Tuesday October 30, [EMAIL PROTECTED] wrote: Which is the default type of superblock? 0.90 or
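
On the upgrade question: mdadm of this era has no in-place conversion, so the usual (risky) route is to re-create the array over the same devices with --assume-clean, which rewrites only the metadata. Both 0.90 and 1.0 superblocks live near the end of the device, so the data layout can stay put. A sketch with hypothetical geometry; only attempt this with current backups:

    # Record the exact geometry first: level, chunk size, device order
    mdadm --detail /dev/md0

    mdadm --stop /dev/md0
    # Re-create with 1.0 metadata; --assume-clean skips the resync.
    # The level, chunk size, and device order MUST match the original.
    mdadm --create /dev/md0 --metadata=1.0 --assume-clean \
          --level=5 --raid-devices=3 --chunk=64 /dev/sd[bcd]1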

Re: Superblocks

2007-10-30 Thread Greg Cormier
Which is the default type of superblock? 0.90 or 1.0? On 10/30/07, Neil Brown [EMAIL PROTECTED] wrote: On Friday October 26, [EMAIL PROTECTED] wrote: Can someone help me understand superblocks and MD a little bit? I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1. --examine on
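
A quick way to check what a given array is actually using (/dev/md0 is hypothetical here):

    # The Version line shows the superblock format, e.g. 00.90.03 or 1.0
    mdadm --detail /dev/md0 | grep -i version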

Superblocks

2007-10-26 Thread Greg Cormier
Can someone help me understand superblocks and MD a little bit? I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1. --examine on these 3 drives shows correct information. However, if I also examine the raw disk devices, sdb and sdd, they also appear to have superblocks with some semi-valid
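
The comparison in question, with the device names from the post:

    # The partition superblocks -- these should be the real ones
    mdadm --examine /dev/sdb1

    # The whole-disk device
    mdadm --examine /dev/sdb

    # With 0.90 superblocks (stored near the end of the device), a
    # partition that ends near the end of the disk can make the raw
    # disk appear to carry a superblock of its own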