Re: mdadm create to existing raid5

2007-07-13 Thread David Greaves
Guy Watkins wrote: } [EMAIL PROTECTED] On Behalf Of Jon Collette } I wasn't thinking and did an mdadm --create to my existing raid5 instead } of --assemble. The syncing process ran and now it's not mountable. Is } there any way to recover from this? Maybe. Not really sure. But don't do anything

Re: mdadm create to existing raid5

2007-07-13 Thread David Greaves
David Greaves wrote: For a simple 4 device array there are 24 permutations - doable by hand; if you have 5 devices then it's 120, 6 is 720 - getting tricky ;) Oh, wait, for 4 devices there are 24 permutations - and you need to do it 4 times, substituting 'missing' for each device - so 96
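A rough sketch of how that brute-force search could be scripted; the device names are placeholders, it only prints the candidate commands rather than running them, and it is not taken from the thread itself:

    #!/bin/sh
    # For a 4-device RAID5: leave one device out as "missing", permute the
    # remaining three plus "missing" across the four slots (24 orderings),
    # and repeat for each left-out device (4 x 24 = 96 candidates).
    DEVS="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
    for skip in $DEVS; do
      rest="$(echo "$DEVS" | sed "s|$skip||") missing"
      for a in $rest; do for b in $rest; do for c in $rest; do for d in $rest; do
        # keep only true permutations (no slot reused)
        [ "$a" != "$b" ] && [ "$a" != "$c" ] && [ "$a" != "$d" ] && \
        [ "$b" != "$c" ] && [ "$b" != "$d" ] && [ "$c" != "$d" ] || continue
        echo mdadm --create /dev/md0 --assume-clean --level=5 \
             --raid-devices=4 "$a" "$b" "$c" "$d"
      done; done; done; done
    done

Each printed command would then be tried by hand, ideally against copies or images of the disks, checking whether the filesystem looks sane before trusting any ordering.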

Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-07-13 Thread Ric Wheeler
Guy Watkins wrote: } -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED] } Sent: Thursday, July 12, 2007 1:35 PM } To: [EMAIL PROTECTED] } Cc: Tejun Heo; [EMAIL PROTECTED]; Stefan Bader; Phillip Susi; device-mapper }

RE: Software based SATA RAID-5 expandable arrays?

2007-07-13 Thread Daniel Korstad
To run it manually: echo check > /sys/block/md0/md/sync_action. Then you can check the status with: cat /proc/mdstat. Or to continually watch it, if you want (kind of boring though :) ): watch cat /proc/mdstat. This will refresh every 2 sec. In my original email I suggested to use a crontab so you
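For reference, the commands being described look like this (a sketch, assuming the array is /dev/md0; the crontab schedule is only an example):

    # start a consistency check by hand
    echo check > /sys/block/md0/md/sync_action
    # check progress once
    cat /proc/mdstat
    # or keep watching it (refreshes every 2 seconds by default)
    watch cat /proc/mdstat
    # example root crontab line to run the check monthly instead
    # 0 1 1 * *   echo check > /sys/block/md0/md/sync_action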

[GIT PULL] ioat fixes, raid5 acceleration, and the async_tx api

2007-07-13 Thread Dan Williams
Linus, please pull from git://lost.foo-projects.org/~dwillia2/git/iop ioat-md-accel-for-linus to receive: 1/ I/OAT performance tweaks and simple fixups. These patches have been in -mm for a few kernel releases as git-ioat.patch 2/ RAID5 acceleration and the async_tx api. These

Re: Software based SATA RAID-5 expandable arrays?

2007-07-13 Thread Bill Davidsen
Michael wrote: RESPONSE I had everything working, but it is evident that when I installed SuSe the first time check and repair were not included in the package :( I did not use the I used , as was incorrectly stated in much of the documentation I set up. Doesn't matter, either will work and

RE: Software based SATA RAID-5 expandable arrays?

2007-07-13 Thread Daniel Korstad
I can't speak for SuSe issues but I believe there is some confusion on the packages and command syntax. So hang on, we are going for a ride, step by step... Check and repair are not packages per se. You should have a package called echo. If you run this: echo 1, you should get a 1 echoed
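In other words, "check" and "repair" are not programs at all but strings written into the md sysfs interface. A minimal sketch, assuming the array is /dev/md0:

    echo check  > /sys/block/md0/md/sync_action   # scrub that counts mismatches without rewriting them
    echo repair > /sys/block/md0/md/sync_action   # scrub that also rewrites inconsistent parity/mirror data
    cat /sys/block/md0/md/mismatch_cnt            # mismatches found by the last check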

Re: mdadm create to existing raid5

2007-07-13 Thread Jon Collette
The mdadm --create with missing instead of a drive is a good idea. Do you actually say missing or just leave out a drive? However, doesn't it do a sync every time you create? So wouldn't you run the risk of corrupting another drive each time? Or does it not sync because of the saying
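(For reference: the literal word "missing" does go in place of a device name. A sketch with placeholder device names; the array then starts degraded, and with no spare member to rebuild onto there is no initial sync.)

    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 missing
    cat /proc/mdstat   # array comes up active but degraded, no resync running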

Re: 3ware 9650 tips

2007-07-13 Thread Jon Collette
Wouldn't RAID 6 be slower than RAID 5 because of the extra fault tolerance? http://www.enterprisenetworksandservers.com/monthly/art.php?1754 - a 20% drop according to this article. His 500GB WD drives are 7200RPM compared to the Raptors' 10K, so his numbers will be slower. Justin, what file

Re: 3ware 9650 tips

2007-07-13 Thread Justin Piszcz
On Fri, 13 Jul 2007, Joshua Baker-LePain wrote: My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS 5 x86_64. It's all running on a couple of Xeon 5130s on a Supermicro X7DBE motherboard w/ 4GB of

Re-building an array

2007-07-13 Thread mail
Hi List, I am very new to raid, and I am having a problem. I made a raid10 array, but I only used 2 disks. Since then, one failed, and my system crashes with a kernel panic. I copied all the data, and I would like to start over. How can I start from scratch? I need to get rid of my /dev/md0,

Re: 3ware 9650 tips

2007-07-13 Thread Joshua Baker-LePain
On Fri, 13 Jul 2007 at 2:35pm, Justin Piszcz wrote On Fri, 13 Jul 2007, Joshua Baker-LePain wrote: My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS 5 x86_64. It's all running on a couple of Xeon

Re: Re-building an array

2007-07-13 Thread Justin Piszcz
On Fri, 13 Jul 2007, mail wrote: Hi List, I am very new to raid, and I am having a problem. I made a raid10 array, but I only used 2 disks. Since then, one failed, and my system crashes with a kernel panic. I copied all the data, and I would like to start over. How can I start from
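A common way to tear such an array down completely looks roughly like this (a sketch with placeholder member names, not necessarily the exact advice given in the reply):

    umount /dev/md0                               # if it is still mounted anywhere
    mdadm --stop /dev/md0                         # stop the array
    mdadm --zero-superblock /dev/sdb1 /dev/sdc1   # wipe md metadata from the old members
    # then remove any /dev/md0 entries from /etc/mdadm.conf and /etc/fstab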

[-mm PATCH 1/2] raid5: add the stripe_queue object for tracking raid io requests (take2)

2007-07-13 Thread Dan Williams
The raid5 stripe cache object, struct stripe_head, serves two purposes: 1/ frontend: queuing incoming requests 2/ backend: transitioning requests through the cache state machine to the backing devices. The problem with this model is that queuing decisions are directly

Re: [-mm PATCH 0/2] 74% decrease in dispatched writes, stripe-queue take3

2007-07-13 Thread Andrew Morton
On Fri, 13 Jul 2007 15:35:42 -0700 Dan Williams [EMAIL PROTECTED] wrote: The following patches replace the stripe-queue patches currently in -mm. I have a little practical problem here: am presently unable to compile anything much due to all the git rejects coming out of git-md-accel.patch.

RE: [-mm PATCH 0/2] 74% decrease in dispatched writes, stripe-queue take3

2007-07-13 Thread Williams, Dan J
-Original Message- From: Andrew Morton [mailto:[EMAIL PROTECTED] The following patches replace the stripe-queue patches currently in -mm. I have a little practical problem here: am presently unable to compile anything much due to all the git rejects coming out of

Re: [-mm PATCH 0/2] 74% decrease in dispatched writes, stripe-queue take3

2007-07-13 Thread Andrew Morton
On Fri, 13 Jul 2007 15:57:26 -0700 Williams, Dan J [EMAIL PROTECTED] wrote: -Original Message- From: Andrew Morton [mailto:[EMAIL PROTECTED] The following patches replace the stripe-queue patches currently in -mm. I have a little practical problem here: am presently unable to

RE: [-mm PATCH 0/2] 74% decrease in dispatched writes, stripe-queue take3

2007-07-13 Thread Williams, Dan J
-Original Message- From: Andrew Morton [mailto:[EMAIL PROTECTED] But your ongoing maintenance activity will continue to be held in those trees, won't it? For now: git://lost.foo-projects.org/~dwillia2/git/iop ioat-md-accel-for-linus is where the latest combined tree is

Re: 3ware 9650 tips

2007-07-13 Thread Michael Tokarev
Joshua Baker-LePain wrote: [] Yep, hardware RAID -- I need the hot swappability (which, AFAIK, is still an issue with md). Just out of curiosity - what do you mean by swappability? For many years we've been using Linux software raid, and we had no problems with swappability of the component drives (in

Re: [-mm PATCH 0/2] 74% decrease in dispatched writes, stripe-queue take3

2007-07-13 Thread Andrew Morton
On Fri, 13 Jul 2007 16:28:30 -0700 Williams, Dan J [EMAIL PROTECTED] wrote: -Original Message- From: Andrew Morton [mailto:[EMAIL PROTECTED] But your ongoing maintenance activity will continue to be held in those trees, won't it? For now:

Re: Raid array is not automatically detected.

2007-07-13 Thread Zivago Lee
On Fri, 2007-07-13 at 15:36 -0500, Bryan Christ wrote: My apologies if this is not the right place to ask this question. Hopefully it is. I created a RAID5 array with: mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 mdadm -D

Re: 3ware 9650 tips

2007-07-13 Thread Andrew Klaassen
--- Justin Piszcz [EMAIL PROTECTED] wrote: To give you an example I get 464MB/s write and 627MB/s with a 10 disk raptor software raid5. Is that with the 9650? Andrew

Re: Raid array is not automatically detected.

2007-07-13 Thread Bryan Christ
I would like for it to be the boot device. I have set up a raid5 mdraid array before and it was automatically accessible as /dev/md0 after every reboot. In this peculiar case, I am having to assemble the array manually before I can access it... mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
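One common way to make assembly automatic at boot is to record the array in mdadm.conf (a sketch; the exact path is /etc/mdadm.conf or /etc/mdadm/mdadm.conf depending on the distribution, and some setups also need the initrd regenerated afterwards):

    mdadm --detail --scan >> /etc/mdadm.conf
    cat /etc/mdadm.conf
    # ARRAY /dev/md0 level=raid5 num-devices=5 UUID=...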