Guy Watkins wrote:
} [EMAIL PROTECTED] On Behalf Of Jon Collette
} I wasn't thinking and did an mdadm --create to my existing raid5 instead
} of --assemble. The syncing process ran and now it's not mountable. Is
} there any way to recover from this?
Maybe. Not really sure. But don't do anything
David Greaves wrote:
For a simple 4 device array there are 24 permutations - doable by
hand, if you have 5 devices then it's 120, 6 is 720 - getting tricky ;)
Oh, wait, for 4 devices there are 24 permutations - and you need to do it 4
times, substituting 'missing' for each device - so 96
Guy Watkins wrote:
} -----Original Message-----
} Sent: Thursday, July 12, 2007 1:35 PM
} Cc: Tejun Heo; Stefan Bader; Phillip Susi; device-mapper
}
To run it manually:
echo check > /sys/block/md0/md/sync_action
then you can check the status with:
cat /proc/mdstat
Or to continually watch it, if you want (kind of boring though :) )
watch cat /proc/mdstat
This will refresh every 2 seconds.
In my original email I suggested using a crontab so you
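For reference, a crontab entry along those lines might look like this (just
a sketch - it assumes the array is md0 and that root's crontab is used; the
schedule is only an example):
0 1 1 * * echo check > /sys/block/md0/md/sync_action
That would kick off a parity check at 01:00 on the first day of each month.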
Linus, please pull from
git://lost.foo-projects.org/~dwillia2/git/iop ioat-md-accel-for-linus
to receive:
1/ I/OAT performance tweaks and simple fixups. These patches have been
in -mm for a few kernel releases as git-ioat.patch
2/ RAID5 acceleration and the async_tx api. These
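For reference, pulling that branch into an existing mainline checkout would
be the usual git pull, shown here only as an illustration:
git pull git://lost.foo-projects.org/~dwillia2/git/iop ioat-md-accel-for-linus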
Michael wrote:
RESPONSE
I had everything working, but it is evident that when I installed SuSe
the first time, check and repair were not included in the package :( I
did not use the I used , as was incorrectly stated in
much of the documentation I set up from.
Doesn't matter, either will work and
I can't speak for SuSe issues but I believe there is some confusion on the
packages and command syntax.
So hang on, we are going for a ride, step by step...
Check and repair are not packages per se.
You should have a package called echo.
If you run this:
echo 1
you should get a 1 echoed back.
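In the same way, check and repair are just keywords you echo into the md
sysfs interface (a quick sketch, assuming your array is md0):
echo check > /sys/block/md0/md/sync_action   # read everything, count parity mismatches
echo repair > /sys/block/md0/md/sync_action  # rewrite parity where it doesn't match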
The mdadm --create with missing instead of a drive is a good idea. Do
you actually say missing or just leave out a drive? However, doesn't it
do a sync every time you create? So wouldn't you run the risk of
corrupting another drive each time? Or does it not sync because of
saying 'missing'?
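For what it's worth, you really do spell out the word missing in place of a
device, and a degraded create like this doesn't trigger an initial resync
because there is no redundant member to rebuild onto (a sketch only - the
device names and layout are just an example):
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      missing /dev/sdb1 /dev/sdc1 /dev/sdd1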
Wouldn't RAID 6 be slower than RAID 5 because of the extra fault tolerance?
http://www.enterprisenetworksandservers.com/monthly/art.php?1754 -
a 20% drop according to this article
His 500GB WD drives are 7200 RPM compared to the Raptors' 10K. So his
numbers will be slower.
Justin, what file
On Fri, 13 Jul 2007, Joshua Baker-LePain wrote:
My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD
drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS 5
x86_64. It's all running on a couple of Xeon 5130s on a Supermicro X7DBE
motherboard w/ 4GB of
Hi List,
I am very new to raid, and I am having a problem.
I made a raid10 array, but I only used 2 disks. Since then, one failed,
and my system crashes with a kernel panic.
I copied all the data, and I would like to start over. How can I start
from scratch? I need to get rid of my /dev/md0,
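The usual way to tear an md array down is to stop it and wipe the md
superblocks from its members (a sketch - substitute your actual member
partitions for /dev/sda1 and /dev/sdb1):
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
After that the partitions can be reused or repartitioned from scratch.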
The raid5 stripe cache object, struct stripe_head, serves two purposes:
1/ frontend: queuing incoming requests
2/ backend: transitioning requests through the cache state machine
to the backing devices
The problem with this model is that queuing decisions are directly
On Fri, 13 Jul 2007 15:35:42 -0700
Dan Williams [EMAIL PROTECTED] wrote:
The following patches replace the stripe-queue patches currently in -mm.
I have a little practical problem here: am presently unable to compile
anything much due to all the git rejects coming out of git-md-accel.patch.
Andrew Morton wrote:
But your ongoing maintenance activity will continue to be held in those
trees, won't it?
For now:
git://lost.foo-projects.org/~dwillia2/git/iop ioat-md-accel-for-linus
is where the latest combined tree is
Joshua Baker-LePain wrote:
[...]
Yep, hardware RAID -- I need the hot swappability (which, AFAIK, is
still an issue with md).
Just out of curiosity - what do you mean by swappability?
For many years we've been using Linux software raid, and we had no problems
with swappability of the component drives (in
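For what it's worth, replacing a member under md normally amounts to the
following (a sketch - it assumes the failed member is /dev/sdb1, the array
is /dev/md0, and the enclosure/controller supports physical hot-swap):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# swap the physical drive, recreate the partition, then:
mdadm /dev/md0 --add /dev/sdb1
The array then rebuilds onto the new member automatically.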
On Fri, 2007-07-13 at 15:36 -0500, Bryan Christ wrote:
My apologies if this is not the right place to ask this question.
Hopefully it is.
I created a RAID5 array with:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1
/dev/sdc1 /dev/sdd1 /dev/sde1
mdadm -D
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example, I get 464MB/s write and 627MB/s with a 10 disk
raptor software raid5.
Is that with the 9650?
Andrew
I would like for it to be the boot device. I have set up a raid5 mdraid
array before and it was automatically accessible as /dev/md0 after every
reboot. In this peculiar case, I am having to assemble the array
manually before I can access it...
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
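One common fix (a sketch, assuming your distribution assembles arrays from
/etc/mdadm.conf at boot) is to record the array there so it comes up as
/dev/md0 automatically:
mdadm --detail --scan >> /etc/mdadm.conf
Since it is wanted as the boot device, the initrd may also need regenerating
so the config is available before the root filesystem is mounted.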