- Message from [EMAIL PROTECTED] -
Date: Mon, 25 Feb 2008 00:10:07 +
From: Peter Grandi [EMAIL PROTECTED]
Reply-To: Peter Grandi [EMAIL PROTECTED]
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID linux-raid@vger.kernel.org
On Sat, 23 Feb 2008 21:40:08 +0100, Nagilum
[EMAIL PROTECTED] said:
[ ... ]
* Doing unaligned writes on a 13+1 or 12+2 is catastrophically
slow because of the RMW cycle. This is of course independent
of how one got to something like a 13+1 or a 12+2.
nagilum> Changing a single byte in a
[ ... ]
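(A rough worked example of that RMW cost, assuming a 13+1 raid5 with 256 KiB
chunks; the numbers are purely illustrative: a full stripe of data is
13 x 256 KiB = 3.25 MiB. Any write smaller than that cannot simply be written;
the array must either read the old data and old parity and write both back,
or read every other chunk of the stripe to recompute parity, so one small
logical write turns into several physical reads and writes.)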
* Suppose you have a 2+1 array which is full. Now you add a
disk and that means that almost all free space is on a single
disk. The MD subsystem has two options as to where to add
that lump of space, consider why neither is very pleasant.
No, only one, at the end of the md device
This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may
be some performance drop. Best to make sure that those chunks
are aligned together.
Interesting. I'm seeing a 20% performance drop too, with default
RAID and LVM chunk
Janek Kozicki schrieb:
hold on. This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may be some
performance drop. Best to make sure that those chunks are aligned together.
Interesting. I'm seeing a 20% performance drop too, with
On Feb 19, 2008 1:41 PM, Oliver Martin
[EMAIL PROTECTED] wrote:
Janek Kozicki schrieb:
hold on. This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may be some
performance drop. Best to make sure that those chunks are aligned
On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote:
On Feb 19, 2008 1:41 PM, Oliver Martin
[EMAIL PROTECTED] wrote:
Janek Kozicki schrieb:
$ hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 148 MB in 3.01 seconds = 49.13 MB/sec
$ hdparm -t /dev/dm-0
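To act on the alignment suggestion above, one possible sequence looks roughly
like this; the device names, the 64 KiB chunk and the 256 KiB stripe width are
only examples, and --dataalignment needs a reasonably recent LVM2:
$ mdadm --detail /dev/md0 | grep 'Chunk Size'    # note the chunk size and number of data disks
$ pvcreate --dataalignment 256k /dev/md0         # start the PV data area on a stripe boundary
$ vgcreate vg0 /dev/md0
$ pvs -o +pe_start /dev/md0                      # check the data offset is a multiple of the stripe width
On older LVM2 the pe_start offset can instead be nudged with pvcreate's
--metadatasize option.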
- Message from [EMAIL PROTECTED] -
Date: Mon, 18 Feb 2008 19:05:02 +
From: Peter Grandi [EMAIL PROTECTED]
Reply-To: Peter Grandi [EMAIL PROTECTED]
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID linux-raid@vger.kernel.org
On Sun, 17 Feb 2008 07:45:26 -0700
On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
huh? Make a benchmark. Do you really think that anyone would be using
it if there was
On 17:40, Mark Hahn wrote:
Question to other people here - what is the maximum partition size
that ext3 can handle, am I correct that it is 4 TB?
8 TB. People who want to push this are probably using ext4 already.
ext3 has supported up to 16T for quite some time. It works fine for me:
[EMAIL
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)
On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
8 TB. People who want to push this are probably using ext4 already.
ext3 has supported up to 16T for quite some time. It works fine for me:
thanks. 16 TB makes sense (2^32 * 4k blocks).
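(For the record, the arithmetic: ext3 block numbers are 32-bit, so with 4 KiB
blocks the limit is 2^32 * 4 KiB = 2^44 bytes = 16 TiB; with 1 KiB blocks the
same 32-bit limit gives only 4 TiB, which is presumably where the 4 TB figure
came from.)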
tablespaces,
and there are contrary opinions as to that too). In your stated
applications it is hard to see why you'd want to split your
arrays into very many block devices or why you'd want to resize
them.
beolach> And is a 14 drive RAID6 going to already have enough
beolach> overhead that the additional
that is
syntactically valid is not necessarily the same thing as that
which is wise.
beolach> But when I add the 5th or 6th drive, I'd like to switch
beolach> from RAID5 to RAID6 for the extra redundancy.
Again, what may be possible is not necessarily what may be wise.
In particular it seems difficult
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it will be possible for you to have several raid5 and raid6:
e.g. 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have 14 HDDs and five
I tried to create a raid6 with one missing member, but it fails.
It works fine to create a raid6 with two missing members. Is it supposed
to be like that ?
mdadm -C /dev/md0 -n5 -l6 -c256 /dev/sd[bcde]1 missing
raid5: failed to run raid set md0
mdadm: RUN_ARRAY failed: Input/output error
mdadm
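A sketch of the usual workaround while single-missing creation is broken,
reusing the device names from the example above purely for illustration:
create the array with two missing members, which does work, then add one of
the real disks so md rebuilds it and only one slot remains missing:
mdadm -C /dev/md0 -n5 -l6 -c256 /dev/sd[bcd]1 missing missing
mdadm /dev/md0 -a /dev/sde1
The added disk has to go through a full rebuild, but the end state is the
same single-degraded raid6.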
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)
Or would I be better off starting w/ 4 drives in RAID6?
oh, right - Sevrin Robstad has a good idea to solve your problem -
create raid6 with one missing member. And add this member, when you
have it, next year
On Sun, 17 Feb 2008 14:31:22 +0100
Janek Kozicki [EMAIL PROTECTED] wrote:
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it will be possible for you to have several raid5 and
raid6: e.g. 5 HDDs
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it will be possible for you to have several raid5 and raid6:
e.g. 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have 14 HDDs and five of them being extra - for safety/redundancy
purposes.
that's
Mark Hahn said: (by the date of Sun, 17 Feb 2008 17:40:12 -0500 (EST))
I'm also interested in hearing people's opinions about LVM / EVMS.
With LVM it will be possible for you to have several raid5 and raid6:
e.g. 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have
On Sunday February 17, [EMAIL PROTECTED] wrote:
On Sun, 17 Feb 2008 14:31:22 +0100
Janek Kozicki [EMAIL PROTECTED] wrote:
oh, right - Sevrin Robstad has a good idea to solve your problem -
create raid6 with one missing member. And add this member, when you
have it, next year
On Saturday February 16, [EMAIL PROTECTED] wrote:
found was a few months old. Is it likely that RAID5 to RAID6
reshaping will be implemented in the next 12 to 18 months (my rough
Certainly possible.
I won't say it is likely until it is actually done. And by then it
will be definite :-)
i.e
On Sunday February 17, [EMAIL PROTECTED] wrote:
I tried to create a raid6 with one missing member, but it fails.
It works fine to create a raid6 with two missing members. Is it supposed
to be like that ?
No, it isn't supposed to be like that, but currently it is.
The easiest approach
I
should put in 1 array expect to be safe from data loss)?
beolach> But when I add the 5th or 6th drive, I'd like to switch
beolach> from RAID5 to RAID6 for the extra redundancy.
Again, what may be possible is not necessarily what may be wise.
In particular it seems difficult to discern
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
huh? Make a benchmark. Do you really think that anyone would be using
it if there was any penalty bigger than 1-2% ? (random access, r/w).
I have no idea
the 5th or 6th drive, I'd like to switch from RAID5 to
RAID6 for the extra redundancy. As I've been researching RAID
options, I've seen that RAID5 to RAID6 migration is a planned feature,
but AFAIK it isn't implemented yet, and the most recent mention I
found was a few months old. Is it likely
Yuri Tikhonov wrote:
This patch implements support for the asynchronous computation of RAID-6
syndromes.
It provides an API to compute RAID-6 syndromes asynchronously in a format
conforming to async_tx interfaces. The async_pxor and async_pqxor_zero_sum
functions are very similar to async_xor
:
Version : 00.90.03
Creation Time : Sun Dec 3 20:30:54 2006
Raid Level : raid6
Array Size : 39069696 (37.26 GiB 40.01 GB)
Used Dev Size : 9767424 (9.31 GiB 10.00 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 2
Persistence : Superblock is persistent
Update
On Monday December 17, [EMAIL PROTECTED] wrote:
My system has crashed a couple of times, each time the two drives have
dropped off of the RAID.
Previously I simply did the following, which would take all night:
mdadm -a --re-add /dev/md2 /dev/sde3
mdadm -a --re-add /dev/md2 /dev/sdf3
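As a general aside (not from this thread): with a write-intent bitmap on the
array, a --re-add after an unclean drop only resyncs the blocks that changed
while the member was out, so an all-night resync usually shrinks to minutes.
A hedged sketch, assuming an mdadm/kernel recent enough to add a bitmap to a
running array:
mdadm --grow /dev/md2 --bitmap=internal
mdadm /dev/md2 --re-add /dev/sde3
mdadm /dev/md2 --re-add /dev/sdf3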
Dear Neil,
this thread has died out, but I'd prefer not to let it end without any
kind of result being reached. Therefore, I'm kindly asking you to draw
a conclusion from the arguments being exchanged:
Concerning the implementation of a 'repair' that can actually recover
data in some cases
On Wed, Dec 05, 2007 at 03:31:14PM -0500, Bill Davidsen wrote:
BTW: if this can be done in a user program, mdadm, rather than by code in
the kernel, that might well make everyone happy. Okay, realistically less
unhappy.
I start to like the idea. Of course you can't repair a running array
On 15:31, Bill Davidsen wrote:
Thiemo posted metacode which appears correct to me,
It assumes that _exactly_ one disk has bad data which is hard to verify
in practice. But yes, it's probably the best one can do if both P and
Q happen to be incorrect. IMHO mdadm shouldn't do this automatically
From: H. Peter Anvin [EMAIL PROTECTED]
Make both mktables.c and its output CodingStyle compliant. Update the
copyright notice.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/mktables.c | 43
From: H. Peter Anvin [EMAIL PROTECTED]
Date: Fri, 26 Oct 2007 11:22:42 -0700
Clean up the coding style in raid6test/test.c. Break it apart into
subfunctions to make the code more readable.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat
Peter Grandi wrote:
[ ... on RAID1, ... RAID6 error recovery ... ]
tn> The use case for the proposed 'repair' would be occasional,
tn> low-frequency corruption, for which many sources can be
tn> imagined:
tn> Any piece of hardware has a certain failure rate, which may
tn> depend on things like age
[EMAIL PROTECTED] (Peter Grandi) writes:
ms> I just want to give another suggestion. It may or may not be
ms> possible to repair inconsistent arrays but in either way some
ms> code there MUST at least warn the administrator that
ms> something (may) went wrong.
tn> Agreed.
That sounds instead
On Mon, Dec 03, 2007 at 09:36:32PM +0100, Janek Kozicki wrote:
Thiemo Nagel said: (by the date of Mon, 03 Dec 2007 20:59:21 +0100)
Dear Michael,
Michael Schmitt wrote:
Hi folks,
Probably erroneously, you have sent this mail only to me, not to the list...
I have a similar
The raid_run_ops routine uses the asynchronous offload api and
the stripe_operations member of a stripe_head to carry out xor+pqxor+copy
operations asynchronously, outside the lock.
The operations performed by RAID-6 are the same as in the RAID-5 case
except for no support of STRIPE_OP_PREXOR
/xor.h
+#include <linux/async_tx.h>
+
+#include "../drivers/md/raid6.h"
+
+/**
+ * The following static variables are used in cases of synchronous
+ * zero sum to save the values to check.
+ */
+static spinlock_t spare_lock;
+struct page *spare_pages[2];
+
+/**
+ * do_async_pqxor - asynchronously calculate
, the RAID-6 recovery API consists of wrappers which organize the
calculation algorithms using async_pqxor().
Please refer to the white paper "The mathematics of RAID-6" written by
H. Peter Anvin, available at www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf,
for the theoretical basis of the algorithms
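For reference, the two syndromes defined in that paper (and computed by these
routines), for data blocks D_0 ... D_{n-1} over GF(2^8):
  P = D_0 + D_1 + ... + D_{n-1}                    (+ is byte-wise XOR)
  Q = g^0*D_0 + g^1*D_1 + ... + g^{n-1}*D_{n-1}    (g = {02}, multiplication in GF(2^8))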
)
+ !test_bit(R5_Wantcompute, &dev->flags)) {
if (test_bit(R5_Insync, &dev->flags)) rcw++;
else {
pr_debug("raid6: must_compute:
@@ -3100,18 +3176,19 @@ static void handle_issuing_new_write_requests6(raid5_conf_t *conf
Some clean-up of the replaced or already unnecessary functions.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9b6336f..1d45887 100644
--- a/drivers/md/raid5.c
+++
We utilize get_stripe_work() to find new work to run. This function is shared
with RAID-5. The only RAID-5 specific operation there is PREXOR. Then we call
raid_run_ops() to process the requests pending.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL
Support for expanding RAID-6 stripes asynchronously.
By setting STRIPE_OP_POSTXOR without setting STRIPE_OP_BIODRAIN the
completion path in handle stripe can differentiate expand operations
from normal write operations.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail
I/O submission requests were already handled outside of the stripe lock in
handle_stripe. Now that handle_stripe is only tasked with finding work,
this logic belongs in raid5_run_ops
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
diff
The RAID-6 driver shares handle_write_operations() with RAID-5. For this
purpose, some modifications to handle_write_operations5() have been made, and
the function itself was renamed as well. handle_write_operations() is
triggered either from handle_stripe6() or handle_stripe5().
This
completes.
*/
- struct page *spare_page; /* Used when checking P/Q in raid6 */
-
/*
* Free queue pool
*/
--
Yuri Tikhonov, Senior Software Engineer
Emcraft Systems, www.emcraft.com
When a read bio is attached to the stripe and the corresponding block is
marked as R5_UPTODATE, then a biofill operation is scheduled to copy
the data from the stripe cache to the bio buffer.
Signed-off-by: Yuri Tikhonov [EMAIL PROTECTED]
Signed-off-by: Mikhail Cherkashin [EMAIL PROTECTED]
--
[ ... on RAID1, ... RAID6 error recovery ... ]
tn> The use case for the proposed 'repair' would be occasional,
tn> low-frequency corruption, for which many sources can be
tn> imagined:
tn> Any piece of hardware has a certain failure rate, which may
tn> depend on things like age, temperature
On Tue, 4 Dec 2007, Peter Grandi wrote:
ms> and linux-raid / mdadm did not complain or do anything.
The mystic version of Linux-RAID is in psi-test right now :-).
To me RAID does not seem the right abstraction level to deal with
this problem; and perhaps the file system level is not either,
Dear Michael,
Michael Schmitt wrote:
Hi folks,
Probably erroneously, you have sent this mail only to me, not to the list...
I just want to give another suggestion. It may or may not be possible
to repair inconsistent arrays but in either way some code there MUST
at least warn the
Thiemo Nagel said: (by the date of Mon, 03 Dec 2007 20:59:21 +0100)
Dear Michael,
Michael Schmitt wrote:
Hi folks,
Probably erroneously, you have sent this mail only to me, not to the list...
I have a similar problem all the time on this list. it would be
really nice to reconfigure
Hi all,
I am having trouble with creating a RAID 6 md device on a home-grown
Linux 2.6.20.11 SMP 64-bit build.
I first create the RAID6 without problems, and see the following
successful dump in /var/log/messages. If I check /proc/mdstat, the
RAID6 is doing the initial syncing as expected
Dear Neil,
The point that I'm trying to make is, that there does exist a specific
case, in which recovery is possible, and that implementing recovery for
that case will not hurt in any way.
Assuming that it is true (maybe hpa got it wrong), what specific
conditions would lead to one drive having
Dear Neil and Eyal,
Eyal Lebedinsky wrote:
Neil Brown wrote:
It would seem that either you or Peter Anvin is mistaken.
On page 9 of
http://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
at the end of section 4 it says:
Finally, as a word of caution it should be noted that RAID
Neil Brown wrote:
On Thursday November 22, [EMAIL PROTECTED] wrote:
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks
Neil Brown wrote:
On Thursday November 22, [EMAIL PROTECTED] wrote:
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks is
wrong
On Thursday November 22, [EMAIL PROTECTED] wrote:
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks is
wrong
On Tuesday November 27, [EMAIL PROTECTED] wrote:
Thiemo Nagel wrote:
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks
Thiemo Nagel wrote:
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks is
wrong, it is *not* possible to deduce which block or blocks
Richard Michael [EMAIL PROTECTED] writes:
I'm considering my first SAS purchase. I'm planning to build a software
RAID6 array using a SAS JBOD attached to a linux box. I haven't decided
on any of the hardware specifics.
I'm leaning toward this PCI express LSI 3801e controller:
http
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
While it is possible to use the RAID6 P+Q information to deduce which
data block is wrong if it is known that either 0 or 1 datablocks is
wrong, it is *not* possible to deduce which block or blocks are wrong
Dear Neil,
I have been looking a bit at the check/repair functionality in the
raid6 personality.
It seems that if an inconsistent stripe is found during repair, md
does not try to determine which block is corrupt (using e.g. the
method in section 4 of HPA's raid6 paper), but just recomputes
.
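For readers following along: the interface under discussion is md's
sync_action file. A hedged sketch of check vs. repair from userspace, with
md0 as an example array name:
# echo check  > /sys/block/md0/md/sync_action    # read-only scrub, only counts inconsistencies
# cat /sys/block/md0/md/mismatch_cnt             # non-zero means parity did not match the data
# echo repair > /sys/block/md0/md/sync_action    # recomputes and rewrites P/Q from the data blocks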
- Original Message -
From: Richard Michael [EMAIL PROTECTED]
Sent: Tue, 11/20/2007 10:08am
To: linux-raid@vger.kernel.org
Subject: Offtopic: hardware advice for SAS RAID6
On the heels of last week's post asking about hardware recommendations,
I'd like to ask a few questions too. :)
I'm considering my
On Wednesday November 21, [EMAIL PROTECTED] wrote:
Dear Neil,
I have been looking a bit at the check/repair functionality in the
raid6 personality.
It seems that if an inconsistent stripe is found during repair, md
does not try to determine which block is corrupt (using e.g
what would be causing your problems. The resync
thread makes a point of calling cond_resched() periodically so that it
will let other processes run even if it constantly has work to do.
If you have nothing that could write to the RAID6 arrays, then I
cannot see how the resync could affect the rest
On Tuesday 20 November 2007 06:55:52 Mark Hahn wrote:
I know this is a high end configuration, but no latency critical
component is at any limit, 4 CPUs are idling, PCI-X busses are
far away from being saturated.
yes, but what about memory? I speculate that this is an Intel-based
system
On the heels of last week's post asking about hardware recommendations,
I'd like to ask a few questions too. :)
I'm considering my first SAS purchase. I'm planning to build a software
RAID6 array using a SAS JBOD attached to a linux box. I haven't decided
on any of the hardware specifics.
I'm
yes, but what about memory? I speculate that this is an Intel-based
system that is relatively memory-starved.
Yes, it's an Intel system, since it's still hard to get AMD quadcores.
Anyway, I don't believe the system's memory bandwidth is only
6 x 280 MB/s = 1680 MB/s (280 MB/s is the
On Tuesday 20 November 2007 18:16:43 Mark Hahn wrote:
yes, but what about memory? I speculate that this is an Intel-based
system that is relatively memory-starved.
Yes, it's an Intel system, since it's still hard to get AMD
quadcores. Anyway, I don't believe the system's memory
I know this is a high end configuration, but no latency critical component
is at any limit, 4 CPUs are idling, PCI-X busses are
far away from being saturated.
yes, but what about memory? I speculate that this is an Intel-based
system that is relatively memory-starved.
Neil Brown [EMAIL PROTECTED] writes:
On Monday November 12, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
However there is value in regularly updating the bitmap, so add code
to periodically pause while all pending sync requests complete, then
update the bitmap. Doing this only every few
Hi,
on RAID initialization, or later during a re-sync, our systems become
unresponsive. Ping still works, but ssh won't succeed until the re-sync has
finished; on a serial or local connection one can still type but, as with ssh,
whatever you request from the system won't be done until the raid-sync is
there are 8 CPUs with 4 of them
in idle state (almost 4 used for sync processes).
another interesting point, which is how many disks in that RAID6
array? What is the system bus and host adapter to the disk bus?
6 x Infortrend A16U-G2430 (hardware raid 6) on 3 dual-channel LSI22320 HBAs
in U320 mode
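A possible mitigation worth noting as an aside (it only helps if resync I/O,
rather than memory bandwidth, is what starves the rest of the system): the md
resync rate can be throttled through the raid sysctls, for example:
# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# echo 10000 > /proc/sys/dev/raid/speed_limit_max    # cap resync at roughly 10 MB/s (value in KB/s, illustrative)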
On Thursday November 15, [EMAIL PROTECTED] wrote:
Hi,
I have been looking a bit at the check/repair functionality in the
raid6 personality.
It seems that if an inconsistent stripe is found during repair, md
does not try to determine which block is corrupt (using e.g. the
method
Neil Brown wrote:
On Monday November 12, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
However there is value in regularly updating the bitmap, so add code
to periodically pause while all pending sync requests complete, then
update the bitmap. Doing this only every few seconds (the same
Neil Brown wrote:
On Thursday November 8, [EMAIL PROTECTED] wrote:
Hi,
I have created a new raid6:
md0 : active raid6 sdb1[0] sdl1[5] sdj1[4] sdh1[3] sdf1[2] sdd1[1]
6834868224 blocks level 6, 512k chunk, algorithm 2 [6/6] [UU]
[] resync = 21.5
On Monday November 12, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
However there is value in regularly updating the bitmap, so add code
to periodically pause while all pending sync requests complete, then
update the bitmap. Doing this only every few seconds (the same as the
bitmap
On Thursday November 8, [EMAIL PROTECTED] wrote:
Hi,
I have created a new raid6:
md0 : active raid6 sdb1[0] sdl1[5] sdj1[4] sdh1[3] sdf1[2] sdd1[1]
6834868224 blocks level 6, 512k chunk, algorithm 2 [6/6] [UU]
[] resync = 21.5% (368216964/1708717056
Hi,
I have created a new raid6:
md0 : active raid6 sdb1[0] sdl1[5] sdj1[4] sdh1[3] sdf1[2] sdd1[1]
6834868224 blocks level 6, 512k chunk, algorithm 2 [6/6] [UU]
[] resync = 21.5% (368216964/1708717056)
finish=448.5min speed=49808K/sec
bitmap: 204/204
;
- uint8_t exptbl[256], invtbl[256];
+ int i, j, k;
+ uint8_t v;
+ uint8_t exptbl[256], invtbl[256];
- printf("#include \"raid6.h\"\n");
+ printf("#include \"raid6.h\"\n");
- /* Compute multiplication table */
- printf("\nconst u8 __attribute__((aligned(256)))\n"
- "raid6_gfmul
Clean up the coding style in raid6test/test.c. Break it apart into
subfunctions to make the code more readable.
Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
---
drivers/md/raid6test/test.c | 117 +--
1 files changed, 69 insertions(+), 48 deletions(-)
I have a 10 disk raid6 with internal write intent bitmap here
(softraid, build and managed via mdadm). For whatever reason one of
the 3 promise SATA 300 TX controllers went offline and took 4 of
these 10 disks with it. After reboot the array was assembled with 8
out of 10 disks - sda2
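The usual recovery path in this situation, offered only as a hedged sketch
(event counters should be compared first, and the device names below are
illustrative): stop the array and force-assemble it from all ten members,
since the four dropped disks still hold data that was consistent at the
moment the controller vanished:
# mdadm --stop /dev/md0
# mdadm --examine /dev/sd[a-j]2 | grep Events    # members should have close event counts
# mdadm --assemble --force /dev/md0 /dev/sd[a-j]2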
On 12.10.2007 at 17:51, Nagilum wrote:
Then you can mark them as bad and linux will sync to a spare.
Because it is already running without redundancy - two disks marked
as failed - it will simply go offline.
As for your sdc, I'd test it outside of the raid (dd if=/dev/sdc
that drive (except the failing
sectors of course).
I hope that helps.
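A minimal sketch of the kind of out-of-array read test being suggested
(reading to /dev/null is non-destructive; the smartctl step assumes
smartmontools is installed):
# dd if=/dev/sdc of=/dev/null bs=1M conv=noerror
# smartctl -a /dev/sdc    # then check reallocated / pending sector counts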
- Message from [EMAIL PROTECTED] -
Date: Fri, 12 Oct 2007 16:42:07 +0200
From: Ralf Müller [EMAIL PROTECTED]
Reply-To: Ralf Müller [EMAIL PROTECTED]
Subject: RAID6 recover problem
To: linux-raid
On Wednesday September 12, [EMAIL PROTECTED] wrote:
Problem:
The mdadm --grow command fails when trying to add disk to a RAID6.
..
So far I have replicated this problem on RHEL5 and Ubuntu 7.04
running the latest official updates and patches. I have even tried it
with the most
Neil,
On RHEL5 the kernel is 2.6.18-8.1.8. On Ubuntu 7.04 the kernel is
2.6.20-16. Someone on the Arstechnica forums wrote they see the same
thing in Debian etch running kernel 2.6.18. Below is a messages log
from the RHEL5 system. I have only included the section for creating
the RAID6
have only included the section for creating
the RAID6, adding a spare and trying to grow it. There is a one line
error when I do the mdadm --grow command. It is "md: couldn't
update array info. -22".
reshaping raid6 arrays was not supported until 2.6.21. So you'll need
a newer kernel
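On a kernel that does support it (2.6.21 or later, per the above), the
sequence the poster attempted would look roughly like this; a hedged sketch
with illustrative device names:
# mdadm /dev/md0 --add /dev/sdg1            # the new disk first becomes a spare
# mdadm --grow /dev/md0 --raid-devices=7    # then reshape the raid6 onto it
# cat /proc/mdstat                          # watch the reshape progress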
Problem:
The mdadm --grow command fails when trying to add disk to a RAID6.
The man page says it can do this.
GROW MODE
The GROW mode is used for changing the size or shape of an
active array. For this to work,
the kernel must support the necessary change. Various types
Any chance of being able to grow a degraded raid6 array?
(missing one drive) I tried to add a spare without
it immediately grabbing it and resyncing but couldn't.
Why? I made the array raid6 because eventually it'll have
8-10 drives. Currently you can't change from raid5 to 6 and
preserve your
The recent changes to raid5 to allow offload of parity calculation etc.
introduced some bugs in the code for growing (i.e. adding a disk to)
raid5 and raid6 arrays. This fixes them
Acked-by: Dan Williams [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
---
This is against 2.6.23-rc4
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Neil Brown
} Sent: Monday, June 04, 2007 2:59 AM
} To: Guy Watkins
} Cc: 'linux-raid'
} Subject: Re: RAID6 clean?
}
} On Monday June 4, [EMAIL PROTECTED] wrote:
} I have a RAID6 array. 1
apologise if it's too much,
or wrong.
So, here's the situation, as it stands:
# cat /proc/mdstat # BEFORE array restart
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid6 sdk1[7](F) sda1[8](F) sdf1[5] sde1[4] sdd1[3] sdc1[2]
sdb1[1]
1953053696
Neil Brown wrote:
Reshape won't restart while the array is auto-read-only.
You can start it simply by mounting the filesystem, or with
mdadm /dev/md0 --readwrite
Neil,
Thank you very much for your prompt response. I would have never figured
it out myself, with my incorrect assumption
On Monday June 4, [EMAIL PROTECTED] wrote:
I have a RAID6 array. 1 drive is bad and now un-plugged because the system
hangs waiting on the disk.
The system won't boot because / is not clean. I booted a rescue CD and
managed to start my arrays using --force. I tried to stop and start
On Friday May 4, [EMAIL PROTECTED] wrote:
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Guy Watkins
} Sent: Saturday, April 28, 2007 8:52 PM
} To: linux-raid@vger.kernel.org
} Subject: RAID6 question
}
} I read in processor.com
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Guy Watkins
} Sent: Saturday, April 28, 2007 8:52 PM
} To: linux-raid@vger.kernel.org
} Subject: RAID6 question
}
} I read in processor.com that Adaptec has a RAID 6/60 that is patented
I read in processor.com that Adaptec has a RAID 6/60 that is patented.
Does Linux RAID6 have a conflict?
Thanks,
Guy
Adaptec also has announced a new family of Unified Serial (meaning 3Gbps
SAS/SATA) RAID controllers for PCI Express. Five models include cards with
four, eight, 12, and 16
On Thursday April 26, [EMAIL PROTECTED] wrote:
have a system with 12 SATA disks attached via SAS. When copying into the
array during re-sync I get filesystem errors and corruption for raid6 but not
for raid5. This problem is repeatable. I actually have 2 separate 12 disk
arrays and get