On 4/16/07, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
On 4/13/07, Neil Brown [EMAIL PROTECTED] wrote:
On Saturday March 31, [EMAIL PROTECTED] wrote:
4.
I am going to work on this with other configurations, such as raid5 arrays
with more disks, and raid50. I will be happy to hear your
On Thursday April 19, [EMAIL PROTECTED] wrote:
Hello Neil,
I have been doing some thinking, and I feel we should take a different path here.
In my tests I actually accumulate the user's buffers and, when ready, submit
them with an elevator-like algorithm.
The main problem is the number of IOs
On 4/2/07, Dan Williams [EMAIL PROTECTED] wrote:
On 3/30/07, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
Please see below.
On 8/28/06, Neil Brown [EMAIL PROTECTED] wrote:
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added a
Raz Ben-Jehuda(caro) wrote:
Please see below.
On 3/31/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Raz Ben-Jehuda(caro) wrote:
Please see below.
Raz Ben-Jehuda(caro) wrote:
On 3/31/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Raz Ben-Jehuda(caro) wrote:
Please see below.
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added a deadline to every WRITE stripe head when it is created.
In raid5_activate_delayed I check whether the deadline has expired; if it
has not, I set the sh to preread-active mode.
This small fix (plus similar changes in a few other places in the code) reduced the
On Sunday July 2, [EMAIL PROTECTED] wrote:
Neil hello.
I have been looking at the raid5 code, trying to understand why write
performance is so poor.
raid5 write performance is expected to be poor, as you often need to
pre-read data or parity before the write can be issued.
If I am not
Carlos Carvalho wrote:
I think the demand for any solution to the unclean array is indeed low
because of the small probability of a double failure. Those that want
more reliability can use a spare drive that resyncs automatically or
raid6 (or both).
A spare disk would help, but note that
On Saturday November 19, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
The other is to use a filesystem that allows the problem to be avoided
by making sure that the only blocks that can be corrupted are dead
blocks.
This could be done with a copy-on-write filesystem that knows about the
raid5 geometry, and only ever writes to a stripe when no
Neil Brown ([EMAIL PROTECTED]) wrote on 19 November 2005 16:54:
There are two solutions to this silent corruption problem (other than
'ignore it and hope it doesn't bite', which is a fairly widely used
solution, and I haven't seen any bite marks myself).
It happened to me several years ago when
Would it really be that much slower to have a journal of RAID 5 writes?
On Fri, 2005-11-18 at 15:05 +0100, Jure Pečar wrote:
Hi all,
ZFS is currently major news in the storage area. It is very interesting to
read the various details about it on the blogs of various Sun employees. Among the
more
Moreover, and I'm sure Neil will chime in here, isn't the clean/unclean
thing designed to prevent this exact scenario?
The array is marked unclean immediately prior to write, then the write
and parity write happens, then the array is marked clean.
If you crash during the write but before parity
-Original Message-
From: [EMAIL PROTECTED] [mailto:linux-raid-
[EMAIL PROTECTED] On Behalf Of Mike Hardy
Sent: Friday, November 18, 2005 2:24 PM
To: Dan Stromberg
Cc: Jure Pečar; linux-raid@vger.kernel.org
Subject: Re: raid5 write performance
Guy wrote:
It is not just a parity issue. If you have a 4-disk RAID 5, you can't be
sure which disks, if any, have written the stripe. Maybe the parity was updated
but nothing else. Maybe the parity and 2 data disks were, leaving 1 data disk
with old data.
Beyond that, md does write caching. I
On Friday November 18, [EMAIL PROTECTED] wrote:
So, I continue to believe silent corruption is mythical. I'm still open
to a good explanation that it's not, though.
Silent corruption is not mythical, though it is probably talked about
more than it actually happens (but then as it is silent, I
-Original Message-
From: Mike Hardy [mailto:[EMAIL PROTECTED]
Sent: Friday, November 18, 2005 11:57 PM
To: Guy
Cc: 'Dan Stromberg'; 'Jure Pečar'; linux-raid@vger.kernel.org
Subject: Re: raid5 write performance