Mattias Wadenstein wrote:
On Mon, 7 Jan 2008, Thiemo Nagel wrote:
What you call "pathologic" cases when it comes to real-world data are
very common. It is not at all unusual to find sectors filled with
only a constant (usually zero, but not always), in which case your
**512 b
One option would be to check how different the bytes really are on a
case-by-case basis, correct the exponent accordingly, and only perform
the recovery when the corrected probability of introducing an error is
sufficiently low.
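As a rough sketch of what that could look like (Python, purely
illustrative; the syndrome layout, names and threshold are my
assumptions, not md code):

    # Illustrative only: derive a per-sector exponent from how different
    # the bytes actually are, and only accept the recovery when the
    # probability of misidentifying the bad disk is small enough.
    # 'syndromes' is assumed to be a list of (p_err, q_err) byte pairs,
    # one per byte position of the 512-byte sector.
    def recovery_is_safe(syndromes, n_disks, threshold=1e-15):
        # Constant-filled sectors give few distinct non-zero syndromes,
        # so they contribute a much smaller exponent than 512.
        informative = len({(p, q) for p, q in syndromes if p or q})
        p_misdetect = ((n_disks - 1) / 256.0) ** informative
        return p_misdetect < threshold

Whether counting distinct non-zero syndromes is the right correction is
of course exactly the open question.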
Kind regards,
Thiemo Nagel
> Thiemo Nagel wrote:
>>>> For errors occurring on the level of hard disk blocks (signature: most
>>>> bytes of the block have D errors, all with the same z), the probability
>>>> for multi-disk corruption to go undetected is ((n-1)/256)**512.
"check") also?
And would it help to use different Galois field generators at different
positions in a sector instead of using a uniform generator?
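For reference, a small sketch of the arithmetic involved (Python,
illustrative only: md uses the 0x11d field with the single generator
{02}; the per-position generator hook below is hypothetical):

    # GF(2^8) multiplication with the polynomial 0x11d used by the
    # kernel's RAID-6 code.
    def gf_mul(a, b, poly=0x11d):
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= poly
        return r

    # Toy Q computation where the generator may vary with the byte offset.
    # data_blocks: one equal-length byte string per data disk.
    # gen_for_offset(i): generator used at byte offset i (always 2 in md).
    def q_syndrome(data_blocks, gen_for_offset=lambda i: 2):
        q = bytearray(len(data_blocks[0]))
        for z, block in enumerate(data_blocks):
            for i, d in enumerate(block):
                m = d
                for _ in range(z):          # multiply d by g**z
                    m = gf_mul(m, gen_for_offset(i))
                q[i] ^= m
        return bytes(q)

With the default hook this reduces to the ordinary Q; whether a
position-dependent generator would actually buy additional detection
power is exactly the question above.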
Kind regards,
Thiemo Nagel
Dear hpa,
H. Peter Anvin wrote:
> I got a private email a while ago from Thiemo Nagel claiming that
> some of the conclusions in my RAID-6 paper were incorrect. This was
> combined with a "proof" which was plain wrong, and could easily be
> disproven using basic entropy acc
>stripe_cache_size (currently raid5 only)
As far as I have understood, it applies to raid6, too.
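It is the same sysfs knob for both levels; a hedged example (the device
name md2 and the value are only placeholders):

    # stripe_cache_size is exposed for raid5 and raid6 arrays alike;
    # the unit is cache entries (one page per member device), not bytes.
    path = "/sys/block/md2/md/stripe_cache_size"
    with open(path) as f:
        print("current:", f.read().strip())
    with open(path, "w") as f:
        f.write("4096\n")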
Kind regards,
Thiemo Nagel
Bill Davidsen wrote:
              16k read             64k write
chunk size    RAID 5    RAID 6     RAID 5    RAID 6
128k          492       497        268       270
256k          615       530        288       270
512k          625       607        230       174
1024k         650       620        170        75
What is your stripe cache size?
I didn't fiddle with it.
              16k read             64k write
chunk size    RAID 5    RAID 6     RAID 5    RAID 6
128k          492       497        268       270
256k          615       530        288       270
512k          625       607        230       174
1024k         650       620        170        75
It strikes me that these numbers are meaningless without knowing if
that i
Performance of the raw device is fair:
# dd if=/dev/md2 of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
Somewhat less through ext3 (created with -E stride=64):
# dd if=largetestfile of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 26.4103
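For what it's worth, stride is usually the chunk size divided by the
filesystem block size, so stride=64 with 4 KiB ext3 blocks would
correspond to a 256 KiB chunk (assumed values, shown only to make the
relation explicit):

    # stride = md chunk size / ext3 block size (both values assumed here)
    chunk_kib = 256
    fs_block_kib = 4
    print(chunk_kib // fs_block_kib)    # -> 64, i.e. mkfs -E stride=64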
Dear Norman,
> I'm not familiar with RocketRaid. Is it handling the RAID for you, or
> are you using MD?
I'm using md. The controller is in a mode that exports all drives
individually.
Kind regards,
Thiemo
to create a filesystem larger than 8TB. The hard maximum is 16TB, so
you will need to create partitions if your drives are larger than
350GB...)
Kind regards,
Thiemo Nagel
data in some cases instead of just recalculating parity:
Do you
a) oppose the case (patches not accepted)
b) don't care (but potentially accept patches)
c) support it
Thank you very much and kind regards,
Thiemo Nagel
bug=405919
Kind regards,
Thiemo Nagel
I think this suggestion was already made earlier in this thread, but I
am not sure and could not find it anymore, so please bear with me if I
am repeating it. I had an issue once where the chipset / mainboard was
broken, so on one raid1 array I have different data w
Dear Neil,
The point that I'm trying to make is that there does exist a specific
case in which recovery is possible, and that implementing recovery for
that case will not hurt in any way.

Assuming that is true (maybe hpa got it wrong), what specific
conditions would lead to one drive having c
Dear Neil and Eyal,
Eyal Lebedinsky wrote:
> Neil Brown wrote:
>> It would seem that either you or Peter Anvin is mistaken.
>>
>> On page 9 of
>> http://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
>> at the end of section 4 it says:
>>
>> Finally, as a word of caution it should be noted
Dear Neil,
thank you very much for your detailed answer.
Neil Brown wrote:
> While it is possible to use the RAID6 P+Q information to deduce which
> data block is wrong if it is known that either 0 or 1 datablocks is
> wrong, it is *not* possible to deduce which block or blocks are wrong
> if it is p
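For what it's worth, the single-bad-data-block case looks roughly like
this (Python, illustrative only; field 0x11d and generator {02} as in
hpa's paper, not the md implementation):

    # Discrete-log table for GF(2^8), polynomial 0x11d, generator {02}.
    GF_LOG = {}
    x = 1
    for i in range(255):
        GF_LOG[x] = i
        x <<= 1                 # multiply by {02} ...
        if x & 0x100:
            x ^= 0x11d          # ... and reduce mod the field polynomial

    def locate_bad_data_disk(p_err, q_err):
        # p_err = stored P xor recomputed P for one byte position,
        # q_err likewise.  Only meaningful if at most one data block
        # is bad; with two or more bad blocks the result is garbage.
        if p_err == 0 or q_err == 0:
            return None
        return (GF_LOG[q_err] - GF_LOG[p_err]) % 255

Which is precisely the catch: nothing in this computation can tell you
whether the "at most one bad block" precondition actually holds.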
When I intentionally
corrupt a sector in the first device of a set of 16, 'repair' copies the
corrupted data to the 15 remaining devices instead of restoring the
correct sector from one of the other fifteen devices to the first.
Thank you for your time.
Kind regards,
Thiemo Nagel
speaking against an improved
implementation of 'repair'?
BTW: I just checked, it's the same for RAID 1: When I intentionally
corrupt a sector in the first device of a set of 16, 'repair' copies the
corrupted data to the 15 remaining devices instead of restoring