Cody Yellan wrote:
I had a 4x500GB SATA2 array, md0. I added one 500GB drive and
reshaping began at ~2500K/sec. Changing
/proc/sys/dev/raid/speed_limit_m{in,ax} or
/sys/block/md0/md/sync_speed_m{in,ax} had no effect. I shut down all
unnecessary services and the array is offline (not mounted).
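For reference, the reshape-speed knobs being tuned above look like this (a sketch only; the numeric values are arbitrary examples, in KB/sec per device):

```shell
# System-wide md resync/reshape speed limits (KB/sec per device):
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# Per-array equivalent:
echo 50000 > /sys/block/md0/md/sync_speed_min

# Watch the current reshape rate:
cat /proc/mdstat
```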
I got a private email a while ago from Thiemo Nagel claiming that some
of the conclusions in my RAID-6 paper were incorrect. This was combined
with a "proof" which was plainly wrong, and could easily be disproven
using basic entropy accounting (i.e. how much information is around to
play with).
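To illustrate the kind of accounting meant here, a toy brute-force sketch (GF(16) and the 4-data-disk layout are arbitrary choices to keep enumeration cheap; this is an illustration, not the proof from the paper): a stripe stores n data symbols plus two parity symbols P and Q, so after three erasures the survivors cannot pin the data down uniquely.

```python
# Toy entropy-accounting demo: erase 3 of 4 data symbols in a RAID-6-style
# stripe over GF(16) and count how many data sets remain consistent with
# the survivors.  Two parity equations cannot determine three unknowns.
from itertools import product

def gfmul(a, b):
    """Multiply in GF(16) modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13
    return r

def syndromes(data):
    """P = XOR of data; Q = XOR of g^n * d_n with generator g = 2."""
    p, q, g = 0, 0, 1
    for d in data:
        p ^= d
        q ^= gfmul(g, d)
        g = gfmul(g, 2)
    return p, q

data = (3, 7, 11, 14)            # one stripe: four data symbols
p, q = syndromes(data)

# Erase d0, d1, d2; survivors are (d3, P, Q): 3 unknowns, only 2 equations.
candidates = [cand for cand in product(range(16), repeat=3)
              if syndromes(cand + (data[3],)) == (p, q)]

print(len(candidates))           # 16 consistent candidates: ambiguous
```

The solution space has dimension one over GF(16), so 16 distinct data sets all match the surviving information, which is exactly why no amount of cleverness recovers three erasures from two parities.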
Yuri Tikhonov wrote:
This patch implements support for the asynchronous computation of RAID-6
syndromes.
It provides an API to compute RAID-6 syndromes asynchronously, in a format
conforming to the async_tx interfaces. The async_pxor and async_pqxor_zero_sum
functions are very similar to async_xor.
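The math these routines offload can be sketched as a plain synchronous reference model (illustrative only; this is the standard GF(2^8) P/Q construction, not the kernel implementation, and the function names here are ad hoc):

```python
# Synchronous reference for RAID-6 P/Q syndrome generation over GF(2^8).

def gfmul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

def gfpow2(n):
    """g^n for the generator g = 0x02."""
    r = 1
    for _ in range(n):
        r = gfmul(r, 2)
    return r

def gen_syndrome(blocks):
    """Compute (P, Q) for a list of equal-length data blocks:
       P[i] = XOR over all blocks of byte i
       Q[i] = XOR over all blocks of g^n * block_n[i]
    """
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for n, block in enumerate(blocks):
        coeff = gfpow2(n)
        for i, byte in enumerate(block):
            p[i] ^= byte
            q[i] ^= gfmul(coeff, byte)
    return bytes(p), bytes(q)

# e.g. gen_syndrome([b'\x01', b'\x02']) -> (b'\x03', b'\x05')
```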
Yuri Tikhonov wrote:
This patch adds support for asynchronous RAID-6 recovery operations.
An asynchronous implementation using async_tx API is provided to compute
two missing data blocks (async_r6_dd_recov) and to compute one missing data
block and one missing parity block (async_r6_dp_recov).
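The arithmetic behind the two-missing-data-blocks case can be sketched synchronously (illustrative only, following the standard GF(2^8) algebra; not the async kernel code, and the function names are ad hoc):

```python
# Sketch of RAID-6 double-data-block recovery over GF(2^8), byte at a time.

def gfmul(a, b):
    """Multiply in GF(2^8) modulo 0x11d."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

def gfpow2(n):
    """g^n for the generator g = 0x02."""
    r = 1
    for _ in range(n):
        r = gfmul(r, 2)
    return r

def gfinv(a):
    """Brute-force multiplicative inverse; fine for a sketch."""
    for b in range(1, 256):
        if gfmul(a, b) == 1:
            return b
    raise ZeroDivisionError("0 has no inverse")

def recover_two_data(survivors, x, y, p, q):
    """Recover data bytes d_x and d_y (x != y) from P, Q and the
    surviving data bytes, given as a dict {disk_index: byte}.

    Solves the pair of GF(2^8) equations
        d_x ^ d_y         = P ^ Pxy
        g^x*d_x ^ g^y*d_y = Q ^ Qxy
    where Pxy, Qxy are the partial syndromes of the survivors.
    """
    pxy = qxy = 0
    for n, d in survivors.items():
        pxy ^= d
        qxy ^= gfmul(gfpow2(n), d)
    pd, qd = p ^ pxy, q ^ qxy
    gx, gy = gfpow2(x), gfpow2(y)
    dx = gfmul(gfinv(gx ^ gy), qd ^ gfmul(gy, pd))
    dy = pd ^ dx
    return dx, dy
```

Substituting d_y = pd ^ d_x into the Q equation gives (g^x ^ g^y) * d_x = qd ^ g^y * pd, which is solvable whenever x != y since the powers of g are distinct.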
Cody Yellan wrote:
I forgot the version information:
mdadm - v2.5.4 - 13 October 2006
kernel 2.6.18-53.el5 #1 SMP
Would anyone consider it unsafe to upgrade to the latest version of
mdadm on a production machine using Neil Brown's srpm?
I wouldn't expect any problems, although I don't think ...
On Thu, 27 Dec 2007, Justin Piszcz wrote:
> With a stripe size that high, the stripe_cache_size needs to be greater
> than the default to handle it.
i'd argue that any deadlock is a bug... regardless, i'm still seeing
deadlocks with the default chunk_size of 64k and stripe_cache_size of 256...
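For anyone hitting the same stall, the cache in question is per-array and tunable at runtime (a sketch; 8192 is just an example value, and the memory estimate is approximate):

```shell
# stripe_cache_size is in cache entries; memory cost is roughly
# entries * 4KiB page * number of member devices.
cat /sys/block/md0/md/stripe_cache_size        # default: 256
echo 8192 > /sys/block/md0/md/stripe_cache_size
```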
On Thu, 27 Dec 2007, dean gaudet wrote:
hey neil -- remember that raid5 hang which only one or two others and i
ever experienced, and which was hard to reproduce? we were debugging it
well over a year ago (that box has 400+ days of uptime now, so at least
that long ago :) -- the workaround was to increase stripe_cache_size.
hmm, this seems more serious... i just ran into it with a chunksize of
64KiB while just untarring a bunch of linux kernels in parallel...
increasing stripe_cache_size did the trick again.
-dean