The recent changes to raid5 that allow offload of the parity calculation
introduced some bugs in the code for growing (i.e. adding a disk to)
raid5 and raid6 arrays. This patch fixes them.
Acked-by: Dan Williams [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
---
This is against 2.6.23-rc4.
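For anyone wanting to exercise the reshape path these fixes touch, growing
an array by one disk is typically done along these lines (the device names
here are placeholders, not taken from any report in this thread):

  mdadm /dev/md0 --add /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=4
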
Hi Dan,
On Monday 27 August 2007 23:12, you wrote:
This still looks racy... I think the complete fix is to make the
R5_Wantfill and dev_q->toread accesses atomic. Please test the
following patch (also attached) and let me know if it fixes what you are
seeing:
Your approach doesn't help.
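To make the suggestion under discussion concrete: "atomic" here means the
flag and the ->toread pointer are sampled under one lock, so no writer can
change ->toread between the two accesses. Below is a minimal userspace
sketch of that pattern; every name in it is hypothetical and merely stands
in for the real raid5 structures, it is not the kernel code.

  #include <pthread.h>
  #include <stddef.h>

  /* Hypothetical stand-ins for the raid5 per-device state. */
  struct dev_q {
          pthread_mutex_t lock;   /* plays the role of the stripe lock */
          unsigned long flags;    /* R5_WANTFILL is a bit in here */
          void *toread;           /* head of the pending-read list */
  };

  #define R5_WANTFILL (1UL << 0)

  /* Sample the flag and ->toread under one lock so no writer can
   * slip in between the two accesses. */
  static int mark_wantfill_if_needed(struct dev_q *dq)
  {
          int marked = 0;

          pthread_mutex_lock(&dq->lock);
          if (dq->toread != NULL && !(dq->flags & R5_WANTFILL)) {
                  dq->flags |= R5_WANTFILL;
                  marked = 1;
          }
          pthread_mutex_unlock(&dq->lock);
          return marked;
  }
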
You are right. I've another question regarding the function
dma_wait_for_async_tx from async_tx.c; here is the body of the code:

  /* poll through the dependency chain, return when tx is complete */
  do {
          iter = tx;
          /* a cookie of -EBUSY means iter has not yet been submitted */
          while (iter->cookie == -EBUSY)
                  iter = iter->parent;
          status = dma_sync_wait(iter->chan, iter->cookie);
  } while (status == DMA_IN_PROGRESS || (iter != tx));
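For context on how this function is reached: when an async_tx operation
falls back to doing the work synchronously on the CPU, it first waits for
any dependency to finish. A rough sketch of such a call site follows; the
depend_tx variable and the surrounding logic are assumptions for
illustration, not a quote of an actual caller.

  if (depend_tx) {
          /* spin until the whole dependency chain has completed */
          enum dma_status status = dma_wait_for_async_tx(depend_tx);
          BUG_ON(status != DMA_SUCCESS);
  }
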
Hi,
A while back I reported a bug against 2.6.21 where creating an MD raid
array with an internal bitmap on a sparc64 system does not work. I have
not yet heard back (or I have forgotten); has this been addressed yet?
(mdadm -C /dev/md0 -l 1 -n 2 -e 1.0 -b internal /dev/ram[01])
thanks,
Jan