On Friday December 8, [EMAIL PROTECTED] wrote:
> I have measured very slow write throughput for raid5 as well, though
> 2.6.18 does not seem to have the same problem. I'll double check and do a
> git bisect and see what I can come up with.
Correction... it isn't 2.6.18 that fixes the problem. It is
On Monday December 4, [EMAIL PROTECTED] wrote:
>
> Here is where I step into supposition territory. Perhaps the
> discrepancy is related to the size of the requests going to the block
> layer. raid5 always makes page sized requests with the expectation
that they will coalesce into larger requests.
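Purely as a rough sketch of what "page sized requests" means here, assuming the 2.6-era bio API (bio_alloc/bio_add_page/submit_bio); this is not raid5's actual submission path and the helper name is invented:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/mm.h>

/*
 * Sketch only: submit one PAGE_SIZE bio at a time and rely on the
 * request queue's elevator to merge adjacent bios into larger requests.
 * A real caller would also set bi_end_io/bi_private so the bio can be
 * freed on completion.
 */
static void submit_one_page(struct block_device *bdev, struct page *page,
			    sector_t sector, int rw)
{
	struct bio *bio = bio_alloc(GFP_NOIO, 1);	/* room for one bio_vec */

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio_add_page(bio, page, PAGE_SIZE, 0);
	submit_bio(rw, bio);			/* hand it to the block layer */
}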
Hi -
I recently upgraded to the 2.6.17-2-686 SMP kernel image (Debian).
Now, when I hot-add a disk (mdadm /dev/md1 --add /dev/hdc3), the array
resyncs to 100%, a bunch of errors appear, and then the array resync
starts from 0% again.
mdadm --zero-superblock on the "new" disk doesn't help.
Any
Bill Davidsen wrote:
Dan Williams wrote:
On 12/1/06, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Thank you so much for verifying this. I do keep enough room on my drives
to run tests by creating any kind of whatever I need, but the point is
clear: with N drives striped the transfer rate is N x base rate of one drive
Currently raid5 depends on clearing the BIO_UPTODATE flag to signal an
error to higher levels. While this should be sufficient, it is safer
to explicitly set the error code as well - less room for confusion.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c
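A rough sketch of the idea only, not the patch itself, assuming the three-argument bi_end_io completion callback used by kernels of this era; the helper name is invented:

#include <linux/bio.h>
#include <linux/errno.h>

/*
 * Sketch: when completing a bio back to the upper layer, derive an
 * explicit error code from BIO_UPTODATE instead of relying on the
 * cleared flag alone.
 */
static void complete_with_explicit_error(struct bio *bi, unsigned int bytes_done)
{
	int error = test_bit(BIO_UPTODATE, &bi->bi_flags) ? 0 : -EIO;

	bi->bi_size = 0;
	bi->bi_end_io(bi, bytes_done, error);
}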
As md devices are automatically created on first open, and automatically
destroyed on last close if they have no significant state, a loop can
be caused with udev.
If you open/close an md device, that will generate add and remove
events to udev. udev will open the device, notice nothing is there,
and
Fix a few bugs that meant that:
- superblocks weren't always written at exactly the right time (this
could show up if the array was not written to - writing to the array
causes lots of superblock updates and so hides these errors).
- restarting device recovery after a clean shutdown (ve
For each md device, we need a gendisk. As that gendisk has a name
that gets registered in sysfs, we need to make sure that when an md
device is shut down, we don't create it again until the shutdown is
complete and the gendisk has been deleted.
This patch utilises the disks_mutex to ensure the
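Since the description is cut short, here is only a rough sketch of the locking idea it states. The names are invented and md's real probe/teardown paths differ in detail; the point is that creation and deletion of the gendisk are serialised by one mutex, so a new disk with the same sysfs name cannot appear until the old one is fully gone.

#include <linux/genhd.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(disks_mutex);

static struct gendisk *md_disk_create(int minors)
{
	struct gendisk *disk;

	mutex_lock(&disks_mutex);
	disk = alloc_disk(minors);
	/* a real driver would fill in major, first_minor, disk_name,
	 * fops and the request queue before add_disk() */
	if (disk)
		add_disk(disk);		/* name becomes visible in sysfs */
	mutex_unlock(&disks_mutex);
	return disk;
}

static void md_disk_destroy(struct gendisk *disk)
{
	mutex_lock(&disks_mutex);
	del_gendisk(disk);		/* name removed from sysfs */
	put_disk(disk);
	mutex_unlock(&disks_mutex);
}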
Following are 5 patches for md in 2.6.19-rc6-mm2 that are suitable for 2.6.20.
Patch 4 might fix an outstanding bug against md which manifests as an
oops early in boot, but I don't have test results yet.
NeilBrown
[PATCH 001 of 5] md: Remove some old ifdefed-out code from raid5.c
[PATCH 002 of
There are some vestiges of old code that was used for bypassing the
stripe cache on reads in raid5.c. This was never updated after the
change from buffer_heads to bios, but was left as a reminder.
That functionality has now been implemented in a completely different
way, so the old code can go.
Bodo Thiesen wrote:
Hi, I have a little problem:
A few hours ago the second of four disks was kicked out of my RAID5, thus
rendering it unusable. As far as I can tell, the disks are still working
correctly (I assume a cable connection problem), but that's not the problem. The
real problem
Hi Neil!
I know you're going to hate me on this (please don't! ;-)...
We are using mdadm in our initial ramdisk on a lot of machines.
These systems all run LVM on top of Software RAID, which all
gets assembled together in an initial ramdisk init script
Dan Williams wrote:
On 12/1/06, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Thank you so much for verifying this. I do keep enough room on my drives
to run tests by creating any kind of whatever I need, but the point is
clear: with N drives striped the transfer rate is N x base rate of one
drive; w