Hi Neil,
Neil Brown wrote:
For now, you will have to live with a smallish bitmap, which probably
isn't a real problem. With 19078 bits, you will still get a
several-thousand-fold increase in resync speed after a crash
(i.e. hours become seconds), and to some extent fewer bits are better,
since you have to update them less often.
I haven't made any measurements to see what size bitmap is
ideal... maybe someone should :-)
I've run some tests with a RAID-5 array of four 250 GB disks, and the
write speed is really poor with the default internal bitmap chunk size.
Setting a bigger bitmap chunk size (16 MB, for example) creates a small
bitmap. The write speed is then almost the same as with no bitmap at
all, which is great. And as you said, a resync then takes seconds (or
minutes) instead of hours (as it would without a bitmap).
With such a setting, I get both a nice write speed and a nice resync
time. That's where I would look to find MY ideal bitmap size.
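To put a number on "small": a quick back-of-the-envelope sketch of the
bitmap size for the figures above, assuming one bit per bitmap chunk of
a 250 GB component disk (the function name here is mine, just for
illustration):

```python
import math

def bitmap_bits(device_bytes, chunk_bytes):
    """Number of bitmap bits: one bit covers one bitmap chunk."""
    return math.ceil(device_bytes / chunk_bytes)

device = 250 * 10**9                    # one 250 GB component disk
print(bitmap_bits(device, 16 * 2**20))  # 16 MiB chunks -> 14902 bits
```

So a 16 MiB chunk keeps the whole bitmap down to a few thousand bits
per component, in the same ballpark as the 19078 bits mentioned above.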
Neil, is there any issue with using the --bitmap-chunk option together
with an internal bitmap?
The man page does not clearly say to avoid it, and the mdadm source
code does not prevent setting this size for an internal bitmap.
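For reference, this is the kind of invocation I mean (a sketch only:
/dev/md0 and the member partitions are placeholders for the real
devices, and --bitmap-chunk takes a size in kibibytes, so 16384 means
16 MiB):

```shell
# Add an internal bitmap with a 16 MiB chunk to an existing array
# (sketch: /dev/md0 stands in for the real array device).
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=16384

# Or set it at creation time (member devices are placeholders):
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --bitmap=internal --bitmap-chunk=16384 /dev/sd[abcd]1
```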
As for my measurements, they showed that the default bitmap chunk size
is 4x slower than a small bitmap with a v1.0 superblock. With v1.1 and
v1.2 superblocks, the default is 2x slower. We are far from the 10%
slowdown claimed on http://linux-raid.osdl.org
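The numbers above came from simple sequential writes; a minimal way to
reproduce that kind of measurement (a sketch: the target path is a
placeholder and should point at a file on the md array, and dd's
throughput report is only a rough indicator):

```shell
# Sequential write test; point TARGET at a file on the md array.
# conv=fsync makes dd flush, so bitmap update overhead is included
# in the reported throughput.
TARGET=${TARGET:-/tmp/md-write-test.img}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
rm -f "$TARGET"
```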
I've also tried playing with the bitmap flush period (the --delay
option), but I would need to understand how it works to see whether it
can have an effect on performance.
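For completeness, the delay is given when the bitmap is created (again
a sketch: /dev/md0 is a placeholder, and the 5-second value is just an
example, since I don't yet know how the flush period interacts with
throughput):

```shell
# Create the internal bitmap with a 5-second bitmap update delay
# (sketch only; /dev/md0 stands in for the real array device).
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=16384 --delay=5
```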
Regards,
Hubert