On Thursday April 20, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
What is the rationale for your position?
My rationale was that if the md layer receives *write* requests no smaller
than a full stripe, it can omit reading the data being updated, and
can just calculate the new parity from the new data.
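A minimal user-space sketch of that full-stripe case (illustrative only, not
the actual md code; the chunk size and names are just assumptions taken from
the example in this thread): with every data chunk of the stripe supplied by
the write, the new parity is simply the XOR of the new chunks, so nothing has
to be read back first.

    /*
     * Minimal user-space sketch, not the md code: with every data chunk
     * of the stripe supplied by the write, the new parity is simply the
     * XOR of the new data chunks, so no old data or old parity needs to
     * be read back from disk.  CHUNK_SIZE is just the value used in the
     * thread's example.
     */
    #include <stddef.h>
    #include <string.h>

    #define CHUNK_SIZE (64 * 1024)

    static void full_stripe_parity(unsigned char *parity,
                                   unsigned char *const data[], size_t n_data)
    {
        memset(parity, 0, CHUNK_SIZE);
        for (size_t i = 0; i < n_data; i++)
            for (size_t j = 0; j < CHUNK_SIZE; j++)
                parity[j] ^= data[i][j];
    }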
Neil Brown wrote:
On Tuesday April 18, [EMAIL PROTECTED] wrote:
[]
I mean, merging bios into larger requests makes a lot of sense between
the filesystem and md levels, but it makes a lot less sense to do that
between md and physical (fsvo physical anyway) disks.
This seems completely backwards
Neil Brown (NB) writes:
NB raid5 shouldn't need to merge small requests into large requests.
NB That is what the 'elevator' or io_scheduler algorithms are for. They
NB already merge multiple bios into larger 'requests'. If they aren't
NB doing that, then something needs to be fixed.
Michael Tokarev (MT) writes:
MT Hmm. So where's the elevator level - before the raid level (between e.g.
MT a filesystem and md), or after it (between md and physical devices)?
In both, because raid5 produces _new_ requests and sends them
to the elevator again.
MT I mean, merging bios into
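A toy sketch of what that merging amounts to at either level (the struct and
function names here are made up, not the kernel block layer's real API): an
incoming request can be back-merged into a queued one when it starts exactly
where the queued one ends, so a run of small sequential bios collapses into
one larger request; since raid5 emits fresh per-disk requests, the same
merging matters again below md.

    /*
     * Toy sketch of the merging idea only; the struct and function names
     * are hypothetical and are not the kernel block layer's real API.
     * An incoming request can be back-merged into a queued one when it
     * starts exactly where the queued one ends.
     */
    #include <stdbool.h>

    struct toy_request {
        unsigned long long sector;    /* start sector */
        unsigned int nr_sectors;      /* length in sectors */
    };

    static bool can_back_merge(const struct toy_request *queued,
                               const struct toy_request *incoming)
    {
        return queued->sector + queued->nr_sectors == incoming->sector;
    }

    static void back_merge(struct toy_request *queued,
                           const struct toy_request *incoming)
    {
        queued->nr_sectors += incoming->nr_sectors;
    }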
On Tuesday April 18, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
[]
raid5 shouldn't need to merge small requests into large requests.
That is what the 'elevator' or io_scheduler algorithms are for. They
already merge multiple bios into larger 'requests'. If they aren't
doing that,
On Wednesday April 19, [EMAIL PROTECTED] wrote:
Neil Brown (NB) writes:
NB raid5 shouldn't need to merge small requests into large requests.
NB That is what the 'elevator' or io_scheduler algorithms are for. They
NB already merge multiple bios into larger 'requests'. If they aren't
Neil Brown wrote:
[]
raid5 shouldn't need to merge small requests into large requests.
That is what the 'elevator' or io_scheduler algorithms are for. They
already merge multiple bios into larger 'requests'. If they aren't
doing that, then something needs to be fixed.
It is certainly
On Wednesday April 12, [EMAIL PROTECTED] wrote:
Neil Brown (NB) writes:
NB There are a number of aspects to this.
NB - When a write arrives we 'plug' the queue so the stripe goes onto a
NB 'delayed' list which doesn't get processed until an unplug happens,
NB or until the
Neil Brown (NB) writes:
NB There are a number of aspects to this.
NB - When a write arrives we 'plug' the queue so the stripe goes onto a
NB 'delayed' list which doesn't get processed until an unplug happens,
NB or until the stripe is full and not requiring any reads.
NB - If
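A hedged sketch of the delaying idea described above (the names are
hypothetical and this is not the actual drivers/md/raid5.c logic): while the
queue is plugged, an incomplete stripe sits on a delayed list, and it is
handled early only when it is full and needs no pre-reads.

    /*
     * Hedged sketch of the delaying idea only; the names are hypothetical
     * and this is not the actual drivers/md/raid5.c logic.  While the
     * queue is plugged, an incomplete stripe waits on a delayed list and
     * is handled early only when it is full and needs no pre-reads.
     */
    #include <stdbool.h>

    struct toy_stripe {
        bool full;                    /* all data blocks are present */
        bool needs_read;              /* parity update would need old data */
        struct toy_stripe *next;
    };

    struct toy_conf {
        bool plugged;                 /* queue currently plugged */
        struct toy_stripe *delayed;   /* stripes waiting for more writes */
    };

    /* Full stripes with no pre-reads go ahead; the rest wait for unplug. */
    static bool handle_now(const struct toy_conf *conf,
                           const struct toy_stripe *sh)
    {
        if (sh->full && !sh->needs_read)
            return true;
        return !conf->plugged;
    }

    /* Park a stripe on the delayed list until more of the write arrives. */
    static void delay_stripe(struct toy_conf *conf, struct toy_stripe *sh)
    {
        sh->next = conf->delayed;
        conf->delayed = sh;
    }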
Alex Tomas (AT) writes:
AT I see. Though my point is a bit different:
AT say, there is an application that's doing big linear writes in order
AT to achieve good throughput. On the other hand, most modern storage devices
AT are very sensitive to request size and tend to suck serving zillions
AT
Mark Hahn (MH) writes:
MH Don't you mean _3_ chunk-sized writes? If so, are you actually
MH asking about the case when you issue an aligned two-stripe write?
MH (which might get broken into 6 64K writes, not sure, rather than
MH three 2-chunk writes...)
Actually, yes. I'm talking about 3
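Back-of-the-envelope arithmetic only, using the numbers from this exchange
(3 disks, 64K chunks, an aligned two-stripe write): every disk receives one
chunk per stripe, data or parity, so the write lands as either six separate
64K per-disk writes or, with the adjacent per-disk chunks merged, three 128K
writes.

    /*
     * Back-of-the-envelope arithmetic only, using the numbers from the
     * thread (3 disks, 64K chunks, an aligned two-stripe write): every
     * disk receives one chunk per stripe, data or parity, so the write
     * is either six separate 64K per-disk writes or, with the adjacent
     * per-disk chunks merged, three 128K writes.
     */
    #include <stdio.h>

    int main(void)
    {
        const unsigned disks = 3, chunk_kb = 64, stripes = 2;

        unsigned unmerged_ios = disks * stripes;    /* 6 writes of 64K  */
        unsigned merged_ios   = disks;              /* 3 writes of 128K */
        unsigned merged_kb    = chunk_kb * stripes;

        printf("unmerged: %u writes of %uK\n", unmerged_ios, chunk_kb);
        printf("merged:   %u writes of %uK\n", merged_ios, merged_kb);
        return 0;
    }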
Neil Brown (NB) writes:
NB The raid5 code attempts to do this already, though I'm not sure how
NB successful it is. I think it is fairly successful, but not completely
NB successful.
Hmm, could you tell me which code I should look at?
NB There is a trade-off that raid5 has to make.
Is there a way to explicitly batch the write requests raid5 issues?
Sort of like TCP_CORK?
For example, there is a raid5 built from 3 disks with chunk=64K.
One types dd if=/dev/zero of=/dev/md0 bs=128k count=1
OK, so this is an aligned, whole-stripe write.
and a 128K bio gets into the raid5.
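A small hedged helper sketch (hypothetical names, not md code) for checking
the geometry in that dd example: with 3 disks and chunk=64K, the data portion
of one stripe is 2 x 64K = 128K, so an aligned 128K write is exactly one full
stripe and the parity can be computed without any reads.

    /*
     * Hedged helper sketch with hypothetical names, not md code: with
     * 3 disks and chunk=64K the data portion of one stripe is
     * 2 * 64K = 128K, so the aligned 128K dd write above is exactly one
     * full stripe and raid5 can compute parity without any reads.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool is_full_stripe_write(unsigned long long offset_bytes,
                                     unsigned long long len_bytes,
                                     unsigned chunk_bytes, unsigned disks)
    {
        unsigned long long stripe_bytes =
            (unsigned long long)chunk_bytes * (disks - 1);

        return offset_bytes % stripe_bytes == 0 && len_bytes == stripe_bytes;
    }

    int main(void)
    {
        /* offset 0, 128K write, 64K chunks, 3 disks */
        printf("full stripe: %s\n",
               is_full_stripe_write(0, 128 * 1024, 64 * 1024, 3) ? "yes" : "no");
        return 0;
    }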
On Saturday April 8, [EMAIL PROTECTED] wrote:
Good day all,
Is there a way to explicitly batch the write requests raid5 issues?
For example, there is a raid5 built from 3 disks with chunk=64K.
One types dd if=/dev/zero of=/dev/md0 bs=128k count=1 and a 128K
bio gets into the raid5. raid5