Mark Hahn (MH) writes:
MH don't you mean _3_ chunk-sized writes? if so, are you actually
MH asking about the case when you issue an aligned two-stripe write?
MH (which might get broken into 6 64K writes, not sure, rather than
MH three 2-chunk writes...)
actually, yes, I'm talking about 3 chunk-sized writes.
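To make the numbers in Mark's question concrete, here is a small sketch (assuming 64K chunks and 3 data disks per stripe, which are illustrative values, not taken from the thread) of the two ways an aligned two-stripe write can be split:

```python
CHUNK = 64 * 1024        # assumed chunk size: 64K
DATA_DISKS = 3           # assumed raid5 layout: 3 data disks per stripe

stripes = 2
total = stripes * DATA_DISKS * CHUNK     # aligned two-stripe write: 384K

# Scheme 1: one write per chunk -> 6 writes of 64K each
per_chunk_writes = stripes * DATA_DISKS
# Scheme 2: one write per disk covering both stripes -> 3 writes of 128K
per_disk_writes = DATA_DISKS
per_disk_size = stripes * CHUNK

print(per_chunk_writes, "x", CHUNK // 1024, "K vs",
      per_disk_writes, "x", per_disk_size // 1024, "K")
```

Both schemes move the same 384K of data; the difference is whether each disk sees two 64K requests or one 128K request.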
Neil Brown (NB) writes:
NB The raid5 code attempts to do this already, though I'm not sure how
NB successful it is. I think it is fairly successful, but not completely
NB successful.
hmm. could you tell me what code I should look at?
NB There is a trade-off that raid5 has to make.
Neil Brown (NB) writes:
NB There are a number of aspects to this.
NB - When a write arrives we 'plug' the queue so the stripe goes onto a
NB   'delayed' list which doesn't get processed until an unplug happens,
NB   or until the stripe is full and not requiring any reads.
NB - If
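Neil's plug/delayed-list behaviour can be sketched roughly as follows. This is a toy Python model, not the actual drivers/md/raid5.c code; the names `Stripe`, `Raid5Queue`, `write` and `unplug` are illustrative:

```python
class Stripe:
    def __init__(self, nchunks):
        self.written = [False] * nchunks   # which data chunks have new data

    def full(self):
        # A full stripe can compute parity from new data alone: no reads needed.
        return all(self.written)

class Raid5Queue:
    def __init__(self):
        self.delayed = []      # stripes parked while the queue is plugged
        self.submitted = []    # stripes whose I/O has been issued

    def write(self, stripe, chunk):
        stripe.written[chunk] = True
        if stripe.full():
            # Stripe is complete: process immediately, no reads required.
            if stripe in self.delayed:
                self.delayed.remove(stripe)
            self.submitted.append(stripe)
        elif stripe not in self.delayed:
            # Partial stripe: delay it, hoping more writes fill it in.
            self.delayed.append(stripe)

    def unplug(self):
        # On unplug, delayed stripes proceed even if partial
        # (a partial stripe means a read-modify-write for parity).
        self.submitted.extend(self.delayed)
        self.delayed.clear()
```

The point of the delay is visible here: if the application's writes fill the whole stripe before the unplug, the stripe is processed without any pre-reads.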
Alex Tomas (AT) writes:
AT I see. though my point is a bit different:
AT say, there is an application that's doing big linear writes in order
AT to achieve good throughput. on the other hand, most of modern storages
AT are very sensible to request size and tend to suck serving zillions
Neil Brown (NB) writes:
NB raid5 shouldn't need to merge small requests into large requests.
NB That is what the 'elevator' or io_scheduler algorithms are for. They
NB already merge multiple bio's into larger 'requests'. If they aren't
NB doing that, then something needs to be fixed.
Michael Tokarev (MT) writes:
MT Hmm. So where's the elevator level - before raid level (between e.g.
MT a filesystem and md), or after it (between md and physical devices) ?
in both, because raid5 produces _new_ requests and sends them
to the elevator again.
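That two-level picture can be illustrated with a small model (toy code with an assumed 64K chunk size and 3 data disks, parity ignored for simplicity): raid5 chops a large incoming request into per-disk chunk-sized writes, and those new requests can then be merged again by the elevator sitting between md and the physical devices:

```python
CHUNK = 64 * 1024
DATA_DISKS = 3   # assumed layout; parity disk omitted for simplicity

def raid5_split(offset, length):
    """Upper level: chop a linear array request into per-disk chunk writes."""
    per_disk = {d: [] for d in range(DATA_DISKS)}
    for chunk_no in range(offset // CHUNK, (offset + length) // CHUNK):
        disk = chunk_no % DATA_DISKS
        stripe = chunk_no // DATA_DISKS
        per_disk[disk].append((stripe * CHUNK, CHUNK))
    return per_disk

def merge(bios):
    """Lower level: elevator back-merges contiguous bios into requests."""
    out = []
    for off, ln in sorted(bios):
        if out and out[-1][0] + out[-1][1] == off:
            out[-1] = (out[-1][0], out[-1][1] + ln)
        else:
            out.append((off, ln))
    return out

# An aligned two-stripe write (384K) becomes 6 chunk writes, 2 per disk,
# which the per-disk elevator merges back into one 128K write each.
split = raid5_split(0, 2 * DATA_DISKS * CHUNK)
merged = {d: merge(b) for d, b in split.items()}
```

This is exactly the "both" in the reply: one elevator pass could merge above md, and another merges the new requests raid5 generates below it.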
MT I mean, merging bios into