On Thursday, 13 December 2001 at  3:06:14 +0100, Bernd Walter wrote:
> On Thu, Dec 13, 2001 at 10:54:13AM +1030, Greg Lehey wrote:
>> On Wednesday, 12 December 2001 at 12:53:37 +0100, Bernd Walter wrote:
>>> On Wed, Dec 12, 2001 at 04:22:05PM +1030, Greg Lehey wrote:
>>>> On Tuesday, 11 December 2001 at  3:11:21 +0100, Bernd Walter wrote:
>>>> 2.  Cache the parity blocks.  This is an optimization which I think
>>>>     would be very valuable, but which Vinum doesn't currently perform.
>>> I thought of connecting the parity to the wait lock.
>>> If there's a waiter for the same parity data, it's not dropped.
>>> This way we don't waste memory but still get an effect.
>> That's a possibility, though it doesn't directly address parity block
>> caching.  The problem is that by the time you find another lock,
>> you've already performed part of the parity calculation, and probably
>> part of the I/O transfer.  But it's an interesting consideration.
> I know it isn't optimal, but it's easy to implement.
> More complex handling for better results can still be added later.

I don't have the time to work out an example, but I don't think it
would change anything until you had two lock waits.  I could be wrong,
though: you've certainly brought out something here that I hadn't
considered, so if you can write up a detailed example (preferably
after you've looked at the code and decided how to handle it), I'd
certainly be interested.
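The idea Bernd describes above can be sketched roughly as follows. This is an illustrative Python sketch, not Vinum code; the names (`StripeLock`, `waiters`, `cached_parity`, `release_parity`) are invented for the example. The point is simply: when a write finishes, retain the parity buffer only if another request is already queued on the same stripe lock, so memory is only spent where a consumer for the cached parity is guaranteed.

```python
# Hypothetical sketch of the "connect parity to the wait lock" idea.
# All names here are illustrative assumptions, not Vinum internals.

class StripeLock:
    def __init__(self):
        self.waiters = 0           # requests queued on this stripe lock
        self.cached_parity = None  # parity buffer retained for the next waiter

def release_parity(lock, parity_buf):
    """Called when a write to the stripe completes."""
    if lock.waiters > 0:
        # A waiter exists: keep the parity in memory so the next write
        # can update it there instead of re-reading it from disk.
        lock.cached_parity = parity_buf
    else:
        # No waiter: drop the buffer so we don't tie up memory.
        lock.cached_parity = None

lock = StripeLock()
lock.waiters = 1
release_parity(lock, b"\x0f\x0f")   # retained, since a waiter is queued
```

As noted above, by the time the second lock wait is detected, part of the parity calculation and I/O may already have happened, so this only helps the waiter that arrives in time.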

>>> I would guess it happens when the stripe size is bigger than the
>>> pre-read cache the drives use.  This would mean we have less chance
>>> of getting the parity data out of the drive cache.
>> Yes, this was one of the possibilities we considered.
> It should be measured and compared after I change the locking.
> Things will look different after that and may point to other causes,
> because we will have a different load characteristic on the drives.
> Currently, if we have two writes to each of two stripes, all initiated
> before the first finishes, the drive has to seek between the two
> stripes, as the second write to the same stripe has to wait.

I'm not sure I understand this.  The stripes are on different drives,
after all.

>>> Whenever a write hits a driver there is a waiter for it:
>>> either softdep, a memory-freeing operation, or an application doing
>>> a sync transfer.
>>> I'm almost sure delaying writes will harm performance in the upper layers.
>> I'm not so sure.  Full stripe writes, where needed, are *much* faster
>> than partial stripe writes.
> Hardware RAID usually comes with NVRAM and can cache write data without
> delaying the acknowledgement to the initiator.
> That option is not available to software RAID.

It could be.  It's probably something worth investigating and
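To make the full-stripe versus partial-stripe point concrete, here is an illustrative Python sketch (again, not Vinum code; the function names are invented). A full-stripe write can compute parity from the new data alone, while a partial write must first read the old data block and the old parity from disk (read-modify-write), costing two extra I/Os before the writes can even start.

```python
# Illustrative RAID-5 parity arithmetic; names are assumptions for the example.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def full_stripe_parity(data_blocks):
    """Full-stripe write: parity over all new data blocks, no disk reads."""
    parity = data_blocks[0]
    for blk in data_blocks[1:]:
        parity = xor(parity, blk)
    return parity

def rmw_parity(old_data, old_parity, new_data):
    """Partial write: new_parity = old_parity XOR old_data XOR new_data.
    old_data and old_parity must first be read from disk."""
    return xor(xor(old_parity, old_data), new_data)

stripe = [b"\x01\x01", b"\x02\x02", b"\x04\x04"]
p = full_stripe_parity(stripe)
# Overwrite block 0 with b"\x08\x08" via read-modify-write:
p2 = rmw_parity(stripe[0], p, b"\x08\x08")
assert p2 == full_stripe_parity([b"\x08\x08", b"\x02\x02", b"\x04\x04"])
```

This is also why a cached parity buffer helps the partial-write case: it removes one of the two pre-reads.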

See complete headers for address and phone numbers

