On 7/13/18 2:48 PM, Martin Wilck wrote:
>> For all you know, the bug could be elsewhere and
>> we're just going to be hitting it differently some other way. The
>> head-in-the-sand approach is rarely a win long term.
>>
>> It's saving an allocation per IO; that's definitely measurable on
>> the faster storage.
> 
> I can see that for the inline path, but is there still an advantage if
> we need to kmalloc() the biovec?

It's still one allocation instead of two. In the era of competing with
spdk on single-digit latency devices, the answer is yes.
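
(Purely for illustration, not the actual kernel code under discussion:
here is a rough userspace sketch of the shape of that optimization.
The names io_desc, io_seg, INLINE_SEGS and submit_io are invented. The
descriptor lives on the caller's stack with a small inline vector, so
small IOs allocate nothing and larger IOs allocate only the vector.)

#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch only: a descriptor with a small inline segment array.
 * Small IOs use the inline array (zero heap allocations); larger
 * IOs allocate just the segment array (one allocation) instead of
 * allocating both the descriptor and the array (two allocations).
 */
#define INLINE_SEGS 4

struct io_seg {
	void   *addr;
	size_t  len;
};

struct io_desc {
	unsigned int   nr_segs;
	struct io_seg *segs;	/* inline array or separately allocated */
	struct io_seg  inline_segs[INLINE_SEGS];
};

/* Stand-in for whatever actually drives the IO. */
static void submit_io(struct io_desc *d)
{
	printf("submitting %u segment(s)\n", d->nr_segs);
}

static int do_simple_io(unsigned int nr_segs)
{
	struct io_desc d;	/* descriptor itself is on the stack */

	d.nr_segs = nr_segs;
	if (nr_segs <= INLINE_SEGS) {
		d.segs = d.inline_segs;				/* zero allocations */
	} else {
		d.segs = calloc(nr_segs, sizeof(*d.segs));	/* one allocation */
		if (!d.segs)
			return -1;
	}

	submit_io(&d);

	if (d.segs != d.inline_segs)
		free(d.segs);
	return 0;
}

int main(void)
{
	do_simple_io(2);	/* small IO: inline vector, no heap */
	do_simple_io(16);	/* large IO: single heap allocation */
	return 0;
}

Even in the fallback branch the descriptor itself is never allocated,
which is the "one instead of two" part.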

>>> However, so far I've only identified a minor problem, see below -
>>> it
>>> doesn't explain the data corruption we're seeing.
>>
>> What would help is trying to boil down a test case. So far it's a lot
>> of hand waving, and nothing that can really help narrow down what is
>> going on here.
> 
> It's not that we didn't try. We've run fio with verification on block
> devices with varying I/O sizes, block sizes, and alignments, but so far
> we haven't hit the issue. We've also tried to reproduce it by
> approximating the customer's VM setup, with no success up to now.

I ran some testing yesterday as well, but didn't trigger anything. I
didn't expect to either, as all the basic functionality was verified
when the patch was done. It's not really a path with a lot of corner
cases, so it's weird that we're seeing anything at all. That's why I
suspect it's something else entirely, but it's hard to guesstimate on
that with no clues at all.

> However, we're now much closer than we used to be, so I'm confident
> that we'll be able to present more concrete facts soon.

OK, sounds good.

-- 
Jens Axboe
