On 7/13/18 10:56 AM, Martin Wilck wrote:
> On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
>>
>> Hence the patch I sent is wrong, the code actually looks fine. Which
>> means we're back to trying to figure out what is going on here. It'd
>> be great with a test case...
> 
> We don't have an easy test case yet. But the customer has confirmed
> that the problem occurs with upstream 4.17.5, too. We also confirmed
> again that the problem occurs when the kernel uses the kmalloc() code
> path in __blkdev_direct_IO_simple().
> 
> My personal suggestion would be to ditch __blkdev_direct_IO_simple()
> altogether. After all, it's not _that_ much simpler than
> __blkdev_direct_IO(), and it seems to be broken in a subtle way.
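
For anyone reading along without the source open: the path Martin is
referring to uses a small on-stack bio_vec array when the request only
spans a few pages, and falls back to a kmalloc()ed array otherwise.
Below is a minimal user-space sketch of that pattern; the names and
the inline count of 4 are approximations of the 4.17-era
fs/block_dev.c code, not copied from it:

/*
 * Minimal user-space sketch of the inline-array-vs-heap pattern used by
 * the simple direct I/O path. Names and the inline count of 4 are only
 * approximations of the kernel code.
 */
#include <stdio.h>
#include <stdlib.h>

#define INLINE_VECS 4	/* stands in for DIO_INLINE_BIO_VECS */

struct vec { void *page; unsigned int len, off; };

/* Return the caller's on-stack array, or a heap array for larger requests. */
static struct vec *get_vecs(struct vec *inline_vecs, int nr_pages)
{
	if (nr_pages <= INLINE_VECS)
		return inline_vecs;			/* fast path: no allocation */
	return calloc(nr_pages, sizeof(struct vec));	/* the "kmalloc() code path" */
}

static void put_vecs(struct vec *vecs, struct vec *inline_vecs)
{
	if (vecs != inline_vecs)	/* only free what came from the heap */
		free(vecs);
}

static int do_one_io(int nr_pages)
{
	struct vec inline_vecs[INLINE_VECS];
	struct vec *vecs = get_vecs(inline_vecs, nr_pages);

	if (!vecs)
		return -1;
	printf("%2d pages -> %s\n", nr_pages,
	       vecs == inline_vecs ? "inline (stack)" : "heap (kmalloc-style)");
	/* ... build and submit the request using vecs[0..nr_pages-1] ... */
	put_vecs(vecs, inline_vecs);
	return 0;
}

int main(void)
{
	do_one_io(2);	/* small request: stays on the inline array */
	do_one_io(16);	/* larger request: exercises the heap branch */
	return 0;
}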

That's not a great suggestion at all; we need to find out why we're
hitting the issue. For all you know, the bug could be elsewhere, and
we'd just end up hitting it some other way. The head-in-the-sand
approach is rarely a win long term.

It saves an allocation per IO, which is definitely measurable on
faster storage. For reads, it also avoids a context switch for
dirtying pages. I'm not a huge fan of multiple code paths in general,
but this one is definitely warranted in an era where 1 usec is a lot
of extra time for an IO.
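
To put a very rough number on that, here is a quick user-space
microbenchmark of one heap allocation per "I/O" versus reusing an
on-stack array. The iteration count, the 16-segment size and the
struct layout are invented for illustration, and the absolute numbers
depend entirely on the allocator and CPU; it only shows the order of
magnitude relative to a ~1 usec I/O:

/*
 * Rough cost of a heap allocation per "I/O" vs. an on-stack array.
 * Illustrative only; the sizes and counts are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ITERS	1000000L
#define NVECS	16		/* assumed segments per I/O */

struct fake_vec { void *page; unsigned int len, off; };

static double ns_per_op(struct timespec a, struct timespec b)
{
	return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / ITERS;
}

int main(void)
{
	struct timespec t0, t1;
	volatile unsigned long sink = 0;

	/* heap array per iteration, like the kmalloc() fallback */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < ITERS; i++) {
		struct fake_vec *v = malloc(NVECS * sizeof(*v));
		if (!v)
			return 1;
		memset(v, 0, NVECS * sizeof(*v));
		sink += v[0].len;
		free(v);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("heap per op : %.1f ns\n", ns_per_op(t0, t1));

	/* on-stack array, like the inline_vecs fast path */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < ITERS; i++) {
		struct fake_vec v[NVECS];
		memset(v, 0, sizeof(v));
		sink += v[0].len;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("stack per op: %.1f ns\n", ns_per_op(t0, t1));

	(void)sink;
	return 0;
}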

> However, so far I've only identified a minor problem, see below - it
> doesn't explain the data corruption we're seeing.

What would help is trying to boil down a test case. So far it's a lot
of hand waving, and nothing that can really help narrow down what is
going on here.
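
Purely as a hypothetical starting point (not something from the
report): write a known pattern to a scratch block device with
O_DIRECT, read it back, and compare. The device path, the 64k transfer
size and the iteration count below are all assumptions; the size is a
guess at something that still takes the simple path but needs more
bio_vecs than the inline array, so the kmalloc() branch gets
exercised. Note that this overwrites the start of the target device:

/*
 * Hypothetical O_DIRECT write/read/compare loop. DEV, BUFSZ and the
 * iteration count are assumptions, not details from the report.
 * WARNING: destroys data at the start of DEV.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DEV	"/dev/sdX"	/* scratch device, assumption */
#define BUFSZ	(16 * 4096)	/* more pages than the inline vecs */
#define ALIGN	4096

int main(void)
{
	void *wbuf, *rbuf;
	int fd;

	if (posix_memalign(&wbuf, ALIGN, BUFSZ) ||
	    posix_memalign(&rbuf, ALIGN, BUFSZ))
		return 1;
	memset(wbuf, 0xa5, BUFSZ);

	fd = open(DEV, O_RDWR | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (int iter = 0; iter < 1000; iter++) {
		if (pwrite(fd, wbuf, BUFSZ, 0) != BUFSZ ||
		    pread(fd, rbuf, BUFSZ, 0) != BUFSZ) {
			perror("io");
			return 1;
		}
		if (memcmp(wbuf, rbuf, BUFSZ)) {
			fprintf(stderr, "miscompare on iteration %d\n", iter);
			return 1;
		}
	}
	printf("no miscompare seen\n");
	close(fd);
	return 0;
}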

-- 
Jens Axboe
