At 2024-09-24 19:30:44, "Kent Overstreet" <[email protected]> wrote:
>On Tue, Sep 24, 2024 at 07:08:07PM GMT, David Wang wrote:
>> Hi, 
>> 
>> At 2024-09-07 18:34:37, "David Wang" <[email protected]> wrote:
>> >At 2024-09-07 01:38:11, "Kent Overstreet" <[email protected]> wrote:
>> >>That's because checksums are at extent granularity, not block: if you're
>> >>doing O_DIRECT reads that are smaller than the writes the data was
>> >>written with, performance will be bad because we have to read the entire
>> >>extent to verify the checksum.
>> >
>> >
>> 
>> >Based on the result:
>> >1. The row with prepare-write size 4K stands out here.
>> >When files were prepared with a 4K write size, the subsequent
>> > read performance is worse.  (I did double-check the result,
>> >but it is possible that I missed some affecting factors.);
>> >2. Without O_DIRECT, read performance seems correlated with the difference
>> > between read size and prepare-write size, but with O_DIRECT, the
>> > correlation is not obvious.
>> >
>> >And, to mention it again, if I overwrite the files **thoroughly** with an
>> >fio write test (using the same size), the read performance afterwards
>> >would be very good:
>> >
>> >
>> 
>> Updated with some IO patterns (bio start address and size, in sectors;
>> alignment taken as address &= -address) observed between bcachefs and
>> the block layer:
>> 
>> 4K-Direct-Read a file created by loop of `write(fd, buf, 1024*4)`:
>
>You're still testing small reads to big extents. Flip off data
>checksumming if you want to test that, or wait for block granular
>checksums to land.
>
>I already explained what's going on, so this isn't very helpful.

Hi, 

I do understand it now; sorry for the bother.
Mostly I wanted to explain to myself where the difference comes from....

Besides that, I just want to mention that some of the IOs are only
1 sector in size, which seems strange to me...


David
