Dear Kim:
Thanks for your quick reply!

I applied the two bug fixes on my local branch, but I still could not
pass my test of generating and reading a heavily fragmented file.

The root cause is that the current code also treats hole blocks as
mapped blocks and mistakenly increments read_pages_pending, resulting
in a task hang in readahead.

Inside f2fs_map_blocks():

/* DIO READ and hole case, should not map the blocks. */
if (!(flag == F2FS_GET_BLOCK_DIO && is_hole && !map->m_may_create))
        map->m_flags |= F2FS_MAP_MAPPED;

in a non-DIO context with no block creation, it marks map->m_flags
with F2FS_MAP_MAPPED even when the blkaddr is NULL_ADDR or NEW_ADDR;
the only holes that escape this are those mapped to an unallocated
dnode.

Personally, I think a better fix is to add a helper function,
f2fs_block_needs_zeroing(), which returns true when the current
blkaddr is NULL_ADDR or NEW_ADDR.
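
Roughly, something like this (just a sketch; the helper name is my
proposal, taking the struct f2fs_map_blocks pointer anticipates the
parameter discussion below, and checking m_pblk is my assumption about
where the hole's blkaddr ends up):

/*
 * A hole (NULL_ADDR) or an allocated-but-never-written block (NEW_ADDR)
 * has no on-disk data to read, so the page should be zeroed instead of
 * being submitted for read I/O.
 */
static inline bool f2fs_block_needs_zeroing(struct f2fs_map_blocks *map)
{
        return map->m_pblk == NULL_ADDR || map->m_pblk == NEW_ADDR;
}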

Then we can reverse the order of the checks under the got_it: label:
first `if (f2fs_block_needs_zeroing()) ...`, and then `else if
(map->m_flags & F2FS_MAP_MAPPED)`, while keeping all the logic inside
those statements unchanged.
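
Only as a structural sketch (the branch bodies are the existing ones
and stay as they are):

got_it:
        if (f2fs_block_needs_zeroing(map)) {
                /* current hole handling (zero the page, skip read I/O), unchanged */
        } else if (map->m_flags & F2FS_MAP_MAPPED) {
                /* current mapped-block handling, unchanged */
        }

This way a hole can never fall into the mapped-block path, even when
F2FS_MAP_MAPPED was wrongly set on it.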

For the parameters of f2fs_block_needs_zeroing(), I think we can pass
`struct f2fs_map_blocks` directly, because it already contains all the
information we need. This signature would also be easier to extend if
we later want to batch contiguous physical block mappings and bio
additions inside the loop.

If you think this approach makes sense, I can send a patch to fix all
three bugs. Thank you.

Best regards,
Nanzhe Zhao