Besides, I noticed that you now convert fio->page to folio. I personally
think it's a bold (maybe too bold) decision.
The fio structure has a strong assumption that it performs write I/O in
single-page units. It only stores one old_blkaddr and
one new_blkaddr associated with fio->page, without any awareness of the
page's index or file position. And all of f2fs's
submit-write logic directly uses bio_add_page to add fio->page to the
bio. If we convert fio->page to fio->folio, then how do we know the
exact part of the folio we are writing and adding to the bio? Maybe
we should also store the corresponding folio subpage's index in
fio? Or did I miss something in your newest patch?


On Sat, Jul 12, 2025 at 21:02, Nanzhe Zhao <nzzhao.si...@gmail.com> wrote:
>
> Oh, I'm sorry, I forgot to put my GitHub link in the last email. But I
> haven't prepared an English document describing my code design. Anyway,
> are you interested in my work?
>
>
On Sat, Jul 12, 2025 at 19:39, Nanzhe Zhao <nzzhao.si...@gmail.com> wrote:
>>
>> Dear Mr. Matthew,
>> Hi! It's been a long time since we last discussed f2fs support for
>> large folios. I hope you're doing well!
>> Over the past three months, I've been working on large folio support
>> in my own fork of the f2fs tree. I've made significant progress
>> and have a working implementation of:
>> - f2fs's own per folio struct f2fs_iomap_folio_state
>> - iomap-based buffered read and write.
>> - Large folios support for compressed files, including both buffered
>> I/O and page writeback.
>> My work is based on several commits just after your "f2fs folio
>> conversions for 6.16" series on f2fs's dev-test branch (not the
>> mainline). It can handle some simple read/write operations for
>> both normal and compressed files, but it is still largely untested.
>> You can find my WIP branch here:
>> I saw your recent series of patches for large folios support and was
>> excited to see the progress. I'm writing to you today to share an
>> update from my side and ask for some guidance.
>> Regarding our previous discussion about storing extra f2fs flags in
>> folio->private, I implemented a solution using a new
>> f2fs_iomap_folio_state struct and related APIs, which I placed in new
>> files (f2fs_ifs.c/.h). My design not only supports large folios but
>> also maintains compatibility with order-0 data and metadata folios by
>> storing the f2fs private flags directly in folio->private; no
>> iomap_folio_state is allocated for them. The memory layout of my
>> struct is also compatible with iomap's helpers. Now this piece of
>> code conflicts with your latest patches that introduce
>> folio_set_f2fs_data. I assume the standard kernel development workflow
>> would be for me to rebase my local branch onto your latest commit and
>> then refactor my code to align with your new API. Is that correct? I
>> am more than happy to do so and adapt my implementation.
>> On a related note, I recently learned that storage engineers from vivo
>> also implemented iomap buffered write and page writeback conversions
>> for f2fs last year. (The latter shocked me, and I'll explain why
>> in a future conversation.) Their work didn't seem to include support
>> for compressed-file large folios. It seems necessary for me to
>> coordinate with them.
>> Looking ahead, I understand that a feature of this size should be
>> submitted as a series of small, logical patches to make the review
>> process easier. I would be grateful for any thoughts you might have on
>> this approach as well.
>>
>> Any feedback on my work or advice on how to proceed would be greatly
>> appreciated.
>>
>> Thanks for your time.
>>
>> Best regards.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
