Hi Yunlei, Wei Fang,

On 2018/1/31 15:32, Yunlei He wrote:
> This patch includes two changes:
> 1. We introduce a NAT journal to cache NAT blocks which have few
> dirty NAT entries; NAT entry sets are ranked by dirty NAT entry
> count if the count is less than NAT_JOURNAL_ENTRIES. This method
> breaks the continuity of the NAT block writeback order, so this
> patch changes that to flush contiguous NAT blocks first.
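If I read the description correctly, the intended policy is roughly the
toy model below (plain userspace C with made-up names and sample numbers,
just to state my understanding -- this is not the patch itself): sets
with few dirty entries are journaled until the journal budget runs out,
and the remaining sets are written back as full NAT blocks in set-index
order so that contiguous blocks go out together.

#include <stdio.h>
#include <stdlib.h>

/* illustrative budget; the real NAT_JOURNAL_ENTRIES depends on layout */
#define JOURNAL_BUDGET 38

struct nat_set { unsigned int set; unsigned int dirty; };

static int by_dirty(const void *a, const void *b)
{
	const struct nat_set *x = a, *y = b;
	return (x->dirty > y->dirty) - (x->dirty < y->dirty);
}

static int by_set(const void *a, const void *b)
{
	const struct nat_set *x = a, *y = b;
	return (x->set > y->set) - (x->set < y->set);
}

int main(void)
{
	struct nat_set sets[] = { {7, 2}, {8, 1}, {3, 40}, {4, 39}, {20, 5} };
	int n = sizeof(sets) / sizeof(sets[0]), budget = JOURNAL_BUDGET, i, j;

	/* rank sets by dirty entry count; small sets are journal candidates */
	qsort(sets, n, sizeof(sets[0]), by_dirty);
	for (i = 0; i < n && (int)sets[i].dirty <= budget; i++)
		budget -= sets[i].dirty;	/* these go to the NAT journal */

	/* remaining sets are written back as full NAT blocks; re-sort by
	 * set index so contiguous blocks are flushed together */
	qsort(sets + i, n - i, sizeof(sets[0]), by_set);

	for (j = 0; j < i; j++)
		printf("journal set %u (%u dirty)\n", sets[j].set, sets[j].dirty);
	for (j = i; j < n; j++)
		printf("flush   set %u (%u dirty)\n", sets[j].set, sets[j].dirty);
	return 0;
}

With the sample counts above, sets 8, 7 and 20 (8 dirty entries total)
fit in the journal, and sets 3 and 4 are flushed back to back.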
If we start to flush NAT blocks that are contiguous but each has only
a few dirty entries, we are facing bigger write amplification (e.g.
rewriting a whole 4KB NAT block to persist a single dirty entry),
which may hurt more than random 4KB NAT block writes do. Or do you
have numbers?

>
> 2. Add readahead of NAT blocks in case of a lot of serial
> synchronous reads, as below:

It looks like the LBAs are discrete? Also, can you check the comment
below?

> +	list_for_each_entry_safe(set, tmp, &sets, set_list) {
> +		block_t blk_addr;
> +
> +		nats_to_journal += set->entry_cnt;
> +
> +		if (nats_to_journal > NAT_JOURNAL_ENTRIES ||
> +				set->continue_io) {
> +			blk_addr = current_nat_addr(sbi,
> +				set->set * NAT_ENTRY_PER_BLOCK);

Actually, ra_meta_pages() accepts a NAT block offset, not a blkaddr.
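For META_NAT, ra_meta_pages() derives the on-disk address itself via
current_nat_addr(), so I'd expect something like the line below instead
(an untested sketch based on my reading of ra_meta_pages(); the
readahead count of 1 is just a placeholder):

		/* pass the NAT block offset; ra_meta_pages() calls
		 * current_nat_addr() internally for META_NAT */
		ra_meta_pages(sbi, set->set, 1, META_NAT, true);

Thanks,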