Hi Ju Hyung,

On 2019/8/25 19:06, Ju Hyung Park wrote:
> Hi Chao,
>
> On Sat, Aug 24, 2019 at 12:52 AM Chao Yu <[email protected]> wrote:
>> It's not intentional, I failed to reproduce this issue, could you add
>> some logs to track why we stop urgent GC even when there are still
>> dirty segments?
>
> I'm pretty sure you can reproduce this issue quite easily.
Oh, I just noticed that the scope of my data sample was too small.

> I can see this happening on multiple devices including my workstation,
> laptop and my Android phone.
>
> Here's a simple reproduction procedure:
> 1. Do `rm -rf * && git reset --hard` a few times under a Linux kernel Git tree
> 2. Do a sync
> 3. echo 1 > /sys/fs/f2fs/dev/gc_urgent_sleep_time
> 4. echo 1 > /sys/fs/f2fs/dev/gc_urgent
> 5. Once the number on "GC calls" doesn't change, look at "Dirty" under
>    /sys/kernel/debug/f2fs/status. It's close to 0.
> 6. After doing a 'sync', "Dirty" increases a lot.
> 7. Remember the number on "GC calls" and run 3 and 4 again.
> 8. The number of "GC calls" increases by a few hundred.

Thanks for providing the test script.

I found out that after data block migration, the parent dnodes of the
migrated blocks become dirty, so once we execute step 6), some node
segments become dirty...

So after step 6), we can run 3), 4) and 6) again, and "Dirty" will be
close to zero; that's because node block migration does not dirty the
parent (indirect/double indirect) nodes.

Thanks,

> Thanks.

_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
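[Editor's note: the reproduction steps quoted above can be sketched as a small
shell script. This is only a sketch: the device name "sdb1", the DEV
environment override, and the 30-second settle wait are assumptions, not part
of the original report; replace "dev" in the sysfs path with your actual f2fs
block device name.]

```shell
#!/bin/sh
# Sketch of the urgent-GC reproduction steps from the thread above.
# "sdb1" is a placeholder device name; override with DEV=<your-dev>.
DEV="${DEV:-sdb1}"
SYSFS="/sys/fs/f2fs/$DEV"
STATUS="/sys/kernel/debug/f2fs/status"

repro() {
    sync                                      # step 2: flush dirty data
    echo 1 > "$SYSFS/gc_urgent_sleep_time"    # step 3: minimal GC sleep
    echo 1 > "$SYSFS/gc_urgent"               # step 4: enable urgent GC
    sleep 30                                  # wait until "GC calls" settles (assumed interval)
    grep -E 'GC calls|Dirty' "$STATUS"        # step 5: "Dirty" should be near 0
    sync                                      # step 6: "Dirty" increases a lot
    grep -E 'GC calls|Dirty' "$STATUS"        # steps 7-8: compare "GC calls" again
}

if [ -d "$SYSFS" ]; then
    repro
else
    echo "skip: no f2fs sysfs entry for $DEV"
fi
```

Running it twice (per steps 7 and 8) shows "GC calls" climbing by a few
hundred after each sync, even though urgent GC had already gone idle.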
