These patches are designed to optimize the NAT/SIT flushing procedure:
patch 1) -- patch 3):
during flush_nat_entries, we:
1) gang_lookup the radix tree to find all the dirty nat_entry_sets;
2) sort the nat_entry_sets by nat_entry_set->entry_cnt, in order to
   write as much as possible to the journal and avoid unnecessary IO.
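For illustration only, a user-space sketch of this two-step flow (the struct,
values and names below are made up, not the actual f2fs structures): the dirty
sets are collected and then sorted in ascending entry_cnt order, so the
smallest sets get the first chance to go to the journal.

#include <stdio.h>
#include <stdlib.h>

/* stand-in for nat_entry_set; only the field used for ordering is kept */
struct nat_set {
	unsigned int entry_cnt;		/* dirty NAT entries in this set */
};

/* ascending entry_cnt, so small sets get a chance to go to the journal */
static int cmp_entry_cnt(const void *a, const void *b)
{
	const struct nat_set *sa = a, *sb = b;

	return (int)sa->entry_cnt - (int)sb->entry_cnt;
}

int main(void)
{
	/* pretend these sets came back from the radix-tree gang lookup */
	struct nat_set sets[] = { {7}, {2}, {5}, {1} };
	size_t i, nr = sizeof(sets) / sizeof(sets[0]);

	qsort(sets, nr, sizeof(sets[0]), cmp_entry_cnt);

	for (i = 0; i < nr; i++)
		printf("flush set with %u entries\n", sets[i].entry_cnt);
	return 0;
}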
This patch optimizes the lookup & sort algorithm by introducing an array of
buckets keyed by entry_cnt.
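A minimal sketch of that idea, assuming sets can be grouped by their entry_cnt
(MAX_CNT, nat_set and add_dirty_set are illustrative names, not f2fs code):
each dirty set is linked into the bucket matching its entry_cnt, so walking the
buckets from 0 upward already visits the sets in sorted order, with no tree
lookup and no list sort.

#include <stdio.h>
#include <stdlib.h>

#define MAX_CNT 8	/* illustrative cap on entries per set */

struct nat_set {
	unsigned int entry_cnt;
	struct nat_set *next;		/* link inside one bucket */
};

/* bucket[i] collects every dirty set that holds exactly i entries */
static struct nat_set *bucket[MAX_CNT + 1];

static void add_dirty_set(struct nat_set *set)
{
	set->next = bucket[set->entry_cnt];	/* O(1), no search, no sort */
	bucket[set->entry_cnt] = set;
}

int main(void)
{
	unsigned int cnts[] = { 7, 2, 5, 1, 2 };
	struct nat_set *s;
	size_t i;

	for (i = 0; i < sizeof(cnts) / sizeof(cnts[0]); i++) {
		s = malloc(sizeof(*s));
		s->entry_cnt = cnts[i];
		add_dirty_set(s);
	}

	/* buckets visited from the smallest count up: already sorted */
	for (i = 0; i <= MAX_CNT; i++)
		for (s = bucket[i]; s; s = s->next)
			printf("flush set with %u entries\n", s->entry_cnt);
	return 0;
}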
Currently, we dynamically allocate sit_entry_set from a slab and link all the
sit_entry_sets together via sit_entry_set->set_list.
This is inefficient, since in add_sit_entry we may need to traverse the whole
list to find the target sit_entry_set.
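The cost being described is roughly the linear search below (a simplified
model; find_set, sit_blk and the constant are illustrative, not the f2fs
code):

#include <stddef.h>

#define SIT_ENTRY_PER_BLOCK 55	/* illustrative: segments covered per SIT block */

struct sit_set {
	unsigned int sit_blk;		/* SIT block this set belongs to */
	unsigned int entry_cnt;
	struct sit_set *next;		/* set_list analogue */
};

/* one shared list of dirty sets: finding a segment's set is O(list length) */
static struct sit_set *find_set(struct sit_set *head, unsigned int segno)
{
	unsigned int sit_blk = segno / SIT_ENTRY_PER_BLOCK;
	struct sit_set *s;

	for (s = head; s; s = s->next)	/* may walk the entire list */
		if (s->sit_blk == sit_blk)
			return s;
	return NULL;	/* caller would then allocate a new set from the slab */
}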
This patch fixes this by introducing a static array in f2fs_sm_info.
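A minimal sketch of the array-based replacement, assuming the array is indexed
by SIT block number (the names below are hypothetical, not the actual field
added to f2fs_sm_info): the set for any segment becomes an O(1) array index
instead of a list walk.

#define SIT_ENTRY_PER_BLOCK 55	/* illustrative */
#define NR_SIT_BLOCKS 1024	/* illustrative: total segments / SIT_ENTRY_PER_BLOCK */

struct sit_set {
	unsigned int entry_cnt;
	int dirty;
};

/* hypothetical static table that replaces the slab objects + set_list */
static struct sit_set sit_sets[NR_SIT_BLOCKS];

static struct sit_set *get_set(unsigned int segno)
{
	return &sit_sets[segno / SIT_ENTRY_PER_BLOCK];	/* O(1) */
}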
There is no need to do the __has_cursum_space check every time; this patch
fixes that.
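Since the exact change is not shown here, the sketch below only illustrates
the general idea with invented names: compute once, before flushing a set,
whether the journal has room for all of its entries, instead of re-checking
the remaining space on every iteration.

/* invented helper: room left for 'want' more entries in a journal of 'cap' slots */
static int journal_has_room(unsigned int used, unsigned int want, unsigned int cap)
{
	return used + want <= cap;
}

/*
 * Checked once per set:
 *
 *	int to_journal = journal_has_room(used, set_entry_cnt, cap);
 *
 * then every entry in the set goes to the journal or back to its NAT/SIT
 * block based on that single decision, rather than re-testing in the loop.
 */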
Signed-off-by: Hou Pengyang
---
fs/f2fs/node.c | 27 ++-
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 340c33d..58b4f7a 100644
--- a/fs/f2fs/node.c
This patch restructures the SIT flushing code for the following patch, with
the logic unchanged.
Signed-off-by: Hou Pengyang
---
fs/f2fs/segment.c | 119 +-
1 file changed, 64 insertions(+), 55 deletions(-)
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
Like we optimized NAT flushing with bucket sort, in this patch we introduce a
bucket array, f2fs_sm_info->dirty, to link all the sit_entry_sets to be
flushed by their entry_cnt.
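A rough sketch of the field being described, with placeholder names and a
locally defined list_head so it stands alone (the real declaration lives in
fs/f2fs/f2fs.h): dirty[i] heads the list of sets that currently carry i dirty
entries, and flushing walks the buckets from 0 upward exactly as in the NAT
sketch above.

struct list_head {
	struct list_head *next, *prev;
};

#define MAX_SET_ENTRIES 55	/* illustrative: max entries one set can hold */

/* placeholder for the segment-manager-side bucket array described above */
struct sm_info_sketch {
	/* dirty[i]: all entry sets that currently hold exactly i dirty entries */
	struct list_head dirty[MAX_SET_ENTRIES + 1];
};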
Signed-off-by: Hou Pengyang
---
fs/f2fs/f2fs.h    |  1 +
fs/f2fs/segment.c | 53 +-