Add a zone device priority option to the mount options. When enabled, the
file system will prioritize using free space on zoned devices instead of
conventional devices when writing to the end of the storage space.
Signed-off-by: Liao Yuanhong
---
fs/f2fs/f2fs.h    |  1 +
fs/f2fs/segment.c | 13 ++
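The excerpt above is cut off before the diff, but the allocation preference it describes can be modelled in a few lines. The sketch below is a minimal userspace C model under that reading, not the actual f2fs change; the dev_info struct, its fields, and pick_device() are all hypothetical names.

#include <stdbool.h>
#include <stdio.h>

struct dev_info {
	const char *name;
	bool is_zoned;            /* true for the zoned UFS portion */
	unsigned long free_segs;  /* free segments left on this device */
};

/*
 * With the option enabled, prefer a zoned device that still has free
 * space; otherwise fall back to the first device with any free space.
 */
static int pick_device(const struct dev_info *devs, int ndevs, bool zone_priority)
{
	int i;

	if (zone_priority) {
		for (i = 0; i < ndevs; i++)
			if (devs[i].is_zoned && devs[i].free_segs)
				return i;
	}
	for (i = 0; i < ndevs; i++)
		if (devs[i].free_segs)
			return i;
	return -1; /* no space anywhere */
}

int main(void)
{
	const struct dev_info devs[] = {
		{ "conventional", false, 100  },
		{ "zoned",        true,  5000 },
	};

	printf("chosen: %s\n", devs[pick_device(devs, 2, true)].name);
	return 0;
}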
For zoned UFS, the sector size may not be aligned to a power of two, so we
need to remove the power-of-two limitation.
Signed-off-by: Liao Yuanhong
---
drivers/md/dm-table.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 41f1d731ae5a..823f2f6a2d53 100644
--- a/drivers/md/dm-table.c
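The hunk itself is truncated here. As a rough illustration of the kind of restriction the description says is being dropped, the following userspace C sketch shows a generic power-of-two validation; it is not the real dm-table.c code, and validate_logical_block_size() is a made-up name.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/*
 * Before: reject a logical block size that is not a power of two.
 * After a change like the one described above, such sizes would no
 * longer be rejected at this point.
 */
static int validate_logical_block_size(uint32_t lbs)
{
	if (!is_power_of_2(lbs))
		return -1;	/* the kind of check the patch removes */
	return 0;
}

int main(void)
{
	/* 6144 is not a power of two */
	printf("%d\n", validate_logical_block_size(6144));
	return 0;
}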
Currently, when we allocate a swap file on zoned UFS, the file will be
created on conventional UFS. If the swap file size is not aligned with the
zone size, the last extent will enter f2fs_migrate_blocks(), resulting in
significant additional I/O overhead and prolonged lock occupancy. In most
cases, …
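The message is cut off before the proposed fix, so the following is only an illustration of the alignment idea it hints at: sizing the swap file to a multiple of the zone size so the last extent does not fall into f2fs_migrate_blocks(). The helper name and the zone size are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Round a length up to the next multiple of the zone size. */
static uint64_t align_up(uint64_t len, uint64_t zone_size)
{
	return (len + zone_size - 1) / zone_size * zone_size;
}

int main(void)
{
	uint64_t zone_size = 16ULL << 20;	/* hypothetical 16 MiB zones */
	uint64_t req       = 100ULL << 20;	/* requested swap file size  */

	printf("aligned swap size: %llu MiB\n",
	       (unsigned long long)(align_up(req, zone_size) >> 20));
	return 0;
}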
Currently, we are using a mix of traditional UFS and zoned UFS to support
some functionality that cannot be achieved on zoned UFS alone. However,
there are some issues with this approach: there is a significant
performance difference between traditional UFS and zoned UFS. Under normal
usage, …
Right now, when a zoned UFS device gets close to running out of space and
starts FG_GC, the system continues to execute FG_GC even if there is only a
little dirty space available for reclamation. This can make everything else
slow down or simply hang.
Since the function for calculating remaining space operates …
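The description is truncated, but the gist is a gating check before foreground GC. The sketch below is a simplified userspace model under that assumption; the struct, watermark values, and should_run_fg_gc() are invented for illustration and do not reflect the actual f2fs logic.

#include <stdbool.h>
#include <stdio.h>

struct space_info {
	unsigned long free_secs;        /* free sections left           */
	unsigned long reclaimable_secs; /* sections FG_GC could recover */
};

static bool should_run_fg_gc(const struct space_info *si,
			     unsigned long low_free_watermark,
			     unsigned long min_reclaimable)
{
	if (si->free_secs >= low_free_watermark)
		return false;		/* not short on space yet */
	/* only keep running FG_GC if it can actually gain something */
	return si->reclaimable_secs >= min_reclaimable;
}

int main(void)
{
	const struct space_info si = { .free_secs = 3, .reclaimable_secs = 0 };

	printf("run FG_GC: %d\n", should_run_fg_gc(&si, 8, 1));	/* prints 0 */
	return 0;
}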
The f2fs-tools support manual configuration of the rsvd and ovp rates. In
cases where only a small rsvd is set, the automatically calculated ovp rate
can be very large, resulting in the reserved space of the entire file
system being almost the same as before, failing to achieve the goal of
reducing space …
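A rough back-of-the-envelope example may help show why a small manual rsvd alone does not shrink the reserved space if the ovp rate is still auto-calculated to a large value. The numbers and the simple reserved-space sum below are hypothetical and do not reproduce the real f2fs-tools calculation.

#include <stdio.h>

int main(void)
{
	double total_gib = 256.0;  /* device size, hypothetical            */
	double rsvd_gib  = 0.5;    /* small manually configured rsvd       */
	double ovp_ratio = 3.5;    /* auto-calculated ovp rate, in percent */

	double reserved = total_gib * ovp_ratio / 100.0 + rsvd_gib;

	printf("reserved ~= %.1f GiB; the auto ovp dominates the small rsvd\n",
	       reserved);
	return 0;
}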
During the development process, we encountered the following two issues:
1. In a multi-device scenario, it's likely that the two devices exhibit
inconsistent performance, causing fluctuations in performance and making
usage and testing inconvenient. Under normal circumstances, we hope to
prioritize the …
On 7/28/2025 3:56 PM, Chao Yu wrote:
On 7/23/25 16:49, Liao Yuanhong wrote:
During the development process, we encountered the following two issues:
1. In a multi-device scenario, it's likely that the two devices exhibit
inconsistent performance, causing fluctuations in performance and making
usage …
Currently, we have encountered some issues while testing ZUFS. In
situations near the storage limit (e.g., 50GB remaining), and after
simulating fragmentation by repeatedly writing and deleting data, we found
that application installation and startup tests conducted after idling for
a few minutes …
Introduces two new sysfs nodes: device_border_line and device_alloc_policy.
The device_border_line identifies the boundary between devices, measured
in sections; it defaults to the end of the device for single-storage
setups, and to the end of the first device for multiple-storage setups. The
device_alloc_policy …
On 8/7/2025 4:38 PM, Chao Yu wrote:
On 8/6/25 15:09, Liao Yuanhong wrote:
Currently, we have encountered some issues while testing ZUFS. In
situations near the storage limit (e.g., 50GB remaining), and after
simulating fragmentation by repeatedly writing and deleting data, we found
that application …
On 8/8/2025 4:51 PM, Chao Yu wrote:
On 8/8/2025 3:29 PM, Liao Yuanhong wrote:
Currently, we have encountered some issues while testing ZUFS. In
situations near the storage limit (e.g., 50GB remaining), and after
simulating fragmentation by repeatedly writing and deleting data, we found
that application …
Incorporate a check using has_enough_dirty_blocks() to prevent redundant
background GC in Zoned UFS. When there are insufficient dirty segments,
continuous execution of background GC should be avoided, as it results in
unnecessary write operations and impacts device lifespan. The initial
threshold …
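The threshold description is truncated, but the gating pattern is clear enough to sketch. The userspace model below uses invented names and a made-up 5% threshold; the real has_enough_dirty_blocks() proposed by the patch operates on f2fs's internal structures.

#include <stdbool.h>
#include <stdio.h>

static bool has_enough_dirty_blocks(unsigned long dirty_segs,
				    unsigned long total_segs,
				    unsigned int thresh_percent)
{
	return dirty_segs * 100 >= (unsigned long)thresh_percent * total_segs;
}

static void background_gc_tick(unsigned long dirty_segs, unsigned long total_segs)
{
	if (!has_enough_dirty_blocks(dirty_segs, total_segs, 5 /* hypothetical */)) {
		printf("skip BG GC: too few dirty segments to reclaim\n");
		return;
	}
	printf("run BG GC\n");
}

int main(void)
{
	background_gc_tick(10, 4096);	/* about 0.2% dirty -> skipped */
	background_gc_tick(500, 4096);	/* about 12% dirty  -> runs    */
	return 0;
}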
When the proportion of dirty segments within a section exceeds the
valid_thresh_ratio, the gc_cost of that section is set to UINT_MAX,
indicating that these sections should not be released. However, if all
section costs within the scanning range of get_victim() are UINT_MAX,
background GC will still …
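Following the wording above, a minimal model of the cost assignment and of the "all candidates are UINT_MAX" case might look like the sketch below. The function name, the placeholder cost returned for eligible sections, and the 80% threshold are illustrative only, not the f2fs get_victim() code.

#include <limits.h>
#include <stdio.h>

static unsigned int section_gc_cost(unsigned int dirty_segs,
				    unsigned int segs_per_sec,
				    unsigned int valid_thresh_ratio)
{
	/* per the description: above the threshold, give the section the
	 * worst possible cost so it is never picked as a victim */
	if (dirty_segs * 100 > valid_thresh_ratio * segs_per_sec)
		return UINT_MAX;
	return dirty_segs;	/* placeholder cost; the real model differs */
}

int main(void)
{
	unsigned int dirty[3] = { 500, 480, 510 };	/* all above an 80% threshold */
	unsigned int min_cost = UINT_MAX;
	int i;

	for (i = 0; i < 3; i++) {
		unsigned int cost = section_gc_cost(dirty[i], 512, 80);

		if (cost < min_cost)
			min_cost = cost;
	}
	if (min_cost == UINT_MAX)
		printf("no victim found; background GC should back off\n");
	return 0;
}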
While testing Zoned UFS, I discovered that the background GC results in
excessive write operations. I wrote a script to capture the data, as shown
below:
Timestamp         Free_Sections   BG_GC_Calls   Dirty_Segment
2025/9/8 19:04    433             0             935   <-- begin
...
On 8/25/2025 11:56 AM, Chao Yu wrote:
On 8/25/25 11:42, Liao Yuanhong wrote:
On 8/25/2025 11:10 AM, Chao Yu wrote:
Yuanhong,
On 8/20/25 16:21, Liao Yuanhong wrote:
Introduces two new sysfs nodes: allocate_section_hint and
allocate_section_policy. The allocate_section_hint identifies the boundary …
On 8/25/2025 11:10 AM, Chao Yu wrote:
Yuanhong,
On 8/20/25 16:21, Liao Yuanhong wrote:
Introduces two new sysfs nodes: allocate_section_hint and
allocate_section_policy. The allocate_section_hint identifies the boundary
between devices, measured in sections; it defaults to the end of the device …
Introduces two new sysfs nodes: allocate_section_hint and
allocate_section_policy. The allocate_section_hint identifies the boundary
between devices, measured in sections; it defaults to the end of the device
for single-storage setups, and to the end of the first device for multiple-storage
setups. The …
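The description is cut off again, but based on the node names a store-handler-style validation can be sketched. The userspace stand-in below assumes the hint is a section number bounded by the total section count; the function name and validation rules are guesses, not the patch's actual sysfs handler.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int set_allocate_section_hint(const char *buf,
				     unsigned long total_sections,
				     unsigned long *hint_out)
{
	char *end;
	unsigned long val = strtoul(buf, &end, 10);

	if (end == buf || val > total_sections)
		return -EINVAL;	/* reject non-numeric or out-of-range input */
	*hint_out = val;
	return 0;
}

int main(void)
{
	unsigned long hint = 0;
	int ret = set_allocate_section_hint("1024", 4096, &hint);

	printf("ret=%d hint=%lu\n", ret, hint);
	return 0;
}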
On 8/28/2025 10:10 AM, Chao Yu wrote:
On 8/26/25 22:05, Liao Yuanhong wrote:
Introduces two new sysfs nodes: allocate_section_hint and
allocate_section_policy. The allocate_section_hint identifies the boundary
between devices, measured in sections; it defaults to the end of the device
for single …
On 9/17/2025 3:57 PM, Chao Yu wrote:
On 9/17/25 15:08, Liao Yuanhong wrote:
On 9/15/2025 4:36 PM, Chao Yu wrote:
On 9/9/25 21:44, Liao Yuanhong wrote:
When the proportion of dirty segments within a section exceeds the
valid_thresh_ratio, the gc_cost of that section is set to UINT_MAX,
indicating …
On 9/16/2025 10:28 AM, Jaegeuk Kim wrote:
Could you please share some trends of the relation between
has_enough_free_blocks() vs. has_enough_dirty_blocks()? I'm wondering
whether there's a missing case where has_enough_free_blocks() is not enough.
Sure. I will find some time to test the data and …
On 9/15/2025 4:36 PM, Chao Yu wrote:
On 9/9/25 21:44, Liao Yuanhong wrote:
When the proportion of dirty segments within a section exceeds the
valid_thresh_ratio, the gc_cost of that section is set to UINT_MAX,
indicating that these sections should not be released. However, if all
section costs …
On 9/18/2025 10:16 AM, Chao Yu wrote:
On 9/17/25 16:13, Liao Yuanhong wrote:
On 9/17/2025 3:57 PM, Chao Yu wrote:
On 9/17/25 15:08, Liao Yuanhong wrote:
On 9/15/2025 4:36 PM, Chao Yu wrote:
On 9/9/25 21:44, Liao Yuanhong wrote:
When the proportion of dirty segments within a section exceeds …