Re: [f2fs-dev] [PATCH v2] common/quota: update keywords of quota feature in _require_prjquota() for f2fs

2024-04-16 Thread Zorro Lang
On Tue, Apr 16, 2024 at 06:16:50PM +0800, Chao Yu wrote:
> Previously, in f2fs, the quota sysfile feature had different names:
> - "quota" in mkfs.f2fs
> - and "quota_ino" in dump.f2fs
> 
> Now, the name has been unified to "quota" since commit 92cc5edeb7
> ("f2fs-tools: reuse feature_table to clean up print_sb_state()").
> 
> We need to update the keyword to "quota" in _require_prjquota() for f2fs;
> otherwise, quota testcases will fail as below.
> 
> generic/383 1s ... [not run] quota sysfile not enabled in this device /dev/vdc
> 
> This patch keeps the keyword "quota_ino" in _require_prjquota() for
> compatibility with old f2fs-tools.
> 
> Cc: Jaegeuk Kim 
> Signed-off-by: Chao Yu 
> ---
> v2:
> - keep keywords "quota_ino" for compatibility of old f2fs-tools
> suggested by Zorro Lang.

This version looks good to me, thanks

Reviewed-by: Zorro Lang 

>  common/quota | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/common/quota b/common/quota
> index 6b529bf4..4c1d3dcd 100644
> --- a/common/quota
> +++ b/common/quota
> @@ -145,7 +145,7 @@ _require_prjquota()
>  if [ "$FSTYP" == "f2fs" ]; then
>   dump.f2fs $_dev 2>&1 | grep -qw project_quota
>   [ $? -ne 0 ] && _notrun "Project quota not enabled in this device $_dev"
> - dump.f2fs $_dev 2>&1 | grep -qw quota_ino
> + dump.f2fs $_dev 2>&1 | grep -Eqw "quota|quota_ino"
>   [ $? -ne 0 ] && _notrun "quota sysfile not enabled in this device $_dev"
>   cat /sys/fs/f2fs/features/project_quota | grep -qw supported
>   [ $? -ne 0 ] && _notrun "Installed kernel does not support project 
> quotas"
> -- 
> 2.40.1
> 



___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
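The `-w` distinction behind this fix is easy to verify: with `grep -w`, the pattern `quota` does not match `quota_ino` (the underscore is a word character), so the extended pattern accepts both old and new dump.f2fs output. A small sketch with simulated feature lines:

```shell
#!/bin/sh
# Simulated dump.f2fs feature lines (old vs. new f2fs-tools output).
old='project_quota quota_ino extra_attr'
new='project_quota quota extra_attr'

# The pre-patch check matches only the old tools' output.
echo "$old" | grep -qw quota_ino && echo "old-tools: quota_ino matched"
echo "$new" | grep -qw quota_ino || echo "new-tools: quota_ino NOT matched"

# The fixed check from the patch matches both.
for line in "$old" "$new"; do
    echo "$line" | grep -Eqw "quota|quota_ino" && echo "matched: $line"
done
```

Note that plain `grep -qw quota` would still have failed on old tools, which is exactly why the v2 patch keeps both keywords in the pattern.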


Re: [f2fs-dev] [PATCH v3] f2fs: zone: don't block IO if there is remained open zone

2024-04-16 Thread Chao Yu

On 2024/4/17 0:51, Jaegeuk Kim wrote:

On 04/16, Chao Yu wrote:

On 2024/4/15 22:01, Chao Yu wrote:

On 2024/4/15 11:26, Chao Yu wrote:

On 2024/4/14 23:19, Jaegeuk Kim wrote:

It seems this caused kernel hang. Chao, have you tested this patch enough?


Jaegeuk,

Oh, I've checked this patch w/ fsstress before submitting it, but missed
the SPO testcase... did you encounter the kernel hang w/ an SPO testcase?


I didn't see any hang issue w/ the por_fsstress testcase; which testcase do you use?


Sorry, I mean I haven't reproduced it yet...


I'd prefer to check this patch later. Have you tested on Zoned device with
nullblk?


Yes, I enabled the blkzoned feature w/ a nullblk device, and set
/sys/kernel/config/nullb/nullb0/zone_max_open to six, so that it can
emulate a ZUFS configuration.

Thanks,
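For reference, the nullblk setup Chao describes can be scripted through configfs along these lines. This is a dry-run sketch that only prints the command sequence; the attribute names follow the kernel's null_blk documentation, the sizes (in MB) are illustrative, and actually applying it requires root:

```shell
#!/bin/sh
# Print the configfs steps for a zoned nullblk device with
# zone_max_open=6, matching the ZUFS-like setup described above.
NULLB=/sys/kernel/config/nullb/nullb0

nullblk_zoned_cmds() {
    cat <<EOF
modprobe null_blk nr_devices=0
mkdir -p $NULLB
echo 4096 > $NULLB/blocksize
echo 1024 > $NULLB/size
echo 1 > $NULLB/zoned
echo 64 > $NULLB/zone_size
echo 6 > $NULLB/zone_max_open
echo 1 > $NULLB/memory_backed
echo 1 > $NULLB/power
EOF
}

# Dry run: show the steps instead of executing them.
nullblk_zoned_cmds
```

Piping the output through `sudo sh` (after review) would create /dev/nullb0 ready for mkfs.f2fs testing.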





Thanks,



Thanks,



Anyway, let me test it more.

Thanks,



On 04/13, Chao Yu wrote:

On 2024/4/13 5:11, Jaegeuk Kim wrote:

On 04/07, Chao Yu wrote:

The max open zone count may be larger than the number of f2fs log headers;
in that case, there is no need to wait for the last IO in the previous zone.
Let's introduce an available_open_zones semaphore: decrease it once we
submit the first write IO in a zone, and increase it after completion
of the last IO in the zone.

Cc: Daeho Jeong 
Signed-off-by: Chao Yu 
---
v3:
- avoid race condition in between __submit_merged_bio()
and __allocate_new_segment().
    fs/f2fs/data.c    | 105 ++
    fs/f2fs/f2fs.h    |  34 ---
    fs/f2fs/iostat.c  |   7 
    fs/f2fs/iostat.h  |   2 +
    fs/f2fs/segment.c |  43 ---
    fs/f2fs/segment.h |  12 +-
    fs/f2fs/super.c   |   2 +
    7 files changed, 156 insertions(+), 49 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 0d88649c60a5..18a4ac0a06bc 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -373,11 +373,10 @@ static void f2fs_write_end_io(struct bio *bio)
    #ifdef CONFIG_BLK_DEV_ZONED
    static void f2fs_zone_write_end_io(struct bio *bio)
    {
-    struct f2fs_bio_info *io = (struct f2fs_bio_info *)bio->bi_private;
+    struct f2fs_sb_info *sbi = iostat_get_bio_private(bio);
-    bio->bi_private = io->bi_private;
-    complete(&io->zone_wait);
    f2fs_write_end_io(bio);
+    up(&sbi->available_open_zones);
    }
    #endif
@@ -531,6 +530,24 @@ static void __submit_merged_bio(struct f2fs_bio_info *io)
    if (!io->bio)
    return;
+#ifdef CONFIG_BLK_DEV_ZONED
+    if (io->open_zone) {
+    /*
+ * if there is no open zone, it will wait for last IO in
+ * previous zone before submitting new IO.
+ */
+    down(&fio->sbi->available_open_zones);
+    io->open_zone = false;
+    io->zone_openned = true;
+    }
+
+    if (io->close_zone) {
+    io->bio->bi_end_io = f2fs_zone_write_end_io;
+    io->zone_openned = false;
+    io->close_zone = false;
+    }
+#endif
+
    if (is_read_io(fio->op)) {
    trace_f2fs_prepare_read_bio(io->sbi->sb, fio->type, io->bio);
    f2fs_submit_read_bio(io->sbi, io->bio, fio->type);
@@ -601,9 +618,9 @@ int f2fs_init_write_merge_io(struct f2fs_sb_info *sbi)
    INIT_LIST_HEAD(&sbi->write_io[i][j].bio_list);
    init_f2fs_rwsem(&sbi->write_io[i][j].bio_list_lock);
    #ifdef CONFIG_BLK_DEV_ZONED
-    init_completion(&sbi->write_io[i][j].zone_wait);
-    sbi->write_io[i][j].zone_pending_bio = NULL;
-    sbi->write_io[i][j].bi_private = NULL;
+    sbi->write_io[i][j].open_zone = false;
+    sbi->write_io[i][j].zone_openned = false;
+    sbi->write_io[i][j].close_zone = false;
    #endif
    }
    }
@@ -634,6 +651,31 @@ static void __f2fs_submit_merged_write(struct f2fs_sb_info 
*sbi,
    f2fs_up_write(&io->io_rwsem);
    }
+void f2fs_blkzoned_submit_merged_write(struct f2fs_sb_info *sbi, int type)
+{
+#ifdef CONFIG_BLK_DEV_ZONED
+    struct f2fs_bio_info *io;
+
+    if (!f2fs_sb_has_blkzoned(sbi))
+    return;
+
+    io = sbi->write_io[PAGE_TYPE(type)] + type_to_temp(type);
+
+    f2fs_down_write(&io->io_rwsem);
+    if (io->zone_openned) {
+    if (io->bio) {
+    io->close_zone = true;
+    __submit_merged_bio(io);
+    } else if (io->zone_openned) {
+    up(&sbi->available_open_zones);
+    io->zone_openned = false;
+    }
+    }
+    f2fs_up_write(&io->io_rwsem);
+#endif
+
+}
+
    static void __submit_merged_write_cond(struct f2fs_sb_info *sbi,
    struct inode *inode, struct page *page,
    nid_t ino, enum page_type type, bool force)
@@ -918,22 +960,16 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
    }
    #ifdef CONFIG_BLK_DEV_ZONED
-static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
+static bool is_blkaddr_zone_boundary(struct f2fs_sb_info *sbi,
+    block_t blkaddr, bool start)
    {
-    int devi = 0;
+    if (!f2fs_blkaddr_in_seqzone(sbi, blkaddr))
+



Re: [f2fs-dev] [PATCH] f2fs:add zone device priority option to the mount options

2024-04-16 Thread Jaegeuk Kim
I don't see any point why we need this.

On 04/15, Liao Yuanhong wrote:
> Add a zone device priority option to the mount options. When enabled, the 
> file system will prioritize using the zoned device's free space instead of 
> the conventional device's when writing to the end of the storage space.
> 
> Signed-off-by: Liao Yuanhong 
> ---
>  fs/f2fs/f2fs.h|  1 +
>  fs/f2fs/segment.c | 13 -
>  fs/f2fs/super.c   | 20 
>  3 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index fced2b7652f4..e2438f7d2e13 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -116,6 +116,7 @@ extern const char *f2fs_fault_name[FAULT_MAX];
>  #define  F2FS_MOUNT_GC_MERGE 0x0200
>  #define F2FS_MOUNT_COMPRESS_CACHE0x0400
>  #define F2FS_MOUNT_AGE_EXTENT_CACHE  0x0800
> +#define F2FS_MOUNT_PRIORITY_ZONED0x1000
>  
>  #define F2FS_OPTION(sbi) ((sbi)->mount_opt)
>  #define clear_opt(sbi, option)   (F2FS_OPTION(sbi).opt &= 
> ~F2FS_MOUNT_##option)
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 4fd76e867e0a..adbe68a11fa5 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -2697,7 +2697,18 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
>  find_other_zone:
>   secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
>   if (secno >= MAIN_SECS(sbi)) {
> - secno = find_first_zero_bit(free_i->free_secmap,
> + /* set hint to get section from zone device first */
> + if (test_opt(sbi, PRIORITY_ZONED)) {
> + hint = GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi));
> + secno = find_next_zero_bit(free_i->free_secmap,
> + MAIN_SECS(sbi), hint);
> +
> + /* get section from clu if exceeding the size limit */
> + if (secno >= MAIN_SECS(sbi))
> + secno = find_first_zero_bit(free_i->free_secmap,
> + MAIN_SECS(sbi));
> + } else
> + secno = find_first_zero_bit(free_i->free_secmap,
>   MAIN_SECS(sbi));
>   if (secno >= MAIN_SECS(sbi)) {
>   ret = -ENOSPC;
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> index a4bc26dfdb1a..2742978a100a 100644
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -126,6 +126,8 @@ enum {
>   Opt_inline_data,
>   Opt_inline_dentry,
>   Opt_noinline_dentry,
> + Opt_priority_zoned,
> + Opt_nopriority_zoned,
>   Opt_flush_merge,
>   Opt_noflush_merge,
>   Opt_barrier,
> @@ -204,6 +206,8 @@ static match_table_t f2fs_tokens = {
>   {Opt_inline_data, "inline_data"},
>   {Opt_inline_dentry, "inline_dentry"},
>   {Opt_noinline_dentry, "noinline_dentry"},
> + {Opt_priority_zoned, "priority_zoned"},
> + {Opt_nopriority_zoned, "nopriority_zoned"},
>   {Opt_flush_merge, "flush_merge"},
>   {Opt_noflush_merge, "noflush_merge"},
>   {Opt_barrier, "barrier"},
> @@ -805,6 +809,16 @@ static int parse_options(struct super_block *sb, char 
> *options, bool is_remount)
>   case Opt_noinline_dentry:
>   clear_opt(sbi, INLINE_DENTRY);
>   break;
> +#ifdef CONFIG_BLK_DEV_ZONED
> + case Opt_priority_zoned:
> + if (f2fs_sb_has_blkzoned(sbi))
> + set_opt(sbi, PRIORITY_ZONED);
> + break;
> + case Opt_nopriority_zoned:
> + if (f2fs_sb_has_blkzoned(sbi))
> + clear_opt(sbi, PRIORITY_ZONED);
> + break;
> +#endif
>   case Opt_flush_merge:
>   set_opt(sbi, FLUSH_MERGE);
>   break;
> @@ -1990,6 +2004,12 @@ static int f2fs_show_options(struct seq_file *seq, 
> struct dentry *root)
>   seq_puts(seq, ",inline_dentry");
>   else
>   seq_puts(seq, ",noinline_dentry");
> +#ifdef CONFIG_BLK_DEV_ZONED
> + if (test_opt(sbi, PRIORITY_ZONED))
> + seq_puts(seq, ",priority_zoned");
> + else
> + seq_puts(seq, ",nopriority_zoned");
> +#endif
>   if (test_opt(sbi, FLUSH_MERGE))
>   seq_puts(seq, ",flush_merge");
>   else
> -- 
> 2.25.1




Re: [f2fs-dev] [PATCH v3] f2fs: zone: don't block IO if there is remained open zone

2024-04-16 Thread Jaegeuk Kim
On 04/16, Chao Yu wrote:
> On 2024/4/15 22:01, Chao Yu wrote:
> > On 2024/4/15 11:26, Chao Yu wrote:
> > > On 2024/4/14 23:19, Jaegeuk Kim wrote:
> > > > It seems this caused kernel hang. Chao, have you tested this patch 
> > > > enough?
> > > 
> > > Jaegeuk,
> > > 
> > > Oh, I've checked this patch w/ fsstress before submitting it, but missed
> > > the SPO testcase... do you encounter kernel hang w/ SPO testcase?
> > 
> > I didn't see any hang issue w/ the por_fsstress testcase; which testcase
> > do you use?
> 
> Sorry, I mean I haven't reproduced it yet...

I'd prefer to check this patch later. Have you tested on Zoned device with
nullblk?

> 
> Thanks,
> 
> > 
> > Thanks,
> > 
> > > 
> > > Anyway, let me test it more.
> > > 
> > > Thanks,
> > > 
> > > > 
> > > > On 04/13, Chao Yu wrote:
> > > > > On 2024/4/13 5:11, Jaegeuk Kim wrote:
> > > > > > On 04/07, Chao Yu wrote:
> > > > > > > The max open zone count may be larger than the number of f2fs log
> > > > > > > headers; in that case, there is no need to wait for the last IO in
> > > > > > > the previous zone. Let's introduce an available_open_zones
> > > > > > > semaphore: decrease it once we submit the first write IO in a
> > > > > > > zone, and increase it after completion of the last IO in the zone.
> > > > > > > 
> > > > > > > Cc: Daeho Jeong 
> > > > > > > Signed-off-by: Chao Yu 
> > > > > > > ---
> > > > > > > v3:
> > > > > > > - avoid race condition in between __submit_merged_bio()
> > > > > > > and __allocate_new_segment().
> > > > > > >    fs/f2fs/data.c    | 105 
> > > > > > > ++
> > > > > > >    fs/f2fs/f2fs.h    |  34 ---
> > > > > > >    fs/f2fs/iostat.c  |   7 
> > > > > > >    fs/f2fs/iostat.h  |   2 +
> > > > > > >    fs/f2fs/segment.c |  43 ---
> > > > > > >    fs/f2fs/segment.h |  12 +-
> > > > > > >    fs/f2fs/super.c   |   2 +
> > > > > > >    7 files changed, 156 insertions(+), 49 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> > > > > > > index 0d88649c60a5..18a4ac0a06bc 100644
> > > > > > > --- a/fs/f2fs/data.c
> > > > > > > +++ b/fs/f2fs/data.c
> > > > > > > @@ -373,11 +373,10 @@ static void f2fs_write_end_io(struct bio 
> > > > > > > *bio)
> > > > > > >    #ifdef CONFIG_BLK_DEV_ZONED
> > > > > > >    static void f2fs_zone_write_end_io(struct bio *bio)
> > > > > > >    {
> > > > > > > -    struct f2fs_bio_info *io = (struct f2fs_bio_info 
> > > > > > > *)bio->bi_private;
> > > > > > > +    struct f2fs_sb_info *sbi = iostat_get_bio_private(bio);
> > > > > > > -    bio->bi_private = io->bi_private;
> > > > > > > -    complete(&io->zone_wait);
> > > > > > >    f2fs_write_end_io(bio);
> > > > > > > +    up(&sbi->available_open_zones);
> > > > > > >    }
> > > > > > >    #endif
> > > > > > > @@ -531,6 +530,24 @@ static void __submit_merged_bio(struct 
> > > > > > > f2fs_bio_info *io)
> > > > > > >    if (!io->bio)
> > > > > > >    return;
> > > > > > > +#ifdef CONFIG_BLK_DEV_ZONED
> > > > > > > +    if (io->open_zone) {
> > > > > > > +    /*
> > > > > > > + * if there is no open zone, it will wait for last IO in
> > > > > > > + * previous zone before submitting new IO.
> > > > > > > + */
> > > > > > > +    down(&fio->sbi->available_open_zones);
> > > > > > > +    io->open_zone = false;
> > > > > > > +    io->zone_openned = true;
> > > > > > > +    }
> > > > > > > +
> > > > > > > +    if (io->close_zone) {
> > > > > > > +    io->bio->bi_end_io = f2fs_zone_write_end_io;
> > > > > > > +    io->zone_openned = false;
> > > > > > > +    io->close_zone = false;
> > > > > > > +    }
> > > > > > > +#endif
> > > > > > > +
> > > > > > >    if (is_read_io(fio->op)) {
> > > > > > >    trace_f2fs_prepare_read_bio(io->sbi->sb, fio->type, 
> > > > > > > io->bio);
> > > > > > >    f2fs_submit_read_bio(io->sbi, io->bio, fio->type);
> > > > > > > @@ -601,9 +618,9 @@ int f2fs_init_write_merge_io(struct 
> > > > > > > f2fs_sb_info *sbi)
> > > > > > >    INIT_LIST_HEAD(&sbi->write_io[i][j].bio_list);
> > > > > > >    
> > > > > > > init_f2fs_rwsem(&sbi->write_io[i][j].bio_list_lock);
> > > > > > >    #ifdef CONFIG_BLK_DEV_ZONED
> > > > > > > -    init_completion(&sbi->write_io[i][j].zone_wait);
> > > > > > > -    sbi->write_io[i][j].zone_pending_bio = NULL;
> > > > > > > -    sbi->write_io[i][j].bi_private = NULL;
> > > > > > > +    sbi->write_io[i][j].open_zone = false;
> > > > > > > +    sbi->write_io[i][j].zone_openned = false;
> > > > > > > +    sbi->write_io[i][j].close_zone = false;
> > > > > > >    #endif
> > > > > > >    }
> > > > > > >    }
> > > > > > > @@ -634,6 +651,31 @@ static void 
> > > > > > > __f2fs_submit_merged_write(struct f2fs_sb_info *sbi,
> > > > > > >    f2fs_up_write(&io->io_rwsem);
> > > > > > >    }
> > > > > > > +void f2fs_blkzoned_submit_merged_write(struct f2fs_sb_info *sbi,

Re: [f2fs-dev] [PATCH 2/3 v2] f2fs: clear writeback when compression failed

2024-04-16 Thread Jaegeuk Kim
Let's stop issuing compressed writes and clear their writeback flags.

Signed-off-by: Jaegeuk Kim 
---

 Now, I don't see any kernel hang for 24 hours.

 Change log from v1:
  - fix bugs

 fs/f2fs/compress.c | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index d67c471ab5df..b12d3a49bfda 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1031,6 +1031,31 @@ static void set_cluster_writeback(struct compress_ctx 
*cc)
}
 }
 
+static void cancel_cluster_writeback(struct compress_ctx *cc,
+   struct compress_io_ctx *cic, int submitted)
+{
+   int i;
+
+   /* Wait for submitted IOs. */
+   if (submitted > 1) {
+   f2fs_submit_merged_write(F2FS_I_SB(cc->inode), DATA);
+   while (atomic_read(&cic->pending_pages) !=
+   (cc->valid_nr_cpages - submitted + 1))
+   f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+   }
+
+   /* Cancel writeback and stay locked. */
+   for (i = 0; i < cc->cluster_size; i++) {
+   if (i < submitted) {
+   inode_inc_dirty_pages(cc->inode);
+   lock_page(cc->rpages[i]);
+   }
+   clear_page_private_gcing(cc->rpages[i]);
+   if (folio_test_writeback(page_folio(cc->rpages[i])))
+   end_page_writeback(cc->rpages[i]);
+   }
+}
+
 static void set_cluster_dirty(struct compress_ctx *cc)
 {
int i;
@@ -1232,7 +1257,6 @@ static int f2fs_write_compressed_pages(struct 
compress_ctx *cc,
.page = NULL,
.encrypted_page = NULL,
.compressed_page = NULL,
-   .submitted = 0,
.io_type = io_type,
.io_wbc = wbc,
.encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode) ?
@@ -1358,7 +1382,16 @@ static int f2fs_write_compressed_pages(struct 
compress_ctx *cc,
fio.compressed_page = cc->cpages[i - 1];
 
cc->cpages[i - 1] = NULL;
+   fio.submitted = 0;
f2fs_outplace_write_data(&dn, &fio);
+   if (unlikely(!fio.submitted)) {
+   cancel_cluster_writeback(cc, cic, i);
+
+   /* To call fscrypt_finalize_bounce_page */
+   i = cc->valid_nr_cpages;
+   *submitted = 0;
+   goto out_destroy_crypt;
+   }
(*submitted)++;
 unlock_continue:
inode_dec_dirty_pages(cc->inode);
@@ -1392,8 +1425,11 @@ static int f2fs_write_compressed_pages(struct 
compress_ctx *cc,
 out_destroy_crypt:
page_array_free(cc->inode, cic->rpages, cc->cluster_size);
 
-   for (--i; i >= 0; i--)
+   for (--i; i >= 0; i--) {
+   if (!cc->cpages[i])
+   continue;
fscrypt_finalize_bounce_page(&cc->cpages[i]);
+   }
 out_put_cic:
kmem_cache_free(cic_entry_slab, cic);
 out_put_dnode:
-- 
2.44.0.683.g7961c838ac-goog





Re: [f2fs-dev] [PATCH 2/2] f2fs: remove unnecessary block size check in init_f2fs_fs()

2024-04-16 Thread Zhiguo Niu
On Tue, Apr 16, 2024 at 3:22 PM Chao Yu  wrote:
>
> After commit d7e9a9037de2 ("f2fs: Support Block Size == Page Size"),
> F2FS_BLKSIZE equals PAGE_SIZE; remove the unnecessary check condition.
>
> Signed-off-by: Chao Yu 
> ---
>  fs/f2fs/super.c | 6 --
>  1 file changed, 6 deletions(-)
>
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> index 6d1e4fc629e2..32aa6d6fa871 100644
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -4933,12 +4933,6 @@ static int __init init_f2fs_fs(void)
>  {
> int err;
>
> -   if (PAGE_SIZE != F2FS_BLKSIZE) {
> -   printk("F2FS not supported on PAGE_SIZE(%lu) != 
> BLOCK_SIZE(%lu)\n",
> -   PAGE_SIZE, F2FS_BLKSIZE);
> -   return -EINVAL;
> -   }
> -
> err = init_inodecache();
> if (err)
> goto fail;
Dear Chao,

Could you help modify the following comment messages together with this patch?
They are also related to commit d7e9a9037de2 ("f2fs: Support Block
Size == Page Size").
If you think there is a more suitable description, please modify it
directly.
Thanks!

diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
index a357287..241e7b18 100644
--- a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -394,7 +394,8 @@ struct f2fs_nat_block {

 /*
  * F2FS uses 4 bytes to represent block address. As a result, supported size of
- * disk is 16 TB and it equals to 16 * 1024 * 1024 / 2 segments.
+ * disk is 16 TB for a 4K page size and 64 TB for a 16K page size, and it
+ * equals 16 * 1024 * 1024 / 2 segments.
  */
 #define F2FS_MAX_SEGMENT   ((16 * 1024 * 1024) / 2)

@@ -424,8 +425,10 @@ struct f2fs_sit_block {
 /*
  * For segment summary
  *
- * One summary block contains exactly 512 summary entries, which represents
- * exactly one segment by default. Not allow to change the basic units.
+ * One summary block with 4KB size contains exactly 512 summary entries, which
+ * represents exactly one segment with 2MB size. Similarly, in the case of a
+ * 16KB block size, it represents one segment with 8MB size.
+ * Not allow to change the basic units.
  *
  * NOTE: For initializing fields, you must use set_summary
  *
@@ -556,6 +559,7 @@ struct f2fs_summary_block {

 /*
 * space utilization of regular dentry and inline dentry (w/o extra reservation)
+ * when block size is 4KB.



> --
> 2.40.1
>
>
>




Re: [f2fs-dev] [PATCH] common/quota: fix keywords of quota feature in _require_prjquota() for f2fs

2024-04-16 Thread Chao Yu

On 2024/4/16 16:49, Zorro Lang wrote:

On Tue, Apr 16, 2024 at 03:18:19PM +0800, Chao Yu wrote:

Previously, in f2fs, the quota sysfile feature had different names:
- "quota" in mkfs.f2fs
- and "quota_ino" in dump.f2fs

Now, the name has been unified to "quota" since commit 92cc5edeb7
("f2fs-tools: reuse feature_table to clean up print_sb_state()").

We need to fix the keyword in _require_prjquota() for f2fs; otherwise,
quota testcases will fail.

generic/383 1s ... [not run] quota sysfile not enabled in this device /dev/vdc

Cc: Jaegeuk Kim 
Signed-off-by: Chao Yu 
---
  common/quota | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/quota b/common/quota
index 6b529bf4..cfe3276f 100644
--- a/common/quota
+++ b/common/quota
@@ -145,7 +145,7 @@ _require_prjquota()
  if [ "$FSTYP" == "f2fs" ]; then
dump.f2fs $_dev 2>&1 | grep -qw project_quota
[ $? -ne 0 ] && _notrun "Project quota not enabled in this device $_dev"
-   dump.f2fs $_dev 2>&1 | grep -qw quota_ino
+   dump.f2fs $_dev 2>&1 | grep -qw quota


This will _notrun on old f2fs-tools, because `grep -w quota` doesn't match the
old "quota_ino". So how about grep -Eqw "quota|quota_ino", or any better idea
you have?


Thanks for your suggestion; I fixed this in v2. I've tested v2 w/ old
f2fs-tools, and it works fine.

Thanks,



Thanks,
Zorro


[ $? -ne 0 ] && _notrun "quota sysfile not enabled in this device $_dev"
cat /sys/fs/f2fs/features/project_quota | grep -qw supported
[ $? -ne 0 ] && _notrun "Installed kernel does not support project 
quotas"
--
2.40.1









[f2fs-dev] [PATCH v2] common/quota: update keywords of quota feature in _require_prjquota() for f2fs

2024-04-16 Thread Chao Yu
Previously, in f2fs, the quota sysfile feature had different names:
- "quota" in mkfs.f2fs
- and "quota_ino" in dump.f2fs

Now, the name has been unified to "quota" since commit 92cc5edeb7
("f2fs-tools: reuse feature_table to clean up print_sb_state()").

We need to update the keyword to "quota" in _require_prjquota() for f2fs;
otherwise, quota testcases will fail as below.

generic/383 1s ... [not run] quota sysfile not enabled in this device /dev/vdc

This patch keeps the keyword "quota_ino" in _require_prjquota() for
compatibility with old f2fs-tools.

Cc: Jaegeuk Kim 
Signed-off-by: Chao Yu 
---
v2:
- keep keywords "quota_ino" for compatibility of old f2fs-tools
suggested by Zorro Lang.
 common/quota | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/quota b/common/quota
index 6b529bf4..4c1d3dcd 100644
--- a/common/quota
+++ b/common/quota
@@ -145,7 +145,7 @@ _require_prjquota()
 if [ "$FSTYP" == "f2fs" ]; then
dump.f2fs $_dev 2>&1 | grep -qw project_quota
[ $? -ne 0 ] && _notrun "Project quota not enabled in this device $_dev"
-   dump.f2fs $_dev 2>&1 | grep -qw quota_ino
+   dump.f2fs $_dev 2>&1 | grep -Eqw "quota|quota_ino"
[ $? -ne 0 ] && _notrun "quota sysfile not enabled in this device $_dev"
cat /sys/fs/f2fs/features/project_quota | grep -qw supported
[ $? -ne 0 ] && _notrun "Installed kernel does not support project 
quotas"
-- 
2.40.1





Re: [f2fs-dev] [PATCH] common/quota: fix keywords of quota feature in _require_prjquota() for f2fs

2024-04-16 Thread Zorro Lang
On Tue, Apr 16, 2024 at 03:18:19PM +0800, Chao Yu wrote:
> Previously, in f2fs, the quota sysfile feature had different names:
> - "quota" in mkfs.f2fs
> - and "quota_ino" in dump.f2fs
> 
> Now, the name has been unified to "quota" since commit 92cc5edeb7
> ("f2fs-tools: reuse feature_table to clean up print_sb_state()").
> 
> We need to fix the keyword in _require_prjquota() for f2fs; otherwise,
> quota testcases will fail.
> 
> generic/383 1s ... [not run] quota sysfile not enabled in this device /dev/vdc
> 
> Cc: Jaegeuk Kim 
> Signed-off-by: Chao Yu 
> ---
>  common/quota | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/common/quota b/common/quota
> index 6b529bf4..cfe3276f 100644
> --- a/common/quota
> +++ b/common/quota
> @@ -145,7 +145,7 @@ _require_prjquota()
>  if [ "$FSTYP" == "f2fs" ]; then
>   dump.f2fs $_dev 2>&1 | grep -qw project_quota
>   [ $? -ne 0 ] && _notrun "Project quota not enabled in this device $_dev"
> - dump.f2fs $_dev 2>&1 | grep -qw quota_ino
> + dump.f2fs $_dev 2>&1 | grep -qw quota

This will _notrun on old f2fs-tools, because `grep -w quota` doesn't match the
old "quota_ino". So how about grep -Eqw "quota|quota_ino", or any better idea
you have?

Thanks,
Zorro

>   [ $? -ne 0 ] && _notrun "quota sysfile not enabled in this device $_dev"
>   cat /sys/fs/f2fs/features/project_quota | grep -qw supported
>   [ $? -ne 0 ] && _notrun "Installed kernel does not support project 
> quotas"
> -- 
> 2.40.1
> 
> 





[f2fs-dev] [PATCH 4/4] f2fs: convert f2fs__page tracepoint class to use folio

2024-04-16 Thread Chao Yu
Convert the f2fs__page tracepoint class and its instances to use folio
and related functionality, and rename the class to f2fs__folio.

Signed-off-by: Chao Yu 
---
 fs/f2fs/checkpoint.c|  4 ++--
 fs/f2fs/data.c  | 10 -
 fs/f2fs/node.c  |  4 ++--
 include/trace/events/f2fs.h | 42 ++---
 4 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index eac698b8dd38..5d05a413f451 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -345,7 +345,7 @@ static int __f2fs_write_meta_page(struct page *page,
 {
struct f2fs_sb_info *sbi = F2FS_P_SB(page);
 
-   trace_f2fs_writepage(page, META);
+   trace_f2fs_writepage(page_folio(page), META);
 
if (unlikely(f2fs_cp_error(sbi))) {
if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
@@ -492,7 +492,7 @@ long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum 
page_type type,
 static bool f2fs_dirty_meta_folio(struct address_space *mapping,
struct folio *folio)
 {
-   trace_f2fs_set_page_dirty(&folio->page, META);
+   trace_f2fs_set_page_dirty(folio, META);
 
if (!folio_test_uptodate(folio))
folio_mark_uptodate(folio);
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 3eb90b9b0f8b..cf6d31e3e630 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2490,7 +2490,7 @@ static int f2fs_read_data_folio(struct file *file, struct 
folio *folio)
struct inode *inode = folio_file_mapping(folio)->host;
int ret = -EAGAIN;
 
-   trace_f2fs_readpage(&folio->page, DATA);
+   trace_f2fs_readpage(folio, DATA);
 
if (!f2fs_is_compress_backend_ready(inode)) {
folio_unlock(folio);
@@ -2739,7 +2739,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
} else {
set_inode_flag(inode, FI_UPDATE_WRITE);
}
-   trace_f2fs_do_write_data_page(fio->page, IPU);
+   trace_f2fs_do_write_data_page(page_folio(page), IPU);
return err;
}
 
@@ -2768,7 +2768,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
 
/* LFS mode write path */
f2fs_outplace_write_data(&dn, fio);
-   trace_f2fs_do_write_data_page(page, OPU);
+   trace_f2fs_do_write_data_page(page_folio(page), OPU);
set_inode_flag(inode, FI_APPEND_WRITE);
 out_writepage:
f2fs_put_dnode(&dn);
@@ -2815,7 +2815,7 @@ int f2fs_write_single_data_page(struct page *page, int 
*submitted,
.last_block = last_block,
};
 
-   trace_f2fs_writepage(page, DATA);
+   trace_f2fs_writepage(page_folio(page), DATA);
 
/* we should bypass data pages to proceed the kworker jobs */
if (unlikely(f2fs_cp_error(sbi))) {
@@ -3789,7 +3789,7 @@ static bool f2fs_dirty_data_folio(struct address_space 
*mapping,
 {
struct inode *inode = mapping->host;
 
-   trace_f2fs_set_page_dirty(&folio->page, DATA);
+   trace_f2fs_set_page_dirty(folio, DATA);
 
if (!folio_test_uptodate(folio))
folio_mark_uptodate(folio);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 3b9eb5693683..95cecf08cb37 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1624,7 +1624,7 @@ static int __write_node_page(struct page *page, bool 
atomic, bool *submitted,
};
unsigned int seq;
 
-   trace_f2fs_writepage(page, NODE);
+   trace_f2fs_writepage(page_folio(page), NODE);
 
if (unlikely(f2fs_cp_error(sbi))) {
/* keep node pages in remount-ro mode */
@@ -2171,7 +2171,7 @@ static int f2fs_write_node_pages(struct address_space 
*mapping,
 static bool f2fs_dirty_node_folio(struct address_space *mapping,
struct folio *folio)
 {
-   trace_f2fs_set_page_dirty(&folio->page, NODE);
+   trace_f2fs_set_page_dirty(folio, NODE);
 
if (!folio_test_uptodate(folio))
folio_mark_uptodate(folio);
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 7ed0fc430dc6..371ba28415f5 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -1304,11 +1304,11 @@ TRACE_EVENT(f2fs_write_end,
__entry->copied)
 );
 
-DECLARE_EVENT_CLASS(f2fs__page,
+DECLARE_EVENT_CLASS(f2fs__folio,
 
-   TP_PROTO(struct page *page, int type),
+   TP_PROTO(struct folio *folio, int type),
 
-   TP_ARGS(page, type),
+   TP_ARGS(folio, type),
 
TP_STRUCT__entry(
__field(dev_t,  dev)
@@ -1321,14 +1321,14 @@ DECLARE_EVENT_CLASS(f2fs__page,
),
 
TP_fast_assign(
-   __entry->dev= page_file_mapping(page)->host->i_sb->s_dev;
-   __entry->ino= page_file_mapping(page)->host->i_ino;
+   __entry->dev= folio_file_mapping(folio)->host->i_sb->s_dev;
+   __entry->ino= folio_file_mapping(folio)->host->i_ino;
_

[f2fs-dev] [PATCH 1/4] f2fs: convert f2fs_mpage_readpages() to use folio

2024-04-16 Thread Chao Yu
Convert f2fs_mpage_readpages() to use folio and related
functionality.

Signed-off-by: Chao Yu 
---
 fs/f2fs/data.c | 80 +-
 1 file changed, 40 insertions(+), 40 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 9c5512be1a1b..14dcd621acaa 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2374,7 +2374,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
  * Major change was from block_size == page_size in f2fs by default.
  */
 static int f2fs_mpage_readpages(struct inode *inode,
-   struct readahead_control *rac, struct page *page)
+   struct readahead_control *rac, struct folio *folio)
 {
struct bio *bio = NULL;
sector_t last_block_in_bio = 0;
@@ -2394,6 +2394,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 #endif
unsigned nr_pages = rac ? readahead_count(rac) : 1;
unsigned max_nr_pages = nr_pages;
+   pgoff_t index;
int ret = 0;
 
map.m_pblk = 0;
@@ -2407,64 +2408,63 @@ static int f2fs_mpage_readpages(struct inode *inode,
 
for (; nr_pages; nr_pages--) {
if (rac) {
-   page = readahead_page(rac);
-   prefetchw(&page->flags);
+   folio = readahead_folio(rac);
+   prefetchw(&folio->flags);
}
 
-#ifdef CONFIG_F2FS_FS_COMPRESSION
-   if (f2fs_compressed_file(inode)) {
-   /* there are remained compressed pages, submit them */
-   if (!f2fs_cluster_can_merge_page(&cc, page->index)) {
-   ret = f2fs_read_multi_pages(&cc, &bio,
-   max_nr_pages,
-   &last_block_in_bio,
-   rac != NULL, false);
-   f2fs_destroy_compress_ctx(&cc, false);
-   if (ret)
-   goto set_error_page;
-   }
-   if (cc.cluster_idx == NULL_CLUSTER) {
-   if (nc_cluster_idx ==
-   page->index >> cc.log_cluster_size) {
-   goto read_single_page;
-   }
-
-   ret = f2fs_is_compressed_cluster(inode, page->index);
-   if (ret < 0)
-   goto set_error_page;
-   else if (!ret) {
-   nc_cluster_idx =
-   page->index >> cc.log_cluster_size;
-   goto read_single_page;
-   }
+   index = folio_index(folio);
 
-   nc_cluster_idx = NULL_CLUSTER;
-   }
-   ret = f2fs_init_compress_ctx(&cc);
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+   if (!f2fs_compressed_file(inode))
+   goto read_single_page;
+
+   /* there are remained compressed pages, submit them */
+   if (!f2fs_cluster_can_merge_page(&cc, index)) {
+   ret = f2fs_read_multi_pages(&cc, &bio,
+   max_nr_pages,
+   &last_block_in_bio,
+   rac != NULL, false);
+   f2fs_destroy_compress_ctx(&cc, false);
if (ret)
goto set_error_page;
+   }
+   if (cc.cluster_idx == NULL_CLUSTER) {
+   if (nc_cluster_idx == index >> cc.log_cluster_size)
+   goto read_single_page;
 
-   f2fs_compress_ctx_add_page(&cc, page);
+   ret = f2fs_is_compressed_cluster(inode, index);
+   if (ret < 0)
+   goto set_error_page;
+   else if (!ret) {
+   nc_cluster_idx =
+   index >> cc.log_cluster_size;
+   goto read_single_page;
+   }
 
-   goto next_page;
+   nc_cluster_idx = NULL_CLUSTER;
}
+   ret = f2fs_init_compress_ctx(&cc);
+   if (ret)
+   goto set_error_page;
+
+   f2fs_compress_ctx_add_page(&cc, &folio->page);
+
+   goto next_page;
 read_single_page:
 #endif
 
-   ret = f2fs_read_single_page(inode, page, max_nr_pages, &map,
+   ret = f2fs_read_single_page(inode, &folio->page, max_nr_pages, &map,
   

[f2fs-dev] [PATCH 3/4] f2fs: convert f2fs_read_inline_data() to use folio

2024-04-16 Thread Chao Yu
Convert f2fs_read_inline_data() to use folio and related
functionality, and also convert its caller to use folio.

Signed-off-by: Chao Yu 
---
 fs/f2fs/data.c   | 11 +--
 fs/f2fs/f2fs.h   |  4 ++--
 fs/f2fs/inline.c | 34 +-
 3 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index c35107657c97..3eb90b9b0f8b 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2487,20 +2487,19 @@ static int f2fs_mpage_readpages(struct inode *inode,
 
 static int f2fs_read_data_folio(struct file *file, struct folio *folio)
 {
-   struct page *page = &folio->page;
-   struct inode *inode = page_file_mapping(page)->host;
+   struct inode *inode = folio_file_mapping(folio)->host;
int ret = -EAGAIN;
 
-   trace_f2fs_readpage(page, DATA);
+   trace_f2fs_readpage(&folio->page, DATA);
 
if (!f2fs_is_compress_backend_ready(inode)) {
-   unlock_page(page);
+   folio_unlock(folio);
return -EOPNOTSUPP;
}
 
/* If the file has inline data, try to read it directly */
if (f2fs_has_inline_data(inode))
-   ret = f2fs_read_inline_data(inode, page);
+   ret = f2fs_read_inline_data(inode, folio);
if (ret == -EAGAIN)
ret = f2fs_mpage_readpages(inode, NULL, folio);
return ret;
@@ -3429,7 +3428,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
 
if (f2fs_has_inline_data(inode)) {
if (pos + len <= MAX_INLINE_DATA(inode)) {
-   f2fs_do_read_inline_data(page, ipage);
+   f2fs_do_read_inline_data(page_folio(page), ipage);
set_inode_flag(inode, FI_DATA_EXIST);
if (inode->i_nlink)
set_page_private_inline(ipage);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 34acd791c198..13dee521fbe8 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4153,10 +4153,10 @@ extern struct kmem_cache *f2fs_inode_entry_slab;
 bool f2fs_may_inline_data(struct inode *inode);
 bool f2fs_sanity_check_inline_data(struct inode *inode);
 bool f2fs_may_inline_dentry(struct inode *inode);
-void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
+void f2fs_do_read_inline_data(struct folio *folio, struct page *ipage);
 void f2fs_truncate_inline_inode(struct inode *inode,
struct page *ipage, u64 from);
-int f2fs_read_inline_data(struct inode *inode, struct page *page);
+int f2fs_read_inline_data(struct inode *inode, struct folio *folio);
 int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
 int f2fs_convert_inline_inode(struct inode *inode);
 int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 3d3218a4b29d..7638d0d7b7ee 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -61,22 +61,22 @@ bool f2fs_may_inline_dentry(struct inode *inode)
return true;
 }
 
-void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
+void f2fs_do_read_inline_data(struct folio *folio, struct page *ipage)
 {
-   struct inode *inode = page->mapping->host;
+   struct inode *inode = folio_file_mapping(folio)->host;
 
-   if (PageUptodate(page))
+   if (folio_test_uptodate(folio))
return;
 
-   f2fs_bug_on(F2FS_P_SB(page), page->index);
+   f2fs_bug_on(F2FS_I_SB(inode), folio_index(folio));
 
-   zero_user_segment(page, MAX_INLINE_DATA(inode), PAGE_SIZE);
+   folio_zero_segment(folio, MAX_INLINE_DATA(inode), folio_size(folio));
 
/* Copy the whole inline data block */
-   memcpy_to_page(page, 0, inline_data_addr(inode, ipage),
+   memcpy_to_folio(folio, 0, inline_data_addr(inode, ipage),
   MAX_INLINE_DATA(inode));
-   if (!PageUptodate(page))
-   SetPageUptodate(page);
+   if (!folio_test_uptodate(folio))
+   folio_mark_uptodate(folio);
 }
 
 void f2fs_truncate_inline_inode(struct inode *inode,
@@ -97,13 +97,13 @@ void f2fs_truncate_inline_inode(struct inode *inode,
clear_inode_flag(inode, FI_DATA_EXIST);
 }
 
-int f2fs_read_inline_data(struct inode *inode, struct page *page)
+int f2fs_read_inline_data(struct inode *inode, struct folio *folio)
 {
struct page *ipage;
 
ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
if (IS_ERR(ipage)) {
-   unlock_page(page);
+   folio_unlock(folio);
return PTR_ERR(ipage);
}
 
@@ -112,15 +112,15 @@ int f2fs_read_inline_data(struct inode *inode, struct page *page)
return -EAGAIN;
}
 
-   if (page->index)
-   zero_user_segment(page, 0, PAGE_SIZE);
+   if (folio_index(folio))
+   folio_zero_segment(folio, 0, folio_size(folio));
el

[f2fs-dev] [PATCH 2/4] f2fs: convert f2fs_read_single_page() to use folio

2024-04-16 Thread Chao Yu
Convert f2fs_read_single_page() to use folio and related
functionality.

Signed-off-by: Chao Yu 
---
 fs/f2fs/data.c | 27 ++-
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 14dcd621acaa..c35107657c97 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2092,7 +2092,7 @@ static inline loff_t f2fs_readpage_limit(struct inode *inode)
return i_size_read(inode);
 }
 
-static int f2fs_read_single_page(struct inode *inode, struct page *page,
+static int f2fs_read_single_page(struct inode *inode, struct folio *folio,
unsigned nr_pages,
struct f2fs_map_blocks *map,
struct bio **bio_ret,
@@ -2105,9 +2105,10 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
sector_t last_block;
sector_t last_block_in_file;
sector_t block_nr;
+   pgoff_t index = folio_index(folio);
int ret = 0;
 
-   block_in_file = (sector_t)page_index(page);
+   block_in_file = (sector_t)index;
last_block = block_in_file + nr_pages;
last_block_in_file = bytes_to_blks(inode,
f2fs_readpage_limit(inode) + blocksize - 1);
@@ -2138,7 +2139,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 got_it:
if ((map->m_flags & F2FS_MAP_MAPPED)) {
block_nr = map->m_pblk + block_in_file - map->m_lblk;
-   SetPageMappedToDisk(page);
+   folio_set_mappedtodisk(folio);
 
if (!f2fs_is_valid_blkaddr(F2FS_I_SB(inode), block_nr,
DATA_GENERIC_ENHANCE_READ)) {
@@ -2147,15 +2148,15 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
}
} else {
 zero_out:
-   zero_user_segment(page, 0, PAGE_SIZE);
-   if (f2fs_need_verity(inode, page->index) &&
-   !fsverity_verify_page(page)) {
+   folio_zero_segment(folio, 0, folio_size(folio));
+   if (f2fs_need_verity(inode, index) &&
+   !fsverity_verify_folio(folio)) {
ret = -EIO;
goto out;
}
-   if (!PageUptodate(page))
-   SetPageUptodate(page);
-   unlock_page(page);
+   if (!folio_test_uptodate(folio))
+   folio_mark_uptodate(folio);
+   folio_unlock(folio);
goto out;
}
 
@@ -2165,14 +2166,14 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 */
if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio,
   *last_block_in_bio, block_nr) ||
-   !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
+   !f2fs_crypt_mergeable_bio(bio, inode, index, NULL))) {
 submit_and_realloc:
f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
bio = NULL;
}
if (bio == NULL) {
bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
-   is_readahead ? REQ_RAHEAD : 0, page->index,
+   is_readahead ? REQ_RAHEAD : 0, index,
false);
if (IS_ERR(bio)) {
ret = PTR_ERR(bio);
@@ -2187,7 +2188,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 */
f2fs_wait_on_block_writeback(inode, block_nr);
 
-   if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+   if (!bio_add_folio(bio, folio, blocksize, 0))
goto submit_and_realloc;
 
inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
@@ -2453,7 +2454,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 read_single_page:
 #endif
 
-   ret = f2fs_read_single_page(inode, &folio->page, max_nr_pages, &map,
+   ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
&bio, &last_block_in_bio, rac);
if (ret) {
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-- 
2.40.1



___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


[f2fs-dev] [PATCH 2/2] f2fs: remove unnecessary block size check in init_f2fs_fs()

2024-04-16 Thread Chao Yu
After commit d7e9a9037de2 ("f2fs: Support Block Size == Page Size"),
F2FS_BLKSIZE always equals PAGE_SIZE, so the runtime check in
init_f2fs_fs() is unnecessary; remove it.

Signed-off-by: Chao Yu 
---
 fs/f2fs/super.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 6d1e4fc629e2..32aa6d6fa871 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -4933,12 +4933,6 @@ static int __init init_f2fs_fs(void)
 {
int err;
 
-   if (PAGE_SIZE != F2FS_BLKSIZE) {
-   printk("F2FS not supported on PAGE_SIZE(%lu) != BLOCK_SIZE(%lu)\n",
-   PAGE_SIZE, F2FS_BLKSIZE);
-   return -EINVAL;
-   }
-
err = init_inodecache();
if (err)
goto fail;
-- 
2.40.1





[f2fs-dev] [PATCH 1/2] f2fs: fix comment in sanity_check_raw_super()

2024-04-16 Thread Chao Yu
Commit d7e9a9037de2 ("f2fs: Support Block Size == Page Size") missed
updating the comment in sanity_check_raw_super(); fix it.

Signed-off-by: Chao Yu 
---
 fs/f2fs/super.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 0a34c8746782..6d1e4fc629e2 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3456,7 +3456,7 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
}
}
 
-   /* Currently, support only 4KB block size */
+   /* only support block_size equals to PAGE_SIZE */
if (le32_to_cpu(raw_super->log_blocksize) != F2FS_BLKSIZE_BITS) {
f2fs_info(sbi, "Invalid log_blocksize (%u), supports only %u",
  le32_to_cpu(raw_super->log_blocksize),
-- 
2.40.1





[f2fs-dev] [PATCH] common/quota: fix keywords of quota feature in _require_prjquota() for f2fs

2024-04-16 Thread Chao Yu
Previously, in f2fs, the sysfile quota feature had different names:
- "quota" in mkfs.f2fs
- "quota_ino" in dump.f2fs

Now the name has been unified to "quota", since commit 92cc5edeb7
("f2fs-tools: reuse feature_table to clean up print_sb_state()").

Update the keyword in _require_prjquota() for f2fs accordingly;
otherwise, the quota test cases fail as below.

generic/383 1s ... [not run] quota sysfile not enabled in this device /dev/vdc
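For reference, a quick shell sketch of the matching behavior (the feature
lines below are hypothetical stand-ins for dump.f2fs output, not literal
tool output): grep's -w treats '_' as a word character, so a bare "quota"
pattern never matched the old "quota_ino" spelling, and matching both
spellings covers old and new f2fs-tools alike.

```shell
# Hypothetical dump.f2fs feature lines for old and new f2fs-tools:
old="Filesystem features : extra_attr quota_ino"
new="Filesystem features : extra_attr quota"

# -w requires word boundaries; '_' is a word character, so "quota"
# does not match inside "quota_ino":
echo "$old" | grep -qw quota && echo old-match || echo old-miss   # old-miss
echo "$new" | grep -qw quota && echo new-match || echo new-miss   # new-match

# Alternation with -E matches either spelling as a whole word:
echo "$old" | grep -Eqw "quota|quota_ino" && echo old-match       # old-match
echo "$new" | grep -Eqw "quota|quota_ino" && echo new-match       # new-match
```

This is why v2 of this patch keeps "quota_ino" in the pattern: the bare
"quota" keyword alone would regress detection on old f2fs-tools.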

Cc: Jaegeuk Kim 
Signed-off-by: Chao Yu 
---
 common/quota | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/quota b/common/quota
index 6b529bf4..cfe3276f 100644
--- a/common/quota
+++ b/common/quota
@@ -145,7 +145,7 @@ _require_prjquota()
 {
 if [ "$FSTYP" == "f2fs" ]; then
dump.f2fs $_dev 2>&1 | grep -qw project_quota
[ $? -ne 0 ] && _notrun "Project quota not enabled in this device $_dev"
-   dump.f2fs $_dev 2>&1 | grep -qw quota_ino
+   dump.f2fs $_dev 2>&1 | grep -qw quota
[ $? -ne 0 ] && _notrun "quota sysfile not enabled in this device $_dev"
cat /sys/fs/f2fs/features/project_quota | grep -qw supported
	[ $? -ne 0 ] && _notrun "Installed kernel does not support project quotas"
-- 
2.40.1


