> -----Original Message-----
> From: [email protected] <[email protected]> On Behalf Of
> Chao Yu
> Sent: Tuesday, November 9, 2021 9:46 PM
> To: 常凤楠 <[email protected]>; [email protected]
> Cc: [email protected]
> Subject: Re: Do we need serial io for compress file?
>
> On 2021/11/9 11:18, 常凤楠 wrote:
> >
> >
> >> -----Original Message-----
> >> From: [email protected] <[email protected]> On Behalf
> Of Chao
> >> Yu
> >> Sent: Monday, November 8, 2021 10:30 PM
> >> To: 常凤楠 <[email protected]>; [email protected]
> >> Cc: [email protected]
> >> Subject: Re: Do we need serial io for compress file?
> >>
> >> On 2021/11/8 16:56, 常凤楠 wrote:
> >>> Anyway, I made some modifications to verify my idea and ran some
> >>> tests; no problems found so far.
> >>
> >> Could you please consider:
> >> 1. pin file
> >> 2. fallocate file w/ filesize kept
> >>    - it will preallocate physical blocks aligned to segments
> >> 3. unpin file
> >> 4. overwrite compressed file
> >
> > So you mean that after steps 1-3, it will make compressed file
> > fragmentation worse?
>
> Oh, I'm trying to find a way to avoid the write performance regression
> caused by contention on the writepages lock, while also avoiding
> fragmentation of compressed files.
Yep, that's what I want to do; looking forward to your idea.
And how about the modification below?
Thanks.
> But never mind, the above case wouldn't help; please ignore this.
>
> Thanks,
>
> >
> > Thanks.
> >>
> >> Thanks,
> >>
> >>>
> >>> The modification is as follows:
> >>>
> >>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> >>> index f4fd6c246c9a..0ed677efe820 100644
> >>> --- a/fs/f2fs/data.c
> >>> +++ b/fs/f2fs/data.c
> >>> @@ -3165,8 +3165,6 @@ static inline bool __should_serialize_io(struct inode *inode,
> >>> if (IS_NOQUOTA(inode))
> >>> return false;
> >>>
> >>> - if (f2fs_need_compress_data(inode))
> >>> - return true;
> >>> if (wbc->sync_mode != WB_SYNC_ALL)
> >>> return true;
> >>> 	if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
> >>> @@ -3218,11 +3216,16 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
> >>> mutex_lock(&sbi->writepages);
> >>> locked = true;
> >>> }
> >>> + if (f2fs_need_compress_data(inode))
> >>> + mutex_lock(&(F2FS_I(inode)->compress_lock));
> >>>
> >>> blk_start_plug(&plug);
> >>> ret = f2fs_write_cache_pages(mapping, wbc, io_type);
> >>> blk_finish_plug(&plug);
> >>>
> >>> + if (f2fs_need_compress_data(inode))
> >>> + mutex_unlock(&(F2FS_I(inode)->compress_lock));
> >>> +
> >>> if (locked)
> >>> mutex_unlock(&sbi->writepages);
> >>>
> >>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> >>> index 039a229e11c9..3a6587f13d2f 100644
> >>> --- a/fs/f2fs/f2fs.h
> >>> +++ b/fs/f2fs/f2fs.h
> >>> @@ -763,6 +763,7 @@ struct f2fs_inode_info {
> >>> 	struct list_head inmem_pages;	/* inmemory pages managed by f2fs */
> >>> struct task_struct *inmem_task; /* store inmemory task */
> >>> struct mutex inmem_lock; /* lock for inmemory pages */
> >>> + struct mutex compress_lock; /* lock for compress file */
> >>> 	struct extent_tree *extent_tree;	/* cached extent_tree entry */
> >>>
> >>> 	/* avoid racing between foreground op and gc */
> >>>
> >>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> >>> index a133932333c5..8566e9c34540 100644
> >>> --- a/fs/f2fs/super.c
> >>> +++ b/fs/f2fs/super.c
> >>> @@ -1323,6 +1323,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
> >>> INIT_LIST_HEAD(&fi->inmem_ilist);
> >>> INIT_LIST_HEAD(&fi->inmem_pages);
> >>> mutex_init(&fi->inmem_lock);
> >>> + mutex_init(&fi->compress_lock);
> >>> init_rwsem(&fi->i_gc_rwsem[READ]);
> >>> init_rwsem(&fi->i_gc_rwsem[WRITE]);
> >>> init_rwsem(&fi->i_xattr_sem);
> >>> --
> >>>
> >>>> -----Original Message-----
> >>>> From: 常凤楠
> >>>> Sent: Monday, November 8, 2021 11:55 AM
> >>>> To: [email protected]; [email protected]
> >>>> Cc: [email protected]
> >>>> Subject: Do we need serial io for compress file?
> >>>>
> >>>> In my tests, serial I/O for compressed files makes multithreaded
> >>>> small-write performance drop a lot.
> >>>>
> >>>> I'm trying to figure out why we need __should_serialize_io. IMO, we
> >>>> use __should_serialize_io to avoid deadlock or to improve sequential
> >>>> performance, but I don't understand why we should do this for
> >>>> compressed files. In my tests, if we just remove it, writing the
> >>>> same file from multiple threads has problems, but writing different
> >>>> files in parallel from multiple threads is fine. So I think maybe we
> >>>> should use another lock, to allow writing different files from
> >>>> multiple threads.
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel