[f2fs-dev] [Bug 206057] 5.5.0-rc2-next: f2fs is extremely slow, with ext4 system works well

2020-01-02 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=206057

--- Comment #4 from Chao Yu (c...@kernel.org) ---
Thanks for the help. I've bisected to the bad commit ("f2fs: cover f2fs_lock_op in
expand_inode_data case"); could you revert it and run the test again?

-- 
You are receiving this mail because:
You are watching the assignee of the bug.

___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


Re: [f2fs-dev] [RFC PATCH v5] f2fs: support data compression

2020-01-02 Thread Chao Yu
On 2020/1/3 3:00, Jaegeuk Kim wrote:
> On 01/02, Jaegeuk Kim wrote:
>> On 12/31, Chao Yu wrote:
>>> On 2019/12/31 8:46, Jaegeuk Kim wrote:
 On 12/23, Chao Yu wrote:
> Hi Jaegeuk,
>
> Sorry for the delay.
>
> On 2019/12/19 5:46, Jaegeuk Kim wrote:
>> Hi Chao,
>>
>> I still see some diffs from my latest testing version, so please check 
>> anything
>> that you made additionally from here.
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=dev&id=25d18e19a91e60837d36368ee939db13fd16dc64
>
> I've checked the diff and picked up valid parts, could you please check 
> and
> comment on it?
>
> ---
>  fs/f2fs/compress.c |  8 ++++----
>  fs/f2fs/data.c     | 18 +++++++++++++++---
>  fs/f2fs/f2fs.h     |  3 +++
>  fs/f2fs/file.c     |  1 -
>  4 files changed, 22 insertions(+), 8 deletions(-)
>
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index af23ed6deffd..1bc86a54ad71 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -593,7 +593,7 @@ static int prepare_compress_overwrite(struct 
> compress_ctx *cc,
>   fgp_flag, GFP_NOFS);
>   if (!page) {
>   ret = -ENOMEM;
> - goto unlock_pages;
> + goto release_pages;
>   }
>
>   if (PageUptodate(page))
> @@ -608,13 +608,13 @@ static int prepare_compress_overwrite(struct 
> compress_ctx *cc,
>   ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size,
>   &last_block_in_bio, false);
>   if (ret)
> - goto release_pages;
> + goto unlock_pages;
>   if (bio)
>   f2fs_submit_bio(sbi, bio, DATA);
>
>   ret = f2fs_init_compress_ctx(cc);
>   if (ret)
> - goto release_pages;
> + goto unlock_pages;
>   }
>
>   for (i = 0; i < cc->cluster_size; i++) {
> @@ -762,7 +762,7 @@ static int f2fs_write_compressed_pages(struct 
> compress_ctx *cc,
>   if (err)
>   goto out_unlock_op;
>
> - psize = (cc->rpages[last_index]->index + 1) << PAGE_SHIFT;
> + psize = (loff_t)(cc->rpages[last_index]->index + 1) << PAGE_SHIFT;
>
>   err = f2fs_get_node_info(fio.sbi, dn.nid, &ni);
>   if (err)
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 19cd03450066..f1f5c701228d 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -184,13 +184,18 @@ static void f2fs_decompress_work(struct 
> bio_post_read_ctx *ctx)
>  }
>
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
> +void f2fs_verify_pages(struct page **rpages, unsigned int cluster_size)
> +{
> + f2fs_decompress_end_io(rpages, cluster_size, false, true);
> +}
> +
>  static void f2fs_verify_bio(struct bio *bio)
>  {
>   struct page *page = bio_first_page_all(bio);
>   struct decompress_io_ctx *dic =
>   (struct decompress_io_ctx *)page_private(page);
>
> - f2fs_decompress_end_io(dic->rpages, dic->cluster_size, false, true);
> + f2fs_verify_pages(dic->rpages, dic->cluster_size);
>   f2fs_free_dic(dic);
>  }
>  #endif
> @@ -507,10 +512,16 @@ static bool __has_merged_page(struct bio *bio, 
> struct inode *inode,
>   bio_for_each_segment_all(bvec, bio, iter_all) {
>   struct page *target = bvec->bv_page;
>
> - if (fscrypt_is_bounce_page(target))
> + if (fscrypt_is_bounce_page(target)) {
>   target = fscrypt_pagecache_page(target);
> - if (f2fs_is_compressed_page(target))
> + if (IS_ERR(target))
> + continue;
> + }
> + if (f2fs_is_compressed_page(target)) {
>   target = f2fs_compress_control_page(target);
> + if (IS_ERR(target))
> + continue;
> + }
>
>   if (inode && inode == target->mapping->host)
>   return true;
> @@ -2039,6 +2050,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, 
> struct bio **bio_ret,
>   if (ret)
>   goto out;
>
> + /* cluster was overwritten as normal cluster */
>   if (dn.data_blkaddr != COMPRESS_ADDR)
>   goto out;
>
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 5d55cef66410..17d2af4eeafb 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -2719,6 +2719,7 @@ static inline void set_compress_context(struct 
> inode *inode)
>   1 << F2FS_I(inode)->i_log_cluster_size;
>   F2FS_I(inode)->i_flags |= F2FS_COMPR_FL;
>   set_inode_flag(inode, FI_COMPRESSED_FILE);
> + stat_inc_compr_inode(inode);
>  }

[f2fs-dev] [PATCH] f2fs: show the CP_PAUSE reason in checkpoint traces

2020-01-02 Thread Sahitya Tummala
Remove the duplicate CP_UMOUNT enum and add the new CP_PAUSE
enum to show the checkpoint reason in the trace prints.

Signed-off-by: Sahitya Tummala 
---
 include/trace/events/f2fs.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 1796ff9..3a17252 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -49,6 +49,7 @@
 TRACE_DEFINE_ENUM(CP_RECOVERY);
 TRACE_DEFINE_ENUM(CP_DISCARD);
 TRACE_DEFINE_ENUM(CP_TRIMMED);
+TRACE_DEFINE_ENUM(CP_PAUSE);
 
 #define show_block_type(type)  \
__print_symbolic(type,  \
@@ -124,7 +125,7 @@
{ CP_SYNC,  "Sync" },   \
{ CP_RECOVERY,  "Recovery" },   \
{ CP_DISCARD,   "Discard" },\
-   { CP_UMOUNT,"Umount" }, \
+   { CP_PAUSE, "Pause" },  \
{ CP_TRIMMED,   "Trimmed" })
 
 #define show_fsync_cpreason(type)  \
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.




Re: [f2fs-dev] [RFC PATCH v5] f2fs: support data compression

2020-01-02 Thread Chao Yu

[f2fs-dev] [Bug 206057] 5.5.0-rc2-next: f2fs is extremely slow, with ext4 system works well

2020-01-02 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=206057

--- Comment #3 from David Heidelberg (okias) (da...@ixit.cz) ---
(also, the patch didn't help)



Re: [f2fs-dev] [RFC PATCH v5] f2fs: support data compression

2020-01-02 Thread Jaegeuk Kim

Re: [f2fs-dev] [PATCH 1/4 v2] f2fs: convert inline_dir early before starting rename

2020-01-02 Thread Jaegeuk Kim
If we hit an error during rename, we'll get two dentries in different
directories.

Chao adds a check for room in the inline_dir, which can avoid a needless
conversion. This should be done under inode_lock(&new_dir).

Signed-off-by: Chao Yu 
Signed-off-by: Jaegeuk Kim 
---
 fs/f2fs/dir.c    | 14 ++++++++++++++
 fs/f2fs/f2fs.h   |  3 +++
 fs/f2fs/inline.c | 42 ++++++++++++++++++++++++++++++++++++++++++--
 fs/f2fs/namei.c  | 37 ++++++++++++++-----------------------
 4 files changed, 71 insertions(+), 25 deletions(-)

diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index c967cacf979e..b56f6060c1a6 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -578,6 +578,20 @@ int f2fs_room_for_filename(const void *bitmap, int slots, 
int max_slots)
goto next;
 }
 
+bool f2fs_has_enough_room(struct inode *dir, struct page *ipage,
+   struct fscrypt_name *fname)
+{
+   struct f2fs_dentry_ptr d;
+   unsigned int bit_pos;
+   int slots = GET_DENTRY_SLOTS(fname_len(fname));
+
+   make_dentry_ptr_inline(dir, &d, inline_data_addr(dir, ipage));
+
+   bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);
+
+   return bit_pos < d.max;
+}
+
 void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
const struct qstr *name, f2fs_hash_t name_hash,
unsigned int bit_pos)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 740e4f11bd1f..0164c8279037 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3121,6 +3121,8 @@ ino_t f2fs_inode_by_name(struct inode *dir, const struct 
qstr *qstr,
struct page **page);
 void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
struct page *page, struct inode *inode);
+bool f2fs_has_enough_room(struct inode *dir, struct page *ipage,
+   struct fscrypt_name *fname);
 void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
const struct qstr *name, f2fs_hash_t name_hash,
unsigned int bit_pos);
@@ -3663,6 +3665,7 @@ void f2fs_truncate_inline_inode(struct inode *inode,
 int f2fs_read_inline_data(struct inode *inode, struct page *page);
 int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
 int f2fs_convert_inline_inode(struct inode *inode);
+int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry);
 int f2fs_write_inline_data(struct inode *inode, struct page *page);
 bool f2fs_recover_inline_data(struct inode *inode, struct page *npage);
 struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 52f85ed07a15..4167e5408151 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -530,7 +530,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, 
struct page *ipage,
return err;
 }
 
-static int f2fs_convert_inline_dir(struct inode *dir, struct page *ipage,
+static int do_convert_inline_dir(struct inode *dir, struct page *ipage,
void *inline_dentry)
 {
if (!F2FS_I(dir)->i_dir_level)
@@ -539,6 +539,44 @@ static int f2fs_convert_inline_dir(struct inode *dir, 
struct page *ipage,
return f2fs_move_rehashed_dirents(dir, ipage, inline_dentry);
 }
 
+int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry)
+{
+   struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+   struct page *ipage;
+   struct fscrypt_name fname;
+   void *inline_dentry = NULL;
+   int err = 0;
+
+   if (!f2fs_has_inline_dentry(dir))
+   return 0;
+
+   f2fs_lock_op(sbi);
+
+   err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &fname);
+   if (err)
+   goto out;
+
+   ipage = f2fs_get_node_page(sbi, dir->i_ino);
+   if (IS_ERR(ipage)) {
+   err = PTR_ERR(ipage);
+   goto out;
+   }
+
+   if (f2fs_has_enough_room(dir, ipage, &fname)) {
+   f2fs_put_page(ipage, 1);
+   goto out;
+   }
+
+   inline_dentry = inline_data_addr(dir, ipage);
+
+   err = do_convert_inline_dir(dir, ipage, inline_dentry);
+   if (!err)
+   f2fs_put_page(ipage, 1);
+out:
+   f2fs_unlock_op(sbi);
+   return err;
+}
+
 int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
const struct qstr *orig_name,
struct inode *inode, nid_t ino, umode_t mode)
@@ -562,7 +600,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct 
qstr *new_name,
 
bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);
if (bit_pos >= d.max) {
-   err = f2fs_convert_inline_dir(dir, ipage, inline_dentry);
+   err = do_convert_inline_dir(dir, ipage, inline_dentry);
if (err)
return err;
err = -EAGAIN;
diff 

Re: [f2fs-dev] [RFC PATCH v5] f2fs: support data compression

2020-01-02 Thread Jaegeuk Kim

[f2fs-dev] [Bug 206057] 5.5.0-rc2-next: f2fs is extremely slow, with ext4 system works well

2020-01-02 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=206057

--- Comment #2 from David Heidelberg (okias) (da...@ixit.cz) ---
I'm not sure I can try bisecting (since I'm using custom patches to run the
device); anyway, if it helps, the F2FS filesystem was created by TWRP (kernel 3.1).

I'll try to get a complete f2fs image created by a recent kernel and retest.



Re: [f2fs-dev] [PATCH] f2fs: remove unneeded check for error allocating bio_post_read_ctx

2020-01-02 Thread Chao Yu
On 2020/1/1 2:14, Eric Biggers wrote:
> From: Eric Biggers 
> 
> Since allocating an object from a mempool never fails when
> __GFP_DIRECT_RECLAIM (which is included in GFP_NOFS) is set, the check
> for failure to allocate a bio_post_read_ctx is unnecessary.  Remove it.
> 
> Signed-off-by: Eric Biggers 

Reviewed-by: Chao Yu 

Thanks,




Re: [f2fs-dev] [PATCH] f2fs: fix deadlock allocating bio_post_read_ctx from mempool

2020-01-02 Thread Chao Yu
On 2020/1/1 2:14, Eric Biggers wrote:
> From: Eric Biggers 
> 
> Without any form of coordination, any case where multiple allocations
> from the same mempool are needed at a time to make forward progress can
> deadlock under memory pressure.
> 
> This is the case for struct bio_post_read_ctx, as one can be allocated
> to decrypt a Merkle tree page during fsverity_verify_bio(), which itself
> is running from a post-read callback for a data bio which has its own
> struct bio_post_read_ctx.
> 
> Fix this by freeing the first bio_post_read_ctx before calling
> fsverity_verify_bio().  This works because verity (if enabled) is always
> the last post-read step.
> 
> This deadlock can be reproduced by trying to read from an encrypted
> verity file after reducing NUM_PREALLOC_POST_READ_CTXS to 1 and patching
> mempool_alloc() to pretend that pool->alloc() always fails.
> 
> Note that since NUM_PREALLOC_POST_READ_CTXS is actually 128, to actually
> hit this bug in practice would require reading from lots of encrypted
> verity files at the same time.  But it's theoretically possible, as N
> available objects doesn't guarantee forward progress when > N/2 threads
> each need 2 objects at a time.
> 
> Fixes: 95ae251fe828 ("f2fs: add fs-verity support")
> Signed-off-by: Eric Biggers 

Reviewed-by: Chao Yu 

Thanks,

