[f2fs-dev] [PATCH] f2fs: protect new segment allocation in expand_inode_data

2020-05-26 Thread Daeho Jeong
From: Daeho Jeong 

Found a new segment allocation without f2fs_lock_op() in
expand_inode_data(). So, if we do fallocate() for a pinned file and
trigger checkpoints very frequently and simultaneously, F2FS gets
stuck in the below code of do_checkpoint() forever.

  f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
  /* Wait for all dirty meta pages to be submitted for IO */
<= if fallocate() here,
  f2fs_wait_on_all_pages(sbi, F2FS_DIRTY_META); <= it'll wait forever.
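
For context, f2fs_lock_op() takes cp_rwsem shared while the checkpoint path
takes it exclusive, so wrapping the allocation prevents it from racing with
an in-flight checkpoint. A simplified sketch of the helpers involved (as in
fs/f2fs/f2fs.h):

	/* Simplified from fs/f2fs/f2fs.h: ordinary ops vs. checkpoint. */
	static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
	{
		down_read(&sbi->cp_rwsem);	/* shared among ordinary ops */
	}

	static inline void f2fs_unlock_op(struct f2fs_sb_info *sbi)
	{
		up_read(&sbi->cp_rwsem);
	}

	/* The checkpoint side (block_operations()) takes it exclusive: */
	static inline void f2fs_lock_all(struct f2fs_sb_info *sbi)
	{
		down_write(&sbi->cp_rwsem);
	}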

Signed-off-by: Daeho Jeong 
---
 fs/f2fs/file.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index f7de2a1da528..14ace885baa9 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -1660,7 +1660,11 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
 
 	down_write(&sbi->pin_sem);
 	map.m_seg_type = CURSEG_COLD_DATA_PINNED;
+
+	f2fs_lock_op(sbi);
 	f2fs_allocate_new_segments(sbi, CURSEG_COLD_DATA);
+	f2fs_unlock_op(sbi);
+
 	err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_DIO);
 	up_write(&sbi->pin_sem);
 
-- 
2.27.0.rc0.183.gde8f92d652-goog





[f2fs-dev] [PATCH] f2fs_io: add randread

2020-05-26 Thread Daeho Jeong
From: Daeho Jeong 

I've added a new command to evaluate random read performance.
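
A quick run might look like the following (mount point and file name are
just examples); each pass reads one 64KB chunk (16 x 4KB) from a random
4KB-aligned offset:

  # f2fs_io randread 16 1000 dio /mnt/f2fs/testfile
  Read 65536000 bytes

where the byte count is 1000 * 64KB, assuming no read comes up short.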

Signed-off-by: Daeho Jeong 
---
 tools/f2fs_io/f2fs_io.c | 64 +
 1 file changed, 64 insertions(+)

diff --git a/tools/f2fs_io/f2fs_io.c b/tools/f2fs_io/f2fs_io.c
index d1889ff..30544c1 100644
--- a/tools/f2fs_io/f2fs_io.c
+++ b/tools/f2fs_io/f2fs_io.c
@@ -551,6 +551,69 @@ static void do_read(int argc, char **argv, const struct cmd_desc *cmd)
exit(0);
 }
 
+#define randread_desc "random read data from file"
+#define randread_help  \
+"f2fs_io randread [chunk_size in 4kb] [count] [IO] [file_path]\n\n"\
+"Do random read data in file_path\n"   \
+"IO can be\n"  \
+"  buffered : buffered IO\n"   \
+"  dio  : direct IO\n" \
+
+static void do_randread(int argc, char **argv, const struct cmd_desc *cmd)
+{
+   u64 buf_size = 0, ret = 0, read_cnt = 0;
+   u64 idx, end_idx, aligned_size;
+   char *buf = NULL;
+   unsigned bs, count, i;
+   int flags = 0;
+   int fd;
+   time_t t;
+   struct stat stbuf;
+
+   if (argc != 5) {
+   fputs("Excess arguments\n\n", stderr);
+   fputs(cmd->cmd_help, stderr);
+   exit(1);
+   }
+
+   bs = atoi(argv[1]);
+   if (bs > 1024)
+   die("Too big chunk size - limit: 4MB");
+   buf_size = bs * 4096;
+
+   buf = aligned_xalloc(4096, buf_size);
+
+   count = atoi(argv[2]);
+   if (!strcmp(argv[3], "dio"))
+   flags |= O_DIRECT;
+   else if (strcmp(argv[3], "buffered"))
+   die("Wrong IO type");
+
+   fd = xopen(argv[4], O_RDONLY | flags, 0);
+
+	if (fstat(fd, &stbuf) != 0)
+   die_errno("fstat of source file failed");
+
+   aligned_size = (u64)stbuf.st_size & ~((u64)(4096 - 1));
+   if (aligned_size < buf_size)
+   die("File is too small to random read");
+   end_idx = (u64)(aligned_size - buf_size) / (u64)4096 + 1;
+
+	srand((unsigned) time(NULL));
+
+   for (i = 0; i < count; i++) {
+   idx = rand() % end_idx;
+
+   ret = pread(fd, buf, buf_size, 4096 * idx);
+   if (ret != buf_size)
+   break;
+
+   read_cnt += ret;
+   }
+   printf("Read %"PRIu64" bytes\n", read_cnt);
+   exit(0);
+}
+
 struct file_ext {
__u32 f_pos;
__u32 start_blk;
@@ -841,6 +904,7 @@ const struct cmd_desc cmd_list[] = {
CMD(fallocate),
CMD(write),
CMD(read),
+   CMD(randread),
CMD(fiemap),
CMD(gc_urgent),
CMD(defrag_file),
-- 
2.27.0.rc0.183.gde8f92d652-goog





Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-26 Thread Chao Yu
On 2020/5/26 9:56, Jaegeuk Kim wrote:
> On 05/26, Chao Yu wrote:
>> On 2020/5/26 9:11, Chao Yu wrote:
>>> On 2020/5/25 23:06, Jaegeuk Kim wrote:
 On 05/25, Chao Yu wrote:
> On 2020/5/25 11:56, Jaegeuk Kim wrote:
>> Shutdown test is sometimes hung, since it keeps trying to flush dirty node pages

71.07%  0.01%  kworker/u256:1+  [kernel.kallsyms]  [k] wb_writeback
        |
         --71.06%--wb_writeback
                   |
                   |--68.96%--__writeback_inodes_wb
                   |          |
                   |           --68.95%--writeback_sb_inodes
                   |                     |
                   |                     |--65.08%--__writeback_single_inode
                   |                     |          |
                   |                     |           --64.35%--do_writepages
                   |                     |                     |
                   |                     |                     |--59.83%--f2fs_write_node_pages
                   |                     |                     |          |
                   |                     |                     |           --59.74%--f2fs_sync_node_pages
                   |                     |                     |                     |
                   |                     |                     |                     |--27.91%--pagevec_lookup_range_tag
                   |                     |                     |                     |          |
                   |                     |                     |                     |           --27.90%--find_get_pages_range_tag

Before umount, the kworker will always hold one core, which does not look
reasonable. To avoid that, could we just allow node writes? Since node
writes are out-of-place updates and checkpointing is not allowed, we don't
need to worry about their effect on data from the previous checkpoint, and
it can reduce the memory footprint taken by node pages.

Thanks,

>
> IMO, for the umount case, we should drop dirty references and dirty
> meta/data pages, like the change for node pages, to avoid a potential
> dead loop...

 I believe we're doing that for them. :P
>>>
>>> Actually, I mean do we need to drop dirty meta/data pages explicitly as 
>>> below:
>>>
>>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
>>> index 3dc3ac6fe143..4c08fd0a680a 100644
>>> --- a/fs/f2fs/checkpoint.c
>>> +++ b/fs/f2fs/checkpoint.c
>>> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
>>>
>>> trace_f2fs_writepage(page, META);
>>>
>>> -   if (unlikely(f2fs_cp_error(sbi)))
>>> +   if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   dec_page_count(sbi, F2FS_DIRTY_META);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>> goto redirty_out;
>>> +   }
>>> if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>>> goto redirty_out;
>>> if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 48a622b95b76..94b342802513 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
>>>
>>> /* we should bypass data pages to proceed the kworkder jobs */
>>> if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   inode_dec_dirty_pages(inode);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>
>> Oh, I noticed that previously we drop non-directory inodes' dirty pages
>> directly; however, during umount, we'd better drop directory inodes'
>> dirty pages as well, right?
> 
> Hmm, I remember I dropped them before. Need to double check.
> 
>>
>>> mapping_set_error(page->mapping, -EIO);
>>> /*
>>>  * don't drop any dirty dentry pages for keeping lastest
>>>

>
> Thanks,
>
>> in an infinite loop. Let's drop dirty pages at umount in that case.
>>
>> Signed-off-by: Jaegeuk Kim 
>> ---
>> v3:
>>  - fix wrong unlock
>>
>> v2:
>>  - fix typos
>>
>>  fs/f2fs/node.c | 9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>> index e632de10aedab..e0bb0f7e0506e 100644
>> --- a/fs/f2fs/node.c
>> +++ b/fs/f2fs/node.c
>> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
>>  
>>  trace_f2fs_writepage(page, NODE);
>>  
>> -	if (unlikely(f2fs_cp_error(sbi)))
>> +	if (unlikely(f2fs_cp_error(sbi))) {
>> +		if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>> +			ClearPageUptodate(page);
>> +			dec_page_count(sbi, F2FS_DIRTY_NODES);
>> +			unlock_page(page);
>> +			return 0;
>> +		}
>> +		goto redirty_out;
>> +	}

[f2fs-dev] [PATCH] f2fs: fix retry logic in f2fs_write_cache_pages()

2020-05-26 Thread Sahitya Tummala
In case a compressed file is getting overwritten, the current retry
logic doesn't include the current page to be retried now, as it sets
the new start index as 0 and the new end index as writeback_index - 1.
This causes the corresponding cluster to be uncompressed and written
as normal pages without compression. Fix this by allowing writeback to
be retried for the current page as well (in case a compressed page is
getting retried due to an index mismatch with the cluster index), so
that this cluster can be written compressed in case of overwrite.
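
For context, with this fix the range_cyclic tail of f2fs_write_cache_pages()
ends up looking roughly like the sketch below (simplified); writeback_index
records where this cyclic pass started scanning, and (pgoff_t)-1 means "no
upper bound", so the retry pass now covers the page whose cluster forced
the retry:

	/* Simplified tail of f2fs_write_cache_pages() after this fix. */
	if ((!cycled && !done) || retry) {
		cycled = 1;
		index = 0;
		/*
		 * On a compression retry, rescan to the end of file so the
		 * current page is included; otherwise only finish the
		 * wrapped cyclic pass up to where scanning started.
		 */
		end = retry ? -1 : writeback_index - 1;
		goto retry;
	}
	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
		mapping->writeback_index = done_index;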

Signed-off-by: Sahitya Tummala 
---
 fs/f2fs/data.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 4af5fcd..bfd1df4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3024,7 +3024,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
if ((!cycled && !done) || retry) {
cycled = 1;
index = 0;
-   end = writeback_index - 1;
+   end = retry ? -1 : writeback_index - 1;
goto retry;
}
if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.





Re: [f2fs-dev] Discard issue

2020-05-26 Thread Chao Yu
On 2020/5/27 9:58, Jaegeuk Kim wrote:
> On 05/27, Chao Yu wrote:
>> On 2020/5/26 15:44, Chao Yu wrote:
>>> On 2020/5/26 10:26, Jaegeuk Kim wrote:
 On 05/26, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>> Hi Chao,
>>
>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with 
>> error
>
> (1000s) do you mean time in single round or total time of multi rounds?
>
>> injection. Do you have any issue from your side?
>
> I haven't hit that before, in my test, in single round, fsstress won't 
> last long
> time (normally about 10s+ for each round).
>
> Below is por_fsstress() implementation in my code base:
>
> por_fsstress()
> {
> _fs_opts
>
> while true; do
> ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 
> -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 
> -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f 
> insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 20 -d $TESTDIR/test &
> sleep 10
> src/godown $TESTDIR
> killall fsstress
> sleep 5
> umount $TESTDIR
> if [ $? -ne 0 ]; then
> for i in `seq 1 50`
> do
> umount $TESTDIR
>                         if [ $? -eq 0 ]; then
> break
> fi
> sleep 5
> done
> fi
> echo 3 > /proc/sys/vm/drop_caches
> _fsck
> _mount f2fs
> rm $TESTDIR/testfile
> touch $TESTDIR/testfile
> umount $TESTDIR
> _fsck
> _mount f2fs
> _rm_50
> done
> }
>
> Did you update this code?
>
> Could you share more test configuration, like mkfs option, device size, 
> mount option,
> new por_fsstress() implementation if it exists? I can try to reproduce 
> this issue
> in my env.

 I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.

 https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249

 ./run.sh por_fsstress
>>>
>>> Reproducing...
>>
>> After one night of reproducing, the issue still does not occur.
>>
>> BTW, I enabled the below features in the image:
>>
>> extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime 
>> compression
>>
>> and tagged the compression flag on the root inode.
> 
> Could you check whether the disk supports discard? I didn't set compression
> on the root inode.

I started reviewing the discard support code yesterday; however, I have not
found anything suspicious yet.

> 
> I set _mkfs with "f2fs":
> mkfs.f2fs -f -O encrypt -O extra_attr -O quota -O inode_checksum /dev/$DEV;;

Let me update test configs.

Thanks,

> 
> # run.sh reload
> # run.sh por_fsstress
> 
>>
>>>
>>> Thanks,
>>>

>
> Thanks,
>
>>
>> Thanks,
>> .
>>
 .

>>>
>>>
>>> .
>>>
> .
> 




Re: [f2fs-dev] Discard issue

2020-05-26 Thread Jaegeuk Kim
On 05/27, Chao Yu wrote:
> On 2020/5/26 15:44, Chao Yu wrote:
> > On 2020/5/26 10:26, Jaegeuk Kim wrote:
> >> On 05/26, Chao Yu wrote:
> >>> Hi Jaegeuk,
> >>>
> >>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>  Hi Chao,
> 
>  I'm hitting segment.c:1065 when running longer fsstress (1000s) with 
>  error
> >>>
> >>> (1000s) do you mean time in single round or total time of multi rounds?
> >>>
>  injection. Do you have any issue from your side?
> >>>
> >>> I haven't hit that before, in my test, in single round, fsstress won't 
> >>> last long
> >>> time (normally about 10s+ for each round).
> >>>
> >>> Below is por_fsstress() implementation in my code base:
> >>>
> >>> por_fsstress()
> >>> {
> >>> _fs_opts
> >>>
> >>> while true; do
> >>> ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 
> >>> -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 
> >>> -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f 
> >>> insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 20 -d $TESTDIR/test &
> >>> sleep 10
> >>> src/godown $TESTDIR
> >>> killall fsstress
> >>> sleep 5
> >>> umount $TESTDIR
> >>> if [ $? -ne 0 ]; then
> >>> for i in `seq 1 50`
> >>> do
> >>> umount $TESTDIR
> >>>                         if [ $? -eq 0 ]; then
> >>> break
> >>> fi
> >>> sleep 5
> >>> done
> >>> fi
> >>> echo 3 > /proc/sys/vm/drop_caches
> >>> _fsck
> >>> _mount f2fs
> >>> rm $TESTDIR/testfile
> >>> touch $TESTDIR/testfile
> >>> umount $TESTDIR
> >>> _fsck
> >>> _mount f2fs
> >>> _rm_50
> >>> done
> >>> }
> >>>
> >>> Did you update this code?
> >>>
> >>> Could you share more test configuration, like mkfs option, device size, 
> >>> mount option,
> >>> new por_fsstress() implementation if it exists? I can try to reproduce 
> >>> this issue
> >>> in my env.
> >>
> >> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
> >>
> >> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
> >>
> >> ./run.sh por_fsstress
> > 
> > Reproducing...
> 
> After one night of reproducing, the issue still does not occur.
> 
> BTW, I enabled the below features in the image:
> 
> extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime 
> compression
> 
> and tagged the compression flag on the root inode.

Could you check whether the disk supports discard? I didn't set compression
on the root inode.

I set _mkfs with "f2fs":
mkfs.f2fs -f -O encrypt -O extra_attr -O quota -O inode_checksum /dev/$DEV;;

# run.sh reload
# run.sh por_fsstress

> 
> > 
> > Thanks,
> > 
> >>
> >>>
> >>> Thanks,
> >>>
> 
>  Thanks,
>  .
> 
> >> .
> >>
> > 
> > 
> > .
> > 




Re: [f2fs-dev] [PATCH] f2fs: code cleanup by removing ifdef macro surrounding

2020-05-26 Thread Chao Yu
On 2020/5/26 17:05, Chengguang Xu wrote:
> Define f2fs_listxattr to NULL when CONFIG_F2FS_FS_XATTR is not
> enabled; then we can remove many ugly ifdef macros in the code.
> 
> Signed-off-by: Chengguang Xu 

Reviewed-by: Chao Yu 

Thanks,




Re: [f2fs-dev] Discard issue

2020-05-26 Thread Chao Yu
On 2020/5/26 15:44, Chao Yu wrote:
> On 2020/5/26 10:26, Jaegeuk Kim wrote:
>> On 05/26, Chao Yu wrote:
>>> Hi Jaegeuk,
>>>
>>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
 Hi Chao,

 I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
>>>
>>> (1000s) do you mean time in single round or total time of multi rounds?
>>>
 injection. Do you have any issue from your side?
>>>
>>> I haven't hit that before, in my test, in single round, fsstress won't last 
>>> long
>>> time (normally about 10s+ for each round).
>>>
>>> Below is por_fsstress() implementation in my code base:
>>>
>>> por_fsstress()
>>> {
>>> _fs_opts
>>>
>>> while true; do
>>> ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 
>>> -r -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 
>>> -f bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f 
>>> insert=1 -f resvsp=0 -f unresvsp=0 -S t -p 20 -n 20 -d $TESTDIR/test &
>>> sleep 10
>>> src/godown $TESTDIR
>>> killall fsstress
>>> sleep 5
>>> umount $TESTDIR
>>> if [ $? -ne 0 ]; then
>>> for i in `seq 1 50`
>>> do
>>> umount $TESTDIR
>>>                         if [ $? -eq 0 ]; then
>>> break
>>> fi
>>> sleep 5
>>> done
>>> fi
>>> echo 3 > /proc/sys/vm/drop_caches
>>> _fsck
>>> _mount f2fs
>>> rm $TESTDIR/testfile
>>> touch $TESTDIR/testfile
>>> umount $TESTDIR
>>> _fsck
>>> _mount f2fs
>>> _rm_50
>>> done
>>> }
>>>
>>> Did you update this code?
>>>
>>> Could you share more test configuration, like mkfs option, device size, 
>>> mount option,
>>> new por_fsstress() implementation if it exists? I can try to reproduce this 
>>> issue
>>> in my env.
>>
>> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
>>
>> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
>>
>> ./run.sh por_fsstress
> 
> Reproducing...

After one night of reproducing, the issue still does not occur.

BTW, I enabled the below features in the image:

extra_attr project_quota inode_checksum flexible_inline_xattr inode_crtime 
compression

and tagged the compression flag on the root inode.

> 
> Thanks,
> 
>>
>>>
>>> Thanks,
>>>

 Thanks,
 .

>> .
>>
> 
> 
> .
> 




[f2fs-dev] [PATCH] f2fs: code cleanup by removing ifdef macro surrounding

2020-05-26 Thread Chengguang Xu
Define f2fs_listxattr to NULL when CONFIG_F2FS_FS_XATTR is not
enabled; then we can remove many ugly ifdef macros in the code.
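
This works because the VFS checks ->listxattr for NULL before using it, so
when the feature is compiled out the symbol can simply become a NULL define
and initializers like ".listxattr = f2fs_listxattr" stay valid either way.
A generic sketch of the pattern (names hypothetical):

	#ifdef CONFIG_FOO_XATTR
	ssize_t foo_listxattr(struct dentry *dentry, char *buf, size_t size);
	#else
	#define foo_listxattr NULL	/* .listxattr = foo_listxattr -> NULL */
	#endif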

Signed-off-by: Chengguang Xu 
---
 fs/f2fs/file.c  | 2 --
 fs/f2fs/namei.c | 8 --------
 fs/f2fs/xattr.h | 6 +-----
 3 files changed, 1 insertion(+), 15 deletions(-)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 6ab8f621a3c5..330397a2fc12 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -968,9 +968,7 @@ const struct inode_operations f2fs_file_inode_operations = {
.setattr= f2fs_setattr,
.get_acl= f2fs_get_acl,
.set_acl= f2fs_set_acl,
-#ifdef CONFIG_F2FS_FS_XATTR
.listxattr  = f2fs_listxattr,
-#endif
.fiemap = f2fs_fiemap,
 };
 
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
index f54119da2217..2091d17ff26b 100644
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -1287,9 +1287,7 @@ const struct inode_operations f2fs_encrypted_symlink_inode_operations = {
.get_link   = f2fs_encrypted_get_link,
.getattr= f2fs_getattr,
.setattr= f2fs_setattr,
-#ifdef CONFIG_F2FS_FS_XATTR
.listxattr  = f2fs_listxattr,
-#endif
 };
 
 const struct inode_operations f2fs_dir_inode_operations = {
@@ -1307,9 +1305,7 @@ const struct inode_operations f2fs_dir_inode_operations = {
.setattr= f2fs_setattr,
.get_acl= f2fs_get_acl,
.set_acl= f2fs_set_acl,
-#ifdef CONFIG_F2FS_FS_XATTR
.listxattr  = f2fs_listxattr,
-#endif
.fiemap = f2fs_fiemap,
 };
 
@@ -1317,9 +1313,7 @@ const struct inode_operations f2fs_symlink_inode_operations = {
.get_link   = f2fs_get_link,
.getattr= f2fs_getattr,
.setattr= f2fs_setattr,
-#ifdef CONFIG_F2FS_FS_XATTR
.listxattr  = f2fs_listxattr,
-#endif
 };
 
 const struct inode_operations f2fs_special_inode_operations = {
@@ -1327,7 +1321,5 @@ const struct inode_operations f2fs_special_inode_operations = {
.setattr= f2fs_setattr,
.get_acl= f2fs_get_acl,
.set_acl= f2fs_set_acl,
-#ifdef CONFIG_F2FS_FS_XATTR
.listxattr  = f2fs_listxattr,
-#endif
 };
diff --git a/fs/f2fs/xattr.h b/fs/f2fs/xattr.h
index 938fcd20565d..d43c0761302d 100644
--- a/fs/f2fs/xattr.h
+++ b/fs/f2fs/xattr.h
@@ -136,6 +136,7 @@ extern void f2fs_destroy_xattr_caches(struct f2fs_sb_info *);
 #else
 
 #define f2fs_xattr_handlersNULL
+#define f2fs_listxattr NULL
 static inline int f2fs_setxattr(struct inode *inode, int index,
const char *name, const void *value, size_t size,
struct page *page, int flags)
@@ -148,11 +149,6 @@ static inline int f2fs_getxattr(struct inode *inode, int index,
 {
return -EOPNOTSUPP;
 }
-static inline ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer,
-   size_t buffer_size)
-{
-   return -EOPNOTSUPP;
-}
 static inline int f2fs_init_xattr_caches(struct f2fs_sb_info *sbi) { return 0; }
 static inline void f2fs_destroy_xattr_caches(struct f2fs_sb_info *sbi) { }
 #endif
-- 
2.20.1






Re: [f2fs-dev] Discard issue

2020-05-26 Thread Chao Yu
On 2020/5/26 10:26, Jaegeuk Kim wrote:
> On 05/26, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2020/5/26 9:59, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> I'm hitting segment.c:1065 when running longer fsstress (1000s) with error
>>
>> (1000s) do you mean time in single round or total time of multi rounds?
>>
>>> injection. Do you have any issue from your side?
>>
>> I haven't hit that before, in my test, in single round, fsstress won't last 
>> long
>> time (normally about 10s+ for each round).
>>
>> Below is por_fsstress() implementation in my code base:
>>
>> por_fsstress()
>> {
>> _fs_opts
>>
>> while true; do
>> ltp/fsstress -x "echo 3 > /proc/sys/vm/drop_caches" -X 10 -r 
>> -f fsync=8 -f sync=0 -f write=4 -f dwrite=2 -f truncate=6 -f allocsp=0 -f 
>> bulkstat=0 -f bulkstat1=0 -f freesp=0 -f zero=1 -f collapse=1 -f insert=1 -f 
>> resvsp=0 -f unresvsp=0 -S t -p 20 -n 20 -d $TESTDIR/test &
>> sleep 10
>> src/godown $TESTDIR
>> killall fsstress
>> sleep 5
>> umount $TESTDIR
>> if [ $? -ne 0 ]; then
>> for i in `seq 1 50`
>> do
>> umount $TESTDIR
>>                         if [ $? -eq 0 ]; then
>> break
>> fi
>> sleep 5
>> done
>> fi
>> echo 3 > /proc/sys/vm/drop_caches
>> _fsck
>> _mount f2fs
>> rm $TESTDIR/testfile
>> touch $TESTDIR/testfile
>> umount $TESTDIR
>> _fsck
>> _mount f2fs
>> _rm_50
>> done
>> }
>>
>> Did you update this code?
>>
>> Could you share more test configuration, like mkfs option, device size, 
>> mount option,
>> new por_fsstress() implementation if it exists? I can try to reproduce this 
>> issue
>> in my env.
> 
> I just changed, in __run_godown_fsstress(), sleep 1000 instead of 10.
> 
> https://github.com/jaegeuk/xfstests-f2fs/blob/f2fs/run.sh#L249
> 
> ./run.sh por_fsstress

Reproducing...

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>> .
>>>
> .
> 

