When writing a whole cluster, all of its pages are uptodate, so there is
no need to call f2fs_prepare_compress_overwrite. Introduce
f2fs_all_cluster_page_ready to avoid this.
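As a user-space sketch of the idea (not the kernel implementation; `struct page_model` and the function name are illustrative, while the real code tests PageUptodate() on each page of the compress cluster), the check amounts to: if every page of the cluster is already uptodate, the overwrite covers the whole cluster and the prepare step can be skipped.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* User-space model of a page's uptodate flag. */
struct page_model { bool uptodate; };

/* True when every page of the cluster is uptodate, i.e. the write
 * covers the whole cluster and no read-back/preparation is needed. */
static bool all_cluster_pages_ready(const struct page_model *pages, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (!pages[i].uptodate)
			return false;
	return true;
}
```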
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 11 ---
fs/f2fs/data.c | 9 +++--
fs/f2fs/f2fs.h | 4 ++--
Since a pvec holds 15 pages, which is not a multiple of 4, writing
compressed pages in 64K units will call pagevec_lookup_range_tag
again, and sometimes this takes a lot of time.
Use on-stack pages instead of a pvec to mitigate this problem.
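The arithmetic behind the extra lookups can be sketched in user space (a simplified ceiling-division model; the 64M figure and batch sizes are illustrative assumptions): gathering a run of pages whose length is a multiple of 16 in batches of 15 always needs extra calls compared with 16-page batches.

```c
/* Minimum number of lookup calls needed to gather `pages` pages when
 * each call (pagevec_lookup_range_tag in the kernel) returns at most
 * `batch` pages: a simple ceiling division. */
static unsigned long lookup_calls(unsigned long pages, unsigned long batch)
{
	return (pages + batch - 1) / batch;
}
```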
Signed-off-by: Fengnan Chang
---
fs/f2fs/compre
Try to support compressed file write and read amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 19 +++
fs/f2fs/debug.c | 7 +--
fs/f2fs/f2fs.h | 34 ++
3 files changed, 54 insertions(+), 6 deletions(-)
diff --git a/fs/f
Optimise f2fs_write_cache_pages, and support compressed file write/read
amplification accounting.
Fengnan Chang (3):
f2fs: introduce f2fs_all_cluster_page_ready
f2fs: use onstack pages instead of pvec
f2fs: support compressed file write/read amplification
fs/f2fs/compress.c | 15 ++--
Optimise f2fs_write_cache_pages, and support compressed file write
amplification accounting.
Fengnan Chang (3):
f2fs: introduce f2fs_all_cluster_page_uptodate
f2fs: use onstack pages instead of pvec
f2fs: support compressed file write amplification accounting
fs/f2fs/compress.c | 27
Try to support compressed file write amplification accounting.
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c | 14 ++
fs/f2fs/debug.c | 5 +++--
fs/f2fs/f2fs.h | 17 +
3 files changed, 30 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
Introduce f2fs_all_cluster_page_uptodate to try to reduce calls to
f2fs_prepare_compress_overwrite.
Signed-off-by: Fengnan Chang
---
fs/f2fs/compress.c | 23 ++-
fs/f2fs/data.c | 5 +
fs/f2fs/f2fs.h | 2 ++
3 files changed, 29 insertions(+), 1 deletion(-)
diff --git
Try to support forward recovery for compressed files. This is a rough
version and needs more testing to improve it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/node.c | 7 +++
fs/f2fs/recovery.c | 9 +
2 files changed, 16 insertions(+)
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index c280f
Try to support forward recovery for compressed files. This is a rough
version and needs more testing to improve it.
Signed-off-by: Fengnan Chang
---
fs/f2fs/node.c | 7 +++
fs/f2fs/recovery.c | 10 +-
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/node.c b/fs/f2fs
Notify when the filesystem is mounted with the -o inlinecrypt option
but the device does not support inline encryption.
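A user-space model of the intended behaviour (the function name and message text here are illustrative, not the patch's actual code): the mount path compares the requested option against the device capability and emits a notice on mismatch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Model of the mount-time notice: warn when -o inlinecrypt was
 * requested but the underlying device lacks inline encryption.
 * Returns true iff a notice was emitted. */
static bool notify_inlinecrypt_unsupported(bool opt_inlinecrypt,
					   bool dev_supports_inlinecrypt)
{
	if (opt_inlinecrypt && !dev_supports_inlinecrypt) {
		fprintf(stderr,
			"inlinecrypt option set, but device does not support inline encryption\n");
		return true;
	}
	return false;
}
```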
Signed-off-by: Fengnan Chang
---
fs/f2fs/f2fs.h | 18 ++
fs/f2fs/super.c | 7 +++
2 files changed, 25 insertions(+)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 38cbed0f5
Notify when the filesystem is mounted with the -o inlinecrypt option
but the device does not support inline encryption.
Signed-off-by: Fengnan Chang
---
fs/ext4/super.c | 12
1 file changed, 12 insertions(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 81749eaddf4c..f91454d3a877 100644
--- a/fs/e
Introduce blk_crypto_supported. Filesystems may use this to check
whether the storage device supports inline encryption.
Signed-off-by: Fengnan Chang
---
block/blk-crypto.c | 6 +-
include/linux/blk-crypto.h | 5 +
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/block/
When a compressed file has blocks, f2fs_ioc_start_atomic_write will
succeed, but the compressed flag will remain in the inode. Writing a
partial compressed cluster and committing the atomic write will cause
data corruption.
This is the reproduction process:
Step 1:
create a compressed file, write 64K data, cal
When overwriting only the first block of a cluster, since the cluster
is not full, f2fs_write_multi_pages will call f2fs_write_raw_pages, and
the whole cluster becomes uncompressed even though the data is
compressible. This may make random write benchmark scores drop a lot.
root# dd if=/dev/zero o
separate buffered and direct io in block allocation statistics.
New output will look like this:
        buffer  direct  segments
IPU:    0       0       N/A
SSR:    0       0       0
LFS:    0       0       0
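A minimal user-space sketch of the split counters (the enum names and table layout here are illustrative; the kernel keeps these counters in its per-superblock stat info and prints them through debugfs):

```c
#include <assert.h>
#include <stdio.h>

enum io_type { IO_BUFFERED, IO_DIRECT, NR_IO_TYPES };
enum alloc_type { ALLOC_IPU, ALLOC_SSR, ALLOC_LFS, NR_ALLOC_TYPES };

/* Block allocation counters, split by policy and by I/O type. */
static unsigned long alloc_cnt[NR_ALLOC_TYPES][NR_IO_TYPES];

static void count_alloc(enum alloc_type a, enum io_type t)
{
	alloc_cnt[a][t]++;
}

static void show_stats(void)
{
	static const char *name[NR_ALLOC_TYPES] = { "IPU", "SSR", "LFS" };

	printf("     buffer  direct  segments\n");
	for (int a = 0; a < NR_ALLOC_TYPES; a++) {
		printf("%s: %6lu  %6lu  ", name[a],
		       alloc_cnt[a][IO_BUFFERED], alloc_cnt[a][IO_DIRECT]);
		/* IPU rewrites blocks in place, so no segment count. */
		if (a == ALLOC_IPU)
			printf("N/A\n");
		else
			printf("%lu\n", 0UL); /* segments omitted in this model */
	}
}
```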
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.
For now, overwriting a file with direct I/O uses the in-place policy
but is not counted; fix it. Also use stat_add_inplace_blocks(sbi, 1, )
instead of stat_inc_inplace_blocks(sb, ).
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c| 4 +++-
fs/f2fs/f2fs.h| 8
fs/f2fs/segment.c | 2 +-
3 files c
When mounting with the whint_mode option, it doesn't work; fix it.
Fixes: d0b9e42ab615 (f2fs: introduce inmem curseg)
Reported-by: tanghuan
Signed-off-by: Fengnan Chang
---
fs/f2fs/super.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 78ebc
separate buffered and direct io in block allocation statistics.
New output will look like this:
        buffer  direct  segments
IPU:    0       0       N/A
SSR:    0       0       0
LFS:    0       0       0
Signed-off-by: Fengnan Chang
Reviewed-by: Chao
For now, overwriting a file with direct I/O uses the in-place policy
but is not counted; fix it. Also use stat_add_inplace_blocks(sbi, 1, )
instead of stat_inc_inplace_blocks(sb, ).
Signed-off-by: Fengnan Chang
---
fs/f2fs/data.c| 7 ++-
fs/f2fs/f2fs.h| 8
fs/f2fs/segment.c | 2 +-
3 file
For now, when overwriting a compressed file, we need to read the old
data into the page cache first and then update the pages.
But when we overwrite a whole cluster, we don't need the old data
anymore.
So, remove the read-data step in this case. I have made
some simple changes to test; tests have shown that this can lead to
s
When we create a directory with compression enabled, every file written
into that directory will be compressed. But sometimes we may know in
advance that a new file cannot meet the compression ratio requirements.
We need a nocompress extension to skip those files and avoid the
unnecessary compressed-page test.
After adding nocompress_ex
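The precedence one would expect can be sketched in user space (the list handling, names, and the no-extension behaviour here are illustrative assumptions; the real parsing lives in f2fs's mount-option code): nocompress_extension overrides compress_extension, and "*" acts as a wildcard.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Return the extension part of a filename, or NULL if there is none. */
static const char *file_ext(const char *name)
{
	const char *dot = strrchr(name, '.');
	return (dot && dot[1]) ? dot + 1 : NULL;
}

/* True if `ext` matches any entry in the list; "*" matches anything. */
static bool list_has(const char *const *list, size_t n, const char *ext)
{
	for (size_t i = 0; i < n; i++)
		if (!strcmp(list[i], "*") || !strcmp(list[i], ext))
			return true;
	return false;
}

/* nocompress_extension takes precedence over compress_extension.
 * Files without an extension are not compressed in this model. */
static bool should_compress(const char *name,
			    const char *const *cext, size_t nc,
			    const char *const *next, size_t nn)
{
	const char *ext = file_ext(name);

	if (!ext)
		return false;
	if (list_has(next, nn, ext))
		return false;
	return list_has(cext, nc, ext);
}
```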