Hi,

On 2019-12-8 21:15, Hongwei Qin wrote:
Hi,

On Sun, Dec 8, 2019 at 12:01 PM Chao Yu <c...@kernel.org> wrote:

Hello,

On 2019-12-7 18:10, 红烧的威化饼 wrote:
Hi F2FS experts,
The following confuses me:

A typical fsync() goes like this:
1) Issue data block IOs
2) Wait for completion
3) Issue chained node block IOs
4) Wait for completion
5) Issue flush command

To preserve data consistency under sudden power failure, this requires
that the storage device persists data blocks prior to node blocks.
Otherwise, after a sudden power failure, it's possible that a persisted node
block points to NULL data blocks.

Firstly, this doesn't break POSIX semantics, right? Since fsync() didn't return
successfully before the sudden power-cut, we cannot guarantee that the data is fully
persisted in that case.

However, what you want looks like atomic write semantics, which is mostly what
databases want to guarantee during a db file update.

F2FS supports atomic_write via ioctl, which is used by SQLite officially; I
suggest you check its implementation details.

Thanks,


Thanks for your kind reply.
It's true that if we hit a power failure before fsync() completes,
POSIX doesn't require the FS to recover the file. However, consider the
following situation:

1) Data block IOs (Not persisted)
2) Node block IOs (All Persisted)
3) Power failure

Since the node blocks were all persisted before the power failure, the node
chain isn't broken. Note that this file's new data was not properly
persisted before the crash, so the recovery process should be able to
recognize this situation and avoid recovering this file. However, since
the node chain is not broken, might the recovery process regard
this file as recoverable?

So this is why the atomic write submission tags PREFLUSH & FUA on the last node bio: it keeps all data IO persisted before the node IO. The recovery flag is only set in the last node block of the node chain, so if the last node block is not persisted, none of the atomic write data will be recovered. With this mechanism, we can guarantee atomic write semantics.

__write_node_page()
{
...
        if (atomic && !test_opt(sbi, NOBARRIER))
                fio.op_flags |= REQ_PREFLUSH | REQ_FUA;
...
}

f2fs_fsync_node_page()
{
...
                        if (!atomic || page == last_page) {
                                set_fsync_mark(page, 1);
                                if (IS_INODE(page)) {
                                        if (is_inode_flag_set(inode,
                                                                FI_DIRTY_INODE))
                                                f2fs_update_inode(inode, page);
                                        set_dentry_mark(page,
                                                f2fs_need_dentry_mark(sbi, ino));
                                }
                                /* may be written by another thread */
                                if (!PageDirty(page))
                                        set_page_dirty(page);
                        }
...
}


Thanks!


However, according to this study
(https://www.usenix.org/conference/fast18/presentation/won), the persistence
order of requests doesn't necessarily equal the request completion order (due
to device volatile caches). This means it's possible that the node blocks
get persisted prior to the data blocks.

Does F2FS have other mechanisms to prevent such inconsistency? Or does it
require the device to persist data without reordering?

Thanks!

Hongwei
_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel