On 15 Nov 2012, at 4:14pm, 杨苏立 Yang Su Li <s...@cs.wisc.edu> wrote:

> 1. fsync actually does two things at once: ordering writes (in a
> barrier-like manner) and forcing cached writes to disk. This makes it very
> difficult to implement fsync efficiently. Logically, however, they are two
> distinct functions, and users might want one but not the other.
> In particular, users might want a barrier but not care about durability
> (much). I have no idea why ordering and durability, which seem quite
> different, ended up bundled together in a single fsync call.
> 
> 2. The fsync semantics in POSIX are a bit vague (at least to me) in a
> concurrent setting. What is the expected behavior when more than one thread
> writes to the same file descriptor, or to different file descriptors
> associated with the same file?

And, as has been posted many times here and elsewhere, on many systems fsync 
does nothing at all: it is literally implemented as a no-op.  So you cannot 
use it as a basis for barriers.

> In modern file system we do all kind of stuff to ensure ordering, and I
> think I can see how leveraging ordered commands (when it is available from
> hardware) could potentially boost performance.

Similarly, on many hard disk subsystems (the circuit board and firmware 
provided with the hard disk), the 'wait until the cache has been written' 
operation does nothing.  So even if you /could/ depend on fsync, you still 
couldn't depend on the hardware.  Read the manufacturer's documentation: they 
don't hide this, they boast about it, because it makes the hard drive far 
faster.  If you really want this feature to work you have to buy expensive 
server-quality hard drives and set the jumpers in the right positions.
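On Linux, one common way to inspect (and, if you accept the performance hit, disable) a drive's volatile write cache is hdparm. A sketch only — the device name is an example, you need root, and whether the drive honours the setting varies by model and firmware:

```shell
# Show whether the drive's volatile write cache is enabled
# (device name is an example; run as root):
hdparm -W /dev/sda

# Disable the write cache so acknowledged writes are actually on
# the medium -- safer for durability, noticeably slower:
hdparm -W0 /dev/sda
```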

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users