On 02-03-18 17:06, Maxim Dounin wrote:
>>> The question here is - why you want the file to be on disk, and
>>> not just in a buffer?  Because you expect the server to die in a
>>> few seconds without flushing the file to disk?  How probable it
>>> is, compared to the probability of the disk to die?  A more
>>> reliable server can make this probability negligible, hence the
>>> suggestion.
>>
>> Because the files I upload to nginx servers are important to me.
>> Please step back a little and forget that we are talking about
>> nginx or an HTTP server.
>
> If files are indeed important to you, you have to keep a second
> copy in a different location, or even in multiple different
> locations.  Trying to do fsync() won't save your data in a lot of
> quite realistic scenarios, but certainly will imply performance
> (and complexity, from nginx code point of view) costs.

But do you understand that, even in a replicated setup, the interval before the data reaches permanent storage may be significantly long, and that, under your own assumptions, it is random and unpredictable?

In other words, without fsync() it is not possible to make any judgment about the consistency of your data; consequently, it is not possible to implement a program that can tell whether your data is consistent or not.

Don't you think that your arguments are fundamentally flawed, because you insist on the probabilistic nature of the problem, while it is actually deterministic?
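
To make this concrete, here is a minimal POSIX C sketch of such a deterministic barrier (illustration only, not nginx code; write_durably is a hypothetical name, and error handling is simplified):

#include <fcntl.h>
#include <unistd.h>

/* Write buf to path and return 0 only once both the file contents and
 * the directory entry referring to them have reached stable storage.
 * Simplified: a real implementation would retry short writes and
 * handle EINTR. */
static int
write_durably(const char *dir, const char *path, const void *buf, size_t len)
{
    int  fd, dfd;

    fd = open(path, O_WRONLY|O_CREAT|O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, len) != (ssize_t) len) {
        close(fd);
        return -1;
    }

    /* The durability barrier: if this returns 0, the contents are on
     * disk no matter when the machine dies afterwards. */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* Sync the containing directory as well, so the new name itself
     * survives a crash. */
    dfd = open(dir, O_RDONLY);
    if (dfd < 0)
        return -1;
    if (fsync(dfd) < 0) {
        close(dfd);
        return -1;
    }
    return close(dfd);
}

Before the fsync() calls return, no durability statement can be made at all; after they return, a definite one can. That is what I mean by deterministic.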

By the way, even LevelDB has options for synchronous writes:

https://github.com/google/leveldb/blob/master/doc/index.md#synchronous-writes

and it implements them with fsync().
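
For reference, the same knob through LevelDB's C API (leveldb/c.h) is a single flag; a minimal sketch, with the database path and the key/value as placeholders:

#include <stdio.h>
#include <leveldb/c.h>

int
main(void)
{
    char                    *err = NULL;
    leveldb_t               *db;
    leveldb_options_t       *opts;
    leveldb_writeoptions_t  *wopts;

    opts = leveldb_options_create();
    leveldb_options_set_create_if_missing(opts, 1);

    db = leveldb_open(opts, "/tmp/testdb", &err);
    if (err != NULL) {
        fprintf(stderr, "open: %s\n", err);
        return 1;
    }

    /* sync = 1: leveldb_put() does not return until the update has
     * been pushed to stable storage, as described in
     * doc/index.md#synchronous-writes. */
    wopts = leveldb_writeoptions_create();
    leveldb_writeoptions_set_sync(wopts, 1);

    leveldb_put(db, wopts, "key", 3, "value", 5, &err);
    if (err != NULL) {
        fprintf(stderr, "put: %s\n", err);
        return 1;
    }

    leveldb_writeoptions_destroy(wopts);
    leveldb_options_destroy(opts);
    leveldb_close(db);
    return 0;
}

With sync left at 0, leveldb_put() returns as soon as the write has been handed to the operating system, which is exactly the unbounded window I am talking about.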

Bitcoin Core varies these options depending on the operation mode (see src/validation.cpp, src/txdb.cpp, src/dbwrapper.cpp).

Oh, I forgot, Bitcoin is nonsense...

val
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
