nickva commented on issue #4459: URL: https://github.com/apache/couchdb/issues/4459#issuecomment-1462482448
We go through Erlang's wrapper, https://www.erlang.org/doc/man/file.html#rename-2, and we rely on POSIX rename atomicity guarantees during compaction file swapping. However, at the Erlang API level there is no mention that the API provides any rename atomicity. We do know that on unix-y systems it dispatches to the rename(2) system call, as evidenced [here](https://github.com/erlang/otp/blob/master/erts/emulator/nifs/unix/unix_prim_file.c#L943). It of course also matters that the file system itself has POSIX semantics. For Windows it's not clear that atomicity holds, based on [win_prim_file.c](https://github.com/erlang/otp/blob/master/erts/emulator/nifs/win32/win_prim_file.c#L1300-L1360):

```c
/* This is pretty iffy; the public documentation says that the
 * operation may EACCES on some systems when either file is open,
 * which gives us room to use MOVEFILE_REPLACE_EXISTING and be done
 * with it, but the old implementation simulated Unix semantics and
 * there's a lot of code that relies on that.
 *
 * The simulation renames the destination to a scratch name to get
 * around the fact that it's impossible to open (and by extension
 * rename) a file that's been deleted while open. It has a few
 * drawbacks though;
 *
 * 1) It's not atomic as there's a small window where there's no
 *    file at all on the destination path.
 * 2) It will confuse applications that subscribe to folder
 *    changes.
 * 3) It will fail if we lack general permission to write in the
 *    same folder.
 */
```

So, at least as far as Windows usage is concerned, we probably should not recommend or encourage running it in production, just like we don't for network-mounted file systems. It's great for testing and trying things out though, of course.
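For context, here is a minimal sketch of the write-then-rename swap pattern whose safety hinges on rename(2) being atomic. It is not CouchDB's actual compaction code; the function name and the `.compact` suffix are just illustrative:

```erlang
%% Minimal sketch: swap a freshly written compacted file over the live
%% database file. Assumes the compacted data has already been written
%% to DbPath ++ ".compact".
swap_compacted(DbPath) ->
    CompactPath = DbPath ++ ".compact",
    %% Flush the compacted file to stable storage before the swap.
    {ok, Fd} = file:open(CompactPath, [read, write, raw, binary]),
    ok = file:sync(Fd),
    ok = file:close(Fd),
    %% On a POSIX-compliant file system this replaces DbPath atomically:
    %% readers see either the old file or the new one, never neither.
    %% On Windows the OTP simulation leaves a window with no file present
    %% at the destination path, per the win_prim_file.c comment above.
    ok = file:rename(CompactPath, DbPath).
```

The whole point of the pattern is that readers opening the database path never observe a missing or half-written file, which is exactly the guarantee the Windows simulation does not provide.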
