Am 22.07.2012 17:21, schrieb Alan McKinnon:
> On Sun, 22 Jul 2012 10:59:46 -0400
> Michael Mol <[email protected]> wrote:
> 
>> On Sun, Jul 22, 2012 at 9:53 AM, Florian Philipp
>> <[email protected]> wrote:
>>> Hi list!
>>>
>>> This is more of a general POSIX question, but I guess this list gives
>>> me the best chance of getting a definitive answer.
>>>
>>> If I want to replace a file with another file without removing the
>>> first one and without having a moment in time at which the file
>>> name does not exist, I can use the following sequence:
>>>
>>> # swap $file1 with $file2 on the same file system
>>> dir=$(dirname "$file1")
>>> tmp=$(mktemp -u -p "$dir") # unused name inside $dir [1]
>>> ln "$file1" "$tmp"
>>> mv "$file2" "$file1"
>>> mv "$tmp" "$file2"
>>>
>>> This works because mv does not affect the extra hard link to $file1's
>>> inode, and a rename() within a single file system is atomic. This is
>>> a handy
>>> procedure when you have a background process which occasionally
>>> opens $file1 and you don't want it to fail just because of bad
>>> timing.
>>>
>>> Now my question is: How can I do a similar thing for a directory? I
>>> cannot usually create hardlinks on directories (are there file
>>> systems where this actually works?) and I cannot use mv to
>>> overwrite one directory with another.
>>>
>>> The only idea I currently have is to create a level of indirection
>>> via symlinks and then atomically overwrite the symlink. Is there
>>> any other way?
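>>>
>>> To illustrate, something like the following is what I have in mind
>>> (untested sketch, all names made up; "data" is a symlink that the
>>> applications resolve on every open):
>>>
>>> # currently: data -> data.v1 ; data.v2 is the prepared replacement
>>> ln -s data.v2 data.tmp.$$   # new symlink under a temporary name
>>> mv -T data.tmp.$$ data      # rename(2) over the old symlink: atomic
>>> # data.v1 stays around and can be swapped back the same way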
>>
>> I'd be very, very wary of doing something like this without knowing
>> exactly what programs might be accessing files inside the folder
>> you're swapping out. In order to avoid a race where some poor process
>> winds up with open file handles to some content in both your old and
>> new folders, you'd really need a way to:
>>
>> 1) lock the folder so no programs can gain new handles on it or any
>> file or folder inside
>> 2) wait until all other open file handles to the folder and its
>> contents are closed
>> 3) swap out the folder
>> 4) unlock
>>
>> (1) might be doable with flock() on a parent directory.
>> (2) you'll need to use fuser to find the processes which have open
>> handles and get them to release them.
>> (3) mv a a_tmp; mv b a; mv a_tmp b
>> (4) flock -u
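>>
>> A rough sketch of (1)-(4) in shell (untested; assumes util-linux
>> flock(1) and that every program touching the tree honors the same
>> advisory lock -- directory names a and b are made up):
>>
>> exec 9< "$(dirname "$a")"       # open the parent directory
>> flock -x 9                      # (1) take an exclusive advisory lock
>> while fuser -s "$a" "$a"/* 2>/dev/null; do
>>     sleep 1                     # (2) wait for open handles to go away
>> done
>> mv "$a" "${a}_tmp"              # (3) swap the two directories
>> mv "$b" "$a"
>> mv "${a}_tmp" "$b"
>> flock -u 9                      # (4) release the lock
>> exec 9<&-                       # close the parent directory again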
>>
> 
> 
> I'd argue that what the OP wants is fundamentally impossible. To my
> knowledge there is no directory locking mechanism that is guaranteed to
> work in all cases; it would have to be kernel-enforced to do that, and
> I've personally never heard of such a mechanism.
> 
> The fact is that directory operations are not atomic with respect to
> the directory contents, and the OP needs to find a different way to
> solve his problem.
> 
> As for the OP's question re: hard-linking directories, Linux never
> allows this as a user action, because it can cause unsolvable loops
> when traversing directory trees. Most other Unixes have the same rule
> for the same reason. Some do allow it, but I forget which ones and I'm
> too lazy on a Sunday afternoon to Google it :-)
> 
> The kernel is of course perfectly able to hard-link directories; it has
> to in order to create . and .. at all, but that code only runs as part
> of what mkdir does. In all other cases the kernel is hard-coded to just
> refuse to do it.
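>
> You can see the refusal from the shell (paths made up):
>
> $ mkdir /tmp/somedir
> $ ln /tmp/somedir /tmp/somedir.link      # coreutils ln refuses outright
> $ sudo ln -d /tmp/somedir /tmp/somedir.link   # link(2) itself fails with EPERM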
> 

Good points. Thanks, Michael and Alan!

I guess my approach would only work when the application consistently
used openat() and friends instead of open().  Similarly, Michael's
approach only works when the application honors advisory locks.

I guess I can live with that and add both to my toolbox for cases where
I can guarantee one or the other.

Regards,
Florian Philipp
