Hi *,

[...]
>> - The partition has to be mounted on boot.
>> - It has to be unmounted before the nightly copy job, so that an fsck
>>   can be performed.
>> - After that it has to be mounted read only, so that during the copy
>>   job no other machine can write to it.
>> - After finishing the copy job, the partition has to be remounted read
>>   write again.
>>
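For context, the nightly job boils down to roughly this sequence (shown
with plain mount/umount for clarity; in the real setup the partition is
handled by a systemd mount unit, which is where the trouble starts):

    # stop all writers and check the filesystem
    umount /var/backup
    fsck -y /dev/sdf1

    # make it readable again, but not writable, during the copy
    mount -o ro /dev/sdf1 /var/backup

    # ... nightly copy job runs here ...

    # back to normal read-write operation
    mount -o remount,rw /var/backup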

>Isn't that commonly done using LVM? If it were on a logical volume, you
>could fsfreeze /var/backup (to suspend writes during snapshotting), make a
>LVM snapshot, thaw, mount the read-only snapshot elsewhere and rsync off it.

I have never used LVM, and this system does not use LVM partitioning.
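If I understand the suggestion correctly, it would amount to something
like the following - untested on my side, volume group, snapshot and
target names made up:

    # suspend writes and take a snapshot of the backup volume
    fsfreeze -f /var/backup
    lvcreate -s -n backup-snap -L 1G /dev/vg0/backup
    fsfreeze -u /var/backup

    # copy from the read-only snapshot, then drop it again
    mount -o ro /dev/vg0/backup-snap /mnt/backup-snap
    rsync -a /mnt/backup-snap/ /some/destination/
    umount /mnt/backup-snap
    lvremove -f /dev/vg0/backup-snap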

[...]
>> jobs fails with messages like "Specified filename /dev/sdf1 has no
>> mountpoint." when *stopping* var-backup.mount.
>>

>Can you be more specific about the messages you get? The closest I found to
>yours was "Specified filename * is not a mountpoint" from the `fuser`
>command, which is not called by systemd nor umount as far as I could grep.

"Specified filename /dev/sdf1 has no mountpoint." is *exactly* what I
get when calling "systemctl stop var-backup.mount" - but only
occasionally as I wrote.
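In case it helps with pinpointing where the message comes from, this is
the invocation, plus the obvious places to look afterwards (unit name
as in my setup):

    systemctl stop var-backup.mount
    systemctl status var-backup.mount
    journalctl -b -u var-backup.mount | tail -n 20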

>(I would just use `umount /var/backup`, however.)

I can't do that as long as the mount unit is under systemd control:
if I unmount it manually, systemd remounts it on its own a few seconds later.
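If it is useful, I can check what pulls the unit back in after a manual
umount - e.g. whether an automount unit or another unit's dependency
re-triggers it:

    # show what references or triggers the mount unit
    systemctl show -p TriggeredBy,WantedBy,RequiredBy,BoundBy var-backup.mount
    systemctl list-dependencies --reverse var-backup.mount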

Bye.
Michael.
-- 
Michael Hirmke
