So, it turns out that was exactly the problem.

It seems that a mounted sshfs "simulates" hardlink support by creating a
copy of the file when you issue the creation of a hardlink.
I did some tests: when I created a hardlink to a file (both source and link
within the sshfs mount), an identical file was created with a DIFFERENT inode.
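This is easy to check on any mount: on a real filesystem a hardlink shares
the original file's inode, while on sshfs (without disable_hardlink) the new
name shows up with a different inode because it is really a copy. A minimal
sketch of the test, using a throwaway temp directory instead of a real sshfs
mount:

```shell
# Create a file and a hardlink to it, then compare inode numbers.
# On a local filesystem the inodes match; on a default sshfs mount they don't.
dir=$(mktemp -d)
echo "test" > "$dir/orig"
ln "$dir/orig" "$dir/link"
orig_inode=$(stat -c %i "$dir/orig")   # GNU stat; use `stat -f %i` on BSD/macOS
link_inode=$(stat -c %i "$dir/link")
if [ "$orig_inode" = "$link_inode" ]; then
    echo "real hardlink (same inode)"
else
    echo "copy, not a hardlink (different inodes)"
fi
rm -rf "$dir"
```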

I did some research and found out that this is the default behaviour, but
it can be disabled by adding:
disable_hardlink
to the mount options.
With this option, the filesystem returns an error if you try to create a
hardlink on it.
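For example, on a mount with disable_hardlink set, the same ln call that
would previously have silently produced a copy should now fail outright (the
paths below are placeholder sshfs paths, and the exact error text may vary):

```
ln /mnt/sshfs/orig /mnt/sshfs/link
# expected to fail with an error along the lines of "Operation not permitted"
```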

So, in the end I was able to mount the sshfs and make it work (aside from
performance issues) with BackupPC using this fstab entry (a single line,
wrapped here for readability):

sshfs#<sshuser@sshhost> <mountpoint> disable_hardlink,idmap=user,allow_other,IdentityFile=<privatekey>,port=<tcpport>,ServerAliveInterval=15,reconnect,_netdev,uid=108,gid=114,cache=yes,kernel_cache,compression=no,Ciphers=aes128-...@openssh.com 0 0

where uid and gid are the backuppc user's user id and group id.
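For testing before committing to fstab, the equivalent one-off mount can be
done from the command line, with the fstab options passed via -o (all
angle-bracket values are placeholders, as above):

```
sshfs <sshuser@sshhost>: <mountpoint> \
    -o disable_hardlink,idmap=user,allow_other \
    -o IdentityFile=<privatekey>,port=<tcpport> \
    -o ServerAliveInterval=15,reconnect,uid=108,gid=114 \
    -o cache=yes,kernel_cache,compression=no
```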

The ssh options ServerAliveInterval=15 and ServerAliveCountMax (not present
here) can be added/tuned to improve reconnect time in case of a network
problem.
The options cache=yes,kernel_cache,compression=no were used to improve
performance, and the cipher-forcing option was used to pick the least
CPU-intensive cipher supported by both client and server (cf. the nmap
ssh2-enum-algos script).
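To see which ciphers the server actually offers (and so pick the cheapest
one in common), nmap's ssh2-enum-algos script can be run against the ssh
port; host and port below are placeholders:

```
nmap --script ssh2-enum-algos -p <tcpport> <sshhost>
```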

A word about performance:
after a promising 10 MB/s for the first two hosts, backup speed dropped to
an average of 0.5 MB/s.
I don't know whether access to the sshfs was throttled by my cloud provider
due to the intense use. Anyway, I think this could still be a feasible
solution under some limitations.
CPU usage was also pretty high (probably due to the copy fallback for the
missing hardlink support), but I was using a pretty entry-level VM template.

Also (though this may be peculiar to my setup), it seems that sshfs doesn't
handle DHCP lease renewals well.

I hope this can be useful to someone.

SB




On Sun, Sep 4, 2022 at 9:24 PM Sandro Bordacchini <
sandro.bordacch...@gmail.com> wrote:

> Thank you for your answer.
> I am aware that's not an optimal solution, K.
> From what I can understand here:
> https://backuppc.github.io/backuppc/BackupPC.html#What-type-of-storage-space-do-I-need
>
> Starting with 4.0.0, BackupPC no longer uses hardlinks for storage of
> deduplicated files. However, hardlinks are still used temporarily in a few
> places for doing atomic renames, *with a fallback doing a file copy if
> the hardlink fails*, and files are moved (renamed) across various paths
> that turn into expensive file copies if they span multiple file systems.
>
> So there is a "fallback" if the filesystem does not support hardlinks.
> Or maybe I got that wrong (I am not a native English speaker)...
>
>
>
>> ---------- Forwarded message ----------
>> From: backu...@kosowsky.org
>> To: "General list for user discussion, questions and support" <
>> backuppc-users@lists.sourceforge.net>
>> Cc:
>> Bcc:
>> Date: Fri, 2 Sep 2022 14:31:18 -0400
>> Subject: Re: [BackupPC-users] errors on network-mounted pool filesystem
>> Are you sure that BackupPC is compatible with sshfs?
>> While v4 drastically reduces number of hard links by eliminating the
>> hard-link pool scheme, it still uses hard links for *temporary* atomic
>> renames.
>> See: http://backuppc.sourceforge.net/BackupPC-4.2.1.html
>>
>> Sshfs may have other limitations as it is not a full-fledged
>> filesystem...
>>
>>
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
