Hello,

I'm currently investigating the possibility of synchronizing two s3ql 
filesystems without mounting them. Unfortunately, I don't see the 
expected results, but this might be because I missed important points in 
the design of s3ql ;-)

What I'm trying to achieve:


   - NAS1: contains the s3ql filesystem (the storage directory is 
   exported through sshfs and s3ql-mounted by a Linux server - but not 
   24/7)


   - NAS2: contains a "scp -pr" of the NAS1 contents as a baseline.


I use the following command on NAS1 to rsync the deltas made by the 
Linux server; the s3ql filesystem was (of course) unmounted properly 
beforehand. The delta should be around 2GB. (Add --dry-run to test.)

NAS1#> rsync -rtvz --progress --checksum --inplace --no-whole-file \
          -e "ssh" /path/myfsdata user@NAS2:/path/myfsdata

--inplace and --no-whole-file are meant to allow "intelligent" delta 
synchronisation, as some of the metadata is large, whilst --checksum 
preserves 100% integrity.
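
Before letting the full run finish, a dry run with --itemize-changes 
should show *why* each file gets selected - whether rsync sees a real 
checksum mismatch or merely differing attributes. A minimal sketch, 
reusing the paths from above:

NAS1#> rsync -rtz --checksum --itemize-changes --dry-run -e "ssh" \
          /path/myfsdata user@NAS2:/path/myfsdata
# a 'c' in the third column of the output (e.g. ">fcst......") marks a
# genuine content difference; a line like ".f..t......" means only the
# mtime would be updated, without any data transfer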

What am I seeing now? Rsync wants to sync *everything*:
...
myfsdata/s3ql_data_/169/s3ql_data_16999
myfsdata/s3ql_data_/169/s3ql_data_169990
myfsdata/s3ql_data_/169/s3ql_data_169991
myfsdata/s3ql_data_/169/s3ql_data_169992
myfsdata/s3ql_data_/169/s3ql_data_169993
...
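
One rsync detail that could explain this by itself: without a trailing 
slash on the source, rsync re-creates "myfsdata" *inside* the 
destination directory. If the NAS2 baseline lives directly at 
/path/myfsdata, the target tree does not exist from rsync's point of 
view and everything is copied afresh. A quick spot check of one of the 
objects listed above on both sides would tell whether the contents 
actually differ (adjust the NAS2 path to wherever the baseline really 
lives):

NAS1#> md5sum /path/myfsdata/s3ql_data_/169/s3ql_data_16999
NAS1#> ssh user@NAS2 md5sum /path/myfsdata/s3ql_data_/169/s3ql_data_16999
# identical checksums would mean the transfers are triggered by layout
# or attributes rather than by file content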

Bottom line:

   1. Is there any reason why adding 2GB of data (to a total of 130GB) 
   causes all these updates? I can't tell (yet) how much rsync wants to 
   sync in total, as the command above has already been running for 3 
   hours at 30% CPU usage.
   2. Is there a "smarter" way to synchronise *offline* s3ql file 
   systems (so *without* having to mount them)? I could do the remote 
   mounting through sshfs; however, running rsync over sshfs would still 
   transfer everything back and forth for the checksums - my files must 
   be bit-checked. (One idea is sketched below.)
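
Regarding question 2, one idea - based on my (possibly wrong!) 
understanding that s3ql never rewrites an existing s3ql_data_N object in 
place, but only creates new objects and deletes old ones. If that holds, 
the data objects can be synced by name alone, and only the remaining 
small objects need the expensive checksum pass. A sketch of what I have 
in mind:

# pass 1: data objects; IF they are write-once (assumption!), syncing by
# name is enough, so no --checksum; also no -z, since s3ql already
# compresses the object contents. Note the trailing slashes: sync the
# *contents* of the directories.
NAS1#> rsync -rtv --delete -e "ssh" /path/myfsdata/s3ql_data_/ \
          user@NAS2:/path/myfsdata/s3ql_data_/
# pass 2: everything else (metadata objects etc.), small enough to
# checksum in full
NAS1#> rsync -rtv --checksum --inplace -e "ssh" --exclude='s3ql_data_/' \
          /path/myfsdata/ user@NAS2:/path/myfsdata/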
   

Thank you!
