Hi!

I want to use s3ql as a backup solution for a bunch of (bare) git 
repositories and a Seafile cloud server, which stores its data chunk-wise 
as individual files in the filesystem. 

In order not to make any mistakes, I just wanted to ask for some "best 
practices" for using s3ql with an S3 backend (and a slow network 
connection):

1) I read that I shouldn't run s3ql more than once at a time. Does this 
only apply when using the same storage URL? May I run it twice on the 
same computer with e.g. two different S3 target buckets? Is there any 
advantage (in speed or size) to using several smaller s3ql filesystems 
rather than one large one?
2) May (should) I use an s3ql mount point directly as the data directory 
for my Seafile storage server and git repositories, or should I rather 
rsync the content into s3ql with a cronjob (something like the first 
sketch after this list)? 
3) What happens to an s3ql filesystem if either the network connection or 
the backend service (S3) becomes unavailable? Is there a full, 
offline-available copy of the data? Can I still access it?
4) How can I realize an additional backup of (the current state of) an 
s3ql file system, e.g. to an external drive? Is locally cached content 
discarded once it has been uploaded, so that I would re-download all 
previously uploaded content if I just ran a tool like tar on the mount 
point? Should I rather back up the data before copying it into s3ql (see 
the second sketch below)?
5) Will interrupted uploads (e.g. after a network outage or a local 
system crash) be resumed automatically on the next mount?
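
To make question 2 concrete, below is roughly the kind of cron-driven 
sync I have in mind. It is only a sketch: all paths are placeholders for 
my setup, and I am assuming I understood s3qlcp and "s3qlctrl flushcache" 
correctly from the documentation.

#!/usr/bin/env python3
"""Cron-driven backup into an s3ql mount (sketch, hypothetical paths)."""
import datetime
import os
import subprocess

MOUNTPOINT = "/mnt/s3ql"                      # hypothetical s3ql mount point
SOURCES = ["/srv/git", "/srv/seafile-data"]   # data directories to back up

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    today = datetime.date.today().isoformat()
    os.makedirs(f"{MOUNTPOINT}/current", exist_ok=True)
    # 1) rsync the live data into a working copy inside the s3ql filesystem
    for src in SOURCES:
        name = os.path.basename(src)
        run(["rsync", "-a", "--delete",
             f"{src}/", f"{MOUNTPOINT}/current/{name}/"])
    # 2) turn the working copy into a cheap copy-on-write snapshot
    run(["s3qlcp", f"{MOUNTPOINT}/current", f"{MOUNTPOINT}/{today}"])
    # 3) push any dirty cache blocks to the backend
    run(["s3qlctrl", "flushcache", MOUNTPOINT])

if __name__ == "__main__":
    main()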
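
And for question 4, the alternative I am considering would be to archive 
the original data directly to the external drive instead of reading it 
back out through the s3ql mount (again just a sketch with made-up paths):

#!/usr/bin/env python3
"""Archive the source data straight to an external drive (sketch).
Avoids tar-ing the s3ql mount point, which (if I understand the cache
correctly) might force a re-download of already-uploaded blocks."""
import datetime
import subprocess

SOURCES = ["/srv/git", "/srv/seafile-data"]   # same hypothetical data as above
EXTERNAL = "/media/external/backups"          # hypothetical external drive

archive = f"{EXTERNAL}/data-{datetime.date.today().isoformat()}.tar.gz"
subprocess.run(["tar", "-czf", archive] + SOURCES, check=True)
print("wrote", archive)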

Thank you in advance!
