I just recently discovered S3QL, which is a godsend: I've been looking for 
an encrypted cloud backup storage system that I can easily access. I had 
messed with duplicity, and with encfs on top of FUSE filesystems that can 
mount Google Storage and S3 buckets to host the underlying files, but none 
seemed as full-featured (or as easy) as S3QL.

That being said, I have a few questions for those of you who actually use 
S3QL to store large amounts of both actively accessed and infrequently 
accessed (e.g. backup) data.

I have a set of data I call the "freezer": data I don't intend to access 
unless a catastrophe happens and I lose my physical local optical and hard 
drive backups AND still need the old data. That's unlikely, so I'd like to 
pay the lowest possible price, which in this case means Google's "Nearline" 
(it would be Amazon's Glacier, but S3QL doesn't seem to support that). So, 
question #1:

1) How does S3QL work with Google Cloud Storage's "Nearline" class? Does it 
write all the data once and then leave it alone - that is, without needing 
to read any of it back to an extent that would rack up considerable 
charges? In general I'm unfamiliar with the billing model for reading and 
writing data, and with how S3QL behaves both for data written once and then 
forgotten (apart from listing (ls) the files now and then, e.g. before a 
backup) and for data accessed frequently. What PUT/GET request usage should 
I expect for this amount of data?
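
To make that concrete, here's my back-of-envelope attempt at the request 
count for the initial upload. I'm assuming S3QL's default maximum object 
size of 10 MiB (the --max-obj-size option at mkfs time - please correct me 
if the default is different), and I'm ignoring deduplication and 
compression, which should only lower the real number:

    # Worst-case PUT requests for the initial 1 TiB upload, assuming
    # S3QL splits the data into 10 MiB backend objects (my assumption
    # about mkfs.s3ql's --max-obj-size default) and each object costs
    # one PUT request.
    TIB = 1024**4
    MAX_OBJ = 10 * 1024**2            # assumed 10 MiB per backend object

    puts = -(-TIB // MAX_OBJ)         # ceiling division
    print(puts)                       # ~104,858 PUTs, worst case

If I understand the docs correctly, directory listings are served from 
S3QL's locally cached metadata, so an occasional ls shouldn't generate 
backend requests at all; GETs would only come from actually reading file 
data (or refreshing the metadata at mount time). Is that right?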


2) I plan to write approximately 1 TB of data to Nearline once, and then 
approximately 100 GB of data that will see more frequent access. On top of 
that, I'll be uploading anywhere from 3 GB to 50 GB or more each month 
(some of it going to Nearline) for backups and for the frequently accessed 
set. How much can I expect to pay S3 (and Nearline, where I use it) for 
these accesses? I know what the storage itself costs, but what about the 
PUT and GET requests and the data transfer?
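
For concreteness, here's the kind of arithmetic I'm trying to sanity-check. 
Every price below is a placeholder that I'd need to replace with the 
current numbers from the Nearline and S3 pricing pages; I'm only checking 
that I understand the shape of the bill:

    # Back-of-envelope cost sketch. All $ figures are placeholders --
    # substitute current Nearline/S3 rates before trusting the output.
    OBJ_MIB = 10                          # assumed S3QL object size (MiB)

    def puts_for(gib):
        """Worst-case PUT requests to upload `gib` GiB of new data."""
        return gib * 1024 // OBJ_MIB

    # One-time uploads
    nearline_puts = puts_for(1024)        # 1 TiB   -> ~104,857 PUTs
    s3_puts = puts_for(100)               # 100 GiB -> ~10,240 PUTs

    # Ongoing storage, at placeholder rates per GB-month
    nearline_storage = 1024 * 0.01        # ~$10.24/month at $0.01/GB
    s3_storage = 100 * 0.03               # ~$3.00/month at $0.03/GB

    # Worst-case monthly upload churn of 50 GiB
    monthly_puts = puts_for(50)           # ~5,120 PUTs/month

    print(nearline_puts, s3_puts, nearline_storage,
          s3_storage, monthly_puts)

Even at something like $0.005 per 1,000 PUTs (again, a placeholder), 
~105,000 requests would come to roughly $0.50, which suggests the request 
charges are noise next to storage and egress - but I'd love confirmation 
from people who actually run this.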


Basically, what I'm asking is this: in a mixed Google Cloud Storage and 
Amazon S3 setup - long-term, infrequently accessed backup storage in Google 
Storage "Nearline" buckets, and the more frequently accessed data and 
backups in Amazon S3 - what can I expect to spend to store 1 TB in 
Nearline, written once (including all the PUTs, etc.), plus a one-time PUT 
of 100 GB of objects to S3 that are then accessed frequently? I'm not 
really familiar with the terminology or with what constitutes a block, etc.
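
To pin down the terminology I'm fuzzy on, here's my mental model of what a 
"block" is, based on my reading of the S3QL docs: each file is split into 
blocks of at most the maximum object size chosen at mkfs time, each 
(deduplicated, compressed, encrypted) block becomes one backend object, and 
each object upload or download is one PUT or GET. A sketch of my 
understanding, not S3QL's actual code:

    # How I picture S3QL's file -> block -> backend-object mapping
    # (my assumption, not the project's real implementation).
    def blocks(file_size, max_obj_size=10 * 1024**2):
        """Byte ranges that would each become one backend object."""
        return [(off, min(off + max_obj_size, file_size))
                for off in range(0, file_size, max_obj_size)]

    print(len(blocks(25 * 1024**2)))   # a 25 MiB file -> 3 blocks, 3 PUTs

If that picture is wrong, I'd appreciate a correction, since it's the basis 
of all the request-count math above.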

Thanks, and I'm sorry for the ambiguous and cryptic questions,

Brandon
