On Fri, March 21, 2014 16:12, Chris wrote:
> On Fri, Mar 21, 2014 at 12:46 PM, S. Dale Morrey <[email protected]> wrote:
>
>> Depends on the amount of data, but I've had great luck with AWS s3fs and
>> Glacier for dealing with backups.
>
> I'm curious to know how well this works (economically and practically) at
> various scales. Would you be willing to share some (rough) details about
> your cloud-based backups? For example: How large is your baseline dataset?
> How large are the daily incremental snapshots? What prices do you pay for
> that amount of cloud-based storage? Can the backend storage service
> transfer data (in & out) as fast as your internet connection allows?
I am using block-based persistent storage at Rackspace. It isn't exactly "cloud"-like, but it certainly does the trick. Nice performance, though you certainly pay for it. The price is $0.12/GB/mo with a 100 GB minimum.

I've looked into Glacier, and I have several solution-specific issues with that approach. Restoration can be a real challenge and/or expensive. If a critical service were offline waiting for a restoration, it might be difficult to pony up the cash to get it back in a reasonable time frame. I think Glacier would be perfect for large bits that aren't critical: photos, video, etc. I don't think I would put database backups in there for anything other than very long-term storage to add liability protection, etc.

Turning a file system into S3 objects seems like a potentially massive migraine waiting to happen if you may want to restore order to the universe in the future. If you are okay with chaos (and many applications would be), then go forth and use it.

-Ryan

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
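A quick back-of-the-envelope sketch of the block storage pricing Ryan quotes ($0.12/GB/mo with a 100 GB minimum); the dataset sizes below are made-up examples for illustration, not figures from the thread:

    # Rough monthly cost of block storage at $0.12/GB/mo,
    # billed against a 100 GB minimum. Sizes are hypothetical.
    RATE_PER_GB = 0.12
    MINIMUM_GB = 100

    for size_gb in (50, 100, 500, 2000):
        billed_gb = max(size_gb, MINIMUM_GB)
        print(f"{size_gb:>5} GB dataset -> ${billed_gb * RATE_PER_GB:,.2f}/mo")

So anything under the minimum still costs $12/mo, and a 2 TB dataset runs about $240/mo at that rate.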

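For the s3fs-plus-Glacier approach Dale describes, the usual mechanism is an S3 lifecycle rule that transitions objects to Glacier after a certain age. A minimal sketch using boto3; the bucket name, prefix, and 30-day threshold are assumptions for illustration, not details from the thread:

    import boto3

    s3 = boto3.client("s3")

    # Transition everything under backups/ to Glacier after 30 days.
    # Getting data back later requires a per-object restore request and a
    # wait of several hours, which is the pain point Ryan raises above.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-cold-backups",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )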