On 3/4/2011 1:59 PM, Dan Pritts wrote:
>
> Obviously this should not be a *goal* but it has a cost, and the benefit
> should be well understood too.

I know it doesn't quite fit into BackupPC's simple scheme of things, but 
I can't help thinking that there should be some way to use one of the 
big clustered nosql-type databases to hold the data.  Something that 
automatically saves redundant copies on multiple nodes and lets you 
scale up simply by adding nodes to the cluster, without any particular 
OS- or filesystem-level setup.  I like Riak in particular for the way 
nodes mostly manage themselves, and someone has already built a 'large 
object' layer called Luwak that chunks up a stream and stores each chunk 
with its hash as the key (giving automatic de-duplication of blocks 
without needing to compare anything). https://github.com/basho/luwak
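
To illustrate the idea, here's a rough Python sketch of hash-keyed chunk 
storage -- not Luwak's actual implementation, and the in-memory dict just 
stands in for whatever key/value store you'd really use:

    import hashlib

    CHUNK_SIZE = 64 * 1024   # arbitrary chunk size for this example

    store = {}  # hash -> chunk bytes; duplicate chunks collapse onto one key

    def put_stream(stream):
        """Split a stream into chunks, store each under its SHA-1 hash,
        and return the ordered list of chunk keys (the 'file manifest')."""
        keys = []
        while True:
            chunk = stream.read(CHUNK_SIZE)
            if not chunk:
                break
            key = hashlib.sha1(chunk).hexdigest()
            store[key] = chunk   # writing an existing key changes nothing: dedup
            keys.append(key)
        return keys

    def get_stream(keys):
        """Reassemble the original data from its list of chunk keys."""
        return b"".join(store[k] for k in keys)

    # e.g.: keys = put_stream(io.BytesIO(data)); data == get_stream(keys)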

The show-stopper for this is that there's no way to delete anything 
yet...  But a BackupPC-specific variation might be able to track the 
blocks that are currently active.  Besides scaling the storage space, it 
would be nice to be able to run backups from different nodes into the 
common storage.
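
Something like this hypothetical sweep is what I mean by tracking the 
active blocks (again just a sketch against the dict above; a real 
version would need the store's own delete support, which Luwak doesn't 
have yet, plus some care about backups running concurrently):

    def sweep(store, manifests):
        """Delete any chunk not referenced by a retained backup.
        'manifests' is an iterable of chunk-key lists, one list per
        backup we still want to keep."""
        live = set()
        for keys in manifests:
            live.update(keys)
        for key in list(store):
            if key not in live:
                del store[key]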

-- 
   Les Mikesell
    lesmikes...@gmail.com
