Les Mikesell wrote:

>Of course there are other ways to do things, but they aren't
>necessarily going to be better.  I'm not convinced that there
>is much you can do to maintain any kind of structured order
>over a long term when you are adding files from multiple
>sources simultaneously and expiring them more or less randomly.
>  
>
It's not really random!  The data expire because a backup of a host
expires.  As I said, Dirvish's performance was more than an order of
magnitude better.  It uses cross-links, but it keeps the original tree
structure for each host.  To me this shows that there is a better way to
do things, and Dave's proposal seems right on target.
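
Roughly, the per-host idea works like this (a minimal Python sketch of the
hard-link approach, not Dirvish's actual code; the paths and helper names
are mine):

    import filecmp
    import os
    import shutil

    def make_snapshot(source, prev_snap, new_snap):
        """Build new_snap from source, hard-linking unchanged files from prev_snap."""
        for dirpath, _dirnames, filenames in os.walk(source):
            rel = os.path.relpath(dirpath, source)
            os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                old = os.path.join(prev_snap, rel, name)
                new = os.path.join(new_snap, rel, name)
                if os.path.isfile(old) and filecmp.cmp(src, old, shallow=False):
                    os.link(old, new)       # unchanged: share the inode with the old tree
                else:
                    shutil.copy2(src, new)  # new or changed: store a fresh copy

    def expire_snapshot(snap):
        """Expiring a host's backup is just removing its dated tree."""
        shutil.rmtree(snap)

Expiring a host's backup then only touches that one dated tree; the shared
inodes go away on their own when their last link is removed.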

>You might make it faster to traverse the directory of one
>host or the pool, but in my usage those are rare operations.
>You could also make it easier to do normal file-based copies
>of an existing archive/pool, but there are other approaches
>to this too.
>  
>
Maybe, but none is as simple, and with the DB managing the metadata one
may be able to keep the transparency without much cost.  Filename mangling
isn't needed anymore.
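
Something like this is what I have in mind (a sketch only, assuming SQLite
and an MD5-keyed pool; the table layout and the record_file() helper are my
own invention, not Dave's actual proposal):

    import hashlib
    import os
    import sqlite3

    conn = sqlite3.connect("backups.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS file_meta (
            host    TEXT,
            backup  INTEGER,
            path    TEXT,      -- original, unmangled client path
            digest  TEXT,      -- key of the pooled contents
            mode    INTEGER,
            uid     INTEGER,
            gid     INTEGER,
            mtime   INTEGER,
            PRIMARY KEY (host, backup, path)
        )
    """)

    def record_file(host, backup, path, pool_dir):
        """Pool one file's contents by digest and store its metadata in the DB."""
        with open(path, "rb") as fh:
            data = fh.read()
        digest = hashlib.md5(data).hexdigest()
        os.makedirs(pool_dir, exist_ok=True)
        pooled = os.path.join(pool_dir, digest)
        if not os.path.exists(pooled):        # new content goes into the pool once
            with open(pooled, "wb") as out:
                out.write(data)
        st = os.stat(path)
        conn.execute(
            "INSERT OR REPLACE INTO file_meta VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            (host, backup, path, digest, st.st_mode, st.st_uid, st.st_gid,
             int(st.st_mtime)),
        )
        conn.commit()

The on-disk pool file needs no encoded name at all; the original path,
ownership and permissions live in the table, so the per-host view stays
transparent.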


