Robin Lee Powell wrote:
>> Hardlinking is an atomic operation, tied to the inode of the
>> filesystem, so once established the target identity can't be
>> confused.  Symlinks are just re-evaluated as filenames when you
>> open them.
>
> Yes, I understand that.
>
>> So, if you created a symlink to a pooled filename, then the copy
>> in the pool directory is removed and re-created (likely,
>
> *Likely*?
>
> If that's likely, then backuppc needs to change hash algorithms,
> because what you're describing requires a hash collision.
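It does, and that is not an exotic case to plan around.  As a toy
illustration (this is *not* BackupPC's real pooling code; the digest is
deliberately cut down to one byte so the effect shows up within a few
files), here is what any scheme that names pool files purely by a
fixed-size hash runs into:

#!/usr/bin/env python3
# Toy sketch only; not BackupPC's pool code.  Naming "pool files" by a
# fixed-size digest must eventually hand two different contents the
# same name; a one-byte digest just makes that happen almost at once.
import hashlib
import os

def tiny_digest(data: bytes) -> str:
    """First byte of an MD5 hex digest: only 256 possible pool names."""
    return hashlib.md5(data).hexdigest()[:2]

pool = {}                      # pool name -> file contents
for i in range(1000):
    contents = os.urandom(32)  # pretend this is the i-th distinct file
    name = tiny_digest(contents)
    if name in pool and pool[name] != contents:
        print(f"collision on pool name {name} after {i + 1} files")
        break
    pool[name] = contents

With a real, much wider digest it takes far longer, but "longer" is not
"never".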
Hash collisions will happen any time the hash is smaller than the thing
it is trying to uniquely identify.  (How many possible file contents
are there vs. how many hash values?)  You pick a tradeoff: the work of
handling collisions against the cost of longer hashes with fewer
collisions.

> Furthermore, keeping a list of what files point to what pool items
> shouldn't actually be that hard.

You also have to know how many references there are to each pool item.
That is, you pretty much duplicate the code of a filesystem without
gaining much.  And you can't let any of this change for the duration it
takes to complete your mirroring.

>> because you can't tell what symlinks need it), you'd end up with
>> filenames under the pc directory pointing to entirely wrong
>> contents.
>
> Again, that requires a hash collision.

And it is a given that it will happen.

>>> I'd even be willing to put some money forward if someone wants
>>> to code this feature; I'd rather not dig into backuppc's code if
>>> I can avoid it.
>
>> This isn't really possible at the file level.  I think the zfs
>> incremental send/receive might work.  In some cases it might work
>> to just run a copy of backuppc remotely, saving directly to an
>> encrypted filesystem instead of trying to mirror a copy.
>
> Again: not being able to reasonably mirror the backup system is a
> Real Problem; do you have any other ideas as to how to fix it?

I do it locally with raid1 mirroring and physically rotate the drives
offsite.  One set uses external firewire disks; a newer one uses a
trayless hotswap cage for SATA drives.

For the amount of data you have, though, why not just let a remote
instance of backuppc pick it all up directly?  Several of my target
hosts are remote, and other than taking a long time to get the initial
copy it is not a big problem.

Or, if you want a local copy too and don't want to burden the target
with two runs, just do a straight uncompressed rsync copy locally
(rough sketch below), then let your remote backuppc run against that to
save your compressed history on an encrypted filesystem.

-- 
Les Mikesell
[EMAIL PROTECTED]
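P.S. A rough sketch of that last suggestion, kept in Python just so it
fits in one self-contained file.  The host names, paths, and staging
root below are made-up placeholders (nothing here is a BackupPC
interface), and it assumes rsync plus ssh access to the targets:

#!/usr/bin/env python3
# Stage an uncompressed copy of each target locally, so the remote
# BackupPC instance can back up the staging tree instead of hitting
# the targets a second time.
import os
import subprocess

STAGING_ROOT = "/srv/stage"      # local, uncompressed copy lives here
TARGETS = {
    "www1": ["/etc", "/home", "/var/www"],
    "db1":  ["/etc", "/var/lib/pgsql"],
}

for host, paths in TARGETS.items():
    for path in paths:
        dest = os.path.join(STAGING_ROOT, host) + path
        os.makedirs(dest, exist_ok=True)
        # -a preserves permissions/times/ownership, --delete keeps the
        # stage an exact mirror; no compression, since the point is a
        # plain local copy for the remote backuppc run to read.
        subprocess.run(
            ["rsync", "-a", "--delete", f"{host}:{path}/", f"{dest}/"],
            check=True,
        )

From there the remote backuppc host just treats the staging box as an
ordinary target, and its compressed, pooled history lands on whatever
encrypted filesystem you give it.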