On Wed, 21 Dec 2005, Ben Escoto wrote:

> Unfortunately it uses way too much memory now---1GB to make this
> output.  It does hold the entire tree in memory, but 1k per file is
> more than I expected.

it looks like you're keeping data about every file in memory...  i hacked 
around that in mine by trimming the base (filename) component off every 
path before adding it to the hash tables, so there's only one hash key per 
directory instead of one per file, which is much more manageable.
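
something like this, roughly... note that the one "path size" record per 
input line here is just an assumption i'm making for illustration, not 
necessarily the format your script actually produces:

import os
import sys

# accumulate sizes per directory rather than per file: trim the base
# (filename) component so the hash ends up with one key per directory
dir_totals = {}

for line in sys.stdin:
    if not line.strip():
        continue
    path, size = line.rsplit(None, 1)   # assumed "path <size>" record
    d = os.path.dirname(path)
    dir_totals[d] = dir_totals.get(d, 0) + int(size)

for d in sorted(dir_totals):
    print("%s\t%d" % (d, dir_totals[d]))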

if you want a more general solution you might consider invoking an 
external sort(1)... sort(1) does an external merge sort, spilling to 
temporary files, so it can handle inputs as large as the free space in 
/tmp or $TMPDIR without consuming all of memory.  but it can be slower 
on smallish inputs because you end up formatting and re-parsing the 
strings multiple times.
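
here's a rough sketch of the sort(1) approach, again assuming the same 
hypothetical "path size" input... the formatting and re-parsing around 
the pipe is exactly the overhead i mentioned above:

import os
import subprocess
import sys

# stream "dirname<TAB>size" records through external sort(1); sort spills
# to temp files in $TMPDIR instead of holding everything in memory, and
# once the records are sorted all entries for a directory come out
# adjacent, so the final pass only remembers one directory at a time.
# LC_ALL=C forces byte-wise collation so equal dirnames stay contiguous.
sort_proc = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, universal_newlines=True,
                             env=dict(os.environ, LC_ALL="C"))

for line in sys.stdin:
    if not line.strip():
        continue
    path, size = line.rsplit(None, 1)
    sort_proc.stdin.write("%s\t%s\n" % (os.path.dirname(path), size))
sort_proc.stdin.close()                 # sort only starts writing after EOF

current_dir, total = None, 0
for line in sort_proc.stdout:
    d, size = line.rsplit("\t", 1)
    if d != current_dir:
        if current_dir is not None:
            print("%s\t%d" % (current_dir, total))
        current_dir, total = d, 0
    total += int(size)
if current_dir is not None:
    print("%s\t%d" % (current_dir, total))

with byte-wise collation the plain whole-line sort already keeps every 
"dirname<TAB>" group contiguous, so no -t/-k field options are needed.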

-dean


