> So if you have a tree that is 300MiB, and you fix a couple of
> documents over 50 revisions, the total size of the patches might
> be as low as 300KiB, but using the `number of patches' algorithm,
> you will end up with a new cacherev that takes up 300.3MiB. Not
> very smart.
That's even more complex than it seems, because (in my experience) latency usually matters more than bandwidth. So it may be faster to download one 5MiB cacherev than 50 patches of 10KiB each. I guess it all depends on how, where, and what we work with.

For my part, the network is fast and the computers are dead slow. Tarring up 500MiB is cheaper than tarring up 450MiB and applying a single 50MiB patch, or even 50 1MiB patches.

tla get could do some multithreading: download a patch, apply it, and download the next one while the current one is being applied. Lots of weirdo and complex solutions to come up with... :-)

_______________________________________________
Gnu-arch-users mailing list
Gnu-arch-users@gnu.org
http://lists.gnu.org/mailman/listinfo/gnu-arch-users
GNU arch home page: http://savannah.gnu.org/projects/gnu-arch/
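The latency point above can be made concrete with a back-of-the-envelope model: total time is roughly per-request overhead plus bytes divided by bandwidth. The numbers below (0.2s round-trip, 1MB/s link) are assumptions for illustration, not measurements.

```python
# Hypothetical numbers: compare one large download against many small
# ones when per-request latency dominates.
latency = 0.2          # seconds of round-trip overhead per request (assumed)
bandwidth = 1_000_000  # bytes per second (assumed)

one_cacherev = latency + 5_000_000 / bandwidth       # one ~5MiB cacherev
fifty_patches = 50 * (latency + 10_000 / bandwidth)  # fifty ~10KiB patches

print(f"one cacherev:  {one_cacherev:.2f}s")   # ~5.2s
print(f"fifty patches: {fifty_patches:.2f}s")  # ~10.5s
```

With these assumed figures the fifty small transfers take about twice as long as the single big one, even though they move far fewer bytes; with lower latency or a slower link the comparison flips.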
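The "download while patching" idea sketches out naturally as a producer/consumer pipeline: one thread fetches patches in order while the main thread applies them, so network and CPU time overlap. This is only a sketch of the scheme, not tla's actual implementation; fetch_patch and apply_patch are hypothetical stand-ins.

```python
import queue
import threading

def fetch_patch(name):
    # Placeholder for a network download of one patch (assumed helper).
    return f"contents of {name}".encode()

def apply_patch(data):
    # Placeholder for applying one patch to the working tree (assumed helper).
    pass

def pipelined_get(patch_names, prefetch=2):
    # Bounded queue caps how many patches are held in memory ahead of apply.
    q = queue.Queue(maxsize=prefetch)

    def downloader():
        for name in patch_names:
            q.put(fetch_patch(name))  # blocks when `prefetch` patches are queued
        q.put(None)                   # sentinel: no more patches

    threading.Thread(target=downloader, daemon=True).start()

    applied = 0
    while (data := q.get()) is not None:
        apply_patch(data)  # the next download proceeds while this runs
        applied += 1
    return applied

print(pipelined_get([f"patch-{i}" for i in range(1, 51)]))  # → 50
```

Patches must still be applied in order, so the win is only the overlap of one download with one apply; the bounded queue keeps prefetching from ballooning memory on large trees.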