One thing that has frustrated lots of newbies (including me) is the fact that to get reasonably good performance on any realistically sized tree, you have to have a revision library. So why don't we just enforce this policy in the software, so that nobody accidentally concludes that arch sucks?

Derek
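To make the cost argument concrete, here is a rough sketch, in Python rather than tla's actual C, of the kind of "pick the cheapest full-tree starting point" decision Matthieu describes in the quoted message below. The function name and the cost constants are made up for illustration; they are not taken from the tla or baz sources.

    # Hypothetical model: building revision N means starting from some full
    # tree (a local revision-library copy, a cachedrev in the archive, or
    # the initial import) and applying the changesets in between.
    COST_FETCH_FULL_TREE = 50   # assumed relative cost of downloading a full tree
    COST_APPLY_CHANGESET = 1    # assumed relative cost of applying one changeset

    def cheapest_start(target, library_revs, cachedrevs):
        """Return (start_revision, estimated_cost) for building `target`.

        `library_revs` are full trees already on disk (revision library);
        `cachedrevs` are full trees that must be fetched from the archive,
        with the initial import counted as a cachedrev at revision 0.
        """
        candidates = []
        for r in library_revs:
            if r <= target:  # local tree: only the missing changesets cost anything
                candidates.append((r, (target - r) * COST_APPLY_CHANGESET))
        for r in set(cachedrevs) | {0}:
            if r <= target:  # remote tree: pay the download, then the changesets
                candidates.append(
                    (r, COST_FETCH_FULL_TREE + (target - r) * COST_APPLY_CHANGESET))
        return min(candidates, key=lambda c: c[1])

    # With a revision-library copy at patch-97, building patch-100 costs ~3;
    # without one, the best option is the cachedrev at patch-50, which costs
    # ~100 (download plus 50 changesets).
    print(cheapest_start(100, library_revs=[97], cachedrevs=[50]))
    print(cheapest_start(100, library_revs=[], cachedrevs=[50]))

This is only meant to show why a nearby library tree beats even a fairly recent cachedrev, and why per-tag cachedrevs pay off poorly if you microbranch a lot.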
> -----Original Message-----
> From: Andy Tai [mailto:[EMAIL PROTECTED]
> Sent: Friday, November 18, 2005 1:56 PM
> To: Matthieu Moy; Derek Zhou
> Cc: Gnu-arch-users@gnu.org
> Subject: Re: [Gnu-arch-users] Re: recent changes
>
>
> Yes, caching revisions every 50th one may be useful but
> should be done in a smarter manner... if
> the baz algorithms can be applied in a less disruptive way
> that would be great...
>
> --- Matthieu Moy <[EMAIL PROTECTED]> wrote:
>
> > [EMAIL PROTECTED] (Ludovic Courtès) writes:
> >
> > >> * cacherev every 50 revisions and every tag even within the same
> > >> archive. Disk is cheap
> > >
> > > While I agree this should be the default, I think it should not be
> > > hard-wired.
> >
> > In particular, cachedrevs for all tags are a bad choice if you
> > microbranch a lot. It does not only cost disk space, it also costs
> > bandwidth: if you have a close ancestor in your revision library, it's
> > cheaper to apply a few changesets to it than to get the cached
> > revision. Bazaar has clever algorithms to choose which full tree
> > revision to start with (a cachedrev, the initial import, or one in your
> > revision library), but those are relatively deep changes; I don't think
> > this will ever be merged into tla.
> > --
> > Matthieu

_______________________________________________
Gnu-arch-users mailing list
Gnu-arch-users@gnu.org
http://lists.gnu.org/mailman/listinfo/gnu-arch-users
GNU arch home page: http://savannah.gnu.org/projects/gnu-arch/