On Sun, Jun 07, 2009 at 11:32:12 +0200, Petr Rockai wrote:
> > Will loading/reading a 7.2M file for darcs operations create any
> > noticeable overhead?
> Well, what do you mean with "noticeable overhead" anyway? Of course, it would
> be better if darcs wouldn't need to do anything, but it would be sort of
> pointless to use it, then. </irony> ... Ah, and "reading" the file is done
> with mmap, so that's a zero-copy operation.
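
For anyone curious what a zero-copy read looks like in practice, here is a
rough Haskell sketch using the mmap package's mmapFileByteString. This is
illustrative only, not the actual darcs code, and the "_darcs/index" path
is an assumption:

    -- Illustrative sketch only, not darcs's real implementation.
    -- mmapFileByteString maps the file into the process's address space,
    -- so there is no read(2) copy into a user-space buffer; the OS faults
    -- pages in as the ByteString is actually consumed.
    import System.IO.MMap (mmapFileByteString)
    import qualified Data.ByteString as B

    main :: IO ()
    main = do
      -- Nothing means "map the entire file".
      index <- mmapFileByteString "_darcs/index" Nothing
      print (B.length index)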
I just meant: would reading the large index file contribute a significant
amount of time (say, a few seconds) to our total wall time in a large
repository? (As you say, it's still a net improvement; I'm just curious.)
I was wondering whether the future would require us to break the index up
into smaller pieces, but from the sounds of it, we won't need to worry
about that, at least not right now.

One more question: where do we get the times that we store in the index?
My guess is that we get them from the working copy, because getting them
from the pristine would just bring us back to the original problem.
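
To make that guess concrete, a stat-and-compare check could look like the
sketch below. The IndexEntry type and its field names are hypothetical,
made up for illustration; darcs's real index layout differs:

    -- Hypothetical types for illustration; not darcs's actual index.
    import System.Directory (getModificationTime)
    import Data.Time.Clock (UTCTime)

    data IndexEntry = IndexEntry
      { iePath  :: FilePath  -- path of the file in the working copy
      , ieMTime :: UTCTime   -- mtime recorded at the last index update
      }

    -- Only files whose working-copy mtime differs from the recorded one
    -- would need to be re-read and re-hashed.
    maybeChanged :: IndexEntry -> IO Bool
    maybeChanged e = do
      now <- getModificationTime (iePath e)  -- stat the working copy
      return (now /= ieMTime e)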
--
Eric Kow <http://www.nltg.brighton.ac.uk/home/Eric.Kow>
PGP Key ID: 08AC04F9
