Jim Walker <James.Walker at Sun.COM> writes:

> The key areas I would like people to focus on are
> stress and performance test ideas, but any comments
> are welcome. For performance testing we would probably
> start with a large static workspace, then run a series
> of operations on it from build to build so we can see
> any performance changes. I'll work with Perf-PIT.
>

The bigger perf issues I know of are:

 - Hg's memory consumption with large versioned files (much improved
   in 0.9.5 compared to previous releases), but due to the delta
   algorithm it's never really going to be less than ~2x the file
   size (at least as I understood Matt's explanation); a rough
   stress sketch follows this list.

 - Performance with large numbers of uninteresting files (build
   product, etc.) and no .hgignore to get them out of the way.
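
Something like this would exercise the large-file path, including the
delta code (a rough sketch; the file name and sizes are made up):

    # create and commit a large binary, then modify it in place so the
    # second commit has to delta against a large predecessor
    dd if=/dev/urandom of=big.bin bs=1024k count=1024
    hg add big.bin
    hg commit -m 'large file: initial'
    dd if=/dev/urandom of=big.bin bs=1024k count=1 conv=notrunc seek=512
    hg commit -m 'large file: delta'

Watching Hg's footprint across those two commits should give a feel
for the ~2x behaviour (and for how much 0.9.5 improved things).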

The latter suggests it'd be good if we delivered .hgignore files for
the gates in question; however, that's sadly not as simple as it
sounds (see #444). SFW is even worse than ON in that regard, since
the *vast* majority of a built sfwnv workspace is uninteresting (all
the source extracted from tarballs, the build product for same, etc.).
That said, it is quite possibly easier to create a maintainable
.hgignore for SFW (I could be wrong, and I could become wrong as sfw
grows...)
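
For concreteness, a hypothetical .hgignore for an SFW-like workspace
might look something like this (the patterns are invented, not taken
from any real gate):

    syntax: glob
    # build product
    *.o
    *.a
    *.so
    # hypothetical: trees extracted from tarballs, and their output
    */*-src/**
    proto/**

Keeping something like that accurate as components come and go is
the hard part, of course.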

That second issue affects anything wishing to know about workspace
files (anything that does the equivalent of 'hg status': hg status
itself, hg diff, the vast majority of cdm, anything in Hg that
checks whether the workspace is modified, etc.)
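
A quick way to see that cost is to time a status walk with and
without the ignore file (if my understanding of the walker is right,
the win comes from skipping whole ignored directories):

    mv .hgignore .hgignore.off
    time hg status >/dev/null      # full walk, everything reported
    mv .hgignore.off .hgignore
    time hg status >/dev/null      # ignored directories can be skipped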

As far as a large workspace goes, you'd need/want it to be large both
in terms of the number of files and the number of changesets (and
quite possibly the number of untracked files, as above).
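
Something like the following throwaway script would build a workspace
that's large in all three dimensions (the counts are arbitrary; scale
to taste):

    hg init perf-ws && cd perf-ws
    i=0
    while [ $i -lt 200 ]; do              # 200 changesets...
        j=0
        while [ $j -lt 250 ]; do          # ...adding 250 files each
            echo "rev $i" > file.$i.$j
            j=$((j + 1))
        done
        hg addremove -q
        hg commit -q -m "synthetic changeset $i"
        i=$((i + 1))
    done
    k=0                                   # plus untracked noise
    while [ $k -lt 10000 ]; do
        : > junk.$k
        k=$((k + 1))
    done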

-- Rich
