Nathaniel Smith writes:
> Hmm. I _think_ you can probably do better than this with more
> judicious use of tools. For instance, it is easy to run callgrind on
> a smaller work set, and you will still probably find that 30% of that
> work set is spent doing the same thing as before. You can also, for
> instance, use kcachegrind to grok oprofile output. (There is a script
> in contrib/ that does this conversion.)
This is what I thought initially as well, but I've found that the
oprofile percentages change fairly substantially over the course of a
run that pulls the entire monotone repository. I don't know why, but
for a while I was optimizing the things that showed up in the first
part of a run, and I found they made much less difference than
expected.
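One way to quantify that drift is to save `opreport --symbols` output at
an early and a late point in the run and diff the per-symbol
percentages. A minimal sketch in modern Python (the column layout
assumed below is the usual `samples  %  image  symbol` opreport format,
and the sample data and symbol names are invented for illustration, not
taken from a real profile):

```python
# Compare per-symbol percentages between two oprofile snapshots.
# Assumes opreport-style lines: samples  %  image_name  symbol_name.
# Adjust the column handling if the local oprofile version differs.

def parse_opreport(text):
    """Return {symbol: percent} from opreport-style lines."""
    percents = {}
    for line in text.strip().splitlines():
        parts = line.split(None, 3)
        if len(parts) < 4:
            continue
        samples, percent, image, symbol = parts
        try:
            percents[symbol] = float(percent)
        except ValueError:
            continue  # skip header lines that don't parse as numbers
    return percents

def drift(early, late):
    """Per-symbol change in percentage between two snapshots."""
    symbols = set(early) | set(late)
    return {s: late.get(s, 0.0) - early.get(s, 0.0) for s in symbols}

# Invented example data: hot spots early vs. late in a long pull.
EARLY = """\
12000  30.0  mtn  xdelta
 8000  20.0  mtn  sha1
 4000  10.0  mtn  parse_revision
"""
LATE = """\
 5000  12.0  mtn  xdelta
15000  35.0  mtn  sqlite_step
 6000  14.0  mtn  sha1
"""

if __name__ == "__main__":
    deltas = drift(parse_opreport(EARLY), parse_opreport(LATE))
    for sym, delta in sorted(deltas.items(), key=lambda kv: kv[1]):
        print("%+6.1f  %s" % (delta, sym))
```

A symbol that dominates the first snapshot but shrinks in the second
(like `xdelta` in the made-up data) is exactly the kind of early hot
spot that would mislead an optimization effort aimed at the whole run.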
> [ oprofile, kcachegrind instrumenters ]
That should be useful.
> And for callgrind you can do:
>
> python2.4 benchmark.py -m mtn=../opt/mtn \
> -b pull='InitialPull(SimpleRandomRepo(num_files=50, num_revisions=100),
> wait_time=20)' \
> -i callgrind='CallgrindInstrumenter()' \
> scratch results --cache cache
>
> and it'll generate a relatively small data set to run callgrind on.
> (That one'll still take a while, but definitely less than 15 hours...)
>
> Well, I _think_ it will work. You broke SimpleRandomRepo pretty good
> with your mkrandom changes :-). I'm not sure if I fixed stuff up
> right; I've been getting weird errors.
What broke in it? I ran a whole ton of tests on it, and if you didn't
build mkrandom, it should have behaved exactly the same way as before
with the arguments you're showing.
> [ Use lots of branches, fix the UI problems that arise ]
Ok.
-Eric
_______________________________________________
Monotone-devel mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/monotone-devel