> The one thing that concerns me about the underlying data is that the
> iterations run through all 25000 pages and then come back and run
> again.  After thinking about it last night, I think this might be a
> very bad analog for the real world.  Specifically, this means that if I
> hit page1 and then 24999 other pages, by the time I hit page1 again in
> the second iteration the odds of anything being cached for page1 are
> very, very low :-)  This would explain why the numbers don't seem to
> show much improvement from iteration 1 to iteration 2 under 1.8.  How
> easily do you think you could change the harness so that iterations run
> depth-first rather than breadth-first?  If you had that I'd be able to
> include that when I do my updated run (with the latest bits since I'm
> one build behind).

That's a nearly trivial change, but then again the whole test harness is
nearly trivial. It's checked in, so you can update and build the new
version of the tool. Note: I'm traveling starting today, so all I had time
to do was make the change; I didn't get a chance to test it. The only
thing I added is a /depthfirst switch. It's documented in the readme.
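To sketch the difference in iteration order (illustrative Python only, not
the actual harness code; fetch_page and the page list are stand-ins made
up for the example):

    import urllib.request

    def fetch_page(url):
        # Plain GET, body discarded; timing/measurement omitted here.
        with urllib.request.urlopen(url) as resp:
            resp.read()

    def run_breadth_first(pages, iterations):
        # Old behavior: walk the full page list once per iteration. By
        # the time a page comes around again, thousands of other
        # requests have likely evicted whatever was cached for it.
        for _ in range(iterations):
            for page in pages:
                fetch_page(page)

    def run_depth_first(pages, iterations):
        # /depthfirst behavior: hit each page all `iterations` times in
        # a row, so repeat requests can still find its cache entries.
        for page in pages:
            for _ in range(iterations):
                fetch_page(page)

The whole point of the depth-first order is just that repeat hits to a
page land before anything else can push it out of the cache.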


