> > * Based on these statistics, would you or would you not upgrade the
> > wiki to 2.0? Or is it too early to tell?
> 
> I wouldn't do it yet.  Having pages that are ten times slower is just
> beyond reasonable.  While it's a small percentage of the corpus, for
> the people that use those pages it would be pretty unworkable.  The
> 2...n pages are better than I expected (given the lack of a general
> caching system) but still are too slow in some cases.

Err, the caching system I wrote *is* general. The WikiTalk one was specific. :)

Anyway, the new drop is significantly faster in a few ways, which will
hopefully bring the number of unacceptable pages down by a lot.
 
> > * If not, where do we need to be before you can?
> 
> I think I need to do a bit more of this measurement before answering.
> I want to trim out the quick pages (less than e.g., 1 second regardless
> of how much slower they are as a percentage).  Also, I want to get the
> latest 2.0 code.
> 
> Then, of course, will come the hard part of looking at the biggest
> problem pages (which may well number hundreds) and figuring out what's
> actually going on.

I'm pretty sure I know what the answer will be: iteration over large numbers
of topics in WikiTalk. Unfortunately, all the slow operations I've seen are
O(n) and there's no obvious way to make them any better, with the exception
of BELArray.SortBy, which was O(n log(n)), and I fixed that to be O(n), too.
Maybe we can think of something clever. 
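
For the curious, the kind of change that gets you there with SortBy is
decorate-sort-undecorate: evaluate each element's key exactly once up front
rather than on every comparison, so the expensive evaluations go from
O(n log n) to O(n). A minimal sketch of the idea (illustrative names only,
not the actual BELArray code):

    // Sketch: evaluate the (expensive) key once per element, then sort on
    // the precomputed keys. ExpensiveKey stands in for whatever makes the
    // real SortBy slow; the actual BEL code is structured differently.
    using System;
    using System.Collections.Generic;

    static class SortBySketch
    {
        public static List<T> SortBy<T, TKey>(IList<T> items, Converter<T, TKey> expensiveKey)
            where TKey : IComparable<TKey>
        {
            // Decorate: one key evaluation per element -- O(n) expensive calls.
            List<KeyValuePair<TKey, T>> decorated = new List<KeyValuePair<TKey, T>>(items.Count);
            foreach (T item in items)
            {
                decorated.Add(new KeyValuePair<TKey, T>(expensiveKey(item), item));
            }

            // Sort on the cheap precomputed keys; comparisons are now trivial.
            decorated.Sort(delegate(KeyValuePair<TKey, T> a, KeyValuePair<TKey, T> b)
            {
                return a.Key.CompareTo(b.Key);
            });

            // Undecorate.
            List<T> result = new List<T>(items.Count);
            foreach (KeyValuePair<TKey, T> pair in decorated)
            {
                result.Add(pair.Value);
            }
            return result;
        }
    }

The sort itself still does O(n log n) comparisons, of course; the win is that
the expensive evaluations drop to O(n).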

But obviously measurement is required. It will be interesting to see where
the problems are. 
 
> I'll try to look at that, though each test is fairly onerous for me
> since there are 1000 namespaces.  Would you be able to tell the
> difference in performance in a meaningful way by just looking at a
> hundred pages?

Well, the tests we're already doing aren't particularly realistic, as they
walk through all x pages one at a time with equal weighting. So I can't see
any problem with just hitting the 200 (or whatever) slowest pages. 
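
As an aside, timing a fixed list of pages is easy to script, so the "200
slowest" run doesn't have to be very manual. Something like this rough harness
would do for a before/after comparison (the URLs are placeholders, obviously):

    // Rough timing harness: fetch each page once and print elapsed time.
    // In practice the list would come from the slowest pages identified in
    // the earlier measurements.
    using System;
    using System.Diagnostics;
    using System.Net;

    static class SlowPageTimer
    {
        static void Main()
        {
            string[] pages = {
                "http://wiki.example.com/default.aspx/SomeNamespace/SomeSlowTopic.html",
                "http://wiki.example.com/default.aspx/OtherNamespace/AnotherSlowTopic.html",
            };

            using (WebClient client = new WebClient())
            {
                foreach (string url in pages)
                {
                    Stopwatch watch = Stopwatch.StartNew();
                    client.DownloadString(url);
                    watch.Stop();
                    Console.WriteLine("{0}\t{1} ms", url, watch.ElapsedMilliseconds);
                }
            }
        }
    }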
 
> Have you thought about (and would it be easy) to start to drive some
> perf tests in the regression suite?

Obviously it's possible. I wouldn't even say it's particularly difficult.
However, I think we'd be much better served by re-enabling our integration
tests first. For both of those, the hard part (automated installation) is
already done. 
 
> > I have one more optimization I know I want to make: I'm going to try 
> > to speed up the authorization provider by adding per-request caching.
>
> That sounds like a very promising approach.

I wrote most of it today. Tomorrow I'll finish it up and run it through my
tests. 
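
The shape of it is simple: stash the result of each expensive security check
in per-request state, so repeated checks for the same topic during a single
render don't hit the provider chain again. Roughly like this sketch (the names
are illustrative, not the actual provider code):

    // Sketch of per-request caching for authorization decisions. In an
    // ASP.NET app, HttpContext.Current.Items lives exactly as long as one
    // request, so it's a natural home for results that are expensive to
    // recompute. CheckPermissionUncached and the key format are made up.
    using System.Collections;
    using System.Web;

    public class CachingAuthorizationSketch
    {
        public bool HasPermission(string topic, string action)
        {
            IDictionary requestCache = HttpContext.Current.Items;
            string key = "auth:" + topic + ":" + action;

            if (requestCache.Contains(key))
            {
                return (bool)requestCache[key];
            }

            bool allowed = CheckPermissionUncached(topic, action); // the expensive call
            requestCache[key] = allowed;
            return allowed;
        }

        private bool CheckPermissionUncached(string topic, string action)
        {
            // Stand-in for the real (slow) authorization logic.
            return true;
        }
    }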
 
> I don't think reflection is the issue.  With WikiTalk under 1.8 the
> first render can take a while, but after that it's typically cached.
> [insert perennial discussion here]

What I meant was that reflection is the bottleneck in the latest 2.0 code.
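
If reflection really is where the time goes, the usual mitigation is to cache
the member lookups so the per-call cost is a dictionary hit rather than a
GetMethod call (and, if that's still not enough, to compile delegates for the
hot members). A minimal sketch of the lookup-caching half, with illustrative
names rather than the real WikiTalk dispatch code:

    // Sketch: cache MethodInfo lookups so repeated member dispatch doesn't
    // pay for Type.GetMethod every time. Invocation still goes through
    // MethodInfo.Invoke here, and the cache key ignores overloads and
    // argument types, which a real implementation would have to handle.
    using System;
    using System.Collections.Generic;
    using System.Reflection;

    public static class ReflectionCacheSketch
    {
        private static readonly Dictionary<string, MethodInfo> _methods =
            new Dictionary<string, MethodInfo>();

        public static object InvokeMember(object target, string methodName, object[] args)
        {
            Type type = target.GetType();
            string key = type.FullName + "::" + methodName;

            MethodInfo method;
            lock (_methods)
            {
                if (!_methods.TryGetValue(key, out method))
                {
                    method = type.GetMethod(methodName); // the slow part, done once
                    _methods[key] = method;
                }
            }

            return method.Invoke(target, args);
        }
    }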


