> A few more things (I decided to investigate a bit more tonight)...
>
> I am now much more concerned about the underlying data.

I'm not surprised. I went through the same thing when I was analyzing the
flexwiki.com data. 
 
> (1) One example that I just looked at is a page we have that uses
> WikiTalk to list all namespaces on the server (about 1000).  The log
> data I have for this page shows that under 2.x it took over 20 seconds
> for each of the three iterations.  But when I hit that page manually it
> seems much, much faster (2-3 seconds) including successive hits.  And
> the data I have for 1.8 actually shows zero seconds for all three
> iterations.  Apparently there was an exception on this page for all
> three iterations under 1.8.

One case I hit was that the page *before* the one I got weird numbers for
was really slow. Basically, the request was timing out, so the client gave
up and moved on to the next one. But the server was still burning CPU
rendering the last page when the next request came in. 
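
If you want to see that effect in isolation, here's roughly what the client
side looks like (C#; just a sketch -- the URL and the 20-second timeout are
placeholders, not what webtestconsole actually uses):

using System;
using System.Net;

class TimeoutDemo
{
    static void Main()
    {
        // Placeholder URL; substitute a known-slow page from your list.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost/wiki/default.aspx/SlowPage");
        request.Timeout = 20000; // client gives up after 20 seconds

        try
        {
            using (HttpWebResponse response =
                (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status: {0}", response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            if (ex.Status == WebExceptionStatus.Timeout)
            {
                // The client moves on at this point, but ASP.NET is still
                // rendering the page server-side, so the *next* request
                // gets timed against a busy server.
                Console.WriteLine("Timed out; server still rendering.");
            }
        }
    }
}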

> (2) I have found a sample page that claims to have taken consistently
> about 3.5 seconds for all three iterations under 2.0.  However, when I
> access this page directly via the browser (on the same box the tests
> ran on), it comes up in the blink of an eye (as do successive
> refreshes).

Watch out for browser cache. IE is particularly notorious for pulling stuff
from cache. I'd recommend closing the browser and opening a new one when you
suspect this is happening.
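
An easy way to take the browser (and its cache) out of the picture is to
time the page from code instead. A quick sketch, with a placeholder URL:

using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Net.Cache;

class TimeOnePage
{
    static void Main(string[] args)
    {
        // Placeholder; pass the page you want to re-check.
        string url = args.Length > 0 ? args[0]
            : "http://localhost/wiki/default.aspx/SomePage";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        // Make sure nothing gets served from a local cache.
        request.CachePolicy = new HttpRequestCachePolicy(
            HttpRequestCacheLevel.NoCacheNoStore);

        Stopwatch timer = Stopwatch.StartNew();
        using (HttpWebResponse response =
            (HttpWebResponse)request.GetResponse())
        using (StreamReader reader =
            new StreamReader(response.GetResponseStream()))
        {
            reader.ReadToEnd(); // pull the whole body, like a browser would
        }
        timer.Stop();

        Console.WriteLine("{0} ms", timer.ElapsedMilliseconds);
    }
}

Run it a few times in a row; if the numbers stay flat, the 3.5 seconds in
the log is worth digging into.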
 
> (3) Looking closer at the question of exceptions, I now see 11000 under
> 1.x and 7000 under 2.x.  Divide by 3 (iterations) and this suggests
> almost 4000 topics (about 10%) got exceptions under 1.x and over 2000
> under 2.x.  Clearly these need to be addressed, as they are apparently
> messing with the data.  Several observations so far:
> 
>         a) Some of these are pages that don't really exist at all and
> it's my mistake that they are in the URLs list to run the testing
> against.  I'll try to weed those out.
>         b) At least some of the pages that gave exceptions seem to work
> just fine when run manually.  I assume the failures during the
> automated run are somehow due to the state of the web application
> (e.g., memory issues, etc.) but I'm not sure.

Hmm. Well, in addition to the CSV, webtestconsole produces a .errors file
that records any exceptions that get thrown. Can you look in there to see
what the problem was? It's hard to imagine what's happening differently for
the client code than for a browser, but of course it's possible. 
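
If the .errors file doesn't make it obvious, you can also hit one of the
failing pages from code and dump whatever the server sends back -- a 500
with an ASP.NET error page is the usual suspect. Another sketch, again with
a placeholder URL:

using System;
using System.IO;
using System.Net;

class ShowError
{
    static void Main()
    {
        // Placeholder; use one of the pages that threw during the run.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost/wiki/default.aspx/FailingPage");

        try
        {
            using (HttpWebResponse response =
                (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("OK: {0}", response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("Exception: {0}", ex.Message);
            HttpWebResponse response = ex.Response as HttpWebResponse;
            if (response != null)
            {
                // 4xx/5xx responses land here; the body is usually the
                // server's error page, which says what actually broke.
                using (StreamReader reader =
                    new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }
}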
 
> (4) How do you handle redirects in your test harness?  There are a fair
> number of pages on our internal site that redirect to an Internet site.
> I assume you're just processing the data coming back from the web app,
> so it's just the time required to get back the redirect response -- but
> you don't follow the actual redirect response to the external web site
> and count time to retrieve that.  Is that right?

Initially, I was following redirects, because that's what HttpWebRequest
defaults to. However, the latest version of webtestconsole (and the one I
believe you're using) does not follow redirects. Since the output includes
the status code that was returned, you can simply filter out anything that
came back 3xx. 
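
For reference, the knob involved is AllowAutoRedirect, and with it turned
off a 3xx comes back as an ordinary response rather than being chased. A
sketch of what the harness presumably sees (placeholder URL):

using System;
using System.Net;

class RedirectCheck
{
    static void Main()
    {
        // Placeholder; use one of your pages that redirects off-site.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost/wiki/default.aspx/RedirectingPage");
        // HttpWebRequest follows redirects by default; turn that off.
        request.AllowAutoRedirect = false;

        using (HttpWebResponse response =
            (HttpWebResponse)request.GetResponse())
        {
            int code = (int)response.StatusCode;
            if (code >= 300 && code < 400)
            {
                // These are the rows to filter out of the CSV.
                Console.WriteLine("Redirect ({0}) to {1}",
                    code, response.Headers["Location"]);
            }
        }
    }
}

So for the version you're running, yes: the recorded time is just the
round-trip for the redirect response itself, not the external site.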


