Roger Perttu ([EMAIL PROTECTED]; Friday, February 28, 2003 10:59 AM):

>>>Now, here comes my wish. I would like a list of the top n worst
>>>performers. The Processing Time Report tells me there are slow
>>>pages. But I don't know which ones to check.
>>See docs/faq.html#faq128

> Yes, I know about #faq128. Therefore a *separate* report.

It doesn't matter if you change the Processing Time Report or create a
new one. The problem is still the same. You can't accurately track this
data unless you count EVERY time for EVERY request. Then at the end you
can cut to the "top/bottom n." You have to track them all while
processing, and that takes far too much memory.

> But I don't understand why it wouldn't be possible to track the last
> page visited

This is a whole different question (unless I misunderstand you). See
http://analog.cx/docs/webworks.html for the reason why you can't know
what the last page a visitor accessed was.

> just like the last date. In fact I don't see much use
> for the last date but I include it anyway.

It's very useful on the Failed Request report. It tells you if the
broken link has been fixed.

>>>All Analog would have to do is keep a list of the top n (50?) pages
>>>with their time-taken and a variable "topListMin" with the quickest
>>>of those n pages. For each log line, time-taken is compared to
>>>"topListMin". The memory consumption would be proportional to n
>>>(small).

>>Yes, it would be small for this particular thing. But there are about 200
>>similar things (everyone wants a different cross-correlation report), and
>>the total memory requirement would be large.

> I don't get the "cross-correlation" part. I don't want to combine
> two reports, or do I?

Yes (maybe I covered this above): you are asking to track every
processing time (or at least every processing-time bucket) for every
request.

> Surely a report that is turned off wouldn't consume any resources at
> all. I haven't read the source code but if I were to write a program
> like Analog I would implement it like a pipeline. Each log-line would
> move through the pipe and enabled reports would grab the log-line and do
> its stuff.

It's the memory requirement. If you have 10,000 unique requests on your
site (not including separate query strings) and you have 16 buckets in
the processing time report, you now have to track 160,000 unique
combinations of processing-time -> request. This is even worse for
things like host to referrer! (There are rough sketches of the top-n
idea and of this memory cost below.)

--
Jeremy Wadsack
Wadsack-Allen Digital Group
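
Below is a minimal sketch, in C, of the fixed-size top-n list Roger
describes above. It is not Analog's code; the names (`slowpage`,
`toplist`, `consider`, `TOPN`) and the 256-byte URL buffer are made up
for illustration. Each log line is compared against the quickest entry
currently held (the "topListMin" idea), so memory stays proportional
to n.

```c
#include <stdio.h>
#include <string.h>

#define TOPN 50                        /* size of the "worst performers" list */

/* One entry: a request and the time-taken parsed from its log line. */
struct slowpage {
    char   url[256];
    double time_taken;
};

static struct slowpage toplist[TOPN];  /* fixed-size list: O(TOPN) memory */
static int topcount = 0;               /* entries currently in use */

/* Index of the quickest entry held so far (the "topListMin"). */
static int min_index(void)
{
    int i, m = 0;
    for (i = 1; i < topcount; i++)
        if (toplist[i].time_taken < toplist[m].time_taken)
            m = i;
    return m;
}

/* Called once per log line: keep the request only if the list is not
   yet full or its time-taken beats the current minimum. */
static void consider(const char *url, double time_taken)
{
    int m;

    if (topcount < TOPN) {
        m = topcount++;
    } else {
        m = min_index();
        if (time_taken <= toplist[m].time_taken)
            return;                    /* not slow enough to make the list */
    }
    strncpy(toplist[m].url, url, sizeof(toplist[m].url) - 1);
    toplist[m].url[sizeof(toplist[m].url) - 1] = '\0';
    toplist[m].time_taken = time_taken;
}

int main(void)
{
    int i;

    /* A few hand-made log lines; a real run would feed every line in. */
    consider("/index.html", 0.12);
    consider("/search.cgi", 4.70);
    consider("/report.cgi", 9.31);

    /* At report time the TOPN entries would be sorted by time_taken. */
    for (i = 0; i < topcount; i++)
        printf("%-24s %6.2fs\n", toplist[i].url, toplist[i].time_taken);
    return 0;
}
```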
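
And a back-of-the-envelope for the memory objection, as a small C
program: the 10,000-request and 16-bucket figures come from the message
above, while the 8 bytes per counter is only an assumed cell size, not
Analog's real data layout, and the true cost would also include storing
the request names themselves.

```c
#include <stdio.h>

int main(void)
{
    long requests  = 10000;  /* unique requests, ignoring query strings    */
    long buckets   = 16;     /* processing-time buckets in the report      */
    long cellbytes = 8;      /* assumed size of one request/bucket counter */

    long cells = requests * buckets;    /* 160,000 combinations to track */
    printf("%ld cells, roughly %ld KB just for the counters\n",
           cells, cells * cellbytes / 1024);

    /* A host -> referrer cross-correlation is worse still, because both
       sides of the pair can easily run into the tens of thousands. */
    return 0;
}
```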