Yeah, we have had times when it would be useful. We profile whole URLs first,
then try to figure out what is time-consuming by looking at (or already
knowing) the code paths a request takes, often sticking [time] calls in or
hacking up tests to determine whether we are stuck in this thing or that thing.
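The kind of [time] call we stick in looks roughly like this; the proc
name and log message here are placeholders, not real code from our tree:

```tcl
# Hypothetical sketch: wrap a suspect call in Tcl's [time] command and
# log how long it took. "db_query" is a made-up stand-in for whatever
# we suspect is slow; ns_log is AOLserver's logging command.
proc handle_request {} {
    set usec [lindex [time {
        db_query "select ..."
    }] 0]
    ns_log Notice "db_query took $usec usec"
}
```

[time script] returns "N microseconds per iteration", so the lindex 0
pulls out just the number.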
It would be most useful (I think) if procedure profiling could be
turned on globally or only on URLs matching a pattern (or list of
patterns). If we are trying to understand one slow URL, it doesn't
help to have all procedures monitored - we'd only want stats for
procedures invoked by the URL being monitored.
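The gating I have in mind could be as simple as a [string match] loop
over a pattern list; the variable name and patterns below are purely
illustrative, not an existing nsprofiler API:

```tcl
# Sketch of turning profiling on only for URLs matching a pattern list.
# "profile_patterns" is a hypothetical config variable; ns_conn url is
# AOLserver's command for the current request's URL.
set profile_patterns {/slow/* /reports/*.adp}

proc profiling_enabled_p {} {
    global profile_patterns
    set url [ns_conn url]
    foreach pat $profile_patterns {
        if {[string match $pat $url]} {
            return 1
        }
    }
    return 0
}
```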
The other thing that would be cool is CPU time monitoring. I never
got around to adding it to our monitoring utilities - we usually could
figure things out by real-time tracking, once we started ignoring
ns_return. If ns_return time is included in your monitoring, it will
be all over the map because it is dependent on the client's connection
performance.
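For real-time (wall-clock) per-proc stats you don't even need the C
trace API: rename a proc and wrap it with [time], and simply leave
ns_return unwrapped so client-dependent send time stays out of the
numbers. This is a sketch only - "profstats" is a made-up global array,
and the uplevel concatenation only handles simple arguments:

```tcl
# Hypothetical rename-and-wrap profiler. Each call to the wrapped proc
# is timed with [time] and accumulated in the (illustrative) global
# array profstats. Don't wrap ns_return, for the reason above.
proc profile_wrap {name} {
    rename $name __real_$name
    proc $name args [format {
        global profstats
        if {![info exists profstats(%1$s,calls)]} {
            set profstats(%1$s,calls) 0
            set profstats(%1$s,usec)  0
        }
        set usec [lindex [time {
            # Note: this simple $args pass-through mishandles
            # arguments containing spaces; fine for a sketch.
            set result [uplevel 1 __real_%1$s $args]
        }] 0]
        incr profstats(%1$s,calls)
        incr profstats(%1$s,usec) $usec
        return $result
    } $name]
}
```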
Jim
>
> On 2001.04.29, Mike Hoegeman <[EMAIL PROTECTED]> wrote:
> > dossy, check out the C functions..
> >
> > Tcl_CreateTrace()
> > Tcl_DeleteTrace()
> >
> > they will allow you to trace tcl proc calls..
>
> Oh, kick ass!
>
> I think it's time I wrote the nsprofiler to do URL and proc-level
> profiling. However, would anyone else find it useful? ;-)
>
> - Dossy
>
> --
> Dossy Shiobara mail: [EMAIL PROTECTED]
> Panoptic Computer Network web: http://www.panoptic.com/
>