On Wed, Oct 29, 2008 at 01:33:26PM +0100, Gisle Aas wrote:
> 
> On Tue, Oct 28, 2008 at 21:20, Tim Bunce <[EMAIL PROTECTED]> wrote:
> > Though it occurs to me that the performance concerns all relate to the
> > very high volume statement timing data. It would be helpful to have some
> > easy way for the caller to indicate that statement timing data is or
> > isn't wanted. So those that don't want it aren't slowed down by the high
> > cost of firing the callback for them.
> >
> > That also ties-in with the sub call 'punctuation' idea. It should be
> > possible to turn on and off the statement timing data callbacks when
> > entering/leaving subs of interest.
> >
> > Perhaps when calling for_chunks() pass in a ref to a hash where the keys
> > are tags and the values indicate if the callback should be fired for
> > that tag. (The load_profile_data_from_stream() loop could cache those
> > value SVs so the lookup would be very cheap.) The hash could be altered
> > dynamically to enable or disable callbacks for particular tags.
> >
> > Taking that a step further, the hash keys could be code refs.
> 
> I assume you meant the hash values here.

Yeap.
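For what it's worth, the filter-hash dispatch discussed above could be sketched roughly like this. All the names here (%want, dispatch_chunk, the tag names) are hypothetical stand-ins; in the real thing the load_profile_data_from_stream() loop would do the lookup, caching the value SVs so the per-chunk cost is a single hash fetch:

```perl
use strict;
use warnings;

# Sketch only: keys are chunk tags; a value is either a boolean,
# meaning "fire the generic callback for this tag or not", or a
# code ref to call directly for that tag.
my @subs;
my %want = (
    SUB_ENTRY => 1,                          # fire the generic callback
    TIME_LINE => 0,                          # suppress high-volume statement timings
    SUB_INFO  => sub { push @subs, $_[1] },  # per-tag code ref
);

my @seen;   # stands in for the generic callback
sub dispatch_chunk {
    my ($tag, @args) = @_;
    my $v = $want{$tag};
    return unless $v;                 # false or missing: chunk skipped cheaply
    if (ref $v eq 'CODE') {
        $v->($tag, @args);            # tag-specific callback
    }
    else {
        push @seen, [ $tag, @args ];  # generic callback path
    }
}

dispatch_chunk(SUB_ENTRY => 'main::foo', 42);
dispatch_chunk(TIME_LINE => 7, 0.0001);      # dropped before any callback fires
dispatch_chunk(SUB_INFO  => 'main::bar');
```

Since %want is an ordinary hash, code entering or leaving a sub of interest could flip TIME_LINE between 0 and 1 on the fly, which is the "punctuation" idea in a nutshell.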

> It all seems quite doable and possible to grow support for within the
> current interface.  I'll probably not enhance this interface all that
> much in this direction until I find a "real" application where I'm
> troubled by the time it takes to read the file back in.

Fair enough. (I've been working with >100MB compressed profiles recently
so I'm sensitive to the performance issues :)

> > Another thought: add an option to not call the callback for the incoming
> > raw statement times, but instead fire the callback on the aggregated data
> > once the profile has been loaded.
> 
> Wouldn't that basically be the same as loading the Devel::NYTProf::Data 
> object?

Very similar. I can imagine that some 'stream readers' would be happy
enough with that if their main focus was on some other aspect of the
data, such as the mooted sub entry/exit events.

> >>                            Another idea would be to also output
> >> chunks about how much memory is allocated at different times.
> >
> > There's a note in HACKING about that:
> >
> > : Could optionally track resource usage per sub. Data sources could be
> > : perl sv arenas (clone visit() function from sv.c) to measure number of
> > : SVs & total SV memory, plus getrusage(). Abstract those into a
> > : structure with functions to subtract the difference. Then use the same
> > : logic to get inclusive and exclusive values as we use for inclusive and
> > : exclusive subroutine times.
> >
> > That's potentially very powerful.
> 
> It does seem much harder to isolate the effects of the program itself
> from the effects of the profiler on stuff like this. I hope I'm wrong
> about that.

The statement profiler already measures and discounts its own overhead.
The sub profiler doesn't, mainly because it's currently very fast and
doesn't do any i/o, so the overhead should be small and near enough constant.

If the sub profiler gets optional support for generating entry/exit
events and/or resource measurements then it would need to measure and
discount its own overheads like the statement profiler does.

Tim.

--~--~---------~--~----~------------~-------~--~----~
You've received this message because you are subscribed to
the Devel::NYTProf Development User group.

Group hosted at:  http://groups.google.com/group/develnytprof-dev
Project hosted at:  http://perl-devel-nytprof.googlecode.com
CPAN distribution:  http://search.cpan.org/dist/Devel-NYTProf

To post, email:  [email protected]
To unsubscribe, email:  [EMAIL PROTECTED]
-~----------~----~----~----~------~----~------~--~---