On Tue, Oct 28, 2008 at 09:24:50AM +0100, Gisle Aas wrote:
> On Mon, Oct 27, 2008 at 21:35, Tim Bunce <[EMAIL PROTECTED]> wrote:
> > On Mon, Oct 27, 2008 at 03:03:02AM -0700, [EMAIL PROTECTED] wrote:
> >>
> >> Log:
> >> Add Devel::NYTProf::Data::Raw module
> >>
> >> This provides an interface for reading the raw profile data files.
> >> I just found this handy for understanding the file format and debugging
> >> it. It also opens up ways for creating alternative ways to summarize the
> >> data from Perl.
> >
> > Thanks Gisle!
> >
> > I'd envisaged this being implemented as a SAX provider. Any chance you
> > could rework it along those lines?
>
> What do you mean by SAX provider? You want it to actually be
> compatible with XML::SAX and call out to the XML::SAX::Base interface?
I hadn't thought about it much beyond a vague desire to call different
methods for different tags. Now that I've given it some thought, I'd say
it's inappropriate for the low-level interface (though the low-level
interface shouldn't make it hard or slow to add a SAX interface on top).
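As an illustration of "call different methods for different tags", here is a minimal sketch of a SAX-style dispatcher layered on top of a chunk stream. The tag names, handler class, and dispatch_chunk helper are all hypothetical, not NYTProf's actual API:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical handler class: one method per chunk tag, plus a
# catch-all for tags it doesn't know about.
package MyHandler;
sub new { bless { times => [], subs => {} }, shift }
sub on_TIME_BLOCK { my ($self, @args) = @_; push @{ $self->{times} }, \@args }
sub on_SUB_INFO   { my ($self, @args) = @_; $self->{subs}{ $args[0] } = [@args] }
sub on_unknown    { }   # silently ignore unhandled tags

package main;

# Route one ($tag, @args) chunk to the matching on_$tag method.
sub dispatch_chunk {
    my ($handler, $tag, @args) = @_;
    my $method = "on_$tag";
    $handler->can($method)
        ? $handler->$method(@args)
        : $handler->on_unknown($tag, @args);
}

my $h = MyHandler->new;
dispatch_chunk($h, 'TIME_BLOCK', 42, 7);
dispatch_chunk($h, 'SUB_INFO', 'main::foo', 1, 10);
dispatch_chunk($h, 'COMMENT', 'ignored');   # falls through to on_unknown
```

The `can()` check is what keeps the low-level reader decoupled from any particular handler interface: a SAX-ish layer like this can sit on top without the reader knowing about it.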
> I would have preferred an iterator interface instead, but I did not
> see an easy way to get there, so I started with a simple callback
> scheme instead.
>
> My preference would be:
>
>   my $next_chunk = nytprof_out_iter("nytprof.out");
>   while (my ($tag, @args) = $next_chunk->()) {
>       # ....
>   }
Yes, that's tricky with the current structure. It would require some
significant changes (for the better generally, if done with efficiency
in mind).
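For reference, the iterator calling convention proposed above can be sketched with an in-memory chunk list standing in for the real nytprof.out parser. This `nytprof_out_iter` is a stand-in closure, not the real module's function:

```perl
use strict;
use warnings;

# Sketch of the proposed iterator interface: each call returns the
# next ($tag, @args) chunk, and an empty list at end of stream.
# The chunk data here is made up; a real version would parse the file.
sub nytprof_out_iter {
    my ($file) = @_;    # filename accepted but unused in this sketch
    my @chunks = (
        [ TIME_BLOCK => 10, 2 ],
        [ SUB_INFO   => 'main::foo' ],
    );
    my $i = 0;
    return sub {
        return if $i >= @chunks;          # empty list signals EOF
        return @{ $chunks[ $i++ ] };      # ($tag, @args)
    };
}

my $next_chunk = nytprof_out_iter("nytprof.out");
my @seen;
while ( my ($tag, @args) = $next_chunk->() ) {
    push @seen, $tag;
}
# @seen is now ('TIME_BLOCK', 'SUB_INFO')
```

The loop terminates cleanly because a list assignment in boolean context yields the number of elements on the right-hand side, so the empty list at EOF evaluates false.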
> >> +lib/Devel/NYTProf/Data/Raw.pm
> >
> > I'd rather it was called Devel::NYTProf::ReadStream.pm as
> > Devel::NYTProf::Data::Raw doesn't convey much and the extra nesting
> > under Data is potentially confusing as it's not related to the ::Data
> > module (which itself isn't a great name).
>
> I think we can hold off on revising the name until the interface has
> found its form. To me the ::Data part basically means the 'nytprof.out'
> file format, so in that way I found the name logical. I started
> out with a class method on the ::Data class, but it didn't feel right so
> I decided to just create a submodule instead.
I see the ::Data classes (there are ::Data::ProfFile and ::Data::ProfSub
inside Data.pm - they'll be split out at some point) as relating to the
evolving data model.
As an aside... one of the reasons I hadn't implemented a streaming/event-based
API was that I doubted how useful it would be for the raw data.
The raw data is very raw in many ways, especially when trying to make
sense of string evals (as you've seen from my recent checkins).
One thing that might help would be to have the sub profiler emit some
data into the stream when subs are entered and left. That would act as
useful 'punctuation' in the fast flowing stream of statement timings.
> > Rather than going the ENTER/sv_2mortal/LEAVE route I was thinking
> > of reusing a number of pre-allocated SVs. Would be much faster.
>
> The semantics could be slightly weird for users that grab references
> to the arguments passed in, but that price might well be worth it.
I think so. Big profiles contain many millions of statement executions,
and so many millions of callbacks. Performance is critical here. The docs
can warn about refs.
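The hazard being traded away here can be shown without any NYTProf code at all: if the reader reuses one pre-allocated buffer for every callback, a reference saved from inside a callback will later show the newest value, not the one that was current when it was captured. A plain scalar stands in for a pre-allocated SV in this illustration:

```perl
use strict;
use warnings;

my $buf;            # single reused buffer, like a pre-allocated SV
my @saved_refs;     # the "weird semantics": refs all alias one scalar
my @saved_copies;   # the safe alternative: copy values out

for my $value (1, 2, 3) {
    $buf = $value;                  # reader overwrites the shared buffer
    push @saved_refs,   \$buf;      # every iteration pushes the SAME ref
    push @saved_copies, $buf;       # copies preserve the per-call value
}

# All three saved refs now dereference to the last value written (3),
# while the copies still hold 1, 2, 3 as expected.
```

This is the behaviour the docs would need to warn about: callback arguments are only valid for the duration of the call, and users must copy anything they want to keep.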
Tim.
You've received this message because you are subscribed to
the Devel::NYTProf Development User group.
Group hosted at: http://groups.google.com/group/develnytprof-dev
Project hosted at: http://perl-devel-nytprof.googlecode.com
CPAN distribution: http://search.cpan.org/dist/Devel-NYTProf
To post, email: [email protected]
To unsubscribe, email: [EMAIL PROTECTED]