NYTProf reporting has always handled string evals by treating each one pretty much as a single slow-running statement. There was no way to 'see inside' an eval to find out what code it executed or where its time went.
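For example (a hypothetical snippet of mine, not taken from NYTProf or the thread), the old reports would attribute all of the time below to the single statement containing the eval, with no per-line breakdown of the eval'd code:

    # Hypothetical illustration: under the old reporting, the entire cost of
    # this statement -- including every iteration of the loop compiled and
    # run inside the string eval -- showed up against this one line.
    my $total = 0;
    eval q{
        for my $i (1 .. 100_000) { $total += $i }
    };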
The current trunk code treats string evals much like regular files. You can drill down into a string eval because each one now gets its own report file (actually several files: line, block, sub, .dot, etc). That's great for 'interesting' evals, such as those that define subs, but some apps execute thousands of 'uninteresting' evals.

I had planned to add code that would edit the data model to remove 'uninteresting' evals by treating them the old way, i.e. rolling up the time within the eval into the statement that called it. It turns out to be fairly painful to edit the data structures to, for example, make sub calls from uninteresting evals appear to come from the line that called the eval.

So I'm now planning to compromise by defining 'uninteresting' more narrowly: an uninteresting eval is one that defines no subs, calls no subs, and contains no string evals (a rough illustration follows at the end of this message). I'll probably also stop generating the block-level and sub-level report files for evals.

Tim.

p.s. As time goes by, the effort required to maintain a consistent data model in memory, with references, will only grow. I'm coming to the conclusion that NYTProf should include a module that loads the profile into an SQLite db. (I might have mentioned this before.) That would be useful for many things, but would also, I believe, significantly simplify the data model code.
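As a rough illustration of the narrowed definition (hypothetical snippets of mine, not from the distribution or its test suite):

    # 'Uninteresting' under the narrower definition: this eval defines no
    # subs, calls no subs and contains no nested string evals, so its time
    # could simply be rolled up into the calling statement, as before.
    my %config;
    my ($key, $raw_value) = ('timeout', 30);
    eval q{ $config{$key} = $raw_value * 2 };

    # 'Interesting': this eval defines a sub, so it would keep its own
    # drill-down report files (line, block, sub, .dot).
    eval q{
        sub _generated_handler { my ($arg) = @_; return $arg + 1 }
    };

The attraction of the narrower definition is presumably that rolling up such evals needs no rewriting of sub-call records at all, since by definition they neither define nor call any subs.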
