On Sun, Jul 05, 2009 at 11:55:33AM -0700, Joshua ben Jore wrote:
> 
> In profiling my work's primary search engine I quickly tracked down
> most of the time to IO. I have CPU costs I'd like to address but it's
> difficult to see them around the accounting for the IO.

If you really want to focus on CPU time then using a CPU-time clock
would be best, especially if your system has a POSIX high-resolution
CPU-time clock available. See the clock option.
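For example, you'd select the per-process CPU-time clock by its integer
id; something like:

    # CLOCK_PROCESS_CPUTIME_ID is typically 2 on Linux - check time.h
    # or the clock_gettime(2) man page for your system's value
    NYTPROF=clock=2 perl -d:NYTProf yourscript.pl

Time the process spends blocked on IO then won't get counted.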

A patch to list the available clocks would be most welcome!
It just needs a struct to describe a clock (an int id and a string name
would do for now) and an array declaration with #ifdef'd initializers
for each CLOCK_* value.
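Roughly something like this (the struct and array names here are just
placeholders, not existing NYTProf code):

    #include <time.h>

    /* id and name of a clock this build knows about */
    typedef struct {
        int         id;     /* the CLOCK_* constant */
        const char *name;   /* its symbolic name, for reporting */
    } clock_info_t;

    static clock_info_t clock_info[] = {
    #ifdef CLOCK_REALTIME
        { CLOCK_REALTIME,           "CLOCK_REALTIME" },
    #endif
    #ifdef CLOCK_MONOTONIC
        { CLOCK_MONOTONIC,          "CLOCK_MONOTONIC" },
    #endif
    #ifdef CLOCK_PROCESS_CPUTIME_ID
        { CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID" },
    #endif
    #ifdef CLOCK_THREAD_CPUTIME_ID
        { CLOCK_THREAD_CPUTIME_ID,  "CLOCK_THREAD_CPUTIME_ID" },
    #endif
    };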

Any volunteers?

> I'd like to inform the reporter that I've accounted for some costs and
> now want to see things without that effect.
> 
> Does this resonate with anyone else?

Yes.

It's one of the motivations for the new slowops profiling (that'll help
with IO via builtins but not IO via xsubs). Currently it treats the
builtin as an xsub in the same package as the caller, which is generally
useful. I plan to add an option to treat them as xsubs in a "CORE::"
package. That'll make the cost of IO and other syscalls both easier to
see, by lumping them all in one place, and easier to ignore.

A separate mechanism at the reporting level to ignore/skip/hide certain
files and/or subs would fit your needs and be very useful in general.

I'd recommend implementing it as a transform on the data model. Perhaps:
    $profile->prune_files($code_ref);
    $profile->prune_subs($code_ref);
where $code_ref gets called for each item in turn and, if it returns
true, the item and its related metadata are deleted.
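To make the intent concrete, usage might look something like this
(prune_files() is only a proposal above, and what gets passed to the
code ref is just an assumption here):

    # hypothetical: drop files whose IO we've already accounted for
    $profile->prune_files(sub {
        my ($file) = @_;                  # assuming a per-file info object
        return $file->filename =~ m{/MyApp/IO/};   # true => prune it
    });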

Tim.
