On Fri, Dec 18, 2009 at 06:08:00PM +0000, Nicholas Clark wrote:
> On Thu, Dec 17, 2009 at 09:13:44PM +0000, Tim Bunce wrote:
> 
> > Any chance you could add a FileHandle.xs containing the NYTP_* i/o
> > functions and the XS interface code for them?
> 
> Done and pushed.

Awesome.

> > I figure the FileHandle.o could be included in the link that creates the
> > NYTProf.so to avoid having two .so and problems accessing the function
> > pointers. Making the functions non-static and adding a .h should suffice.
> 
> But it turned out that there need to be some games with DynaLoader to
> bootstrap the second module.

Ok.
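(For anyone wanting to peek at that: the usual shape of the DynaLoader game - this is a hedged sketch of the standard technique, not necessarily the code as committed, and the symbol and package names are my assumptions - is to look up the second boot symbol in the already-loaded .so and install it as a bootstrap xsub:)

```perl
# Hypothetical sketch: bootstrap a second XS package from the one
# NYTProf.so that DynaLoader has already loaded.
use DynaLoader ();

my ($libref) = @DynaLoader::dl_librefs;   # handle of an already-loaded .so
my $boot = DynaLoader::dl_find_symbol($libref,
    "boot_Devel__NYTProf__FileHandle")
    or die "can't find FileHandle boot symbol";
my $xs = DynaLoader::dl_install_xsub(
    "Devel::NYTProf::FileHandle::bootstrap", $boot);
$xs->("Devel::NYTProf::FileHandle");      # run the second boot function
```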

> > > +        if(!sv_isa(handle, "Devel::NYTProf::FileHandle"))
> > > +            croak("handle is not a Devel::NYTProf::FileHandle");
> > > +        fh = (NYTP_file)SvPVX(SvRV(handle));
> > 
> > A typemap entry could neatly handle that repeated boilerplate code.
> 
> True, but I don't really understand typemaps. I had forgotten about this
> comment when reworking my patches locally.

I know, or at least used to know, far more than I wanted to about
typemaps after pushing them to the limit (and beyond) in:
http://cpansearch.perl.org/src/TIMB/Memcached-libmemcached-0.2101/typemap
I ended up including a custom xsubpp in that distribution.
Mind you, I've no desire to go that deep again :)
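For the record, the repeated check above is exactly the kind of thing a typemap INPUT entry can absorb. Something like this untested sketch, where T_NYTP_FILE is an invented name and the quotes are backslash-escaped because xsubpp interpolates INPUT sections as Perl strings:

```
TYPEMAP
NYTP_file	T_NYTP_FILE

INPUT
T_NYTP_FILE
	if (!sv_isa($arg, \"Devel::NYTProf::FileHandle\"))
	    croak(\"$var is not a Devel::NYTProf::FileHandle\");
	$var = (NYTP_file)SvPVX(SvRV($arg));
```

With that in place the XS functions could just declare a NYTP_file parameter and drop the boilerplate.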

> > > +     NEW_FID => sub {
> > > +  my (undef, $fid, $eval_fid, $eval_line, $flags, $size, $mtime, $name) = @_;
> > > +  my ($new_fid, $new_eval_fid);
> > > +  if($eval_fid) {
> > > +      $new_eval_fid = $fids{$eval_fid};
> > > +      confess("unknown eval_fid $eval_fid") unless defined $new_eval_fid;
> > > +      $new_fid = $next_fid++;
> > > +      $fids{$fid} = $new_fid;
> > > +  } else {
> > > +      $new_eval_fid = $eval_fid;
> > > +      $new_fid = $file_to_fid{$name};
> > 
> > I'd add $size and $mtime to the key (not used yet, but this'll make them
> > more important).
> > 
> > > +      return if defined $new_fid;
> > > +
> > > +      $new_fid = $next_fid++;
> > > +      $fids{$fid} = $new_fid;
> > > +      $file_to_fid{$name} = $fid;
> > 
> > That looks like it'll assign different fids to the same source file in
> > different profiles. I think most people would much rather the profiles
> > for a given source file are merged. (It might seem to 'work' currently
> > but that would just be a side-effect of the reporting code.)
> 
> I didn't *think* that it did, because when I tested things by merging the
> same profile twice, and then dumping it with dump_profile_data(), I got an
> identical structure, with (as best I could tell) all the times doubled.

I see you've reworked the code now. Testing will flush out these issues.
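(To make the keying suggestion concrete, here's a hedged sketch - not the committed code; %fids, %file_to_fid and $next_fid are the variables from the quoted patch:)

```perl
# Sketch: key non-eval files on name+size+mtime so the same source
# file seen in different profiles merges to a single fid.
my $key = join "\0", $name, $size, $mtime;
my $new_fid = $file_to_fid{$key};
if (!defined $new_fid) {
    $new_fid = $next_fid++;
    $file_to_fid{$key} = $new_fid;   # store the *new* fid, not the old one
}
$fids{$fid} = $new_fid;              # this profile's fid -> merged fid
```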

> > Overall it looks great and would make a wonderful addition to v3.
> 
> On the assumption that it was going in the right direction, and other people
> (hello lurkers) might want to play with it (or even hack on it), I committed
> it.

Great. I'm doing some basic nit-picking on it now.

> For now, it's still guilty of violating Don't Repeat Yourself because
> it re-implements the internals of the file format in Perl. I've not looked
> at how to refactor the XS to provide a higher-level API for writing the output
> file, and providing a Perl API to that for it to use. I'm back in the office
> on Monday, and so far there's nothing more urgent than this to steal my time.

Wonderful!

> It has no tests yet - I was rather hoping that someone lurking would feel
> like that would make a good project to play with.

(I'm hacking the testing framework to provide a very basic 'null merge'
test now - but that won't test _merging_ at all.)

Volunteers most welcome! Especially all the people who've asked
previously for the ability to merge files.

Tim.

-- 
You've received this message because you are subscribed to
the Devel::NYTProf Development User group.

Group hosted at:  http://groups.google.com/group/develnytprof-dev
Project hosted at:  http://perl-devel-nytprof.googlecode.com
CPAN distribution:  http://search.cpan.org/dist/Devel-NYTProf

To post, email:  [email protected]
To unsubscribe, email:  [email protected]
