Re: post-YAPC::Europe CPANTS news

2006-09-11 Thread Salve J Nilsen

Gabor Szabo wrote:

On 9/7/06, Salve J Nilsen [EMAIL PROTECTED] wrote:

Thomas Klausner wrote:


Oh, and if you want to join the fun and help a bit, here's a (probably 
incomplete) list of tasks:


- Metrics:

[snip]

Would the metrics for community support channels that were suggested a 
while ago be welcome? (The discussion about them sort of died out :-\)


[snip]


The question then might be if that channel is used. E.g. are there (recent)
posts on the forum? How many posts are there? Have the questions been
answered? Has the module author blessed the channel (or for that matter
decided to point people to another support channel)?


Exactly. Having a metric like "primary_community_resource: URL" (or similar) 
would at least hint at which forum(s) the author intends to use. This is 
obviously useful information, since it lets the user inspect the forum(s) 
without first having to search for them.
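
To make that concrete, it could be as little as one extra line in META.yml 
(the key name and URL are of course just placeholders):

   # hypothetical META.yml addition
   primary_community_resource: http://lists.example.org/listinfo/foo-users

A tool like CPANTS could then simply check for the key's presence.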


Of course some authors don't care about having a community around their 
software, and some don't consider their CPAN package as important or big 
enough to warrant a community (despite it probably being licensed with an 
open-source-friendly license). These people are entirely free to continue 
doing nothing. :)



- Salve

--
Salve J. Nilsen salvejn at met dot no / Systems Developer
Norwegian Meteorological Institute   http://met.no/
Information Technology Department / Section for Development



Re: todo tests in the TAP Plan

2006-09-11 Thread Ovid
- Original Message 
From: Michael G Schwern [EMAIL PROTECTED]

  Ah, crud.  I need to support it then.  Bummer.  I'll try to get a release 
  out there when I can, then.

 Don't bother, it's a poorly designed feature and likely unused.  I don't want 
 to see it pushed forward into TAP.

OK, I'll ignore them then.

Cheers,
Ovid

--
  
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/






Re: post-YAPC::Europe CPANTS news

2006-09-11 Thread Salve J Nilsen

Adam Kennedy wrote:

Salve J Nilsen wrote:

Thomas Klausner wrote:


Oh, and if you want to join the fun and help a bit, here's a (probably
incomplete) list of tasks:

- Metrics:

[snip]

Would the metrics for community support channels that were suggested a 
while ago be welcome? (The discussion about them sort of died out :-\)


I think the main issue with this was that it was really only a valid 
metric for huge modules, and for 90% of the smaller things there wasn't 
much point.



For example, Config::Tiny or Catalyst::Plugin::Some::Random::Small::Plugin.


Why?

Having such a metric is quite useful even for the smaller modules, IMO. Firstly, 
it says something about the author's ambitions ("I'll be supporting this", "I 
will continue developing features", "I accept patches", "I'd like to help you 
use my software").


And there's nothing wrong if several tiny modules point to a common mailing 
list... E.g. that certain Acme::* module authors subscribe to a hypothetical 
[EMAIL PROTECTED] mailing list.


Or that the Catalyst::Plugin::Some::Random::Small::Plugin author says that 
she'll monitor irc://irc.perl.org/catalyst for questions...



And frankly, I don't think there's a good way to distinguish between 
"should have a community" and "shouldn't need a community".


That's obviously entirely up to the author. What we, the CPAN community, can 
do is urge authors to consider having and using such a resource, since doing 
this in general /helps the community/, both in the general sense (showing the 
world that the CPAN community is easily accessible to outsiders and new users) 
and in the specific sense (making Perl software easier to use, since support 
is readily available).



On the other hand, what WOULD be interesting is a check to make sure 
that the URIs of anything mentioned are still valid.


Heh. Yeah, that would be a nice project all by itself. :)


So if the META.yml has a URI for a community page or what have you, check 
that the page exists. The same sort of uris_exist check could also cover 
URIs in the main documentation.


Good idea. :)
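
Off the top of my head, the META.yml half of such a check could be as small 
as this sketch (LWP::Simple's head() for the liveness test; the 
primary_community_resource key is the hypothetical one from above):

  use LWP::Simple qw(head);
  use YAML qw(LoadFile);

  my $meta = LoadFile('META.yml');
  # hypothetical key; a real metric would also scan the docs for URIs
  my $uri = $meta->{primary_community_resource};
  if (defined $uri) {
      print head($uri) ? "ok: $uri\n" : "dead: $uri\n";
  }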


- Salve

--
Salve J. Nilsen salvejn at met dot no / Systems Developer
Norwegian Meteorological Institute   http://met.no/
Information Technology Department / Section for Development



using examples as tests + Devel::Cover

2006-09-11 Thread Gabor Szabo

In a module I have just started to maintain there were 0 tests,
but there were several examples with their expected output.

As I would like to keep the examples and I would like to have tests,
but I don't want to duplicate this code, I added a test t/sample.t
that runs each of the example files and compares its
output to the expected output.

(system code > out 2>err)...
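
In case it helps, t/sample.t is roughly this (simplified; the examples/
directory and the _stdout naming are just the convention I picked here):

  use strict;
  use warnings;
  use Test::More;

  my @examples = glob 'examples/*.pl';
  plan tests => scalar @examples;

  for my $script (@examples) {
      system "$^X $script > temp_out 2> temp_err";
      # the expected output lives next to each example
      is( slurp('temp_out'), slurp("${script}_stdout"), "output of $script" );
  }
  unlink 'temp_out', 'temp_err';

  sub slurp {
      my $file = shift;
      open my $fh, '<', $file or die "$file: $!";
      local $/;
      return <$fh>;
  }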

So far I have encountered only one problem with this: when I run
./Build testcover I don't get the coverage report from these
example scripts.

So what do you think about using examples as tests?
How could I convince Devel::Cover to collect coverage
information from these tests as well?

Gabor
ps. specifically I am talking about this module:
http://search.cpan.org/dist/Spreadsheet-ParseExcel/


Re: using examples as tests + Devel::Cover

2006-09-11 Thread David Golden

Gabor Szabo wrote:


(system code > out 2>err)...

So far I have encountered only one problem with this: when I run
./Build testcover I don't get the coverage report from these
example scripts.

So what do you think about using examples as tests?
How could I convince Devel::Cover to collect coverage
information from these tests as well?


They probably need to be run in the same process.  What about using 
Test::Output or IO::Capture to capture the output (and keep it from 
Test::Harness) and just running the code with do "code.pl"?
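
Something like this, say (untested sketch; it assumes the same
one-expected-output-file-per-example layout Gabor described):

  use strict;
  use warnings;
  use Test::More;
  use Test::Output qw(stdout_is);

  my @examples = glob 'examples/*.pl';
  plan tests => scalar @examples;

  for my $script (@examples) {
      my $expected = do {
          local $/;
          open my $fh, '<', "${script}_stdout" or die "$script: $!";
          <$fh>;
      };
      # do() runs the example inside this process, so Devel::Cover sees it
      stdout_is { do $script; die $@ if $@ } $expected, "output of $script";
  }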


Regards,
David Golden





Installing Tests

2006-09-11 Thread Ovid
Last week I was at a testing conference with Acme and he came up with the idea 
of installing tests.  He looked into hacking Module::Build and 
ExtUtils::MakeMaker.  He also considered hacking CPAN.pm and CPANPLUS.pm.  
While I don't know if he plans to continue working on this idea, he said he 
didn't mind me posting his idea here for others to consider.

Basically, installing tests would be good because then you can run your full 
test suite against *installed* modules.  That would be nice because then you 
could install a module and rerun your tests for your entire installation and 
see what broke.

I love this idea, but here are some issues that we spotted:

1.  How does one install tests for modules already installed?
2.  If you install a module with already failing tests, you need to track what 
the failures are so you can note different failures when you run the test suite 
in the future.
3.  What's the best way to install them?  Should a separate tool just for this 
be built?

Anyone want to take a crack at this?

I'm also going to post this to Perlmonks.

Cheers,
Ovid
 
-- 
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/





Re: Installing Tests

2006-09-11 Thread Adrian Howard


On 11 Sep 2006, at 14:02, Ovid wrote:

Last week I was at a testing conference with Acme and he came up  
with the idea of installing tests.  He looked into hacking  
Module::Build and ExtUtils::MakeMaker.  He also considered hacking  
CPAN.pm and CPANPLUS.pm.  While I don't know if he plans to  
continue working on this idea, he said he didn't mind me posting  
his idea here for others to consider.

[snip]

I'm more of this way of thinking, and tend to install my Test::Class  
modules along with the classes they're testing.


See http://www.perlmonks.org/index.pl?node_id=553653 for some  
comments.


Adrian


Re: Installing Tests

2006-09-11 Thread David Golden

Ovid wrote:

I love this idea, but here are some issues that we spotted:

1.  How does one install tests for modules already installed?
2.  If you install a module with already failing tests, you need to track what 
the failures are so you can note different failures when you run the test suite 
in the future.
3.  What's the best way to install them?  Should a separate tool just for this 
be built?


4. What assumptions are we making about how tests are packaged?

Scenarios:

* test.pl vs t/*.t
* Custom Makefile.PL or Build.PL that affects test runs
* build_requires modules bundled in inc/

I'm not convinced that you can get this idea to work short of caching 
the full distribution directory or tarball at install-time and then 
iterating through those using the actual Makefile.PL or Build.PL files 
to prep and call tests.


Regards,
David Golden


Re: Installing Tests

2006-09-11 Thread Chris Dolan

On Sep 11, 2006, at 8:02 AM, Ovid wrote:

Last week I was at a testing conference with Acme and he came up  
with the idea of installing tests.  He looked into hacking  
Module::Build and ExtUtils::MakeMaker.  He also considered hacking  
CPAN.pm and CPANPLUS.pm.  While I don't know if he plans to  
continue working on this idea, he said he didn't mind me posting  
his idea here for others to consider.


Basically, installing tests would be good because then you can run  
your full test suite against *installed* modules.  That would be  
nice because then you could install a module and rerun your tests  
for your entire installation and see what broke.


I love this idea, but here are some issues that we spotted:

1.  How does one install tests for modules already installed?
2.  If you install a module with already failing tests, you need to  
track what the failures are so you can note different failures when  
you run the test suite in the future.
3.  What's the best way to install them?  Should a separate tool  
just for this be built?


Anyone want to take a crack at this?

I'm also going to post this to Perlmonks.


Interesting.  A setup like this would have solved a recent bug in  
Text-PDF-0.27 where installation failed silently due to a bogus  
pm_to_blib file.


However, why install the tests?  Why not just keep the latest  
tarballs for each installed module and periodically do the following  
for each of the tarballs:

  tar -xzvf Foo-1.00.tgz
  cd Foo-1.00
  perl Makefile.PL
  make test
  cd ..
  rm -rf Foo-1.00

That seems significantly less fragile than creating a new  
infrastructure, and still exercises all of the non-Foo dependencies.   
The most significant drawback of that approach is that it doesn't  
exercise the installed copy of Foo itself.  Perhaps that can be  
accomplished by simply deleting lib and blib in Foo-1.00 before  
running tests?


Chris

--
Chris Dolan, Software Developer, Clotho Advanced Media Inc.
608-294-7900, fax 294-7025, 1435 E Main St, Madison WI 53703
vCard: http://www.chrisdolan.net/ChrisDolan.vcf

Clotho Advanced Media, Inc. - Creators of MediaLandscape Software  
(http://www.media-landscape.com/) and partners in the revolutionary  
Croquet project (http://www.opencroquet.org/)





Re: TAPx::Parser 0.21

2006-09-11 Thread Michael G Schwern

Ovid wrote:

 - Corrected the grammar to allow for a plan of "1..0" (infinite
   stream).


"1..0" is currently used as part of the "skip all" syntax.

 1..0 # skip Because I said so

Perhaps an infinite stream is just "1.."?
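
Side by side, that would be (the second form being the proposal, not
anything in the current spec):

 1..0 # skip Because I said so    <-- skip all

 1..                              <-- unbounded/infinite stream
 ok 1
 ok 2
 ...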


Re: TAPx::Parser 0.21

2006-09-11 Thread Ovid
- Original Message 
From: Michael G Schwern [EMAIL PROTECTED]
 Ovid wrote:
   - Corrected the grammar to allow for a plan of "1..0" (infinite
     stream).

 "1..0" is currently used as part of the "skip all" syntax.

  1..0 # skip Because I said so

 Perhaps an infinite stream is just "1.."?

Ah, I misremembered it.

It would be nice to have the plan indicate that an infinite stream is 
forthcoming.  That would make it easier to write custom harnesses for it.

Cheers,
Ovid
 
-- 
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/






Re: TAPx::Parser 0.21

2006-09-11 Thread Chris Dolan

On Sep 11, 2006, at 9:58 AM, Ovid wrote:


- Original Message 
From: Michael G Schwern [EMAIL PROTECTED]

Ovid wrote:
 - Corrected the grammar to allow for a plan of "1..0" (infinite
   stream).


"1..0" is currently used as part of the "skip all" syntax.

 1..0 # skip Because I said so

Perhaps an infinite stream is just "1.."?


Ah, I misremembered it.

It would be nice to have the plan indicate that an infinite stream  
is forthcoming.  That would make it easier to write custom  
harnesses for it.


Sorry if I'm jumping into this thread out of context.  I hadn't seen  
any discussion of infinite streams before now.


How is that infinite stream different from the no_plan case?  Is it  
truly infinite or just undetermined?  That is, are you trying to code  
for the following use case?


  use Test::More tests => 'Inf';
  use LWP::Simple qw(get);
  while (1) {
      ok(get('http://www.example.com/'));
      sleep 60;
  }

or is it something else entirely?

Chris

--
Chris Dolan, Software Developer, Clotho Advanced Media Inc.
608-294-7900, fax 294-7025, 1435 E Main St, Madison WI 53703
vCard: http://www.chrisdolan.net/ChrisDolan.vcf

Clotho Advanced Media, Inc. - Creators of MediaLandscape Software  
(http://www.media-landscape.com/) and partners in the revolutionary  
Croquet project (http://www.opencroquet.org/)





Re: TAPx::Parser 0.21

2006-09-11 Thread Ovid
- Original Message 
From: Chris Dolan [EMAIL PROTECTED]

 How is that infinite stream different from the no_plan case?  Is it  
 truly infinite or just undetermined?  That is, are you trying to code  
 for the following use case?

   use Test::More tests => 'Inf';
   use LWP::Simple qw(get);
   while (1) {
       ok(get('http://www.example.com/'));
       sleep 60;
   }

 or is it something else entirely?

You have the basic idea.  You can also look at 
http://search.cpan.org/dist/Test-AtRuntime/ for another example of an infinite 
stream.

It's important to distinguish an infinite stream because as more people start 
writing harnesses to deal with TAP output, they need to know immediately that a 
stream is infinite so they don't do something bad like try to cache all TAP 
output.

 Clotho Advanced Media, Inc. - Creators of MediaLandscape Software  
 (http://www.media-landscape.com/) and partners in the revolutionary  
 Croquet project (http://www.opencroquet.org/)

Hey, I didn't know you had anything to do with Croquet.  That's an awesome 
project!

Cheers,
Ovid
 
-- 
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/






Re: TAPx::Parser 0.20

2006-09-11 Thread Michael G Schwern

Torsten Schoenfeld wrote:

Yeah, this is hand-rolled stuff.  One example:

  http://search.cpan.org/src/TSCH/Glib-1.140/t/7.t

As the comment in there says ...

  we do not use Test::More or even Test::Simple because we need to test
  order of execution...  the ok() funcs from those modules assume you
  are doing all your tests in order, but our stuff will jump around.


I've patched 7.t to use home-rolled subroutines instead of scattering the code 
all over.  Makes hairy code a little less hairy.  Patch attached.

=== t/7.t
==================================================================
--- t/7.t   (revision 18034)
+++ t/7.t   (local)
@@ -19,11 +19,36 @@
 
 =cut
 
+use Test::More import => ['diag'];
+
 print "1..34\n";
 
+sub ok($$;$) {
+    my($test, $num, $name) = @_;
+
+    my $out = $test ? "ok" : "not ok";
+    $out .= " $num" if $num;
+    $out .= " - $name" if defined $name;
+
+    print "$out\n";
+
+    return $test;
+}
+
+sub pass($;$) {
+    my($num, $name) = @_;
+    return ok(1, $num, $name);
+}
+
+sub fail(;$) {
+    my($name) = @_;
+    return ok(0, 0, $name);
+}
+
+
 use Glib;
 
-print "ok 1\n";
+pass(1, 'Glib compiled');
 
 package MyClass;
 
@@ -53,7 +78,7 @@
    # more complicated/sophisticated value returner
    list_returner => {
        class_closure => sub {
-           print "ok 32 # hello from the class closure\n";
+           ::pass(32, "hello from the class closure");
            -1
        },
        flags => 'run-last',
@@ -101,7 +126,7 @@
 }
 
 sub do_returner {
-    print "ok 24\n";
+    ::pass(24);
     -1.5;
 }
 
@@ -117,16 +142,14 @@
 my $b = 0;
 
 sub func_a {
-    print 0==$a++
-        ? "ok 4 # func_a\n"
-        : "not ok # func_a called after being removed\n";
+    ok(0==$a++, 4, "func_a");
 }
 sub func_b {
     if (0==$b++) {
-        print "ok 5 # func_b\n";
+        pass(5, "func_b");
         $_[0]->signal_handlers_disconnect_by_func (\&func_a);
     } else {
-        print "ok 7 # func_b again\n";
+        pass(7, "func_b again");
     }
 
     $_[0]->signal_stop_emission_by_name("something_changed");
@@ -134,19 +157,19 @@
 
 {
     my $my = new MyClass;
-    print "ok 2 # instantiated MyClass\n";
+    pass(2, "instantiated MyClass");
     $my->signal_connect (something_changed => \&func_a);
     my $id_b = $my->signal_connect (something_changed => \&func_b);
-    print "ok 3 # connected handlers\n";
+    pass(3, "connected handlers");
 
     $my->something_changed;
-    print "ok 6\n";
+    pass(6);
     $my->something_changed;
-    print "ok 8\n";
+    pass(8);
 
     $my->signal_handler_block ($id_b);
     $my->signal_handler_unblock ($id_b);
-    print "".($my->signal_handler_is_connected ($id_b) ? "ok" : "not ok")." 9\n";
+    ok($my->signal_handler_is_connected ($id_b), 9);
 
     $my->signal_handler_disconnect ($id_b);
     $my->something_changed;
@@ -155,26 +178,26 @@
     # this is part of the emission process going wrong, not a handler,
     # so it's a bug in the calling code, and thus we shouldn't eat it.
     eval { $my->test_marshaler (); };
-    print $@ =~ m/Incorrect number/
-        ? "ok 10 # signal_emit barfs on bad input\n"
-        : "not ok 10 # expected to croak but didn't\n";
+    ok( $@ =~ m/Incorrect number/, 10, "signal_emit barfs on bad input" );
 
     $my->test_marshaler (qw/foo bar 15/, $my);
-    print "ok 11\n";
+    pass(11);
     my $id = $my->signal_connect (test_marshaler => sub {
-        print $_[0] == $my    &&
-              $_[1] eq 'foo'  &&
-              $_[2]           &&  # string bar is true
-              $_[3] == 15     &&  # expect an int
-              $_[4] == $my    &&  # object passes unmolested
-              $_[5][1] eq 'two'   # user-data is an array ref
-            ? "ok 13 # marshaled as expected\n"
-            : "not ok 13 # bad params in callback\n";
+        ok( $_[0] == $my    &&
+            $_[1] eq 'foo'  &&
+            $_[2]           &&  # string bar is true
+            $_[3] == 15     &&  # expect an int
+            $_[4] == $my    &&  # object passes unmolested
+            $_[5][1] eq 'two',  # user-data is an array ref
+            13,
+            "marshalling"
+        );
         return 77.1;
     }, [qw/one two/, 3.1415]);
-    print ($id ? "ok 12\n" : "not ok\n");
+    ok($id, 12);
     $my->test_marshaler (qw/foo bar/, 15, $my);
-    print "ok 14\n";
+    pass(14);
 
     $my->signal_handler_disconnect ($id);
 
@@ -193,18 +216,10 @@
 
     my $tag;
     $tag = Glib->install_exception_handler (sub {
-        if ($tag) {
-            print "ok 16 # caught exception $_[0]\n";
-        } else {
-            print "not ok # handler didn't uninstall itself\n";
-        }
+        ok( $tag, 16, "exception_handler" );
         0  # returning FALSE uninstalls
       }, [qw/foo bar/, 0]);
-    print ""
-        . ($tag
-           ? "ok 15 # installed exception handler with tag 
example metric (was Re: post-YAPC::Europe CPANTS news)

2006-09-11 Thread Michael G Schwern

Thomas Klausner wrote:

The one advantage of dedicated examples for me is that I can take that
example file (mostly downloaded from search.cpan.org), run it, modify
it, run it, etc.


Cutting and pasting from the docs works as well, no network required.  And it's 
going to be the example for the right version (i.e. the one you have installed, 
not the latest on CPAN).



This hardly works with code embedded in the docs, as such code tends to
be stripped down (e.g. no 'use strict', etc).


That's spurious.  The quality of the code has nothing to do with whether it's in 
a file or a POD document.  In fact, with Test::Inline I can test my example 
code in the POD.
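
For anyone who hasn't seen it, Test::Inline pulls embedded test blocks out
of the POD and runs them as ordinary tests.  Roughly (module and function
made up for the sketch):

  =begin testing

  # extracted by Test::Inline into a .t file and run under Test::More
  my $sum = My::Module::add(2, 2);   # hypothetical example function
  is( $sum, 4, 'the example in the docs actually works' );

  =end testing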


Re: Integrating Test::Run with Module::Build

2006-09-11 Thread Ovid
- Original Message  
From: Shlomi Fish  
 
 In other news, Test::Run now makes use of TAPx::Parser to parse the TAP. It 
 still collects the statistics on its own, because I couldn't remember whether 
 TAPx::Parser does that or not, and it was too much work to do at one time. 
 
Depends upon which statistics you want.  If you want aggregate statistics of 
various test runs, use TAPx::Parser::Aggregator: 
http://search.cpan.org/dist/TAPx-Parser/lib/TAPx/Parser/Aggregator.pm 
 
  use TAPx::Parser::Aggregator; 
 
  my $aggregate = TAPx::Parser::Aggregator->new; 
  $aggregate->add( 't/00-load.t', $load_parser ); 
  $aggregate->add( 't/10-lex.t',  $lex_parser  ); 
 
  my $summary = <<'END_SUMMARY'; 
  Passed:  %s 
  Failed:  %s 
  Unexpectedly succeeded: %s 
END_SUMMARY 
  printf $summary, 
    scalar $aggregate->passed, 
    scalar $aggregate->failed, 
    scalar $aggregate->todo_failed; 
  
The above assumes that $load_parser and $lex_parser are parsers which have 
already finished parsing. You can also look at the various tprove* files in 
examples/ for more usage.
 
Cheers, 
Ovid 
--  
Buy the book -- http://www.oreilly.com/catalog/perlhks/ 
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/ 
 
 
 




Comments after ending plan

2006-09-11 Thread Ovid
I've run into a possible bug with TAPx::Parser.  According to 
http://search.cpan.org/dist/TAP/TAP.pm#The_plan:

  The plan cannot appear in the middle of the output, 
  nor can it appear more than once.

I'm getting parse errors because comments are output after the plan:

  TAPx-Parser $ perl -MTest::More=no_plan -e 'ok 0'
  not ok 1
  #   Failed test in -e at line 1.
  1..1
  # Looks like you failed 1 test of 1.

Which is correct?  I'm assuming that comments *are* allowed after the plan?  If 
so, that's a bit of work I'll have to do to correct for this.
 
Cheers,
Ovid

-- 
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/





Re: Comments after ending plan

2006-09-11 Thread Michael G Schwern

Ovid wrote:

I've run into a possible bug with TAPx::Parser.  According to 
http://search.cpan.org/dist/TAP/TAP.pm#The_plan:

  The plan cannot appear in the middle of the output, 
  nor can it appear more than once.


I'm getting parse errors because comments are output after the plan:

  TAPx-Parser $ perl -MTest::More=no_plan -e 'ok 0'
  not ok 1
  #   Failed test in -e at line 1.
  1..1
  # Looks like you failed 1 test of 1.

Which is correct?  I'm assuming that comments *are* allowed after the plan?  If 
so, that's a bit of work I'll have to do to correct for this.


Comments are exempt.


Re: Installing Tests

2006-09-11 Thread Randy W. Sims

Ovid wrote:

Last week I was at a testing conference with Acme and he came up with the idea 
of installing tests.  He looked into hacking Module::Build and 
ExtUtils::MakeMaker.  He also considered hacking CPAN.pm and CPANPLUS.pm.  
While I don't know if he plans to continue working on this idea, he said he 
didn't mind me posting his idea here for others to consider.

Basically, installing tests would be good because then you can run your full 
test suite against *installed* modules.  That would be nice because then you 
could install a module and rerun your tests for your entire installation and 
see what broke.

I love this idea, but here are some issues that we spotted:

1.  How does one install tests for modules already installed?
2.  If you install a module with already failing tests, you need to track what 
the failures are so you can note different failures when you run the test suite 
in the future.
3.  What's the best way to install them?  Should a separate tool just for this 
be built?

Anyone want to take a crack at this?

I'm also going to post this to Perlmonks.


IIRC, I think this is something Ken has had in mind for a long time for 
Module::Build. He might have some ideas about how it might be done.


Randy.


Re: Installing Tests

2006-09-11 Thread Ken Williams


On Sep 11, 2006, at 7:08 PM, Randy W. Sims wrote:


Ovid wrote:
Last week I was at a testing conference with Acme and he came up  
with the idea of installing tests.  He looked into hacking  
Module::Build and ExtUtils::MakeMaker.  He also considered hacking  
CPAN.pm and CPANPLUS.pm.  While I don't know if he plans to  
continue working on this idea, he said he didn't mind me posting  
his idea here for others to consider.
Basically, installing tests would be good because then you can run  
your full test suite against *installed* modules.  That would be  
nice because then you could install a module and rerun your tests  
for your entire installation and see what broke.

I love this idea, but here are some issues that we spotted:
1.  How does one install tests for modules already installed?
2.  If you install a module with already failing tests, you need  
to track what the failures are so you can note different failures  
when you run the test suite in the future.
3.  What's the best way to install them?  Should a separate tool  
just for this be built?

Anyone want to take a crack at this?
I'm also going to post this to Perlmonks.


IIRC, I think this is something Ken has had in mind for a long time  
for Module::Build. He might have some ideas about how it might be  
done.


Yes, I've been thinking about this for a long time.  In fact, in the  
most recent M::B beta I made some steps toward it, by adding a  
'retest' action that's just like 'test' except that it doesn't look  
in blib/, just in @INC.  Functionally that actually covers a lot of  
the same ground you're after.


What I like about the 'retest' approach is that it's very easy and  
it's much more likely to work.  It also makes it possible to run old  
tests against a new installation, or vice versa.  What I don't like  
about it is that the user has to find the tarball again that they  
previously installed.  I can imagine that in some situations that  
wouldn't be trivial.  In other situations, when people can plan  
ahead, it's probably not a big deal.


I think there are some larger issues than 1,2,3 above that you might  
have missed, too:


 4) Many distributions, including many of the most crucial and well- 
used ones, have some extra set-up steps in their build/install  
sequences.  Others make assumptions in their test suites about where  
various files are located relative to the test code or relative to  
the current working directory.  It's quite possible that in order for  
installed tests to work correctly it could take some serious  
coöperation by modules' authors.


 5) Where should tests be installed?  Where would any other  
supporting materials be installed?


For your #1 above, I'd say just perform a reinstall.  For #2, maybe  
just punt - is there much of a need for that?  For #3, I think we can  
work it into Module::Build as an action or flag(s) to the 'install'  
action.  For EU::MM-based modules I'm not sure what the best approach  
would be, but probably I don't have to think about it. =)



 -Ken



Re: post-YAPC::Europe CPANTS news

2006-09-11 Thread Adam Kennedy
Of course some authors don't care about having a community around their 
software, and some don't consider their CPAN package as important or 
big enough to warrant a community (despite it probably being licensed 
with an open-source-friendly license). These people are entirely free to 
continue doing nothing. :)


Yes, but we've seen what happens once the metrics are created. The 
natural competitive nature of people comes out and they start doing 
things just because there's a metric for it.


Any metric that catches bad things, particularly bad technical things, 
is going to be just fine.


Metrics that try to push good behavior are fraught with trouble, 
because they start pushing people in odd directions.


I think it's important that we take some care with metrics created to 
encourage people to take good behaviors (as opposed to ones that just 
encourage not-bad behavior).


Finally, I don't personally see an obvious (causative or otherwise) link 
between a non-author community support channel, and module Kwalitee (or 
quality for that matter).


Adam K


RFC:: Test::Example

2006-09-11 Thread Gabor Szabo

Going along the path of testing the examples in my distribution,
I think it could be generalized. What do you think about this?
  Gabor


=head1 NAME

Test::Example - Check if all the examples in the distribution work correctly

=head1 SYNOPSIS

   use Test::Example;
   test_all_examples();

or

   use Test::Example;
   foreach my $file (glob 'myexamples/*.plx') {
       test_example(
           dir    => 'myexamples',
           script => $file,
           stdout => "stdout/$file",
           stderr => "stderr/$file",
       );
   }


=head1 METHODS


=head2 test_all_examples

Goes over all the .pl files in the eg/, examples/, sample/ (...?)
directories and runs each of the scripts using L</test_example>.
The options given to test_example are:

   test_example(
       dir    => 'eg',                # the name of the relevant directory
       script => 'scriptname.pl',     # the name of the current .pl file
       stdin  => 'scriptname.pl_stdin',
       stdout => 'scriptname.pl_stdout',
       stderr => 'scriptname.pl_stderr',
   );


=head2 test_all_examples_do

The same as test_all_examples but uses L</test_example_do> to run the scripts.

=head2 test_example

   test_example(
       dir     => 'myexamples',
       script  => 'doit.pl',
       stdin   => 'file_providing_stdin',
       stdout  => 'file_listing_expected_output_of_doit',
       stderr  => 'file_listing_expected_errors_of_doit',
       argv    => ['command', 'line', 'arguments'],
   );

Before running doit.pl, test_example chdirs into the 'myexamples' directory.
doit.pl is executed using C<system>. The list of values provided
as argv are supplied as command line parameters.
Its STDIN is redirected from the file that is given as 'stdin'.
Its STDOUT and STDERR are captured.

In short, something like this:

   chdir 'myexamples';
   system("$h{script} @{ $h{argv} } < $h{stdin} > temp_out 2> temp_err");

Once the script has finished, the content of temp_out is compared to
the expected output and the content of temp_err to the expected errors.

If no 'stderr' key is provided, the expectation is that nothing
will be printed to STDERR.

=head2 test_example_do

The same as L</test_example> but instead of using C<system> to run the external
script it will use C<do 'scriptname.pl'>.

=cut
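
And a rough, untested sketch of what test_example's core could look like
(the temp file names and the slurp helper are just placeholders):

   use strict;
   use warnings;
   use Cwd qw(cwd);
   use Test::More;

   sub test_example {
       my %h = @_;
       my $old_dir = cwd();
       chdir $h{dir} or die "chdir $h{dir}: $!";

       my $stdin = $h{stdin} ? "< $h{stdin}" : '';
       system "$^X $h{script} @{ $h{argv} || [] } $stdin > temp_out 2> temp_err";

       is( slurp('temp_out'), slurp($h{stdout}), "$h{script} stdout" );
       # no 'stderr' key means we expect no error output at all
       is( slurp('temp_err'), $h{stderr} ? slurp($h{stderr}) : '', "$h{script} stderr" );

       unlink 'temp_out', 'temp_err';
       chdir $old_dir or die "chdir back: $!";
   }

   sub slurp {
       my $file = shift;
       open my $fh, '<', $file or die "$file: $!";
       local $/;
       return <$fh>;
   }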