Re: running tests

2004-04-08 Thread Ken Williams
On Apr 2, 2004, at 4:59 PM, Andy Lester wrote:
Sure, but even better is to run only the tests that need to be run,
which is a key part of prove.  You can run prove -Mblib t/mytest.t
instead of the entire make test suite.
When I'm using Module::Build, I do this:

   Build test --test_files t/mytest.t

And actually, I have a shell alias set up so it just becomes:

   t t/mytest.t
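That alias would be something along these lines (a sketch -- Ken's actual definition isn't shown; a shell function handles extra arguments more robustly than a plain alias):

```shell
# Hypothetical reconstruction, bash/zsh syntax:
# "t t/mytest.t" expands to "./Build test --test_files t/mytest.t"
t() { ./Build test --test_files "$@"; }
```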

 -Ken



Re: running tests

2004-04-05 Thread darren chamberlain
* Andy Lester andy at petdance.com [2004/04/02 16:59]:
 Sure, but even better is to run only the tests that need to be run,
 which is a key part of prove.  You can run prove -Mblib t/mytest.t
 instead of the entire make test suite.

  $ make test TEST_FILES=t/mytest.t

(darren)

-- 
An idea is not responsible for the people who believe in it.




Re: running tests

2004-04-05 Thread Andy Lester
   $ make test TEST_FILES=t/mytest.t

Sure, and you can turn on HARNESS_VERBOSE to get the raw output of the
.t file.  prove puts all that stuff behind easy command-line switches,
and lets you specify wildcards, and lets you specify a directory that
implicitly does all the *.t within the directory, and lets you turn on
taint checking, and...
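Spelled out, those conveniences look roughly like this (a sketch; flag spellings vary between Test::Harness versions, so check `prove --man` on yours):

```shell
prove -v t/mytest.t    # raw .t output, like setting HARNESS_VERBOSE=1
prove t/net*.t         # shell wildcards pick which tests run
prove t/               # implicitly runs every *.t in the directory
prove -T t/mytest.t    # run the tests with taint checking on
```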

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-05 Thread Simon Cozens
[EMAIL PROTECTED] (Andy Lester) writes:
 Sure, and you can turn on HARNESS_VERBOSE to get the raw output of the
 .t file.  prove puts all that stuff behind easy command-line switches,
 and lets you specify wildcards, and lets you specify a directory that
 implicitly does all the *.t within the directory, and lets you turn on
 taint checking, and...

And, unlike the bizarre corners of ExtUtils::MakeMaker, is actually
documented!

-- 
Do you associate ST JOHN'S with addiction to ASIA FILE?
- Henry Braun is Oxford Zippy


Re: running tests

2004-04-05 Thread Mark Stosberg
On Mon, Apr 05, 2004 at 05:05:34PM +0100, Simon Cozens wrote:
 [EMAIL PROTECTED] (Andy Lester) writes:
  Sure, and you can turn on HARNESS_VERBOSE to get the raw output of the
  .t file.  prove puts all that stuff behind easy command-line switches,
  and lets you specify wildcards, and lets you specify a directory that
  implicitly does all the *.t within the directory, and lets you turn on
  taint checking, and...
 
 And, unlike the bizarre corners of ExtUtils::MakeMaker, is actually
 documented!

'prove' has really made a big difference for me. With 'make test', I had
the sense there were ways to fine-tune the output, but how to do that
seemed difficult to discover, and difficult to remember. With prove,
there's 'prove -h' and 'perldoc prove', making things much easier to
learn and look up.

Since it is 100% compatible with the test files already in use, there
is nothing to lose by trying it or using it. It's not as if it's a new
dependency that module users would have to install.

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer  
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Re: running tests

2004-04-05 Thread Andy Lester
 there's 'prove -h' and 'perldoc prove', making things much easier to
 learn and look up.

And, for that matter, prove --man == perldoc prove

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-03 Thread Fergal Daly
On Fri, Apr 02, 2004 at 04:59:41PM -0600, Andy Lester wrote:
  Even if you have a smoke bot, you presumably run the tests (depends on the
  size of the suite I suppose) before a checkin and it's convenient to know
  that the first failure message you see is the most relevant (ie at the
  lowest level). Also when running tests interactively it's nice to be able to
  save even 30 seconds by killing the suite if a low level test fails,
 
 Sure, but even better is to run only the tests that need to be run,
 which is a key part of prove.  You can run prove -Mblib t/mytest.t
 instead of the entire make test suite.

If the suite's big enough to warrant a bot then that makes sense but many of
my modules have test suites that complete within a fairly short time.

I tend to run the relevant test until it passes and then run the suite
before checkin. I can pipe the verbose output of the whole suite into less and
know that the first failure is probably the most important one.

F



Re: running tests

2004-04-03 Thread Fergal Daly
On Sat, Apr 03, 2004 at 01:37:03AM +0200, Paul Johnson wrote:
 Coming soon to Devel::Cover (well, on my TODO list anyway):
 
  - Provide an optimal test ordering as far as coverage is concerned - ie
tests which provide a large increase in coverage in a short time are
preferred.  There should also be some override to say run these tests
first anyway because they test basic functionality.

For me, the perfect order of display would be:

If Coverage A is a subset of Coverage B, then Test A must be displayed
before Test B. You could call Test A a subtest of Test B.

You then order all the tests by their coverage increase and attempt to
display them in that order (while satisfying the above rule).

This will ensure that low level precedes high level (because the low level
tests will be subsets of the high level ones).

You need to consider subsets in terms of packages or modules rather than
functions; otherwise, if lowlevel.t tests func1() and func2() but highlevel1.t
only calls func1(), there is no subset relationship. You also need to
keep your test scripts kind of modular.

On the other hand, if you are trying to save time on your test suite then
the same information as above can be used to cut corners.

You run the tests in coverage increase order until you have run out of tests
that will increase the coverage, then you stop. The only exception is if a
Test C fails: then you run its largest subtest (Test B), and if Test B fails
then you run Test B's largest subtest, etc., until one of them doesn't fail.
Then you have located the failure as well as you can with the given tests.
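Fergal's display rule can be sketched in a few lines of Perl. The test names and coverage sets below are invented for illustration; a real implementation would pull them from Devel::Cover's database. Sorting by coverage size is enough to satisfy the subset rule, since a subtest's coverage can never be larger than its superset's:

```perl
use strict;
use warnings;

# Map each test to the set of packages it covers (made-up data).
my %cover = (
    'lowlevel.t'  => { 'App::Err' => 1 },
    'midlevel.t'  => { 'App::Err' => 1, 'App::Query' => 1 },
    'highlevel.t' => { 'App::Err' => 1, 'App::Query' => 1, 'App::Job' => 1 },
);

# Sorting by coverage size puts every subtest before its supersets,
# so low level precedes high level; ties between incomparable tests
# are resolved arbitrarily.
my @order = sort {
    scalar( keys %{ $cover{$a} } ) <=> scalar( keys %{ $cover{$b} } )
} keys %cover;

print "@order\n";    # lowlevel.t midlevel.t highlevel.t
```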

F


running tests

2004-04-02 Thread Tim Harsch
Hi all,
If I have several test files in my test suite, is there a way to get them to
run in a predefined order when the user runs make test?  I realize I could
name them alphabetically like Atest1.t, Bsometest.t, but it seems hokey
and I'm not sure it would work on all systems.



Re: running tests

2004-04-02 Thread Andy Lester
 If I have several test files in my test suite, is there a way to get them to
 run in a predefined order when the user runs make test?  I realize I could
 name them alphabetically like Atest1.t, Bsometest.t, but it seems hokey
 and I'm not sure it would work on all systems.

Look at Test::Manifest by brian d foy.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Tim Harsch
No.  But there are certain classes of functions of the module that don't
work until others have been run, so those others should have been tested
rigorously in a previous test.  For instance, the first would be an error
reporting function, which should work no matter what (but I don't want to
test it in every test file); then there is a class of functions for
querying the system, which rely only on the error function working; then
there is establishing a template and using the template to set attributes;
then there is job submission and synchronization, then job problem
detection, then alternate job synchronization, then blah, blah...

But the tests get more complicated in progression, so each will need to rely
on previous tests having succeeded, OR I'd have to write one giant test
script.  I just thought it would be more useful to the user to catch trivial
problems in the first tests, and more complicated problems in later tests.

However, Test::Manifest seems not to be a part of core perl, so if I used
that it would be one more prereq module I'd need, which is something of a
drawback.

- Original Message - 
From: Andy Lester [EMAIL PROTECTED]
To: Tim Harsch [EMAIL PROTECTED]
Cc: Perl Mod Authors [EMAIL PROTECTED]
Sent: Friday, April 02, 2004 9:59 AM
Subject: Re: running tests


  If I have several test files in my test suite, is there a way to get
them to
  run in a predefined order when the user runs make test?  I realize I
could
  name them alphabetically like Atest1.t, Bsometest.t, but it seems
hokey
  and I'm not sure it would work on all systems.

 Also, WHY do you want them to run in a predefined order?  Are you doing
 setup in one, running some other tests, and then shutdown in another?

 xoa

 -- 
 Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance



Re: running tests

2004-04-02 Thread Arthur Corliss
On Fri, 2 Apr 2004, Tim Harsch wrote:

 Hi all,
 If I have several test files in my test suite, is there a way to get them to
 run in a predefined order when the user runs make test?  I realize I could
 name them alphabetically like Atest1.t, Bsometest.t, but it seems hokey
 and I'm not sure it would work on all systems.

I think a lot of us just use numeric prefixes to control the order:

  01_ini.t
  02_scalar.t
  03_list.t
  ... etc.

--Arthur Corliss
  Bolverk's Lair -- http://arthur.corlissfamily.org/
  Digital Mages -- http://www.digitalmages.com/
  Live Free or Die, the Only Way to Live -- NH State Motto


Re: running tests

2004-04-02 Thread Tim Harsch
That seems a better idea than A, B etc.  I'll just use that.  Thanks!

- Original Message - 
From: Arthur Corliss [EMAIL PROTECTED]
To: Tim Harsch [EMAIL PROTECTED]
Cc: Perl Mod Authors [EMAIL PROTECTED]
Sent: Friday, April 02, 2004 10:32 AM
Subject: Re: running tests


 On Fri, 2 Apr 2004, Tim Harsch wrote:

  Hi all,
  If I have several test files in my test suite, is there a way to get
them to
  run in a predefined order when the user runs make test?  I realize I
could
  name them alphabetically like Atest1.t, Bsometest.t, but it seems
hokey
  and I'm not sure it would work on all systems.

 I think a lot of us just use numeric prefixes to control the order:

   01_ini.t
   02_scalar.t
   03_list.t
   ... etc.

 --Arthur Corliss
   Bolverk's Lair -- http://arthur.corlissfamily.org/
   Digital Mages -- http://www.digitalmages.com/
   Live Free or Die, the Only Way to Live -- NH State Motto



Re: running tests

2004-04-02 Thread Andy Lester
 No.  But there are certain classes of functions of the module that don't
 work until others have been run.  So others should have been tested

So some tests are setting up other ones, then?

One of my goals when writing tests is to make everything as independent
as possible, so that I can run a single test (using prove) as part of my
development process.

Write the test.
Code the code.
Run the test.
Fix the code.
Run the test.
etc.

xoa


-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Mark Stosberg
On Fri, Apr 02, 2004 at 01:52:12PM -0600, Andy Lester wrote:
  No.  But there are certain classes of functions of the module that don't
  work until others have been run.  So others should have been tested
 
 So some tests are setting up other ones, then?
 
 One of my goals when writing tests is to make everything as independent
 as possible, so that I can run a single test (using prove) as part of my
 development process.

Andy,

So how do you recommend handling the case where some tests depend on
other things being in place?

For example, with DBD::Pg, a lot of tests depend on having test data in
the database, and having the database connection working and open.  

One idea would seem to be have a testing module that provides the
setup and tear-down functionality. Then each individual test could 
load the testing module, and setup and teardown for itself.
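A minimal sketch of that idea, with all names invented (a real helper would open a DBI connection and load fixture rows instead of filling a hash):

```perl
# t/lib/TestFixture.pm -- hypothetical shared setup/teardown helper.
package TestFixture;
use strict;
use warnings;

sub setup {
    # Stand-in for connecting and loading test data.
    my %fixture = ( dbh => 'connected', customers => [ 'Z1', 'Z2' ] );
    return \%fixture;
}

sub teardown {
    my ($fixture) = @_;
    # Stand-in for dropping test data and disconnecting.
    delete @{$fixture}{ keys %$fixture };
    return 1;
}

package main;
# Each .t file then sets up and tears down for itself:
my $f = TestFixture::setup();
print scalar @{ $f->{customers} }, "\n";   # 2
TestFixture::teardown($f);
print scalar keys %$f, "\n";               # 0
```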

Is that what you do, Andy?

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer  
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Re: running tests

2004-04-02 Thread Andy Lester
 For example, with DBD::Pg, a lot of tests depend on having test data in
 the database, and having the database connection working and open.  

Every one of our *.t and *.phpt files is self-contained.  If it needs a
connection to the database, it opens one.  If it needs test data in the
database, it creates it.  If it needs to delete the data, then it
deletes it.  We also have some tests that watch for leftover bad data
(customers that have a special status of "Z" are ones that only exist
for testing, so if they're in the DB, we know that something didn't
clean up).

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Tim Harsch
I don't know what you mean by "using prove"?

And, you're right, that is the best way to develop a module I think.

Some tests cannot be run unless certain functions run to set up the state.
So I want to formally test the prereq functions in the earlier, trivial
tests, to make sure they are returning the correct number of return values,
etc.  But the subsequent tests, which concentrate on a different subset of
functions, will need to use the previous functions to set up the state;
there I only want to test that the function succeeded in setting up the
state, not that it is capable of returning the correct number of return
values and working under every imaginable legitimate permutation yet again.

I don't know if I'm making sense.

- Original Message - 
From: Andy Lester [EMAIL PROTECTED]
To: Tim Harsch [EMAIL PROTECTED]
Cc: Perl Mod Authors [EMAIL PROTECTED]
Sent: Friday, April 02, 2004 11:52 AM
Subject: Re: running tests


  No.  But there are certain classes of functions of the module that don't
  work until others have been run.  So others should have been tested

 So some tests are setting up other ones, then?

 One of my goals when writing tests is to make everything as independent
 as possible, so that I can run a single test (using prove) as part of my
 development process.

 Write the test.
 Code the code.
 Run the test.
 Fix the code.
 Run the test.
 etc.

 xoa


 -- 
 Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance



Re: running tests

2004-04-02 Thread Mark Stosberg
On Fri, Apr 02, 2004 at 02:02:24PM -0600, Andy Lester wrote:
  For example, with DBD::Pg, a lot of tests depend on having test data in
  the database, and having the database connection working and open.  
 
 Every one of our *.t and *.phpt files is self-contained.  If it needs a
 connection to the database, it opens one.  If it needs test data in the
 database, it creates it.  If it needs to delete the data, then it
 deletes it.  

Does that mean the test scripts are full of copy/paste coding?

So if there is a bug in the test setup routine, it would be propagated
everywhere. It seems reasonable to break with the ideal of self-contained
tests a bit and put shared setup/teardown code into a re-usable testing
module (which itself might have a single set of tests run against it).

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer  
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Re: running tests

2004-04-02 Thread Andy Lester
 Does that mean the test scripts are full of copy/paste coding?
 So if there is a bug in the test up routine, it would be propagated
 everywhere.

That is indeed potentially the case.  OTOH, once the code works, then
changes to it are intentionally painful.

 It seems reasonable to break with the ideal of self
 contained tests a bit and put shared test setup/tearcode code into  
 a re-usable testing module. (which itself might have a single set of
 tests run against it). 

And in many cases we do that as well.  The problem with that is knowing
where the test counts are incremented.

We also have a module TW::Mechanize (TW is our app) that subclasses
WWW::Mechanize and includes TW-specific object methods.  Now, instead of:

   $mech->get( $url );
   html_ok( $mech->content, "HTML is OK" );

we do

   $mech->get( $url );
   $mech->html_ok( "HTML is OK" );

which means that if we want to do other HTML checking, it's encapsulated
in the html_ok() method.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Mark Stosberg
On Fri, Apr 02, 2004 at 02:12:46PM -0600, Andy Lester wrote:
  I don't know what you mean by using prove?
 
 prove is a command-line utility that ships with Test::Harness.  It
 allows you to run a specific test or tests, as specified on the command
 line, without having to go through the make test rigamarole. 

I use 'prove' as well and find it to be very useful. Here's a command I
might use to run all the 'base' tests, plus those for milestones 1
through 3:

prove -I../perllib --ext=.pl base m{1,2,3}

Then if one fails, I can zero in on it and turn on the verbose option:

prove -v -I../perllib m1/shelter_add_edit_func.pl

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer  
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Re: running tests

2004-04-02 Thread Paul Johnson
On Fri, Apr 02, 2004 at 02:19:03PM -0600, Andy Lester wrote:

  Does that mean the test scripts are full of copy/paste coding?
  So if there is a bug in the test setup routine, it would be propagated
  everywhere.
 
 That is indeed potentially the case.  OTOH, once the code works, then
 changes to it are intentionally painful.
 
  It seems reasonable to break with the ideal of self-contained
  tests a bit and put shared setup/teardown code into a re-usable
  testing module (which itself might have a single set of tests
  run against it).

Refactor the duplicated code.

Most of my Devel::Cover tests look something like:

  use Devel::Cover::Test;
  my $test = Devel::Cover::Test->new("trivial");
  $test->run_test;

And my Gedcom tests:

  use Basic (resolve => "unresolve_xrefs", read_only => 0);

Coding tests is just like coding anything else.  Possibly more so,
because bugs in the tests themselves are a real pain.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net


Re: running tests

2004-04-02 Thread Adrian Howard
On 2 Apr 2004, at 20:59, Mark Stosberg wrote:
[snip]
One idea would seem to be have a testing module that provides the
setup and tear-down functionality. Then each individual test could
load the testing module, and setup and teardown for itself.
[snip]

That would be my approach. If you want some infrastructure to help, you
might want to take a look at <bias type="author">Test::Class</bias>.

It provides a framework for creating setup/teardown routines that run
around your tests, allowing you to create test fixtures.

Also, if there were dependencies between modules Foo and Bar I'd also 
try to create mocks for Foo in the Bar tests, and for Bar in the Foo 
tests - so I can run each test suite independently.
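A bare-bones sketch of that mocking trick in plain Perl (Foo, Bar, and the numbers are invented; CPAN mock modules package this up more nicely):

```perl
use strict;
use warnings;

# Toy stand-ins: Bar depends on Foo.
package Foo;
sub fetch { die "don't hit the real Foo in Bar's tests\n" }

package Bar;
sub doubled { return 2 * Foo::fetch() }

package main;
{
    no warnings 'redefine';
    local *Foo::fetch = sub { 21 };   # the mock, restored at scope exit
    print Bar::doubled(), "\n";       # prints 42 via the mocked Foo
}
```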

Adrian



Re: running tests

2004-04-02 Thread Andy Lester
 coded correctly. So it's desirable to see the results of the lower level
 tests first because running the higher level tests could be a waste of time.

But how often does that happen?  Why bother coding to optimize the
failures?

Besides, if you have a smokebot to run the tests for you, then you don't
care how long things take.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Tim Harsch
Not concerned with wasted time really (at least in this case).  It just seems
more logical to me that if early tests fail, it is more of a clue to the user
that something fundamental to the installation was wrong, whereas in later
tests it seems more of a clue that perhaps something architecture-dependent
isn't working in the module, or who knows what else.  Sort of a severity
measure.

I take it back, I suppose time is a concern: here's why...  My module
provides an API for distributing jobs via a DRM (Distributed Resource
Manager) like SGE or Condor.  On a busy cluster it may take a while for a
job to leave the waiting queue, transfer to a running state, and complete.
So I spose my tests in theory could take weeks to complete (I guess I better
code in an option to not actually distribute jobs), but even just
communicating with the master node (as early tests would do), depending on
how the cluster is configured and how busy the network is, could take a
while.

Sorta similar, I imagine, for a database module: what if the database is
heavily loaded?

I'll post my module docs to help provide context to the discussion.

- Original Message - 
From: Andy Lester [EMAIL PROTECTED]
To: Fergal Daly [EMAIL PROTECTED]
Cc: Tim Harsch [EMAIL PROTECTED]; Perl Mod Authors
[EMAIL PROTECTED]
Sent: Friday, April 02, 2004 12:51 PM
Subject: Re: running tests


  coded correctly. So it's desirable to see the results of the lower level
  tests first because running the higher level tests could be a waste of
time.

 But how often does that happen?  Why bother coding to optimize the
 failures?

 Besides, if you have a smokebot to run the tests for you, then you don't
 care how long things take.

 xoa

 -- 
 Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance



Re: running tests

2004-04-02 Thread Andy Lester
   Beats me what a smokebot is. Presumably it's something I should
 know about. But I only have so many hours in the day to know about everything,
 so simple and not requiring effort is better, for me anyway.

I think you'll find that having a smokebot adds to the simple and not
requiring effort.

A smokebot is a script that runs your test suite at regular intervals,
kicked off by cron.  It checks out a given CVS branch, and then runs the
entire test suite.  For us, it runs once an hour, and if any tests fail,
the entire department gets notified.

# Crontab entries
0 * * * * smokebot HEAD [EMAIL PROTECTED]
30 * * * * smokebot cp2004-branch [EMAIL PROTECTED]

Note that it pulls the main trunk to do testing on the hour, and the
branch for development on the half-hour.

And here's the smokebot script

# The script proper
#!/bin/sh

if [ $# -lt 2 ]
then
    echo "Must pass at least a branch, and one email address,"
    echo "plus any parms to pass to smoke."
    exit 1
fi

REV=$1
shift

MAIL=$1
shift

cd $TMP
DIR=tw
FULLPATH=$TMP/$DIR

# This assumes you have already logged in once as anoncvs
# so that the password is in your ~/.cvspass file.
cvs -d/home/cvs -Q co -d $DIR -r $REV tw > /dev/null

TWROOT=$FULLPATH
export TWROOT

/home/smoke/tw/Dev/devapache stop > /dev/null 2>&1
/home/smoke/tw/Dev/devapache start >> /home/smoke/smoke.out 2>&1

cd $TWROOT
smoke $@ >> /home/smoke/smoke.out 2>&1
grep -i "^Failed" /home/smoke/smoke.out > /home/smoke/smoke.out.fail

if [ -s /home/smoke/smoke.out.fail ]
then
    STATUS=FAILED
    mail -s "Smoke $REV $@ $STATUS `date`" $MAIL < /home/smoke/smoke.out
else
    STATUS=passed
fi

/home/smoke/tw/Dev/devapache stop >> /home/smoke/smoke.out 2>&1
-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Mark Stosberg
On Fri, Apr 02, 2004 at 03:48:12PM -0600, Andy Lester wrote:
 
 A smokebot is a script that runs your test suite at regular intervals,
 kicked off by cron.  It checks out a given CVS branch, and then runs the
 entire test suite.  For us, it runs once an hour, and if any tests fail,
 the entire department gets notified.

Very helpful Andy.

 smoke $@ >> /home/smoke/smoke.out 2>&1

And what does the inside of this 'smoke' script look like?

Mark

--
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer  
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .


Re: running tests

2004-04-02 Thread Andy Lester
  smoke $@ >> /home/smoke/smoke.out 2>&1
 
 And what does the inside of this 'smoke' script look like?

It's just prove.  An FLR-specific version of prove, but it's just prove.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Fergal Daly
On Fri, Apr 02, 2004 at 02:51:11PM -0600, Andy Lester wrote:
  coded correctly. So it's desirable to see the results of the lower level
  tests first because running the higher level tests could be a waste of time.
 
 But how often does that happen?  Why bother coding to optimize the
 failures?
 
 Besides, if you have a smokebot to run the tests for you, then you don't
 care how long things take.

It's more the time spent looking at the test results rather than the time
spent running the tests. So actually it's the result presentation order that
matters. Basically you want to consider the failure reports starting from
the lowest level as these may make the higher level failures irrelevant.

The order the tests actually ran in should be irrelevant to the outcome but
if you're running from the command line the run order determines the
presentation order.

Even if you have a smoke bot, you presumably run the tests (depends on the
size of the suite I suppose) before a checkin and it's convenient to know
that the first failure message you see is the most relevant (ie at the
lowest level). Also when running tests interactively it's nice to be able to
save even 30 seconds by killing the suite if a low level test fails,

F


Re: running tests

2004-04-02 Thread Andy Lester
 Even if you have a smoke bot, you presumably run the tests (depends on the
 size of the suite I suppose) before a checkin and it's convenient to know
 that the first failure message you see is the most relevant (ie at the
 lowest level). Also when running tests interactively it's nice to be able to
 save even 30 seconds by killing the suite if a low level test fails,

Sure, but even better is to run only the tests that need to be run,
which is a key part of prove.  You can run prove -Mblib t/mytest.t
instead of the entire make test suite.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: running tests

2004-04-02 Thread Andy Lester
On Sat, Apr 03, 2004 at 01:37:03AM +0200, Paul Johnson ([EMAIL PROTECTED]) wrote:
 Coming soon to Devel::Cover (well, on my TODO list anyway):

Could we pleease get it to run under -T first, though?

Then I could do coverage testing on Test::Harness!

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance