Re: TPF Devel::Cover grant report May/June 2014

2014-08-02 Thread Gabor Szabo
On Fri, Aug 1, 2014 at 3:32 PM, Paul Johnson p...@pjcj.net wrote:


  or at least on http://news.perlfoundation.org/ ...

 all the previous reports have ended up there, posted by either Mark or
 Karen.  But, as busy folk, it sometimes takes a day or two for that to
 happen.  I should probably have explicitly let you know, Gabor, for
 newsletter purposes, but again it's a little late now.  Oh well, I'll
 know better for next time ;-)



Thanks, though it is not only for newsletter purposes. There is a feeling
(at least in me) that a blog post, especially on the TPF web site, is a much
more serious/official statement than an e-mail on a mailing list. Besides,
I am also collecting all the reports of every grant given by TPF at
http://perlmaven.com/tpf (click on the + to see the list) to make it easier
for people to get to the details.


(I'll probably add the data to the https://github.com/rjbs/tpf-grant-history
repository when time permits.)

regards
   Gabor


Re: TPF Devel::Cover grant report May/June 2014

2014-07-31 Thread Gabor Szabo
On Thu, Jul 24, 2014 at 6:39 PM, Christian Walde walde.christ...@gmail.com
wrote:

 On Mon, 21 Jul 2014 00:59:51 +0200, Paul Johnson p...@pjcj.net wrote:

  http://cpancover.com/staging/index.html (Warning, that's quite large now)

   Total  25:00


 That is a hell of a thing. Thanks for your work and +1. :)

 Also, might I suggest posting this email on BPO?



or at least on http://news.perlfoundation.org/ ...

Gabor


Re: Running perl from a test on Windows

2013-01-15 Thread Gabor Szabo
On Tue, Jan 15, 2013 at 11:32 AM, Buddy Burden barefootco...@gmail.com wrote:
 Gabor,

 I am not sure if this helps but on Windows you need to put the
 double-quotes around $cmd:

 my $output = qx{$^X -e "$cmd"};

 Yes, that would work if I were running _only_ on Windows.  But I need
 it to work for everything (and the double quotes on Linux will cause any
 variables in my Perl code to get interpreted by the shell). :-/

Oh sure, I'd have a conditional based on $^O eq 'MSWin32'
and two cases.
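Something like this (a minimal sketch, assuming $cmd holds the one-liner
to run):

my $output = $^O eq 'MSWin32'
    ? qx{$^X -e "$cmd"}    # cmd.exe only honors double quotes
    : qx{$^X -e '$cmd'};   # POSIX shells: single quotes stop interpolation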

Gabor


Re: Running perl from a test on Windows

2013-01-14 Thread Gabor Szabo
On Mon, Jan 14, 2013 at 9:59 PM, Buddy Burden barefootco...@gmail.com wrote:
 Guys,

 Okay, my Google-fu is failing me, so hopefully one of you guys can help me 
 out.

 For a test, I need to run a snippet of Perl and collect the output.
 However, if it runs in the current interpreter, it will load a module
 that I need not to be loaded ('cause I'm also going to test if my code
 properly loads it).  So I want to run it in a separate instance of
 Perl.

 First (naive) attempt:

 my $output = `$^X -e '$cmd'`;

 This works fine on Linux, but fails on Windows.  Happily, as soon as I
 saw the failures, I recognized I had a quoting problem.  No worries, I
 said: let's just bypass the shell altogether:

I am not sure if this helps but on Windows you need to put the
double-quotes around $cmd:

my $output = qx{$^X -e "$cmd"};

and of course inside $cmd you should use single quotes and not double
quotes if you need some quoting.

Oh the joy :)

Gabor


Fwd: [LDTP-Dev] [Ann] Cobra 2.5 - Windows GUI test automation tool

2012-10-04 Thread Gabor Szabo
Many thanks to Sawyer!
 -- Gabor

-- Forwarded message --
From: Nagappan Alagappan nagap...@gmail.com
Date: Fri, Oct 5, 2012 at 2:11 AM
Subject: [LDTP-Dev] [Ann] Cobra 2.5 - Windows GUI test automation tool
To: LDTP Dev Mailinglist ldtp-...@lists.freedesktop.org


Hello,

Highlights

* Added Perl interface (Contributed by xsawyerx)

[...]

About LDTP:

Cross-platform GUI automation tool: the Linux version is LDTP, the Windows
version is Cobra and the Mac version is PyATOM (work in progress).

* The Linux version is known to work on GNOME / KDE (QT >= 4.8) / Java
Swing / LibreOffice / Mozilla applications on all major Linux
distributions.
* The Windows version is known to work on applications written in .NET /
C++ / Java / QT on Windows XP SP3 / Windows 7 / the Windows 8 development
version.
* The Mac version is currently under development and verified only on OS X
Lion/Mountain Lion. Wherever PyATOM runs, LDTP should work on it.

Download source: https://github.com/ldtp/cobra

Download binary (Windows XP / Windows 7 / Windows 8):
https://github.com/ldtp/cobra/downloads
System requirement: .NET 3.5; refer to README.txt after installation

Documentation references:

For detailed information on LDTP framework and latest updates visit
http://ldtp.freedesktop.org

Information on the various APIs in LDTP, including those added for this
release, can be found at
http://ldtp.freedesktop.org/user-doc/index.html
Java doc - http://ldtp.freedesktop.org/javadoc/

Report bugs - http://ldtp.freedesktop.org/wiki/Bugs

To subscribe to LDTP mailing lists, visit
http://ldtp.freedesktop.org/wiki/Mailing_20list

IRC Channel - #ldtp on irc.freenode.net

Thanks
Nagappan

--
Linux Desktop (GUI Application) Testing Project - http://ldtp.freedesktop.org
Cobra - Windows GUI Automation tool - https://github.com/ldtp/cobra
http://nagappanal.blogspot.com

___
LDTP-dev mailing list
ldtp-...@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/ldtp-dev


Fwd: [LDTP-Dev] Announce: Cobra 2.0 - Windows GUI test automation tool

2012-08-02 Thread Gabor Szabo
FYI something is really missing there :-(
  Gabor

-- Forwarded message --
From: Nagappan Alagappan nagap...@gmail.com

* Java / C# / VB.NET / PowerShell / Ruby are now officially supported
LDTP scripting languages other than Python

[...]


About LDTP:

Cross-platform GUI automation tool: the Linux version is LDTP, the Windows
version is Cobra and the Mac version is PyATOM (work in progress).

Download source: https://github.com/ldtp/cobra


Re: Fatal wide character warnings in tests

2012-01-29 Thread Gabor Szabo
On Sun, Jan 29, 2012 at 11:55 PM, Ovid publiustemp-perl...@yahoo.com wrote:
 How do I make "Wide character in print" warnings fatal in tests?

Test::NoWarnings catches all forms of warnings in your test, not only
the specific one you mentioned.
Maybe that could be used/changed.
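Alternatively, if only that one warning should be fatal, a minimal
hand-rolled sketch (not Test::NoWarnings itself) could be:

local $SIG{__WARN__} = sub {
    die @_ if $_[0] =~ /Wide character/;   # make just this warning fatal
    warn @_;                               # pass everything else through
};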

Gabor

-- 
Gabor Szabo
http://szabgab.com/


Re: Capturing only STDERR with Capture::Tiny?

2011-12-01 Thread Gabor Szabo
On Fri, Dec 2, 2011 at 12:25 AM, David Golden xda...@gmail.com wrote:
 On Wed, Nov 30, 2011 at 5:24 AM, Gabor Szabo szab...@gmail.com wrote:
 Does a request for such a feature in Capture::Tiny sound reasonable?

 https://metacpan.org/source/DAGOLDEN/Capture-Tiny-0.12/Changes

Nice.

Thank you!

Gabor


Re: Capturing only STDERR with Capture::Tiny?

2011-11-30 Thread Gabor Szabo
On Wed, Nov 30, 2011 at 12:16 PM, David Golden xda...@gmail.com wrote:
 On Tue, Nov 29, 2011 at 11:25 PM, Gabor Szabo szab...@gmail.com wrote:
 What is your suggestion to solve this?
 Does a request for such a feature in Capture::Tiny sound reasonable?

 It's reasonable and has been requested before:

 https://rt.cpan.org/Public/Bug/Display.html?id=60515

 I haven't found the tuits to dive back into the mess and implement it.
  To some extent, I'm trying to keep this "Tiny" and not implement
 every possible variation of capture that people might like.  To
 another extent, I've not decided on an API that I like for doing the
 variations.  (I cringed when I added 'capture_merged').

For now the way I used works. I pushed it out to Github
https://github.com/szabgab/Term-ProgressBar
and sent a mail to Martyn J. Pearce.
I hope he will answer soon, though he has not uploaded
anything for almost a year now.

Regarding the API,

What about something like:
capture {}, {stdout => 'none', stderr => 'capture'};

Where => 'capture' is actually the default and the available values would be
'none', 'tee' and 'capture'.

regards
   Gabor


Capturing only STDERR with Capture::Tiny?

2011-11-29 Thread Gabor Szabo
I've started to patch the test of Term::ProgressBar as it was failing
on Windows.
The module prints things to STDERR and dies when it is called too many times.
In the test script there is now code like this:

use Test::More;
use Test::Exception;
use Capture::Tiny qw(capture);

my ($out, $err) = capture {
 lives_ok { code };
 lives_ok { code };
};
print $out;
like $err, qr/.../;

The problem is that within the capture there are test assertions that
need to be able to print to STDOUT so after capturing them I have
to print them out manually (print $out) to keep the TAP flow.

I could probably remove the lives_ok wrappers eliminating this
problem, but I think the reason there are several lives_ok calls
within the capture block is to help pinpoint the exact piece
of code that died.

So probably what I need is a version of capture that will only
capture STDERR and let STDOUT flow through.

I have not seen such capability in Capture::Tiny.

What is your suggestion to solve this?
Does a request for such a feature in Capture::Tiny sound reasonable?
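Judging by the 2011-12-01 follow-up above in this archive, Capture::Tiny
later grew exactly this. A minimal sketch, assuming a version that exports
capture_stderr:

use Capture::Tiny qw(capture_stderr);

my $err = capture_stderr {
    lives_ok { code };   # the TAP it emits goes straight to STDOUT
    lives_ok { code };
};
like $err, qr/.../;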

regards
   Gabor


stable and development releases on CPAN

2011-06-25 Thread Gabor Szabo
Slightly related to the recent discussion on DarkPAN management tools, I wonder
why we are still stuck with latest-and-breakest when installing
modules from CPAN.

Actually, now I am wondering from the side of the module developer who
wants to upload new versions of her module and wants to be able to upload
both bug-fix versions on several stable branches and development
versions on the, well, development branch.

For projects like Padre we would like to be able to upload versions and let
the end user easily decide which version they would like to use. e.g.

2.00 stable -> 2.01 bugfix -> 2.02 bugfix -> 2.03 bugfix
4.00 stable -> 4.01 bugfix -> 4.02 bugfix -> ...
5.00 dev    -> 5.01 dev    -> 5.02 dev    -> ...

So I wonder if it would be possible to add a flag to every CPAN upload
(e.g. in the META.yml file) that would allow the developer to create branches.
Then, once the CPAN clients support this, the end user will be able to say
"for Padre I would like to use branch foobar" and it will keep upgrading only
the releases that are marked to be in the 'foobar' branch.

Old clients would still get the latest version.
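To make the idea concrete, a sketch of how the flag might look as meta
data (the key name release_branch is invented here, not part of any spec):

my $meta = {
    name           => 'Padre',
    version        => '4.01',
    release_branch => 'stable-4',   # clients would follow only this branch
};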

regards
   Gabor


Net::Server 0.99 fails with Devel::Cover 0.73

2011-03-22 Thread Gabor Szabo
Hi,

I just tried to run the tests of Net::Server using Devel::Cover but
that failed:

~/work/Net-Server-0.99> HARNESS_PERL_SWITCHES=-MDevel::Cover make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e
"test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/Options.t ... ok
t/Port_Configuration.t  ok
t/Server_BASE.t ... ok
t/Server_Fork.t ... All 5 subtests passed
t/Server_http.t ... Dubious, test returned 255 (wstat 65280, 0xff00)
All 5 subtests passed
t/Server_INET.t ... ok
t/Server_Multiplex.t .. All 4 subtests passed
t/Server_MultiType.t .. ok
t/Server_PreFork.t  ok
t/Server_PreForkSimple.t .. ok
t/Server_Single.t . ok
t/UDP_test.t .. ok
t/UNIX_test.t . ok

Test Summary Report
---
t/Server_Fork.t (Wstat: 14 Tests: 5 Failed: 0)
  Non-zero wait status: 14
t/Server_http.t (Wstat: 65280 Tests: 5 Failed: 0)
  Non-zero exit status: 255
t/Server_Multiplex.t(Wstat: 14 Tests: 4 Failed: 0)
  Non-zero wait status: 14
Files=13, Tests=195, 77 wallclock secs ( 0.18 usr  0.05 sys + 40.09
cusr  1.05 csys = 41.37 CPU)
Result: FAIL
Failed 3/13 test programs. 0/195 subtests failed.
make: *** [test_dynamic] Error 255


Without Devel::Cover the tests pass.

Anyone with tuits who could figure out why those tests
fail and what needs to be fixed?

regards
   Gabor

-- 
Gabor Szabo
http://szabgab.com/


Flier about using Perl for Testing

2011-01-17 Thread Gabor Szabo
Hi,

I am preparing for the Perl booth at FOSDEM in the beginning of February and
I'd like to make sure we have a number of fliers about Perl on various subjects.

I believe that using Perl for testing other things is an important area, and
since I think I know a bit about it, I started to write a document. Here is
what I wrote so far. The idea is to fill two sides of an A4 page with
information. It can of course also contain images. I'd be happy to get your
help in preparing the document.

The main difference between this and
http://www.perl.org/about/whitepapers/perl-testing.html
mentioned in my previous message is that the objective of this document is
to show how Perl can be used to test *other* things. Maybe that's the direction
where Leo's document is also heading; in that case these could be
integrated.


==

TAP - the Test Anything Protocol - provides a good separation between running
tests and analyzing the results. Having the protocol implemented in several
languages enables the integration of unit tests written in the languages of
the application with a common reporting back-end. http://testanything.org/
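For example, a plain Test::More script already produces TAP on STDOUT:

use Test::More tests => 2;

ok( 1 + 1 == 2, 'addition works' );    # emits: ok 1 - addition works
is( lc('TAP'), 'tap', 'lc() works' );  # emits: ok 2 - lc() works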

Smolder is a web-based continuous integration smoke server. It's a central
repository for your smoke tests for multiple public and private repositories.
It can collect TAP-based test reports and generate graphs and charts showing
the progress of the test results. http://search.cpan.org/dist/Smolder/

There are over 400 testing and quality assurance related modules
available on CPAN.

Modules such as Test::More, Test::Most and Test::Class provide a framework to
write tests producing TAP as their output. Similar packages exist for other
languages as well, including C, C++, C#, Java, JavaScript, PostgreSQL (pgTAP),
Python, Ruby and PHP.


Test::Harness allows the collection and processing of TAP-based results and
the generation of reports in various formats: text, HTML, graphs and charts.


Test::WWW::Mechanize allows the automated testing of web sites. It acts as a
web browser, understanding HTML objects such as links and forms. Perl also
supports driving Firefox itself, providing behavior identical to the
experience of regular users, and Perl integrates well with Selenium.
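A minimal sketch of such a test (the URL and the pattern are placeholders):

use Test::More;
use Test::WWW::Mechanize;

my $mech = Test::WWW::Mechanize->new;
$mech->get_ok('http://example.com/');   # fetch the page, assert HTTP success
$mech->content_like(qr/Example Domain/, 'front page has the expected text');
done_testing();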

Perl provides a seamless way to interact with all the major relational
databases such as Oracle, MySQL, PostgreSQL and Ingres. It can also use ODBC
to connect to any database supporting that. You can use plain SQL or one of
Perl's abstraction libraries or ORMs to interact with the databases.

Major NoSQL databases such as Apache CouchDB and MongoDB are also
supported using Perl.

In many organizations management likes to see reports arriving in the
e-mail as an Excel file with charts and graphs. Perl allows the
automatic creation of such files even on systems where MS Excel is not
available, such as Unix or Linux. Perl also provides an easy way to
automatically send e-mails with attachments.


Perl Quality Assurance Projects http://qa.perl.org/


==

Your help would be highly appreciated as I need to have a version of this
document in about 2 days to get it printed on time. Later on we can improve
it, but I'd rather have something reasonable for FOSDEM than nothing.

regards
   Gabor
-- 
Gabor Szabo                     http://szabgab.com/
Perl Ecosystem Group       http://perl-ecosystem.org/


Re: Flier about using Perl for Testing

2011-01-17 Thread Gabor Szabo
Maybe I should add some use cases:

You built a web-site in Java. You can still write your external
tests with Perl and integrate the results with the output of your
Java based unit tests via TAP.

You built a router. It has a telnet interface and a simple web-based
interface you need to interact with. You also need to see that the
packets you send arrive in the correct place.

You wrote an application in C or C++. You wrote it in those languages to
get crazy speed, even though you know it takes a lot longer to write in
them than in Perl. In your tests your preference is to write as fast as
possible; execution time isn't that critical. For this you build a small
bridge between Perl and your C or C++ libraries and write your tests in Perl.
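One way to build such a bridge is Inline::C (a sketch, assuming Inline::C
is installed; fast_add() stands in for your real C library):

use Test::More tests => 1;
use Inline C => <<'END_C';
int fast_add(int a, int b) { return a + b; }
END_C

is( fast_add(2, 3), 5, 'the C code behaves' );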

Gabor


--
Gabor Szabo                     http://szabgab.com/
Perl Ecosystem Group       http://perl-ecosystem.org/


Testing (other things) with Perl

2011-01-16 Thread Gabor Szabo
Hi,

I just looked at http://qa.perl.org/ and I see it has 3 tabs on the left:

Home / Testing your code / Testing CPAN and Perl

I wonder if we could/should add another tab called
"Testing other things with Perl"
or
"Testing with Perl"
where we could showcase the tools in Perl that can be
used to test any application regardless of its language.

It could mention things such as
TAP, Smolder, WWW::Mechanize,
Net::Telnet, Net::SSH and other Net::* modules
DBI/DBD* etc.

BTW I also tried to locate the source of the web site but could not find it
there. If it is not on the site, could it be added? Maybe on the
http://qa.perl.org/siteinfo.html page.

I also found this page: http://www.perl.org/about/whitepapers/perl-testing.html
that might be linked from the QA website.


regards
  Gabor Szabo
  http://szabgab.com/


Devel::Cover with options

2011-01-03 Thread Gabor Szabo
In a project I am working on, the directory layout does not have a lib
directory, so we have:

/X.pm
/X/Y.pm
...
/t/test.t

when I run
$ PERL5OPT=-MDevel::Cover make test
$ cover

I get a report only for the files in t/

How can I tell Devel::Cover to report on all the files in the
current directory except those in t/?

I thought I could do it with this:

cover -t +inc . -inc t

but I get:

Unknown option: inc
Invalid command line options at
/home/gabor/perl5/lib/perl5/x86_64-linux-thread-multi/Devel/Cover/Report/Html_minimal.pm
line 677.

From http://search.cpan.org/dist/Devel-Cover/lib/Devel/Cover.pm it is
unclear to me how I can supply these options.
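The options in question belong on the Devel::Cover import line, not on the
cover program; a sketch to try (hedged -- check the exact option names
against your version's Devel::Cover POD):

HARNESS_PERL_SWITCHES='-MDevel::Cover=+ignore,^t/' make test
cover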

regards
   Gabor


SQE conferences

2010-08-10 Thread Gabor Szabo
hi,

has anyone visited or given talks at SQE conferences in the past?

http://www.sqe.com/conferences/

Could you please share your experience?

Would it be interesting at those conferences to offer talks about TAP
and the way testing is done using Perl?

regards
   Gabor


What are you doing with Perl in the QA department?

2010-05-08 Thread Gabor Szabo
Hi,

In this question I am not so much interested in how you do unit
testing of your Perl code but how you use Perl in a QA or QC department?


Some of you might know that I have a training course on how to
use Perl in test automation. That course assumes you already know
Perl. (First I teach TAP and how to test Perl code and then things
like WWW::Mechanize, Selenium for web, Expect and Net::Telnet for CLI
and Win32::GUITest for, well, GUI-related things, to test anything else.)


Now I am thinking of building a course, or writing a book, or just a series
of blog entries, that will not assume any Perl knowledge and will help people
in the QA/QC department and in test automation.
So I wonder, if you are working in QA, what are you doing with Perl?

Is that processing log files?
Is that running batch jobs or wrapping other utilities?
Is that for preparing input files for the test?
Do you also use Perl to test various applications (web, CLI, GUI, etc)?

regards
   Gabor


getting more details from a test script

2010-04-05 Thread Gabor Szabo
Hi,

When I am writing a test script for a Perl module or even an application
I am writing, the output I get from TAP seems enough. When I write a test
for an application whose author is someone else, and sometimes the person
running the test is a third one, in those cases we usually need more
details to drill down.

It is not unique to it but let me give an example with Test::WWW::Mechanize.

When this call fails

$mech->content_like(qr{regex});

it gives me a part of the content of the page. That's
ok in the TAP stream but I'd like to be able to save
the actual content and later let the user see it.

I could write this:

$mech->content_like(qr{regex}) or diag $mech->content;

but then I get all the content in the TAP stream making it quite unreadable.

Maybe I need something like this:

$mech->content_like(qr{regex}) or do {
    my $filename = 'some_filename';
    if (open my $fh, '>', $filename) {
        print $fh $mech->content;
        diag "File: $filename";
    }
};

and then parse the TAP output for 'File:' *after* a test failure.

Is there a better way to do this?

regards
   Gabor


-- 
Gabor Szabo
http://szabgab.com/


Re: getting more details from a test script

2010-04-05 Thread Gabor Szabo
On Mon, Apr 5, 2010 at 12:17 PM, Ovid publiustemp-perl...@yahoo.com wrote:
 --- On Mon, 5/4/10, Gabor Szabo szab...@gmail.com wrote:

 From: Gabor Szabo szab...@gmail.com

 Maybe I need something like this:

 $mech->content_like(qr{regex}) or do {
     my $filename = 'some_filename';
     if (open my $fh, '>', $filename) {
         print $fh $mech->content;
         diag "File: $filename";
     }
 };

 and then parse the TAP output for 'File:' *after* a test
 failure.

 Is there a better way to do this?

 The problem, I think, is that everyone wants subtly different things from
 tests outside of ok/not ok.  The question I'm wondering about is what you
 mean by "this" in "is there a better way to do this?".

hmm, looking at it again I think I used 'this' to mean that I don't know
what needs to be improved.


 Are you wanting a better way of presenting the filename to test 
 authors/runners?  Are you wanting a better way to store the file contents?

 If it's the former, we need structured diagnostics in TAP to be formalised 
 and implemented.  If it's the latter, I would recommend writing your own 
 output to file function and then instead of using Test::More and your own 
 test utilities, bundle all of them with Test::Kit so you can just do this:

  use My::Custom::Test::More tests => $test_count;

 The advantage here is that you have your own custom test behaviours nicely 
 controlled by one module and if you need to change them, you can do so in one 
 spot.

I certainly don't want to repeat the do { ... } part for every call that
needs saving. On second thought, maybe it would be better and simpler to
save the content of the web page every time, regardless of success or
failure, and then let the user drill down if she wants. But I'd also
prefer an already formalized way to communicate the filenames so I don't
need to invent my own.

 Or maybe you meant something else by this entirely :)

Yeah, possibly.

Maybe I also meant that I'd like a more general solution that would work for
other Test::* modules as well, not only Test::WWW::Mechanize, but I am not
sure any more :-)

Maybe all that I need here is a call save_data($var, 'title')
that would save the content of $var to a newly generated file and print
a diag("File: 'title' path/to/file"). The test runner would then collect the
files and zip them together with the TAP output. The tool that displays
them (e.g. Smolder) would then be able to add links to these files.
This save_data would work similarly to explain and recognize when $var is a
reference, calling Dumper on it before saving.
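A rough sketch of that hypothetical save_data() -- the name and behavior
are only this proposal, not an existing API:

use Data::Dumper;
use File::Temp qw(tempfile);
use Test::More;

sub save_data {
    my ($var, $title) = @_;
    my ($fh, $filename) = tempfile( SUFFIX => '.log', UNLINK => 0 );
    print $fh ref $var ? Dumper($var) : $var;   # explain-like dumping
    close $fh;
    diag qq{File: '$title' $filename};   # for the test runner to collect
}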

Gabor


Re: getting more details from a test script

2010-04-05 Thread Gabor Szabo
On Mon, Apr 5, 2010 at 1:58 PM, Joe McMahon mcma...@ibiblio.org wrote:
 On Mon, Apr 5, 2010 at 1:34 AM, Gabor Szabo szab...@gmail.com wrote:

 I could write this:

 $mech-content_like(qr{regex}) or diag $mech-content;

 but then I get all the content in the TAP stream making it quite unreadable.
 Yep. That's why we implemented the snapshot plugin for
 WWW::Mechanize::Pluggable. It automatically takes snapshots to a
 directory you set up at the start of the test script and diags() out a
 message saying where the snapshot is.

 It would be possible to set this up so that the snapshots were enabled
 only if (say) an environment variable was set.


I'll take a look at this.

Gabor


Makefile:1457: *** multiple target patterns. Stop.

2010-03-27 Thread Gabor Szabo
Though this has already been resolved, let me send it here for the
next person who might fall into this trap.

I got the error message "Makefile:1457: *** multiple target patterns.
Stop." when I was trying to add a Makefile.PL to Bugzilla.

The problem was that Bugzilla has its modules in the root directory
and there is a lib/ subdirectory that is the target location of 3rd-party
CPAN modules installed for Bugzilla.
Specifically I had a bunch of lib/man/man3/*.3pm files.

So when I tried to run

perl Makefile.PL
make

I got the above error message.

Once I removed those files the standard module installation process went well.

Gabor


Re: Interesting test failure under prove --merge

2009-12-06 Thread Gabor Szabo
On Thu, Dec 3, 2009 at 11:25 PM, Michael Peters
michael00pet...@gmail.com wrote:
 On 12/03/2009 04:18 PM, David Golden wrote:

 On Thu, Dec 3, 2009 at 3:25 PM, Gabor Szabo szab...@gmail.com wrote:

 2 2 : 2 4 : 4 2 :   E r r o r :   C a n n o t   s e t   l o c a l e
 t o   l a n g u a g e   A r a b i c .
 ok 3 - ->change_locale(ar)
ok 4 - ->change_locale(de)

 Extra space in the output before ok 3?

 Yeah, looks like your error message is somehow ending with \n, which
 throws off the TAP when merged with the same stream. TAP tries to ignore
 things it doesn't recognize (which is why you can do merge in the first
 place) but if those things actually mess up the generated TAP there's not
 much the parser can do.

 If it's acceptable for these tests to emit extra stuff like this when
 failing, it might be best to try and catch it and output it via diag() (or
 even better Test::Most explain()).

For now I am capturing the stderr and then reprinting it via diag(), but
I opened a ticket for Padre that we should find out why Windows emits those
warnings and how to catch them in the application.

thanks for your help

Gabor


Interesting test failure under prove --merge

2009-12-03 Thread Gabor Szabo
I encountered an interesting failure that only appears when I run
prove with --merge on Windows.
This is running in the Padre Stand Alone, which means Strawberry Perl
October 2009, perl 5.10.1.
The same thing on Linux works well, though I have not compared the versions
of Test::Harness.


C:\work\padre\Padre> prove -b t\15-locale.t
t\15-locale.t .. 1/7 2 1 : 3 2 : 2 0 :   E r r o r :   C a n n o t   s
e t   l o c a l e   t o   l a n g u a g e   A r a b i c .
t\15-locale.t .. ok
All tests successful.
Files=1, Tests=7,  3 wallclock secs ( 0.02 usr +  0.04 sys =  0.06 CPU)
Result: PASS


C:\work\padre\Padre> prove --merge -b t\15-locale.t
t\15-locale.t .. Failed 1/7 subtests

Test Summary Report
---
t\15-locale.t (Wstat: 0 Tests: 6 Failed: 0)
  Parse errors: Tests out of sequence.  Found (4) but expected (3)
Tests out of sequence.  Found (5) but expected (4)
Tests out of sequence.  Found (6) but expected (5)
Tests out of sequence.  Found (7) but expected (6)
Bad plan.  You planned 7 tests but ran 6.
Files=1, Tests=6,  3 wallclock secs ( 0.02 usr +  0.05 sys =  0.07 CPU)
Result: FAIL


The test file can be seen here:

http://padre.perlide.org/trac/browser/trunk/Padre/t/15-locale.t?rev=9453

or to check it out here:
svn co -r9453 http://svn.perlide.org/padre//trunk/Padre/t/15-locale.t


In r9454 I committed a work-around capturing the stderr using Capture::Tiny
at least for the case that fails.

regards
   Gabor


Re: Interesting test failure under prove --merge

2009-12-03 Thread Gabor Szabo
C:\work\padre\Padre> svn up -r9453
U    Makefile.PL
U    t\15-locale.t
U    Changes
Updated to revision 9453.

C:\work\padre\Padre> prove --merge -bv t\15-locale.t
t\15-locale.t ..
1..7
ok 1 - The object isa Padre
ok 2 - The object isa Padre::Wx::Main
2 2 : 2 4 : 4 2 :   E r r o r :   C a n n o t   s e t   l o c a l e
t o   l a n g u a g e   A r a b i c .
 ok 3 - ->change_locale(ar)
ok 4 - ->change_locale(de)
ok 5 - ->change_locale(en-au)
ok 6 - ->change_locale()
ok 7 - no warnings
Failed 1/7 subtests

Test Summary Report
---
t\15-locale.t (Wstat: 0 Tests: 6 Failed: 0)
  Parse errors: Tests out of sequence.  Found (4) but expected (3)
Tests out of sequence.  Found (5) but expected (4)
Tests out of sequence.  Found (6) but expected (5)
Tests out of sequence.  Found (7) but expected (6)
Bad plan.  You planned 7 tests but ran 6.
Files=1, Tests=6,  4 wallclock secs ( 0.01 usr +  0.05 sys =  0.06 CPU)
Result: FAIL

C:\work\padre\Padre> prove -bv t\15-locale.t
t\15-locale.t ..
1..7
ok 1 - The object isa Padre
ok 2 - The object isa Padre::Wx::Main
2 2 : 2 4 : 5 8 :   E r r o r :   C a n n o t   s e t   l o c a l e
t o   l a n g u a g e   A r a b i c .
 ok 3 - ->change_locale(ar)
ok 4 - ->change_locale(de)
ok 5 - ->change_locale(en-au)
ok 6 - ->change_locale()
ok 7 - no warnings
ok
All tests successful.
Files=1, Tests=7,  4 wallclock secs ( 0.02 usr +  0.06 sys =  0.08 CPU)
Result: PASS


On Thu, Dec 3, 2009 at 10:18 PM, Ovid publiustemp-perl...@yahoo.com wrote:
 - Original Message 

 From: Gabor Szabo szab...@gmail.com

 I encountered an interesting failure that only appears when I run
 prove with --merge on Windows.
 This is running in the Padre Stand Alone which means Strawberry
 October 2009. perl 5.10.1
 Same thing on Linux works well though I have not compared the versions
 of Test::Harness.


 C:\work\padre\Padre> prove -b t\15-locale.t
 t\15-locale.t .. 1/7 2 1 : 3 2 : 2 0 :   E r r o r :   C a n n o t   s
 e t   l o c a l e   t o   l a n g u a g e   A r a b i c .
 t\15-locale.t .. ok
 All tests successful.
 Files=1, Tests=7,  3 wallclock secs ( 0.02 usr +  0.04 sys =  0.06 CPU)
 Result: PASS


 C:\work\padre\Padre> prove --merge -b t\15-locale.t
 t\15-locale.t .. Failed 1/7 subtests

 Test Summary Report
 ---
 t\15-locale.t (Wstat: 0 Tests: 6 Failed: 0)
   Parse errors: Tests out of sequence.  Found (4) but expected (3)
                 Tests out of sequence.  Found (5) but expected (4)
                 Tests out of sequence.  Found (6) but expected (5)
                 Tests out of sequence.  Found (7) but expected (6)
                 Bad plan.  You planned 7 tests but ran 6.
 Files=1, Tests=6,  3 wallclock secs ( 0.02 usr +  0.05 sys =  0.07 CPU)
 Result: FAIL

 Hi Gabor,

 Can you rerun that test in verbose mode?  Is the failure still there?  If so, 
 can you post the output?  We've had problems with --merge in the past because 
 of how it works, but I'm curious to know what this issue is.


 Cheers,
 Ovid
 --
 Buy the book         - http://www.oreilly.com/catalog/perlhks/
 Tech blog            - http://use.perl.org/~Ovid/journal/
 Twitter              - http://twitter.com/OvidPerl
 Official Perl 6 Wiki - http://www.perlfoundation.org/perl6




Re: Where are the detailed error messages in TAP?

2009-12-02 Thread Gabor Szabo
On Tue, Dec 1, 2009 at 12:36 AM, Michael Peters mpet...@plusthree.com wrote:
 On 11/30/2009 05:25 PM, Gabor Szabo wrote:

 One thing I seem to be missing from the failing tests are the details
 of the failures.  Shouldn't that be part of the TAP stream
 and displayed on Smolder? It is not included in the archive file
 that was generated.

 http://search.cpan.org/~wonko/Smolder-1.40/lib/Smolder/Manual.pm#Full_Diagnostic_Messages

 Errors and diagnostics by default go out on STDERR and TAP is on STDOUT.
 This presents problems because there's no way to merge the 2 reliably after
 the fact. They need to come out on the same stream. See

If only I was reading my own blog, or just remembered what I was already
told on this list several times...

thanks!

Gabor


Where are the detailed error messages in TAP?

2009-11-30 Thread Gabor Szabo
I just set up a smoke-bot for Padre and started to push the results to
http://smolder.plusthree.com/app/public_projects/smoke_reports/11

One thing I seem to be missing from the failing tests are the details
of the failures.  Shouldn't that be part of the TAP stream
and displayed on Smolder? It is not included in the archive file
that was generated.

Gabor


Re: Discourage use_ok?

2009-11-09 Thread Gabor Szabo
On Mon, Nov 9, 2009 at 12:41 PM, Philippe Bruhat (BooK)
philippe.bru...@free.fr wrote:
 On Mon, Nov 09, 2009 at 02:24:11AM -0800, Ovid wrote:
 --- On Mon, 9/11/09, Ovid publiustemp-perl...@yahoo.com wrote:

  From: Ovid publiustemp-perl...@yahoo.com

  The *only* use I've ever had for use_ok() has been in a
  t/00-load.t test which attempts to load all modules and does
  a BAIL_OUT if it fails.  I'm sure there are other use
  cases, but if that's the only one, it seems a very, very
  slim justification for a fragile code.

 Thinking about this more, what about a compile_ok()?  It merely
 asserts that the code compiles (in an anonymous namespace, perhaps?),
 but doesn't make any guarantees about you being able to even use the
 code -- just that it compiles.  It wouldn't need to be done at BEGIN
 time, nor would it necessarily require a or die after it, since its
 availability is not guaranteed (though that would be problematic as
 cleaning a namespace is also fragile).

 Just tossing out ideas here.


 compile_ok() would certainly be interesting with scripts shipped with
 a module, that usually have very little meat that needs testing (since
 most of the work is done in the modules), but that one would at least
 check that they compile.

If I understand correctly, this would either do

perl -c blib/lib/Module/Name.pm
or
perl -Iblib/lib -MModule::Name -e1


I think there are several Test:: modules on CPAN that do this, and I know I
have implemented something similar at least 4 times at various degrees
of brokenness.  I am sure others have similar code.

Having a compile_all_pm_ok() would also be useful.
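A back-of-the-envelope sketch of the second form as a test helper
(hypothetical, not an existing Test:: module):

use Test::More;

sub compile_ok {
    my ($file) = @_;
    my $out = qx{$^X -Iblib/lib -c $file 2>&1};   # perl -c reports on STDERR
    like( $out, qr/syntax OK/, "compiles: $file" );
}

compile_ok('blib/lib/Module/Name.pm');
done_testing();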

Gabor


Re: Making TODO Tests Fail

2009-07-13 Thread Gabor Szabo
On Mon, Jul 13, 2009 at 4:56 PM, Ovid publiustemp-perl...@yahoo.com wrote:

 We currently have over 30,000 tests in our system.  It's getting harder to 
 manage them.  In particular, it's getting harder to find out which TODO tests 
 are unexpectedly passing.  It would be handy to have some option to force 
 TODO tests to die or bailout if they pass (note that this behavior MUST be 
 optional).

 Now one might think that it would be easy to track down missing TODOs, but 
 with 15,000 tests aggregated via Test::Aggregate, I find the following 
 unhelpful:


  TODO passed:   2390, 2413

 If those were in individual tests, it would be a piece of cake to track them 
 down, aggregated tests get lumped together.  Lacking proper subtest support 
 (which might not mitigate the problem) or structured diagnostics (which could 
 allow me to attach a lot more information to TODO tests) at the end of the 
 day, I need an easier way of tracking this.

I think it would be better to have a tool (Smolder) be able to display
various drill-downs from the aggregated test report, e.g.:
a list of all the TODOs,
a list of all the TODOs that pass,
etc.

That way you don't need to run your test suite again with an option but
can get this information from the report of the regular run.

Gabor


Re: Making TODO Tests Fail

2009-07-13 Thread Gabor Szabo
On Mon, Jul 13, 2009 at 5:10 PM, Michael Peters mpet...@plusthree.com wrote:
 Gabor Szabo wrote:

 I think it would be better to have a tool (Smolder) be able to display
 various drill-downs from the aggregated test report.

 If you want to see what Smolder would do to your tests, create a TAP archive
 and then you can upload it to the Junk project at
 http://smolder.plusthree.com (here's the upload form:
 http://smolder.plusthree.com/app/public_projects/add_report/9)

 Then you can click the OK or FAILED boxes for each stream and it will
 expand them out so you can see the details. TODOs are light green. So find
 the one you might be interested in, click it and then see its details.


AFAIK due to the number of tests it won't work well in Smolder - but I
have not tried it.
I was referring to a future version of it ;-)

aka another feature request...

Gabor


My Smolder wish-list

2009-07-01 Thread Gabor Szabo
http://szabgab.com/blog/2009/07/1246433080.html

Gabor


prove is not generating archive when test bails out.

2009-06-29 Thread Gabor Szabo
When running tests with "prove -a file.tar.gz" it nicely creates the
archive file, but if the test bails out the archive file is not created
at all.

Is this a feature or a bug?

Gabor


Combining TAP with more extensive logging of raw data

2009-06-10 Thread Gabor Szabo
I am trying to switch some home grown test scripts to test scripts
using Test::* modules and TAP.

There is one major issue and I wonder how others deal with it.

The home grown test scripts include raw data in their reports.

e.g. when testing a web service we send an XML and receive another XML.
Both of these XMLs are recorded in the raw log file. This helps a lot
in tracking down issues.

So now that I am switching reporting to TAP how do I log the raw data?

So far I could only think of either creating a log file with the raw data or
printing the raw data using diag().
In the former case I lose the single-result-file advantage and I'll have
to somehow maintain the connection between the TAP output and the log file.

In the latter case printing so much with diag() might quite easily clutter
the output, and if there are newlines in the raw data they might even break
the TAP output.

How do you deal with a similar situation?

Gabor


Re: Combining TAP with more extensive logging of raw data

2009-06-10 Thread Gabor Szabo
On Wed, Jun 10, 2009 at 5:57 PM, Andy Armstrong a...@hexten.net wrote:
 On 10 Jun 2009, at 14:07, Michael Peters wrote:

 Gabor Szabo wrote:

 How do you deal with similar situation ?

 Test::More::Diagnostic lets you output structured YAML with your tests.
 Not all of the tools in the chain understand this YAML, but those that don't
 should ignore it equally. It's part of the spec (at least loosely) so it
 will be supported going forward.


 Yeah, it doesn't currently support arbitrary diagnostic blocks though -
 which would be my fault :)

 I can probably make a release that does within a few days if that's the kind
 of thing that Gabor needs.

I am quite confused and I am not sure what I really want :-)

I recall that we talked about a possibility to emit YAMLish, but the last thing
I remember was the discussion about lower- or upper-case names...
Was there any progress on that subject?


Anyway, here is another thing that I found.
The test script fetches a few rows from a database and prints out a
nicely formatted table of the values using high-quality ASCII art:

1  |  3  | foo
1  |  7  | bar

I can just print the array holding this using explain \@data but that
will lead to an uprising. The people who need to see this are Java and
Matlab programmers.
Any other YAML-like output will still be inferior to that nicely formatted
table, but I hope I'll be able to hook up some formatter to display it
nicely. Preferably inside Smolder, as that's what we are going to use to
collect the reports.

Gabor
ps. This is just my wish list here :-)


Fwd: [TIP] Announcement: Racetrack 1.0 repository

2009-05-25 Thread Gabor Szabo
FYI

-- Forwarded message --
From: Nagappan Alagappan nagap...@gmail.com
Date: Tue, May 26, 2009 at 7:19 AM
Subject: [TIP] Announcement: Racetrack 1.0 repository
To: testing-in-pyt...@lists.idyll.org


Hello all,

Racetrack is designed to store and display the results of automated
tests.  At VMware, over 2,000,000 test results have been stored in the
Racetrack repository.  Over 25 different teams use the repository to
report results.  It has a very simple data model, just three basic
tables: ResultSet, which stores information about a set of tests (Product,
Build, etc.); Result, which stores information about the testcase
itself; and ResultDetail, which stores the details of each
verification performed within the test.  ResultDetails also include
screenshots and log files, making it easy for the triage engineer to
determine the cause of the failure.

We are very excited to offer Racetrack to the public as an Open Source
project.  It offers complete visibility on test results to the
organization, much more than Pass/Fail. QA Engineers, Developers, QA
Managers, Project Managers all find it useful to quickly see the
results of Basic Acceptance Tests, available within an hour of the
build completing.  Racetrack Triage Report makes it easy to see the
number of defects found by a set of tests, and the number of failures
caused by Product Changes, and Script failures.   By adding a
reference to your Bugzilla and Build systems, you can easily provide
links directly from Racetrack to a defect or a build information page.
 The Web Services API is already part of the package, and SilkTest and
Java APIs will be added shortly.
Thanks
Nagappan

--
Linux Desktop (GUI Application) Testing Project - http://ldtp.freedesktop.org
http://nagappanal.blogspot.com

___
testing-in-python mailing list
testing-in-pyt...@lists.idyll.org
http://lists.idyll.org/listinfo/testing-in-python


prove with line numbers

2009-05-18 Thread Gabor Szabo
Is there a way to ask prove to print out the row number of each ok() call?

I have a test script that blows up after 18 calls to ok(), somewhere inside
the application, printing only the line number in the application.
I can search for the title of the ok() call, but it would be nice if I
could ask prove (maybe via the -d flag) to print the line number of each
ok() call.

Gabor


FTP server for testing

2009-05-06 Thread Gabor Szabo
Is there an FTP server module that could be used for light-weight
testing of an application that, among other things, also fetches a file
from an FTP server?

I should be able to set it up simply, without root access, and run it
on a high port, allowing access based on some username/password
not related to the real system underneath.

In my quick search I could not find any.

Gabor


OT: lists.cpan.org

2009-03-24 Thread Gabor Szabo
I know there are people who will complain about the fact that I did it
or the way I did it; I am getting really used to that.

So I took the liberty of copying all the data from lists.cpan.org (aka
lists.perl.org) to the Perl Foundation wiki.

As the data was too big to fit on one page, I split it up into 6 pages.

Now people should go and manually clean up and reorganize
the data.

I'll ask the webmaster @ perl.org if they agree to replace the current
database with a link to the wiki.

http://www.perlfoundation.org/perl5/index.cgi?mailing_lists

regards
   Gabor


SocialText WTF

2009-03-24 Thread Gabor Szabo
I was editing the mailing_lists page on the Perl Foundation wiki when it
complained a bit and told me the service is currently not available. After
that, *it saved its own error message as the next revision of the page*.

http://www.perlfoundation.org/perl5/index.cgi?action=revision_view;page_name=mailing_lists;revision_id=20090324143853

At least it gave me an option to restore the previous version.

Gabor


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:04 PM, Evgeny evgeny.zis...@gmail.com wrote:
 I have seen the page :
 http://perl-qa.hexten.net/wiki/index.php/Why_can%27t_a_program_count_its_own_tests

 And I still don't understand why a perl program can't count its tests and
 then, when all the tests are done, write something like:

 I ran 45976347563873456 tests and 587643873645 of then failed and
 234598634875634 of them passed.

 (don't mind that the numbers don't add up)


 Then you don't really need to count the number of tests beforehand; you
 count them as you go, and will only know the final number of tests at the
 very end.


They can, just say

use Test::More 'no_plan';


The problem is: what happens if you constantly
get 100 success reports while in fact you had 300
tests, just your test script exited early?

e.g. because you added an exit; in the middle to short-circuit
your test run while you were debugging some failing test.
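That is exactly what an up-front plan catches. A minimal illustration:

use Test::More tests => 3;

ok( 1, 'first' );
exit;               # a debugging shortcut accidentally left in
ok( 1, 'second' );  # never runs; the harness fails: planned 3, ran 1
ok( 1, 'third' );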


Gabor
http://szabgab.com/test_automation_tips.html


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:40 PM, Evgeny evgeny.zis...@gmail.com wrote:
 If my script ended early, maybe even because of a core dump ... then I won't
 care. It's just another case of a failed test that can't be reported by
 Test::More, but a human looking at the screen will hopefully understand what
 happened.

Human?

Why would a human look at a test report that says everything is ok?

Gabor

Perl 6 Tricks and Treats
http://szabgab.com/perl6.html


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:45 PM, Evgeny evgeny.zis...@gmail.com wrote:
 Gabor,
 Since you are in the field of testing - then you probably know about the
 other frameworks in other languages. Specifically what Ruby's Cucumber is
 about.
 I tried writing something similar in Perl, using Test::More no less. But I
 believe you are a far better perl programmer than me, and I would love to
 hear your comments -- if you agree to take a look.
 The project (one small perl file really) is currently here:
 http://github.com/kesor/p5-cucumber/

 Just thought that it would be interesting to you even if you don't have time
 to help out a little bit.

Well, there are a few people on this list (maybe all of them?) who are far more
competent than I am both in testing and Perl.
I am sure some of them will be glad to take a look.

I'll do as well later on.

Gabor


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:53 PM, Evgeny evgeny.zis...@gmail.com wrote:
 I actually put a link to the FAQ at the very first mail I sent.
 It does not address my questions, it gives examples that say we can't count
 tests ahead of time, its impossible. But I just want you to change the
 approach from ahead of time into realtime or something ... like all the
 other testing frameworks do it.

There is work in progress to let people say during their test code
"here I have 5 more tests" instead of planning all of them ahead;
that might address the issue you see.

Besides that, I'd be glad to see which framework solves this problem, and how.

Gabor


Re: Testing scripts with expected STDOUT and STDERR in external files

2009-02-21 Thread Gabor Szabo
On Fri, Feb 20, 2009 at 11:30 PM, David E. Wheeler da...@kineticode.com wrote:
 On Feb 20, 2009, at 1:23 PM, Gabor Szabo wrote:

 I wonder if there are modules out there that already do this?
 I could not find any that would fit my needs.

 Test::Output?

  http://search.cpan.org/perldoc?Test::Output

 If it doesn't capture output from other programs, have a look at
 Capture::Tiny.

I looked at both Test::Output and Test::Trap, but with both of them I
would need to do lots of other things.
Capture::Tiny is cute.

Maybe I need to put together something using some of those, though
I think I'll start just by putting what I have now in a module.

What should I call it, though?
Test::Exec ?
Test::Execute ?
Test::External ?

Maybe I should extend Test::Cmd or Test::Command.

Test::Commands ?

Gabor


Testing scripts with expected STDOUT and STDERR in external files

2009-02-20 Thread Gabor Szabo
Lately I need to test many small utilities written in various languages.

For the simplest cases I just need to run the utilities with a set of
input and check if the output is the same as expected.

For this it is enough to run

system "path/to/utility < in.txt > out.txt 2> err.txt";

and then compare the results to the expected.out and expected.err files.
To facilitate this I generate the input and the expected output files
and put them next to the utility:


path/to/utility
path/to/utility.out
path/to/utility.err
path/to/utility.in

In case I need to test the same utility with various inputs I keep

path/to/utility
path/to/utility.2.out
path/to/utility.2.err
path/to/utility.2.in


Sometimes these utilities need to be executed with
some other tool that itself needs parameters. In such cases I use:

system "path/to/interpreter param param path/to/utility < in.txt > out.txt
2> err.txt";

In most cases all the utilities in one package need to be executed the same
way.


I wonder if there are modules out there that already do this?
I could not find any that would fit my needs.
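Until then, a sketch of the comparison step (a hypothetical helper, written
down only to make the setup concrete):

use Test::More;

sub run_and_compare {
    my ($utility) = @_;
    system qq{$utility < $utility.in > out.txt 2> err.txt};
    is( slurp('out.txt'), slurp("$utility.out"), "$utility STDOUT" );
    is( slurp('err.txt'), slurp("$utility.err"), "$utility STDERR" );
}

sub slurp {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    local $/;   # slurp mode
    return scalar <$fh>;
}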


Gabor


explain() of Test::Most and that of Test::More are different

2009-02-15 Thread Gabor Szabo
It is obvious, but it would be nice if it would not happen:

use Test::More;
diag explain $data;

works nicely; then if I switch to

use Test::Most;
diag explain $data;

it stops printing, as it now requires TEST_VERBOSE.


so Test::Most is not a drop-in replacement for Test::More.


Gabor


Re: explain() of Test::Most and that of Test::More are different

2009-02-15 Thread Gabor Szabo
On Mon, Feb 16, 2009 at 9:17 AM, Ovid publiustemp-perl...@yahoo.com wrote:
 - Original Message 

 From: Gabor Szabo szab...@gmail.com

 It is obvious but would be nice if would not happen

 use Test::More;
 diag explain $data;

 works nicely, then if I swicth to

 use Test::Most;
 diag explain $data;

 it stops printing as it now requires TEST_VERBOSE


 so Test::Most is not a drop-in replacement for Test::More.

 In Test::Most, 'explain' shows the output all by itself.  You don't need the 
 diag() in front of it.  At the time I wrote it, it was backwards-compatible 
 because Test::More didn't have it.

Yes, I noticed that it works differently, but now they don't work
together and that is very sad!

Really guys, you should find a solution that will satisfy both of you.


 I argued that most programmers would just want to do this:

  explain $thing;

 Schwern argued that I was now trying to do too much with explain (call Dumper 
 and output) and that those should be separate.

 While he's right that explain() would be doing more than one thing, I feel 
 that optimizing for the common case is what was important here (you just want 
 your data dumped), but Schwern and I couldn't agree, hence the 
 incompatibility.


Probably neither of you will like it, but what about having two different
names then?

For the sake of world peace and the sanity of users I hope it is not
too late to change it.


What about having a function dig() [1] that will dump if needed but
not print, and an explain() that will replace note(dig())?  (I think this
is the same as what explain() is in Test::Most.)

Please restore the Test::Simple -> Test::More -> Test::Most drop-in
replacement chain!

I offer both of you a beer or whatever your favorite drink is the next
time we meet!


Gabor

[1] look, it's shorter!


Re: done_testing()

2009-02-09 Thread Gabor Szabo
On Mon, Feb 9, 2009 at 10:48 AM, Philippe Bruhat (BooK)
philippe.bru...@free.fr wrote:
 On Tue, Feb 03, 2009 at 08:21:33PM -0800, Michael G Schwern wrote:

 Finally, this makes it now possible to build up the test plan as you go.  I'd
 like to put first order support into Test::Builder and Test::More for it, but
 for the moment this will work:

   use Test::More;

   my $tests = 2;
   pass;
   pass;

   $tests += 1;
   pass;

   done_testing($tests);

 Just a side note: this has always been possible, as I've seen people do the 
 following:

my $tests;
use Test::More tests => $tests;

BEGIN { $tests += 2 };
ok( ... );
ok( ... );
BEGIN { $tests += 1 };
ok( ... );

 I like the plan add => $n interface a lot, especially with DrHyde's
 suggestion to use it for optional tests. That may look better than
 skipped tests, but I guess it's mostly a difference in the message one
 wants to send: skipped tests include a reason why tests were skipped.

 It may also make the plan computation much easier for complicated test
 suites where one tests a list of cases with a varying number of tests
 for each case, and doesn't want to put the hairy computation in a map {}
 at the plan() stage. Now that I think about it, this latter case is
 probably a better use case for plan add.



I have been using the BEGIN {} code for some time, since I learned it on
the perl-qa list, but the plan add is much better IMHO, especially if
it can also report an incorrect number of tests in each section.

Gabor


Re: running tests with an arbitrary interpreter using Test::Harness

2009-02-08 Thread Gabor Szabo
On Sat, Feb 7, 2009 at 9:26 PM, Michael G Schwern schw...@pobox.com wrote:
 Gabor Szabo wrote:
 With prove I can say

 prove --exec "$ENV{PARROT_DIR}/parrot
 $ENV{PARROT_DIR}/languages/rakudo/perl6.pbc" t/01.t

 and it will run

 $ENV{PARROT_DIR}/parrot $ENV{PARROT_DIR}/languages/rakudo/perl6.pbc t/01.t

 how can I achieve the same thing with make test or Build test ?

 You'll get the most control, and the least headaches, if you just override the
 test target.  Even if you can hack Test::Harness into running Rakudo it has so
 many built in Perl5-isms that it'll keep biting you all down the line.

 In Module::Build it's simple, override ACTION_test().

 sub ACTION_test {
     my $self = shift;
     $self->depends_on('code');

     my $tests = $self->find_test_files;

     # XXX Throw in a check that PARROT_DIR is set

     my $parrot = "$ENV{PARROT_DIR}/parrot";
     my $rakudo = "$ENV{PARROT_DIR}/languages/rakudo/perl6.pbc";

     # XXX Throw in some checks that the above actually exists.

     system("prove", "--exec", "$parrot $rakudo", @$tests);

     return $? == 0 ? 1 : 0;
 }

 I believe you can avoid the override and set the use_tap_harness property
 and then just feed TAP::Harness arguments (which are very much like prove's)
 in with tap_harness_args.

 In MakeMaker you override test_via_harness().

 package MY;
 sub test_via_harness {
     my($self, $perl, $tests) = @_;

     # XXX Throw in a check that PARROT_DIR is set

     my $parrot = "$ENV{PARROT_DIR}/parrot";
     my $rakudo = "$ENV{PARROT_DIR}/languages/rakudo/perl6.pbc";

     # XXX Throw in some checks that the above actually exists.

     my $command = $self->quote_literal("$parrot $rakudo");
     return qq[\tprove --exec $command @$tests];
 }


By the time I saw this I had already used a shimming method Alias suggested:
renaming the test files to something.6t and adding something.t as a simple
script that runs the something.6t script.
See http://search.cpan.org/dist/Perl6-Conf for how it works now.

For the next release or so I'll check out your suggestion too.

regards
   Gabor


running tests with an arbitrary interpreter using Test::Harness

2009-02-07 Thread Gabor Szabo
I'd like to start uploading experimental Perl 6 modules to CPAN and
make them easy for anyone to install.

I think the only issue I have right now is that I don't know how to
force make test to use Rakudo for running the test suite.

For now I'll expect the user to have $ENV{PARROT_DIR} point to the checked
out version of parrot trunk.


With prove I can say

prove --exec "$ENV{PARROT_DIR}/parrot
$ENV{PARROT_DIR}/languages/rakudo/perl6.pbc" t/01.t

and it will run

$ENV{PARROT_DIR}/parrot $ENV{PARROT_DIR}/languages/rakudo/perl6.pbc t/01.t

how can I achieve the same thing with make test or Build test ?

I found an environment variable in Test::Harness called HARNESS_PERL.
If I set it to $ENV{PARROT_DIR}/parrot
$ENV{PARROT_DIR}/languages/rakudo/perl6.pbc
before I call make test, it then runs

$ENV{PARROT_DIR}/parrot $ENV{PARROT_DIR}/languages/rakudo/perl6.pbc -w t/01.t

that is almost right but it adds a -w option that rakudo does not like.

So the first question would be how to get rid of that -w?

The second question is how I can set this up in Makefile.PL or Build.PL.

Once this is done we can start uploading and installing Perl 6 code with
our already existing tools. The user will only need to set PERL6LIB to the
correct path.

regards
   Gabor


Re: done_testing()

2009-02-05 Thread Gabor Szabo
On Wed, Feb 4, 2009 at 4:35 PM, Ovid publiustemp-perl...@yahoo.com wrote:
 - Original Message 

 From: Michael G Schwern schw...@pobox.com

 First of all, thank you!  This is fantastic work and I'm sure it will make a 
 lot of people happy.

++


 Thoughts on first order support:

 use Test::More;

 plan(3);
 pass;
 pass;
 pass;
 plan(2)
 pass;
 pass;
 done_testing() # optional

 Then, you can incrementally build a plan for those who want it and it seems 
 almost backwards compatible (since done_testing() wouldn't be required).

 The problem is that internally, TB will see that plan() has been called and 
 will die if a plan has been called twice.

Why not call it something else then?
plan_more(2)   or   add_plan(2)

You might want to report an error if both plan(2) and add_plan(2) were
called in the same file.


BTW what about

use Test::More;
pass;
pass;
# done_testing() # not being called at all.

Is this a failure now, as there was no plan, not even a no_plan?

Gabor


cpan.org is out of date?

2008-12-25 Thread Gabor Szabo
I wonder why http://www.cpan.org/modules/02packages.details.txt.gz
still has a timestamp of 2008-12-17 01:27?

$ ping www.cpan.org
PING cpan.pair.com (66.39.76.93)

Oh and

$ wget http://www.perl.org/

does not answer after trying from two locations.

Gabor


Module::Install and Test::NoWarnings - require-d Test::NoWarnings should not hide warnings.

2008-12-23 Thread Gabor Szabo
In the Padre project we encountered some strange behavior.
We recently switched to using Module::Install and we also use Test::NoWarnings
in our tests.

It seems Module::Install loads all the modules listed as requires or
test_requires during the execution of Makefile.PL.


This brought up the question what happens when Test::NoWarnings is
require-d and
not use-d. As I can see from the code, the import() call is what
generated the
additional test in the END block. So that part behaves nicely.
On the other hand Test:::NoWarnings still hides all the warnings in such case.
Specifically it hides the warnings Makefile.PL generates when some of
the modules
are missing or are at a version lower than required.

So I think - besides the fact that M::I probably should not load the
required modules
into memory - require Test::NoWarnings should not hide warnings.


As a workaround now, I added the following lines to the beginning of
our Makefile.PL

eval {
    require Test::NoWarnings;    # loading it installs a __WARN__ handler...
    $SIG{__WARN__} = 'DEFAULT';  # ...so restore the default handler to keep warnings visible
};



regards
   Gabor


Re: Public Humiliation and Kwalitee

2008-10-31 Thread Gabor Szabo
On Fri, Oct 31, 2008 at 3:20 PM, Barbie [EMAIL PROTECTED] wrote:
 On Thu, Oct 30, 2008 at 01:06:21AM +0100, Philippe Bruhat (BooK) wrote:
 and that the cpantesters tools would ignore them.

 isnt("CPAN Testers", "CPANTS");

 You're confusing the issue. Please do not bring CPAN Testers into this.

why, we have not bashed them for over a month now ;-)

Gabor


Re: Public Humiliation and Kwalitee (was Re: Tested File-Find-Object-0.1.1 with Class::Accessor not installed)

2008-10-23 Thread Gabor Szabo
 http://cpants.perl.org/highscores/hall_of_shame

It says Not Found

thanks domm

  Gabor


Re: New CPAN Testers Reports site

2008-09-23 Thread Gabor Szabo
On Mon, Sep 22, 2008 at 7:08 AM, David E. Wheeler [EMAIL PROTECTED] wrote:
 On Sep 20, 2008, at 00:29, Barbie wrote:

 See http://use.perl.org/~barbie/journal/37496 for all the gory details.

 Barbie++ # Thank you!

More Barbie++


BTW you could double the link entries in the
PAUSEID.html file so they will be:

<link rel="alternate" type="application/rss+xml" title="CPAN Testers RSS" href="PAUSEID.rss" />
<link rel="alternate" type="application/rss+xml" title="CPAN Testers RSS (No-PASSes)" href="PAUSEID-nopass.rss" />

that way Firefox will offer both feeds when you click on the orange
radiation sign in the address bar

Gabor


Can't locate version/vpp.pm in @INC

2008-09-11 Thread Gabor Szabo
Can someone tell me what causes this failure?

http://www.nntp.perl.org/group/perl.cpan.testers/2008/09/msg2187300.html

the latest Module::Inspector (1.05) is installed
http://search.cpan.org/dist/Module-Inspector/
and that module has no test failures at all.

Gabor


Re: passing the baton onwards

2008-09-06 Thread Gabor Szabo
On Sat, Sep 6, 2008 at 3:15 AM, brian d foy [EMAIL PROTECTED] wrote:
 I'll do the work to handle the ones the authors give up without a
 maintainer, and my first idea was that a virtual user than we
 advertised as free modules (free as in kittens) would move modules
 int willing homes faster. But then, maybe not.

Use case 1:
  I have two modules I would like to give up.
  Occasionally I might still update them (e.g. if someone sends me a good patch)
  but in general I'd like to put them in the "take this module" basket.
  IMHO this means the module needs to stay under my PAUSE id - or it will
get back there
  if I upload a new version - but it should be visible that
  this module needs a new primary maintainer.

Use case 2: (quite similar)
  I see a module that seems to be unmaintained and needs a fix, but something
  I don't really want to maintain.
  I can ask the author, and if she is not responsive, ask you to take it over.
 Once I get the module I upload my fix, but I'd also like to *easily* set
 the "this module needs a new primary maintainer" flag.

Use case 3:
  Someone passes away or just disappears for a long period.
  The CPAN maintainers should set the "this module needs a new
primary maintainer" flag.


IMHO instead of encouraging people to upload new modules we should
encourage them
to take over existing ones.

Gabor


Re: s/FAIL/welcome basket/

2008-09-06 Thread Gabor Szabo
On Sat, Sep 6, 2008 at 6:33 AM, Eric Wilhelm [EMAIL PROTECTED] wrote:
 Until PAUSE starts doing that, how do you let new authors know about
 cpantesters?  Also note that creation of an account may be separated
 from uploading a module by several years.

IMHO it is easy to add a few lines about the resources to every
message that goes out from PAUSE when you upload a module.

You get those anyway.
Some people might even look at it.

Gabor


Re: Reporting Bugs Where they Belong (was Re: The relation between CPAN Testers and quality)

2008-09-06 Thread Gabor Szabo
On Fri, Sep 5, 2008 at 9:15 PM, David Golden [EMAIL PROTECTED] wrote:

 It will identify it, but testers may or may not see the warning as it
 scrolls by in the CPAN output. Not much I can do about that.  But it
 will suppress the reports at least.

Actually you can send an e-mail about this to the tester.

Gabor


Re: cpantesters - why exit(0)?

2008-09-02 Thread Gabor Szabo
I am personally quite satisfied with the CPAN Testers though I do think that
there is too much noise (false FAIL reports) which means the average CPAN
user who is not familiar with the situation will be misled.

AFAIK Barbie and co are working on a better schema for the database that
soon will improve the displayed information a lot.

So I'll wait for now.

On Wed, Sep 3, 2008 at 12:10 AM, David Cantrell [EMAIL PROTECTED] wrote:
 On Tue, Sep 02, 2008 at 01:23:31PM -0700, chromatic wrote:
 I already know that my distributions don't work if you don't install the
 dependencies

 I'm pretty damned sure that this a straw man.  Can you point at any
 regular tester who *right now* is regularly failing to follow the
 dependency chain?

actually both my recent modules get tons of FAIL reports as they only have
Build.PL and if I understand it correctly the current version of CPANPLUS cannot deal
with that situation. As I understood it will be fixed soon, so I'll shut my
keyboard now.
http://www.cpantesters.org/show/Wx-Perl-Dialog.html
http://www.cpantesters.org/show/Padre.html

On the other hand I don't understand why this was sent:
http://www.nntp.perl.org/group/perl.cpan.testers/2008/08/msg2102300.html

It is trying to test a module on 5.6.2 while the module declares both
in META.yml and in Build.PL
that it needs 5.8.


Gabor


Re: Should MANIFEST go in the repository?

2008-08-20 Thread Gabor Szabo
First of all thanks for Perl::Critic!

I am quite sure this question has the potential of a nice holy war.
Anyway, I am in the "keep MANIFEST in the repo and manually update it" camp.

I think MANIFEST is and should be the tool with which you control what gets into the
distro, and the failures and warnings you might get during continuous
integration are just the way to, err, warn you about forgetting something.

Gabor


Fwd: [TIP] Pythoscope proposal

2008-08-19 Thread Gabor Szabo
Hi,

the following message is cross posted from TIP -
the Python testing mailing list.
http://lists.idyll.org/listinfo/testing-in-python
Archive of this thread starts here:
http://lists.idyll.org/pipermail/testing-in-python/2008-August/000921.html

Where does Perl stand in regard to such tools?
As Perl has been around longer, it has much more legacy code around
with even less testing and probably with a lot less readable code.
Do we have tools helping the new maintainer find their way around?


Gabor
http://szabgab.com/blog.html



-- Forwarded message --
From: Michał Kwiatkowski [EMAIL PROTECTED]
Date: Tue, Aug 19, 2008 at 6:43 AM
Subject: [TIP] Pythoscope proposal
To: tip [EMAIL PROTECTED]


Hi list,

What you'll find below is a proposal for a project we (signed below)
were thinking about for some time now and finally found some time to
work on. We feel test generation is a great idea, and probably the only
viable way for legacy system maintainers to remain sane. So, we
present you a Python code stethoscope, a tool for all codebase
doctors out there. We've already set up a homepage:

http://pythoscope.org

and a Launchpad project:

https://launchpad.net/pythoscope

Let us know what you think by replying on TIP. You can also put your
comments on the wiki if you want. Enjoy. :-)


Signed off by:
 Titus Brown
 Grig Gheorghiu
 Paul Hildebrandt
 Michal Kwiatkowski



=
Our mission statement
=

To create an easily customizable and extensible open source tool that
will automatically, or semi-automatically, generate unit tests for
legacy systems written in Python.



==
Slogan ;-)
==

Pythoscope. Your way out of The (Lack of) Testing Death Spiral[1].



==
Milestones
==

Milestones listed below are there to give you a general idea about
where we stand and where we want to go. Having said that, we plan
to work on the system the agile way, with requirements fleshing out
(and, undoubtedly, numerous problems popping up) as we go. We
definitely want to keep our goals realistic and want to quickly arrive
at the point where our work will be helpful to at least some of the real
projects out there. We hope to work closely with the Python testing
community in order to keep the project on the right track.

Rather tentative schedule for milestones follows. We want to complete
milestone 2 pretty quickly, to start working with code as soon as
possible. Our plan is to complete milestone 6 in about a month and
start working on, what now looks like the hardest problem, side
effects.

Milestone 1 (The proposal): done
Milestone 2 (Architecture): August 20th
Milestone 3 (Static analysis): August 31st
Milestone 4 (Dynamic analysis): September 7th
Milestone 5 (Setup  teardown): September 14th
Milestone 6 (Side effects): September 21st


Milestone 1: Write a proposal and introduce the project to the Python community.
-

At the time of this writing, this milestone has just been completed. :-)


Milestone 2: Decide on an initial architecture.
-

In terms of architecture I basically see it divided into two parts.
The first part's responsibility is to collect and analyze information
about the legacy code and store it on disk. After that the second
component jumps in and uses this information to generate unit tests.
This separation is nice in many ways. First of all, it clearly
isolates responsibilities. Second, it allows us to rerun the parts
independently. So, whether we want the tool to gather new information
from recently changed source code, or start from scratch with unit
test stubs for some old class, we can do it without touching the other
part of the system.

This separation should be mostly understood at the conceptual level.
Both parts will surely share some of the library code and they may
even end up being invoked with the same script, using appropriate
command line flag. The distinction is important, because we may end up
using the relevant information for other things than unit test
generation. Like, for example, powerful source code browser, debugger,
or a refactoring tool. This is possible, but not certain, future. For
now we'll focus our attention on the test generation, because we feel
this is an area of Python toolbox that needs improvement most.

The information collector will accept directories, files and points of
entry (see dynamic code analysis description in milestone 4) to
produce a comprehensive catalog of information about the legacy
system. This includes things like names of modules, classes, methods
and functions, types of values passed and returned during execution,
exceptions raised, side effects invoked and more, depending on the
needs of the test generator. This is the part of the system that will
require most of the work. Dynamic nature of Python, while it gives us
a lot of leverage and freedom, introduces specific challenges related
to code analysis. This will be a fun 

Re: testing for warnings during tests

2008-08-19 Thread Gabor Szabo
I was just cleaning up old mails when I found this thread

On Tue, Jun 10, 2008 at 2:49 PM, David Golden [EMAIL PROTECTED] wrote:
 On Tue, Jun 10, 2008 at 12:28 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

 Are there situations where a test suite should give warnings?  I.e.
 stuff that the user should see that shouldn't get swallowed by the
 harness running in a quiet (not verbose) mode?

 For example, I have some tests in CPAN::Reporter that test timing out
 a command.  Since that could look like a test has hung (to an
 impatient tester) I make a point to use warn to flag to the user that
 the test is sleeping for a timeout test.

 Looks like this:

 $ Build test --test_files=t/13_record_command.t
 t/13_record_command..18/37 # sleeping for timeout test
 t/13_record_command..22/37 # sleeping for timeout test
 t/13_record_command..26/37 # sleeping for timeout test
 t/13_record_command..ok
 All tests successful.
 Files=1, Tests=37, 19 wallclock secs ( 0.01 usr  0.01 sys +  6.51 cusr
  2.06 csys =  8.59 CPU)

 So is there a better way to do this than warn?

Sure. IMHO that is what *diag* is for:
to print all kinds of messages to the screen in a TAP stream.
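
For example, a minimal sketch (the sleep just stands in for the slow
part of a timeout test):

use Test::More tests => 1;

diag("sleeping for timeout test");  # goes to STDERR as a TAP comment ("# ...")
sleep 1;
pass("timeout test finished");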

 That said, if you try this at home (with Proc::ProcessTable), you'll
 also get a lovely warning from Proc::ProcessTable having a
 non-portable v-string.  That is a warning that should perhaps be
 fixed, though it turns out to be upstream.  Should I clutter my code
 with stuff to suppress it?  Maybe.

 But I don't see how I can have the one without the other.

I think that warning should be reported.
If the tester can (automatically) understand where the warning
comes from, it should try to report there. If it cannot, then it should
report to you.
I know it is not optimal, but then you should complain to the
author of the module you used, preferably with a test case that
catches the warnings.


The tester should only report about stuff that it sees which is not in
TAP.

Gabor


Re: testing for warnings during tests

2008-08-19 Thread Gabor Szabo
On Tue, Aug 19, 2008 at 3:11 PM, David Golden [EMAIL PROTECTED] wrote:
 On Tue, Aug 19, 2008 at 8:02 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
 Sure. IMHO that is what *diag* is for.
 To print all kinds of messages to the screen in a TAP.

 Going up the thread, I think you had asked about whether the harness
 could catch warnings to find things that T::NW can't.  I think I was
 pointing out that there are legitimate reasons for a test author to
 issue warnings -- diag is just a specially formatted warning, after
 all.  So I don't think the harness can be expected to distinguish
 warnings from code versus intentional warnings from the test only
 from observing the output stream.

Sure, people can fake the output of diag.

For now I'd like someone to start reporting anything not in
the TAP stream.

Then, if someone really wants to do it, she can replace diag with a silent
version and catch anything that looks like it came from diag but obviously
was only faking it.

Gabor


Error report for Padre 0.05

2008-08-18 Thread Gabor Szabo
Looking at this report I am not sure why it fails or how to fix it:
http://www.nntp.perl.org/group/perl.cpan.testers/2008/08/msg2041140.html

Besides, AFAIK PAR::Packer is not one of the prereqs of Padre so what are these
messages regarding PAR::Packer?

Known uninstallable prereqs PAR-Packer-0.982 - aborting install


Gabor


Re: wxWidgets and TAP processing (YAPC::EU?)

2008-08-09 Thread Gabor Szabo
On Sat, Aug 9, 2008 at 12:10 PM, Eric Wilhelm
[EMAIL PROTECTED] wrote:
 # from Gabor Szabo
 # on Friday 08 August 2008 02:05:

but I'd like to create a
wxWidgets interface for TAP.

 Have you looked at wxCPANPLUS?

 I think Sam was just using a text widget for the harness output, but
 perhaps hooking-in somewhere there is a good place to start.

 (And it should be nearly done because SoC is coming to and end *soon*.)

Why is that TAP parser related to CPANPLUS?

I know CPANPLUS must be something really cool, you just can't install
it on 5.8.8:

cpan install CPANPLUS

[...]

  CPAN.pm: Going to build K/KA/KANE/CPANPLUS-0.84.tar.gz

CPAN: CPAN::Reporter loaded ok (v1.13)
Can't locate CPANPLUS/Backend.pm in @INC (@INC contains: inc
inc/bundle lib /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi
/home/gabor/perl5lib/lib /home/gabor/perl5lib/lib/perl/5.8.8 /etc/perl
/usr/local/lib/perl/5.8.8 /usr/local/share/perl/5.8.8 /usr/lib/perl5
/usr/share/perl5 /usr/lib/perl/5.8 /usr/share/perl/5.8
/usr/local/lib/site_perl .) at Makefile.PL line 64.
BEGIN failed--compilation aborted at Makefile.PL line 64.


Gabor


scary and strange thing in FindBin

2008-08-09 Thread Gabor Szabo
If I have the following code in a file called Makefile.PL

use FindBin;
print "$FindBin::Bin\n";

perl Makefile.PL  prints

/home/gabor/work/pugs

no matter where the file is located.

If the file is called a.pl  with the same content
it prints the correct directory as expected.

perl 5.8.8 on Ubuntu
perl -MFindBin -e'print $FindBin::VERSION'  is 1.47

The only environment variable I can see containing pugs is PATH

Gabor


Re: scary and strange thing in FindBin

2008-08-09 Thread Gabor Szabo
On Sat, Aug 9, 2008 at 1:22 PM, Paul Johnson [EMAIL PROTECTED] wrote:
 On Sat, Aug 09, 2008 at 01:09:23PM +0300, Gabor Szabo wrote:
 If I have the following code in a file called Makefile.PL

 use FindBin;
 print "$FindBin::Bin\n";

 perl Makefile.PL  prints

 /home/gabor/work/pugs

 no matter where the file is located.

 If the file is called a.pl  with the same content
 it prints the correct directory as expected.

 perl 5.8.8 on Ubuntu
 perl -MFindBin -e'print $FindBin::VERSION'  is 1.47

 The only environment variable I can see containing pugs is PATH

 I suspect you've fallen foul of the problem documented in the KNOWN BUGS
 section of the documentation.

It looks like you are right.
Not so surprisingly I don't have an executable a.pl anywhere in my path but I
have such a Makefile.PL in the pugs directory.

(one more good reason for the CPANTS metric checking for a *not* executable
Makefile.PL :-)

This is the reason I could not install CPANPLUS, as the first thing it does
in Makefile.PL is call

   chdir $FindBin::Bin

thanks
   Gabor


Re: wxWidgets and TAP processing (YAPC::EU?)

2008-08-09 Thread Gabor Szabo
On Sat, Aug 9, 2008 at 12:41 PM, Gabor Szabo [EMAIL PROTECTED] wrote:
 On Sat, Aug 9, 2008 at 12:10 PM, Eric Wilhelm
 [EMAIL PROTECTED] wrote:
 # from Gabor Szabo
 # on Friday 08 August 2008 02:05:

but I'd like to create a
wxWidgets interface for TAP.

 Have you looked at wxCPANPLUS?

 I think Sam was just using a text widget for the harness output, but
 perhaps hooking-in somewhere there is a good place to start.

 (And it should be nearly done because SoC is coming to and end *soon*.)

 Why is that TAP parser related to CPANPLUS?

 I know CPANPLUS must be something really cool, you just can't install
 it on 5.8.8:

 cpan install CPANPLUS

 [...]

  CPAN.pm: Going to build K/KA/KANE/CPANPLUS-0.84.tar.gz

 CPAN: CPAN::Reporter loaded ok (v1.13)
 Can't locate CPANPLUS/Backend.pm in @INC (@INC contains: inc
 inc/bundle lib /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi
 /home/gabor/perl5lib/lib /home/gabor/perl5lib/lib/perl/5.8.8 /etc/perl
 /usr/local/lib/perl/5.8.8 /usr/local/share/perl/5.8.8 /usr/lib/perl5
 /usr/share/perl5 /usr/lib/perl/5.8 /usr/share/perl/5.8
 /usr/local/lib/site_perl .) at Makefile.PL line 64.
 BEGIN failed--compilation aborted at Makefile.PL line 64.


After learning about the FindBin bug and taking corrective actions I
managed to install
CPANPLUS and also CPANPLUS::Shell::Wx 0.03

Unfortunately I could not really run it, as it crashes on several actions.

Gabor


wxWidgets and TAP processing (YAPC::EU?)

2008-08-08 Thread Gabor Szabo
I am not sure if this is related to the recent ascii-art discussion
but I'd like to create a
wxWidgets interface for TAP.

Actually, as some of you might have heard, I have started to write an IDE
for Perl called
Padre. It now even has a web site and a public SVN repository:

http://padre.perlide.org/

One of the things I'd like to add is a way to run the unit tests of the
project being developed in Padre and then show the results in some nice way
to the user.
It would probably make sense to also have a stand-alone
version of this test-runner for those few who will still use other tools.

So if anyone is interested in doing this, or helping me with this,
there are several options. One of them is to hack on this during and
after YAPC::EU
http://www.yapceurope2008.org/ye2008/wiki?node=PerlIDE

Otherwise, you are welcome to join me on the Padre project.

regards
   Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: Interrest for a QA-Tool hackathon after YAPC::EU

2008-07-18 Thread Gabor Szabo
I am about to book my flight to Copenhagen but I have to decide when
to come back
so I was wondering what's going on with the Hackathon?

Is it going to take place?
Who is attending?
What is being planned for?

Is there still place to join?

Gabor


On Tue, May 6, 2008 at 1:49 PM, nadim khemir [EMAIL PROTECTED] wrote:
 In Oslo I proposed to organize a hackathon the week after YAPC::EU.

 Subject:
-QA

- tool: IDE, editors, _debugger_, CPAN::Mini::xxx
(I believe most toolchain ppl are on the QA list)

- your_idea_here

 I'd like to make it possible to have this one on a shoestring for those who
 have better ways to spend their money (like better hardware). The costs are
 transport, hotel and food. If things go as I plan:

 Transport:
 - the crossing to Sweden (train) is 30 Euros return, transport to Simrishamn
 is another 20 euros return (bus)

 http://en.wikipedia.org/wiki/Simrishamn

 Hotel:
 - I have some room in my summer house. 10-15 uncomplicated people can live
 there for a few days (There are two toilets and one shower).

 - for the rich and fortunate, hotel rooms are available 300m from the house
 but Simrishamn is crowded with tourists in that season so early booking is
 good.

 - a campsite 2 km away, but we can put tents in the garden or better yet rent a
 big tent (6 * 4)

 Food:
 - restaurants cost 10-15 euros for lunch and much more in the evening. Pizzas
 etc. are available in the evening for around 10 euros.

 - cooking is the one thing I'm not too bad at. I'll cook if I get help. That
 will keep food cost low. Vegetarians will have to teach me (we could all eat
 vegetarian once just to share) (there's a dishwasher so don't be
 frightened). If no cooking is involved, I can at least make pancakes.

 - since this is a private place, there are no working hours; hacking can go
 round the clock and is expected to.

 Other:
 - there will be a projector
 - no internet (right now) but I'll fix a gsm-modem that would limit the
 bandwidth but still be usable (up to 200 KB/s) and everyone is carrying a
 CPAN::Mini right?
 - web site, etc: a volunteer please
 - what else do we need?

 Getting some money for the hackathon:
 - Sponsoring is wished for. I'd like you to help me with this.
 The money could be used to pay transportation, buy beds and rent the mega tent
 or other activities.

 Dates:
Saturday: Transportation from Denmark and free time in Sweden. I can find
 room for 4/5 persons in Lund (possibly more). It will be possible to
 directly go to Simrishamn too (1 1/2 hour bus) where accommodation is
 possible.

Sunday: transportation to Simrishamn, installation, start hacking
Monday-Wednesday-Longer?: more hacking

Weather permitting, a hacking session outdoors with ocean breeze in our
 faces.

It would be great if someone could give a presentation every evening

 Rapid feedback please, Nadim.


Re: About tidying up Kwalitee metrics

2008-06-29 Thread Gabor Szabo
On Sun, Jun 29, 2008 at 4:49 PM, chromatic [EMAIL PROTECTED] wrote:
 On Sunday 29 June 2008 02:28:54 Thomas Klausner wrote:

 For example:
 http://cpants.perl.org/kwalitee.html#no_cpants_errors
   no_cpants_errors
 Shortcoming: Some errors occurred during CPANTS testing. They might
   be caused by bugs in CPANTS or some strange features of this
   distribution. See 'cpants' in the dist error view for more info.
 Remedy: Please report the error(s) to
 [EMAIL PROTECTED]

 'Shortcoming' should be extended to say:
 The goal of deducting a kwalitee point for 'no_cpants_errors' is to get
 authors to report CPANTS bugs. As you might guess, testing 10.000+
 different dists is hard. There are a lot of special cases. It's impossible
 to figure out all those special cases in advance. 'no_cpants_errors' is
 a way to outsource the discovery of special cases to module authors.

 or something like that...

 I thought the goal of Kwalitee was to identify good free software, not to
 humiliate thousands of other authors of free software for not anticipating
 and working around your bugs.

I also think the no_cpants_errors metric has no place in the core metrics, nor actually
in any metric list. It should only be seen by the CPANTS authors.

... but chromatic, while I have not added that specific metric, your tone is
offensive and humiliating to me and maybe also to Thomas and possibly others who
invest time to try to make CPAN a better place.

Gabor


Re: About tidying up Kwalitee metrics

2008-06-28 Thread Gabor Szabo
On Thu, Jun 26, 2008 at 2:23 AM, Hilary Holz [EMAIL PROTECTED] wrote:
 On 6/25/08 10:24 AM, chromatic [EMAIL PROTECTED] wrote:

 On Wednesday 25 June 2008 03:15:59 Thomas Klausner wrote:

 One comment regarding 'each devel sets his/her own kwalitee metrics':
 This could be quite easy for the various views etc. But I'm not sure how
 to calculate a game score then. Do we end up with lots of different
 games? But then, it's only the game (which still motivates a few
 people..)

 Removing the game score completely would fix a lot of what I consider wrong
 with CPANTS.

 -- c
 second!

It seems that the game theme has after all turned into fierce competition or
lack of interest, depending on ... I don't know what, but neither
is good for CPAN.
In some cases - me included - people fix the symptom to get the metric point
while the underlying code does not really change. So the indicator stops being
an indicator.

I don't know how to fix that.
Maybe the suggestions above and elsewhere to get rid of the game theme
and the top N bottom N authors would help.

Maybe what we need to do is
1) remove the game
2) fix the current metrics (e.g. license is not correct now)
3) Add detailed explanations for each metric, or maybe to create a page on
the TPF Perl 5 wiki for each metric where it would be easier to provide
pro and contra explanations for each metric.
4) add more metrics (including those that collect data from external sources)
5) categorize the metrics as suggested by Salve
6) get the search engines to start to use some of the metrics
 in their search results.

Not necessarily in that order

Gabor


Re: testing for warnings during tests

2008-06-10 Thread Gabor Szabo
On Tue, Jun 10, 2008 at 10:33 AM, Fergal Daly [EMAIL PROTECTED] wrote:
 2008/6/10 Gabor Szabo [EMAIL PROTECTED]:
 So apparently using Test::NoWarnings isn't that cool
 and mandating it with a CPANTS metric is even less cool.

 What's the problem with T::NW? Maybe I'm misunderstanding the rest of
 this mail but you seem to be looking for something that will catch
 warnings from other people's test scripts which is no what T::NW is
 about. Or is there some other problem?


Well, the issue is that I would like to eliminate the warnings given by CPAN
modules during testing (not only mine, everyone's).

One way is that they all start to use T::NW in all their test scripts.
That was my original idea by adding it as a metric to CPANTS.

As it turns out people have all kinds of issues with T::NW.
I am not sure if the technical issues are really correct or not and if
you could fix them or not (see also the other thread about CPANTS).

One thing I understand is that they don't want to be forced to use this
specific solution.

So I thought I'd look for a solution where someone (a test runner?)
could check if there was any warning from the test. This is of course not
the job of T::NW.

regards
   Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: New CPANTS metrics

2008-06-10 Thread Gabor Szabo
On Tue, Jun 10, 2008 at 10:37 PM, Ricardo SIGNES
[EMAIL PROTECTED] wrote:
 * brian d foy [EMAIL PROTECTED] [2008-06-10T12:27:29]
 I'd like to see the metircs that only talk about the quality of the
 distribution, and leave everything else alone. If it's something I can
 fix by doing something to the distribution itself, measure it. If it's
 anything else, leave it out. :)

 Given that CPANTS has been discussed as a tool to help authors write better
 dists, I think this is a very, very good suggestion.

 Gathering other information is great.  I'd like to know if there Debian is
 packaging my code.  It just isn't as much about your code seems to be
 well-produced.

At one point I'd like to add a tool to CPANTS so people can give
weights to each one
of the metrics and see their 'kwalitee' through that magic weight system.

So you could say

10 for has_test and
1 use_warnings
0 for debian_redistributes
etc.
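
For illustration, a minimal sketch of how such a weighted score might
be computed (metric names, weights and results are made up, not the
CPANTS implementation):

# hypothetical per-metric results for one distribution
my %passed = ( has_test => 1, use_warnings => 0, debian_redistributes => 1 );

# each user supplies their own weights
my %weight = ( has_test => 10, use_warnings => 1, debian_redistributes => 0 );

my $score = 0;
$score += $weight{$_} for grep { $passed{$_} } keys %weight;
print "weighted kwalitee: $score\n";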

Once this is implemented it will be interesting to see what weights people
use for each metric.

I wanted the Debian-specific metrics there in order to give feedback
to the authors so
they know the status of their module in Debian (and I hope I
can add more distros).
The data could be gathered on yet another site but I thought it might have a
better place in CPANTS.
I also think it has high added value to the kwalitee metrics but let's
not discuss that again.

Gabor


Re: New CPANTS metrics

2008-06-10 Thread Gabor Szabo
On Wed, Jun 11, 2008 at 5:26 AM, Paul Fenwick [EMAIL PROTECTED] wrote:
 I think a good solution is to add a new category called honours.  It is an
 honour to have your module packaged by Debian, included in the ActiveState
 distro, or to be used by another CPAN module.  For honours, we only mention
 what honours a module has received, not what it hasn't.  For example, an
 honours list may read:

* Packaged by Debian.
* Included with ActiveState Perl 5.8.8
* Given a 5 star review on cpanratings

 The important thing is that the list doesn't mention all the honours that
 haven't been received (packaged by RedHat, sent to the moon, included as a
 dual-life core module etc).  Honours don't contribute to the kwalitee score.

 The end result is authors feel good about their honours page (it doesn't
 show at all if there are no honours), the kwalitee metrics continue to
 measure things an author can reasonably fix, and end-developers using CPANTS
 for research won't be turned off by a large number of red 'optional metrics'
 from an otherwise excellent module.

Sounds like a good idea!

Thomas and I were also thinking about how to mark the other 3 Debian
related metrics as dependent on the packaged_by_debian metric.
After all, if the module is not packaged by Debian then the other 3
have no meaning.

So if a module is packaged by Debian, the author will see the
packaged_by_debian honour
metric turned on. Then she will also see 3 new metrics over which she has
partial control,
e.g. she can fetch the patch, include it in the official distribution
on CPAN and notify the
Debian maintainers to upgrade.

Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


New CPANTS metrics

2008-06-09 Thread Gabor Szabo
Two days ago or so I posted a blog entry about the new CPANTS metrics.
http://szabgab.com/blog/2008/06/1212827982.html

I am glad that there are already some comments about them,
even if both chromatic and Andy Lester are, well, slightly against them,
and even Ovid did not like the Test::NoWarnings metric.
http://use.perl.org/~chromatic/journal/36627

I know they all are authorities in matters of quality but I hope at some
point I might be able to either convince them or learn from them how
to improve these metrics.

Anyway, shall we leave all the fun to the use.perl.org readers only?

I am sending this to both the module-authors list so you can be
aware of the new metrics and the perl-qa list as they might
have a few words as well regarding kwalitee.

BTW if you go to CPANTS http://cpants.perl.org/
you will see that all the new metrics are marked as experimental
and as such by default they are not supposed to be displayed.

Also if you as a module author are interested in what's the status of
your module in downstream distros (well, currently only debian)
then you can go to CPANTS and check it out.

There are two issues regarding the criticism:
1) I did not find any mention of any new metric that would be good.
I'd be really glad to hear ideas about what could be a good metric.

2) True, it would be great if more of the module authors knew about
 CPANTS and cared. I agree, so how could we let them know about
 it besides posting on use.perl.org and on this mailing list?
 Maybe http://perlbuzz.com/ ? Any other ideas?


regards
   Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


testing for warnings during tests

2008-06-09 Thread Gabor Szabo
So apparently using Test::NoWarnings isn't that cool
and mandating it with a CPANTS metric is even less cool.

The issue I am trying to solve is how to catch and report
when a test suite gives any warnings?

I wrote it in my blog too but here it is. Occasionally when I install
a module manually I see warnings. Sometimes I report them
but mostly I don't. I guess smokers will not see them as the
tests actually pass.

How could we catch those cases without using Test::NoWarnings ?

Could the harness catch them?

Catching anything on STDERR isn't good enough as diag() goes there.

Would catching and reporting any output (both STDOUT and STDERR)
that is not proper TAP help here?

Of course it would still miss if someone has

  print STDERR "# no cookies\n";

I know one of the features of TAP is that a parser should ignore anything it
does not understand, which is especially important for forward compatibility.

Maybe the harness of the smokers could do that - assuming they have the latest
version of TAP - and then report the issues.


Gabor


-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: testing for warnings during tests

2008-06-09 Thread Gabor Szabo
On Tue, Jun 10, 2008 at 7:42 AM, chromatic [EMAIL PROTECTED] wrote:
 On Monday 09 June 2008 21:28:40 Gabor Szabo wrote:

 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

 Is it even possible?  I thought one of the goals of CPANTS was not to run any
 of the distribution's code directly.  The most useful metrics seem to meet
 that goal.

I did not mean it to be done by CPANTS.

Having those warnings during tests is a problem that should be somehow solved.

My attempt to recommend Test::NoWarnings (which I would change to
"use Test::NoWarnings or Test::NoWarnings::Plus etc." if there were
other solutions) does not seem to be the right solution.

So I wonder if there are other ways. E.g. if the harness could catch
the warnings?

Gabor


-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


best practices for temporary files during testing

2008-06-07 Thread Gabor Szabo
I just got a failure report from David Golden, as this line
did not work, failing with "Permission denied":

copy($0, "$0.tmp")

He was running it on Windows (Strawberry Perl).

I am sure I create temporary files in various tests, so it might be the right
time to streamline them.

So what is the recommended place for temporary files and directories
that I can expect to be writable?

Should I just use File::Tempdir ?
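
For comparison, a minimal sketch with the core File::Temp module (just
one possible approach, not necessarily the recommended one):

use File::Temp qw(tempdir tempfile);

# a scratch directory that is removed automatically at program exit
my $dir = tempdir( CLEANUP => 1 );

# a temporary file inside it; $fh is already open for writing
my ($fh, $filename) = tempfile( DIR => $dir );
print {$fh} "some test data\n";
close $fh;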

Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Perl Training in Israel http://www.pti.co.il/
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: Test::NoWarnings and plan skip_all don't like each other

2008-06-07 Thread Gabor Szabo
On Thu, May 15, 2008 at 2:34 PM, Gabor Szabo [EMAIL PROTECTED] wrote:

 Today I found out that if you skip_all tests while you have
 Test::NoWarnings your test
 will fail.
 Bad. :-(


... and a workaround I just found would be to load Test::NoWarnings
only after the call to plan(), like this:

==

use Test::More;

plan skip_all => 'Why not?';
plan tests => 2;

eval "use Test::NoWarnings";

ok(1);

==

Gabor


-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: CPANTS has_separate_license_file test

2008-06-03 Thread Gabor Szabo
On Tue, Jun 3, 2008 at 3:24 PM, Paul Fenwick [EMAIL PROTECTED] wrote:
 G'day QA folks,

 I'm gaming my CPANTS quality scores a little, and I've found one of the
 optional metrics has given me an odd result.

 The 'has_separate_license_file' test returns 'not ok' on:

http://cpants.perl.org/dist/kwalitee/IPC-System-Simple

 However, IPC-System-Simple *does* have a LICENSE file:

http://search.cpan.org/dist/IPC-System-Simple/LICENSE

 Is CPANTS looking for something special inside the LICENSE file?

The currently running version of CPANTS has that bug.
I think I have fixed that already in the SVN; we just have to wait till Thomas
has some time to upgrade the version on the server.

Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Perl Training in Israel http://www.pti.co.il/
Test Automation Tips http://szabgab.com/test_automation_tips.html


Test Automation Tips and discussion list

2008-05-28 Thread Gabor Szabo
Hi,

I have already posted this invitation in several places,
I hope you don't mind receiving it here as well.

== Tips

I have set up a newsletter called Test Automation Tips where
I am going to send out various ideas I have on the subject,
as someone who has been practicing it for several years
and teaching it since 2003 (see http://www.pti.co.il/qa_automation.html ).
It's mostly stuff I learned from you guys but I still hope it will
be an interesting read.

The tips will come from several languages, most probably
Perl, Python, PHP and Ruby but I'll probably give examples in
Java and maybe even in .NET.

Register here:
http://szabgab.com/mailman/listinfo/test-automation-tips

== Discussion list

In addition I have set up a public mailing list in order to provide
a place where people from the various languages and projects
can come together and discuss their experience.

For example on the PHP-QA mailing list they are currently
discussing the addition of something very similar to the TODO
tests in TAP. I am sure people from the Perl world could give
their opinion there, but the same subject could be discussed
on such a cross-language mailing list.

Registration:
http://szabgab.com/mailman/listinfo/test-automation

== Training

Lastly, while this is slightly unrelated, I am going to teach my
QA Test Automation using Perl course on June 19-20,
right after YAPC::NA in Chicago.

The syllabus is here:
http://www.pti.co.il/qa_automation.html


The registration is on the same page as the other master classes organized
by brian d foy:
https://www.theperlreview.com/cgi-bin/events.cgi

regards
   Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Test Anything Planet - aggregating testing related blogs

2008-05-23 Thread Gabor Szabo
I have always wanted a reason to ask you people,
how are you improving your testing skills?

Are there any particular books you recommend?
Are you reading blogs that are really good place to learn about testing?
Or are you just writing more tests and improve by experience?

Among other things I am asking this, as I have set up a blog
aggregation planet and I would like to add to it other blogs that I
have not found
or have missed otherwise:

http://tap.szabgab.com/

regards
Gabor
http://www.szabgab.com/blog.html


Test::NoWarnings and plan skip_all don't like each other

2008-05-15 Thread Gabor Szabo
Yesterday I was happy to notice that you can use Test::Warn together with
Test::NoWarnings,
so you can test for a warning in a specific test while testing that
nothing else gives a warning.
Good. :-)
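
For illustration, a minimal sketch of the combination (note that the
plan has to count the extra test Test::NoWarnings adds in its END block):

use Test::More tests => 3;
use Test::NoWarnings;
use Test::Warn;

warning_like { warn "boom" } qr/boom/, 'expected warning is caught';
ok(1, 'some other test');
# Test::NoWarnings contributes the third test automatically at END time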

Today I found out that if you skip_all tests while you have
Test::NoWarnings, your test
will fail.
Bad. :-(


Gabor


The following code:

===

use Test::More;
use Test::NoWarnings;

plan skip_all => 'Why not?';
plan tests => 2;

ok(1);

===

gives this output:

1..0 # Skip Why not?
You tried to run a test without a plan at
/root/perl5lib/lib/Test/NoWarnings.pm line 45.
END failed--call queue aborted.

===

and prove gives this:

===

t/w..You tried to run a test without a plan at
/root/perl5lib/lib/Test/NoWarnings.pm line 45.
END failed--call queue aborted.
t/w..skipped: Why not?

Test Summary Report
---
t/w.t (Wstat: 65280 Tests: 0 Failed: 0)
  Non-zero exit status: 255
Files=1, Tests=0,  0 wallclock secs ( 0.01 usr  0.01 sys +  0.04 cusr
0.00 csys =  0.06 CPU)
Result: FAIL


===


Test::Harness 3.10 lots of issues under Devel::Cover 0.64

2008-05-12 Thread Gabor Szabo
Hi Paul,

I am not sure if this was discussed already, but while Test::Harness
3.10 passes its tests
on my Ubuntu 7.10 with perl 5.8.8, it fails when running under Devel::Cover.
Here is the beginning of the 2684 lines of output:


Deleting database /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/cover_db
PERL_DL_NONLAZY=1 /usr/bin/perl -Iblib/lib -Iblib/arch
-MExtUtils::Command::MM -e test_harness(0, 'blib/lib',
'blib/arch') t/*.t t/compat/*.t
t/000-load# Testing Test::Harness 3.10, Perl
5.008008, /usr/bin/perl
ok
t/aggregator..ok
t/bailout.ok
t/baseok
t/callbacks...ok
t/compat/env..ok
t/compat/failure..ok
t/compat/inc-propagation..
#   Failed test '@INC propagated to test'
#   at inc_check_taint.t.tmp line 38.
# Structures begin differing at:
#  $got-[0] =
'/home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch'
# $expected-[0] = 'wibble'
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/lib,
# wibble,
# t/lib,
# blib/lib,
# blib/arch,
# /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi,
# /home/gabor/perl5lib/lib,
# /home/gabor/perl5lib/lib/perl/5.8.8,
# /etc/perl,
# /usr/local/lib/perl/5.8.8,
# /usr/local/share/perl/5.8.8,
# /usr/lib/perl5,
# /usr/share/perl5,
# /usr/lib/perl/5.8,
# /usr/share/perl/5.8,
# /usr/local/lib/site_perl
# -
# wibble,
# t/lib,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/lib,
# blib/lib,
# blib/arch,
# /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi,
# /home/gabor/perl5lib/lib,
# /home/gabor/perl5lib/lib/perl/5.8.8,
# /etc/perl,
# /usr/local/lib/perl/5.8.8,
# /usr/local/share/perl/5.8.8,
# /usr/lib/perl5,
# /usr/share/perl5,
# /usr/lib/perl/5.8,
# /usr/share/perl/5.8,
# /usr/local/lib/site_perl
# Looks like you failed 1 test of 2.

#   Failed test at t/compat/inc-propagation.t line 84.
#  got: '1'
# expected: '0'

#   Failed test '@INC propagated to test'
#   at inc_check.t.tmp line 39.
# Structures begin differing at:
#  $got-[0] =
'/home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch'
# $expected-[0] = 'wibble'
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/lib,
# wibble,
# t/lib,
# blib/lib,
# blib/arch,
# /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi,
# /home/gabor/perl5lib/lib,
# /home/gabor/perl5lib/lib/perl/5.8.8,
# /etc/perl,
# /usr/local/lib/perl/5.8.8,
# /usr/local/share/perl/5.8.8,
# /usr/lib/perl5,
# /usr/share/perl5,
# /usr/lib/perl/5.8,
# /usr/share/perl/5.8,
# /usr/local/lib/site_perl
# -
# wibble,
# t/lib,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/arch,
# /home/gabor/.cpan/build/Test-Harness-3.10-R6Fkfr/blib/lib,
# blib/lib,
# blib/arch,
# /home/gabor/perl5lib/lib/i486-linux-gnu-thread-multi,
# /home/gabor/perl5lib/lib,
# /home/gabor/perl5lib/lib/perl/5.8.8,
# /etc/perl,
# /usr/local/lib/perl/5.8.8,
# /usr/local/share/perl/5.8.8,
# /usr/lib/perl5,
# /usr/share/perl5,
# /usr/lib/perl/5.8,
# /usr/share/perl/5.8,
# /usr/local/lib/site_perl
# Looks like you failed 1 test of 2.

#   Failed test at t/compat/inc-propagation.t line 84.
#  got: '1'
# expected: '0'
# Looks like you failed 2 tests of 2.
Devel::Cover: Can't open inc_check_taint.t.tmp for MD5 digest: No such
file or directory
Devel::Cover: Deleting old coverage for changed file inc_check_taint.t.tmp
Devel::Cover: Can't open inc_check.t.tmp for MD5 digest: No such file
or directory
Devel::Cover: Deleting old coverage for changed file inc_check.t.tmp
 Dubious, test returned 2 (wstat 512, 0x200)
 Failed 2/2 subtests
t/compat/inc_taintok
t/compat/nonumbersok
t/compat/regression...ok
t/compat/test-harness-compat..ok
t/compat/version..ok
t/console.ok
t/errors..ok
t/grammar.ok
t/harness.ok
t/iterators...ok
t/multiplexer.ok
t/nofork-mux..ok
t/nofork..ok
t/parse...ok
t/premature-bailout...ok
t/process.ok
t/prove...
#   Failed test 'Call with defaults: run results match'
#   at t/prove.t line 1376.
# Structures begin differing at:
#  $got-[0][1]{switches} = ARRAY(0x89e4528)
# $expected-[0][1]{switches} = Does not exist
# $VAR1 = {
#   'got' = [
#  [
#'_runtests',
#{
#  'verbosity' = 0,
#  'switches' = [
#  '-MDevel::Cover'
#   

libwww-perl-5.812 and Devel::Cover 0:64

2008-05-12 Thread Gabor Szabo
After running the following as I have been doing with many other modules

cover -delete
export DEVEL_COVER_OPTIONS=-coverage,statement,branch,condition,path,subroutine,time
HARNESS_PERL_SWITCHES=-MDevel::Cover make test
cover -report html

all tests PASS and then I get:

Can't open database /home/gabor/.cpan/build/libwww-perl-5.812-7Pcs7m/cover_db

Indeed there was no cover_db in the root directory of the distribution
but I found it in t/cover_db

cd t/
cover -report html

solved the issue but I think this is a bug somewhere.

The report is here:
http://www.szabgab.com/coverage/libwww-perl-5.812/coverage.html

regards
   Gabor


Keeping tests for execution at a later point in time

2008-04-10 Thread Gabor Szabo
The issue was raised at the Oslo Hackathon that it would be cool
if we could keep the tests around so that they can be executed
again later, making sure that even after one has upgraded other
parts of the system the previously installed modules still work as
expected.

AFAIK the issue did not get anywhere but as I have just seen on
the Fedora packagers mailing list
https://www.redhat.com/archives/fedora-perl-devel-list/2008-April/msg00095.html
there too is some request for this. There they also point out the
documentary value of the test files.


So let's see what needs to be done in order to be able to keep
the test files and run them later.


There are a few concerns I could immediately see.
1) Tests might assume a certain directory structure, they might say
 use lib 'lib';
 use lib 't/lib';
 or other things.
2) Tests might use other files outside the t/ directory.
3) What else do you think might be problematic?


I wonder if we could put together some guidelines, amending the
Testing Best Practices, that would allow the easy distribution
and later execution of the test files.

I know many people create helper modules in t/lib/...
some would call these helper packages My::Package::Test and
then say use lib 't/lib' in the test suit while others - I saw in
the code of AdamK - name their packages t::lib::Test so
they don't have to change @INC.

Both assume the current directory to be the parent of /t
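
For illustration, the two conventions look roughly like this in a test
file (module names hypothetical; note that loading t::lib::Test relies
on '.' being in @INC):

# convention 1: the helper lives in t/lib and @INC is extended explicitly
use lib 't/lib';
use My::Package::Test;

# convention 2: the package name encodes the path (t/lib/Test.pm holds
# "package t::lib::Test;") so no explicit @INC change is needed
use t::lib::Test;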

Testing examples:
At least in one module I test the example scripts living in the
eg/ directory. I might not be the only one.
What should we do about this?


For now, I think we should first check if all this can work:
CPANTS cannot do that without actually executing the tests,
but the smokers could have a mode where - after unzipping
the tarball - they move the whole t/ directory to some other
place, move blib to another place and chdir to a  3rd place
and run the tests then.

If they did this for packages that have already passed their
tests and then reported the possible issues, we could get an
understanding of how many of the distributions might actually
be testable after installation.

regards
Gabor

-- 
Gabor Szabo
http://www.szabgab.com/


Re: Preparing a new Test::* module

2008-04-09 Thread Gabor Szabo
On Mon, Apr 7, 2008 at 10:14 PM, Fergal Daly [EMAIL PROTECTED] wrote:
  I would say put as much as possible of this outside the Test::
  namespace and then wrap in a thin Test:: wrapper. I wish I'd done this
  with Test::Deep, it's on the todo list but I'll never get around to
  it,

I am not sure how it will work out, but maybe add it to the TODO Tracker
http://todo.useperl.at/ so someone will pick it up and do it for money.
I'd really like to see that separated.

Gabor


Re: W3C validator without network access?

2008-04-07 Thread Gabor Szabo
Thanks.

Now I hope someone will package it as a module on CPAN :-)

Gabor

On Mon, Apr 7, 2008 at 2:04 PM, Yitzchak Scott-Thoennes
[EMAIL PROTECTED] wrote:
 On Sun, April 6, 2008 9:28 pm, Gabor Szabo wrote:

  Is there a W3C validator that works locally on my computer?
  
   All the modules I found so far use the http://validator.w3.org/ service
   including  Test::HTML::W3C but that's not really usable in a frequently
   running test suite.

  The source for that is available as described on
  http://validator.w3.org/source/


W3C validator without network access?

2008-04-06 Thread Gabor Szabo
Is there a W3C validator that works locally on my computer?

All the modules I found so far use the http://validator.w3.org/ service
including Test::HTML::W3C, but that's not really usable in a frequently
running test suite.

There is Bundle::W3C::Validator that I think bundles all the modules needed
to set up the service locally, but that's not it yet either.


Gabor


Re: Friday afternoon in Oslo

2008-04-01 Thread Gabor Szabo
I arrived in Oslo this morning.

http://www.szabgab.com/blog/2008/04/1207076595.html


On Friday I'll finish at Linpro at 17:00 and then I'd be glad to join the
rest of you.


Gabor


Re: My Perl QA Hackathon Wishlist

2008-03-28 Thread Gabor Szabo
On Wed, Mar 26, 2008 at 12:44 PM, Ovid [EMAIL PROTECTED] wrote:
 --- Gabor Szabo [EMAIL PROTECTED] wrote:

   I wonder if it would be possible to take the existing .*Unit
   libraries
   of Java and .Net and
   create some wrapper around them (or a replacement) so people with
   existing tests
   written in those testing system would start producing TAP results.

  In theory, this shouldn't be too hard.  In practice, there are issues.
  I'm going to write a bit of negative stuff here, but that's not to say
  that we can't or shouldn't do this because I think this is a great
  idea.  It's just that the testing worlds involved have different views
  on how things work and as we all know, these views are often
  religious in nature.

Just as the choice of language is religious in many cases.
I have a feeling, for example, that the Python community will have a very hard
time accepting Parrot even if it has advantages over other things they
might have,
just because it comes from Perl people.
The PHP community seems to have less negative feelings towards Perl.

I think this is also reflected in the adoption of TAP. It seems PHP
people are more ready
to use TAP than Python people.

.NET and Java people might have no such negative feelings towards Perl.
They just think it is not strong enough compared to their enterprise language.

That means getting them to accept TAP is quite an uphill battle. Making it a
drop-in replacement or a wrapper around their current system might be
a key issue
in adoption.

The other one would be a nice GUI for TAP aggregation and reporting.

I have copied some of the responses to the "why use Perl for testing?"
question to
http://perl-qa.hexten.net/wiki/index.php/Why_use_Perl_for_Testing
It would be nice if others would help put together a good document.

regards
   Gabor


Re: My Perl QA Hackathon Wishlist

2008-03-28 Thread Gabor Szabo
On Fri, Mar 28, 2008 at 3:28 PM, Gergely Brautigam
[EMAIL PROTECTED] wrote:
 Why do I have the feeling that I'm part of a Borg cube ? :D

I don't know but I should re-read my sentences *before* I send them.
It seems my English gets worse by the hour. Sorry for that.

Gabor


Re: Is FIT fit for purpose?

2008-03-28 Thread Gabor Szabo
There were already many good answers here; let me just add my perspective,
probably just repeating the previous comments.

I am not a Fit expert and I have never used it with real customers
but I did use
something resembling it as I think all of you have.

Occasionally I organize a QA Day for QA managers and show them various
techniques along with Ran Eilam. He is a Fit expert and I am CC-ing him hoping
he'll be able to share his experience.

During the QA Day I first show them the evolution of testing[1], which
is basically an
introduction to TAP. We build a flow-based test and then refactor it
so that we can
move the input and the expected output into external files in some
user-writable format (a CSV file).
Effectively these are already FIT tables.

Then comes Ran and shows the FIT approach from the direction of the
tables which is
how users see it.

What I am trying to say is that if you can refactor your code to have your
input and expected output in some data structure, you'll also be able to move them
out to an external file. That file can be written by the users.
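
For illustration, a minimal sketch of such a data-driven test (the file
name and my_function() are hypothetical):

use Test::More;

# read the user-editable test data: one "input,expected" pair per line
open my $fh, '<', 't/cases.csv' or die "Cannot open t/cases.csv: $!";
my @cases = map { chomp; [ split /,/ ] } <$fh>;
close $fh;

plan tests => scalar @cases;

for my $case (@cases) {
    my ($input, $expected) = @$case;
    # my_function() is the (hypothetical) code under test
    is( my_function($input), $expected, "my_function($input)" );
}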

I think chromatic mentioned that you should use FIT for acceptance tests and
not for unit tests.
May I disagree here.
I think every test is a unit test. Just the size of the units is different.

For programmers the size of the unit is a function.
For the integration engineer (or the programmer when she is doing
integration between parts of the code)
it might be modules. They call it an integration test but those are just bigger units.

When the customer tests the product we call it an acceptance test. In the
end I think it does not matter.
I can wrap each use-case in a single function (even if that function
needs to call external applications)
and then we are back in unit-test mode, where you first write a flow,
then create an array with input
and expected output values, and then you can move it to external files.


So I think we all use a FIT-like approach when we separate test data
from test code.
We just did not call it FIT and did not put the data in spreadsheets.


Gabor
[1] The slides are here:
http://www.szabgab.com/talks/qa_in_opensource/slides/tap.xul


Re: Hackathon logistics

2008-03-26 Thread Gabor Szabo
On Wed, Mar 26, 2008 at 6:38 AM, Salve J Nilsen [EMAIL PROTECTED] wrote:
   *) Whiteboards, markers  erasers.
  
   Lots of whiteboards for taking notes.  At least one whiteboard just for
   projects being worked on, the grid at BarCamps is an example.

  We'll have at least 5 rooms with whiteboards, in addition to lots of space
  to set up brownboards. We'll also have a dedicated area for managing and
  discussing schedule-related stuff and one or two quiet areas for those who
  need a break.

If we can use the projectors in the classrooms we can hook up one of the computers
and use the wiki as our whiteboard.

Gabor


Re: My Perl QA Hackathon Wishlist

2008-03-26 Thread Gabor Szabo
On Wed, Mar 26, 2008 at 11:16 AM, Nicholas Clark [EMAIL PROTECTED] wrote:

  Now, does anyone know a student?

I tried to spam all the local universities but with the current USD
exchange rate
people get about 30-40% less this year than 2 years ago...

OK, I know it's not about the money.

Gabor


Topics in Oslo

2008-03-26 Thread Gabor Szabo
On the Topics page
http://perl-qa.hexten.net/wiki/index.php/Oslo_QA_Hackathon_2008_Topics
I have started to move the topics suggested by people to a separate place
and started to include a list of interested people.

I think the talks can be removed from this page as they are already
available on the main
page and we should collect only the topics and 'sign up' for each
topic in which we
are interested. Later we can also reorder the topics according to
importance/interest.
Right now they are there in a semi random order.

Gabor


Re: Hackathon logistics

2008-03-26 Thread Gabor Szabo
On Wed, Mar 26, 2008 at 6:03 PM, Salve J Nilsen [EMAIL PROTECTED] wrote:
 Michael G Schwern said:

  David Golden wrote:
  
   I'm curious to try git, if anyone is up for teaching it.
  

  I had the same thoughts.  My concern is that we'll be spending time
   futzing with git rather than hacking on QA stuff.

  I approve of this message.

  On a related note, do we (the hackathon attendees collectively) have the
  necessary commit bits for all the QA-related projects we'd like to futz
  around with?


Post your topic in the Topics section of

http://perl-qa.hexten.net/wiki/index.php/Oslo_QA_Hackathon_2008_Topics
along with the link to the repository it is located in so others interested can
check if they have access to it.

Gabor

