TAPx::Parser: 'wait' and 'exit' status on Windows
It looks like TAPx::Parser is now working on Windows (thanks Corion!), even though some tests fail. TAPx-Parser-0.50_05 may fix the whitespace issue with the Windows tests, but the wait status still appears to be broken (set in TAPx::Parser::Iterator). After the handle used with open3 is finished, $? appears to have the wait status on OS X and other *nix operating systems, but not on Windows. Any thoughts?

One person has told me that they don't think it's applicable under Windows, but I don't think that's correct.

Cheers,
Ovid

--
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
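[A minimal sketch of the issue being discussed. One thing worth checking: $? is only populated once the child process has been reaped, and a handle returned by open3 does not reap the child on close() the way a piped open does, so an explicit waitpid is needed on every platform. The child command below is purely illustrative.]

```perl
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

# Spawn a child whose exit status we care about; $^X is the running perl.
my $err = gensym;    # a real glob for stderr, per the IPC::Open3 docs
my $pid = open3( my $in, my $out, $err,
    $^X, '-e', 'print "ok 1\n"; exit 3' );

close $in;
my @output = <$out>;    # drain the handle first so the child can finish
close $out;

waitpid( $pid, 0 );     # reaping the child is what populates $?
my $wait_status = $?;
my $exit_code   = $wait_status >> 8;     # the child's exit() value
my $signal      = $wait_status & 127;    # non-zero if killed by a signal
```

If $? is inspected before the waitpid, it still holds whatever the last reaped child left there, which would look exactly like the "wait status appears broken" symptom described above.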
Re: TAPx::Parser: 'wait' and 'exit' status on Windows
On 1/15/07, Ovid <[EMAIL PROTECTED]> wrote:

> It looks like TAPx::Parser is now working on Windows (thanks Corion!),
> even though some tests fail. TAPx-Parser-0.50_05 may fix the whitespace
> issue with the Windows tests, but the wait status still appears to be
> broken (set in TAPx::Parser::Iterator). After the handle used with open3
> is finished, $? appears to have the wait status on OS X and other *nix
> operating systems, but not on Windows. Any thoughts? One person has told
> me that they don't think it's applicable under Windows, but I don't
> think that's correct.

I wasn't aware that open3() was even reliable on win32.

Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"
Re: TAPx::Parser: 'wait' and 'exit' status on Windows
--- demerphq <[EMAIL PROTECTED]> wrote:

> I wasn't aware that open3() was even reliable on win32.

Well, it appears to keep STDOUT and STDERR in sync, and I desperately needed that. I don't know of any other way of doing this reliably (my hack to keep them in sync by overriding an internal Test::Builder function worked just fine, but that only works with tests using Test::Builder).

I really think that TAPx::Parser is great and is something I'd love to see get moved into CORE, but until I can nail down a few final Windows issues, I can't see anyone agreeing to this.

Cheers,
Ovid

--
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
What the 'runtests' output looks like
(It would help if this is sent from the correct email address.)

Hi all,

If you've not checked out TAPx::Parser lately, here's what the 'runtests' output looks like:

http://publius-ovidius.livejournal.com/222624.html

Cheers,
Ovid

--
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
Re: Dealing with balls o' mud (was: Re: Test::Builder feature request)
Another vote here for "Working Effectively with Legacy Code".

On Jan 14, 2007, at 10:35 AM, Michael G Schwern wrote:

> ... (where's my refactoring browser!?)

http://e-p-i-c.sourceforge.net/ -- Eclipse plugin for Perl. Provides "extract subroutine" using Devel::Refactor. I believe Jeff Thalhammer is working on adding Perl::Critic support to EPIC as well.

> At absolute minimum, with a big ball of mud, you can do dumb high level
> "exact input/output" tests of the sort which would normally be frowned
> upon.

Yes, and you need not stop at "exact" input/output. Putting automated end-to-end tests in place can indeed cover a good deal of the code - these would be tests that could also be called "acceptance" or "integration" tests. Using the web app example:

- Login.
- Attempt login with bad credentials (should fail).
- Add item to shopping cart.
- Remove item from cart.

Etc. You can run many of these tests every 5 minutes all day, every day, and use them under something like Nagios or NetSaint, etc. as part of a monitoring system.

More about balls o' mud:

- Add "seams" as described in "Working Effectively with Legacy Code." Seams are places in the code where you can alter its behavior without editing at that place (once the seam is in place). For example, replacing an expression like ( $dollars .. $donuts ) with a subroutine call, Utils->get_range_of_items($dollars, $donuts), means you can now make changes in get_range_of_items(), which could be in a separate (well tested) class.

Perhaps the most interesting area (to me) about balls o' mud is the question of how to decide what refactorings and improvements are worth the effort. On a big ball o' mud this is a very hard problem. It requires that one:

1. Estimate the effort/cost of a refactoring/improvement, and
2. Estimate the value of a refactoring/improvement.

Since the ball o' mud is, by definition, hard to understand, these estimates are even harder than usual.
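[The seam described above can be sketched in a few lines. The class and method names follow the example in the post; the stub in the test block is illustrative.]

```perl
use strict;
use warnings;

package Utils;

# The inline expression ( $dollars .. $donuts ) now lives behind a
# subroutine call -- that call site is the seam.
sub get_range_of_items {
    my ( $class, $from, $to ) = @_;
    return ( $from .. $to );
}

package main;

my @items = Utils->get_range_of_items( 1, 5 );    # 1, 2, 3, 4, 5

# In a test, the seam lets you swap the behavior without editing the
# caller at all:
my @stubbed;
{
    no warnings 'redefine';
    local *Utils::get_range_of_items = sub { return (42) };
    @stubbed = Utils->get_range_of_items( 1, 5 );    # 42
}
# Outside the block, local() has restored the real implementation.
```

The point of the seam is exactly this asymmetry: production code calls one well-named subroutine, while tests can redefine it locally to isolate the mud around it.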
--- Matisse Enzer <[EMAIL PROTECTED]> http://www.matisse.net/ - http://www.eigenstate.net/
Config files for TAPx::Parser
I have two different types of runtime configuration file needed for TAPx::Parser. One is the 'execrc' file, which one can use to get fine-grained control over running every type of test. The other is the '.runtestsrc' file, which will allow you to specify default behavior for 'runtests', such as which colors to use with colored output, what switches to use by default, etc.

I'm thinking about embedding YAML::Tiny for this task and just using that subset of YAML for the file format. Anyone think of any objections to this? I know many people are not fans of YAML. Other suggestions are welcome, but they need to be lightweight and either core modules or easy to embed.

Cheers,
Ovid

--
Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
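[As a sketch of what a YAML::Tiny-compatible '.runtestsrc' might look like. The post doesn't specify a schema, so every key name below is hypothetical; only the general shape -- plain mappings, lists, and scalars, the subset YAML::Tiny handles -- is the point.]

```yaml
# Hypothetical .runtestsrc -- all key names are illustrative only
colors:
  pass: green
  fail: red
switches:
  - -w
  - -Ilib
verbose: 0
```

A format like this stays inside YAML::Tiny's supported subset (no anchors, aliases, or multi-document streams), which is what makes embedding the module plausible.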
Re: What the 'runtests' output looks like
On 1/15/07, Ovid <[EMAIL PROTECTED]> wrote:

> Hi all, If you've not checked out TAPx::Parser lately, here's what the
> 'runtests' output looks like:
> http://publius-ovidius.livejournal.com/222624.html

it's nice to see color output (too bad it won't work on windows--cmd.exe's problem, not yours.) i'd like to see skipped tests, failed todo, and unexpectedly successful todo tests in colors other than pass or fail.

have a look at the pugs smoke output for some ideas: http://smoke.pugscode.org/. you'll see yellow for unexpected success, darker green for expected todo failures, and lighter green for skipped tests.

there may be other color schemes out there for displaying test results, but if there are, i'm not aware of them. in any case, different things should appear different, and similar things should appear the same. this will help to preserve the principle of least surprise.

~jerry
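[The distinct-colors-for-distinct-outcomes idea is easy to prototype with the core Term::ANSIColor module. The outcome-to-color mapping below loosely follows the pugs scheme described above; where basic ANSI has no equivalent shade (e.g. "lighter green" for skips), a stand-in color is used, so treat the mapping as illustrative.]

```perl
use strict;
use warnings;
use Term::ANSIColor qw(colored);

# Outcome-to-color mapping, loosely after the pugs smoke scheme.
my %color_for = (
    pass      => 'green',
    fail      => 'red',
    todo_fail => 'green',     # expected failure: still "ok" overall
    skip      => 'cyan',      # stand-in; basic ANSI has no "lighter green"
    todo_pass => 'yellow',    # unexpected success: worth a second look
);

# colored() wraps the text in the escape sequence and a trailing reset.
for my $outcome (qw(pass fail todo_fail skip todo_pass)) {
    print colored( "t/example.t: $outcome", $color_for{$outcome} ), "\n";
}
```

Keeping the mapping in one hash also makes it the natural thing to expose in a config file later, so users can override any outcome's color.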