Re: Dealing with balls o' mud (was: Re: Test::Builder feature request)
Another vote here for "Working Effectively with Legacy Code."

On Jan 14, 2007, at 10:35 AM, Michael G Schwern wrote:

> ... (where's my refactoring browser!?)

http://e-p-i-c.sourceforge.net/ is an Eclipse plugin for Perl. It provides "extract subroutine" using Devel::Refactor. I believe Jeff Thalhammer is working on adding Perl::Critic support to EPIC as well.

> At absolute minimum, with a big ball of mud, you can do dumb high level
> "exact input/output" tests of the sort which would normally be frowned upon.

Yes, and you need not stop at "exact" input/output. Putting automated end-to-end tests in place can indeed cover a good deal of the code - these would be tests that could also be called "acceptance" or "integration" tests. Using the web app example:

- Log in.
- Attempt login with bad credentials (should fail).
- Add item to shopping cart.
- Remove item from cart.

Etc. You can run many of these tests every 5 minutes all day, every day, and use them under something like Nagios or NetSaint as part of a monitoring system.

More about balls o' mud:

- Add "seams" as described in "Working Effectively with Legacy Code." Seams are places in the code where you can alter its behavior without editing at that place (once the seam is in place). For example, replacing an expression like ( $dollars .. $donuts ) with a subroutine call, Utils->get_range_of_items($dollars, $donuts), means you can now make changes in get_range_of_items(), which could be in a separate (well tested) class.

Perhaps the most interesting area (to me) about balls o' mud is the question of how to decide which refactorings and improvements are worth the effort. On a big ball o' mud this is a very hard problem. It requires that one:

1. Estimate the effort/cost of a refactoring/improvement, and
2. Estimate the value of a refactoring/improvement.

Since the ball o' mud is, by definition, hard to understand, these estimates are even harder than usual.
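[Editor's note: a minimal sketch of the seam idea described above. The Utils package name and the range behavior come from the example in the message; everything else here is illustrative.]

```perl
use strict;
use warnings;

package Utils;

# The seam: the inline range expression now lives in one well-tested place.
sub get_range_of_items {
    my ( $class, $start, $end ) = @_;
    return ( $start .. $end );    # identical behavior to the inline expression
}

package main;

# Call site in the ball o' mud, after the edit:
my ( $dollars, $donuts ) = ( 1, 5 );
my @items = Utils->get_range_of_items( $dollars, $donuts );
print "@items\n";    # 1 2 3 4 5
```

The payoff is that behavior changes (logging, caching, validation) can now happen inside get_range_of_items() without touching the mud itself.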
--- Matisse Enzer <[EMAIL PROTECTED]> http://www.matisse.net/ - http://www.eigenstate.net/
Re: Dealing with balls o' mud (was: Re: Test::Builder feature request)
--- Michael G Schwern <[EMAIL PROTECTED]> wrote:

> I've thought things like that in the past, innocent refactorings, and
> broke shit. Especially since they have to be done by hand (where's
> my refactoring browser!?)
>
> At absolute minimum, with a big ball of mud, you can do dumb high
> level "exact input/output" tests of the sort which would normally be
> frowned upon.

That's what we're doing at work to provide some measure of safety on a mid-sized app that we're cleaning up. However, that still doesn't address the issue that sometimes you absolutely don't want to run the code, because some of the things it does can be dangerous or problematic - hence the (admittedly awful) hack of sometimes modifying the code to check to see if we're testing. That hack, however, is easier to implement than the innocent refactorings.

Cheers,
Ovid

-- Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
Dealing with balls o' mud (was: Re: Test::Builder feature request)
[EMAIL PROTECTED] wrote:

> 1. To test the code, you need to change it.
> 2. Before changing the code, you should test it.
>
> How do we resolve these two opposites? We change as little as
> possible.

*snip*

> A lot of my more recent thoughts about testing and development have
> come from a wonderful book, "Working Effectively with Legacy Code" by
> Michael Feathers. The most memorable line from that book (that I've
> read so far - I'm still in the first 25%) can be paraphrased -

*clicky* on my wishlist.

> I would also posit that no matter how bad a codebase, there is
> _always_ something you can do without causing damage - in the 800
> line subroutine, take a chunk, place it in a function in a namespace,
> and test that in isolation. Take another chunk, repeat till you have
> a 500 line subroutine with some semi-meaningful calls to nicely
> tested functions. And so on.

I've thought things like that in the past, innocent refactorings, and broke shit. Especially since they have to be done by hand (where's my refactoring browser!?)

At absolute minimum, with a big ball of mud, you can do dumb high level "exact input/output" tests of the sort which would normally be frowned upon. For example, got a big web app? Write a script to do something with it and save the resulting HTML. [2] Then test that doing the same thing again produces exactly that HTML. [1]

These tests are fragile and ugly and don't provide good coverage, but they can warn you if something changed. Then you can refactor the mud into something more testable with a bit of a safety net.

[1] Cleverness is necessary for dynamic content, such as a timestamp on the page. Someone at OSCON mentioned a testing module which could force time to stand still for testing but I can't find it.

[2] Or use something like Selenium.
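[Editor's note: a rough sketch of such an "exact input/output" (golden master) test. The fetch_page() routine is a stand-in for actually driving the web app - a real version would use LWP, WWW::Mechanize, or Selenium as mentioned - and the temp file just keeps the sketch self-contained.]

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);
use Test::More tests => 1;

# Stand-in for "do something with the web app and capture the HTML".
sub fetch_page { return "<html><body>cart: 2 items</body></html>\n" }

# Save one run's output as the golden copy (normally done once, by hand,
# after eyeballing that the output looks right).
my ( $out, $golden_file ) = tempfile( UNLINK => 1 );
print {$out} fetch_page();
close $out or die "close failed: $!";

# Later runs: the same action must produce exactly the saved HTML.
open my $in, '<', $golden_file or die "can't read $golden_file: $!";
my $expected = do { local $/; <$in> };
close $in;

is( fetch_page(), $expected, 'same action produces exactly the same HTML' );
```

Fragile, as the message says - any deliberate change to the page means regenerating the golden file - but it is a safety net that costs almost nothing to write.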
RE: Test::Builder feature request
I know this is several days old, but we in Oz get the weekend before almost anyone else, so bear with me.

When dealing with a BoM (Ball of Mud), there is a fundamental collision of two concerns here:

1. To test the code, you need to change it.
2. Before changing the code, you should test it.

How do we resolve these two opposites? We change as little as possible.

A lot of my more recent thoughts about testing and development have come from a wonderful book, "Working Effectively with Legacy Code" by Michael Feathers. The most memorable line from that book (that I've read so far - I'm still in the first 25%) can be paraphrased: 'Whatever the difficulties with a BoM codebase, never let "best" be the enemy of "better".'

I would posit that _none_ of us have _perfectly_ clean codebases we deal with from day to day - they occupy a space from 'almost perfect' to 'abandon all hope'.

I would also posit that no matter how bad a codebase, there is _always_ something you can do without causing damage - in the 800 line subroutine, take a chunk, place it in a function in a namespace, and test that in isolation. Take another chunk, repeat till you have a 500 line subroutine with some semi-meaningful calls to nicely tested functions. And so on.

Enough preaching - the oven says my pies are ready!!

Leif

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, 12 January 2007 6:49 PM
To: perl-qa@perl.org
Subject: RE: Test::Builder feature request

--- [EMAIL PROTECTED] wrote:
> I think code under test that has "if I'm under test" statements is
> intrinsically weak. You want to test "what it does", not "what it does
> when under test". Changing the code for testing means you're not really
> testing it, you're testing a variation of it.

I completely agree. In a perfect world, that's extremely sensible. But I wonder, has no one here ever sat down to the daunting task of taking a Ball of Mud and trying to test it? So you want to clean up that Ball, what do you do?
Before refactoring, write tests to verify all behavior, including bugs. That's pretty much testing dogma, and it's dogma I generally subscribe to.

That's where the problem comes in. You're looking at an 800 line subroutine, no strict, no warnings, and scattered hither and yon throughout that subroutine are Bad Things to have happen while testing. Not everything is that easy to override or mock.

So that leaves one in a dilemma. Not only are the tests excruciatingly difficult to write, but the mere act of running them is dangerous. Customers potentially get rebilled, accounts get deleted, support tickets get sent out, and so on.

So I guess that while everyone else has the luxury of working with systems which are clean enough that it's not too expensive to work around issues like this, I'll shelve this suggestion and go back to the real world and continue trying to clean up this legacy code.

Cheers,
Ovid

-- Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
Re: Test::Builder feature request
chromatic wrote:

> ... the "Star Trek: Generations" fallacy. You steal a spaceship, which
> flies through space, to fly through space to a planet, flying through
> space, where a temporal anomaly, which also flies through space,
> deflected by a supernova, which you flew through space in your
> spaceship which flies through space, passes close enough to the
> planet, because both of them fly through space, that you can jump off
> a bridge into it.)

I saw that movie, but I never realized it was so complicated! (I blame drugs.)

jimk
Re: Test::Builder feature request
chromatic wrote:

> (I know; it's not exactly what you were asking. I just wanted to get that in
> a public mailing list so I could call that the "Star Trek: Generations"
> fallacy. You steal a spaceship, which flies through space, to fly through
> space to a planet, flying through space, where a temporal anomaly, which also
> flies through space, deflected by a supernova, which you flew through space
> in your spaceship which flies through space, passes close enough to the
> planet, because both of them fly through space, that you can jump off a
> bridge into it.)

It's all tachyons, man.

> I'm sympathetic to the argument that your test file is going to be ugly when
> the code you're testing is ugly, but it seems to me that containing that
> ugliness within the specific test file until you can refactor the test file
> is much better than bolting another ad-hoc feature onto a testing system
> which already makes way too many underspecified assumptions and would be
> fairly difficult to replace with something nicer, someday.

I agree. There's a simple, existing solution to this problem, and it's not one we want to encourage anyway. The situation would be different if the same people didn't control the tests and the code being tested, and thus required some sort of coordination, but they do control both sides.

And it won't work anyway. Having $ENV{RUNNING_TESTS} = 1 should presumably indicate that we're running in a testing environment. Ok, but how does Test::Builder know that? Answer: it doesn't. Loading Test::Builder does not imply tests are being run. An example of this, off the top of my head, is Jifty, which always loads Test::Builder (for hacky reasons, I admit). I'm sure with a little thought one could find more, but even if not, I do not want the loading of Test::Builder to imply we're running in a test environment. It limits Test::Builder.

Ok, what about setting it when a plan is initiated? Surely that means tests are being run?
Consider Test::AtRuntime, a Test::Builder derived module designed to embed tests in the code, to be executed during normal operations to help catch and diagnose errors. Basically an extension of the run-time assert() concept. Test::Builder is loaded. There's a plan. Test functions are being executed. But the system is operating normally. A similar problem would occur if I ever got around to moving Carp::Assert over to using Test::Builder for its guts.

This might seem like nit-picking for a corner case, but Test::Builder is designed to be generic. The more automatic "convenience" features which get built into it, the less generic it is. The less generic it is, the fewer neato testing modules can be spawned from it. So I push back at them and encourage them to be built one level up, on top of Test::Builder.
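[Editor's note: for illustration only, a crude sketch of the run-time assert() concept being described - checks that run during normal operation, not under a harness. This is not Test::AtRuntime's or Carp::Assert's actual interface, just the idea.]

```perl
use strict;
use warnings;

# A homemade assert(): dies in production when an invariant is violated,
# so errors surface close to where they happened.
sub assert {
    my ( $ok, $msg ) = @_;
    die "Assertion failed: $msg\n" unless $ok;
}

sub apply_discount {
    my ( $price, $pct ) = @_;
    assert( $pct >= 0 && $pct <= 100, "discount '$pct' out of range" );
    my $discounted = $price * ( 100 - $pct ) / 100;
    assert( $discounted <= $price, 'a discount may never raise the price' );
    return $discounted;
}

print apply_discount( 200, 25 ), "\n";    # 150
```

The point Schwern is making: if such checks were emitted through Test::Builder, a plan and test functions would be running even though the system is "operating normally", so Test::Builder cannot infer a test environment from its own use.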
Re: Test::Builder feature request
Hi,

On Friday 12 January 2007 01:49, [EMAIL PROTECTED] wrote:

> You don't have to use objects to get the same end effects as mocking
> objects.

Right! Now, as my "devil's advocate" signature tried to show, this thread is for the fun of the discussion. I'm sure all of us, and Ovid more than the rest of us, know what good quality code should look like. I was just wondering if we should be flexible, which could be a quality, or not flexible and be more "right", though that could mean losing some of the perlishness we all like.

Cheers,
Nadim.
RE: Test::Builder feature request
--- [EMAIL PROTECTED] wrote:

> I think code under test that has "if I'm under test" statements is
> intrinsically weak. You want to test "what it does", not "what it
> does when under test". Changing the code for testing means you're not
> really testing it, you're testing a variation of it.

I completely agree. In a perfect world, that's extremely sensible. But I wonder, has no one here ever sat down to the daunting task of taking a Ball of Mud and trying to test it? So you want to clean up that Ball, what do you do? Before refactoring, write tests to verify all behavior, including bugs. That's pretty much testing dogma, and it's dogma I generally subscribe to.

That's where the problem comes in. You're looking at an 800 line subroutine, no strict, no warnings, and scattered hither and yon throughout that subroutine are Bad Things to have happen while testing. Not everything is that easy to override or mock.

So that leaves one in a dilemma. Not only are the tests excruciatingly difficult to write, but the mere act of running them is dangerous. Customers potentially get rebilled, accounts get deleted, support tickets get sent out, and so on.

So I guess that while everyone else has the luxury of working with systems which are clean enough that it's not too expensive to work around issues like this, I'll shelve this suggestion and go back to the real world and continue trying to clean up this legacy code.

Cheers,
Ovid

-- Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
RE: Test::Builder feature request
You don't have to use objects to get the same end effects as mocking objects. Mocking is a technique for OO code to achieve the following aim: "the code under test should not be changed to test it". The code _should_ (I'd prefer _must_, but let's not get into absolutes just yet) NOT be aware it's under test. There are a number of ways to do this.

If the code under test uses a Mail object of some kind, creating a mock object for it to use is pretty simple.

If the code under test makes a function call to a module (in Perl-world) or a library (in C/C++-world), you should provide a test library for it to call, one that has exactly the same API as the production library, plus extra functions to set up what it does when the API is used. For example, say the Mail API has a send() function. Your code under test calls 'send(some args)' at some point. The test harness calls the test library's 'setup_send()' function to do lots of interesting things at various times - pretend the mail went out ok, pretend the mail bounced, pretend lots of stuff.

In the Perl world, (recommended) use the Mock:: modules from CPAN. Or (not recommended) the real Mail module could be .../Prod/Mail.pm, and the test version could be ../Test/Mail.pm, and you get to play stupid PERL5LIB games. In C/C++, you can change the LD_LIBRARY_PATH for linking the real or test versions.

If your code under test makes a system() call to send mail, you get to play path tricks again so that a dummy mail program is called instead of the real one. This may mean you have to chroot() to an env where everything is fake.

I think code under test that has "if I'm under test" statements is intrinsically weak. You want to test "what it does", not "what it does when under test". Changing the code for testing means you're not really testing it, you're testing a variation of it.
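[Editor's note: a minimal sketch of the "test library with the same API" idea described above. The module and function names (My::Mail::Fake, send_mail, setup_send_mail) are invented for illustration; the premise is that the production module exposes the same send_mail() call.]

```perl
use strict;
use warnings;

package My::Mail::Fake;

my @sent;               # every attempted send is recorded; no MTA involved
my $behavior = 'ok';    # what send_mail() should pretend happened

sub setup_send_mail {   # extra, test-only function: 'ok' or 'bounce'
    $behavior = shift;
}

sub send_mail {         # same API the production mailer would have
    my (%args) = @_;
    push @sent, \%args;
    return $behavior eq 'ok' ? 1 : 0;
}

sub sent_messages { return @sent }

package main;

# The harness decides what the fake should pretend, then runs the code:
My::Mail::Fake::setup_send_mail('bounce');
my $ok = My::Mail::Fake::send_mail( to => 'user@example.com', body => 'hi' );
print $ok ? "delivered\n" : "bounced\n";    # bounced
printf "%d message(s) attempted\n", scalar My::Mail::Fake::sent_messages();
```

The code under test never learns it is being tested; only which Mail module got loaded differs, via PERL5LIB or an explicit use line in the harness.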
Leif

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, 12 January 2007 11:04 AM
To: perl-qa@perl.org
Subject: Re: Test::Builder feature request

Nadim Khemir writes:

> On Thursday 11 January 2007 18:04, Ovid wrote:
> > > Just one, Shouldn't the mailer "object" be mocked and the mail
> > > sending checked?
> > Absolutely, but how do you know to mock it or really send the email
> > unless you know that you're being run by tests?
>
> Aren't you mixing contexts here? The code to be checked is sending mail
> (right?). The test framework mocks the mail object so only the test
> code needs to do something special, and I believe the test code knows
> the test code is running.

You're assuming that the mail-sending code is an object, and separate from other things that should be run in the test. In bad code (which Ovid stated this is) those aren't reasonable assumptions to make!

> > ... when one is working with ugly code, sometimes it can be very
> > difficult to refactor complicated bits out so that they can be
> > easily overridden by tests
>
> Yeah, you have a bunch of ugly code and the best way is to make it
> even more ugly by making it aware of the testing.

Sometimes. At least in the short term.

> I've never seen any requirement that says: "when testing, don't do
> this and that".

Me neither. But I've seen lots of requirements that don't make any mention of testing at all. And lots of code that doesn't even have any requirements about anything. But suppose some code does a bunch of processing and then finally interacts with the outside world in some way (sending an e-mail, completing a financial transaction, controlling a robot); during testing it may be readily apparent that it would be disruptive to have that action occur.

> I'm actually wondering if code which has knowledge of it being tested
> is testable at all! Well, it's not, because you can never test the
> "send mail" feature.

Sure.
But you can at least test everything else, all the processing up to the point which sends the mail. And you could, in the test environment, put the data that would be in the mail somewhere else, where it can be tested, so that just leaves the actual mail transport being untested -- and that's probably being done with a CPAN module or something which has been tested elsewhere.

> I'm sure you're not inventing this, but one actually made the code
> worse with that kind of hack. IMHO, just enabling this kind of code
> is going against everything you want to achieve: testable applications.

No, it's enabling you to test other code that is near an action which absolutely cannot be run in a test environment, thereby making more of the application testable.

Smylers
Re: Test::Builder feature request
On Thursday 11 January 2007 06:30, Ovid wrote:

> Quite often people will write code which tests to see if
> $ENV{HARNESS_ACTIVE} is true. For example, this allows them to not
> email support from their code while testing. This variable is set in
> Test::Harness. However, this causes a problem when someone
> accidentally does this:
>
>     perl t/email_support.t
>
> You can verify this behavior by running this with 'perl' and 'prove'.
> It will fail when run through Perl.
>
>     use Test::More tests => 1;
>     ok $ENV{HARNESS_ACTIVE}, 'running in the test harness';

Hm. You're asking for a general purpose way *in your test file* to tell *if someone is running your test file*. I wonder if it would be possible to put code *in your test file* that detects when someone has run it.

(I know; it's not exactly what you were asking. I just wanted to get that in a public mailing list so I could call that the "Star Trek: Generations" fallacy. You steal a spaceship, which flies through space, to fly through space to a planet, flying through space, where a temporal anomaly, which also flies through space, deflected by a supernova, which you flew through space in your spaceship which flies through space, passes close enough to the planet, because both of them fly through space, that you can jump off a bridge into it.)

I'm sympathetic to the argument that your test file is going to be ugly when the code you're testing is ugly, but it seems to me that containing that ugliness within the specific test file until you can refactor the test file is much better than bolting another ad-hoc feature onto a testing system which already makes way too many underspecified assumptions and would be fairly difficult to replace with something nicer, someday.

It's not as if your test file won't be full of ugly code anyway, in this situation. Encouraging other people to fill their code with ugly hacks seems suboptimal. I sleep at night just fine thinking that having to work around ugly code is painful.
Perhaps it will encourage them to fix the ugliness. It *should* hurt when you have different code paths in your application for and against testing. My preference would be repeated electric shocks, but I'll live with a general sense of nausea. -- c
Re: Test::Builder feature request
Nadim Khemir writes:

> On Thursday 11 January 2007 18:04, Ovid wrote:
> > > Just one, Shouldn't the mailer "object" be mocked and the mail
> > > sending checked?
> > Absolutely, but how do you know to mock it or really send the email
> > unless you know that you're being run by tests?
>
> Aren't you mixing contexts here? The code to be checked is sending mail
> (right?). The test framework mocks the mail object so only the test
> code needs to do something special, and I believe the test code knows
> the test code is running.

You're assuming that the mail-sending code is an object, and separate from other things that should be run in the test. In bad code (which Ovid stated this is) those aren't reasonable assumptions to make!

> > ... when one is working with ugly code, sometimes it can be very
> > difficult to refactor complicated bits out so that they can be
> > easily overridden by tests
>
> Yeah, you have a bunch of ugly code and the best way is to make it
> even more ugly by making it aware of the testing.

Sometimes. At least in the short term.

> I've never seen any requirement that says: "when testing, don't do
> this and that".

Me neither. But I've seen lots of requirements that don't make any mention of testing at all. And lots of code that doesn't even have any requirements about anything. But suppose some code does a bunch of processing and then finally interacts with the outside world in some way (sending an e-mail, completing a financial transaction, controlling a robot); during testing it may be readily apparent that it would be disruptive to have that action occur.

> I'm actually wondering if code which has knowledge of it being tested
> is testable at all! Well, it's not, because you can never test the
> "send mail" feature.

Sure. But you can at least test everything else, all the processing up to the point which sends the mail.
And you could, in the test environment, put the data that would be in the mail somewhere else, where it can be tested, so that just leaves the actual mail transport being untested -- and that's probably being done with a CPAN module or something which has been tested elsewhere.

> I'm sure you're not inventing this, but one actually made the code
> worse with that kind of hack. IMHO, just enabling this kind of code
> is going against everything you want to achieve: testable applications.

No, it's enabling you to test other code that is near an action which absolutely cannot be run in a test environment, thereby making more of the application testable.

Smylers
Re: Test::Builder feature request
On Thursday 11 January 2007 18:04, Ovid wrote:

>> Just one, Shouldn't the mailer "object" be mocked and the mail
>> sending checked?
>
> Absolutely, but how do you know to mock it or really send the email
> unless you know that you're being run by tests?

Aren't you mixing contexts here? The code to be checked is sending mail (right?). The test framework mocks the mail object so only the test code needs to do something special, and I believe the test code knows the test code is running.

On Thursday 11 January 2007 18:54, Ovid wrote:

> Well, as a general rule, one wants to minimize any reliance on
> knowledge of testing in one's code,

That is an understatement.

> but the fact remains that when one
> is working with ugly code, sometimes it can be very difficult to
> refactor complicated bits out so that they can be easily overridden by
> tests (this tends not to be the case with clean code). Thus, when one
> has really bad code that shouldn't be run while testing, it can be
> useful to have such an environment variable.
>
> Note that I see no problem with testing hooks which make code easier to
> override, but code shouldn't have knowledge of tests per se. That
> being said, in the real world, it's not always practical to avoid this,
> particularly when you are adding tests to an old, untested code base
> and you need to get tests in place prior to refactoring.

Yeah, you have a bunch of ugly code and the best way is to make it even more ugly by making it aware of the testing. I've never seen any requirement that says: "when testing, don't do this and that". I'm actually wondering if code which has knowledge of it being tested is testable at all! Well, it's not, because you can never test the "send mail" feature. I'm sure you're not inventing this, but one actually made the code worse with that kind of hack. IMHO, just enabling this kind of code is going against everything you want to achieve: testable applications.
If someone would ask me, I'd vote for removing that feature altogether. It might make it more difficult to be "practical and wrong", but I would live with that.

Nadim, the devil's advocate :)
Re: Test::Builder feature request
--- Paul Johnson <[EMAIL PROTECTED]> wrote:

> Now I can see uses for knowing whether or not you are being run as part
> of an installation, or in some automated environment, and I can imagine
> someone would have a use for HARNESS_ACTIVE, though I can't see it
> myself, but I'm not sure this is it.
>
> And I'm afraid I didn't understand the paragraph about RUNNING_TESTS.
> Perhaps it's just me, but if you don't get any sensible comments on
> that you might want to try that paragraph again.

Well, as a general rule, one wants to minimize any reliance on knowledge of testing in one's code, but the fact remains that when one is working with ugly code, sometimes it can be very difficult to refactor complicated bits out so that they can be easily overridden by tests (this tends not to be the case with clean code). Thus, when one has really bad code that shouldn't be run while testing, it can be useful to have such an environment variable.

Note that I see no problem with testing hooks which make code easier to override, but code shouldn't have knowledge of tests per se. That being said, in the real world, it's not always practical to avoid this, particularly when you are adding tests to an old, untested code base and you need to get tests in place prior to refactoring.

Cheers,
Ovid

-- Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
Re: Test::Builder feature request
On Thu, Jan 11, 2007 at 09:04:54AM -0800, Ovid wrote:

> --- Nadim Khemir <[EMAIL PROTECTED]> wrote:
> > On Thursday 11 January 2007 15:30, Ovid wrote:
> > > Quite often people will write code which tests to see if
> > > $ENV{HARNESS_ACTIVE} is true. For example, this allows them to not
> > > email support from their code while testing. This variable is set in
> > > Test::Harness. However, this causes a problem when someone
> > > accidentally does this:
> > >
> > > ...
> > >
> > > Thoughts?
> >
> > Just one, Shouldn't the mailer "object" be mocked and the mail
> > sending checked?
>
> Absolutely, but how do you know to mock it or really send the email
> unless you know that you're being run by tests?

If I needed to know for sure whether I was running a test or not, I think I would make sure the test specified that somehow. Whether that was by directly setting an environment variable, or by using a package, or by mocking a mailer or whatever, I think I would want to take the responsibility for that.

Now I can see uses for knowing whether or not you are being run as part of an installation, or in some automated environment, and I can imagine someone would have a use for HARNESS_ACTIVE, though I can't see it myself, but I'm not sure this is it.

And I'm afraid I didn't understand the paragraph about RUNNING_TESTS. Perhaps it's just me, but if you don't get any sensible comments on that you might want to try that paragraph again.

--
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net
Re: Test::Builder feature request
--- Nadim Khemir <[EMAIL PROTECTED]> wrote: > On Thursday 11 January 2007 15:30, Ovid wrote: > > Quite often people will write code which tests to see if > > $ENV{HARNESS_ACTIVE} is true. For example, this allows them to not > > email support from their code while testing. This variable is set > in > > Test::Harness. However, this causes a problem when someone > > accidentally does this: > > > >... > > > > Thoughts? > > Just one, Shouldn't the mailer "object" be mocked and the mail > sending checked? Absolutely, but how do you know to mock it or really send the email unless you know that you're being run by tests? Cheers, Ovid -- Buy the book -- http://www.oreilly.com/catalog/perlhks/ Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
Re: Test::Builder feature request
On Thursday 11 January 2007 15:30, Ovid wrote: > Quite often people will write code which tests to see if > $ENV{HARNESS_ACTIVE} is true. For example, this allows them to not > email support from their code while testing. This variable is set in > Test::Harness. However, this causes a problem when someone > accidentally does this: > >... > > Thoughts? Just one, Shouldn't the mailer "object" be mocked and the mail sending checked? cheers, Nadim
Test::Builder feature request
Quite often people will write code which tests to see if $ENV{HARNESS_ACTIVE} is true. For example, this allows them to not email support from their code while testing. This variable is set in Test::Harness. However, this causes a problem when someone accidentally does this:

    perl t/email_support.t

You can verify this behavior by running this with 'perl' and 'prove'. It will fail when run through Perl.

    use Test::More tests => 1;
    ok $ENV{HARNESS_ACTIVE}, 'running in the test harness';

Since I have ',r' and ',t' bound to ':!perl %' and ':!prove -lv %' in vim, it's easy for me to mistype this since those keys are next to each other.

I am going to add the HARNESS_ACTIVE environment variable to TAPx::Harness, but having something like $ENV{RUNNING_TESTS} in Test::Builder would let users be trained to know if tests are really running or not.

As a workaround, you can fake it with this:

    package My::Test::More;
    use Test::Builder::Module;
    @ISA = qw(Test::Builder::Module);
    use Test::More;
    @EXPORT = @Test::More::EXPORT;
    $ENV{HARNESS_ACTIVE} = 1;
    1;

And then just 'use My::Test::More tests => $tests' in your code. That will always set the environment variable and provide some measure of safety. That protects someone accidentally running 'perl t/test.t', but clearly the harness really isn't active then, so it's a bit of a hack.

Thoughts?

Cheers,
Ovid

-- Buy the book -- http://www.oreilly.com/catalog/perlhks/
Perl and CGI -- http://users.easystreet.com/ovid/cgi_course/
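[Editor's note: for context, the guard pattern this whole thread is about looks roughly like the sketch below in production code. The function names are invented for illustration; only HARNESS_ACTIVE itself comes from Test::Harness.]

```perl
use strict;
use warnings;

# Hypothetical production routine: skips a dangerous side effect when it
# believes the test harness is running.
sub send_real_email { die "would have actually emailed support: $_[0]\n" }

sub notify_support {
    my ($msg) = @_;
    if ( $ENV{HARNESS_ACTIVE} ) {
        return "SKIPPED: $msg";      # under test: don't really email anyone
    }
    return send_real_email($msg);    # normal operation: do the real thing
}

# Simulate running under the harness:
local $ENV{HARNESS_ACTIVE} = 1;
print notify_support('disk full'), "\n";    # SKIPPED: disk full
```

Ovid's point is that this guard silently fails open when the file is run with plain 'perl' instead of 'prove', since nothing sets HARNESS_ACTIVE then.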
Re: Test::Builder feature request...
Michael G Schwern wrote:
> On 2/9/06, Geoffrey Young <[EMAIL PROTECTED]> wrote:
>
>> > This works:
>>
>> yes, excellent randy. thanks for that. it still seems a little
>> hackish but that's ok - hackish works for me if it means I can do
>> what I want and nobody else needs to do extra work :)
>>
>> I made some tweaks to your format and added a few minor notes here
>>
>> http://people.apache.org/~geoff/test-more-separately.tar.gz
>
> A less hackish version of plan.t is...
>
>   use Test::More;
>   my $TB = Test::More->builder;
>   $TB->no_ending(1);
>
>   plan tests => 3;
>
>   print qx!perl t/response.pl!;

cool, thanks. I've updated the example package to include that format as well.

--Geoff
Re: Test::Builder feature request...
On 2/9/06, Geoffrey Young <[EMAIL PROTECTED]> wrote:

> > This works:
>
> yes, excellent randy. thanks for that. it still seems a little hackish
> but that's ok - hackish works for me if it means I can do what I want
> and nobody else needs to do extra work :)
>
> I made some tweaks to your format and added a few minor notes here
>
> http://people.apache.org/~geoff/test-more-separately.tar.gz

A less hackish version of plan.t is...

  use Test::More;
  my $TB = Test::More->builder;
  $TB->no_ending(1);

  plan tests => 3;

  print qx!perl t/response.pl!;
Re: Test::Builder feature request...
Randy W. Sims wrote:

Adam Kennedy wrote:

Randy W. Sims wrote:

Adam Kennedy wrote:

This works:

  ---test.pl---
  use Test::More tests => 1;
  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl!;
  $Test->current_test($counter + 1);

But why 1? Why not 5? or 10?

It has to be set to the number of tests run in the other process. I don't know if there is a way to do something like no_plan for the sub process... I don't think so... Every time pass(), ok(), etc is called it updates the counter. In the sub process there is no way to pass back the internal counter, so you have to update the counter manually.

Well, that would be why you allow the sub-process's plan code to run as normal. When you get the fragment back, you can update it from the header or the foot, but not print the actual header/footer.

You mean capture the output from the child process? Then allow the parent to generate the test output from the captured & parsed output of the child? That would mean for any lengthy child process there would be a pause until the child completed; only then would the parent output the results from the child.

Otherwise, I guess you could Tee the output from the child. Is that what you mean by getting the fragment back?

Well, what I had actually intended originally for my implementation was that you would split off the server process and have STDOUT/ERROR writing to a known file. Then either explicitly or at END time, the main script would collect up the files for however many async things it had spawned off, and attach them to the end.

So rather than having the tests intermingling down the test output, they'd be in a separate block at the end. But then in doing that I was trying for the most simple approach I could think of.

So by getting the fragment back I'd been thinking more in terms of loading the fragments after the forked server was finished.

Adam K
Re: Test::Builder feature request...
>> One of the problems is going to be numbering, surely?

but it shouldn't need to be, right? I mean, TAP is merely a protocol and there shouldn't be a requirement that the bookkeeping happen in the same process as the TAP-emitting process, I wouldn't think.

in fact, if someone were implementing my own TAP interpretation now without any knowledge of how Test::More works - as in, say I need a java TAP interpretation and handed it off to someone - I would expect the java interpretation to just spit out the proper stuff and have Test::Harness interpret those results solo. and, in actuality, the Test::Harness::TAP docs seem to indicate that is all that is required.

but I understand why the perl implementations have this tie. I just wanted to be able to work around it.

> This works:

yes, excellent randy. thanks for that. it still seems a little hackish but that's ok - hackish works for me if it means I can do what I want and nobody else needs to do extra work :)

I made some tweaks to your format and added a few minor notes here

http://people.apache.org/~geoff/test-more-separately.tar.gz

thanks all.

--Geoff
Re: Test::Builder feature request...
Adam Kennedy wrote:

Randy W. Sims wrote:

Adam Kennedy wrote:

This works:

  ---test.pl---
  use Test::More tests => 1;
  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl!;
  $Test->current_test($counter + 1);

But why 1? Why not 5? or 10?

It has to be set to the number of tests run in the other process. I don't know if there is a way to do something like no_plan for the sub process... I don't think so... Every time pass(), ok(), etc is called it updates the counter. In the sub process there is no way to pass back the internal counter, so you have to update the counter manually.

Well, that would be why you allow the sub-process's plan code to run as normal. When you get the fragment back, you can update it from the header or the foot, but not print the actual header/footer.

You mean capture the output from the child process? Then allow the parent to generate the test output from the captured & parsed output of the child? That would mean for any lengthy child process there would be a pause until the child completed; only then would the parent output the results from the child. Otherwise, I guess you could Tee the output from the child. Is that what you mean by getting the fragment back?

It also means you get the numbers in order in the main tests too, because of the reprocessing.

The counter gets out of order in the snippet I sent if any tests are added before the child process. That can be fixed with the code below (Alt: use an $ENV variable to pass $counter), but you still need to know how many tests are running in the child ahead of time. I guess you could write out the counter to a file in the child & read it back in the parent, or any other normal means of IPC. All of this would presumably be wrapped in a Test::* module to hide the ugly.
  ---plan.t---
  use Test::More tests => 3;

  pass('Parent: Begin');

  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl $counter!;
  $Test->current_test($counter + 1);

  pass('Parent: End');
  __END__

  ---response.t---
  use Test::More no_plan => 1;

  my $Test = Test::More->builder;
  $Test->no_header(1);

  my $counter = shift;
  $Test->current_test($counter);

  pass('Child');
  __END__
Re: Test::Builder feature request...
On 2/8/06, Adam Kennedy <[EMAIL PROTECTED]> wrote:

> Geoffrey Young wrote:
> > hi all :)
> >
> > there's a feature split I'm itching for in Test::Builder, etc - the
> > ability to call is() and have it emit TAP free from the confines of
> > plan(). not that I don't want to call plan() (or no_plan) but I want
> > to do that in a completely separate perl interpreter. for example, I
> > want to do something that looks a bit like this
> >
> >   use Test::More tests => 1;
> >
> >   print qx!perl t/response.pl!;
> >
> > where response.pl makes a series of calls to is(), ok(), whatever.
> > while this may seem odd it's actually not - I'd like to be able to
> > plan() tests within a client *.t script but have the responses come
> > from one (or more) requests to any kind of server (httpd, smtp,
> > whatever).
> >
> > currently in httpd land we can do this by calling plan() and is()
> > from within a single server-side perl script, but the limitation
> > there is that you can only do that once - if I want to test, say,
> > keepalives I can't have a single test script make multiple requests
> > each with their own plan() calls without things getting tripped up.
> >
> > so, I guess my question is whether the plan->is linkage can be broken
> > in Test::Builder/Test::Harness/wherever and still keep the
> > bookkeeping intact so that the library behaves the same way for the
> > bulk case. or maybe at least provide some option where calls to is()
> > don't bork out because there's no plan (and providing an option to
> > Test::More where it doesn't send a plan header).
> >
> > so, thoughts or ideas? am I making any sense?
> >
> > --Geoff
>
> One of the problems is going to be numbering, surely?
>
> I've just started myself mucking around with some ideas where I wanted
> to fork off a server process and then test in BOTH halves of a
> connection at the same time. It sounds like something relatively
> similar to what you need to do.
> One of the things I didn't really like about generating fragments is
> you don't really get a chance to count each set, only the total (or
> worse, no plans at all).
>
> What I think might be a useful approach is being able to "merge"
> fragments to test output.
>
> So the lines from the external fragment would be parsed in, checked (in
> plan terms) and then re-emitted into the main test (which would have a
> plan totalling the whole group).

A long time ago, I suggested (and implemented) the idea of nested test numbers. The idea being that your output looks like

  1 # ok
  2.1 # ok
  2.2 # ok
  2.3 # ok
  3.1.1.1 # ok
  ...

you get the idea. The only rule would be that a.b.c.d must come before a.b.c.d+1 in the output. Each block can have a plan if you like; then you just create a block for each process/thread that will emit test results.

I've a feeling that Test::Harness would barf on the above output, but if you prefix all the numbers with . then it's happy. Of course it would be good to have a version of TH that also understands these nested test numbers properly; the . thing just lets you keep backward compatibility.

So this solves the present problem, and it also solves the problem of it being a pain to have a plan when you have data-driven testing (#tests = #data x #tests per datum, plus other adjustments, and don't forget those data that get an extra test, etc.). You can also put a group of tests into a subroutine and just plan for 1 test for each time the sub is called.

Anyway, I hereby suggest it again, but this time without an implementation. The last time, the biggest part of the implementation was rewiring Test::Builder to use a blessed ref rather than lexical variables for its object attributes, but now TB is like that by default, so the rest shouldn't be too hard :)

F
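The plan arithmetic Fergal mentions for data-driven tests (#tests = #data x #tests per datum, plus adjustments) can be made concrete with a tiny sketch. This is illustrative only; the data set and per-datum counts are invented:

```perl
use strict;
use warnings;

# Hypothetical data-driven test setup: three data, two checks each,
# and one datum that earns an extra test.
my @data = (
    { value => 2,  has_extra => 0 },
    { value => 10, has_extra => 1 },
    { value => 7,  has_extra => 0 },
);

my $tests_per_datum = 2;
my $extras = grep { $_->{has_extra} } @data;

# The plan must be computed up front, which is exactly the pain point:
# 3 data x 2 tests + 1 extra = 7.
my $plan = @data * $tests_per_datum + $extras;
```

Any change to the data or to the per-datum checks means recomputing this by hand, which is what per-block plans (or nested numbering) would avoid.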
Re: Test::Builder feature request...
Adam Kennedy wrote:

This works:

  ---test.pl---
  use Test::More tests => 1;
  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl!;
  $Test->current_test($counter + 1);

But why 1? Why not 5? or 10?

It has to be set to the number of tests run in the other process. I don't know if there is a way to do something like no_plan for the sub process... I don't think so... Every time pass(), ok(), etc is called it updates the counter. In the sub process there is no way to pass back the internal counter, so you have to update the counter manually.

  __END__
  ---response.pl---
  use Test::More no_plan => 1;
  Test::More->builder->no_header(1);
  Test::More->builder->no_ending(1);

BTW, not sure if the no_ending() is needed. It works with and without in this case.

  pass ('this was a passing test');
  ___END___

The problem was the test.pl file counter was never incremented so it never saw the planned test.
Re: Test::Builder feature request...
This works:

  ---test.pl---
  use Test::More tests => 1;
  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl!;
  $Test->current_test($counter + 1);

But why 1? Why not 5? or 10?

  __END__
  ---response.pl---
  use Test::More no_plan => 1;
  Test::More->builder->no_header(1);
  Test::More->builder->no_ending(1);

  pass ('this was a passing test');
  ___END___

The problem was the test.pl file counter was never incremented so it never saw the planned test.
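The two-file pattern above can be exercised in one self-contained script by writing the child to a temp file and running it with the current perl. This is a sketch of the same idea, not code from the thread; the temp-file name replaces t/response.pl, and the child uses the canonical `use Test::More 'no_plan';` spelling:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a child test script to a temp file, mirroring response.pl.
my ($fh, $child) = tempfile( SUFFIX => '.pl', UNLINK => 1 );
print $fh <<'CHILD';
use Test::More 'no_plan';
Test::More->builder->no_header(1);
Test::More->builder->no_ending(1);
pass('this was a passing test');
CHILD
close $fh;

# $^X is the perl running this script; capture the child's raw TAP.
# With no_header/no_ending set, the child emits only its result line,
# e.g. "ok 1 - this was a passing test".
my $tap = qx{$^X $child};
```

From here the parent would print $tap and bump current_test, exactly as in the snippet above.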
Re: Test::Builder feature request...
On Feb 8, 2006, at 12:41, Geoffrey Young wrote:

> with your suggestion I'm almost there:
>
>   1..1
>   ok 1 - this was a passing test
>   # No tests run!

What parts do you want left out?

Best, David
Re: Test::Builder feature request...
Adam Kennedy wrote:

Geoffrey Young wrote:

hi all :)

there's a feature split I'm itching for in Test::Builder, etc - the ability to call is() and have it emit TAP free from the confines of plan(). not that I don't want to call plan() (or no_plan) but I want to do that in a completely separate perl interpreter. for example, I want to do something that looks a bit like this

  use Test::More tests => 1;

  print qx!perl t/response.pl!;

where response.pl makes a series of calls to is(), ok(), whatever. while this may seem odd it's actually not - I'd like to be able to plan() tests within a client *.t script but have the responses come from one (or more) requests to any kind of server (httpd, smtp, whatever).

currently in httpd land we can do this by calling plan() and is() from within a single server-side perl script, but the limitation there is that you can only do that once - if I want to test, say, keepalives I can't have a single test script make multiple requests each with their own plan() calls without things getting tripped up.

so, I guess my question is whether the plan->is linkage can be broken in Test::Builder/Test::Harness/wherever and still keep the bookkeeping intact so that the library behaves the same way for the bulk case. or maybe at least provide some option where calls to is() don't bork out because there's no plan (and providing an option to Test::More where it doesn't send a plan header).

so, thoughts or ideas? am I making any sense?

--Geoff

One of the problems is going to be numbering, surely?

This works:

  ---test.pl---
  use Test::More tests => 1;
  my $Test = Test::More->builder;
  my $counter = $Test->current_test;
  print qx!perl t/response.pl!;
  $Test->current_test($counter + 1);

  __END__
  ---response.pl---
  use Test::More no_plan => 1;
  Test::More->builder->no_header(1);
  Test::More->builder->no_ending(1);

  pass ('this was a passing test');
  ___END___

The problem was the test.pl file counter was never incremented so it never saw the planned test.
Re: Test::Builder feature request...
Geoffrey Young wrote:

hi all :)

there's a feature split I'm itching for in Test::Builder, etc - the ability to call is() and have it emit TAP free from the confines of plan(). not that I don't want to call plan() (or no_plan) but I want to do that in a completely separate perl interpreter. for example, I want to do something that looks a bit like this

  use Test::More tests => 1;

  print qx!perl t/response.pl!;

where response.pl makes a series of calls to is(), ok(), whatever. while this may seem odd it's actually not - I'd like to be able to plan() tests within a client *.t script but have the responses come from one (or more) requests to any kind of server (httpd, smtp, whatever).

currently in httpd land we can do this by calling plan() and is() from within a single server-side perl script, but the limitation there is that you can only do that once - if I want to test, say, keepalives I can't have a single test script make multiple requests each with their own plan() calls without things getting tripped up.

so, I guess my question is whether the plan->is linkage can be broken in Test::Builder/Test::Harness/wherever and still keep the bookkeeping intact so that the library behaves the same way for the bulk case. or maybe at least provide some option where calls to is() don't bork out because there's no plan (and providing an option to Test::More where it doesn't send a plan header).

so, thoughts or ideas? am I making any sense?

--Geoff

One of the problems is going to be numbering, surely?

I've just started myself mucking around with some ideas where I wanted to fork off a server process and then test in BOTH halves of a connection at the same time. It sounds like something relatively similar to what you need to do.

One of the things I didn't really like about generating fragments is you don't really get a chance to count each set, only the total (or worse, no plans at all).

What I think might be a useful approach is being able to "merge" fragments to test output.

So the lines from the external fragment would be parsed in, checked (in plan terms) and then re-emitted into the main test (which would have a plan totalling the whole group).

Adam K
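Adam's "merge fragments" idea might be sketched roughly like this. `merge_fragment()` and its renumbering rules are my invention for illustration; a real version would also verify the fragment's plan against the results it contains:

```perl
use strict;
use warnings;

# Take a child's raw TAP, drop its plan header, and re-emit each result
# renumbered to continue from the parent's counter. Returns the updated
# counter and the merged lines.
sub merge_fragment {
    my ($tap, $counter) = @_;
    my @out;
    for my $line (split /\n/, $tap) {
        next if $line =~ /^1\.\.\d+$/;           # drop the child's plan header
        if ($line =~ /^(not )?ok\b\s*\d*\s*(.*)$/) {
            $counter++;
            push @out, ($1 ? 'not ' : '') . "ok $counter $2";
        }
        else {
            push @out, $line;                    # pass diagnostics through
        }
    }
    return ($counter, join "\n", @out);
}

my ($counter, $merged) = merge_fragment(
    "1..2\nok 1 - child first\nok 2 - child second",
    1,    # the parent has already run one test
);
# $merged is "ok 2 - child first\nok 3 - child second" and $counter is 3,
# so the parent's numbering stays continuous.
```

The parent would print $merged and plan for the total of its own tests plus every fragment's, which is the "plan totalling the whole group" Adam describes.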
Re: Test::Builder feature request...
>> so, thoughts or ideas? am I making any sense?
>
> Yes, you are.

*whew* :)

> I think that the subprocess can load Test::More and friends like this:
>
>   use Test::More no_plan => 1;
>   Test::More->builder->no_header(1);

cool, thanks.

> That will set No_Plan, Have_Plan, and No_Header to true, silencing the
> "Gotta have a plan!" error and the "1.." message at the end.

with your suggestion I'm almost there:

  1..1
  ok 1 - this was a passing test
  # No tests run!

http://people.apache.org/~geoff/test-more-separately.tar.gz if you want to try...

--Geoff
Re: Test::Builder feature request...
On Feb 8, 2006, at 11:41, Geoffrey Young wrote:

> so, I guess my question is whether the plan->is linkage can be broken
> in Test::Builder/Test::Harness/wherever and still keep the bookkeeping
> intact so that the library behaves the same way for the bulk case. or
> maybe at least provide some option where calls to is() don't bork out
> because there's no plan (and providing an option to Test::More where it
> doesn't send a plan header).
>
> so, thoughts or ideas? am I making any sense?

Yes, you are. I think that the subprocess can load Test::More and friends like this:

  use Test::More no_plan => 1;
  Test::More->builder->no_header(1);

That will set No_Plan, Have_Plan, and No_Header to true, silencing the "Gotta have a plan!" error and the "1.." message at the end.

HTH, David
Test::Builder feature request...
hi all :)

there's a feature split I'm itching for in Test::Builder, etc - the ability to call is() and have it emit TAP free from the confines of plan(). not that I don't want to call plan() (or no_plan) but I want to do that in a completely separate perl interpreter. for example, I want to do something that looks a bit like this

  use Test::More tests => 1;

  print qx!perl t/response.pl!;

where response.pl makes a series of calls to is(), ok(), whatever. while this may seem odd it's actually not - I'd like to be able to plan() tests within a client *.t script but have the responses come from one (or more) requests to any kind of server (httpd, smtp, whatever).

currently in httpd land we can do this by calling plan() and is() from within a single server-side perl script, but the limitation there is that you can only do that once - if I want to test, say, keepalives I can't have a single test script make multiple requests each with their own plan() calls without things getting tripped up.

so, I guess my question is whether the plan->is linkage can be broken in Test::Builder/Test::Harness/wherever and still keep the bookkeeping intact so that the library behaves the same way for the bulk case. or maybe at least provide some option where calls to is() don't bork out because there's no plan (and providing an option to Test::More where it doesn't send a plan header).

so, thoughts or ideas? am I making any sense?

--Geoff