Re: Test::Aggregate - Speed up your test suites
Ovid wrote:
>> Why not just load Perl once and fork for the execution of each test
>> script. You can pre-load modules before you fork.
>
> Forking is also more likely to be used for parallelization. Often code
> requires sweeping changes before it can be run in parallel. So this
> means we're reduced to running the code sequentially, and forking
> doesn't offer a huge advantage and can mask hidden state assumptions,
> like when naughty code is munging globals such as filehandles or
> built-in globals.
>
> Also, since forking is only emulated on Windows, it's not reliable
> (I've had it crash and burn more than once). I prefer to avoid writing
> modules that are limited to specific platforms.
>
> (I'm not saying forking is a bad solution, just a different one).
>
> Finally, Test::Aggregate is designed to have tests run with minimal
> changes. For many tests, just move them to the aggregate directory.
> No worries about which modules to preload or anything like that.
>
> Finally, if you think my code is such a bad idea, I'm sure folks would
> welcome alternatives.

No, no -- I was just wondering why that approach; it seemed quite odd,
and you've now explained it quite nicely. A large part of my initial
reaction was due to the use of the word "concatenation". Looking at the
module documentation, I see that it's nowhere near as simplistic as
that.

Aggregating tests is something I do a lot of; it's just that normally
I'm writing data-driven tests, and on larger code bases the module load
time can end up taking a non-trivial amount of time. I only care about
loading the modules as part of the test for the first couple of tests;
for the others I just use Test::Depends or something similar to skip if
a module fails to load. So, in the general case, I can probably
pre-load the lib/ modules for all but a few specially marked tests.
However, the usual problematic boundary between the harness and the
test is still there.
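For reference, the fork-based alternative being discussed could be sketched roughly like this (this is my illustration, not Test::Aggregate's approach; the stand-in test script and the preloaded-module comment are placeholders):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Write a stand-in "test script" so this sketch is self-contained;
# in reality you would glob('t/*.t') or similar.
my $dir    = tempdir( CLEANUP => 1 );
my $script = "$dir/01-demo.t";
open my $fh, '>', $script or die "Can't write $script: $!";
print $fh 'print "ok 1\n"; 1;';
close $fh;

# Pre-load anything slow to compile here, once, in the parent:
# use My::Heavy::Module;   # illustrative name

for my $t ($script) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # Child: run one test script, inheriting the pre-loaded modules.
        do $t;
        die "error in $t: $@" if $@;
        exit 0;
    }
    waitpid $pid, 0;   # sequential, per the parallelism caveat above
}
```

As Ovid notes, this buys you process isolation at the cost of relying on fork(), which is only emulated on Windows.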
How do you solve this for Test::Aggregate? Is it by making each
aggregated test one test at the TAP level?

Sam
Re: Test::Aggregate - Speed up your test suites
# from Ovid
# on Tuesday 01 January 2008 00:12:

>> Either way, it is glaringly bad code.
>>
>> a. any call to slurp() doesn't pass a filename -- screams of evil
>> b. 2-arg form of open -- banned
>> c. non-lexical filehandles -- banned
>
> This is the sort of stuff that tests are designed to catch, but stuff
> this bad *might* get missed with tight process boundaries.
> ...
> (such as the time someone was parsing Data::Dumper output without
> considering that I may have set $Data::Dumper::Indent to a different
> value than the default).

What are the chances that tests aggregated in this way will *actually*
catch the $D::D::Indent issue? The bad assumptions might still work,
right?

>> Perhaps you're trying to address the "code makes global state
>> assumptions" issue? Well, I think that might be borrowing trouble.
>
> I'm not directly trying to address it, but it's a side-benefit, and my
> real-world experience (as opposed to just sitting back and thinking
> about it) tells me that I gain more than I lose.

Yeah, you caught me thinking again. Too much is left to accident (of
both order and omission). You *might* find some of these sorts of bugs,
but you'll be looking at them through the same puzzling lens (i.e.
"what the? ... makes no sense!") and scratching your head just as much
as you would if they were to manifest in a normal test.

If your mission is to speed up the tests, lumping them all into one
file and running that accomplishes it -- but at the cost of any
distributed or parallel options. And there is a side-effect (whether or
not you claim it as a benefit). If your mission is to un-wtf the code,
a tool that parses it (and finds non-local()'d globals as lvalues,
distant lexicals, etc.) has the ability to point you directly at the
offending code. I bet concatenating all of the tests on CPAN together
would find some issues, but at the cost of how many false positives?
And how many would it miss?
>> How could one test variations on that singleton's
>> parameters with T::A?
>
> Read the docs. I explicitly address this issue.

Uh... "Be careful"? So, how does one actually deal with it?

--Eric
--
Chicken farmer's observation: Clunk is the past tense of cluck.
--- http://scratchcomputing.com ---
Re: Test::Aggregate - Speed up your test suites
On 01/01/2008, Ovid <[EMAIL PROTECTED]> wrote:
> --- Eric Wilhelm <[EMAIL PROTECTED]> wrote:
>
> > Do you happen to have another example? This one looks to me like
> > poorly written code in the test (or are you citing this as code in
> > the product?)
>
> What??? That's the point!
>
> > Either way, it is glaringly bad code.
> >
> > a. any call to slurp() doesn't pass a filename -- screams of evil
> > b. 2-arg form of open -- banned
> > c. non-lexical filehandles -- banned
>
> This is the sort of stuff that tests are designed to catch, but stuff
> this bad *might* get missed with tight process boundaries. When you're
> working with teams of programmers (and you do, virtually, if you use
> CPAN modules), it's not uncommon to find code which makes global state
> assumptions (package variables, Perl's built-ins, etc.).
>
> If you want, I could come up with far more subtle examples of code
> which demonstrate this, but I suspect we'll have to agree to disagree.
> This is a real-world problem I've encountered before and will
> encounter again (such as the time someone was parsing Data::Dumper
> output without considering that I may have set $Data::Dumper::Indent
> to a different value than the default).

Ah, this reminds me: one of these days someone needs to write a robust
Data::Dumper output validator. I tried to convince MJD it would be a
great example for HOP parser technology, and I think I almost
succeeded.

yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"
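A minimal illustration (mine, not from the thread) of the $Data::Dumper::Indent hazard Ovid describes: code that consumes Dumper output line-by-line, assuming the default Indent = 2 layout, sees something quite different once another test in the same process has set Indent = 0.

```perl
use strict;
use warnings;
use Data::Dumper;

my $data = { a => 1, b => 2 };
$Data::Dumper::Sortkeys = 1;    # stable key order for the demo

$Data::Dumper::Indent = 2;      # the default
my @default_lines = split /\n/, Dumper($data);

$Data::Dumper::Indent = 0;      # some other test changed it...
my @flat_lines = split /\n/, Dumper($data);

# A naive line-oriented "parser" now gets different input for
# identical data: multi-line output versus a single line.
printf "default: %d lines, flat: %d lines\n",
    scalar @default_lines, scalar @flat_lines;
```

With tight process boundaries each script starts from the default; aggregated into one process, the changed global leaks from test to test.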
Re: Test::Aggregate - Speed up your test suites
On Tuesday 01 January 2008 00:20:20 Ovid wrote:
> Not if you want your code to run under 5.005. Some people still have
> that issue.

Yeah, but I can count the number of people who have that issue and
simultaneously have permission to install new modules on one hand, and
still have all of my fingers left over.

-- c
Re: Test::Aggregate - Speed up your test suites
Oh, and if you're going to take my *deliberately* bad example to
task ...

--- Eric Wilhelm <[EMAIL PROTECTED]> wrote:

> > sub slurp {
> >     open FH, "< $file" or die $!;
> >     do { $/ = undef; <FH> }
> > }
>
> b. 2-arg form of open -- banned

Not if you want your code to run under 5.005. Some people still have
that issue. And you missed that $/ was not localized, a far more
serious error :)

Cheers,
Ovid
--
Buy the book  - http://www.oreilly.com/catalog/perlhks/
Perl and CGI  - http://users.easystreet.com/ovid/cgi_course/
Personal blog - http://publius-ovidius.livejournal.com/
Tech blog     - http://use.perl.org/~Ovid/journal/
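For completeness, a version of slurp() that addresses every complaint raised in the thread (a sketch of the obvious fix, not code from either poster): it takes the filename as an argument, uses the 3-arg form of open with a lexical filehandle, and local()izes $/ so the global is restored on scope exit.

```perl
use strict;
use warnings;

sub slurp {
    my ($file) = @_;    # filename passed in, not pulled from a global
    open my $fh, '<', $file or die "Can't open $file: $!";
    local $/;           # slurp mode; restored when this scope exits
    return <$fh>;
}
```

Note that the 3-arg open and lexical filehandle require Perl 5.6, which is exactly Ovid's 5.005 objection.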
Re: Test::Aggregate - Speed up your test suites
--- Eric Wilhelm <[EMAIL PROTECTED]> wrote:

> Do you happen to have another example? This one looks to me like
> poorly written code in the test (or are you citing this as code in the
> product?)

What??? That's the point!

> Either way, it is glaringly bad code.
>
> a. any call to slurp() doesn't pass a filename -- screams of evil
> b. 2-arg form of open -- banned
> c. non-lexical filehandles -- banned

This is the sort of stuff that tests are designed to catch, but stuff
this bad *might* get missed with tight process boundaries. When you're
working with teams of programmers (and you do, virtually, if you use
CPAN modules), it's not uncommon to find code which makes global state
assumptions (package variables, Perl's built-ins, etc.).

If you want, I could come up with far more subtle examples of code
which demonstrate this, but I suspect we'll have to agree to disagree.
This is a real-world problem I've encountered before and will encounter
again (such as the time someone was parsing Data::Dumper output without
considering that I may have set $Data::Dumper::Indent to a different
value than the default).

> Perhaps you're trying to address the "code makes global state
> assumptions" issue? Well, I think that might be borrowing trouble.

I'm not directly trying to address it, but it's a side-benefit, and my
real-world experience (as opposed to just sitting back and thinking
about it) tells me that I gain more than I lose.

> Consider e.g. a singleton which is customizable upon creation
> (by-design it assumes that only one set of parameters applies from
> creation onward.) How could one test variations on that singleton's
> parameters with T::A?

Read the docs. I explicitly address this issue.

Cheers,
Ovid
--
Buy the book  - http://www.oreilly.com/catalog/perlhks/
Perl and CGI  - http://users.easystreet.com/ovid/cgi_course/
Personal blog - http://publius-ovidius.livejournal.com/
Tech blog     - http://use.perl.org/~Ovid/journal/
Re: buildbot - an experiment
On Dec 31, 2007, at 4:24 PM, David Cantrell wrote:

> On Sat, Dec 29, 2007 at 05:51:50PM -0500, James E Keenan wrote:
>> How might this be used to perform smoke-testing for a project like
>> Parrot, where we want to test on many combinations of operating
>> system, platform and C compiler?
>
> If anyone can give me an idiots' guide to how to grab the most recent
> source tree, build it, and test it, then I can test it on the same
> boxes as I do CPAN testing, plus maybe a couple of others.

Also, any updates to the wiki would be helpful:
http://perl-qa.hexten.net/wiki/index.php/Buildbot

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/