Re: buildbot - an experiment
I've set up a parrot buildmaster/slave, currently located at: http://buildbot.eigenstate.net:8040/

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/
Re: Testing print failures
On Saturday 05 January 2008 14:00:41 nadim khemir wrote:
> Do you happen to know something about replacing 'print' with XS code
> short of patching perl (which doesn't sound like a good idea)?

Sure, swap the pp_print function pointer in the opcode array before you compile the code you wish to behave differently.

-- c
Re: buildbot - an experiment
On Jan 5, 2008, at 4:20 PM, Eric Wilhelm wrote:
> Is there any sort of build_ok/test_ok matrix for "$svn_rev x $platform"
> for parrot? Distributed, cross-platform projects tend to suffer from
> "oh yeah, trunk is broken on $platform right now" (i.e. "as of 10
> minutes ago"), which is hard to know if you're not "in the know".

Well, this is exactly the sort of problem buildbot is supposed to handle:

- someone sets up a central buildmaster which watches the SVN repo and sends out change notices to the appropriate build slaves (maybe some slaves only get changes once a day, others get every commit, etc.)
- many people set up build slaves, on different platforms
- the buildmaster shows the results from all slaves on a web page, and perhaps also on an IRC channel, email list, etc.

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/
Re: buildbot - an experiment
# from Matisse Enzer
# on Saturday 05 January 2008 13:25:
> Turns out the parrot build/test failed on SVN revision 24566, but
> passed in revision 24567.
>
> So, I am gonna see if I can make a buildbot config to build and test
> parrot, using an SVN polling configuration where I'll try and check
> the SVN repo every N minutes and then do a checkout/build/test if
> there are no further commits for another X minutes.

Is there any sort of build_ok/test_ok matrix for "$svn_rev x $platform" for parrot? Distributed, cross-platform projects tend to suffer from "oh yeah, trunk is broken on $platform right now" (i.e. "as of 10 minutes ago"), which is hard to know if you're not "in the know".

--Eric
--
[...proprietary software is better than gpl because...] "There is value in having somebody you can write checks to, and they fix bugs."
--Mike McNamara (president of a commercial software company)
---
http://scratchcomputing.com
---
Re: Testing print failures
# from Nicholas Clark
# on Saturday 05 January 2008 14:24:
> Not tested, but, can you
>
> 1: grab the address of print's op from PL_ppaddr
> 2: store it somewhere useful
> 3: replace it in PL_ppaddr with your own function

That would be cool.

> Your own function calls the original, and then before returning,
> checks the return value on the stack. If it indicates fatal, then
> check the calling context. If that's void, croak.
>
> Otherwise return normally.

Hmm, can it also have a lexical pragma with $^H fun? Sketch:

    use fatal;
    while (...) {
        ...
        print "foo\n";
    }
    no fatal;

And now the aforementioned Perl::Critic class will have an even harder time dealing with that ;-)

--Eric
--
But as soon as you hear the Doppler shift dropping in pitch, you know that they're probably going to miss your house, because if they were on a collision course with your house, the pitch would stay the same until impact. As I said, that one's subtle. --Larry Wall
---
http://scratchcomputing.com
---
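[Editor's note: the lexical half of the `fatal` pragma Eric sketches can be done with %^H, perl's compile-time hints hash. A minimal sketch follows; the pragma name `fatal` is Eric's, but the hint key `'fatal/print'` is hypothetical, and the hypothetical XS-level print wrapper would have to look that key up in the compiling scope's hints (e.g. via the cop hints of the op's COP) to decide whether to croak.]

    package fatal;
    use strict;
    use warnings;

    # Lexically scoped hint: "use fatal;" sets it for the rest of the
    # enclosing scope, "no fatal;" clears it.  %^H entries are recorded
    # per-statement at compile time, so the setting is properly lexical.
    sub import   { $^H{'fatal/print'} = 1 }          # hypothetical key
    sub unimport { delete $^H{'fatal/print'} }

    1;

The XS wrapper installed over pp_print would then only croak on a false return value when the current COP carries that hint, giving the block-scoped `use fatal; ... no fatal;` behavior of the sketch.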
Re: Testing print failures
On Sat, Jan 05, 2008 at 11:00:41PM +0100, nadim khemir wrote:
> day and the answer was 'no'. Do you happen to know something about
> replacing 'print' with XS code short of patching perl (which doesn't
> sound like a good idea)?

Not tested, but, can you

1: grab the address of print's op from PL_ppaddr
2: store it somewhere useful
3: replace it in PL_ppaddr with your own function

Your own function calls the original, and then before returning, checks the return value on the stack. If it indicates fatal, then check the calling context. If that's void, croak.

Otherwise return normally.

Nicholas Clark
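[Editor's note: Nicholas's three steps, rendered as an untested C sketch for an XS extension, in the spirit of his "not tested, but" caveat. PL_ppaddr, OP_PRINT, GIMME_V and the stack macros are real perl internals; the function names `my_pp_print` and `install_fatal_print` are made up for the sketch, and error handling is elided.]

    /* Untested sketch -- compiled as part of an XS extension. */
    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"

    /* step 2: somewhere useful to store the original */
    static OP *(*real_pp_print)(pTHX);

    /* step 3: our replacement for pp_print */
    static OP *my_pp_print(pTHX)
    {
        OP *next = real_pp_print(aTHX);  /* call the original */
        dSP;
        SV *ret = TOPs;                  /* its return value on the stack */

        /* false return + void context => nobody is checking, so croak */
        if (!SvTRUE(ret) && GIMME_V == G_VOID)
            Perl_croak(aTHX_ "print failed: %s", Strerror(errno));

        return next;                     /* otherwise return normally */
    }

    void install_fatal_print(pTHX)
    {
        real_pp_print = PL_ppaddr[OP_PRINT];  /* step 1: grab it */
        PL_ppaddr[OP_PRINT] = my_pp_print;    /* step 3: swap in ours */
    }

As Nicholas says, the swap has to happen before the code you want affected is compiled (or at least before it runs), since PL_ppaddr is consulted at runtime per op.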
Re: Testing print failures
On Saturday 05 January 2008 20.21.59 Eric Wilhelm wrote:
> Even if it weren't a system handle, in what situation does print()
> return false?
>
> 1. Closed handle
> 2. Unopened handle
> 3. Disk full
>
> Unless I've missed one, you don't need to check the return value of
> print.
>
> I will even say it is wrong to do so in most situations. The exception
> would be unattended long-running processes where lots of print() calls
> will happen before a close(). Yes 'print STDOUT' will fail with $!
> = "No space left on device" when stdout is redirected, but I hardly
> think catching that is worth sprinkling so many 'or die "$!"'
> statements around in your code.[1]

Wrong because of the sprinkling?

> [1] If anything, at least a print() replacement via some XS code which
> throws errors wouldn't cost you an arm+leg at runtime (plus it saves
> the bleeding eyeballs.)

I of course agree that writing 'or die $!' a million times is painful and unaesthetic. In the object-oriented modules I write, I have started using a code reference for 'print' instead, so there is only one place to test.

'print' is not overridable, and you can't simply use 'Fatal' on it either. I asked p5p if there were chances that 'print' would become overridable some day and the answer was 'no'. Do you happen to know something about replacing 'print' with XS code short of patching perl (which doesn't sound like a good idea)?

> For #3, you need to check the return on close().
>
> For #1 and #2 you need to have 100% coverage with warnings enabled and
> test NoWarnings (and check the return code of open().)

Standard filehandles are rarely opened or closed by the developer. The above still applies for the other filehandles, but in that case I (others can be less anxious if they want to) would still think that checking the return value of 'print' is worth it.

For example, if the prints are done to a journaling log, would it make sense to continue working on the journaled files knowing that the final write is going to fail anyway? There is a difference between a journaling log and printing to STDOUT. No doubt my example is extreme, and only that example may need extreme precautions.

Cheers, Nadim.
Re: buildbot - an experiment
Turns out the parrot build/test failed on SVN revision 24566, but passed in revision 24567.

So, I am gonna see if I can make a buildbot config to build and test parrot, using an SVN polling configuration where I'll try and check the SVN repo every N minutes and then do a checkout/build/test if there are no further commits for another X minutes.

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/
Re: Testing print failures
On Saturday 05 January 2008 15.08.55 Michael G Schwern wrote:
> nadim khemir wrote:
> > print 'hi' or carp q{can't print!} ;
>
> I'm not even going to wade into the layers of neurosis demonstrated in
> this post, but if you want to throw an error use croak().

No more testing at 3 AM for me; of course the test fails because I use carp instead of croak.

As for the layers of neurosis, the only anxiety is the one created by your own delusions. I see only a test like any other.

Nadim.
Re: Testing print failures
# from nadim khemir
# on Saturday 05 January 2008 03:53:
> print 'hello' ;
>
> triggers the wrath of InputOutput::RequireCheckedSyscalls with the
> message "Return value of flagged function ignored".
>
> ...
>
> There is no chance that P::C could know I'm writing on a system
> handle; that would require a static analysis that would take ages,
> if possible at all.

Even if it weren't a system handle, in what situation does print() return false?

1. Closed handle
2. Unopened handle
3. Disk full

Unless I've missed one, you don't need to check the return value of print.

I will even say it is wrong to do so in most situations. The exception would be unattended long-running processes where lots of print() calls will happen before a close(). Yes, 'print STDOUT' will fail with $! = "No space left on device" when stdout is redirected, but I hardly think catching that is worth sprinkling so many 'or die "$!"' statements around in your code.[1]

For #3, you need to check the return on close().

For #1 and #2 you need to have 100% coverage with warnings enabled and test NoWarnings (and check the return code of open().) This probably means one of your tests needs to fill a disk (or simulate it) to get the open/close checks to 100% coverage. From my fstab:

    tmpfs /mnt/1MB tmpfs noauto,user,size=1M,rw,mode=777 0 0

[1] If anything, at least a print() replacement via some XS code which throws errors wouldn't cost you an arm+leg at runtime (plus it saves the bleeding eyeballs.)

--Eric
--
Entia non sunt multiplicanda praeter necessitatem. --Occam's Razor
---
http://scratchcomputing.com
---
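[Editor's note: on Linux, /dev/full is a lighter-weight stand-in for Eric's 1MB tmpfs: every write to it fails with ENOSPC, no mount required. The sketch below (paths and behavior are Linux-specific) also shows why Eric says to check close() for case #3: with perl's default buffering, the print itself usually reports success and the disk-full error only surfaces when the buffer is flushed.]

    #!/usr/bin/perl
    use strict;
    use warnings;

    # /dev/full fails every write() with "No space left on device".
    open my $fh, '>', '/dev/full' or die "open: $!";

    # The output lands in perl's buffer, so print usually returns true...
    my $printed = print {$fh} "hello\n";
    print "print returned: ", ($printed ? "true" : "false"), "\n";

    # ...and ENOSPC only shows up when close() flushes the buffer.
    if (close $fh) {
        print "close succeeded\n";
    }
    else {
        print "close failed: $!\n";   # "No space left on device"
    }

This is exactly the long-running-process trap from the thread: every print "succeeded", and only the close() (or an intervening flush) tells you the data never made it to disk.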
Re: buildbot - an experiment
On Jan 4, 2008, at 5:56 PM, James E Keenan wrote:
> David Cantrell wrote:
> > On Tue, Jan 01, 2008 at 08:23:52PM -0500, James E Keenan wrote:
> > > David Cantrell wrote:
> > > > If anyone can give me an idiots' guide to how to grab the most
> > > > recent source tree, build it, and test it, then I can test it on
> > > > the same boxes as I do CPAN testing, plus maybe a couple of
> > > > others.
> > >
> > > svn co https://svn.perl.org/parrot/trunk/ parrot_test
> > > cd parrot_test
> > > perl Configure.pl
> > > make
> >
> > ...
> > /usr/local/bin/perl /home/david/parrot_test/tools/build/pmc2c.pl --c subproxy.pmc
> > Cannot restore overloading on HASH(0x823a074) (package Parrot::Pmc2c::Emitter) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/_retrieve.al) line 328, at /home/david/parrot_test/tools/build/dynpmc.pl line 199
> > make[1]: *** [all] Error 255
> > make[1]: Leaving directory `/home/david/parrot_test/src/dynpmc'
> > make: *** [dynpmc.dummy] Error 2
> >
> > That's on Linux. There's probably not much point me testing it on
> > more obscure platforms right now :-)

FWIW, the checkout and build and test also got failures for me on:

    Linux 2.6.22.14-72.fc6 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
    using perl, v5.8.8 built for x86_64-linux-thread-multi
    gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-13)

    t/stm/basic_mt.2/4
    # Failed test (t/stm/basic_mt.t at line 93)
    # Exited with error code: [SIGNAL 11]
    # Received:
    #
    # Expected:
    # okay
    # t/stm/basic_mt.4/4
    # Looks like you failed 1 test of 4.
    t/stm/basic_mt. Dubious, test returned 1 (wstat 256, 0x100)
    Failed 1/4 subtests (less 1 skipped subtest: 2 okay)

    Test Summary Report
    ---
    t/configure/115-auto_warnings-01.t (Wstat: 0 Tests: 4 Failed: 0)
      TODO passed: 4
    t/src/intlist.t (Wstat: 0 Tests: 4 Failed: 0)
      TODO passed: 1-4
    t/src/io.t (Wstat: 0 Tests: 20 Failed: 0)
      TODO passed: 16-17, 19
    t/stm/basic_mt.t (Wstat: 256 Tests: 4 Failed: 1)
      Failed test number(s): 2
      Non-zero exit status: 1

    Files=545, Tests=10580, 1602 wallclock secs ( 4.26 usr 1.88 sys + 1234.14 cusr 53.52 csys = 1293.80 CPU)
    Result: FAIL
    Failed 1/545 test programs. 1/10580 subtests failed.

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/
Re: buildbot - an experiment
On Jan 4, 2008, at 7:09 AM, nadim khemir wrote:
> I received an answer from Eric:
> > I wish I did have some kind of comparison. Here's what one user wrote
> > about choosing cabie:
> > http://www.golden-gryphon.com/blog/manoj/blog/2007/11/06/Continuous_Automated_Build_and_Integration_Environment.html

I looked here: http://damagecontrol.codehaus.org/Continuous+Integration+Server+Feature+Matrix

Maybe Eric could sign up as a Confluence user (http://docs.codehaus.org/signup.action) and add cabie to that matrix? Seems like he has thought through the answers to all the entries in the table.

I say: The more choices the better!

---
Matisse Enzer <[EMAIL PROTECTED]>
http://www.matisse.net/ - http://www.eigenstate.net/
Re: Testing print failures
nadim khemir wrote:
> print 'hi' or carp q{can't print!} ;

I'm not even going to wade into the layers of neurosis demonstrated in this post, but if you want to throw an error use croak().

--
...they shared one last kiss that left a bitter yet sweet taste in her mouth--kind of like throwing up after eating a junior mint.
-- Dishonorable Mention, 2005 Bulwer-Lytton Fiction Contest by Tami Farmer
Re: Testing print failures
On Sat, Jan 05, 2008 at 12:53:35PM +0100, nadim khemir wrote:
> Next problem is coverage. Nothing upsets me more than a 99.8% coverage.
> I'd almost prefer an 80% coverage to 99.8%.
>
> So I tried to test that case with
>
>     {
>     use IO::File;
>     my $current_fh = select ;
>
>     my $fh = new IO::File; # not opened
>     select $fh ;
>
>     throws_ok
>         {
>         $object->DoPrint() ;
>         }
>         qr/can't print!/, 'print failed' ;
>
>     select $current_fh ;
>     }
>
> with DoPrint looking something like:
>
>     print 'hi' or carp q{can't print!} ;

I'm not quite sure what you are getting at here. If you really want to test the return value of every print statement (and personally, I can think of much better things to do with my time), and you want coverage of both successful and unsuccessful prints (and other similar functions, I suppose), then you'll have to fake up or otherwise arrange for both successful and unsuccessful calls. This appears to be what you are doing.

If you are saying that you want to code for failed prints, but don't want to actually test them, but still want 100% coverage (I don't think this is what you are saying because it makes no sense), then you will have to cheat.

    # uncoverable condition right note: I want 100% coverage without testing this
    print 'hi' or carp q{can't print!} ;

But if you do this and then test a failed print you will get a coverage error, because you did something you said was impossible.

I think I must be missing something. What is it that is stopping you from getting your final 0.2% coverage?

--
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net
Testing print failures
With the advent of intensive coverage tests and zealous Perl::Critic policies, testing even simple things is getting messy. Even a molehill becomes a mountain:

    print 'hello' ;

triggers the wrath of InputOutput::RequireCheckedSyscalls with the message "Return value of flagged function ignored".

This is not new; the problem and discussion have been around for, hmm, centuries. If one can't call a system function on a system handle, chances are hell broke loose and Finland won the Eurovision Song Contest another time. There is no chance that P::C could know I'm writing on a system handle; that would require a static analysis that would take ages, if possible at all. Also, someone could select another filehandle, and there might be a disk space shortage, or the filehandle could be closed, or any other state that would make our friendly 'print' fail.

OK, let's stop going round this. It's an error to not check the return value, so let's just do that. Of course a well-placed 'no critic' would have calmed the zealous policy, but we like to do things right here.

Next problem is coverage. Nothing upsets me more than a 99.8% coverage. I'd almost prefer an 80% coverage to 99.8%.

So I tried to test that case with

    {
    use IO::File;
    my $current_fh = select ;

    my $fh = new IO::File; # not opened
    select $fh ;

    throws_ok
        {
        $object->DoPrint() ;
        }
        qr/can't print!/, 'print failed' ;

    select $current_fh ;
    }

with DoPrint looking something like:

    print 'hi' or carp q{can't print!} ;

It may look silly to give a string to carp, which carp is going to display and probably have the same problem with, but someone may be catching all these nasty exceptions that never happened (do I hear someone say that this kind of sentence almost guarantees an exception during the first customer demo?).

Of course this doesn't work. First, Test::NoWarnings feasts on your terminal, forcing you to scroll for half an hour before you make sense of the mess that failed tests dump there. This is not the test framework's fault, it's the terminal's (do I miss the 1980s Borland IDE, where the messages were neatly packed till you clicked on them to see details).

Second, the test passes!!

    t/011_interaction....NOK 13/0
    # Failed test 'print failed'
    # at t/011_interaction.t line 147.
    # expecting: Regexp ((?-xism:can't print!))
    # found: normal exit

Obviously I'm doing something wrong. Has someone ever bothered with this test? Is there a better way to do it? Has someone written a test module to do that?

Cheers, Nadim.
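[Editor's note: the "found: normal exit" is explained later in the thread -- carp only warns, it never throws, so throws_ok sees a normal return. A minimal self-contained version of the test with Schwern's fix (croak instead of carp), assuming Test::Exception is installed; the failing handle here is forced by closing it before selecting it, which covers the "closed handle" case from Eric's list:]

    use strict;
    use warnings;
    use Carp qw(croak);
    use Test::More tests => 1;
    use Test::Exception;

    sub DoPrint {
        # croak, not carp: carp only warns, so throws_ok would report
        # "found: normal exit" instead of catching an exception
        print 'hi' or croak q{can't print!};
    }

    # Arrange a handle that is guaranteed to fail: open it, then close it.
    open my $fh, '>', \my $buffer or die "open: $!";
    close $fh or die "close: $!";

    my $old_fh = select $fh;    # print now goes to the closed handle
    throws_ok { DoPrint() } qr/can't print!/, 'print failed';
    select $old_fh;             # restore the default output handle

Note that the failing print also emits a "print() on closed filehandle" warning, which is precisely what feeds the Test::NoWarnings noise complained about above; a `no warnings 'closed'` inside DoPrint (or capturing the warning in the test) keeps the output quiet.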