On Thu, Jun 19, 2008 at 12:56:09PM +0200, Moritz Lenz wrote:
> Ovid wrote:
> > However, from a testing perspective, I'm wondering why
> > there is such a huge number of skipped tasks.  Shouldn't those be TODO
> > tasks?  
> 
> Mostly because the skipped tests are parse failures or throw runtime
> errors, because crucial parts are not yet implemented in the compiler.

Moritz has it absolutely correct.  Since we don't have a
complete parser yet, we have to skip over (avoid parsing)
those constructs that the compiler doesn't yet understand.
Otherwise a single unparsable construct aborts compilation of
the whole test file and we never get to execute any of its tests.
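
To make that concrete, here's a rough sketch of what a fudged
spectest might look like.  The hyper-operator test and the skip
reason are invented for illustration -- only the #?rakudo skip
directive syntax comes from this discussion -- and I'm assuming
the directive applies to the statement that immediately follows it:

    use v6;
    use Test;
    plan 1;

    # fudge rewrites the marked test so the unparsable construct
    # below never reaches the compiler; the rest of the file still
    # compiles and its tests still run
    #?rakudo skip "hyper operators not yet implemented"
    is ((1, 2, 3) >>+<< (4, 5, 6)).join(' '), '5 7 9', 'hyper addition';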

> > Skips are really easy to ignore and it's virtually impossible
> > to know if they'll ever pass but TODO tests unexpectedly succeeding
> > tend to stand out.

I think that "skip" in the "building a compiler" context is really
different than in typical testing, so it's okay if there are a few
more of them.  At any rate, we're keeping an eye on the skip numbers
also (which is why they're also reported in the .csv), and I'm planning
to improve my test harness to display the skip messages so that
we can see when a skipped test is a likely candidate to become
unskipped.

> Other solutions I can think of are:
>  * make fudge wrap skipped tests in eval-blocks and mark them as TODO.

IIRC, fudge already does something like this -- use
#?rakudo eval "reason" instead of #?rakudo skip "reason".
Perhaps we should start doing this now, though I haven't tried
it myself yet, so I don't know where the gotchas are.
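
Continuing the (hypothetical, untested) sketch from above, the
change would just be to the fudge directive:

    # instead of skipping outright, have fudge wrap the test so a
    # parse failure shows up at run time rather than killing the file
    #?rakudo eval "hyper operators not yet implemented"
    is ((1, 2, 3) >>+<< (4, 5, 6)).join(' '), '5 7 9', 'hyper addition';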

Thanks!

Pm
