Re: Thoughts on Theory.pm

2005-11-11 Thread Rob Kinyon
On 10/13/05, Dave Whipp [EMAIL PROTECTED] wrote:
 (ref: http://svn.openfoundry.org/pugs/docs/notes/theory.pod)

  theory Ring{::R} {
      multi infix:<+>   (R, R --> R)        {...}
      multi prefix:<->  (R --> R)           {...}
      multi infix:<->   (R $x, R $y --> R)  { $x + (-$y) }
      multi infix:<*>   (R, R --> R)        {...}
      # only technically required to handle 0 and 1
      multi coerce:<as> (Int --> R)         {...}
  }
  
   This says that in order for a type R to be a ring, it must
   supply these operations.  The operations are necessary but
   not sufficient to be a ring; you also have to obey some
   algebraic laws (which are, in general, unverifiable
   programmatically), for instance, associativity over
   addition: C<(a + b) + c == a + (b + c)>.
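
(As an aside: the operations half of such a theory maps fairly directly
onto a Haskell type class; a rough sketch, with method names of my own
invention, and with the laws still living outside the code:)

    class Ring r where
        radd    :: r -> r -> r      -- infix:<+>
        rneg    :: r -> r           -- prefix:<->
        rmul    :: r -> r -> r      -- infix:<*>
        fromInt :: Integer -> r     -- coerce:<as>; only 0 and 1 required
        rsub    :: r -> r -> r      -- infix:<->, with the same default
        rsub x y = radd x (rneg y)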

 I started thinking about the "in general, unverifiable programmatically"
 bit. While that is obviously true, perhaps we can get closer than just
 leaving the laws as comments. It should be possible to associate a
 unit-test-generator with the theory, so I can say:

I've been thinking about this. If we have a type inferencer, we know
that any function we define to be (N, N --> N) will be that way. The
only things we have to show are that for all a in N there is a unique
successor which is also in N, and that for all b in N, b != 1, there is
a unique predecessor. If we have that, then we get the laws of
arithmetic. Is it possible to put that into the type inferencer if the
types are defined as iterators? Kind of like Church numerals, but in P6
notation. I think that by defining our types as iterators, we can
satisfy the succ/pred requirements of the Peano axioms, meaning we have
the arithmetic rules.
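
Something like this minimal Haskell sketch of the idea (the type and
the names are mine, and it starts at zero rather than one):

    -- Every value is built with Succ, so each a has a unique successor
    -- by construction.
    data Nat = Zero | Succ Nat deriving (Eq, Show)

    -- A unique predecessor exists for every b except the least element.
    predNat :: Nat -> Maybe Nat
    predNat Zero     = Nothing
    predNat (Succ n) = Just n

    -- Addition recovered purely from the successor structure, i.e. the
    -- Peano definition.
    addNat :: Nat -> Nat -> Nat
    addNat Zero     m = m
    addNat (Succ n) m = Succ (addNat n m)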

Rob


Re: Thoughts on Theory.pm

2005-10-14 Thread Luke Palmer
On 10/13/05, Dave Whipp [EMAIL PROTECTED] wrote:
 I started thinking about the "in general, unverifiable programmatically"
 bit. While that is obviously true, perhaps we can get closer than just
 leaving the laws as comments. It should be possible to associate a
 unit-test-generator with the theory, so I can say:

 theory Ring(::R) {
     ...
     axiom associative(R ($a, $b, $c)) {
         is_true( (($a+$b)+$c) - ($a+($b+$c)) eqv R(0) );
     }
     ...
 }

 And then say "for type T, please generate 1000 random tests of T using
 the axioms of Ring".

The points about hand-crafted edge-case tests are good, but that's not
to say that this isn't a good idea.  Putting the properties inline
with the theory is good documentation (plus, as you say, it could be
used to generate pre- and postconditions).

By the way, Haskell already did this. 
http://www.haskell.org/ghc/docs/latest/html/libraries/QuickCheck/Test.QuickCheck.html
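
For instance, Dave's associativity axiom is nearly a one-liner there (a
sketch with a recent QuickCheck, instantiating R at Integer; maxSuccess
stands in for the "1000 random tests"):

    import Test.QuickCheck

    prop_assoc :: Integer -> Integer -> Integer -> Bool
    prop_assoc a b c = (a + b) + c == a + (b + c)

    main :: IO ()
    main = quickCheckWith stdArgs { maxSuccess = 1000 } prop_assoc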

:-)

Luke


Re: Thoughts on Theory.pm

2005-10-13 Thread David Storrs


On Oct 13, 2005, at 6:45 PM, Dave Whipp wrote:

I started thinking about the "in general, unverifiable
programmatically" bit. While that is obviously true, perhaps we can get
closer than just leaving the laws as comments. It should be possible to
associate a unit-test-generator with the theory, so I can say:

[...]
And then say "for type T, please generate 1000 random tests of T
using the axioms of Ring".


In an ultra-slow debug mode, the axioms could be propagated as
postconditions on every public mutator of T, so that we test them
continuously in a running application (or test suite).
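
(A sketch of that debug mode in Haskell: each public operation re-checks
an axiom instance as a postcondition. A single call site only sees two
operands, so a two-operand law such as commutativity is checked here;
checkedAdd is a made-up name:)

    import Control.Exception (assert)

    -- Ultra-slow debug mode: every use of the operation re-verifies a
    -- ring law on the arguments it was given.
    checkedAdd :: Integer -> Integer -> Integer
    checkedAdd a b = assert (r == b + a) r   -- commutativity postcondition
        where r = a + b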




While I like the idea, I would point out that 1000 tests with
randomly generated data are far less useful than 5 tests chosen to
hit boundary conditions.
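
For a concrete case: Double fails ring associativity, and one
hand-picked triple demonstrates it without any search at all:

    -- (0.1 + 0.2) + 0.3 is 0.6000000000000001 in IEEE doubles, while
    -- 0.1 + (0.2 + 0.3) is 0.6, so assocFails is True.
    assocFails :: Bool
    assocFails = ((0.1 + 0.2) + 0.3) /= (0.1 + (0.2 + 0.3) :: Double)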


--Dks



Re: Thoughts on Theory.pm

2005-10-13 Thread Dave Whipp

David Storrs wrote:

While I like the idea, I would point out that 1000 tests with randomly
generated data are far less useful than 5 tests chosen to hit boundary
conditions.


I come from a hardware verification background. The trend in this
industry is driven by the fact that the computer can generate (and run)
1000 random tests more quickly than a human can write 5 directed
tests. And a quick question: just what are the boundary cases of
a+(b+c)==(a+b)+c for a generic Ring type?


Of course, in the hardware world we give hints (constraints) to the
generators to bias them towards interesting cases. Plus, the tools use
coverage data to drive the tests towards uncovered code (not entirely
automatic). Finally, we have tools that can spend 48+ hours analyzing a
small block (statically) to find a really good set of tests.
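
In a QuickCheck-style framework, such biasing might look like the
following sketch; the weights and ranges are arbitrary choices of mine:

    import Test.QuickCheck

    -- Constrained-random generation: favour the ring's identity
    -- elements and small values, but keep some fully random traffic.
    biasedInt :: Gen Integer
    biasedInt = frequency
        [ (4, elements [-1, 0, 1])   -- identities and their neighbours
        , (4, choose (-10, 10))      -- small values
        , (1, arbitrary)             -- anything at all
        ]

    prop_assocBiased :: Property
    prop_assocBiased =
        forAll biasedInt $ \a ->
        forAll biasedInt $ \b ->
        forAll biasedInt $ \c ->
            (a + b) + c == a + (b + c)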