[Haskell-cafe] Graphical graph reduction

2008-02-25 Thread Mike Thyer

Kai wrote:

Thank you all for showing interest and responding.


Check out http://thyer.name/lambda-animator/. Requires Java.


Wow, this is SUCH a cool tool. Best discovery in a long time! I think
I need to brush up on my lambda-calculus, because it took me some time
to figure out what settings to use to get something similar to
Haskell. Function strategy substitute by name, Argument strategy
call by need and Reduce to weak head normal form seem to work OK,
though.


Yes, those are the correct settings for a Haskell-like reduction order.


One issue, though. When trying out my infinite-list-of-fibs example, I
have an auxiliary function

(define nth (lambda (n xs) (if (= n 0) (head xs) (nth (- n 1) (tail xs)))))

and this is not behaving the way I want, i.e. like the Haskell function

nth n (x:xs) = if n == 0 then x else nth (n-1) xs

because it doesn't do pattern matching, i.e. it doesn't force the
argument to be evaluated until it fits the x:xs pattern. I can't
figure out how to simulate this eager pattern in the lisp that the
lambda-animator uses. This means that if I e.g. do a (nth 8 fibs) in
the animator, I get a long string of tail's before it actually starts
expanding the fibs list...


You can use the seq function to force arguments to be evaluated.  For example:

(define nth (lambda (n xs) (seq xs (if (= n 0) (head xs) (nth (- n 1) (tail xs))))))

The input language is a curried dialect of Scheme.  The seq function only takes 
two arguments.  It reduces its first argument and then returns its second.  
This ensures the pair at the root of xs is
evaluated before the evaluation of the function body continues.
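In Haskell terms, this seq-strengthened nth corresponds to the selector-style
definition below (a sketch for comparison, not code from the post): forcing xs
to weak head normal form before the test is exactly what the pattern (x:xs)
does for free.

```haskell
-- Sketch: selector-style nth, with `seq` forcing xs to weak head normal
-- form before the test, mirroring what matching on (x:xs) would do.
nth :: Int -> [a] -> a
nth n xs = xs `seq` if n == 0 then head xs else nth (n - 1) (tail xs)
```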

You can similarly use seq to prevent an unbounded build up of suspended 
computations in the fibs list, for example:

(define head-strict-cons (lambda (h t) (seq h (cons h t))))
(define zipWith (lambda (f xs ys) (head-strict-cons (f (head xs) (head ys))
                                                    (zipWith f (tail xs) (tail ys)))))
(define fibs (letrec ((fibs (cons 1 (cons 1 (zipWith + fibs (tail fibs))))))
               fibs))
(define nth (lambda (n xs) (seq xs (if (= n 0) (head xs) (nth (- n 1) (tail xs))))))
(define fib (lambda (n) (nth n fibs)))

You'll see that the size of the graph is bounded and that no more than three 
elements of the infinite fibs list exist at any one time.
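For comparison, the same trick can be rendered in Haskell (a sketch; the names
headStrictCons and zipWith' are illustrative, not from the post). Forcing each
element as the list is built stops a chain of suspended (+) applications from
accumulating, which is what keeps the graph bounded.

```haskell
-- Sketch: a head-strict cons keeps each fibs element evaluated as the
-- list is built, so unreduced (+) thunks never pile up.
headStrictCons :: a -> [a] -> [a]
headStrictCons h t = h `seq` (h : t)

fibs :: [Integer]
fibs = 1 : 1 : zipWith' (+) fibs (tail fibs)
  where
    zipWith' f (x:xs) (y:ys) = headStrictCons (f x y) (zipWith' f xs ys)
    zipWith' _ _      _      = []

fib :: Int -> Integer
fib n = fibs !! n
```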

Lambda Animator doesn't automatically do any of the deforestation that a 
Haskell compiler will, so there are more reduction steps and larger graphs in 
the reduction of the above program than you would
expect with a compiled Haskell program.

Mike

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: The dreaded layout rule

1999-07-29 Thread Mike Thyer

If the scanning stage pairs the tokens it returns with
their positions, then scanning can be done once before
parsing begins.  I've done this with a parser implemented
with parser combinators; these combinators then decide
whether or not to accept a token based on which token
it is and how far it is indented.  I think this means
the grammar being parsed is a context-sensitive one,
since the state of the parser is represented by more than
just a single stack: we need a stack telling us what to
do next, and a stack of indentation levels.  Although
the ways in which these stacks grow and shrink are related,
they could not be replaced by a single stack, so the
grammar is not context free.

Now that I write this, I think we could combine these
stacks into a stack of stacks, although this isn't how I did it.
I don't think this satisfies the requirements for a context-free
grammar (CFG), but I don't have a definition to hand at the moment.
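The scheme described above can be sketched with minimal combinators (the names
here are hypothetical, not Mike's actual code): the parser state threads the
remaining position-tagged tokens together with a stack of indentation levels,
and a token is accepted only if it lies past the current level.

```haskell
-- Sketch of indentation-sensitive combinators (hypothetical names).
-- Tokens carry their column; the parser state pairs the remaining
-- input with a stack of indentation levels.
type Token  = (Int, String)             -- (column, lexeme)
type PState = ([Token], [Int])          -- (remaining input, indent stack)

newtype P a = P { runP :: PState -> Maybe (a, PState) }

-- Accept a token only if it matches the expected lexeme and is
-- indented further than the current indentation level.
tok :: String -> P String
tok s = P $ \st -> case st of
  ((col, t) : ts, lvls@(lvl : _))
    | t == s && col > lvl -> Just (t, (ts, lvls))
  _ -> Nothing

-- Run a sub-parser with a new indentation level pushed on the stack,
-- restoring the old stack when the sub-parser finishes.
indented :: Int -> P a -> P a
indented lvl (P p) = P $ \(ts, lvls) ->
  case p (ts, lvl : lvls) of
    Just (x, (ts', _)) -> Just (x, (ts', lvls))
    Nothing            -> Nothing
```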

Mike

Simon Marlow wrote:
 I believe you're right in that a true two-phase implementation of 
 the Haskell grammar is impossible.  This is consistent with Haskell's 
 policy of making life easy for programmers and hard for compiler 
 writers :)






Re: Another bug in the 98 Report?

1999-06-24 Thread Mike Thyer

Malcolm writes:

 Unfortunately, the example given in the Report is nothing like as clear
 - in fact, I still don't understand it.  Perhaps someone could explain
 it to me?
 
f x = let
 h y = let
  p z = z
  in p
 in h
 
  Here, the definition of p is indented less than the indentation of the
  enclosing context, which is set in this case by the definition of h.
 
 To me, it looks like it is not the *definition* of p that is indented
 less, but the *use*.  Is that right?

There appears to be a bug in the HTML version of the report;
the PS and PDF versions show this example of illegal syntax as

f x = let
 h y = let
  p z = z
   in p
  in h

Mike