No, I can't say offhand whether seq-hnf would keep eta valid, either.
Neither do I know how to implement seq-hnf efficiently. :)

As far as eta for other types goes, yes, I'll take it if I can get it easily.
But I'm also pretty happy with encoding all the other data types within
the lambda calculus (which was how Haskell behaved before seq).

        -- Lennart

On Feb 12, 2007, at 22:05, Claus Reinke wrote:

> Adding seq ruins eta reduction.  For normal order lambda calculus
> we have '\x.f x = f' (x not free in f). If we add seq this is no longer true.

isn't that a problem of seq (and of evaluation in Haskell generally) not being strict enough, ie, forcing evaluation only to weak head normal form (whnf) rather than to head normal form (hnf)?

for instance:

   seq (\x->_|_ x) () = () =/= _|_ = seq _|_ ()

but (assuming a variant, seq-hnf, forcing to hnf instead):

   seq-hnf (\x->_|_ x) () = _|_ = seq-hnf _|_ ()
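
the seq half of this is directly observable in ghc (a minimal sketch, run without optimisations - ghc's own eta-reduction is exactly the transformation in question; seq-hnf itself is hypothetical):

   -- eta-expanding _|_ at a function type hides the bottom under a
   -- lambda, which is already in whnf, so seq succeeds on it
   bot :: () -> ()
   bot = undefined

   etaBot :: () -> ()
   etaBot = \x -> bot x

   main :: IO ()
   main = do
     print (seq etaBot ())    -- prints ()
     print (seq bot ())       -- throws Prelude.undefined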

I don't have my "phone book" here, but didn't Barendregt have a discussion of what kind of normal form would be an appropriate choice for the meaning of lambda terms, with hnf being "good" and whnf or nf being "bad"? reduction to whnf is a pragmatic choice, with a long history, and its own calculus, which is not quite the ordinary lambda calculus.

but it's been ages since I learned these things, so I might be misremembering.

Claus

> I'm a fan of eta, it makes reasoning easier.  It also means
> the compiler can do more transformations.

I also like eta conversion, as well as its variations for non-function types. what they have in common is that the expanded form provides syntactic/structural evidence for properties that are only semantically present in the reduced form. for instance, if we add non-functions to the calculus, eta has to be constrained with type information for f - as the expanded form promises that we have a function, holding more information than the reduced form.

variations of eta for non-function types allow us to make functions/contexts less strict (the kind of "borrowing information from the future" so often needed in cyclic programming, or in dealing with other potentially infinite structures):

   lazy_id_pair x = (fst x,snd x)    -- lazy_id_pair _|_ = (_|_,_|_)

vs

   strict_id_pair (a,b) = (a,b)    -- strict_id_pair _|_ = _|_
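
(the difference is observable in ghc - a small sketch, using fix to manufacture the _|_ of the comments above:)

   import Data.Function (fix)

   lazy_id_pair, strict_id_pair :: (a,b) -> (a,b)
   lazy_id_pair x = (fst x, snd x)
   strict_id_pair (a,b) = (a,b)

   -- fix f is _|_ for strict f; fix lazy_id_pair is (_|_,_|_), a
   -- pair constructor already in whnf, so seq can tell them apart
   main :: IO ()
   main = do
     print (seq (fix lazy_id_pair :: ((),())) "whnf reached")    -- ok
     print (seq (fix strict_id_pair :: ((),())) "not reached")   -- <<loop>>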

this added laziness comes at the expense of not having laws like:
   x = (fst x, snd x)    -- not true in general, even if we know that x :: (a,b)

   x = x >>= return      -- not true in general, even if x :: Monad m => m a

   x = \y -> x y         -- not true in general, even if x :: a -> b
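
(to make the middle one concrete: in the reader monad ((->) e) we have m >>= return = \e -> return (m e) e, ie exactly the eta-expansion of m, so seq can distinguish the two sides when m is _|_; a ghc sketch:)

   bot :: Int -> Int
   bot = undefined

   main :: IO ()
   main = do
     print (seq (bot >>= return) "rhs defined")   -- ok: (>>=) built a lambda
     print (seq bot "never reached")              -- throws Prelude.undefined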

we still have the inequalities, though - the expanded form being more
defined than the reduced form.

   x :: t  [=  (expand-at-t x) :: t    -- eta conversion at type t

so compilers could use eta-expansion, but not eta-reduction (not without an approximate analysis of an undecidable property). do you happen to have an example in mind where eta-reduction would be beneficial as a compiler transformation, but the analysis (to demonstrate that the expanded functional expression terminates successfully) would be impractical?

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
