Re: [Haskell-cafe] What's the motivation for η rules?

2011-01-03 Thread Dan Doel
On Monday 03 January 2011 9:23:17 pm David Sankel wrote:
 The following is a dependent type example where equality of open terms
 comes up.
 
 z : (x : A) → B
 z = ...
 
 w : (y : A) → C
 w = z
 
 Here the compiler needs to show that the type B, with x free,
 is equivalent to C, with y free. This isn't always decidable, but eta
 rules, as an addition to beta and delta rules, make more of
 these equivalence checks possible (I'm assuming this is
 what extensionality means here).

Extensionality of functions (in full generality) lets you prove the following:

  ext f g : (forall x. f x = g x) -> f = g

This subsumes eta for every situation it applies in, because:

  ext f (\x -> f x) (\x -> refl (f x)) : f = (\x -> f x)

although making the equality that type checking uses extensional in this 
fashion leads to undecidability (because determining whether two types are 
equal may require deciding if two arbitrary functions are equal at all points, 
which is impossible in general).
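A minimal sketch of this in Lean 4, where `funext` plays the role of ext at the propositional level (Lean happens to validate function eta definitionally as well, so the second proof could even be plain `rfl`):

```lean
-- ext as a propositional principle: pointwise equality gives equality.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h

-- ext subsumes eta, exactly as in the derivation above.
example (f : Nat → Nat) : f = fun x => f x :=
  funext (fun x => rfl)
```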

The reverse is not usually the case, though, because even if we have eta:

  eta f : f = (\x -> f x)

and the premise:

  pointwise : forall x. f x = g x

we cannot use pointwise to substitute under the lambda and get

  necessary : (\x -> f x) = (\x -> g x)

Proving necessary would require use of ext to peel away the binder 
temporarily, but ext is what we're trying to prove.

So, although eta is often referred to as extensional equality (confusingly, if 
you ask me), it is usually weaker than full extensionality in type theories, 
because it is not part of the computational behavior of the terms (reduction of 
lambda terms doesn't usually involve producing eta-long normal forms, and it 
certainly doesn't involve rewriting terms to arbitrary other extensionally 
equal terms).

 What would be example terms for B and C that would require invoking the eta
 rule for functions, for example?

It could be as simple as:

  z : T f
  z = ...

  w : T (\y -> f y)
  w = z

This is on the supposition that those types aren't reducible. For instance, 
maybe we have:

  data T (f : Nat -> Nat) : Set where
    whatever : T f

Without eta, the types aren't equal, so this results in a type error.
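For comparison, a minimal sketch in Lean 4, which does build function eta into 
definitional equality, so this very example typechecks:

```lean
-- T f and T (fun y => f y) are definitionally equal types here,
-- because Lean's conversion relation includes eta for functions.
example (T : (Nat → Nat) → Type) (f : Nat → Nat) (z : T f) :
    T (fun y => f y) := z
```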

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What's the motivation for η rules?

2011-01-03 Thread Stefan Holdermans
David,

Conor wrote:

 But the news is not all bad. It is possible to add some eta-rules
 without breaking the Geuvers property (for functions it's ok; for
 pairs and unit it's ok if you make their patterns irrefutable).

Note that, in Haskell, for functions it's only ok if you leave seq and the like 
out of the equation: otherwise undefined and (\x. undefined x) can be told 
apart using seq.  (That's a minor detail, though.)
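A quick demonstration of that detail in GHC (my own sketch; the helper `probe` 
is just for reporting):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)

-- f is bottom; its eta-expansion g = \x -> f x is a lambda, hence
-- already in weak head normal form. Extensionally f x = g x at every
-- x (both diverge when applied), but seq tells the two apart.
f, g :: Int -> Int
f = undefined
g = \x -> f x

-- Force a function to WHNF and report whether that diverged.
probe :: (Int -> Int) -> IO String
probe h = do
  r <- try (evaluate (h `seq` "in WHNF")) :: IO (Either SomeException String)
  pure (either (const "bottom") id r)

main :: IO ()
main = do
  probe f >>= putStrLn   -- bottom
  probe g >>= putStrLn   -- in WHNF
```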

Cheers,

  Stefan


Re: [Haskell-cafe] What's the motivation for η rules?

2010-12-30 Thread Conor McBride

Hi

Thus invoked...

On 28 Dec 2010, at 23:29, Luke Palmer wrote:


Eta conversion corresponds to extensionality; i.e. there is nothing
more to a function than what it does to its argument.

Suppose f x = g x for all x.  Then using eta conversion:

f = (\x. f x) = (\x. g x) = g

Without eta this is not possible to prove.  It would be possible for
two functions to be distinct (well, not provably so) even if they do
the same thing to every argument -- say if they had different
performance characteristics.  Eta is a controversial rule of lambda
calculus -- sometimes it is omitted; for example, Coq does not use it.
It tends to make things more difficult for the compiler -- I think
Conor McBride is the local expert on that subject.


...I suppose I might say something.

The motivation for various conversion rules depends quite a lot on one's
circumstances. If the primary concern is run-time computation, then
beta-rules (elimination construct consumes constructor) and definitional
expansion (sometimes called delta), if you have definitions, should do all the
work you need. I'm just wondering how to describe such a need. How about
this property (reminiscent of some results by Herman Geuvers):

Let = be the conversion relation, with whatever rules you've chucked in,
and let --> be beta+delta reduction, with -->* its reflexive-transitive
closure. Suppose some closed term inhabiting a datatype is convertible
with a constructor form

  t = C s1 .. sn

then we should hope that

  t -->* C r1 .. rn   with  ri = si, for i in 1..n

That is: you shouldn't need to do anything clever (computing backwards,
eta-conversion) to get a head-normal form from a term which is kind
enough to have one. If this property holds, then the compiler need only
deliver the beta-delta behaviour of your code. Hurrah!

So why would we ever want eta-rules? Adding eta to an *evaluator* is
tedious, expensive, and usually not needed in order to deliver values.
However, we might want to reason about programs, perhaps for purposes
of optimization. Dependent type theories have programs in types, and
so require some notion of when it is safe to consider open terms equal
in order to say when types match: it's interesting to see how far one
can chuck eta into equality without losing decidability of conversion,
messing up the Geuvers property, or breaking type-preservation.

It's a minefield, so tread carefully. There are all sorts of bad
interactions, e.g. with subtyping (if -> subtyping is too weak,
(\x -> f x) can have more types than f), with strictness (if
p = (fst p, snd p), then (case p of (x, y) -> True) = True, which
breaks the Geuvers property on open terms), with reduction (there
is no good way to orientate the unit type eta-rule, u = (), in a
system of untyped reduction rules).
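The strictness interaction is visible in Haskell itself (my own sketch, not 
Conor's): under pair eta, p = (fst p, snd p), so matching on p should be no 
different from matching on a pair; but a strict match forces p, while an 
irrefutable one does not.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)

-- A strict pair match forces the pair constructor, so it distinguishes
-- bottom from (fst p, snd p); an irrefutable (~) match does not.
strictMatch, lazyMatch :: (Int, Int) -> Bool
strictMatch p = case p of (_, _)  -> True
lazyMatch   p = case p of ~(_, _) -> True

-- Evaluate a Bool, reporting divergence instead of crashing.
report :: Bool -> IO String
report b = do
  r <- try (evaluate b) :: IO (Either SomeException Bool)
  pure (either (const "bottom") show r)

main :: IO ()
main = do
  report (strictMatch undefined) >>= putStrLn  -- bottom
  report (lazyMatch   undefined) >>= putStrLn  -- True
```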

But the news is not all bad. It is possible to add some eta-rules
without breaking the Geuvers property (for functions it's ok; for
pairs and unit it's ok if you make their patterns irrefutable). You
can then decide the beta-eta theory by postprocessing beta-normal
forms with type-directed eta-expansion (or some equivalent
type-directed trick). Epigram 2 has eta for functions, pairs,
and logical propositions (seen as types with proofs as their
indistinguishable inhabitants). I've spent a lot of time banging my
head off these issues: my head has a lot of dents, but so have the
issues.

So, indeed, eta-rules make conversion more extensional, which is
unimportant for closed computation, but useful for reasoning and for
comparing open terms. It's a fascinating, maddening game trying to
add extensionality to conversion while keeping it decidable and
ensuring that open computation is not too strict to deliver values.

Hoping this is useful, suspecting that it's TMI

Conor




Re: [Haskell-cafe] What's the motivation for η rules?

2010-12-28 Thread Stefan Monnier
One way to look at it is that β rules are the application of an
eliminator (e.g. function application) to its corresponding constructor
(the lambda expression), whereas η rules correspond to the application
of a constructor to its corresponding eliminator.

E.g.

   λ y . (x y)                =  x
   if x then True else False  =  x
   (π₁ x, π₂ x)               =  x

IOW there's no need for a motivation: those rules just appear naturally.
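A quick pointwise sanity check of the three laws at small Haskell types (a few 
instances, not a proof; the names are mine):

```haskell
-- Function eta: the eta-expansion of not agrees with not pointwise.
etaFun :: Bool
etaFun = all (\b -> (\y -> not y) b == not b) [False, True]

-- Bool eta: rebuilding x with its eliminator gives back x.
etaBool :: Bool
etaBool = all (\x -> (if x then True else False) == x) [False, True]

-- Pair eta: rebuilding a pair from its projections gives back the pair.
etaPair :: Bool
etaPair = let p = (1 :: Int, 2 :: Int) in (fst p, snd p) == p

main :: IO ()
main = print (etaFun && etaBool && etaPair)  -- True
```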


Stefan

