When I was last on the best rooftop in the Mid Upper West Side of
Manhattan I was told of the work on logical relations.  I did not
know of this three-decades-old line of work.  I have grabbed

http://homepages.inf.ed.ac.uk/gdp/publications/Par_Poly.pdf

To me, the style is comfortable and the matter feels solid.
That is, the authors and I know some of the same kennings.

Thanks Lispers!  Thanks Haskellers!

oo--JS.


On Thu, 16 Aug 2012, Jay Sulzberger wrote:



On Wed, 15 Aug 2012, wren ng thornton <w...@freegeek.org> wrote:

On 8/13/12 5:42 PM, Jay Sulzberger wrote:
One difficulty which must impede many who study this stuff is
that just getting off the ground seems to require a large number
of definitions of objects of logically different kinds. (By
"logic" I mean real logic, not any particular formalized system.)
We must have "expressions", values, type expressions, rules of
transformation at the various levels, the workings of the
type/kind/context inference system, etc., to get started.
Seemingly Basic and Scheme require less, though I certainly
mention expressions and values and types and
objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction
to Scheme.

Indeed, starting with Haskell's type system is jumping in at the deep end. And there isn't a very good tutorial on how to get started learning type theory. Everyone I know seems to have done the "stumble around until it clicks" routine--- including the folks whose stumbling was guided by formal study in programming language theory.

However, a good place to start ---especially vis-a-vis Scheme/Lisp--- is to go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far fewer moving parts. You have type expressions, term expressions, term reduction, and that's it.
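Those three moving parts are few enough to fit in one screenful. Here is a minimal sketch in Haskell --- the names (Ty, Tm, typeOf) and the choice of Bool as the base type are mine, not from any particular text:

```haskell
-- Type expressions: a base type plus the arrow.
data Ty = TBool | Ty :-> Ty
  deriving (Eq, Show)

-- Term expressions: variables, annotated lambdas, application,
-- and two constants to inhabit the base type.
data Tm
  = Var String
  | Lam String Ty Tm
  | App Tm Tm
  | TTrue
  | TFalse
  deriving Show

type Ctx = [(String, Ty)]

-- The entire static semantics: one clause per term former.
typeOf :: Ctx -> Tm -> Maybe Ty
typeOf ctx (Var x)     = lookup x ctx
typeOf ctx (Lam x a b) = (a :->) <$> typeOf ((x, a) : ctx) b
typeOf ctx (App f e)   = do
  a :-> b <- typeOf ctx f        -- the function must have arrow type
  a'      <- typeOf ctx e
  if a == a' then Just b else Nothing
typeOf _   TTrue       = Just TBool
typeOf _   TFalse      = Just TBool
```

For instance, `typeOf [] (Lam "x" TBool (Var "x"))` yields `Just (TBool :-> TBool)`, while `typeOf [] (App TTrue TFalse)` is rejected with `Nothing`. Term reduction is a similarly small recursive function, omitted here.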

Yes.  The simply-typed lambda-calculus presents as a different
sort of thing from the "untyped" lambda calculus and from the
many more complexly typed calculi.

I'd add to the list of components of the base machine of STLC
these things:

1. The model theory, which is close to the model theory of the
  Lower Predicate Calculus.

2. The explication of "execution of a program", which is more
  subtle than anything right at the beginning of the study of
  the Lower Predicate Calculus.  It certainly requires a score
  of definitions to lay it out clearly.

But, to say again, yes the STLC can, like linear algebra 101, be
presented in this way: The machine stands alone in bright
sunlight.  There are no shadows.  Every part can be seen plainly.
The eye sees all and is satisfied.

ad 2: It would be worthwhile to have an introduction to STLC
which compares STLC's explication of "execution of a program"
with other standard explications, such as these:

1. the often not explicitly presented explication that appears in
  textbooks on assembly and introductory texts on computer hardware

2. the usually more explicitly presented explication that appears
  in texts on Basic or Fortran

3. the often explicit explication that appears in texts on Snobol

4. various explications of what a database management system does

5. explications of how various Lisp variants work

6. explications of how Prolog works

7. explications of how general constraint solvers work, including
  "proof finders"


Other lambda calculi add all manner of bells and whistles, but STLC is the core of what lambda calculus and type systems are all about. So you should be familiar with it as a touchstone. After getting a handle on STLC, then it's good to see the Barendregt cube. Don't worry too much about understanding it yet, just think of it as a helpful map of a few of the major landmarks in type theory. It's an incomplete map to be sure. One major landmark that's suspiciously missing lies about halfway between STLC and System F: that's Hindley--Milner--Damas, or ML-style, lambda calculus.[2]

8. explication of how Hindley--Milner--Damas works
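The distinguishing behavior of Hindley--Milner--Damas --- generalization of let-bound (but not lambda-bound) definitions --- can be seen directly in Haskell. A sketch, with names of my own choosing:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Under Hindley-Milner-Damas, a let-bound (or top-level) definition
-- is generalized to a rank-1 polymorphic type, so this single
-- definition of `ident` can be used at two different types below.
ident :: a -> a
ident x = x

okBoth :: (Bool, Int)
okBoth = (ident True, ident 3)

-- A lambda-bound variable is NOT generalized: `\f -> (f True, f 3)`
-- is ill-typed in plain Hindley-Milner. Recovering it requires a
-- rank-2 type, which is exactly the step past the rank-1 "sweet
-- spot" wren mentions in footnote [2]:
useTwice :: (forall a. a -> a) -> (Bool, Int)
useTwice f = (f True, f (3 :: Int))
```

The price of the rank-2 version is that the type must be written by hand; inference stops being complete once quantifiers go left of an arrow.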


After seeing the Barendregt cube, then you can start exploring in those various directions. Notably, you don't need to think about the kind level until you start heading towards LF, MLTT, System Fw, or CC, since those are where you get functions/reduction at the type level and/or multiple sorts at the type level.

Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the starting point and then add some nice things like algebraic data types and type classes (neither of which are represented on the Barendregt cube). This theory is still relatively simple and easy to understand, albeit in a somewhat ad-hoc manner.
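The two headline additions wren names fit in a few lines each. A hedged sketch (the names Shape and HasArea are mine, purely illustrative):

```haskell
-- An algebraic data type: a labeled sum of labeled products.
data Shape
  = Circle Double
  | Rect Double Double
  deriving (Eq, Show)

-- A type class: an interface that types may implement,
-- dispatched on the type rather than on a value.
class HasArea a where
  area :: a -> Double

instance HasArea Shape where
  area (Circle r) = pi * r * r
  area (Rect w h) = w * h
```

Pattern matching on the constructors of `Shape`, as in `area`, is where the "algebraic" structure of the sum-of-products does its work.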

Unexpectedly, to me, a word is missing from explications of
algebraic data types and "pattern matching": "magma".


Modern "Haskell" lives somewhere beyond the top plane of the cube. We have all of polymorphism (aka System F, aka second-order quantification; via -XRankNTypes), most of type operators (i.e., extending System F to System Fw; via type families etc), some dependent types (aka first-order quantification; via GADTs), plus things not represented on the cube (e.g., (co)inductive data types, type classes, etc). Trying to grok all of that at once without prior understanding of the pieces is daunting to be sure.

Yes.



[1] Via Curry--Howard, the pure STLC corresponds to natural deduction for the implicational fragment of intuitionistic propositional logic. Of course, you can add products (tuples), coproducts (Either), and absurdity to get natural deduction for the full intuitionistic propositional logic.
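That correspondence can be written out in Haskell syntax, if one squints past the non-termination that keeps Haskell from being a consistent logic. A sketch: implication is (->), conjunction is (,), disjunction is Either, and absurdity is Void (function names below are mine):

```haskell
import Data.Void (Void, absurd)

-- A proof of A -> (B -> A): the K combinator.
k :: a -> b -> a
k a _ = a

-- A proof of (A and B) -> (B and A).
swapAnd :: (a, b) -> (b, a)
swapAnd (a, b) = (b, a)

-- A proof of (A or B) -> (not A) -> B, reading "not A" as A -> Void.
orElim :: Either a b -> (a -> Void) -> b
orElim (Left a)  notA = absurd (notA a)
orElim (Right b) _    = b
```

Under this reading a well-typed term *is* a natural-deduction proof of its type, and beta-reduction is proof normalization.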

[2] That is: STLC extended with rank-1 second-order quantification, which is a sweet spot in the tapestry of expressivity, decidability, et al. I don't think anyone knows exactly _why_ it's so sweet, but it truly is.

--
Live well,
~wren

Thanks, wren!

oo--JS.



_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe