### Lifted products

```

I don't like Phil's suggestion to have non-lifted products:

* It messes up the uniform semantics for algebraic data types (all lifted).
For example

a) You have to explain that

f ~(z,a) = ...   is the same as   f (z,a) = ...
but
g ~(z:a) = ...   is NOT the same as   g (z:a) = ...

b) You have to explain that if

f (Foo y) = ...

then f is strict if Foo is a constructor of a multi-constructor data type, but
non-strict otherwise (unless "..." is strict in y!)

(Unless non-lifted products are a different construct, which complicates
the language.)

* An alternative is, I suppose, to have both standard, lifted algebraic data
types, and a new form of data construction, namely non-lifted tuples.  I like it
not!

* Lennart says that if the non-lifted products can also have strictness
annotations then it requires parallel evaluation.  I think it's rather
amazing that one can implement non-lifted products without parallelism; doing
so in the presence of strictness annotations makes my head hurt.  I bet
Lennart is right.

Efficiency was not the only reason for having lifted tuples; semantic uniformity
was a major one.

Incidentally, a much less invasive way to achieve what Phil wants
would be to say that there's a ~ stuck on every pattern from a
single-constructor data type (or built-in tuple type?).  Myself, I'd dislike
this, esp if there was no way to "undo" it and recover strict matching, but it
solves Phil's problem without adding new data types.

Simon

```
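Simon's point (a) can be seen in the Haskell that was eventually standardized, where matching on a lifted pair is strict but an irrefutable `~` pattern is not. A minimal sketch (names mine; GHC semantics assumed):

```haskell
-- Matching the (lifted) pair constructor forces the argument to WHNF.
fStrict :: (Int, Int) -> Int
fStrict (_z, _a) = 0

-- An irrefutable pattern never forces the pair.
fLazy :: (Int, Int) -> Int
fLazy ~(_z, _a) = 0

-- fLazy undefined evaluates to 0; fStrict undefined diverges.
```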

```

(This message assumes we head for the strictness-annotation-on-constructor-arg
solution. I'll respond to Phil's comments in my next msg.)

The problem with polymorphic strictness
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
John asks what the problem is with strict constructor args.  As Lennart and
Kevin say, the problem only really arises with function types; for example

data Foo = MkFoo !(Int -> Int)

Operationally the idea is that you evaluate the function before building the
constructor.  That places some new constraints on implementations, but I suspect
it can always be done.

More seriously, as Lennart says, Haskell says that _|_ = (\x -> _|_).
Now, there is no way to find out whether the function given as an
argument to MkFoo is a function which always returns bottom. Consider

case MkFoo (\x -> a complicated calculation involving x, which
            always fails to terminate) of
  MkFoo f -> 0

If the implementation just "evaluates the function" and then wraps it in a MkFoo,
then the result of this expression is just 0.  But if _|_ = (\x -> _|_),
and MkFoo really is strict, then the result should be _|_.

So, as Lennart says, if we allow constructors to be strict in functions
then we have to change the semantics to distinguish _|_ from (\x -> _|_).
I, for one, am deeply reluctant to do so; I certainly have no good handle on
the consequences of doing so.  Does anyone else?

The problem shows up if a constructor is strict in a polymorphic position:

data Baz a = MkBaz !a !a

(consider Baz (Int -> Int))

All this applies equally to polymorphic seq too, of course.

An alternative
~~~~~~~~~~~~~~
We already have a good mechanism for dealing with problems like this; it's

class Data a where
  seq :: a -> b -> b
  -- Other things too?

There would be an instance for class Data on every algebraic data type,
automatically derived.

Then we could write

data Data a => Baz a = MkBaz !a !a

and everything is fine, because now Baz can only be applied to data types, not
functions.  And we get seq too.  The annotation in the MkBaz can be explained by
translation to seq.

Implementations are free to implement seq with a single batch of polymorphic
code if they want, of course.

Ain't that easy?  The only tiresome thing is having to write Data a => in places
where you want a strictness annotation on a polymorphic constructor arg.  But I
don't mind that one bit.

The only infelicity is that in the special case of single constructors with a
single strict arg (ie the kind we need for ADTs) there is no need for the
arg to be in class Data:

data Abstract a = MkAbstract !a

is perfectly ok semantically and pragmatically.  I suppose one could allow
the (Data a =>) constraint to be omitted in this special case.  Or give a
different syntax for ADT decls, as I suggested before.

Simon

```
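Simon's translation of constructor strictness into a class-based `seq` can be sketched in present-day Haskell (the class and instance names here are mine, and the sketch piggybacks on GHC's built-in `seq` rather than a derived instance):

```haskell
-- A hypothetical class-based seq, along the lines of Simon's proposal.
class Seqable a where
  seq' :: a -> b -> b

instance Seqable Int where
  seq' = seq  -- reuse the built-in seq for this sketch

data Baz a = MkBaz a a

-- The annotations in  data Data a => Baz a = MkBaz !a !a  explained by
-- translation: a constructor wrapper that forces both arguments first.
mkBaz :: Seqable a => a -> a -> Baz a
mkBaz x y = seq' x (seq' y (MkBaz x y))
```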

```

I think there is another problem with having strict constructors. It messes
up parametricity even more than fix does. There are two reasons why this
would be a shame:

* Parametricity is cool. It lets you prove lots of interesting theorems
for very little work. These theorems help with program transformation
and the like.

* Some compilers use parametricity. In particular, the justification for
the cheap deforestation method (foldr/build) comes from parametricity. If
parametricity is weakened too much the transformation may become unsafe.

One way to introduce strictness is to use overloading and have a class
Strict with an operation
strict :: a -> a
defined for each type in the class (not including functions unless their
semantics changes, nor unlifted products if they get introduced). Then a
strict constructor would have a class restriction and these would provide
the standard mediation for parametricity.

John.

```

```

So, as Lennart says, if we allow constructors to be strict in functions
then we have to change the semantics to distinguish _|_ from (\x -> _|_).
I, for one, am deeply reluctant to do so; I certainly have no good handle on
the consequences of doing so.  Does anyone else?

I thought this inequality was one of the distinguishing characteristics of
lazy functional programming relative to the standard lambda-calculus. To
quote from Abramsky's contribution to "Research Topics in Functional
Programming":

Let O == (\x.xx)(\x.xx) be the standard unsolvable term. Then

\x.O = O

in the standard theory, since \x.O is also unsolvable; but \x.O is in
weak head normal form and hence should be distinguished from O in our
"lazy" theory.

Gerald

```
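In the Haskell that was eventually standardized, `seq` on a function evaluates only to WHNF, so `(\x -> _|_)` is in fact observably different from `_|_`, just as in Abramsky's lazy theory. A small GHC-checkable illustration (names mine):

```haskell
-- A function that is bottom everywhere, yet is not itself bottom.
botFun :: Int -> Int
botFun = \_x -> error "bottom"

-- seq evaluates only to WHNF; a lambda is already in WHNF.
probe :: Bool
probe = botFun `seq` True
-- By contrast, (error "bottom" :: Int) `seq` True diverges.
```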

### Type signatures

```

Folks,

Warren Burton makes what appears to me to be a Jolly Sensible suggestion about
the syntax of type signatures.  Haskell already has many dual ways of doing
things (let/where, case/pattern-matching).  Warren proposes an alternative
syntax for type signatures.

Simon

--- Forwarded Message

Date: Fri, 01 Oct 93 11:30:10 -0800
From: Warren Burton [EMAIL PROTECTED]
To:   [EMAIL PROTECTED]
cc:   [EMAIL PROTECTED]

Simon,

Your message brought to mind another question.

Do you know why Haskell allows
f a b c = exp
which almost means the same thing as
f = \a -> \b -> \c -> exp
(ignoring the monomorphism restriction), but does not allow
f Int Char (Stk Thing) :: [Thing]
for
f :: Int -> Char -> Stk Thing -> [Thing]

When teaching functional programming I always find the
f :: Int -> Char -> Stk Thing -> [Thing]
form confusing for students, particularly when the function is defined
using the
f a c b = exp
form.

[..omitted...]

--- End of Forwarded Message

```

### re. Arrays and Assoc

```

Thomas Johnsson says:

If I recall correctly, the := to be used in array comprehensions was a
consession to the FORTRAN/Id/Sisal community, so that array comprehensions
would look more like they were used to.

Both Arvind and I think this notation is awful, and I don't recall
either of us ASKING for it, so this was probably someone else's idea
of a ``concession'' to the Id community!

Nikhil

```

### Re: Arrays and Assoc

```

1. We should get rid of Assoc.
I agree wholeheartedly!  Do we have to consider backwards
compatibility?

2. Arrays should be lazier.
I agree again.  But I think both kinds should be provided.

3. AccumArray should mimic foldr, not foldl.
Right!

-- Lennart

```

### Re: re. Arrays and Assoc

```

Nikhil says,

| Thomas Johnsson says:
|
| If I recall correctly, the := to be used in array comprehensions was a
| concession to the FORTRAN/Id/Sisal community, so that array comprehensions
| would look more like they were used to.
|
| Both Arvind and I think this is notation is awful, and I don't recall
| either of us ASKING for it, so this was probably someone else's idea
| of a ``concession'' to the Id community!
|
| Nikhil

All right!  I'm sorry!  ;-)

As I recall, Nikhil is right that neither he nor Arvind asked for this.
Some scientific programmers of my acquaintance did, though.  Id uses
= for this purpose, together with square brackets around the index.
This, of course, was not possible for Haskell.  The motivation was not
so much a "concession" to the Id community, as a concern for the

[((i,j), (f i j, g i j)) |

versus

[(i,j) := (f i j, g i j) |

or Id's

{matrix (1,N),(1,N) | [i,j] = (f i j, g i j) ||

(if I have that somewhere close to right).  The use of := for pairing
(or if you like, binding, or single-assignment) rather than assignment
did have a precedent in Val and Sisal.

All this syntax may seem of little consequence now, but at the time,
there was a genuine concern about the unpalatability of some choices
of syntax to a large community of programmers.

--Joe

```

### Re: Arrays and Assoc

```

John Launchbury says:
1. We should get rid of Assoc.

When explaining my programs to other people I find this is a point of
confusion. Imagine explaining array construction, "When I define an array,
the comprehension produces a list of index/value pairs, only they are not
written as pairs--there's this special type called Assoc. Oh, and don't be
confused by :=. That's not assignment. It is an infix pairing operator."
All of this is entirely unnecessary. Pairs have been used in maths for
decades to represent exactly this sort of thing. I simply do not believe
that [Assoc a b] provides me with any better information than [(a,b)].
Worse, I often find myself having to redefine standard pair functions on
elements of Assoc.

I agree.
If I recall correctly, the := to be used in array comprehensions was a
concession to the FORTRAN/Id/Sisal community, so that array comprehensions
would look more like they were used to.
But := is a bit unintuitive if you're thinking e.g. FORTRAN:
a = array[1 := 2, 2 := 4]
does *not* mean 1 is assigned to 2, etc!

But I think we can have the cake and eat it too, if we get rid of the
restriction (which I never liked) that operators beginning with : must be a
constructor: just define
a := b = (a,b)

[ While I'm at it: we should also get rid of the lower/uppercase
restrictions on constructor/nonconstructor names.
]

2. Arrays should be lazier.

I'm expecting Lennart to agree with me here as LML has the Right Thing. I
am convinced that there is no semantic problem with this, and I think that
even Simon isn't horrified at the implementation implications. The ability
to define arrays by self reference is just as important as it is for lists.

I'm not exactly sure what you mean here. It is already possible to define

I am assuming that the fact that lazy indexes provide a better match with
laziness elsewhere is clear, but I am willing to expand on this point if
someone wants.

3. AccumArray should mimic foldr, not foldl.

This is tied up with the last point. The only advantage I can see with the
present scheme would be if the array element could be used as the
accumulator while the array was under construction. However, as arrays are
non-strict in their *elements* this seems to be of no benefit. It seems to
me highly sensible that the structure of the computation at each point
should reflect the structure of the input sequence (i.e. the elements are
in the same order). Furthermore, if a lazy operation is used (such as (:))
then the result becomes available early (assuming point 2. above).

Again I wholeheartedly agree.
Let me just remind people what LML arrays do:

example:
lmlarray 1 3 f list =
    array [ 1 := f [ x | (1,x) <- list],
            2 := f [ x | (2,x) <- list],
            3 := f [ x | (3,x) <- list]
          ]
where array is like the ordinary Haskell array constructor function.
In the implementation, the filtering needs to be done only once
and not n times, where n is the size of the array.
[ If anyone wants to know how this is done, I could expand on this. ]

It seems to me that it is a bit more general to apply f to the entire
list accumulated at each index, rather than as an operator for foldr.

-- Thomas

```
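Thomas's `lmlarray` can be approximated in present-day Haskell with `Data.Array.accumArray`, accumulating each index's list and applying `f` at the end (a sketch; the name `lmlarray` is kept from his message, the `reverse` restores list order since `accumArray` accumulates from the left):

```haskell
import Data.Array

-- Apply f to the entire list accumulated at each index, as LML does.
lmlarray :: Int -> Int -> ([b] -> c) -> [(Int, b)] -> Array Int c
lmlarray lo hi f xs =
  fmap (f . reverse) (accumArray (flip (:)) [] (lo, hi) xs)
```

For example, `lmlarray 1 3 sum [(1,10),(2,20),(1,5)]` builds a histogram-style array with elements `[15,20,0]`, filtering the list only once rather than once per index.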

### Re: Arrays and Assoc

```

John Launchbury says,
| Here are three comments directed particularly at Haskell 1.3 people, but
| obviously open to general feedback.
|
| 1. We should get rid of Assoc.
|
| When explaining my programs to other people I find this is a point of
| confusion. Imagine explaining array construction, "When I define an array,
| the comprehension produces a list of index/value pairs, only they are not
| written as pairs--there's this special type called Assoc. Oh, and don't be
| confused by :=. That's not assignment. It is an infix pairing operator."
| All of this is entirely unnecessary. Pairs have been used in maths for
| decades to represent exactly this sort of thing. I simply do not believe
| that [Assoc a b] provides me with any better information than [(a,b)].
| Worse, I often find myself having to redefine standard pair functions on
| elements of Assoc.

Mea maxima culpa.  I must admit that the reason for introducing Assoc
was syntactic.  Making a semantic distinction between pairs and assocs
for a syntactic purpose should have set off alarms; somehow, I managed
to ignore them.

At the time this decision was made, arrays and array syntax were something
of a contentious issue.  Even the use of infix ! for indexing was a
source of anguish for potential users of arrays, and the fear was that
pair syntax in "array comprehensions" would be unwieldy, particularly
for multidimensional arrays.  Consider a matrix of pairs (a typical
construction in scientific mesh algorithms).

problem.  Thomas suggests that we could drop the syntactic restrictions
on constructor and nonconstructor symbols and define (:=) as a pairing
function.  That almost does the job, but there are some programs that
pattern-match Assocs.  Also, I think there will be objection in some
quarters to dropping the separation of name spaces.  Here are two more
possibilities:

2.  Provide a way to declare synonyms for constructors, and
use it to equate := with (,).

3.  Don't provide such a general facility, but hack in :=
as a special case (rather like prefix minus).

| 2. Arrays should be lazier.
|
| I'm expecting Lennart to agree with me here as LML has the Right Thing. I
| am convinced that there is no semantic problem with this, and I think that
| even Simon isn't horrified at the implementation implications. The ability
| to define arrays by self reference is just as important as it is for lists.
| I am assuming that the fact that lazy indexes provide a better match with
| laziness elsewhere is clear, but I am willing to expand on this point if
| someone wants.

I agree, but I also agree with Lennart that both sorts of arrays are needed;
too many scientific programs couldn't live without them (or else effects).
Pragmatically, the accumulations in these programs were almost always
sums (histogramming, Monte Carlo tallying).  People needed to be convinced
that this could be done efficiently.

| 3. AccumArray should mimic foldr, not foldl.
|
| This is tied up with the last point. The only advantage I can see with the
| present scheme would be if the array element could be used as the
| accumulator while the array was under construction. However, as arrays are
| non-strict in their *elements* this seems to be of no benefit. It seems to
| me highly sensible that the structure of the computation at each point
| should reflect the structure of the input sequence (i.e. the elements are
| in the same order). Furthermore, if a lazy operation is used (such as (:))
| then the result becomes available early (assuming point 2. above).
|
| John.
|

Agreed again.  The historical reason for the choice of foldl should be
evident from the remarks above.

Since all of these decisions had to do with Id arrays, I'm pleased
to hear from Nikhil that pH people are thinking along the same lines
as John and Lennart.  Consensus!

--Joe

```

### Re: Arrays and Assoc

```

But I think we can have the cake and eat it too, if we get rid of the
restriction (which I never liked) that operators beginning with : must be a
constructor: just define
a := b = (a,b)

Unfortunately that won't work if := had been used in patterns. I think
backward compatibility is an issue. The standard technique of supporting
Assoc but with compiler warnings will probably have to be used.

---

I'm not exactly sure what you mean here. It is already possible to define

Haskell arrays are strict in the indices. That is, the whole of the
defining list is consumed and the indices examined before the array becomes
available. Thus, a recursive array definition in which the *index
calculation* depends on the earlier part of the array gives bottom. The
current definition allows for a recursive definition so long as it is only
the values of the array elements which depend on the array. This is not
always sufficient.

---

Let me just remind people what LML arrays do:

example:
lmlarray 1 3 f list =
    array [ 1 := f [ x | (1,x) <- list],
            2 := f [ x | (2,x) <- list],
            3 := f [ x | (3,x) <- list]
          ]
where array is like the ordinary Haskell array constructor function.
...
It seems to me that it is a bit more general to apply f to the entire
list accumulated at each index, rather than as an operator for foldr.

If you want the list you can supply (:) and []. If not, you supply the
operations, and the intermediate list never gets built.

John.

```
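John's distinction — recursion through the element values works, recursion through the index calculations does not — is worth pinning down with an example that does work in Haskell as defined (code mine):

```haskell
import Data.Array

-- The element values may refer back to the array itself:
fibs :: Int -> Array Int Integer
fibs n = a
  where
    a = array (0, n) [ (i, f i) | i <- [0 .. n] ]
    f 0 = 0
    f 1 = 1
    f i = a ! (i - 1) + a ! (i - 2)

-- But the *indices* in the defining list may not depend on the array:
-- all of them are consumed before any element becomes available.
```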

### Re: Lifted products

```

Oops!  I should have underlined in my last message where I wrote
`newtype' instead of `datatype'.  As a result, Simon seems to have
completely misunderstood my proposal.  Sorry about that.

Simon seems to think I am proposing that if one writes

datatype  T a_1 ... a_k = C t_1 ... t_n

that one gets unlifted tuples.  I am *not* proposing this.  What I
propose is that if one writes

newtype  T a_1 ... a_k = C t_1 ... t_n

then one gets unlifted tuples.  I'm not stuck on the keyword
`newtype', anything other than `datatype' will do.

Simon writes of my true proposal (which he mistakenly labels an
alternative) `I like it not'.  But doesn't say why.  In particular, he
seems not to have hoisted on board that my proposal is just a
*generalisation* of his proposal to write

newtype  T a_1 ... a_k = C t.

to declare a type isomorphic to an existing type.

In particular, if one wants to create a type `New a' isomorphic to an
existing type, Simon would write (by his latest proposal)

datatype  Data a => New a = MakeNew a

whereas I would write

newtype  New a = MakeNew a

So my alternative is simpler in some ways.

Simon also notes that strictness declarations don't seem sensible
for unlifted constructors.  Indeed.  Ban them.  (Again, this is an
argument against something I never proposed.)

I think Simon's other points about ~ patterns are spurious.  But I
don't want to rebut them, because now that I've pointed out that he
misunderstood my proposal, perhaps he no longer holds to them.  Simon
(or anyone else), if you have further arguments against what I *did*

All in the spirit of a quest for the perfect Haskell!  -- P

```
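Haskell 1.3 did adopt essentially this `newtype`, with the unlifted semantics Phil describes: a newtype constructor adds no extra bottom, so matching it is free. A small demonstration (GHC semantics; names mine):

```haskell
newtype Wrap = MkWrap Int

-- A newtype pattern is irrefutable: matching MkWrap forces nothing,
-- because Wrap is isomorphic to Int rather than lifted over it.
unliftedMatch :: Wrap -> Int
unliftedMatch (MkWrap _) = 0
-- unliftedMatch undefined evaluates to 0; with `data' it would diverge.
```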

```

Gerald Ostheimer notes that in Abramsky and Ong's lazy lambda calculus
(\x -> bottom) differs from bottom.  That's correct.

But just because they call it `lazy' doesn't mean that it really is
the essence of laziness.  I prefer to use the more neutral name `lifted
lambda calculus' for their calculus.

An example of a perfectly good lazy language in which neither products
nor functions are lifted is Miranda (a trademark of Research
Software Limited).

Hope this clarifies things,  -- P

```

### Recursive type synonyms

```

While we are proposing things, here's a further suggestion.  It removes
a restriction, and makes the language definition simpler.
It is fully backward compatible.

The suggestion is:

Remove the restriction that type synonym
declarations must not be recursive.

In other words, one could write things like

type  Stream a  =  (a, Stream a)

which is equivalent to the type (a, (a, (a, ...))).

The only reason we included the restriction at the time was

(a)  it makes unification easier to implement
(b)  it was more standard
(c)  there didn't seem any compelling reason *not*
to include the restriction.

Guy Steele has since pointed out several compelling examples
where it would be *much* easier not to have the restriction,
and I've encountered a few myself.  Let's trash it!

The obvious way to go is for someone to implement it first, to
make sure it's not difficult.  Mark Jones, have you tried this
in Gofer yet?

Cheers,  -- P

---
Department of Computing Science        tel: +44 41 330 4966
University of Glasgow                  fax: +44 41 330 4913
Glasgow G12 8QQ, SCOTLAND

```
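For the record, the restriction survived: Haskell still rejects recursive type synonyms, and the standard workaround is to tie the knot with a `newtype`, at the cost of an explicit constructor (sketch mine):

```haskell
-- type Stream a = (a, Stream a)   -- still illegal in Haskell
newtype Stream a = Cons (a, Stream a)

headS :: Stream a -> a
headS (Cons (x, _)) = x

ones :: Stream Int
ones = Cons (1, ones)  -- the infinite stream (1, (1, (1, ...)))
```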

### Arrays and Assoc

```

John Launchbury makes the suggestion, inter alia, that Haskell 1.3
`should get rid of Assoc.'

Reading some of the followup messages, I see that there is some
division on this point.  Those closer to the scientific applications
community, such as Nikhil and Joe Fasel's acquaintances, seem to be
warmed by the familiar sight of `:=', whereas the more
pure-mathematically motivated commentators seem to find the (assuredly
equivalent) pair constructor more congenial.

John's suggestion will definitely stop old code dead in its tracks
(namely, in the type-checker).

Clearly, what's needed to satisfy all parties and make Haskell 1.3 the
rousing success that it deserves to be is to introduce a class
`Associator' with methods `key', `image', `associate', `toPair',
`toAssoc'.  Then the array prelude functions could be redefined in
terms of the class by (1) pattern-matching on `toAssoc assoc' instead
of `assoc' for each variable assoc :: Assoc, and (2) replacing
explicit applications of the constructor `:=' by `associate'.  I don't
think user code would have to change, but users might wonder about the
new inferred type constraints on their array code.

Of course, to recover efficiency, all Haskell implementors will have
to treat the class `Associator' specially so that no dictionary usage
is actually produced (as long as the users haven't perversely
introduced their own instances, which suggests some wondrous new
interpretations of the concept `array').

I intended this message to be humorous when I started, but I'm
beginning to think this is a reasonable approach to such matters.  So
let's generalize with wild abandon: what would be the consequences of
automatically deriving a class abstraction for _every_ Haskell data
type?  Even function types are eligible via the abstract operation
`apply'.  What new vistas now unfold?

-
Dan Rabin                           I must Create a System
Department of Computer Science      or be enslav'd by another Man's.
P.O. Box 208285                     I will not Reason & Compare:
New Haven, CT 06520-8285            my business is to Create.

[EMAIL PROTECTED]                   -- William Blake, `Jerusalem'
-

```
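Dan's half-joking `Associator' class is easy enough to write down; a sketch using the method names from his message (the signatures are my guesses):

```haskell
-- key and image project the two halves; associate builds one.
class Associator t where
  key       :: t k v -> k
  image     :: t k v -> v
  associate :: k -> v -> t k v

-- Plain pairs are the obvious instance.
instance Associator (,) where
  key       = fst
  image     = snd
  associate = (,)
```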

```

I have been following this discussion with interest and I'd like
some clarification.

But just because they call it `lazy' doesn't mean that it really is
the essence of laziness.

What is really being called `lazy', and how is the `essence of
laziness' defined?

Also, forgive my ignorance, but what does it mean that 'products
or functions are lifted'?

Thanks,

Sergio Antoy
Dept. of Computer Science
Portland State University
P.O.Box 751
Portland, OR 97207
voice +1 (503) 725-3009
fax   +1 (503) 725-3211
internet [EMAIL PROTECTED]

```