type & kind inference

1996-03-11 Thread smk

% You can apply any constructor e1 with a kind (k1 -> k2), be it a
% variable, constant, or application, to any other constructor e2 of
% kind k1, forming a (possibly partial) application (e1 e2), as described
% on pages 31--32 of the (draft) report.  As you say, this permits currying.

Yes, but these pages don't say anything about the restriction
w.r.t. type synonyms, and I think that's the place where this should
be stated -- it's a restriction on type( expression)s, not a
restriction on type synonym declarations.
(And what about type identifiers coming from other modules;
can we curry them iff they weren't originally type synonyms?)

I would suggest solving this on the level of kinds, e.g.
distinguish the kinds k1->k2 and k1=>k2; the latter would be the one
that always has to be fully applied, and kind inference would always
infer the former.  Type application (f a) would allow both kinds
k1->k2 and k1=>k2 as the kind for f.
The kind of a type synonym T a1...an = rhs would be k1=>...=>kn=>k, etc.
(Strictly, this gives more kinds than we need and want,
but this would only matter if we had explicit kind declarations.)
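
The two-arrow idea can be sketched as a toy kind checker; the names Kind, Arr, SynArr, applyKind, and argumentOk below are mine, purely for illustration, not from the report:

```haskell
-- A sketch of the proposed kind language: ordinary arrow kinds (k1 -> k2)
-- and synonym arrow kinds (k1 => k2), which must always be fully applied.
data Kind = Star
          | Arr Kind Kind     -- k1 -> k2: may be partially applied
          | SynArr Kind Kind  -- k1 => k2: must be applied immediately
          deriving (Eq, Show)

-- Application (f a) accepts either arrow kind for f, as proposed above.
applyKind :: Kind -> Kind -> Maybe Kind
applyKind (Arr k1 k2)    k | k == k1 = Just k2
applyKind (SynArr k1 k2) k | k == k1 = Just k2
applyKind _              _           = Nothing

-- A constructor used as an *argument* (e.g. the f instantiating a type
-- variable of higher kind) must have an ordinary kind; synonym arrows
-- are rejected there, which is where the no-currying restriction bites.
argumentOk :: Kind -> Bool
argumentOk (SynArr _ _) = False
argumentOk _            = True
```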

Considering my original problem about the interaction between type
inference and kind inference, there was a little misunderstanding with
what I meant by types (\x.x->x) etc.
Sure, these beasts don't exist in the syntax, but I was talking about
the (static) semantics --- there's conceptually a difference between
types and type expressions, e.g. () and (()) are different type
expressions but denote the same type.  In the static semantics such a
thing as (\x.x->x) does exist, it's simply the semantic value the type
constructor T of the synonym
type T x = x -> x
is bound to.

I can elaborate a bit with the earlier example:
data App f a = A (f a)
value = A ()
As explained by Lennart and Mark, the value declaration does not
type-check, because e.g. App (\f.f) () would violate the
no-partial-applications-of-type-synonyms principle ((\f.f) would
correspond to a type synonym).
However, there is no partial application of a type synonym
in the above program TEXT, it only exists "semantically" when one
tries to infer a type for value.
The restriction on the currying of type synonyms is therefore
a semantic restriction on types, not just a syntactic restriction on
type expressions.
I don't think this is obvious.

Having said all that, I'm probably a bit spoilt by the style of the
SML definition which I know quite well.

Stefan Kahrs

type & kind inference

1996-03-09 Thread smk

%But there are no types `(\x->x)' or `(\x->())' in Haskell.

%So the expression does not typecheck (at least that is
%my understanding of how it works).
%You might think that
%  type I x = x
%and then using I alone would give you the type `(\x->x)',
%but partial application of type synonyms is not allowed,
%thus sidestepping the problem (thanks to Mark Jones for
%that little trick).

Are partial applications of data-types allowed then?
If not, the higher kinds would not make sense.

If they do, what kind of partial applications are allowed?
Just the data-type identifier without arguments, or is it possible
to use them curried (the kind (*->*)->*->* suggests currying)?

The report has a pointer to a paper by Mark Jones, where this is
presumably explained.  But I don't think pointing to papers is good
enough for a language definition.  At least it should be clear from
the report what a type is.

Stefan Kahrs

Haskell 1.3, monad expressions

1996-03-08 Thread smk


add another form of statement for monad expressions:

stmts -> ...
         | if exp

which is defined for MonadZero as follows:

do {if exp ; stmts} = if exp then do {stmts}
                             else zero

Based on this, one can define list comprehensions by

[ e | q1,...,qn ] = do { q1' ; ... ; qn'; return e }

where either  qi' = if qi  (whenever qi is an exp)
or  qi' = qi  (otherwise).
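
For comparison, later Haskell expresses essentially this translation with `guard` from `Control.Monad` playing the role of the proposed "if exp" (for lists, `zero` is `[]`); a minimal sketch:

```haskell
import Control.Monad (guard)

-- [ x*x | x <- xs, odd x ]  written out in do-notation:
squaresOfOdds :: [Int] -> [Int]
squaresOfOdds xs = do
  x <- xs          -- generator qualifier
  guard (odd x)    -- boolean qualifier, the proposed "if exp"
  return (x * x)
```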

Stefan Kahrs

Re: Haskell 1.3 (lifted vs unlifted)

1995-09-12 Thread smk

John Hughes mentioned a deficiency of Haskell:
  OK, so it's not the exponential of a CCC --- but 
  Haskell's tuples aren't the product either, and I note the proposal to 
  change that has fallen by the wayside. 

and Phil Wadler urged to either lift BOTH products and functions,
or none of them.

My two pence:
If functions/products are not products and exponentials of a CCC, you
should aim for the next best thing: an MCC, a monoidal closed category.
But Haskell's product isn't even monoidal:

There is no type I such that A*I and A are isomorphic.
The obvious candidate (in a lazy language) would be
the empty type 0, but A*0 is not isomorphic to A but to the lifting of A.

Another problem: the function space  A*B -> C  should be naturally
isomorphic to  A -> (B -> C).  What does the iso look like?
One half is the obvious curry function:

curry f x y = f(x,y)

But what is the other half?  Apparently, it should be either

uncurry1 f (x,y) = f x y

or

uncurry2 f (~(x,y)) = f x y

Which one is right depends on which one establishes
the isomorphism.  Consider the definition

f1 (x,y) = ()

uncurry1 (curry f1) undef =
undef =
f1 undef

while on the other hand:
uncurry2 (curry f1) undef =
curry f1 (p1 undef, p2 undef) =
f1 (p1 undef, p2 undef) =
() =/=
f1 undef

This suggests that uncurry2 is wrong and uncurry1 is right, but for

f2 (~(x,y)) = ()

the picture is just the other way around.
BTW, it doesn't help to employ "seq" in the body of curry.
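
To make the comparison concrete, here are the definitions in runnable form (`curry'`, `uncurry1`, `uncurry2` renamed to avoid clashing with the Prelude):

```haskell
-- The two candidate inverses to curry, differing only in pattern strictness.
uncurry1 :: (a -> b -> c) -> (a, b) -> c
uncurry1 f (x, y) = f x y      -- strict: forces the pair constructor

uncurry2 :: (a -> b -> c) -> (a, b) -> c
uncurry2 f ~(x, y) = f x y     -- irrefutable: never forces the pair

curry' :: ((a, b) -> c) -> a -> b -> c
curry' f x y = f (x, y)

f1 :: (a, b) -> ()
f1 (x, y) = ()                 -- strict in the pair

f2 :: (a, b) -> ()
f2 ~(x, y) = ()                -- lazy in the pair
```

Evaluating `uncurry2 (curry' f1) undefined` yields `()`, whereas `f1 undefined` diverges, exhibiting the failure of the isomorphism described above.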

Looks rather messy.
Can some of this be salvaged somehow?

Stefan Kahrs

Defining datatype selector functions

1993-11-19 Thread smk

Mark's suggestion for declaring the selector functions
within the datatype already exists in the language OPAL
(developed at TU Berlin).  Besides selectors, you also get the
test predicates there.

Example: the declaration

DATA list == empty
          :: (first: char, rest: list)

introduces a type list, constructors empty and ::,
selector functions first and rest, and also test predicates
empty? and ::? .
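
Haskell later acquired a related mechanism in field labels, which generate the selectors automatically; the test predicates still have to be written by hand. A rough analogue of the OPAL declaration (names mine):

```haskell
-- Field labels give the selectors `first` and `rest` for free;
-- the test predicates isEmpty/isCons must be defined explicitly.
data List = Empty
          | Cons { first :: Char, rest :: List }

isEmpty :: List -> Bool
isEmpty Empty = True
isEmpty _     = False

isCons :: List -> Bool
isCons (Cons _ _) = True
isCons _          = False
```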

Concerning Mark's comment that

MJ  o  Datatype definitions are already rather complicated.  In one
MJ concise notation, they allow us to specify:
MJ   - a new type constructor,
MJ   - a family of new value constructors, and
MJ   - (implicitly, and often neglected) a means of pattern
MJ matching against values of the new type.

I should add that a datatype definition does even more:
it also solves a recursive type equation (involving a coproduct type)
and it abstracts type variables from the declaration.
One could separate these concepts, although the lack of explicit
type variable abstraction would only be apparent if Haskell had
parameterised modules.

Stefan Kahrs

Re: re. 1.3 cleanup: patterns in list comprehensions

1993-10-15 Thread smk

   Parsing Haskell list comprehensions deterministically ((LA)LR) is currently very
   hard, since both "pat <- exp" (or also "pat gd <- exp", as suggested by Thomas)
   and "exp" are allowed as qualifiers in a list comprehension. Patterns and
   expressions can look very much alike.  Could one possibly expand "exp" to
   "if exp" in Haskell 1.3 list comprehensions?  Only to make deterministic
   parsing easier...

I once had the same problem when writing a Miranda-ish compiler.
The simple solution is to use "exp" instead of "pat" in the parsing
grammar (in qualifiers), and when it later (e.g. on encountering <- )
becomes clear that the beast has got to be a pattern,
you check it semantically.  This works because patterns
are syntactically a subclass of expressions.
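
The trick can be sketched over a toy AST; `Expr`, `Pat`, and `exprToPat` are hypothetical names, not taken from any actual compiler:

```haskell
-- Toy expression and pattern ASTs.  Patterns are a syntactic subset of
-- expressions, so a post-parse conversion can recover them.
data Expr = EVar String
          | ECon String [Expr]
          | EApp Expr Expr
          deriving Show

data Pat = PVar String
         | PCon String [Pat]
         deriving (Eq, Show)

-- Reinterpret a parsed expression as a pattern once a `<-` is seen;
-- fails (Nothing) on forms that are not legal patterns.
exprToPat :: Expr -> Maybe Pat
exprToPat (EVar v)    = Just (PVar v)
exprToPat (ECon c es) = PCon c <$> traverse exprToPat es
exprToPat (EApp _ _)  = Nothing   -- general application is not a pattern
```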

One should not let the parsing method influence the language design
too much; PASCAL is a bad example of this.

Stefan Kahrs

Polymorphic recursive calls possible via type classes

1993-07-28 Thread smk

Phil's (Robin Milner's ??) example typechecks in ML.

%Incidentally, there has been a lot of work done on Milner's type
%system extended to allow polymorphic recursion; this system is
%sometimes called ML+.  One motivating example -- which I first
%heard from Milner -- is:
%  data  Trie x = Atom x | List [Trie x]
%  maptrie :: (a -> b) -> Trie a -> Trie b
%  maptrie f (Atom x)=  Atom (f x)
%  maptrie f (List xts)  =  List (map (maptrie f) xts)

The reason why there is no problem is that type Trie has a regular
recursion.  One could modify the example as follows to get the desired effect:

   data Trie x = Atom x | List (Trie [x])

   maptrie f (Atom x) = Atom (f x)
   maptrie f (List xts) = List (maptrie (map f) xts)

Frankly, I don't find this example a particularly convincing
justification for adding power to the type system.
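
For what it's worth, a present-day Haskell compiler accepts this modified example, because the explicit type signature licenses the polymorphic recursion; a runnable sketch (the `size` function is my own addition):

```haskell
-- The non-regular ("nested") variant: the recursion is at Trie [x].
data Trie x = Atom x | List (Trie [x]) deriving (Eq, Show)

-- The explicit signature is what licenses the polymorphic recursion:
-- the recursive call is used at type ([a] -> [b]), a proper instance.
maptrie :: (a -> b) -> Trie a -> Trie b
maptrie f (Atom x)   = Atom (f x)
maptrie f (List xts) = List (maptrie (map f) xts)

-- Count the atoms, again by polymorphic recursion.
size :: Trie a -> Int
size (Atom _) = 1
size (List t) = size t
```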

Here is my favourite example,
de Bruijn lambda-calculus without any natural numbers:

data Opt x = ONE | An x
data Term x = Var x | App (Term x) (Term x) | Lam (Term (Opt x))

substitute :: (a -> Term b) -> Term a -> Term b

substitute f (Var v) = f v
substitute f (App a b) = App (substitute f a) (substitute f b)
substitute f (Lam t) =
    Lam (substitute (\x ->
            case x of {
                ONE  -> Var ONE;
                An y -> substitute (Var . An) (f y) }
        ) t)

Again this does not typecheck, because Term has an irregular recursion
and substitute uses proper instances of its type in the last clause.

Stefan Kahrs

Re: Successor patterns in bindings and n+k patterns

1993-05-19 Thread smk

Another strange thing about n+k patterns.

Its definition uses >= , but >= is not part of the class Num.
Does that mean that n+k patterns have to be instances of class Real?

One could leave it in class Num, if the translation were expressed
in terms of "signum" rather than ">=".

Can one misuse the feature of n+k-patterns to simulate
n*k+k' patterns?  [I am talking about weird user-defined
instances of Num.]

Stefan Kahrs

type inference and semi-unification

1993-02-26 Thread smk

Lennart mentioned this recently; the situation is as follows:

1. semi-unification (the general one) is undecidable
2. it is semi-decidable
3. the semi-decision procedure for semi-unification can be augmented
   with a heuristic that finds almost all cases in which it fails
   (this is a kind of modified occur-check); it is "well-behaving"
   in this sense.
You can modify an ordinary (unification-based) type-inference
algorithm as follows to get the more general types:
when type-checking a recursive definition, say

letrec  f = ... f . f ... 

use a fresh type variable for every occurrence of f and proceed as
usual.  If for the remaining parts (the ...) the type-checking has
succeeded, the algorithm inspects what happened to all those type
variables for f, i.e. which types have been substituted for them.
In a unification-based setting you had to unify them, here: you have
to semi-unify them, i.e. suppose the types of those three occurrences
are as follows

letrec  f = ... f . f ...
        t1      t2  t3

then we have to solve the semi-unification problem (notice the
difference between using and defining occurrence)
t1 <= t2
t1 <= t3
i.e. we have to find substitutions sigma, tau1, tau2, such that
tau1(sigma(t1)) = sigma(t2)
tau2(sigma(t1)) = sigma(t3)

The type sigma(t1) is the type for our f, and we have to apply sigma
throughout the range of our type-inference.
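
The tau_i half of each inequation is plain one-sided matching (unification where only the left side may be instantiated); a minimal sketch over a toy type language, all names hypothetical:

```haskell
data Ty = TVar String | TCon String | TArr Ty Ty deriving (Eq, Show)

type Subst = [(String, Ty)]

-- One-sided matching: find tau with  tau(t1) == t2, leaving t2 untouched.
-- This solves the tau_i part of a semi-unification inequation t1 <= t2.
match :: Ty -> Ty -> Maybe Subst
match t1 t2 = go t1 t2 []
  where
    go (TVar v) t s = case lookup v s of
      Nothing           -> Just ((v, t) : s)   -- bind the variable
      Just t' | t' == t -> Just s              -- consistent rebinding
      _                 -> Nothing             -- conflicting bindings
    go (TCon c)   (TCon c')    s | c == c' = Just s
    go (TArr a b) (TArr a' b') s = go a a' s >>= go b b'
    go _ _ _ = Nothing
```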

One can enhance the performance of the algorithm by the usual
Haskell/Miranda dependency analysis for letrecs (to be honest, I'm not
sure that it is really an improvement -- the semi-unification
algorithm does very similar things anyway), which basically reduces
bigger semi-unification problems to smaller ones.

Stefan Kahrs

Strictness in Haskell

1992-04-08 Thread smk

I like John's idea with a class Strict, but I think there should also be
a second class Eval for computing whnf's:

class Strict a where
    strict :: a -> Bool

class Eval a where
    eval :: a -> Bool

Example: for Complex we get:

instance Strict a => Strict (Complex a) where
    strict (a:+b) = strict a && strict b

instance Eval (Complex a) where
    eval (a:+b) = True

Based on this we can define Miranda's "force" and "seq":

force :: Strict a => a -> a
force x | strict x = x

seq :: Eval a => a -> b -> b
seq x y | eval x = y
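
With the stripped operators restored, the proposal runs as it stands; the Int and pair instances below are my own illustrations, and seq is renamed seq' to avoid clashing with today's Prelude.seq:

```haskell
class Strict a where
    strict :: a -> Bool    -- demand the value hyperstrictly

class Eval a where
    eval :: a -> Bool      -- demand weak head normal form only

-- Illustrative instances (not in the original proposal):
instance Strict Int where
    strict n = n `seq` True

instance (Strict a, Strict b) => Strict (a, b) where
    strict (a, b) = strict a && strict b

instance Eval (a, b) where
    eval (a, b) = True     -- the pattern match alone forces the constructor

force :: Strict a => a -> a
force x | strict x = x

seq' :: Eval a => a -> b -> b
seq' x y | eval x = y
```

Note that `seq'` forces only the pair constructor, so undefined components are harmless, whereas `force` traverses the whole structure.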


Division, remainder, and rounding functions

1992-02-17 Thread smk


Your definition of divFloorRem (and probably divCeilingRem as well)
doesn't seem to be quite right: I end up with
(-4) `mod` (-3) == -4
because divTruncateRem (-4) (-3) is (1,-1).

The condition in the "if" should be something like
signum r == - signum d
rather than r < 0.  (Something like this is in the definition of
`mod` on page 89 in 1.2beta.)
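
With the corrected sign test the derivation can be written out and checked; `divTruncateRem` and `divFloorRem` below follow my reading of the proposal, using quot/rem as the truncating pair:

```haskell
-- Truncating division: quotient rounded toward zero.
divTruncateRem :: Integral a => a -> a -> (a, a)
divTruncateRem n d = (n `quot` d, n `rem` d)

-- Flooring division, derived by adjustment.  The test compares the signs
-- of remainder and divisor; a plain r < 0 test would wrongly adjust
-- (-4) `divTruncateRem` (-3) == (1,-1), giving a mod result of -4.
divFloorRem :: Integral a => a -> a -> (a, a)
divFloorRem n d =
  let (q, r) = divTruncateRem n d
  in if signum r == - signum d then (q - 1, r + d) else (q, r)
```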

Stefan Kahrs