Re: Monomorphism, monomorphism...

2001-10-24 Thread Hannah Schroeter

Hello!

On Mon, Oct 08, 2001 at 07:38:09PM +, Marcin 'Qrczak' Kowalczyk wrote:
 Mon, 8 Oct 2001 11:35:48 +0200, Hannah Schroeter [EMAIL PROTECTED] writes:

  Now, with the typical dictionary implementation of type classes,
  this wouldn't really be too difficult.

 Dictionaries would have to be made hashable and comparable. For a sane
 semantics you can't compare their identities, because dictionaries
 of instances of the same type might be created independently in
 separate modules and would be treated as different. So we would either
 need to extend the representation of each dictionary to include a
 runtime identification of the type, or somehow guarantee to share
 the representation of equal dictionaries.

I don't think so. Why not create a dictionary record while compiling
the associated instance (which may, by the H'98 definition, occur
only once in the program)? Then the instance is associated with one
symbol (in the C linker sense), which you refer to every time you must
conjure up a dictionary to pass into functions. The symbol is resolved
to the same address on every reference by the linker, and voilà,
there's the identity-based dictionary identification.
And hashing (void*)'s isn't too difficult, either.

And I think just referring to the same dictionary record over and over
is also cheaper than creating new dictionary records from time to time.

I.e. compile

instance Num Int where
  ...

-> code for the methods, and
dict_num_int:
(some tag marking it as a dictionary which is static and must
never be touched by the GC)
reference to method for (+)
reference to method for (-)
...

foo :: Num a => a -> a
foo x = x + 1

foo_impl(struct dict_num* num_dict, HS_OBJ* x, Continuation* c)
{
    HS_OBJ* one = num_dict->fromInteger(INTEGER_SMALL_LIT(1));
    HS_OBJ* result = num_dict->plus(x, one);
    TAIL_CALL_CONTINUATION(c, result);
}

bar :: Int -> Int
bar x = foo x

bar_impl(HS_OBJ* x, Continuation* c)
{
    // as a tail call
    foo_impl(dict_num_int, x, c);
}

And here, you use the same dictionary address as on every other call site
that has to pass a Num a => a argument from a monomorphic Int value.
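
In Haskell terms the same scheme could be sketched with explicit
dictionary records (the NumDict type and the *D field names below are
made up for illustration, not GHC's actual runtime representation):
every call site at Int refers to the single top-level dictNumInt value.

data NumDict a = NumDict
  { plusD        :: a -> a -> a
  , fromIntegerD :: Integer -> a
  }

-- One shared record per instance: the analogue of the static
-- dict_num_int symbol above.
dictNumInt :: NumDict Int
dictNumInt = NumDict { plusD = (+), fromIntegerD = fromInteger }

-- foo :: Num a => a -> a, after dictionary-passing translation.
fooImpl :: NumDict a -> a -> a
fooImpl d x = plusD d x (fromIntegerD d 1)

-- bar :: Int -> Int always passes the same dictionary value.
barImpl :: Int -> Int
barImpl = fooImpl dictNumInt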

  C++ templates require having source available in each place a template
  is used.

  No. The standard specifies exported templates. It's only that
  nearly no implementation supports that part of the ISO standard.

 Ok, I was talking about the traditional model of compilation of C++.
 That's why nobody fully implemented 'export' yet: it doesn't fit it.

It would fit; one would just have to give up the one-source-at-a-time-
and-produce-only-traditional-object-code compilation model.

 [...]

  But at least, C++ templates have pattern matching ([partial]
  specialization!) and are Turing complete, albeit in a very
  strange way of expression.

 That's why they have a limited depth of recursion (I think 31 is the
 required minimum).

Yes, of course. It's a similar problem to the one in Cayenne, where type
checking could take infinite time, too. (And what about current ghc Haskell,
with multi-parameter type classes, overlapping instances, perhaps
even -fallow-undecidable-...?!)

  The C++ way wouldn't work for Haskell in general because sometimes
  the depth of instances created is not bounded statically,

  Can you name such a case in standard H'98?

 Polymorphic recursion:

 f :: Show a => Int -> a -> String
 f n x | null (drop n str) = f n (x,x)
       | otherwise = str
   where
     str = show x

Oh... Good example, thanks.

 [...]

Kind regards,

Hannah.




Re: Monomorphism, monomorphism...

2001-10-24 Thread Lennart Augustsson

Hannah Schroeter wrote:

 I don't think so. Why not create a dictionary record while compiling
 the associated instance (which may, by the H'98 definition, occur
 only once in the program)?

One problem is that certain Haskell programs need an unbounded
number of instances.

-- Lennart







Re: Monomorphism, monomorphism...

2001-10-24 Thread Marcin 'Qrczak' Kowalczyk

Wed, 24 Oct 2001 10:36:22 +0200, Hannah Schroeter [EMAIL PROTECTED] writes:

 Why not create a dictionary record while compiling the associated
 instance (which may, by the H'98 definition, occur only once in
 the program)?

Instances with contexts are commonly represented as functions which
make dictionaries from other dictionaries. So although there is only
one such function, it may be applied many times to the same argument,
and we want to unify the results of those applications, pretending that
there exists only one instance for Eq Int.
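
As a hedged sketch of what such dictionary-making functions look like
(the EqDict type below is made up for illustration): the instance
Eq a => Eq [a] becomes a function from dictionaries to dictionaries,
and applying it twice to the same argument yields two structurally
equal but separately allocated records.

data EqDict a = EqDict { eqD :: a -> a -> Bool }

dictEqInt :: EqDict Int
dictEqInt = EqDict { eqD = (==) }

-- instance Eq a => Eq [a] turns into a dictionary-making function.
dictEqList :: EqDict a -> EqDict [a]
dictEqList d = EqDict { eqD = go }
  where
    go []     []     = True
    go (x:xs) (y:ys) = eqD d x y && go xs ys
    go _      _      = False

-- Two independent applications: semantically the same instance,
-- but nothing forces the two records to be shared.
d1, d2 :: EqDict [Int]
d1 = dictEqList dictEqInt
d2 = dictEqList dictEqInt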

 Yes, of course. It's a similar problem to the one in Cayenne, where type
 checking could take infinite time, too. (And what about current ghc Haskell,
 with multi-parameter type classes, overlapping instances, perhaps
 even -fallow-undecidable-...?!)

The same.

module Test where
class C a where f :: a
instance C a => C a
main = print (f :: ())

[qrczak ~]$ ghc -c -fglasgow-exts -fallow-undecidable-instances Test.hs

Test.hs:4:
Context reduction stack overflow; size = 21
Use -fcontext-stack20 to increase stack size to (e.g.) 20
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`C ()' arising from use of `f' at Test.hs:4
`f at [()]' arising from use of `f' at Test.hs:4
When generalising the type(s) for main

-- 
 __(  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTĘPCZA
QRCZAK





Re: Monomorphism, monomorphism...

2001-10-10 Thread Marcin 'Qrczak' Kowalczyk

09 Oct 2001 13:55:04 -0700, Carl R. Witty [EMAIL PROTECTED] writes:

 The TREX paper from Mark Jones and Benedict Gaster (I hope I
 have the names right) had both extensible records and extensible
 variants (extensible variants being what you would need to implement
 downcasts),

I don't think so. Here is how the requirement can be formulated:

We need some type T such that it's possible to define a family of
functions for arbitrary choices of A:
upA   :: A -> T
downA :: T -> Maybe A
satisfying downA (upA a) = Just a. We want to choose the type for T
before deciding the exact set of types for A.
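
(For what it's worth, the Dynamic extension mentioned later in this
thread provides essentially this interface, in today's terms
Data.Dynamic, with a Typeable constraint standing in for the
"arbitrary choices of A". A minimal sketch, shown for A = Int:)

import Data.Dynamic (Dynamic, toDyn, fromDynamic)

upInt :: Int -> Dynamic
upInt = toDyn

downInt :: Dynamic -> Maybe Int
downInt = fromDynamic

-- downInt (upInt 42)    == Just 42
-- downInt (toDyn "abc") == Nothing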

-- 
 __(  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTĘPCZA
QRCZAK





Re: Monomorphism, monomorphism...

2001-10-10 Thread Carl R. Witty

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:

 09 Oct 2001 13:55:04 -0700, Carl R. Witty [EMAIL PROTECTED] writes:
 
  The TREX paper from Mark Jones and Benedict Gaster (I hope I
  have the names right) had both extensible records and extensible
  variants (extensible variants being what you would need to implement
  downcasts),
 
 I don't think so. Here is how the requirement can be formulated:
 
 We need some type T such that it's possible to define a family of
 functions for arbitrary choices of A:
 upA   :: A -> T
 downA :: T -> Maybe A
 satisfying downA (upA a) = Just a. We want to choose the type for T
 before deciding the exact set of types for A.

It is true that extensible variants do not give you a single, closed
type T with this property.  

However, I think it is likely that any program you wanted to write
using these functions could instead be written in a system with
extensible variants.  Functions would be polymorphic over the exact
variant type, and the typechecker would stitch together all the
requirements on the variant type so that when compiling main, it
could decide what the actual type was.  (I say likely because I'm
not certain that the type inference described in the Gaster/Jones
paper would work the way I want it to, since I've never used a system
that implemented extensible variants.)

Carl Witty





Re: Monomorphism, monomorphism...

2001-10-10 Thread Ashley Yakeley

At 2001-10-10 03:59, Marcin 'Qrczak' Kowalczyk wrote:

We need some type T such that it's possible to define a family of
functions for arbitrary choices of A:
upA   :: A -> T
downA :: T -> Maybe A
satisfying downA (upA a) = Just a. We want to choose the type for T
before deciding the exact set of types for A.

I'm convinced extensible datatypes are the cleanest and most in-spirit 
extension to Haskell to solve this.

data T = _;

...

data T |= MkAT A;
upA = MkAT;
downA (MkAT a) = Just a;
downA _ = Nothing;


-- 
Ashley Yakeley, Seattle WA





Re: Extensible downcasts impossible in Haskell? (was Re: Monomorphism, monomorphism...)

2001-10-09 Thread Marcin 'Qrczak' Kowalczyk

Tue, 9 Oct 2001 10:50:19 +1300, Tom Pledger [EMAIL PROTECTED] writes:

 I'm curious about this impossibility.
 
   - Is it well known?  If so, would someone please refer me to a paper
 or posting which explains it?

I don't know. I'm not even sure if some clever encoding couldn't
express it, but I can't imagine how it could do it and I would guess
that it's impossible.

It's not easy to formulate the question precisely but I'm quite
sure that it's a well defined problem.

You want to be able to embed arbitrary Haskell types in subtypes.
So it doesn't suffice to make a universal type with many useful
concrete types under constructors, which emulates dynamically
typed languages with a fixed number of types, like Lisp or Erlang
(I think): an interface of an abstract type not included in this
set may have binary methods or the like, so you can't just wrap the
interface in a tuple of functions whose types are taken from the
included set only.
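
A hedged illustration of the closed-universal-type approach (the Univ
type below is made up): adding a new abstract type from another module
means editing this one declaration, and a binary operation on the new
type cannot be expressed through the constructors already present.

-- A closed universal type: fine for a fixed set of concrete types...
data Univ
  = UInt Int
  | UString String
  | UList [Univ]

-- ...but an abstract type with a binary method defined elsewhere,
-- say  union :: Set -> Set -> Set,  cannot be squeezed in after the
-- fact: wrapping Set as a record of functions over Univ loses the
-- binary method, whose second argument needs the concrete Set.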

   - Does it just affect Haskell 98, or does it have deep implications
 for any future language extensions?

IMHO it's independent from most extensions.

The only extension I know of which could be used for implementing
downcasts is Dynamic. Some extensible algebraic types (where definition
of constructors is spread among an open set of modules) would allow
downcasts too I think.

   - How does it relate to the alternative record mechanism idea you
 mentioned a while ago?
 
 http://haskell.org/pipermail/haskell/2000-December/000213.html

My mechanism doesn't allow downcasts, although it promotes a style
of programming which probably wants to use downcasts sometimes.

It's described as a translation to Haskell 98 with multiparameter
classes, fundeps, the ability to gather label names from all
modules and associate each name with a unique class, and some
small adjustments of the type system (related to the termination of
typechecking). These extensions don't provide anything useful for
implementing downcasting, so I couldn't implement downcasts without
changing the compiler even if I specified them somehow.

-- 
 __(  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTĘPCZA
QRCZAK





Re: Monomorphism, monomorphism...

2001-10-09 Thread Carl R. Witty

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:

 Since OO languages often use subtypes to emulate constructors of
 algebraic types, they need downcasts. In Haskell it's perhaps less
 needed but it's a pity that it's impossible to translate an OO scheme
 which makes use of downcasts into Haskell in an extensible way
 (algebraic types are closed).

I agree.  The TREX paper from Mark Jones and Benedict Gaster (I hope I
have the names right) had both extensible records and extensible
variants (extensible variants being what you would need to implement
downcasts), but only the extensible records part of the paper was
implemented in Hugs.

Carl Witty




Re: Monomorphism, monomorphism...

2001-10-08 Thread Hannah Schroeter

Hello!

On Sun, Oct 07, 2001 at 11:29:09AM +, Marcin 'Qrczak' Kowalczyk wrote:
 [...]

 Shouldn't the compiler be able to limit the recomputations to
 one per instance of the class?

 It would require very unusual implementation. Currently the compiler
 doesn't need to build dictionaries indexed by types at runtime and
 doesn't even need to recognize which types are the same at runtime.

Now, with the typical dictionary implementation of type classes,
this wouldn't really be too difficult.

foo :: Num a => a
foo = 2*3*5*7

creates a hash map, where the key is the dictionary argument and
the value is the deferred (call-by-need) calculation of 2*3*5*7 for
that particular Num instance, so it could be transformed to something like

createHashMap :: (Hashable key) => value -> IO (HashMap key value)
createHashMap defaultValue = ...

gethash :: HashMap key value -> key -> value

puthash :: HashMap key value -> key -> value -> IO ()

foo_internal :: HashMap ...
foo_internal = unsafePerformIO (createHashMap foo_impl)

foo :: NumDict a -> a
foo dict = (gethash foo_internal dict) dict

-- foo_impl gets called only once per dict, as it replaces its entry
-- in foo_internal by the lazy value of foo for that particular
-- instance and returns that.
foo_impl :: NumDict a -> a
foo_impl dict =
    let
        value = (*) dict (fromInteger dict 2) ...
        hashSlot _ = value
    in unsafePerformIO $ do
        puthash foo_internal dict hashSlot
        return value

Yes, it creates some overhead, but with fast hash maps that shouldn't
be completely infeasible. Other programming languages use hash
lookups for every function call and recover performance through dynamic
code generation techniques or other means.

And we could still specialize cases where foo is applied in
known type contexts (e.g. when the context of one particular usage
definitely suggests Integer, the compiler could even statically
evaluate the whole of foo).

 I guess I'm biased here by my knowledge of templates in C++,
 which can be used to implement something very similar to type
 classes in Haskell.

 C++ templates require having source available in each place a template
 is used.

No. The standard specifies exported templates. It's only that
nearly no implementation supports that part of the ISO standard.

With exported templates, you specify only signatures in the headers,
together with appropriate placement of the keyword export.
The implementation is in .cc files, just like for non-template
methods.

 I agree that it would be useful for Haskell compilers to
 compile some instances that way when optimization is turned on,
 but the traditional way to implement classes uses a single compiled
 version which is passed various dictionaries at runtime.

Isn't there some experimental thing that implements type classes
by complete specialization of all code that uses polymorphic
values?

 It works because Haskell's rules of overloading are defined more
 semantically. C++ templates are closer to syntactic macros.

Yep. C++ templates are a castrated version of Lisp macros :-)
But at least, C++ templates have pattern matching ([partial]
specialization!) and are Turing complete, albeit in a very
strange way of expression.

 The C++ way wouldn't work for Haskell in general because sometimes
 the depth of instances created is not bounded statically,

Can you name such a case in standard H'98?

 unlike C++
 templates which are instantiated at compile time. And it doesn't
 work with extensions like existential quantification, where types
 which differ in instance dictionaries can be packed into values of
 a single type, so the instance to use *must* be chosen dynamically
 (data flow can't be statically determined in general).

Right. A collection of existentially typed values is similar to
abstract class Base ... class Derived1 : Base ... class Derived2 : Base
plus a Collection[Base] holding values of the different derived classes,
as in typical OO languages.
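
A small sketch of that analogy in Haskell, using the existential
quantification extension (the Shape class and all names below are made
up for illustration):

{-# LANGUAGE ExistentialQuantification #-}

class Shape a where
  area :: a -> Double

data Circle = Circle Double
instance Shape Circle where area (Circle r) = pi * r * r

data Square = Square Double
instance Shape Square where area (Square s) = s * s

-- The existential wrapper plays the role of Collection[Base]:
-- each element carries its own Shape dictionary, chosen at runtime.
data AnyShape = forall a. Shape a => AnyShape a

totalArea :: [AnyShape] -> Double
totalArea xs = sum [area s | AnyShape s <- xs]

-- e.g. totalArea [AnyShape (Circle 1), AnyShape (Square 2)]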

 It can be only an optimization (and perhaps it's sometimes more code
 bloat than optimization), it doesn't fit as the primary model IMHO.

Seems so, at least when language extensions come into play.

 [...]

Kind regards,

Hannah.




Re: Monomorphism, monomorphism...

2001-10-08 Thread Marcin 'Qrczak' Kowalczyk

Mon, 8 Oct 2001 11:35:48 +0200, Hannah Schroeter [EMAIL PROTECTED] writes:

 Now, with the typical dictionary implementation of type classes,
 this wouldn't really be too difficult.

Dictionaries would have to be made hashable and comparable. For a sane
semantics you can't compare their identities, because dictionaries
of instances of the same type might be created independently in
separate modules and would be treated as different. So we would either
need to extend the representation of each dictionary to include a
runtime identification of the type, or somehow guarantee to share
the representation of equal dictionaries.

 C++ templates require having source available in each place a template
 is used.
 
 No. The standard specifies exported templates. It's only that
 nearly no implementation supports that part of the ISO standard.

Ok, I was talking about the traditional model of compilation of C++.
That's why nobody fully implemented 'export' yet: it doesn't fit it.

 Isn't there some experimental thing that implements type classes
 by complete specialization of all code that uses polymorphic
 values?

I'm not sure. I argued for a kind of pragma between INLINE and
SPECIALIZE, which would tell the compiler to export the unfolding for
making specializations in modules which need them, but don't bother
with inlining each call. SimonPJ agreed that it's a good idea and is
perhaps working on this.

I was only able to change ghc to mark internal instance functions as
inline. The effect isn't exactly right.

 But at least, C++ templates have pattern matching ([partial]
 specialization!) and are Turing complete, albeit in a very
 strange way of expression.

That's why they have a limited depth of recursion (I think 31 is the
required minimum).

 The C++ way wouldn't work for Haskell in general because sometimes
 the depth of instances created is not bounded statically,
 
 Can you name such a case in standard H'98?

Polymorphic recursion:

f :: Show a => Int -> a -> String
f n x | null (drop n str) = f n (x,x)
      | otherwise = str
  where
    str = show x
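
To make the unboundedness concrete, a small made-up driver for the
definition above: each recursive call needs a Show dictionary for a
larger pair type, and how deep that goes depends on the run-time value
of n, so the required instances cannot be enumerated at compile time.

-- With f as defined above:
-- f 25 'x' needs Show Char, Show (Char, Char),
-- Show ((Char, Char), (Char, Char)), and so on, until show's output
-- is longer than 25 characters; the nesting depth depends on the
-- run-time value of n.
main :: IO ()
main = putStrLn (f 25 'x')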

 Right. A collection of existentially typed values is similar to
 abstract class Base ... class Derived1 : Base ... class Derived2 : Base
 plus a Collection[Base] holding values of the different derived classes,
 as in typical OO languages.

Except that there are no downcasts (neither checked ones returning a
Maybe result nor unchecked unsafe ones).

Since OO languages often use subtypes to emulate constructors of
algebraic types, they need downcasts. In Haskell it's perhaps less
needed but it's a pity that it's impossible to translate an OO scheme
which makes use of downcasts into Haskell in an extensible way
(algebraic types are closed).

-- 
 __(  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTĘPCZA
QRCZAK





Extensible downcasts impossible in Haskell? (was Re: Monomorphism, monomorphism...)

2001-10-08 Thread Tom Pledger

Marcin 'Qrczak' Kowalczyk writes:
 :
 | Since OO languages often use subtypes to emulate constructors of
 | algebraic types, they need downcasts. In Haskell it's perhaps less
 | needed but it's a pity that it's impossible to translate an OO
 | scheme which makes use of downcasts into Haskell in an extensible
 | way (algebraic types are closed).

Hi.

I'm curious about this impossibility.

  - Is it well known?  If so, would someone please refer me to a paper
or posting which explains it?

  - Does it just affect Haskell 98, or does it have deep implications
for any future language extensions?

  - How does it relate to the alternative record mechanism idea you
mentioned a while ago?

http://haskell.org/pipermail/haskell/2000-December/000213.html

Regards,
Tom




Re: Monomorphism, monomorphism...

2001-10-07 Thread Marcin 'Qrczak' Kowalczyk

Sat, 6 Oct 2001 22:22:24 -0700, Juan Carlos Arévalo Baeza [EMAIL PROTECTED] writes:

A pattern which is something other than an identifier.
 
Like defining a function, as opposed to defining a constant?

No: a pattern, e.g. (x,y), Just y, (x:_) etc. A function definition
looks like an identifier applied to patterns, which usually doesn't
look like a pattern. These are two forms of lhs of =.

These syntaxes collide only for identifiers alone. They could be
treated like degenerate patterns (which match anything) or like
functions with no arguments.

- isNil (or any value with a non-empty context in its type) doesn't
 really behave like a constant, but as a function taking a dictionary
 of 'IsNil a' as an implicit argument, so it's recomputed each time
 it is used...
 
Ok... This is an angle I hadn't even approached yet. We're talking
about the internal implementation of the compiler here. Hmmm...

The difference is visible for the programmer not only in speed, but
also by typing rules: pattern bindings are monomorphic, functions
can be polymorphic.

With monomorphic restriction a lhs looking like an identifier is
treated as something between pattern and function: type variables with
class constraints are monomorphic, type variables without constraints
are polymorphic. And it can be promoted to a fully polymorphic version
by giving a type signature.
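
A small made-up example of the three cases (the IsNil class is the one
discussed in this thread; the instance and the uses here are invented):

class IsNil a where
  isNil :: a -> Bool

instance IsNil [b] where
  isNil = null

-- Restricted: a lone identifier on the lhs, no signature, constrained
-- type variable, so g1 is monomorphic...
g1 = isNil

-- ...and this single use fixes g1 :: [Char] -> Bool for all uses.
useG1 :: Bool
useG1 = g1 "abc"

-- A type signature promotes it back to a fully polymorphic binding.
g2 :: IsNil a => a -> Bool
g2 = isNil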

Shouldn't the compiler be able to limit the recomputations to
one per instance of the class?

It would require very unusual implementation. Currently the compiler
doesn't need to build dictionaries indexed by types at runtime and
doesn't even need to recognize which types are the same at runtime.

I guess I'm biased here by my knowledge of templates in C++,
which can be used to implement something very similar to type
classes in Haskell.

C++ templates require having source available in each place a template
is used. I agree that it would be useful for Haskell compilers to
compile some instances that way when optimization is turned on,
but the traditional way to implement classes uses a single compiled
version which is passed various dictionaries at runtime.

It works because Haskell's rules of overloading are defined more
semantically. C++ templates are closer to syntactic macros.

The C++ way wouldn't work for Haskell in general because sometimes
the depth of instances created is not bounded statically, unlike C++
templates which are instantiated at compile time. And it doesn't
work with extensions like existential quantification, where types
which differ in instance dictionaries can be packed into values of
a single type, so the instance to use *must* be chosen dynamically
(data flow can't be statically determined in general).

It can be only an optimization (and perhaps it's sometimes more code
bloat than optimization), it doesn't fit as the primary model IMHO.

 class Num a where
   n :: a
   n = 7*11*13
 
'n' here has that type 'Num a => a', doesn't it? Don't tell me
compilers will compute it twice if we use it twice, as in:
 
 n1 = n
 n2 = n

It will probably recognize that the same instance is needed in both
places. But it will almost certainly recompute when n1 and n2 are
defined in separate modules.

It can't perform the multiplication until the type is known.
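
Roughly, under dictionary passing the definition behaves like the
following made-up sketch (NumDict and the selector names are not GHC's
real representation): n becomes a function of the dictionary, so every
use at a not-yet-known type repeats the multiplications.

data NumDict a = NumDict
  { timesD       :: a -> a -> a
  , fromIntegerD :: Integer -> a
  }

-- n :: Num a => a   becomes a function of the dictionary:
nImpl :: NumDict a -> a
nImpl d = timesD d (timesD d (fromIntegerD d 7) (fromIntegerD d 11))
                   (fromIntegerD d 13)

dictNumInteger :: NumDict Integer
dictNumInteger = NumDict { timesD = (*), fromIntegerD = fromInteger }

-- Each monomorphic use applies nImpl to a concrete dictionary; unless
-- the results are cached somewhere, the product is recomputed per use.
n1, n2 :: Integer
n1 = nImpl dictNumInteger
n2 = nImpl dictNumInteger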

- When a pattern binds several variables, it can happen that their
 types need different sets of class constraints. Using such a variable
 doesn't fully determine the type to perform the computation in.
 It's thus ambiguous and an error.
 
You're talking about the case '(var1, var2) = expr', right? That's
because var1 and var2 cannot have different contexts?

They can, caused by different sets of type variables present in types
of var1 and var2:

(x, y) = let z = read "[10,20]" in (z, show z)

Here x :: Read a => a,
 y :: String.

It makes no sense to give y the type 'Read a => String'. This type
would be ambiguous because 'a' is absent in the main part of the type.

With monomorphism restriction uses of x determine the way y is
computed. Without monomorphism restriction all uses of x and y have
individually instantiated types, so there is no way to compute y at
all: you don't know on which type it should be done, as in
show (read "[10,20]")
which is ambiguous.
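
A small made-up example of how, with the restriction, a use of x fixes
the way y is computed:

-- With the monomorphism restriction the whole pattern binding is
-- monomorphic, so this single use of x at [Int] also fixes the type
-- at which z is read and at which y = show z is computed.
(x, y) = let z = read "[10,20]" in (z, show z)

useX :: [Int]
useX = x

-- Without the restriction, x :: Read a => a and computing y would
-- require show (read "[10,20]") at some unknown type: ambiguous.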

So it is. I just tried, with Hugs:

Hugs implements this incorrectly. AFAIK it doesn't let usages of
global variables contribute to disambiguation of their types.

2. Define a possibly polymorphic function and perform the given
 computation when it is applied (function binding).
 
In case 2 we wouldn't need the restriction, right?

Yes. Functions are defined one at a time, not by matching patterns;
and their recomputation is expected.

It might happen that by choosing this instance now it will be OK when
f is used. The other possibility of the type for f would take away
some freedom: we can't use this instance later with different choices
for types of list elements, because both uses of g must make the same
choices for type variables with class constraints.

Re: Monomorphism, monomorphism...

2001-10-06 Thread Marcin 'Qrczak' Kowalczyk

Fri, 5 Oct 2001 19:02:50 -0700, Juan Carlos Arévalo Baeza [EMAIL PROTECTED] writes:

If a declaration group
 
Meaning something like let g = isNil up there?

Yes, a group of mutually recursive bindings or a single non-recursive
binding (equations inside let or where or at module toplevel).

contains a pattern binding with a nonvariable pattern
 
Meaning... what exactly?

A pattern which is something other than an identifier.

or one [pattern binding] where there is no type signature for
the variable
 
Meaning g = isNil above, without type signature for g?

Yes.

then the context parts of the type schemes derived for the bound
variables must be empty
 
Meaning that in let g = ..., g cannot be
g :: context => type unless the context is explicitly given?

Yes.

Hmmm... This still sounds nonsensical (as in counterintuitive
and artificial) to me. In a definition like let g = isNil
there cannot be any compelling reason to give g any type
different than the type of isNil.

There are two reasons for the monomorphism restriction:

- isNil (or any value with a non-empty context in its type) doesn't
  really behave like a constant, but as a function taking a dictionary
  of 'IsNil a' as an implicit argument, so it's recomputed each time
  it is used...
  
  This point is silly in this case, because it is already a function
  with an explicit argument! It makes more sense however when the type
  is not of the form a -> b. For example in 'n = 7*11*13' letting n have
  the type 'Num a => a' implies that it is recomputed each time it
  is used. Unless the compiler manages to optimize this, but it can't
  in all cases (n can be used with different types in various places).

- When a pattern binds several variables, it can happen that their
  types need different sets of class constraints. Using such a variable
  doesn't fully determine the type to perform the computation in.
  It's thus ambiguous and an error.
  
  It's not ambiguous with monomorphic restriction, where all uses of
  all variables contribute to finding the single type to perform the
  computation in.

  Even with a single variable it can happen that some uses will
  constrain the type enough to determine the instance, and some will
  not. Without monomorphic restrictions some uses are OK, but others
  will stop the compilation. With monomorphic restriction all uses
  contribute to a single type and typing might succeed.

The general trouble with monomorphic restriction is that let bindings
can mean two similar things:

1. Create a single lazily evaluated object with the given definition
   and make it or its components available through variables (pattern
   binding).

2. Define a possibly polymorphic function and perform the given
   computation when it is applied (function binding).

Degenerate forms of these cases look the same in Haskell: a single
identifier is on the left. It's not clear which one the programmer
meant. Often it doesn't matter. It really matters only when classes
are involved: either to determine where to apply implicit arguments
of class dictionaries or to disambiguate instances by making some
types more concrete.

Clean uses three forms of the equals sign: => defines a function, := binds
a single lazily evaluated object, and = means => on the toplevel and
:= locally (if I remember the syntax). ML doesn't have this problem
because it doesn't have classes.

but two legal possibilities are
- forall b . [b] -> Bool, and
 
Choosing an explicit instance of IsNil. But this sounds
nonsensical to me, too. No instance should be chosen unless the
specific instance type is forced by the definition. Otherwise,
if there are two instances, which one would it choose?

Type inference defined in the traditional way is non-deterministic
in nature.

A good property of a type system is that it should not matter when
various choices are made: as long as we succeed in deriving a type
of the whole program, the overall meaning should be unambiguous.
This property has a name - sorry, I forgot which.

When we choose an instance too early, we might not succeed at all (if
another instance happens to be needed). Unfortunately in this case we
may fail in either case: no choice is sure to be better than the other.

It might happen that by choosing this instance now it will be OK when
f is used. The other possibility of the type for f would take away
some freedom: we can't use this instance later with different choices
for types of list elements, because both uses of g must make the same
choices for type variables with class constraints. It's not visible yet
(before choosing the instance) that the single type 'IsNil a = a'
will expand into something containing quantified type variables without
class constraints which can be instantiated independently!

Of course it may also happen that completely different instances will
be needed when f is used, so choosing the instance now is bad. That's
the point: there is no 

Re: Monomorphism, monomorphism...

2001-10-06 Thread Karl-Filip Faxen

Hi!

Marcin wrote:
 Juan Carlos Arévalo Baeza wrote:
 Karl-Filip wrote:
- a -> Bool (without quantification and with IsNil a among the
  predicates).
 
This is something I didn't understand either. Which predicates?

I think IsNil a goes into the context of the whole expression
containing the let g = ... (I'm not sure if Karl meant that).

Exactly what I meant! Then it goes into the context of f, because f
is defined using function syntax (formal parameters to the left of the =),
so everybody knows it's a function, and it doesn't hurt if it gets some
extra arguments after dictionary insertion.

/kff






Re: Monomorphism, monomorphism...

2001-10-06 Thread Juan Carlos Arévalo Baeza

On 6 Oct 2001 09:31:54 GMT, Marcin 'Qrczak' Kowalczyk wrote:

   First of all, thanks a lot for the detailed response.

Yes, ... yes ... yes

   Well, I'm glad I got some of it right. Gives me hope :)

contains a pattern binding with a nonvariable pattern

  Meaning... what exactly?

A pattern which is something other than an identifier.

   Like defining a function, as opposed to defining a constant?

  Hmmm... This still sounds nonsensical (as in counterintuitive
  and artificial) to me. In a definition like let g = isNil
  there cannot be any compelling reason to give g any type
  different than the type of isNil.

There are two reasons for the monomorphism restriction:

- isNil (or any value with a non-empty context in its type) doesn't
 really behave like a constant, but as a function taking a dictionary
 of 'IsNil a' as an implicit argument, so it's recomputed each time
 it is used...

   Ok... This is an angle I hadn't even approached yet. We're talking about the 
internal implementation of the compiler here. Hmmm...

   Shouldn't the compiler be able to limit the recomputations to one per instance of 
the class? That sounds perfectly appropriate to me, and it would solve this particular 
problem. Unless I'm missing something here...

 This point is silly in this case, because it is already a function
 with an explicit argument! It makes more sense however when the type
 is not of the form a -> b. For example in 'n = 7*11*13' letting n have
 the type 'Num a => a' implies that it is recomputed each time it
 is used.

   Again, what about recomputing it once per instance of Num?

 Unless the compiler manages to optimize this, but it can't
 in all cases (n can be used with different types in various places).

   I guess I'm biased here by my knowledge of templates in C++, which can be used to 
implement something very similar to type classes in Haskell. This would be akin to 
different instantiations of a single template, which should not present any problems 
for the compiler.

   So, 'n = 7*11*13' would have type 'Num a => a', which would mean that it has a 
different value for each instantiation of the context 'Num a'. So, where's the 
problem? It's not as if Haskell didn't already have this functionality in the members 
of a type class:

class Num a where
  n :: a
  n = 7*11*13

   'n' here has that type 'Num a => a', doesn't it? Don't tell me compilers will 
compute it twice if we use it twice, as in:

n1 = n
n2 = n

- When a pattern binds several variables, it can happen that their
 types need different sets of class constraints. Using such a variable
 doesn't fully determine the type to perform the computation in.
 It's thus ambiguous and an error.

   You're talking about the case '(var1, var2) = expr', right? That's because var1 and 
var2 cannot have different contexts? That still sounds unnecessary to me. I mean, the 
tuple result should have its own context, necessary to resolve the tuple itself, and 
each of its elements should acquire its own context as appropriate. Then all contexts 
should be unambiguous. In this case, 'expr' should be able to have type:

expr :: context0 => (context1 => TypeOfVar1, context2 => TypeOfVar2)

   I'm guessing from what I learn that this is not possible in Haskell, right?

 It's not ambiguous with monomorphic restriction, where all uses of
 all variables contribute to finding the single type to perform the
 computation in.

   Exactly. I think I'm starting to get a reasonable handle on this.

 Even with a single variable it can happen that some uses will
 constrain the type enough to determine the instance, and some will
 not. Without monomorphic restrictions some uses are OK, but others
 will stop the compilation. With monomorphic restriction all uses
 contribute to a single type and typing might succeed.

   So it is. I just tried, with Hugs:

---
intToBool :: Int -> Bool
intToBool a = a == 0

numToBool :: Num a => a -> Bool
numToBool a = a == 0

h :: Num a => a
h = 4

g = h

f = (intToBool g) && (numToBool g)
---

   It complained with:

---
ERROR E:\JCAB\Haskell\TestMonomorphic.hs:14 - Type error in application
*** Expression : intToBool g
*** Term   : g
*** Type   : Integer
*** Does not match : Int
---

   You're saying that this is because, when seeing the line 'g = h', the compiler 
immediately and arbitrarily picks a type for 'g', right? In this case, it's 'Integer'. 
Arbitrarily but not randomly, right? What rules does it follow?

The general trouble with monomorphic restriction is that let bindings
can mean two similar things:

1. Create a single lazily evaluated object with the given definition
 and make it or its components available through variables (pattern
 binding).

2. Define a possibly polymorphic function and perform the given
 computation when it is applied (function binding).

   In case 2 we wouldn't need the restriction, right?

   I guess the only benefit from all this is efficiency, then. I think I see it. Back 
to the