Re: [Haskell-cafe] decoupling type classes

2012-01-17 Thread Dominique Devriese
2012/1/16 Yin Wang yinwa...@gmail.com:
 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

 It can handle this case, although it doesn't handle it as a parametric
 instance. I suspect that we don't need the concept of parametric
 instances at all. We just search for instances recursively at the
 call site:

 That seems like it could work, but typically, one would like
 termination guarantees for this search, to avoid the type-checker
 getting stuck...

 Good point. Currently I'm guessing that we need to keep a stack of the
 traced calls. If a recursive call needs an implicit parameter X which
 is matched by one of the functions in the stack, we back up from the
 stack and resolve X to the function found on the stack.

You may want to look at Scala's approach for their implicit arguments.
They use a certain criterion to conservatively detect infinite loops
during the instance search, but I don't remember the details offhand.
While talking about related work, you may also want to take a look at
Scala's implicit arguments, GHC's implicit parameters and C++
concepts...


 foo x =
   let overload bar (x:Int) = x + 1
   in \() -> bar x


 baz = foo (1::Int)

 Even if we have only one definition of bar in the program, we should
 not resolve it to the definition of bar inside foo, because that
 bar is not visible at the call site foo (1::Int). We should report
 an error in this case. Thinking of bar as a typed dynamically scoped
 variable helps to justify this decision.

 So you're saying that any function that calls an overloaded function
 should always allow its own callers to provide this, even if a correct
 instance is in scope. Would that mean all instances have to be
 resolved from main? This also strikes me as strange, since I gather
 you would get something like length :: Monoid Int => [a] -> Int,
 which would break if you happen to have a multiplicative monoid in
 scope at the call site?

 If you already have a correct instance in scope, then you should have
 no way of defining another instance with the same name and type in the
 same scope as the existing one. This is the case for Haskell.

Yes, but different ones may be in scope at different places in the code, right?

 But it may be useful to allow nested definitions (using let) to shadow
 the existing instances in the outer scope of the overloaded call.

I considered something like this for instance arguments in Agda, but
it was hard to make the instance resolution deterministic when
allowing such a form of prioritisation. The problem occurred if a
shadower and shadowee instance had slightly different types, such that
only the shadowee was actually type-valid for a certain instance
argument. However, the type information which caused the shadower to
become invalid only became available late in the type inference
process. In such a case, it is necessary to somehow ascertain that the
shadower instance is not chosen, but I did not manage to figure out
how to get this right.

Dominique



Re: [Haskell-cafe] decoupling type classes

2012-01-16 Thread Dominique Devriese
Yin,

2012/1/14 Yin Wang yinwa...@gmail.com:
 On Sat, Jan 14, 2012 at 2:38 PM, Dominique Devriese
 dominique.devri...@cs.kuleuven.be wrote:
 I may or may not have thought about it. Maybe you can give an example
 of parametric instances where there could be problems, so that I can
 figure out whether my system works on the example or not.

 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

 It can handle this case, although it doesn't handle it as a parametric
 instance. I suspect that we don't need the concept of parametric
 instances at all. We just search for instances recursively at the
 call site:

That seems like it could work, but typically, one would like
termination guarantees for this search, to avoid the type-checker
getting stuck...

 foo x =
   let overload bar (x:Int) = x + 1
   in \() -> bar x


 baz = foo (1::Int)

 Even if we have only one definition of bar in the program, we should
 not resolve it to the definition of bar inside foo, because that
 bar is not visible at the call site foo (1::Int). We should report
 an error in this case. Thinking of bar as a typed dynamically scoped
 variable helps to justify this decision.

So you're saying that any function that calls an overloaded function
should always allow its own callers to provide this, even if a correct
instance is in scope. Would that mean all instances have to be
resolved from main? This also strikes me as strange, since I gather
you would get something like length :: Monoid Int => [a] -> Int,
which would break if you happen to have a multiplicative monoid in
scope at the call site?
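
To make that concrete, here is a sketch of the worry in today's
Haskell (lengthVia and its helpers are invented for illustration):

  import Data.Monoid (Product (..), Sum (..), (<>))

  -- A "length" that counts via whichever monoid the caller picks; with
  -- call-site resolution, this choice would be made implicitly.
  lengthVia :: Monoid m => (Int -> m) -> [a] -> m
  lengthVia one = foldr (\_ acc -> one 1 <> acc) mempty

  additive, multiplicative :: Int
  additive       = getSum     (lengthVia Sum     "abc")  -- 3, the usual length
  multiplicative = getProduct (lengthVia Product "abc")  -- 1, not a length at all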

Dominique



Re: [Haskell-cafe] decoupling type classes

2012-01-16 Thread Yin Wang
 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

 It can handle this case, although it doesn't handle it as a parametric
 instance. I suspect that we don't need the concept of parametric
 instances at all. We just search for instances recursively at the
 call site:

 That seems like it could work, but typically, one would like
 termination guarantees for this search, to avoid the type-checker
 getting stuck...

Good point. Currently I'm guessing that we need to keep a stack of the
traced calls. If a recursive call needs an implicit parameter X which
is matched by one of the functions in the stack, we back up from the
stack and resolve X to the function found on the stack.
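
Roughly, in Haskell the idea might look like this (all types and names
invented for illustration; monomorphic types stand in for real
unification):

  import Data.List (find)

  data Ty = TInt | TBool | TList Ty | TFun Ty Ty  deriving (Eq, Show)

  -- An overloaded value: its name, its type, and the implicit
  -- parameters its own definition needs in turn.
  data Inst = Inst { iName :: String, iTy :: Ty, iNeeds :: [(String, Ty)] }

  -- Resolve one implicit parameter, keeping a stack of the instances
  -- currently being traced; a goal that matches something on the stack
  -- is answered from the stack instead of being searched again, which
  -- is what keeps the search finite.
  resolve :: [Inst] -> [Inst] -> (String, Ty) -> Maybe Inst
  resolve scope stack (n, t)
    | Just i <- match stack = Just i
    | otherwise = do
        i <- match scope
        mapM_ (resolve scope (i : stack)) (iNeeds i)
        Just i
    where match = find (\i -> iName i == n && iTy i == t)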


 foo x =
   let overload bar (x:Int) = x + 1
   in \() -> bar x


 baz = foo (1::Int)

 Even if we have only one definition of bar in the program, we should
 not resolve it to the definition of bar inside foo, because that
 bar is not visible at the call site foo (1::Int). We should report
 an error in this case. Thinking of bar as a typed dynamically scoped
 variable helps to justify this decision.

 So you're saying that any function that calls an overloaded function
 should always allow its own callers to provide this, even if a correct
 instance is in scope. Would that mean all instances have to be
 resolved from main? This also strikes me as strange, since I gather
 you would get something like length :: Monoid Int => [a] -> Int,
 which would break if you happen to have a multiplicative monoid in
 scope at the call site?

If you already have a correct instance in scope, then you should have
no way of defining another instance with the same name and type in the
same scope as the existing one. This is the case for Haskell.

But it may be useful to allow nested definitions (using let) to shadow
the existing instances in the outer scope of the overloaded call.



Re: [Haskell-cafe] decoupling type classes

2012-01-14 Thread Yin Wang
 Also, you don't seem to have thought about the question of parametric
 instances: do you allow them or not; if you do, what computational
 power do they get, etc.?

I may or may not have thought about it. Maybe you can give an example
of parametric instances where there could be problems, so that I can
figure out whether my system works on the example or not.


 I'm surprised that you propose passing all type class methods
 separately. It seems to me that for many type classes, you want to
 impose a certain correspondence between the types of the different
 methods in a type class (for example, for the Monad class, you would
 expect return to be of type (a -> m a) if (>>=) is of type (m a -> (a
 -> m b) -> m b)). I would expect that inferring these relations in
 each function that uses either of the methods will lead to overly
 general inferred types and the need for more guidance to the type
 inferencer?

I thought they should be of type (a -> m a) and (m a -> (a -> m b) ->
m b), but I just found that they seem to work as well if they were
of type (c -> m c) and (m a -> (a -> m b) -> m b).

It doesn't seem to really hurt. Either they will have actual types
when they are called (thus catching type errors), or, if they stay
polymorphic, c will be unified with a when they bind. Also, return and
(>>=) will be dispatched to the correct instances just as before.
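
A small illustration of this unification in plain Haskell (retGen is
an invented stand-in for a decoupled return at the more general type):

  -- retGen plays the role of "return" given the more general type c -> m c.
  retGen :: Monad m => c -> m c
  retGen = return

  -- At a use site, (>>=) forces c to unify with a, so nothing is lost:
  useSite :: Monad m => m a -> m a
  useSite m = m >>= \x -> retGen x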


 By separating the methods, you would also lose the laws that associate
 methods in a type class, right?

 An alternative to what you suggest is the approach I recommend for
 using instance arguments: wrapping all the methods in a standard data
 type (i.e. define the dictionary explicitly), and pass this around as
 an implicit argument.

I went quickly through your paper and manual, and I like the explicit
way. The examples show that records seem to be a good way to group
the overloaded functions, so I have the impression that grouping and
overloading are orthogonal features. But in your paper I haven't
seen any overloaded functions outside of records, so I guess they are
somehow tied together in your implementation, which is not necessary.

Maybe we can let the user choose whether to group or not. If they want to
group and force further constraints among the overloaded functions,
they can use overloaded records and access the functions through the
records; otherwise, they can define overloaded functions separately
and just use them directly. This way also makes the implementation
more modular.


 For this example, one might also argue that the problem is in fact
 that the Num type class is too narrow, and + should instead be defined
 in a parent type class (Monoid comes to mind) together with 0 (which
 also makes sense for strings, by the way)?

I guess a richer hierarchy solves only some problems like this one,
but in general this way complicates things, because the overloaded
functions are not related in essence.


 There is another benefit of this decoupling: it can subsume the
 functionality of MPTC. Because the methods are no longer grouped,
 there is no “common” type parameter to the methods. Thus we can easily
 have more than one parameter in the individual methods and
 conveniently use them as MPTC methods.

 Could you explain this a bit further?

In my system, there are no explicit declarations containing type
variables. The declaration overload g is all that is needed.

For example,

overload g
 ... ...
f x (Int y) = g x y


then f has the inferred type:

'a -> Int -> {{ g :: 'a -> Int -> 'b }} -> 'b

(I borrowed your notation here.)

Here it automatically infers the type for g ('a -> Int -> 'b) just
from its _usage_ inside f, as if there were a type class definition
like:

class G a where
  g :: a -> Int -> b

So not only do we not need to define type classes, we don't even need
to declare the principal types of the overloaded functions. We can
infer them from their usage, and they don't even need to have the same
principal type! All it takes is:

overload g

And even this is not really necessary. It is for sanity purposes - to
avoid inadvertent overloading.

So if g is used as:

f x y (Int z) = g x z y

then f has type 'a -> 'b -> Int -> {{ g :: 'a -> Int -> 'b -> 'c }} -> 'c

Then g will be equivalent to a method you would have defined in an MPTC.
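
For comparison, in today's GHC the inferred type above corresponds
roughly to a one-method multi-parameter class (a sketch; the class G
below is invented):

  {-# LANGUAGE MultiParamTypeClasses #-}

  -- One-method MPTC playing the role of the implicit parameter g.
  class G a b c where
    g :: a -> Int -> b -> c

  -- Mirrors the inferred 'a -> 'b -> Int -> {{ g :: 'a -> Int -> 'b -> 'c }} -> 'c.
  f :: G a b c => a -> b -> Int -> c
  f x y z = g x z y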


 I would definitely argue against treating undefined variables as
 overloaded automatically. It seems this will lead to strange errors if
 you write typo's for example.

I agree, thus I will keep the overload keyword and check that the
unbound variables have been declared as overloaded before generating
the implicit argument.


 But the automatic overloading of the undefined may be useful in
 certain situations. For example, if we are going to use Haskell as a
 shell language. Every “command” must be evaluated when we type them.
 If we have mutually recursive definitions, the shell will report
 “undefined variables” either way we order the functions. The automatic
 overloading may solve this problem. The undefined variables will
 temporarily exist as automatic overloaded functions.

Re: [Haskell-cafe] decoupling type classes

2012-01-14 Thread Yin Wang
On Sat, Jan 14, 2012 at 2:38 PM, Dominique Devriese
dominique.devri...@cs.kuleuven.be wrote:
 I may or may not have thought about it. Maybe you can give an example
 of parametric instances where there could be problems, so that I can
 figure out whether my system works on the example or not.

 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

It can handle this case, although it doesn't handle it as a parametric
instance. I suspect that we don't need the concept of parametric
instances at all. We just search for instances recursively at the
call site:

1. If g has an implicit parameter f, search for values that
match the name and instantiated type in the current scope.

2. If a value is found, use it as the argument.

3. Check if the value is a function with implicit parameters; if so,
search for values that match the name and type of the implicit
parameters.

4. Do this recursively until no more arguments contain implicit parameters.
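
Concretely, a self-contained sketch of steps 1-4 (with invented,
monomorphic types), resolving an implicit eq at [Int] and then,
recursively, at Int:

  import Data.List (find)

  data Ty = TInt | TBool | TList Ty | TFun Ty Ty  deriving (Eq, Show)

  -- A value in scope: its name, type, and its own implicit parameters.
  data Value = Value { vName :: String, vTy :: Ty, vNeeds :: [(String, Ty)] }

  search :: [Value] -> (String, Ty) -> Maybe Value
  search scope (n, t) = do
    v <- find (\v' -> vName v' == n && vTy v' == t) scope  -- steps 1 and 2
    mapM_ (search scope) (vNeeds v)                        -- steps 3 and 4
    Just v

  eqIntTy, eqListTy :: Ty
  eqIntTy  = TFun TInt (TFun TInt TBool)
  eqListTy = TFun (TList TInt) (TFun (TList TInt) TBool)

  demo :: Maybe Value   -- finds the list eq, then its Int eq in turn
  demo = search [ Value "eq" eqIntTy []
                , Value "eq" eqListTy [("eq", eqIntTy)] ]
                ("eq", eqListTy)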


 This coupling you talk about is not actually there for instance
 arguments. Instance arguments are perfectly usable without records.
 There is some special support for automatically constructing record
 projections with instance arguments though.

Cool. So it seems to be close to what I had in mind.


 I am not sure about the exact workings of your system, but I want to
 point out that alternative choices can be made about the workings of
 inferencing and resolving type-class instances such that local
 instances can be allowed. For example, in Agda, we do not infer
 instance arguments and we give an error in case of ambiguity, but
 because of this, we can allow local instances...

Certainly it should report an error when there are ambiguities, but
sometimes it should report an error even when there is only one value
that matches the name and type. For example,

foo x =
  let overload bar (x:Int) = x + 1
  in \() -> bar x


baz = foo (1::Int)

Even if we have only one definition of bar in the program, we should
not resolve it to the definition of bar inside foo, because that
bar is not visible at the call site foo (1::Int). We should report
an error in this case. Thinking of bar as a typed dynamically scoped
variable helps to justify this decision.



Re: [Haskell-cafe] decoupling type classes

2012-01-12 Thread Dominique Devriese
Yin,

2012/1/12 Yin Wang yinwa...@gmail.com:
 I have an idea about type classes that I have been experimenting with. It
 appears to be a generalization of Haskell’s type classes and seems to
 be doable. It seems to relate three ideas: type classes, implicit
 parameters, and (typed) dynamic scoping. But I don't know whether it
 is good or not. I hope to get some opinions before going further.

I find your ideas interesting. You may be interested in a related
design which I recently implemented for Agda [2], and an ICFP 2011
paper that presents it [1].

Also, you don't seem to have thought about the question of parametric
instances: do you allow them or not; if you do, what computational
power do they get, etc.?

 I have an experimental system which “decouples” the dictionary.
 Instead of passing a dictionary, it passes individual “implicit
 parameters” around. Those implicit parameters are type-inferred and
 they can contain type parameters just as methods in a type class.
 Similarly, they are resolved by their types in the call site's scope.

I'm surprised that you propose passing all type class methods
separately. It seems to me that for many type classes, you want to
impose a certain correspondence between the types of the different
methods in a type class (for example, for the Monad class, you would
expect return to be of type (a -> m a) if (>>=) is of type (m a -> (a
-> m b) -> m b)). I would expect that inferring these relations in
each function that uses either of the methods will lead to overly
general inferred types and the need for more guidance to the type
inferencer?
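
To make the worry concrete, here is roughly what happens if the
decoupled methods are encoded as separate single-method classes in
today's GHC (class and method names invented):

  {-# LANGUAGE MultiParamTypeClasses #-}

  class Return n  where ret  :: a -> n a
  class Bind m n  where bind :: m a -> (a -> n b) -> n b

  -- Nothing ties m and n together, so GHC infers the overly general
  --   addM :: (Bind m n, Return n, Num a) => m a -> n a
  addM m = m `bind` \x -> ret (x + 1)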

By separating the methods, you would also lose the laws that associate
methods in a type class, right?

An alternative to what you suggest is the approach I recommend for
using instance arguments: wrapping all the methods in a standard data
type (i.e. define the dictionary explicitly), and pass this around as
an implicit argument.
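
For example, an explicit Monad dictionary along those lines might look
as follows (a sketch; the field names are made up):

  {-# LANGUAGE RankNTypes #-}

  -- The dictionary, defined explicitly and passed by hand.
  data MonadDict m = MonadDict
    { dReturn :: forall a. a -> m a
    , dBind   :: forall a b. m a -> (a -> m b) -> m b
    }

  listDict :: MonadDict []
  listDict = MonadDict { dReturn = \x -> [x], dBind = flip concatMap }

  -- A function taking the dictionary as an (explicit) argument.
  twice :: MonadDict m -> m a -> (a -> m a) -> m a
  twice d m k = dBind d (dBind d m k) k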

 The convenience of this approach compared to Haskell’s type classes is
 that we no longer require a user of a type class to define ALL the
 methods in a type class. For example, a user could just define a
 method + without defining other methods in the Num class: -, *, … He
 can use the method + independently. For example, if + is defined on
 the String type to be concatenation, we can use + in another function:

 weirdConcat x y = x + y + y

 This has a utility, because the methods in the Num class don’t “make
 sense” for Strings except +, but the current type class design
 requires us to define them. Note here that weirdConcat will not have
 the type (Num a) => a -> a -> a, since we no longer have the Num
 class, it is decoupled into separate methods.

For this example, one might also argue that the problem is in fact
that the Num type class is too narrow, and + should instead be defined
in a parent type class (Monoid comes to mind) together with 0 (which
also makes sense for strings, by the way)?
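
Along those lines, a Monoid-based weirdConcat already works for
strings today:

  import Data.Monoid ((<>))

  weirdConcat :: Monoid a => a -> a -> a
  weirdConcat x y = x <> y <> y
  -- weirdConcat "ab" "cd" == "abcdcd"; mempty plays the role of 0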

 There is another benefit of this decoupling: it can subsume the
 functionality of MPTC. Because the methods are no longer grouped,
 there is no “common” type parameter to the methods. Thus we can easily
 have more than one parameter in the individual methods and
 conveniently use them as MPTC methods.

Could you explain this a bit further?

 Here g is explicitly declared as “overloaded”, although my
 experimental system doesn’t need this. Any undefined variable inside
 function body automatically becomes overloaded. This may cause
 unintended overloading and it catches bugs late. That’s why we need
 the “overload” declarations.

I would definitely argue against treating undefined variables as
overloaded automatically. It seems this will lead to strange errors if
you write typos, for example.

 But the automatic overloading of the undefined may be useful in
 certain situations. For example, if we are going to use Haskell as a
 shell language. Every “command” must be evaluated when we type them.
 If we have mutually recursive definitions, the shell will report
 “undefined variables” either way we order the functions. The automatic
 overloading may solve this problem. The undefined variables will
 temporarily exist as automatic overloaded functions. Once we actually
 define a function with the same name AND satisfying the type
 constraints, they become implicit parameters to the function we
 defined before. If we call a function whose implicit parameters are
 not associated, the shell reports an error very similar to Haskell’s
 “type a is not of class Num …”

The design you suggest seems to differ from Haskell's current
treatment, where functions can refer to other functions defined
further in the file, but still have them resolved statically?

 RELATIONSHIP TO DYNAMIC SCOPING

 It seems to be helpful to think of the “method calls” as referencing
 dynamically scoped variables. They are dispatched depending on the
 bindings we have in the call site's scope (and not the scope where the
 method is defined!). So it is very much similar to the much-hated
 dynamic scoping.

[Haskell-cafe] decoupling type classes

2012-01-11 Thread Yin Wang
Hi all,

I have an idea about type classes that I have been experimenting with. It
appears to be a generalization of Haskell’s type classes and seems to
be doable. It seems to relate three ideas: type classes, implicit
parameters, and (typed) dynamic scoping. But I don't know whether it
is good or not. I hope to get some opinions before going further.

Basically, Haskell’s type classes pass dictionaries around. Each
dictionary contains one or more “methods”. When a “name” which belongs
to a dictionary is called, we invoke the function that matches its
principal type in the call site's scope.

I have an experimental system which “decouples” the dictionary.
Instead of passing a dictionary, it passes individual “implicit
parameters” around. Those implicit parameters are type-inferred and
they can contain type parameters just as methods in a type class.
Similarly, they are resolved by their types in the call site's scope.

The convenience of this approach compared to Haskell’s type classes is
that we no longer require a user of a type class to define ALL the
methods in a type class. For example, a user could just define a
method + without defining other methods in the Num class: -, *, … He
can use the method + independently. For example, if + is defined on
the String type to be concatenation, we can use + in another function:

weirdConcat x y = x + y + y

This has a utility, because the methods in the Num class don’t “make
sense” for Strings except +, but the current type class design
requires us to define them. Note here that weirdConcat will not have
the type (Num a) => a -> a -> a, since we no longer have the Num
class, it is decoupled into separate methods.
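
In current Haskell, this decoupling can be approximated with a
single-method class per name (a sketch; (+.) is invented to avoid
clashing with Prelude's +):

  {-# LANGUAGE FlexibleInstances #-}

  class Plus a where
    (+.) :: a -> a -> a

  instance Plus [Char] where   -- + on String is concatenation
    (+.) = (++)

  weirdConcat :: Plus a => a -> a -> a
  weirdConcat x y = x +. (y +. y)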

There is another benefit of this decoupling: it can subsume the
functionality of MPTC. Because the methods are no longer grouped,
there is no “common” type parameter to the methods. Thus we can easily
have more than one parameter in the individual methods and
conveniently use them as MPTC methods.



SOME IMPLEMENTATION DETAILS

Here is how it can be implemented. When we see an “undefined” variable
in a function definition that has been declared as an “overloaded
function”, we store the function name and the type variables
associated with it. For example,

overload g — (explicitly declare g as an overloaded function)

f x y (String s) = …
…
let z = g x s y in
…
…

We don’t know what x and y are, but we know from the body of f that
their types satisfy this pattern: g ’a String ’b. Thus we store this
pattern constraint as an extra (implicit) argument in the type of f:

f :: a → b → String → … (exist g: g a String b)

We may have multiple such arguments.

At the call sites of f, we look for a function g in the scope that
satisfies the pattern g ’a String ’b, but we don’t pass on the
substitution, so they remain polymorphic. Once found, the function is
passed as an extra parameter to f. This is essentially dictionary
passing, but without grouping. It can also be more efficient because
the parameters may be stored in registers.
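
In other words, the elaborated f might look like this (a sketch; the
body and result type are invented, since the original is elided):

  -- g becomes a leading explicit argument: dictionary passing, ungrouped.
  fElab :: (a -> String -> b -> c) -> a -> b -> String -> c
  fElab g x y s = g x s y

  -- A call site passes whatever g in scope matches the pattern:
  callSite :: Bool
  callSite = fElab (\n s b -> b && length s == n) (2 :: Int) True "hi"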

Here g is explicitly declared as “overloaded”, although my
experimental system doesn’t need this. Any undefined variable inside
a function body automatically becomes overloaded. This may cause
unintended overloading and it catches bugs late. That’s why we need
the “overload” declarations.

But the automatic overloading of the undefined may be useful in
certain situations. For example, if we are going to use Haskell as a
shell language. Every “command” must be evaluated when we type them.
If we have mutually recursive definitions, the shell will report
“undefined variables” either way we order the functions. The automatic
overloading may solve this problem. The undefined variables will
temporarily exist as automatic overloaded functions. Once we actually
define a function with the same name AND satisfying the type
constraints, they become implicit parameters to the function we
defined before. If we call a function whose implicit parameters are
not associated, the shell reports an error very similar to Haskell’s
“type a is not of class Num …”


RELATIONSHIP TO DYNAMIC SCOPING

It seems to be helpful to think of the “method calls” as referencing
dynamically scoped variables. They are dispatched depending on the
bindings we have in the call site's scope (and not the scope where the
method is defined!). So it is very much similar to the much-hated
dynamic scoping. But the dispatching is more disciplined — it doesn't
just match the name. It must match both the name and the inferred
principal type.
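
GHC's implicit parameters give a taste of this call-site resolution,
though they match by name only, not by name and inferred type (a
sketch):

  {-# LANGUAGE ImplicitParams #-}

  -- ?plus is resolved from the caller's scope, like a typed dynamic variable.
  weirdConcat :: (?plus :: a -> a -> a) => a -> a -> a
  weirdConcat x y = ?plus x (?plus y y)

  test :: String
  test = let ?plus = (++) in weirdConcat "ab" "cd"  -- "abcdcd"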

This intuition also explains why local instances shouldn't be allowed,
because if we capture the variables at the definition site, the method
call becomes statically scoped.



-- yin



Re: [Haskell-cafe] Decoupling type classes (e.g. Applicative)?

2010-11-02 Thread Brandon S Allbery KF8NH

On 10/29/10 09:35, Dominique Devriese wrote:
 * Only introduce a dependency from type class A to type class B if all
   functions in type class B can be implemented in terms of the
   functions in type class A or if type class A is empty.

Er?  Eq a => Ord a makes perfect sense in context but violates this law.

--
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


[Haskell-cafe] Decoupling type classes (e.g. Applicative)?

2010-10-29 Thread Dominique Devriese
Hi all,

I have a problem with the design of the Applicative type class, and
I'm interested to know people's opinion about this.

Currently, the Functor and Applicative type class are defined like this:

  class Functor f where
    fmap :: (a -> b) -> f a -> f b

  class Functor f => Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

My problem is that in the grammar-combinators library [1], the pure
combinator is too general for me. I would propose a hierarchy like
the following:

  class Pointed f where
    pure :: a -> f a

  class ApplicativeC f where
    (<*>) :: f (a -> b) -> f a -> f b

The original type class Applicative can then be recovered as follows,
and the applicative laws can be specified informally in this class's
definition.

  class (Pointed f, ApplicativeC f, Functor f) => Applicative f where

This would allow me to restrict injected values to stuff I can lift
into Template Haskell later on:

  class LiftablyPointed f where
    pureL :: (Lift a) => a -> f a
    pureL' :: a -> Q Exp -> f a

  class (LiftablyPointed f, ApplicativeC f) => LiftablyApplicative f where

This problem currently makes it impossible for me to use the (<*>)
combinator and I have to redefine it under a different name (I
currently use ()). To me the problem seems similar to the well-known
example of the inclusion of the fail primitive in the Monad
class, where the general opinion seems to be that it was a bad idea to
include fail in the Monad class (see
e.g. the article on the Haskell wiki about handling failure [2]).

I've been thinking about the following type class design principles:

* Only include two functions in the same type class if both can be
  implemented in terms of each other.

* Only introduce a dependency from type class A to type class B if all
  functions in type class B can be implemented in terms of the
  functions in type class A or if type class A is empty.

(Disclaimer: I currently do not follow these principles myself ;))

I would like to know people's opinions about this. Are there any
issues with this advice that I don't see? Have other people
encountered similar problems? Any interesting references?

Thanks,
Dominique

Footnotes:
[1]  http://projects.haskell.org/grammar-combinators/
[2]  http://www.haskell.org/haskellwiki/Failure


Re: [Haskell-cafe] Decoupling type classes (e.g. Applicative)?

2010-10-29 Thread Ozgur Akgun
On 29 October 2010 14:35, Dominique Devriese 
dominique.devri...@cs.kuleuven.be wrote:

 I have a problem with the design of the Applicative type class


Sorry for going a bit off-topic, but every time I see someone complaining
about such things, I remember this proposal:
http://repetae.net/recent/out/classalias.html

Just wanted to say, wouldn't it be nice? :)

Best,
Ozgur