Re: Is it time to start deprecating FunDeps?

2013-05-01 Thread Martin Sulzmann
(1) There's a mechanical way to translate FD programs with
non-overlapping instances to TF (and of course vice versa). For
example, let's reconsider Oleg's example.

 {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE FlexibleContexts #-}

 class Sum x y z | x y -> z, x z -> y
 instance Sum Int Float Double

 class (SumF1 x y ~ z, SumF2 x z ~ y) => SumF x y z
 type family SumF1 x y
 type family SumF2 x z
 instance SumF Int Float Double
 type instance SumF1 Int Float = Double
 type instance SumF2 Int Double = Float

As we know, GHC's intermediate language only records TF type
improvement but not FD type improvement. Therefore,

 f :: SumF Int Float z => z -> z
 f _ = (1.0 :: Double)


 {- fails to type check
 f2 :: Sum Int Float z => z -> z
 f2 _ = (1.0 :: Double)
 -}

(2) There's an issue in the case of overlapping instances, as pointed out by Oleg.
 The source of the issue is that overlapping type class instances are
 treated differently from overlapping type family instances. In principle,
 both formalisms (FD and TF) are equivalent in expressive power.
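
 For concreteness, a minimal sketch of the asymmetry (class D and
 family G are made-up names): overlapping class instances are accepted
 and resolved by specificity, while conflicting type family instances
 are rejected outright.

  {-# LANGUAGE TypeFamilies, FlexibleInstances #-}
  {-# LANGUAGE OverlappingInstances #-}

  class D a where d :: a -> Bool
  instance D [a]    where d _ = False
  instance D [Char] where d _ = True   -- accepted: most specific match wins

  type family G a
  type instance G [a] = Bool
  -- type instance G [Char] = Int      -- rejected: conflicts with G [a]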

(3) As Edward points out, there are many good reasons why we want to keep FDs.
 Just compare the above FD program against its TF equivalent!

 I enjoy using both formalisms and would be unhappy if we got
rid of either of them :)

-Martin




On Tue, Apr 30, 2013 at 9:18 AM,  o...@okmij.org wrote:

 Anthony Clayden wrote:
 Better still, given that there is a mechanical way to convert FunDeps to
 equalities, we could start treating the FunDep on a class declaration as
 documentation, and validate that the instances observe the mechanical
 translation.

 I think this mechanical way is not complete. First of all, how do you
 mechanically convert something like

 class Sum x y z | x y -> z, x z -> y

 Second, in the presence of overlapping, the translation gives
 different results: compare the inferred types for t11 and t21. There
 is no type improvement in t21.
 (The code also exhibits the coherence problem for overlapping instances:
 the inferred type of t2 changes when we remove the last instance of
 C2, from Bool to [Char].)

 {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
 {-# LANGUAGE FlexibleInstances, TypeFamilies #-}
 {-# LANGUAGE NoMonomorphismRestriction #-}
 {-# LANGUAGE OverlappingInstances #-}

 module FD where

 class C1 a b | a -> b where
   foo :: a -> b

 instance C1 [a] [a] where
   foo = id

 instance C1 (Maybe a) (Maybe a) where
   foo = id

 {- -- correctly prohibited!
 instance x ~ Bool => C1 [Char] x where
   foo = const True
 -}

 t1 = foo "a"
 t11 = \x -> foo [x]
 -- t11 :: t -> [t]

 -- Anthony Clayden's translation
 class C2 a b where
   foo2 :: a -> b

 instance x ~ [a] => C2 [a] x where
   foo2 = id

 instance x ~ (Maybe a) => C2 (Maybe a) x where
   foo2 = id

 instance x ~ Bool => C2 [Char] x where
   foo2 = const True

 t2 = foo2 "a"
 t21 = \x -> foo2 [x]
 -- t21 :: C2 [t] b => t -> b



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Is it time to start deprecating FunDeps?

2013-05-01 Thread Martin Sulzmann
See my comments below.

On Wed, May 1, 2013 at 11:13 AM, AntC anthony_clay...@clear.net.nz wrote:
 Martin Sulzmann martin.sulzmann@... writes:

 (1) There's a mechanical way to translate FD programs with
 non-overlapping instances to TF (and of course vice versa).

 Martin, no! no! no! You have completely mis-understood.

 I do _not_ _not_ _not_ want to replace FD's with TF's.

 I want to replace FD's with Equality Constraints.

Ok, that's the bit I've missed, but then I argue that you can't fully
encode FDs.

Consider again the 'Sum' example.

In the FD world:

Sum x y z1, Sum x y z2 ==> z1 ~ z2

enforced by

Sum x y z | x y -> z

In my TF encoding, we find a similar derivation step

SumF1 x y ~ z1, SumF1 x y ~ z2 ==> z1 ~ z2
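
A minimal check, reusing SumF1 from my previous message: the equation
alone pins down z in a signature.

 g :: SumF1 Int Float ~ z => z -> z
 g _ = (1.0 :: Double)  -- accepted: SumF1 Int Float reduces to Double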

So, you're asking: can we translate/express FDs using GHC's intermediate
language with plain type equations only?

-Martin


 And that's exactly
 because I want to use overlapping.

 (You've also failed to understand that the Sum example is for doing Peano
 Arith. See the solution I've just posted.)






___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell' - class aliases

2008-05-02 Thread Martin Sulzmann


Any chance to express this in terms of a formal (constraint rewrite)
framework? For example, the Haskell rule, "do *not* display implied
superclasses", can be specified as follows. Consider the special case of

class Eq a
class Eq a => Ord a

Eq a, Ord a => Ord a

The above rule only applies *after* type inference took place.
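
A concrete instance of the rule (cmp is a made-up example; Eq is a
superclass of Ord):

cmp :: Ord a => a -> a -> Bool
cmp x y = x == y || x < y
-- (==) demands Eq a and (<) demands Ord a, but only Ord a is
-- displayed, since Eq a is implied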

Martin



John Meacham wrote:

This isn't really a response to your email, but I have been mulling the
last few hours away from a computer and wanted to get this stream of
consciousness out when it is fresh in my mind.

The more I think about it, I think 'superclass' is just the wrong
terminology for dealing with class aliases. Superclass implies a strict
partial order on classes, which just isn't the case for class aliases,
for instance

  

class alias Foo a => Foo a = Bar a where ...



Has a defined (if not very useful) meaning with class aliases, but is
really odd if you consider 'Foo a' a superclass. So, I think the
following terminology should be used:

   Context of --+
   alias        |      The alias --+     +--- The expansion of the alias
                |                  |     |
                v                  v     v
 class alias (S1 a .. Sn a)   =>   A a = (C1 a ... Cn a) where
    fd1 = ...
    ...
    fdn = ...
     ^
     +--- The defaults of the alias



given this, the expansion of 'A a' in any context other than an instance
head is

  

A a --> reduce(S1 a .. Sn a, C1 a ... Cn a)



where reduce is standard entailment reduction on class contexts (like
(Eq a, Ord a, Eq a) reduces to (Ord a))

This expansion is carried out iteratively on all class aliases  until a
fixed point is reached, then all class aliases are deleted from the
result and the remaining context is the final result. (This will always
terminate due to there being a finite number of class aliases that can
be pulled into the expansion)
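
as a worked example (hypothetical syntax, made-up alias name; recall
that in Haskell 98 both Eq and Show are superclasses of Num):

class alias Num a => NumAlias a = (Show a, Eq a)

NumAlias a --> reduce(Num a, Show a, Eq a) = Num a

since Show a and Eq a are entailed by Num a, the alias expands to just
Num a.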


likewise, for instance declarations:


  

instance A a where ...



-->

  

foreach C in C1 .. Cn:
   instance (S1 a ... Sn a) => C a where ...



I left out the default methods here. I need to think about them a bit
more to come up with a formal expansion as it is a bit trickier (to
typeset if nothing else), but I hope this is somewhat more clear for
some...

John




  


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: termination for FDs and ATs

2006-05-05 Thread Martin Sulzmann
Ross Paterson writes:
  On Fri, May 05, 2006 at 08:42:12AM +0100, Simon Peyton-Jones wrote:
   Alas, g simply discards its argument, so there are no further
   constraints on 'a'.  I think Martin would argue that type inference
   should succeed with a=Bool, since there is exactly one solution.  Is
   this an example of what you mean?  
  
  Actually a = Maybe Bool
  
   However, as a programmer I would not be in the least upset if inference
   failed, and I was required to add a type signature.  I don't expect type
   inference algorithms to guess, or to solve recursive equations, and
   that's what is happening here.
  
  I think Martin is assuming the current Haskell algorithm: infer the type
  of the function and then check that it's subsumed by the declared type,
  while you're assuming a bidirectional algorithm.  That makes a big
  difference in this case, but you still do subsumption elsewhere, so
  Martin's requirement for normal forms will still be present.
  

A quick reply:

Subsumption among two type schemes boils down to testing whether
(roughly)

    Annotation_Constraint implies Inferred_Constraint
    (given)                       (demanded)

iff

    Annotation_Constraint <-> Annotation_Constraint /\ Inferred_Constraint
 

So, of course, the latter is more suitable for testing than the former
(effectively, we push 'given' type information down the abstract syntax
tree). In any case, it's entirely unclear to me whether 'stopping' the
AT inference process at some stage will guarantee complete inference
(in the presence of type annotations etc).
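
A concrete instance of the subsumption test (h is a made-up example),
with a declared Ord a against an inferred Eq a:

h :: Ord a => a -> a -> Bool
h = (==)
-- accepted: the annotation constraint Ord a entails the inferred
-- constraint Eq a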

Martin




___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org//mailman/listinfo/haskell-prime


Re: termination for FDs and ATs

2006-05-01 Thread Martin Sulzmann
Ross Paterson writes:
  Thanks for clarifying this.
  
  On Sat, Apr 29, 2006 at 05:49:16PM -0400, Manuel M T Chakravarty wrote:
   So, we get the following type for f
   
  f :: (Mul a b, [Result a b] = b) => Bool -> a -> b -> [Result a b]
   
   Given the instance definitions of that example, you can never satisfy
   the equality [Result a b] = b, and hence, never apply the function f.
  
  That would be the case if the definition were
  
   f b x y = if b then x .*. [y] else y
  
  You'd assign f a type with unsatisfiable constraints, obtaining
  termination by deferring the check.  But the definition
  
   f = \ b x y -> if b then x .*. [y] else y
  
  will presumably be rejected, because you won't infer the empty context
  that the monomorphism restriction demands.  So the MR raises the reverse
  question: can you be sure that every tautologous constraint is reducible?
  
   So, clearly termination for ATs and FDs is *not* the same.  It's quite
   interesting to have a close look at where the difference comes from.
   The FD context corresponding to (*) above is
   
 Mul a [b] b
   
   Improvement gives us
   
 Mul a [[c]] [c] with b = [c]
   
   which simplifies to 
   
 Mul a [c] c
   
   which is where we loop.  The culprit is really the new type variable c,
   which we introduce to represent the result of the type function
   encoded by the type relation Mul.  So, this type variable is an artifact
   of the relational encoding of what really is a function,
  
  It's an artifact of the instance improvement rule, but one could define
  a variant of FDs without that rule.  Similarly one could have a variant
  of ATs that would attempt to solve the equation [Result a b] = b by
  setting b = [c] for a fresh variable c.  In both cases there is the
  same choice: defer reduction or try for more precise types at the risk
  of non-termination for instances like these.
  

I agree. At one stage, GHC (I forget which version) behaved similarly
to Manuel's AT inference strategy.


Manuel may have solved the termination problem (by stopping after n
steps). Though, we still don't know whether the
constraints are consistent.

One of the reasons why FD *and* AT inference is tricky is that
inference may be non-terminating even though
  - type functions/FD improvements are terminating
  - type class instances are terminating

We'll need a more clever analysis (than a simple occurs check) to reject
'dangerous' programs.
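
For instance, in the Mul example discussed in this thread every
constraint passes a naive occurs check, yet the improvement steps
chain forever:

 Mul a [b] b  ==>  b = [c], Mul a [c] c  ==>  c = [d], Mul a [d] d  ==> ...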

Martin




___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: termination for FDs and ATs

2006-05-01 Thread Martin Sulzmann
Manuel M T Chakravarty writes:
  Martin Sulzmann:
   A problem with ATs at the moment is that some terminating FD programs
   result in non-terminating AT programs.
   
   Somebody asked how to write the MonadReader class with ATs:
   http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html
  
   This requires an AT extension which may lead to undecidable type
   inference:
   http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html
  
  The message that you are citing here has two problems:
  
   1. You are using non-standard instances with contexts containing
  non-variable predicates.  (I am not disputing the potential
  merit of these, but we don't know whether they apply to Haskell'
  at this point.)
   2. You seem to use the super class implication the wrong way around
  (ie, as if it were an instance implication).  See Rule (cls) of
  Fig 3 of the Associated Type Synonyms paper.
  

I'm not arguing that the conditions in the published AT papers result
in programs for which inference is non-terminating.

We're discussing here a possible AT extension for which inference
is clearly non-terminating (unless we cut off type inference after n
steps). Without this extension you can't adequately
encode the MonadReader class with ATs.
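
For reference, a sketch of the shape in question (my names; the
haskell-cafe thread linked above has the full version). Note the
non-variable predicate Monad (t m) in the instance context:

 {-# LANGUAGE TypeFamilies, FlexibleContexts, UndecidableInstances #-}

 import Control.Monad.Trans (MonadTrans, lift)

 class Monad m => MonadReaderAT m where
   type Env m
   ask' :: m (Env m)

 instance (MonadTrans t, Monad (t m), MonadReaderAT m)
       => MonadReaderAT (t m) where
   type Env (t m) = Env m
   ask' = lift ask'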


  This plus the points that I mentioned in my previous two posts in this
  thread leave me highly unconvinced of your claims comparing AT and FD
  termination.
  

As argued earlier by others, we can apply the same 'tricks' to make
FD inference terminating (but then we still don't know whether
the constraint store is consistent!).

To obtain termination of type class inference, we'll need to be *very*
careful that there's no 'devious' interaction between instances and
type functions/improvement. 

It would be great if you could come up with an improved analysis which
rejects potentially non-terminating AT programs. Though, note that
I should immediately be able to transfer your results to the FD
setting based on the following observation:

I can turn any AT proof into an FD proof by (roughly)

 - replace the AT equation F a=b with the FD constraint F a b
 - instead of firing type functions, we fire type improvement and
   instances
 - instead of applying transitivity, we apply the FD
   rule F a b, F a c ==> b=c

Similarly, we can turn any FD proof into an AT proof.
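
On the Mul example from this thread, the first translation looks as
follows (ResultC is my name for the FD version of Result):

 -- from the AT instance:  type Result a [b] = [Result a b]
 class ResultC a b r | a b -> r
 instance ResultC a b r => ResultC a [b] [r]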

Martin

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: termination for FDs and ATs

2006-04-27 Thread Martin Sulzmann
Ross Paterson writes:
  On Thu, Apr 27, 2006 at 12:40:47PM +0800, Martin Sulzmann wrote:
   Yes, FDs and ATs have the exact same problems when it comes to termination.
   The difference is that ATs impose a dynamic check (the occurs check)
   when performing type inference (fyi, such a dynamic check is sketched
   in a TR version of the FD-CHR paper).
  
  Isn't an occurs check unsafe when the term contains functions (type
  synonyms)?  You might reject something that would have been accepted
  if the function had been reduced and discarded the problematic subterm.
  

Correct. Any dynamic check is necessarily incomplete.

Martin
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


termination for FDs and ATs

2006-04-26 Thread Martin Sulzmann

Yes, FDs and ATs have the exact same problems when it comes to termination.
The difference is that ATs impose a dynamic check (the occurs check)
when performing type inference (fyi, such a dynamic check is sketched
in a TR version of the FD-CHR paper).

A problem with ATs at the moment is that some terminating FD programs
result in non-terminating AT programs.

Somebody asked how to write the MonadReader class with ATs:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html

This requires an AT extension which may lead to undecidable type
inference:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html

Martin


Ross Paterson writes:
  To ensure termination with FDs, there is a proposed restriction that
  an instance that violates the coverage condition must have a trivial
  instance improvement rule.  Is the corresponding restriction also
  required for ATs, namely that an associated type synonym definition
  may only contain another associated type synonym at the outermost
  level?  If not, wouldn't the non-terminating example from the FD-CHR
  paper (ex. 6, adapted below) also be a problem for ATs?
  
   class Mul a b where
     type Result a b
     (.*.) :: a -> b -> Result a b

   instance Mul a b => Mul a [b] where
     type Result a [b] = [Result a b]
     x .*. ys = map (x .*.) ys

   f = \ b x y -> if b then x .*. [y] else y
  
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


FDs and confluence

2006-04-10 Thread Martin Sulzmann
Ross Paterson writes:
  (see the FunctionalDependencies page for background omitted here)
  
  One of the problems with the relaxed coverage condition implemented
  by GHC and Hugs is a loss of confluence.  Here is a slightly cut-down
  version of Ex. 18 from the FD-CHR paper:
  
   class B a b | a -> b
   class C a b c | a -> b
  
   instance B a b => C [a] b Bool
  
  Starting from a constraint set C [a] b Bool, C [a] c d, we have two
  possible reductions:
  
  1) C [a] b Bool, C [a] c d
    => c = b, C [a] b Bool, C [a] b d   (use FD on C)
    => c = b, B a b, C [a] b d  (reduce instance)
  
  2) C [a] b Bool, C [a] c d
    => C a b, C [a] c d (reduce instance)
       ^
       should be B a b


  The proposed solution was to tighten the restrictions on instances to
  forbid those like the above one for C.  However there may be another
  way out.
  
  The consistency condition implies that there cannot be another
  instance C [t1] t2 t3: a substitution unifying a and t1 need not
  unify b and t2.  Thus we could either
  
  1) consider the two constraint sets equivalent, since they describe
 the same set of ground instances, or
  

That's troublesome for (complete) type inference.
Two constraint stores are equivalent if they are equivalent
for any satisfying ground instance? How to check that?

  2) enhance the instance improvement rule: in the above example, we
 must have d = Bool in both cases, so both reduce to
  
   c = b, d = Bool, B a b
  
  More precisely, given a dependency X -> Y and an instance C t, if
 tY is not covered by tX, then for any constraint C s with sX = S tX
 for some substitution S, we can unify s with S t.
  

I'm not following you here; you're saying:

rule C [a] b d ==> d=Bool

Are you sure that you're not introducing further critical pairs?

 We would need a restriction on instances to guarantee termination:
 each argument of the instance must either be covered by tX or be
 a single variable.  That is less restrictive (and simpler) than
 the previous proposal, however.
  
  Underlying this is an imbalance between the two restrictions on instances.
  In the original version, neither took any account of the context of the
  instance declaration.  The implementations change this for the coverage
  condition but not the consistency condition.  Indeed the original form of
  the consistency condition is necessary for the instance improvement rule.
  

Maybe you found a simple solution (that would be great),
but I'm not 100% convinced yet.


The problem you're addressing only arises for non-full FDs.
Aren't such cases very rare in practice?
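
(Non-full: the dependency does not mention all class parameters, as in

 class C a b c | a -> b   -- c is not involved in the dependency

from the start of this message.)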

Martin


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: MPTCs and functional dependencies

2006-04-09 Thread Martin Sulzmann
Manuel M T Chakravarty writes:
  Simon Peyton-Jones: 
   My current take, FWIW.
   
   [...]
   Tentative conclusion: H' should have MPTC + FDs, but not ATs.
  
  My conclusion is that we should not include FDs or ATs into the standard
  at the moment.  Standardising FDs as a stopgap measure may easily put us
  into the same situation that we are having with records at the moment.
  Nobody is really happy with it, but we don't dare to change it either.
  
  Manuel

The situation here is clearly different. Whatever comes next
(after FDs) will be a conservative extension. So, standardising
FDs is a good thing because they have proven to be a useful (somewhat
essential for MPTCs) feature. Hence, I will go with Simon:
H' should have MPTC + FDs, but not ATs.

You mention that nobody is really happy with FDs at the moment,
but you don't provide concrete arguments why. I assume you
refer to the following two issues:

1) FDs may threaten complete and decidable type inference
2) FDs are more verbose than say ATs

Issue 1) is not limited to FDs. ATs share the same problem.
I'll spare the details.

Issue 2) may be a matter of taste.
In fact, sometimes it's easier to write decidable (but
more verbose) FDs than to write an equivalent *and* decidable AT
program.

Recall the following discussion.
Somebody asked how to write the MonadReader class with ATs:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014489.html

This requires an AT extension which may lead to undecidable type
inference:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html
It is possible to recover decidable AT inference (I hinted how, see
the above email thread), but the sufficient conditions get more and
more complex.


   * MPTCs are very useful.  They came along very rapidly (well before
   H98).  I think we must put them in H'
   
   * But MPTCs are hamstrung without FDs or ATs
   
   * FDs and ATs are of the same order of technical difficulty, as Martin
   says
  
  Both FDs and ATs come in various versions of different levels of
  expressiveness.  They may be of the same level of difficulty with all
  bells and whistles, but that's a bit of a red herring.  The real
  question is how do the levels of difficulty compare at the level of
  expressiveness that we want.  Or phrased differently, for a version that
  is not too difficult, how does the expressiveness compare.
  

See the MonadReader example above!

   * ATs are (I believe) a bit weaker from the expressiveness point of view
   (zip example), but are (I believe) nicer to program with.  
  
  The zip example can be done with ATs - it's actually in one of Martin's
  papers.  I currently don't know of any FD example that you can't do with
  ATs.  It's a matter of what forms of AT definitions you want to allow.
  

The zip example canNOT be expressed with ATs as described in the
ICFP'05 paper!

The point here is really that it's fairly easy to write down the zip
instances and let FDs automatically derive the necessary improvement
rules. In the case of ATs (using an extension which has been sketched
in the Associated FD paper), the programmer has to write down 
all improvement rules (as type functions) herself. This can be a
non-trivial task!

   
   * Medium term, I think ATs may *at the programming-language level*
   displace FDs, because they are nicer to program with.  But that's just
   my opinion, and we don't have enough experience to know one way or the
   other.
  
  Maybe not only at the programming-language level.  Given our latest paper,
  
http://www.cse.unsw.edu.au/~chak/papers/SCP06.html
  
  for example, the translation of ATs is simpler than FDs if we also have
  existential types (but admittedly that became clear to us only after
  your email message).
  

Be careful, somebody could argue the opposite.
ATs are now redundant because all FD programs can now be translated.
Hence, the AT via FD encoding scheme which I once sketched is now type
preserving.
The fact that the FD translation is more complex is not a serious
issue. After all, there's an automatic translation scheme.

Martin
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: the MPTC Dilemma (please solve)

2006-03-19 Thread Martin Sulzmann
Manuel M T Chakravarty writes:
  The big question is: do we really want to do logic programming over
  types?  I'd say, no!

With ATs you're still doing logic programming whether you like it or not.

As Ross Paterson writes:
  I agree that functions on static data are more attractive than logic
  programming at the type level.  But with associated type synonyms,
  the type level is not a functional language but a functional-logic one.

The point here is really that you may think you define
a type function, e.g.

type F [a] = [F a]

though these type definitions behave more like relations,
because (type) equations behave bi-directionally.
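
In a functional-logic reading, the equation above can be used in both
directions:

 F [Int] = x    yields   x = [F Int]            -- left-to-right
 F y = [Bool]   yields   y = [z], F z = Bool    -- right-to-left,
                                                -- guessing y's shape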


Manuel M T Chakravarty writes:
  My statement remains:  Why use a relational notation if you can have a
  functional one?

Because a relational notation is strictly stronger. 

Some time ago I asked the following:

Write the AT equivalent of the following FD program.

zip2 :: [a] -> [b] -> [(a,b)]
zip2 = ... -- standard zip
class Zip a b c | c -> a, c -> b where
  zip :: [a] -> [b] -> c
instance Zip a b [(a,b)] where   -- (Z1)
  zip = zip2
instance Zip (a,b) c e => Zip a b ([c] -> e) where   -- (Z2)
  zip as bs cs = zip (zip2 as bs) cs
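
A usage sketch (zip here is the class method above, shadowing the
Prelude's): with a result annotation, the FDs c -> a, c -> b improve
the element types, via one (Z2) step followed by (Z1).

ts :: [((Int,Bool),Char)]
ts = zip [1,2,3] [True,False] "ab"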

Specifying the FD improvement conditions via AT type functions
is a highly non-trivial task.
Check out Section 2 in
[October 2005]  Associated Functional Dependencies 
http://www.comp.nus.edu.sg/~sulzmann/
for the answer.

Sure, ATs provide a nice syntax for a specific kind of type class
program. Though, as I've pointed out before, any AT program can be
encoded with FDs, but the other direction does not necessarily hold.
See the above paper for details.


Manuel M T Chakravarty writes:
   
   From: Martin Sulzmann [EMAIL PROTECTED]
   Subject: MPTC/FD dilemma 
   
   - ATs (associated types) will pose the same challenges.
 That is, type inference relies on dynamic termination checks.
  
  Can you give an example?


There's nothing wrong with ATs as described in the ICFP'05 paper.
The problem is that some simple extensions easily lead to undecidable
type inference.

Recall the following discussion:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014609.html

The point here is really that type inference for ATs and FDs is
an equally hard problem:
http://www.haskell.org//pipermail/haskell-cafe/2006-February/014680.html


Another thing, here's an excerpt of the current summary of the MPTC
dilemma from
http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki


Multi Parameter Type Classes Dilemma

Options for solving the dilemma:
2. Put AssociatedTypes on the fast-track for sainthood

Associated Types

   Cons:
* Only a prototype implementation so far

This is an *inaccurate* and highly *misleading* summary.

Without some formal argument you cannot simply say that ATs are
better than FDs. In some situations, ATs may have a better syntax
than FDs. Though, this is a matter of taste. More seriously, ATs
require the programmer to write *all* improvement rules
explicitly. This can be a fairly challenging task. See the above zip
example.

I understand that you want to push ATs. It's a good thing to seek
alternatives. But typing and translating ATs is as challenging
as typing and translating FDs (I've left out the translation issue,
because that leads too far here).

Martin

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: partial type signatures/annotations/declarations..

2006-03-07 Thread Martin Sulzmann

FYI, Chameleon supports a combination of lexically scoped
and partial type annotations. The latest Chameleon version
is broken (a fix is on its way). Though, besides the implementation,
there's also a concise formal description. See
[July 2005] Lexically Scoped Type Annotations
http://www.comp.nus.edu.sg/~sulzmann/lexical-annot.ps

Martin

Claus Reinke writes:
  instead of introducing holes in types and contexts to leave
  parts of a declaration unspecified, why not use type subsumption?
  
  it has been pointed out to me that I should be more specific about 
  what I mean with this suggestion (not least to ensure that I don't
  suggest to use subsumption the wrong way round, thanks!-):
  
  - first, subsumption: 
  
  (P => t) subsumes (Q => r)
  iff
  exists substitution th. (r = th(t) /\ th(P) |- Q)
  
  [where |- is entailment between sets of predicates]
   
  in words: a qualified type qt1 is subsumed by a qualified type qt2
  (we write: qt1 <= qt2) precisely if qt1 is a substitution instance of
  qt2 whose constraints are implied by the substitution instance of qt2's
  constraints.
  
  - next, its use for partial type declarations:
  
  a qualified type is subsumed if its type part is more specific, and its
  constraint part is implied. so we could declare a qualified type for an 
  expression that, instead of being the precise type of said expression,
  only has to be subsumed by the inferred expression type:
  
  we write (expr :: qt_partial) iff: (expr :: qt) and (qt_partial >= qt).
  
  that means that the precise type can be more specific (variables
  instantiated), have more constraints (as long as the ones given are
  implied) and even different constraints (eg., of subclasses, or of
  other constraints that imply the ones given). this accounts for uses
  of placeholders (_) in both types and constraints in the alternative
  option.
  
  note that, currently, declarations can already be more specific than 
  the inferred type:
  
  f :: Int -> Int -> Int
  f x y = x+y
  
  partial type declarations would permit us to declare types that are
  more general than the inferred type, allowing us to omit type info
  that we want to leave inferred (or specifying part of a type without
  having to specify all of it):
  
  f :: Int -> a -> Int
  or
  f :: Int -> a
  or
  f :: Fractional a => a -> a -> a
  
  pro: would easily allow for omission of type details or parts
  of context (a type with more context, or with more specific
  type components, is subsumed by the declaration)
   
  cons: while this accounts for the explicit parts of the alternative
  option (_), that option implicitly assumes that other type
  variables cannot be instantiated, and that contexts without
  _ cannot be extended.
  
  as long as we only specify a one-sided bound, the inferred
  type could be more specific than we'd like (we can't say
  that we don't want context, or that some type variable
  must not be instantiated)
  
  hope I got it the right way round this time?
  claus
  
  ps. I can't find the original thread, but this came up again last
   year, and Oleg suggested, that for the special case of function
  signatures, the same effect could be had by adding extra fake
  clauses (similar to fake clauses commonly used for trace or seq):
  
  http://haskell.org/pipermail/haskell-cafe/2004-August/006606.html
  
  this does not work for other places where type declarations
  are required, but would be awkward or impossible to give in
  full, but at least for functions, Oleg's fake clauses might be a 
  possible desugaring (unless our clever optimising compiler
  removes them as dead code;-):
  
  -- f' :: Int -> a -> Int
  -- f' :: Int -> a
  -- f' :: Fractional a => a -> a -> a
  f' :: Eq a => c -> b -> a
  f' = undefined
  
  f x y | False = f' x y
  f x y = x+y
  
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-02-21 Thread Martin Sulzmann

When I said type class acrobats I didn't mean to make fun
of people like you who use type classes in a very skillful way.
Though, I tried (successfully :) ) to be provocative.

I agree with you there's a real need for unrestricted type class
programming. After all, the Hindley/Milner subset of Haskell already
allows us to write non-terminating, buggy programs. So, what's
the difference if we allow the same at the level of types!?
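
The type-level analogue of a value-level loop, for concreteness (Loop
is a made-up class):

{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

loop :: a
loop = loop                -- well-typed, diverges at run time

class Loop a
instance Loop a => Loop a  -- accepted, but solving any Loop constraint
                           -- diverges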

The trouble I have with the current practice of type class programming
is that it's not very principled (yes, I'm being provocative again).
People learn how to write type class programs by observing the
behavior of a specific implementation and sometimes the results
are quite unexpected.

I have been mentioning CHRs before as a very useful tool to study
FDs. In fact, CHRs go way beyond FDs; see "A Theory of Overloading".
I claim that when it comes to unrestricted type class programming
CHRs are the way to go. CHRs have a clean semantic meaning and
precisely explain the meaning of type class programs.
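
As an example, the FD-CHR translation of a class
Sum x y z | x y -> z with a single instance Sum Int Float Double:

 rule Sum x y z1, Sum x y z2 ==> z1 = z2   -- the FD itself
 rule Sum Int Float z ==> z = Double       -- instance improvement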

Martin



Claus Reinke writes:
   I'm forwarding an email that Martin Sulzmann asked me to post on his
   behalf.
  
  thanks. well, that is one view of things. but it is certainly not the only 
  one.
  
  first, about those acrobatics: my type-class-level programs tend to
  start out as straightforward logic programs over types - no acrobatics 
  involved, easy to understand. the acrobatics start when I'm trying to 
  fit those straightforward programs into the straightjackets imposed 
  by current systems - be they explicit restrictions or accidental 
  limitations. wrestling them leads to the need to introduce complex 
  work-arounds and implementation-specific tricks into the formerly
  clean logic programs.
  
  the restrictions tend to be artificial and full of odd corners, rather 
  than smooth or natural, and the limitations tend to be the result of 
  not thinking the problem through. so one could say that neither of
  the levels proposed by Martin is ready for standardization. or, one
  could be pragmatic and standardize both levels, well knowing that
  neither is going to be the final word.
  
  second, about the proposal to ignore overlapping instances for the
  moment: that moment has long passed - iirc, undecidable and 
  overlapping instances have been around for longer than the current
  standard Haskell 98. they provide recursion and conditionals (via
  best-match pattern-matching) for logic programming over types.
  they are overdue for inclusion in the standard, and it is time to stop 
  closing our eyes to this fact.
  
  third, understanding FDs via CHRs is very nice, and a good start
  that I hope to see expanded on rather sooner than later. but (a) 
  there is a big discrepancy between the language studied there and 
  current practice, so this work will have to catch on before it can
  be applied; and (b) the main issue with unconstrained type-class-
  level programming is not that it tackles a semi-decidable problem.
  
  in fact, the usual Prolog system only provides an _incomplete_ 
  implementation of almost the same semi-decidable problem. but it 
  provides a well-defined and predictable implementation, so
  programmers can live with that (with a few well-known exceptions).
  
  the two problems are not quite the same, because we extract
  programs from type-class-level proofs. so not only do we need
  the result to be uniquely determined (coherence), we also need
  the proof to be un-ambiguous (or we might get different programs
  depending on the proof variant chosen).
  
  but, and this is an important but: trying to restrict the permissable
  programs so that neither incoherence nor ambiguity can arise in 
  the first place is only _one_ way to tackle the issue; the other
  direction that needs to be explored is to give programmers control
  enough to disambiguate and to restore coherence where needed.
  
  so, I am quite happy with splitting the bigger problem into two
  levels: restrictive programs corresponding to the current state of
  art in formal studies of the problem, and unconstrained programs
  catering for the current state of art in programming practice. just
  as long as both levels are included in the standard, and both levels
  receive the further attention they need.
  
  as far as I understand, this should not pose a problem for 
  implementors, as they usually implement the simpler unconstrained 
  variant first, then add restrictions to conform to the standard.
  
  cheers,
  claus
  
   - There's a class of MPTC/FD programs which enjoy sound, complete
and decidable type inference. See Result 1 below. I believe that
hugs and ghc faithfully implement this class.
Unfortunately, for advanced type class acrobats this class of
programs is too restrictive.
   
   - There's a more expressive class of MPTC/FD programs which enjoy
sound and complete type inference if we can guarantee termination
by for example