Re: Announcing the new Haskell Prime process, and Haskell 2010

2009-07-07 Thread Claus Reinke
At last year's Haskell Symposium, it was announced that we would change 
the Haskell Prime process to make it less monolithic. 
..
In the coming weeks we'll be refining proposals in preparation for 
Haskell 2010. 


Given the incremental nature of the new standards, would it be
useful to switch back to version numbers, eg Haskell 2.0.0 (2010)
instead of Haskell 2010? Otherwise, we'll end up with half a
dozen more or less current Haskells related by no obvious means.
Haskell'98 was chosen because it projected more permanence
than the Haskell 1.x line of Haskell revisions that came before it.

Having an API version instead of a date encoded in the name would
support deprecations, breaking changes, or additions, as well as make
it clear whether a given year's version breaks anything or not.


Btw, once upon a time, there was a discussion about an even
more modular approach, standardising language extensions
without saying which extensions made up a standard language.
That would give support to the status quo, where people want
to use, say, Haskell'98+FFI+Hierarchical Modules+MPTC+..

In other words, existing language extensions (LANGUAGE
pragmas) ought to be standardized (currently, they mean different
things in different implementations), independent of whether
or not the committee decides to group them into a Haskell X.

Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell' - class aliases

2008-05-06 Thread Claus Reinke

| which leads me to a problem i have with ATs, which applies
| to class aliases as well: although the ATs are written as if they
| were local to the class, they get lifted out of the class in a naive
| manner. in particular, they can only refer to their parameters,
| not to other local definitions/types, and their parameters have
| to match the class parameters.

I'm not sure what you mean here, Claus.  Can you give a concrete example?


sure. here's one from practice, even. there was a thread on haskell-cafe
on how to re-export FD-based libraries in AT-based form (for better
match with AT-based client code). the obvious translation of

   class FD a b | a -> b

would seem to be

   class AT a where type AT a

but, as it turns out, you can't write

   instance FD a b => AT a where type AT a = b

because the 'b' is not in scope! from an AT-based perspective,
it ought to be in scope, because the AT definition is local to the
instance, but the AT seems to be implemented as sugar for a
non-local TF, for which the local 'b' is not available (i'm not sure
why there is no lambda-lifting behind the scenes to add that 'b'
parameter, in a hidden form?).

the thread, and Manuel's explanation, are here:

http://www.haskell.org/pipermail/haskell-cafe/2008-March/041168.html

this is likely to be less of a problem for class aliases, because
the component class instances share not only the same form
of instance head, but also the same context (so if a type is
functionally determined by the context in one component, it
is so in all components).

btw, if type family instances could have contexts that functionally 
determine extra type parameters, the original poster wouldn't have 
to duplicate his FDs as TFs, as suggested in that email, but could 
simply write (i think?-):


   type instance AT a = FD a b => b

claus


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell' - class aliases

2008-05-02 Thread Claus Reinke

   A aliasing of constraints/classes
   (this is the semantic part that could also be explained by
   reduction, or by simple mutual implication encodings)

   B aliasing of syntax, especially instance definitions
   (this syntactic part is hard to encode, and simple in
   terms of syntactic macro expansion)


it just occurred to me that there is a precedent for this kind
of translation, in associated types and type functions.

defining an AT in a class is equivalent to defining a TF
outside the class, and connecting the TF to the class with
superclass and instance constraints, right?

   class C a where
     type CT a
     c :: (a, CT a)

   instance C a where
     type CT a = ..
     c = ..

   -- vs

   type family CT a
   type instance CT a = ..

   class CT a ~ b => C a where c :: (a, CT a)
   instance CT a ~ b => C a where c = ..

though the latter form is not yet supported in GHC (#714).
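
for concreteness, here is a minimal, compilable instance of the AT form
above (Container and Elem are made-up names, not from this thread); the
TF-plus-equality-superclass form is the part that still needs #714:

   {-# LANGUAGE TypeFamilies #-}
   module ATExample where

   class Container f where
     type Elem f
     empty  :: f
     insert :: Elem f -> f -> f

   instance Container [a] where
     type Elem [a] = a
     empty  = []
     insert = (:)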

which leads me to a problem i have with ATs, which applies
to class aliases as well: although the ATs are written as if they
were local to the class, they get lifted out of the class in a naive
manner. in particular, they can only refer to their parameters, 
not to other local definitions/types, and their parameters have 
to match the class parameters.


however, i assume that the restrictions/translations/implementations
for class aliases are similar to those for the implementation
of ATs in terms of TFs, which might help?


claus



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


RFC: qualified vs unqualified names in defining instance methods

2008-04-25 Thread Claus Reinke

consider Haskell 98 report, section 4.3.2 Instance Declarations:

   The declarations d may contain bindings only for the class methods of C. 
   It is illegal to give a binding for a class method that is not in scope, but the 
   name under which it is in scope is immaterial; in particular, it may be a 
   qualified name. (This rule is identical to that used for subordinate names 
   in export lists --- Section 5.2.) For example, this is legal, even though 
   range is in scope only with the qualified name Ix.range. 


 module A where
   import qualified Ix

   instance Ix.Ix T where
     range = ...

i consider this confusing (see example at the end), but even
worse is that the reference to 5.2 appears to rule out the use of 
qualified names when defining instance methods.


while this abbreviation of qualified names as unqualified names
when unambiguous may be harmless in the majority of cases, it
seems wrong that the more appropriate explicit disambiguation
via qualified names is ruled out entirely.

i submit that 4.3.2 should be amended so that qualified names 
are permitted when defining instance methods.


here's an example to show that the name is unambiguous only on the
lhs of the method definition, and that the forced use of unqualified
names can be confusing:

   module QI where

   import Prelude hiding (Functor(..))
   import qualified Prelude (Functor(..))

   data X a = X a deriving Show

   instance Prelude.Functor X where
     fmap f (X a) = X (f a)
       where q = (reverse fmap, Prelude.fmap not [True], reverse QI.fmap)

   fmap = fmap


note that there are two unqualified uses of 'fmap' in the instance
declaration, referring to different qualified names: 


- in the lhs, 'fmap' refers to 'Prelude.fmap', which isn't in scope
   unqualified, only qualified

- in the rhs, 'fmap' refers to 'QI.fmap'
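
(a legal workaround in the meantime, sketched here with made-up names
QI2/fmapX: bind the method to a distinctly named top-level helper, so
no unqualified 'fmap' needs to appear on a right-hand side:

   module QI2 where

   import Prelude hiding (Functor(..))
   import qualified Prelude (Functor(..))

   data X a = X a deriving Show

   -- the lhs 'fmap' still resolves to Prelude.fmap, per 4.3.2
   instance Prelude.Functor X where
     fmap = fmapX

   fmapX :: (a -> b) -> X a -> X b
   fmapX f (X a) = X (f a)

this avoids the confusion, but only by renaming - which is exactly the
detour that permitting qualified names would make unnecessary.)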

claus


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Haskell' - class aliases

2008-04-25 Thread Claus Reinke

Is this the most up-to-date description of the proposal?
http://repetae.net/recent/out/classalias.html


what sounds nice about the class alias proposal is that it is pure
sugar, at least to the extent that type aliases are, but the design 
principle behind it seems to be that there should be a separate 
class for each method (as in Clean?), and that any compound 
classes should really just be class aliases (made to look like 
compound classes by the sugar), so that rearranging compound
classes comes down to defining more aliases for the same 
single-method base classes.
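
the constraint-aliasing half of this is easy to sketch in today's
Haskell - a methodless compound class plus a blanket instance (the
extensions in the pragma are assumptions of this sketch); the hard
part, defining the component instances through a single alias
instance, is what the encoding below attempts:

   {-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

   class Foo a where foo :: a -> Bool
   class Bar a where bar :: Int -> a -> [a]

   -- 'class alias FooBar a = (Foo a, Bar a)', approximately
   class    (Foo a, Bar a) => FooBar a
   instance (Foo a, Bar a) => FooBar a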


since this looks like class equivalence plus namespace handling,
i was wondering how far one could get without the proposed
extension. this is slightly more difficult than the proposed translation
(which splits compound aliases into their components, so that the
alias class is always translated away), but it might still be of interest.

consider the 'class alias FooBar a = (Foo a,Bar a)' example
from the proposal page. we define FooBar and Foo/Bar in
separate modules and use that for namespace management.

- FooAndBar defines Foo and Bar, as well as a type X
   which is an instance of both

- FooBar defines FooBar, implicit derivations of FooBar 
   from Foo/Bar and vice-versa (the aliasing part), as well 
   as a type Y which is an instance of FooBar


   FooBar also arranges for Y to be an instance of Foo/Bar,
   and for X to be an instance of FooBar, via the implicit
   derivations, but controlled by instances of How

note the class 'How' and its instances, which ensure that
any type class instance is either defined, or derived (in
a unique, specified way), but never both.

problems:

(1) instance method definitions by qualified names are
   not permitted, leading to the confusing 'foo = foo'
   (cf separate thread)
   
(2) overlapping instances, due to the derived instances;
   it seems this can be held in check by the use of 'How',
   at the expense of some extra parameters/contexts/
   instances to control how each instance is defined/derived

example session:

   *FooBar> foo (X 1)
   False
   *FooBar> bar 0 (X 1)
   [X 1]
   *FooBar> foo (Y 1)
   True
   *FooBar> bar 0 (Y 1)
   [Y 1,Y 1]

   *FooBar> FooAndBar.foo (X 1)
   False
   *FooBar> FooAndBar.foo (Y 1)
   True
   *FooBar> FooAndBar.bar 0 (X 1)
   [X 1]
   *FooBar> FooAndBar.bar 0 (Y 1)
   [Y 1,Y 1]

   *FooBar> :t foo
   foo :: (FooBar a how) => a -> Bool
   *FooBar> :t FooAndBar.foo
   FooAndBar.foo :: (Foo a how) => a -> Bool
   *FooBar> :t bar
   bar :: (FooBar a how) => Int -> a -> [a]
   *FooBar> :t FooAndBar.bar
   FooAndBar.bar :: (Bar b how) => Int -> b -> [b]

i don't think i'd recommend this encoding style (it does not
quite fulfill the criterion of simplicity!-), but there you are:
class aliases encoded.

hth,
claus

ps. (for the TF vs FD fans: replacing FD class 'How' 
   with a TF doesn't seem to work; a bug?)




FooAndBar.hs
Description: Binary data


FooBar.hs
Description: Binary data
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: DRAFT: Haskell' status update

2008-04-22 Thread Claus Reinke

Nevertheless, the committee feels that we cannot have a Haskell' without
some way to resolve ambiguities when using multi-parameter type
classes, be it Functional Dependencies (FDs) or Type Families (TFs).


agreed!


  - Haskell' alpha will be a complete language specification,
including all the modifications and additions we want to make
to the language *except* for FDs or TFs.


that should at least allow for some necessary progress.


  - Haskell' will follow afterward, adding either FDs or TFs.


that seems to suggest that Haskell' is going to be *the* standard,
so it had better be very good, and not leave out anything important,
even if it is as recent as TFs.

strangely, i never had such great expectations from this process -
instead, i expected Haskell' to be *one in a series* of standards,
documenting the current state, not any (supposedly) final state.
it should have come out long ago, and should be well on its way
to becoming obsolete in a few years (though not immediately, if possible).


in line with those expectations, i'd be happy with a Haskell' beta,
if it were to *standardise*, but *not prescribe* FDs and TFs and
whatever other type system extensions have been in common use
for a long time now. no Haskell' beta implementation would be
required to implement either FDs or TFs or overlap resolution
or closed classes or .. . but if i write {-# LANGUAGE TF #-}
and the implementation doesn't complain, it'd better conform to
the standard for TFs.

currently, that is simply not the case, with LANGUAGE pragmas 
supported (and interpreted differently) by more than one implementation 
suggesting a *false* sense of standardisation and portability.



The motivation for this two-stage approach is that we can make progress
on all the other parts of the language without being blocked on the type
system, we can start work on implementing Haskell' alpha in our
compilers, users can start using the new standard, and we can gain some
experience with using it in practice.


yet another option would be to standardise improvement as a means 
of reducing ambiguity, and simplification as a means of rewriting 
constraints, leaving open specific forms of specifying such improvements 
or simplifications - a bit like a language-standard API for adding 
features (perhaps there could even be a corresponding standard API 
for extending standard-conformant implementations) [1]. 


then there could be an addendum specifying FDs as one particular
instance of enabling such improvements, and any implementation
supporting {-# LANGUAGE FD #-} would have to conform
to that addendum.

and anyone wanting to support other forms of improvement
(records, subtyping, ..) could program to that implementation 
extension API..



Thanks for your patience :-)  And rest assured that progress is being made!


thanks for your update! (frankly, i had written off haskell prime;-)

haskell prime is dead! long live haskell prime!
claus

[1] M. P. Jones. Simplifying and improving qualified types.
    In FPCA '95: Conference on Functional Programming Languages
    and Computer Architecture. ACM Press, 1995.
    http://web.cecs.pdx.edu/~mpj/pubs/improve.html


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-12 Thread Claus Reinke

Put differently, I don't see a compelling use-case for the proposed
syntax extension. But I've seen many misused monads.


A compelling use-case:

http://darcs.haskell.org/yhc/src/libraries/core/Yhc/Core/Simplify.hs

Look at coreSimplifyExprUniqueExt
   -- helpers, ' is yes, _ is no
   coreCase__ x y = f $ CoreCase x y
   coreCase_' x y = f . CoreCase x =<< y


hmm. i'd say that is a compelling misuse-case!-) although apfelmus
probably meant misuses in the sense that a different structure than
monads would often have been a better fit, i just mean readability.

why not simply define

   coreCaseM x y = f =<< liftM2 CoreCase x y

etc. then this


   f (CoreApp (CoreLet bind xs) ys) = coreLet_' bind (coreApp__ xs ys)


would become somewhat lengthier, but much easier to read:

   f (CoreApp (CoreLet bind xs) ys) =
     coreLetM (return bind) (coreAppM (return xs) (return ys))


in particular, there aren't 2^n variations of the functions, but simple
return-wrappers around parameters that immediately tell me what is
going on. if anything, i'd often like a quieter/shorter way to write
(return x) - since this pattern usually requires parentheses, some form
of semantic brackets would do nicely, to express lifting of pure values.
that would serve the same overall purpose, without semantic ambiguities,
wouldn't it?
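
to make that style concrete, a self-contained toy version (Expr and
the identity 'simplifier' f below are stand-ins, not Yhc.Core's real
definitions):

   import Control.Monad (liftM2)

   data Expr = App Expr Expr | Let Expr Expr | Var String deriving Show

   -- f plays the role of the simplifier applied at every new node
   f :: Monad m => Expr -> m Expr
   f = return

   -- one monadic smart constructor per constructor, not 2^n variants
   appM, letM :: Monad m => m Expr -> m Expr -> m Expr
   appM x y = f =<< liftM2 App x y
   letM x y = f =<< liftM2 Let x y

   -- the rewrite from above, in this toy vocabulary
   rule :: Monad m => Expr -> Expr -> Expr -> m Expr
   rule bind xs ys = letM (return bind) (appM (return xs) (return ys))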

btw, this half-implicit recursion via an f embedded in constructors
looks rather odd to me. why not separate rules and recursion?

claus


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-29 Thread Claus Reinke

   mapA f (nilAP  -> ())    = nilA
   mapA f (consAP -> (h,t)) = consA (f h) (mapA f t)

   foldA  f n (nilAP  -> ())    = n
   foldA  f n (consAP -> (h,t)) = f h (foldA f n t)


yes, maps and folds are likely to be parts of the ADT interface, rather than
defined on top of it. I just used them as simple and familiar examples, so
that we have something to compare them with. 

To me this exactly illustrates why view patterns are a bad idea: 


whether or not an ADT interface is well designed, according to some metric, 
does not tell us whether or not the language features used in the code are 
good or not. hiding the internal representation always raises questions of
whether the exposed interface is still expressive enough or allows efficient 
code to be written, even without view patterns. 

in other words, ADTs do not only conflict with the ease of pattern matching,
but also with other possible advantages of using the internal representation
directly. view patterns help to address the convenience/readability issue, but
the other issues remain to be addressed by careful interface design.

in this particular case, I believe that separate compilation is the main 
concern standing in the way of optimizing the abstract view away.


Claus

you've taken some concrete type, abstracted it to replace the actual structure 
by a list structure, then defined map and fold over the list structure. This 
means that map and fold can't take advantage of the actual concrete 
structure and are therefore condemned to use the inefficient linear 
structure imposed by the list abstraction.


For example implementing map over a tree directly, gives the possibility of 
parallel execution since different subtrees can be mapped independently. But 
when you view the tree abstractly as a list, no such parallel execution can 
take place. Therefore surely it is better that map and fold are defined for 
each ADT separately, with the separate definitions hidden behind a type 
class, than to attempt to define them outside the definition of the ADT 
using view patterns?


Brian.
--
http://www.metamilk.com 

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-27 Thread Claus Reinke

the alternative I'm aiming for, as exhibited in the consP example, would be
to build patterns systematically from view patterns used as abstract
de-constructors, composed in the same way as one would compose the
abstract constructors to build the abstract data structure. 


This would cause an awful lot of kludging to get around the fact you need 
to declare a new ADT to declare new abstract deconstructors, and requires 
an additional extension for abstract deconstructors to be typeclass 
methods - something abstract constructors can do for free. Neither seems 
gainful to me.


I don't understand? you can define deconstructors for concrete types as well,
as many as you like; it is just that when the representation is not hidden in an
ADT, no one hinders me from bypassing your deconstructors and going for the
concrete representation instead of the abstract representation. and how did
additional extensions or typeclasses get into the picture??


perhaps a concrete example will help. as I used the lists-as-arrays example
for lambda-match, here it is again for view patterns (implementation not
repeated, List made abstract, untested..):

   module ListArray( List(), nilA, nullA   , nilAP
                   , consA, headA, tailA   , consAP
                   , snocA, initA, lastA   , snocAP
                   ) where
   ..imports..

   -- our own array list variant
   data List a = List (Array Int a)

   -- constructors, tests, selectors; cons and snoc view
   nilA  :: List a
   nullA :: List a -> Bool

   consA :: a -> List a -> List a
   headA :: List a -> a
   tailA :: List a -> List a

   snocA :: List a -> a -> List a
   lastA :: List a -> a
   initA :: List a -> List a

   -- we also define our own pattern constructors
   nilAP    = guard . nullA
   consAP l = do { guard $ not (nullA l); return ( headA l, tailA l ) }
   snocAP l = do { guard $ not (nullA l); return ( initA l, lastA l ) }


   module Examples where
   import ListArray

   anA = consA 1 $ consA 2 $ consA 3 $ consA 4 nilA

   mapA f (nilAP  -> ())    = nilA
   mapA f (consAP -> (h,t)) = consA (f h) (mapA f t)

   foldA  f n (nilAP  -> ())    = n
   foldA  f n (consAP -> (h,t)) = f h (foldA f n t)

   foldA' f n (nilAP  -> ())    = n
   foldA' f n (snocAP -> (i,l)) = f (foldA' f n i) l

   palindrome (nilAP  -> ())                   = True
   palindrome (consAP -> (_, nilAP -> ()))     = True
   palindrome (consAP -> (h, snocAP -> (m,l))) = (h==l) && palindrome m

no need for typeclasses so far. we use abstract data and pattern constructors
for adts, just as we use concrete data and pattern constructors for concrete
types. we choose what view to take of our data simply by choosing what
pattern constructors we use (no need for type-based overloaded in/out).
and since our pattern constructors are simply functions, we get pattern
synonyms as well.

we could, I guess, try to package data and pattern constructors together,
either by typeclasses:

   class Cons t where cons :: t
   instance Cons (a -> List a -> List a) where cons = ListArray.consA
   instance Cons (List a -> (a, List a)) where cons = ListArray.consAP

or by declaring consP as the deconstructor corresponding to the cons
constructor, as Mark suggested:

   cons  :: a -> List a -> List a
   cons# :: List a -> (a, List a)

both versions could then be used to select the pattern or data constructor,
depending on whether cons was used in a pattern or expression context.
but neither of these seems strictly necessary to get the benefit of views.

if view patterns turn out to be practical, one could then go on to redefine
the meaning of data type declarations as implicitly introducing both
data and pattern constructors, so

   f (C x (C y N)) = C y (C x N)

might one day stand for

   f (cP -> (x, cP -> (y, nP))) = c y (c x n)

but it seems a bit early to discuss such far-reaching changes when we
haven't got any experience with view patterns yet. in the mean-time, one
might want to extend the refactoring from concrete to abstract types
(HaRe has such a refactoring), so that it uses view patterns instead of
eliminating pattern matching.


since others have raised similar concerns about needing type-classes,
I seem to be missing something. could someone please explain what?

Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-26 Thread Claus Reinke
  2) There are other reasons why I want to use Haskell-98 and would  
like to be able to use other compilers.  Thus, I'd want a pattern-binder 
preprocessor (extending GHC is not as important to me).


I see. though I'd hope that as long as we keep our extensions simple and
general enough, the other implementations will pick them up anyway.

Here's my motivating example.  Here's a fragment for an STG  
interpreter in Haskell-98:

{{{
  rule_CASE_ELIM (Case p alts, s, h, o) =
      do ConApp c as <- ptsTo p h
         let matchAlt (Alt c' vs e) | c == c' = Just (vs,e)
             matchAlt _                       = Nothing
         (vs,e) <- matchFirst matchAlt alts
         return (e `sub` (vs,as), s, h, o)
}}}


yes, abstract machines have inspired many a pattern match extension!-)

are we in Maybe, or in anything more complex? view patterns don't seem to apply, 
but pattern guards do, and lambda-match helps with the local function pattern 
(ignoring the Match type tag for the moment; given the revival of interest in pattern 
functions, eg., in view patterns, I ought to try and see whether I can get rid of the 
type tag in my library for the special case of Maybe):


{{{
rule_CASE_ELIM =
   (| (Case p alts, s, h, o)
    | ConApp c as <- ptsTo p h
    , (vs,e) <- matchFirst (| (Alt c' vs e) | c == c' -> (vs,e) ) alts
    -> (e `sub` (vs,as), s, h, o) )
}}}

which isn't quite as abstract as the pattern binder/combinator version,
but at least I can see the scoping, which I am at a loss with in the pattern
binder version:

I'd like it to have a textual form just a little more abstract, I can  
do that with pattern binders and some appropriate combinators:


{{{
  rule_CASE_ELIM =
      { (Case p alts, s, h, o) }
       ptsTo p h === { ConApp c as }
       alts === matchFirst { Alt #c vs e }
      .->
        (e `sub` (vs,as), s, h, o)
}}}

I'll leave it as an exercise to figure out how the last is  
parenthesized ;-).


ok, I give up. there seem to be some new combinators, and the pattern
binder variables are no longer distinguishable (via $). but unless you've
changed the translation as well, the only way the scopes are going to come
out right is if the layout is a lie, right? and how does the translation apply to
pattern binders not in an infix application, in particular, how do vs/e get to
the rhs of .-> ?

Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-25 Thread Claus Reinke
I'm not quite sure whether it all means you think view patterns are good; or that 
they would be good with a tweak; or that something else would be better.


probably because my opinion has been changing;-) at first, I wasn't convinced,
now I think it depends on the details. as Mark said, such syntactic extensions
of conventional patterns are not strictly necessary since we know how to avoid
them completely (using data parsing). so for a new functional language, I too
would like to drop patterns as built-ins, providing their functionality via
sugar and libraries.


but as far as Haskell is concerned, I am perhaps less radical in my approach
than Mark is: Haskellers have invested an awful lot of work in those conventional
patterns, in readability, in optimisations, and in linking them with other
extensions (eg., type system extensions).

that is why I proposed the lambda-match construct to complement the library 
Control.Monad.Match, so that conventional patterns could be used within the 
data parsing framework. and that is why I think view patterns are useful: they 
allow us to embed data parsing into conventional patterns, reusing existing 
syntax for binding pattern variables while still allowing us to define our own 
pattern constructors.


so I'd like to have both lambda-match and view patterns, supported by
Control.Monad.Match, and well integrated. but if suggestions to make Maybe
explicit in view patterns, or to drop it altogether, carry the day, I might lose
interest. also, I'd like the syntax to stay close to conventional constructors,
rather than close to pattern guards.


regarding first-class abstractions/terminology: for myself, I have settled on
using first-class matches (or first-class match alternatives) for the likes of the
lambda-match construct (left-hand side pattern, right-hand side expression),
and first-class patterns for proposals that actually allow to abstract over the
left-hand sides of matches. both first-class matches and first-class patterns
tend to use the common framework of MonadPlus instances for match failure
and fall-through, as a generalisation of the good old monadic combinator
parsers on strings. for this framework I use the term monadic data parsing.


regarding syntax for view patterns: I like the prefix form, but agree that the
use of -> is unfortunate. If it wasn't for pattern constants, I'd probably just
use application (lower case identifiers in function position in a pattern can
only be views, unless someone suggests other uses for that syntax; and the last
parameter of a view has to be a pattern). The next best thing, to emphasize
that we're essentially computing patterns, would be to borrow TH's notation
for splicing, using


   $(view p1..pn) pattern

instead of

   view p1..pn -> pattern

Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-25 Thread Claus Reinke
Strangely, for other reasons, I'm planning, within a week or so, to  
start implementing the pattern-binder syntax I discussed in the paper 
(either in GHC or as a pre-processor).


I'm somewhat surprised to read this. Between view patterns, lambda-match,
and Control.Monad.Match, I thought we were approaching a situation in
which we have all the essential aspects covered (perhaps apart from the
fact that your combinators come in both left-right and right-left variants),
with slightly more convenience and better integration with existing pattern
match facilities.


Especially the pattern-binder syntax and translation strike me as more
complicated (so much so that I would rather use a simplified form of the
translation result than all that machinery) and no more general than
combining view patterns with pattern functions. But perhaps that is a
question of personal style (and my own use of type-classes to lift
mplus to pattern-functions has also been classed as complicated by
others;-).

Is there anything specific you find missing, or are those other reasons the
motivation for going with your own version?

Claus


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-24 Thread Claus Reinke
   -- abstract list deconstructors / list pattern constructors
   -- (consP takes h/t sub-patterns as parameters)
   consP h t l = do { guard $ not (null l); hr <- h (head l); tr <- t (tail l);
                      return (hr,tr) }
   nilP l = do { guard $ null l; return () }

   -- wildcard and variable patterns
   wildP l = return ()
   varP = return

   -- extract the head of the tail of the parameter list, if that list has two elements
   f (consP wildP (consP varP nilP) -> (_,(x,_))) = x


hmm, the above was probably guided too much by thinking about my own proposal
(and this style could be translated back to it fairly easily, I think). the
following would make better use of view patterns, and be a lot simpler:


   -- cons pattern/deconstructor
   consP l = do { guard $ not (null l); return (head l, tail l) }

   -- extract head of tail of two-element list
   f (consP -> (_, consP -> (x, []) ) ) = x
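
(for reference, this runs with the ViewPatterns extension GHC later
shipped, provided the Maybe is made explicit in the pattern - the
proposal discussed here leaves it implicit:

   {-# LANGUAGE ViewPatterns #-}
   import Control.Monad (guard)

   consP :: [a] -> Maybe (a, [a])
   consP l = do { guard $ not (null l); return (head l, tail l) }

   -- partial, as in the original: only two-element lists match
   f :: [a] -> a
   f (consP -> Just (_, consP -> Just (x, []))) = x
)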

btw, lambda-match and view patterns complement each other:
- the sugar in lambda-match embeds classical matches in data parsing
- the sugar in view patterns embeds data parsing in classical patterns

In view of this, I was wondering: if you do not limit yourself to Maybe, but 
allow other MonadPlus instances, wouldn't that give you or-patterns?
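
(a tiny sketch of that thought - a hypothetical combinator that builds
an 'or-pattern' from two views; with m = Maybe it commits to the first
match, with m = [] it enumerates all matches:

   import Control.Monad (MonadPlus, mplus)

   orP :: MonadPlus m => (a -> m b) -> (a -> m b) -> a -> m b
   orP p q x = p x `mplus` q x
)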


also, view patterns give us local guards:

   g ( xs@( guard . not . null -> () ) ) ys = xs++ys

if we combine these two, we could presumably do things like using
the list MonadPlus for backtracking matches, as proposed in some other
functional languages (also assuming non-linearity of patterns here):

   select :: Eq a => a -> Map a b -> b
   select key ( toList -> ( (guard . (key==)) , value) ) = value

claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Views in Haskell

2007-01-23 Thread Claus Reinke

   http://hackage.haskell.org/trac/haskell-prime/wiki/ViewPatterns

I'm thinking of implementing it in GHC, so I'd be interested in feedback of the form:
   - how desirable is it to have a feature of this general form?
   - can this particular proposal be improved?


IMHO, getting a handle on the ADT vs pattern matching issues is overdue, so
thanks for raising this again. a few first comments:

1 I am a bit concerned about the use of non-linear patterns in your examples.
   There are good arguments for non-linear patterns, and Haskellers have made
   good arguments against non-linear patterns. But you seem to suggest allowing
   non-linear patterns in some cases (related to view patterns), but not in
   others (general patterns). That is likely to be confusing.

2 view patterns nicely separate expressions in patterns from pattern variables.
   But I didn't realize at first that view patterns can be used nested inside
   other patterns. Yet this variable binding during nested matching is the
   essential contribution, and the only reason why the extra syntactic sugar is
   justified. Perhaps this point could be repeated and emphasized in the
   proposal more formally, for people like me?-)

3 what you call first class abstractions are not entirely orthogonal to view
   patterns. taking Tullsen's and my own proposal as examples:

   - the way patterns and alternatives are handled needs to fit together. that
     doesn't seem to be a problem since your and our proposals agree on using
     what I call a monadic data parsing framework (using a MonadPlus such as
     Maybe to handle pattern match failure and alternatives)


   - all three proposals have discussed how to handle patterns as well. For
     Tullsen, that is central to his proposal, for me, it was only one of the
     more advanced examples because I wanted to focus on match alternatives first.

     Tullsen first builds his pattern combinators, then outlines a point-free
     style that avoids the need for pattern variables altogether but does not
     seem to scale well, then suggests syntactic sugar for translating patterns
     with variables into applications of his combinators. So that last part is
     closely related to, if different from, your proposal.

     In my example, I build up patterns from de-constructors (which use tests and
     selectors), so that a cons pattern takes a head pattern and a tail pattern as
     parameters and applies them to the head and tail if it is applied to a non-empty
     list. To handle variables, I use an old trick from the early functional logic
     languages, namely that logic variables can be passed unbound, then bound to
     values later, just what we need for pattern variables. Since Haskell doesn't
     have logic variables, I have to simulate them, which is the only awkward bit
     of the example:
     http://www.haskell.org/pipermail/haskell-prime/2006-November/001915.html

as long as Haskell doesn't support logic variables, some syntactic sugar for
variables in nested patterns, such as Tullsen's or yours, is probably
inevitable.

4 whether to use view patterns inside ordinary patterns, or whether to build up
   patterns from abstract de-constructors (just as expressions are built from
   abstract constructors) may seem only a question of style. but if your aim is
   to encourage people to transition from exporting concrete data types to
   exporting abstract types only, the latter approach seems more consistent
   to me. In my example, a cons de-constructor would be as simple as

   -- the cons view of array lists is a higher-order pattern that takes
   -- patterns for the head and tail components, and applies them after
   -- checking whether the list parameter is a non-empty list
   consAP h t l = do { Match $ guard $ not (isNilA l); h (headA l); t (tailA l) }

   but that relies on the scoping of (simulated) logic variables, and it does
   not translate directly to your view patterns, as the h and t pattern
   parameters would have their own scope for their pattern variables. It would
   be instructive to have something equivalent to pattern constructors/abstract
   deconstructors for view patterns, if only to see whether view patterns can
   support a fully abstract style of nested matches easily. I am not entirely
   sure they do, but here is a first attempt:


   -- abstract list deconstructors / list pattern constructors
   -- (consP takes h/t sub-patterns as parameters)
   consP h t l = do { guard $ not (null l); hr <- h (head l); tr <- t (tail l);
                      return (hr,tr) }
   nilP l = do { guard $ null l; return () }

   -- wildcard and variable patterns
   wildP l = return ()
   varP = return

   -- extract the head of the tail of the parameter list, if that list has two elements
   f (consP wildP (consP varP nilP) -> (_,(x,_))) = x

   It seems a bit awkward to have to specify the structure of the parameter twice,
   once to build up the pattern, then again to match sub-expressions

strictly matching monadic let and overloaded Bool (was: Are pattern guards obsolete?)

2006-12-14 Thread Claus Reinke

consider the following examples:

   -- do-notation: explicit return; explicit guard; monadic result
   d _ = do { Just b <- return (Just True); guard b; return 42 }

   -- list comprehension: explicit return; implicit guard; monadic (list) result
   lc _ = [ 42 | Just b <- return (Just True), b ]

   -- pattern guard: implicit return; implicit guard; non-monadic result
   pg _ | Just b <- Just True, b = 42


This ongoing discussion has made me curious about whether we could actually
get rid of these irregularities in the language, without losing any of the 
features
we like so much.

=== attempt 1

(a) boolean statements vs guards

   this looks straightforward. Bool is a type, so can never be an instance of
   constructor class Monad, so a boolean statement in a monadic context is
   always invalid at the moment. that means we could simply extend our
   syntactic sugar to take account of types, and read every

      ((e :: Bool) :: Monad m => m _)

   in a statement of a do block as a shorthand for

      (guard (e :: Bool) :: MonadPlus m => m ())
   
(b) missing return in pattern guards


   this could be made to fit the general pattern, if we had (return == id).
   that would put us into the Identity monad, which seems fine at first,
   since we only need return, bind, guard, and fail. unfortunately, those
   are only the requirements for a single pattern guard - to handle not
   just failure, but also fall-through, we also need mplus. which means
   that the Identity monad does not have enough structure, we need at
   least Maybe..

this first attempt leaves us with two problems. not only is (return==id)
not sufficient for (b), but the suggested approach to (a) is also not very
haskellish: instead of having syntactic sugar depend on type information,
the typical haskell approach is to have type-independent sugar that 
introduces overloaded operations, such as 


   fromInteger :: Num a => Integer -> a

to be resolved by the usual type class machinery. addressing these two 
issues leads us to


=== attempt 2

(a) overloading Bool

following the approach of Num and overloaded numeric literals, we could
introduce a type class Boolean

   class Boolean b where
     fromBool :: Bool -> b

   instance Boolean Bool where
     fromBool = id


and implicitly translate every literal expression of type Bool

   True   ~~>  fromBool True
   False  ~~>  fromBool False

now we can embed Boolean statements as monadic statements simply by
defining an additional instance

   instance MonadPlus m => Boolean (m ()) where
     fromBool = guard
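
(without the implicit translation, the call has to be explicit, and -
as noted below - an annotation is needed to pin down the statement's
type; this sketch assumes FlexibleInstances for the m () instance:

   {-# LANGUAGE FlexibleInstances #-}
   import Control.Monad (MonadPlus, guard)

   class Boolean b where fromBool :: Bool -> b
   instance Boolean Bool where fromBool = id
   instance MonadPlus m => Boolean (m ()) where fromBool = guard

   -- the running example 'd', with fromBool written out by hand
   d' :: Maybe Int
   d' = do { Just b <- return (Just True); fromBool b :: Maybe (); return 42 }
)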

(b) adding a strictly matching monadic let

we can't just have (return==id), and we do not want the hassle of having to
write

   pattern <- return expr

in pattern guards. the alternative of using let doesn't work either

   let pattern = expr

because we do want pattern match failure to abort the pattern guard and
lead to overall match failure and fall-through. so what we really seem to want
is a shorthand notation for a strict variant of monadic let bindings. apfelmus
suggested to use '<=' for this purpose, so that, wherever monadic generators
are permitted

   pattern <= expr   ~~>   pattern <- return expr

===

returning to the examples, the approach of attempt 2 would allow us to write

   -- do-notation: implicit return; implicit guard; monadic result
   d _ = do { Just b <= Just True; b; return 42 }

   -- list comprehension: implicit return; implicit guard; monadic (list) result
   lc _ = [ 42 | Just b <= Just True, b ]

   -- pattern guard: implicit return; implicit guard; non-monadic result
   pg _ | Just b <= Just True, b = 42

almost resolving the irregularities, and permitting uniform handling of related
syntactic constructs. hooray!-)

I say almost, because Bool permeates large parts of language and libraries,
so one would need to check every occurrence of the type and possibly
replace Bool by (Boolean b => b). the Boolean Bool instance should mean
that this process could be incremental (ie, even without replacements, things
should still work, with more replacements generalizing more functionality,
similar to the Int vs Integer issue), but that hope ought to be tested in
practice.

one issue arising in practice is that we would like to have

   fromBool  :: MonadPlus m => Bool -> m a

but the current definition of guard would fix the type to

   fromBool  :: MonadPlus m => Bool -> m ()

which would require type annotations for Booleans used as guards. see the
attached example for an easy workaround.

on the positive side, this approach would not just make pattern guards more
regular, but '<=' and 'MonadPlus m => Boolean (m ())' would be useful for
monadic code in general. even better than that, those of us doing embedded
DSLs in Haskell have been looking for a way to overload Bools for a long
time, and the implicit 'Boolean b => fromBool :: Bool -> b' ought to get us
started in the right direction. most likely, we would need more Bool-based
constructs to be overloaded for

Re: strictly matching monadic let and overloaded Bool (was: Are pattern guards obsolete?)

2006-12-14 Thread Claus Reinke

one issue arising in practice is that we would like to have

   fromBool  :: MonadPlus m => Bool -> m a

but the current definition of guard would fix the type to

   fromBool  :: MonadPlus m => Bool -> m ()

which would require type annotations for Booleans used as guards. see the
attached example for an easy workaround.


what attachment, you ask? sorry, lack of sleep - now attached to this message.

claus

Boolean.hs
Description: Binary data
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Higher order syntactic sugar

2006-12-14 Thread Claus Reinke

ooohh.. when I saw the subject, I fully expected a worked out proposal for
extensible syntax in Haskell, just in time for Christmas. well, maybe next 
year!-)


It was too late when i realized that <= is already used as smaller than
or equal to :)


oops. okay, lets change that. what about this: 


  pattern = expr   ~~>   pattern <- return expr

a cleaner variant would be a let!, perhaps, but that would probably be too 
noisy for pattern guards? (also, we don't want to steal nice infix ops like ==)



do-notation is not the natural style for MonadPlus Maybe, the natural
style is more like the current syntax of pattern guards. I mean that one
rarely hides a Just constructor like in


oh? getting rid of nested (case x of {Just this -> ..; Nothing -> that}) is a
very good argument in favour of do-notation for Maybe, and I find that
very natural (for some definition of natural;-). granted, once one has taken
that step, one is close to writing in monadic style anyway, so it is no longer
specific which constructors are hidden. but I don't see a specific problem
with Maybe there, and I haven't seen convincing sugar for MonadPlus yet.


general syntax is too much for the special case. But there is something
more canonical than completely disjoint syntax: in a sense, Claus'
suggestions are about making the syntax for the special case a *subset*
of the syntax for the more general one.


indeed. thanks for pointing that out. I first went the other direction, but
as you say, generalizing pattern guards introduces too much syntax in an
awkward place. so my current suggestion follows the subset idea.

Some higher order syntactic sugar melting machine bringing all these 
candies together would be very cool.


hooray for extensible syntax!-) syntax pre-transformation that would
allow me to extend a Haskell parser in library code is something I'd
really like to see for Haskell, possibly combined with error message
post-transformation. together, they'd smooth over the main objections
against embedded DSLs, or allow testing small extensions of Haskell.

I have been wondering in the past why I do not use Template Haskell
more, given that I'm a great fan of meta-programming and reflection,
and I think the answer is that it sits in an unfortunate way between two
chairs: it cannot quite be used for syntax extensions because it insists
on Haskell syntax/scopes/types, and it cannot quite be used as a
frontend because there's some typing coming after it. persistent users
have found wonderful things to do with it nevertheless, even analysis/
frontend stuff, but its main use seems to be program-dependent
program generation, within the limits of Haskell syntax.

in fact, I have a pragmatic need for even more, namely type system
extensions as well: somewhere on my disk, I have a type-directed
monadification prototype, based on a type system that infers not
just a type, but a type coercion; works well, at least for simple
monomorphic code, but what do I do with it? being type-directed,
it uses a completely different foundation than the rest of HaRe
refactorings, and to fully realize it for Haskell, I'd have to implement
and -here comes the killer- maintain a complete Haskell type system,
just because I need a few modifications/extensions. it is just not
practical to do so, let alone once for every type-directed algorithm.


Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Are pattern guards obsolete?

2006-12-13 Thread Claus Reinke

I am not clear why you think the current notation is confusing...
Could you give a concrete example?  I am thinking of something along
the lines:  based on how <- works in list comprehensions and the do
notation, I would expect that pattern guards do XXX but instead, they
confusingly do YYY.  I think that this will help us keep the
discussion concrete.


consider the following examples:

   -- do-notation: explicit return; explicit guard; monadic result
   d _ = do { Just b <- return (Just True); guard b; return 42 }

   -- list comprehension: explicit return; implicit guard; monadic (list) result
   lc _ = [ 42 | Just b <- return (Just True), b ]

   -- pattern guard: implicit return; implicit guard; non-monadic result
   pg _ | Just b <- Just True, b = 42

in spite of their similarity, all of these constructs handle some of the
monadic aspects differently. the translations of pattern guards not only
embed statements in guard, they also embed the right hand sides of
generators in return. translations of list comprehensions only lift
statements. translation of do-notation lifts neither statements nor
generators.

does this clarify things?

Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: Teaching

2006-11-30 Thread Claus Reinke

defaulting can also be used for non-standard arithmetic in teaching
(not something you want to let loose on students who you don't
want to know about type classes, though, so be careful where you
demonstrate this;-):

   http://www.cs.kent.ac.uk/people/staff/cr3/toolbox/haskell/R.hs

   Main> foldr1 (*) [1..5]
   (1 * (2 * (3 * (4 * 5))))
   Main> foldl (*) 1 [1..5]
   (((((1 * 1) * 2) * 3) * 4) * 5)
   Main> foldr (*) 1 [1..5]
   (1 * (2 * (3 * (4 * (5 * 1)))))
   Main> map (+) [1..4]
   [\x->(1 + x),\x->(2 + x),\x->(3 + x),\x->(4 + x)]
   Main> map (1+) [1..4]
   [(1 + 1),(1 + 2),(1 + 3),(1 + 4)]
   Main> map (1+) [1..4] :: [Int]
   [2,3,4,5]

this was written long ago, with Hugs in mind, where the defaulting
applies to the interactive loop - with GHCi, you'll need to add
-fglasgow-exts, and still don't get the defaulting interactively (?), 
so you'll need to write the type annotations:


   *Main> foldr (-) 0 [1..4] :: R Int
   (1 - (2 - (3 - (4 - 0))))
   *Main> foldl (-) 0 [1..4] :: R Int
   ((((0 - 1) - 2) - 3) - 4)

of course, this would be fine if defaults could be set in the interactive
loop itself, so you don't need defaults here, and one might argue that
having to give type annotations is annoying, but instructive..
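
for readers without the file at hand, here is a toy version of the kind
of Num instance behind R.hs (the real module at the URL above does more,
eg showing functions, and provides the Enum instance the [1..5] ranges
need; this covers the arithmetic printing only):

   newtype R a = R String

   instance Show (R a) where show (R s) = s

   -- record the syntax of each operation instead of computing it
   instance Num (R a) where
     R x + R y     = R ("(" ++ x ++ " + " ++ y ++ ")")
     R x - R y     = R ("(" ++ x ++ " - " ++ y ++ ")")
     R x * R y     = R ("(" ++ x ++ " * " ++ y ++ ")")
     abs (R x)     = R ("abs " ++ x)
     signum (R x)  = R ("signum " ++ x)
     fromInteger n = R (show n)

   -- eg show ((1 + 2) * 3 :: R Int) yields "((1 + 2) * 3)"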

personally, I use default very rarely, but that kind of reasoning has 
never been a good argument for excluding a feature that others like.


I would agree, however, that the monomorphism restriction should go
(warning only), so that defaulting cannot change my 1 :: Num a => a
constants from Behaviours to Integers; with DMR gone, there may
be less need for defaulting, but I would also agree that defaulting
should be generalized so that its current use becomes a special case
of disambiguating overloading for a whole module/program/session,
without having to specialize the type-signatures everywhere in the code.


and yes, teaching is important, and Haskell is not only about teaching,
so teaching needs need to be addressed like the needs of any other
important Haskell application area: by optional, but widely supported,
domain-specific libraries/packages/flags/syntax/..

Just my (2 :: Num a => a),
Claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Re[2]: Teaching

2006-11-30 Thread Claus Reinke

and yes, teaching is important, and Haskell is not only about teaching,
so teaching needs need to be addressed like the needs of any other
important Haskell application area: by optional, but widely supported,
domain-specific libraries/packages/flags/syntax/..


oh, and it might be useful to look at the interests of Haskell textbook
authors in a different way: if I was an author of a Haskell textbook, and
I had the foresight to rely not on the prelude or the standard libs
(because they will both change, among other things, before the first
edition of that book gets out of circulation..), what help could Haskell'
or Haskell'' give me?

here are a few examples to get the discussion going - imagine things like:

   -package Thompson

      fromInt = fromInteger              -- easy
      error message += explanation       -- hmm?

   -package Hudak

      module SOE   -- ouch, years of hard work by several hackers,
                   -- and still mostly trouble and pain
      libraries += portable graphics and stuff

   -package Bird

      syntax pattern += (n+k)            -- hmm

   -package Helium  -- is there a book to go with this?

      language -= type classes           -- not really easy?
      (error messages -= abstract messages) += specific messages

   -package NewKidOnTheBlock

      everything = haskell' and state-of-2006  -- to be adapted as things change

as we can see from this exhaustive survey.. okay, I'm just kidding!-)

but it seems that there are quite a few things that *are* important to
Haskell teaching, and have disturbed Haskell textbook authors, that
ought to be investigated as language and library design and maintenance
problems - though it is a bit late in the day for Haskell', perhaps the
committee and current authors could think of a roadmap, so that
authors and implementers have something to aim for? and of course,
so that these issues are addressed properly, rather than returning here
as limitations on the design process?


of course, that's no bikeshed, but still, perhaps people have an opinion?-)
claus


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


lambda-match example - from parser combinators to grammar combinators

2006-11-11 Thread Claus Reinke

Some of you have asked me whether I could provide more convincing
examples for lambda-match, or whether the shortcomings of Haskell
addressed in this proposal will be of practical relevance to the typical
seasoned Haskeller without specific interests in language design.

There are of course the various themes of views, pattern abstractions,
and first-class patterns, which could be built on top of lambda-match,
but I'd like to follow a slightly different angle first, inspired by an 
interesting off-list remark in response to the lambda-match proposal:


   I do consider myself a fairly seasoned Haskell programmer, and 
   to be honest, I have to admit that I rarely if ever have missed 
   composable pattern matching at the source level. Of course, that 
   could be because I subconsciously just work around the problem, 
   being used to Haskell as it is.


I do indeed believe that the problem of non-compositional pattern
match has been around in Haskell for so long that many of today's
Haskellers are no longer even aware of the issue, and of how much
it affects them.

So, here is one slightly less trivial example of using lambda-match, 
which happens to stand for a large group of possible applications, 
and for one particular area where the lack of compositional patterns
has influenced the Haskeller's world-view: 

Ever since I took up Haskell, I have wondered why Haskellers tend 
to specify their grammars not just twice (abstract + concrete), but 
thrice (abstract + parsing + unparsing).


The majority of seasoned Haskellers seems to accept that there must 
be parsers+pretty-printers, read+show, serialize+de-serialize, etc., and
that changing concrete syntax must involve making fixes in two separate 
bits of code, often even following two separate coding patterns.


But if one looks at so-called parser combinators, there is very little in
them that is parser-specific - usually only the literal parsers determine
that we are talking about parsing, whereas the majority of combinators
can be used just as well for other syntax-directed tasks. Still, people
tend not to reuse their combinator-based grammars for anything but
parsing.

I submit that one of the main reasons for this is that Haskellers have
come to accept that they can construct, but not deconstruct algebraic
types in a compositional way (hence the use of parser combinators
for converting Strings into algebraic data types, and the use of more
pedestrian means for showing the latter as Strings; pretty-printing
libraries do at least use combinators, but do not reuse the grammars
specified through parser combinators).

Please have a look at the example (which needs both syntax patch 
and library from the proposal ticket, if you actually want to run it *,
but the ideas should be reasonably obvious even without): 


it specifies a concrete and abstract syntax for lambda calculus, and
the relationships between the two levels of syntax, using an algebraic
data type for the abstract syntax, and a grammar built with monadic
combinators for the rest. fairly standard, but for the following:

language and library support for monadic data parsing via
lambda-match allows us to mix data parsing and string parsing in the
same monadic framework, using the same grammar combinators
to specify the concrete syntax and its relation to the abstract syntax
just once, in one piece of code.
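
to make the idea concrete without the proposal's machinery, here is a
tiny sketch of a bidirectional grammar type (Grammar, lit and <.> are
made-up names; the actual example uses lambda-match and the Match
library instead):

   data Grammar a = Grammar
     { parse   :: String -> Maybe (a, String)
     , unparse :: a -> Maybe String
     }

   -- a literal character, usable for reading and for writing
   lit :: Char -> Grammar ()
   lit c = Grammar
     { parse   = \s -> case s of { (x:xs) | x == c -> Just ((), xs)
                                 ; _               -> Nothing }
     , unparse = \() -> Just [c] }

   -- sequencing pairs the results, again in both directions
   (<.>) :: Grammar a -> Grammar b -> Grammar (a, b)
   g <.> h = Grammar
     { parse   = \s -> do { (a, s')  <- parse g s
                          ; (b, s'') <- parse h s'
                          ; return ((a, b), s'') }
     , unparse = \(a, b) -> (++) <$> unparse g a <*> unparse h b }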

we can use that single grammar for parsing, unparsing, or indeed,
for mixtures of both (see the examples). A long time ago, I used
something like this (then sadly without language support) to
implement a syntax-oriented editor, with parsing and formatted
printing from a single grammar.

Although I haven't worked this out, I suspect that the technique 
would also apply to protocol-based applications: instead of 
writing client and server separately (and then trying to prove 
that they fit together and follow two sides of the same protocol), 
one might try to write a single grammar for the protocol between 
them, toggling mode at the appropriate points, and then client 
and server would simply be two instances/uses of the same 
grammar in its two start modes (so the server would generate 
prompts, parse requests, and generate responses, and the 
client would expect prompts, generate requests, and parse 
responses).


have fun;-)
claus

* I have submitted the syntax patch for the GHC head repository,
*  but GHC HQ are reluctant to apply the patch as long as there
*  is no obvious general interest (someone else but myself, and
*  not just in private email to myself;-) in using these features. If
*  you want to investigate lambda-match in GHC, to make up your
*  mind about whether or not you like the proposal (at the moment,
*  we're only talking about the daily snapshots of GHC head, not
*  about long-term support in GHC, let alone inclusion in Haskell'!),
*  please let yourself be heard!


   (more adventurous souls can of course apply the patch from the
   ticket themselves 

Re: Fractional/negative fixity?

2006-11-07 Thread Claus Reinke


by all means, lets have warm fuzzy precedence declarations

   infix(nearly right) (exp (2*i*pi) + 1) :-)
   infix(mostly left) (((\x->cos x + i*(sin x)) (2*pi)) + 1) (-:

who says that all the fun has to start in the type system?-)

we would probably need to refer to hyperreals, in order to
introduce infinitesimal differences between real precedence levels?

oh, and let us not forget the early Basic's contribution to language
design: renum (who could ever do without it!-)


ah well, to justify the use of bandwidth (and because you should
never let your design decisions be influenced by someone making
fun of any of the suggestions):

- absolute numbers for operator precedence are a hack that
   reminds me strongly of my Basic times: I used steps of 100
   starting with 1000 for line numbers, I used renum to make
   space for additions or to clean up (was that refactoring?-), 
   but I was still happy to leave all that nonsense behind!


- googling for operator precedence relative suggests that some
   parser generators already use something other than absolute
   precedences

- prolog has more precedence levels, as well as simple declarations
   for pre- and postfix operators (fx, xf)

sorry, I just couldn't resist any more;-)
claus

--
unsagePerformIO:
   some things are just not wise to do



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: lambda-match vs PMC

2006-10-30 Thread Claus Reinke
lambda-abstraction doesn't even exist at expression level, but is 
replaced by spliced matching; application exists, but is not needed,

because (f e) = {| e | {| f |} |} (unless I'm mistaken?)


oops, wrong brackets around f - should be:

{| e | ^f^ |} --> {| ^f e^ |} --> f e

with f supposedly being a lambda abstraction ( \v.b ) represented
as a match ( {| v => ^b^ |} ), we get:

f e = {| v => ^b^ |} e --> {| e | v => ^b^ |} --> {| ^b^[ v\e ] |}
   --> b[ v\e ]

as seen on page 3 of the MPC2006 paper, also showing that
application reduces to argument supply, which provides beta
reduction.

claus
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: proposal: introduce lambda-match (explicit match failure and fall-through)

2006-10-30 Thread Claus Reinke
name: 
   introduce lambda-match (explicit match failure and fall-through)


Dear All,

may I be so optimistic as to interpret the absolute lack of counter
arguments over the last week as indication that this proposal is
acceptable in general? Thanks to those few who have expressed 
support so far, usually in the form "I've wanted something like
this for years!" (*)

I have braved the evil trac-wiki formatter again, to convert the email
proposal into a slightly updated ticket, with attached patch for GHC, 
support libraries and usage examples:


   introduce lambda-match (explicit match failure and fall-through)
   http://hackage.haskell.org/trac/haskell-prime/ticket/114

most notable updates are in the support library (now being a bit 
more helpful in preserving error messages and defining fall_through
cases; also supports joining of nested matches now), with a few
added examples demonstrating the changes.

It is a good sign that the syntax patch itself has not changed so far,
and the support library now supports most of what I had in mind
for it (took me a while to figure out how to do nest ;-). But it 
would be very helpful if more eyes looked over the code, to see
if the functionality is roughly right (not to mention the implementation).

And, of course, syntax patches for other Haskell implementations
would be great (at least verify whether your favourite implementation
can handle the support library, please - so far verified for GHC and
Hugs)!

Thank you,
Claus

ps. a quick recap for those who don't read webpages:  a
   lambda-match

   | patterns | guards -> expr

   is syntactic sugar for 


   \ parameters -> case parameters of
   { patterns | guards -> Match $ return expr
   ; _ -> Match $ fail "lambda-match failure" }

   which allows us to program explicitly with match failure
   (represented as Monad.fail/MonadPlus.mzero) and match
   fall-through (using MonadPlus.mplus), lifting MonadPlus
   operations over function parameters for ease of use.
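
   to make the recap concrete without the patch, here is a minimal
   illustrative reconstruction of the library core (Match, +++, splice
   and caseOf are the ticket's names, but the definitions below are
   my sketch, not the attached code, and splice is specialised to
   Maybe to keep the sketch short):

    import Control.Monad

    newtype Match m a = Match { runMatch :: m a }

    -- compose alternative lambda-matches: try f, fall through to g
    (+++) :: MonadPlus m
          => (t -> Match m a) -> (t -> Match m a) -> (t -> Match m a)
    (f +++ g) x = Match (runMatch (f x) `mplus` runMatch (g x))

    -- embed an explicit match back into a pure expression
    splice :: (t -> Match Maybe a) -> (t -> a)
    splice f x = case runMatch (f x) of
                   Just a  -> a
                   Nothing -> error "match failure"

    -- the user-defined case-variant used in the examples below
    caseOf :: t -> (t -> Match Maybe a) -> a
    caseOf x f = splice f x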

   this enables us to write previously practically impossible 
   things (the example file gives some indication of just how
   unreadable and hence unusable these would be without 
   syntactic sugar), such as a  user-defined case-variant 
   (included in the library):


   caseOf True $ ( |True-> False ) +++ ( |False-> True )
   -->
   False
   
   or monadic match-failure without using do-notation:


   return True >>= (ok $ |False-> return "hi") :: Maybe String
   -->
   Nothing

   lambda-matches may be nested, but unlike PMC, that will 
   usually result in nested match monads, unless we use the new
   nest to join the nested monads:

   myAnd = splice (nest (|True->  (|True->True)
                              +++ (|False->False))
              +++ nest (|False-> fall_through False) )

   we can now also abstract over groups of match alternatives:

   grp :: MonadPlus m => String -> [(String, String)] -> Match m String
   grp = (| x locals | Just y <- lookup x locals -> y)
     +++ (| "X" locals -> "42")
     +++ matchError "var not found"

   and extend them later, or just use them to build different functions:

   -- select only the first match
   varVal :: String -> [(String, String)] -> String
   varVal  = spliceE grp

   -- a variation, delivering all successful matches
   varVals :: String -> [(String, String)] -> [] String
   varVals  = allMatches grp

   leading to uses like these:

   *Main> varVal "X" [("X","hi")]
   "hi"
   *Main> varVal "Z" [("X","hi")]
   *** Exception: var not found
   *Main> varVals "X" [("X","hi")]
   ["hi","42"]
   *Main> varVals "Z" [("X","hi")]
   []


   and so on, and so on.. see the proposal attachments for more
   inspirations !-)

(*) it might be useful for the Haskell' committee to clarify the
   process for acceptance of proposals, similar to what the
   Haskell library community has done recently:

   http://haskell.org/haskellwiki/Library_submissions

   (where the intent of the discussion period is to focus the
   process, and to ensure progress, ie lack of objections to
   a clearly implementable/implemented proposal is seen as
   implicit agreement)

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


lambda-match vs PMC

2006-10-29 Thread Claus Reinke

It is encouraging that separate groups have come to similar approaches
wrt more modular pattern match facilities (though perhaps it isn't all
that surprising, eg, my own musings on this topic started after one of the
early functional logic programming high periods, in the early 1990s, and
have since been shaped by the developments in monadic and type class 
programming in Haskell; PMC seems to have come from a rewriting
background, shaped by monadic semantics and type theory; these 
would imply the same influences, moderated via different communities?).


There are, however, some differences that might be worth further
inspection. Referring to http://www.cas.mcmaster.ca/~kahl/PMC/ , 
and especially to the MPC 2006 paper, in no particular order, with 
no claim to completeness (and please read comments like "odd" as
purely subjective, reflecting only my personal reactions as I'm trying 
to figure out the relation!-):


- matchings are not first-class expressions in PMC

the only syntactically correct way to mention a matching in an expression
is inside a splice ( {| match |} ). this is fairly surprising given the aims of 
PMC, as one would expect operations on matchings, matchings as 
parameters and results of functions, etc. .. until one notices that not 
only the operations on matchings, but


- *all* the interesting action in PMC is at the level of matchings, 
   not at the level of expressions


lambda-abstraction doesn't even exist at expression level, but is 
replaced by spliced matching; application exists, but is not needed,
because (f e) = {| e | {| f |} |} (unless I'm mistaken?)

if one eliminated the oddity of having two applications, but only
one abstraction, the category of expressions would be nearly 
empty, apart from constructor terms, and the separation of 
expressions and matchings seems needed only to support 
conversions between the two:


{| _ |} :: Match -> Expression
^ _ ^ :: Expression -> Match


which directly brings up the next oddity, in that there are no such
types in PMC:

- the difference between matchings and expressions is maintained 
   in the typing contexts/labels, not in the types


in spite of the monadic semantics, there are no monads in the type
system, and instead of 

.. |- pattern | match :: a -> m b

(as Haskellers might expect to see), one has 


   .. |M- pattern | match :: a -> b

I was looking into these details because I was trying to compare
lambda-match and PMC, and while most of the differences seemed
merely cosmetic at first, the one difference I couldn't account for was 
that PMC examples seemed to imply an implicit flattening of the 
monadic type structure, whereas I had to join nested lambda-matches 
explicitly (I would find the ability to join nested matchings quite useful
on occasion, but I fail to see how this could be done implicitly/
automatically, unless by giving up the ability to express nested matches).

if I now understand all these items correctly in combination, PMC is 
a flattened monadic framework, ie., one cannot even construct the 
equivalent of a ( nested :: m (m a))? or am I missing the obvious?-)


cheers,
claus

ps. after reading the MPC 2006 paper, I have to support the referees'
   recommendation: as one of many who only occasionally dive into
   the most shallow parts of category theory, I don't find the monadic
   semantics in its current presentation helpful. A running commentary
   in computational lambda-calculus, as apparently suggested by the
   referees, would have made all the difference (I think, because that 
   kind of presentation usually translates easily into Haskell;-). As it 
   stands, I fear you are limiting your audience needlessly.


--
<interactive>:1:0:-)
   Couldn't match `Categories' against `Haskell'
   Expected type: Haskell
   Inferred type: Categories
   In the first argument of `readPaper', namely `PMC_MPC2006'
   In the definition of `it': it = readPaper PMC_MPC2006


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


proposal: introduce lambda-match (explicit match failure and fall-through)

2006-10-24 Thread Claus Reinke
name: 
   introduce lambda-match (explicit match failure and fall-through)


summary: 
   Haskell 98 provides *two different translations* of lambda
   abstractions involving pattern matching, only one of which is
   directly accessible (Section 3.3 - the other is embedded in the
   translation of do-notation, see "ok" in Section 3.14).

   providing explicit source notation for the second translation
   substantially simplifies programmability of pattern-match
   fall-through by reification of pattern-match failure, two
   central language features that are so far only available
   through the built-in, non-composable case construct.

what:
   in Section 3.3, the translation for conventional lambda
   abstractions with patterns is given as

   [| \p1 .. pn -> e |] = \x1 .. xn -> case (x1,..,xn) of (p1,..,pn) -> e

   with xi fresh identifiers, and pattern-match failure resulting in
   _|_. the latter is unfortunate, and results in partial functions the
   coverage of which cannot be combined, but it is forced by 
   translation into conventional lambda calculus.


   since computational lambda calculus (\-M) has become a central part
   of Haskell programming practice, there shall be an alternative form
   of lambda abstraction, dubbed here lambda-match, with associated
   translation into \-M:

   [| |p1 .. pn -> e |] = \x1 .. xn -> case (x1,..,xn) of
   { (p1,..,pn) -> return e;
     _ -> fail "no match" }

   [note1: this is the translation in principle - in practice, to
enable composition of lambda-matches without further language
extensions, a slight modification is needed, wrapping the right-hand
sides of the case in a Match constructor, see library and patch]

   a library for composing these lambda-matches shall provide for
   composition of alternative lambda-matches (+++), match failure
   (nomatch) and embedding of the explicit match monad into pure
   expressions (splice, where splice (|p1..pn->e) = \p1..pn->e).

   [note2: an alternative translation would use do-notation instead
of case as the target:

[| |p1 .. pn -> e |] = \x1 .. xn -> do {
   (p1,..,pn) <- return (x1,..,xn);
   return e }


   both translations easily accommodate guards and pattern guards
   as well, the former by building on existing support for these two
   features in case constructs, the latter without requiring any 
   previous implementation of pattern guards, by lifting (pattern)

   guards into the match monad]
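
   to make the translation concrete, here is a hedged, hand-desugared
   instance for one lambda-match (Match as in the draft library; a
   reconstruction for illustration, not the attached code):

    import Control.Monad

    newtype Match m a = Match { runMatch :: m a }

    -- hand-desugaring of the lambda-match  |Just n | n > 0 -> n*2
    -- by the translation above (fail as in Monad.fail):
    lm :: MonadPlus m => Maybe Int -> Match m Int
    lm = \x -> case x of
                 Just n | n > 0 -> Match (return (n * 2))
                 _              -> Match (fail "no match")

   in the Maybe instance, runMatch (lm (Just 3)) is Just 6, while both
   runMatch (lm Nothing) and runMatch (lm (Just 0)) are Nothing.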

implementation impact:
   minimal - limited to an extension in parser/desugarer, employing
   previously inaccessible syntax for lambda-match, and a slight
   variation of the existing lambda translation; also adds a small
   library to support composition of matches.

   [note3: a first draft of the composition library and a patch for
GHC (about a dozen new lines in the parser) are provided as 
attachments to this proposal, together with some examples]


context:
   as has been pointed out in the thread on replacing and improving
   pattern guards, Haskell's pattern matching support, even before
   pattern guards, is monolithic (built-in case is the only way to handle
   multiple alternative matches) rather than compositional (lambdas
   represent individual alternatives, but cannot be composed on 
   match fall-through). this implies increased complexity of the language
   definition and limited expressiveness of its features, when compared 
   with alternative models (eg, adapting Wolfram's pattern match 
   calculus for Haskell). see, eg.:


   http://www.haskell.org/pipermail/haskell-prime/2006-October/001713.html
   http://www.haskell.org/pipermail/haskell-prime/2006-October/001720.html
   http://www.haskell.org/pipermail/haskell-prime/2006-October/001723.html
   http://www.haskell.org/pipermail/haskell-prime/2006-October/001724.html

   in principle, replacing Haskell's current pattern matching support
   with a simpler, more compositional model, and defining the current
   constructs on top of that new core is the way forward, IMHO. in
   practice, however, I suspect that the committee is less than tempted
   to introduce such a substantial simplification for Haskell'.

   the present proposal is an attempt at a compromise, suggesting a
   minimal language change to introduce compositional pattern-match
   failure and fall-through. with lambda-match, it implements only a 
   single language extension (as syntactic sugar), delegating the rest

   of the functionality to a library. without the sugar, the result of the
   by-hand translation becomes so hard to read as to be near
   unusable, while the chosen form of sugaring allows most of the
   language features discussed in the earlier threads to be provided
   as a library. this does seem to be a usable balance.

   building constructs of the simpler pattern-match model on top of 
   the more 

Re: (Pattern) Guards in lambdas

2006-10-18 Thread Claus Reinke

suggestion: undo removal of guards from lambdas, especially
(but not only) if pattern guards make it into the language.


See the existing proposals
   http://hackage.haskell.org/trac/haskell-prime/wiki/LambdaCase
   http://hackage.haskell.org/trac/haskell-prime/wiki/MultiWayIf


thanks. I'm a fan of the correspondence principle, and as we have
a LetCase, there should be a LambdaCase as well (the other seems
to be inspired by Lisp's cond?). but the syntax is slightly awkward.
is there a reason not to merge LambdaCase and Lambda, thus 
addressing both my suggestion and the LambdaCase proposal?


   f pat | patguard = rhs

   \ pat | patguard -> rhs

   case x of arms

   (\ arms ) x

claus

ps. strawpoll-2 has both LambdaCase and MultiWayIf as 'no'.
   but that is numbers, not rationale..
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


(Pattern) Guards in lambdas

2006-10-17 Thread Claus Reinke

since Pattern Guards appear to be popular with the committee,
I suggest to revisit the decision to drop guards from lambdas:

(a) http://www.cse.unsw.edu.au/~dons/haskell-1990-2000/msg00353.html
(b) http://www.cse.unsw.edu.au/~dons/haskell-1990-2000/msg00382.html

1. I disagree that this was a simplification of Haskell

   the language became smaller (fewer valid programs), but
   that reduction in size was bought by breaking a symmetry
   (pattern matches are the same wherever they are used)
   and adding a restriction (no guards for patterns in lambdas),
   so the smaller language is actually more complicated.

2. adding guards to lambdas can only cause more program
   runs to fail (no chance of handling pattern-match/guard
   failure and fall through), so it is kind of understandable 
   that this feature was considered dubious. however,


   - adding a guard there is comparable to adding an
   assertion, a feature often considered valuable
   
   - with pattern guards, the guard is no longer restricted

   to filtering, and that added functionality is not
   currently accessible for lambda patterns

suggestion: undo removal of guards from lambdas, especially
(but not only) if pattern guards make it into the language.

claus

ps. are there any notes regarding the discussion and
   stylistic grounds mentioned in (a)?

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Replacing and improving pattern guards with PMC syntax

2006-10-02 Thread Claus Reinke

I'm not sure I follow all the details, but I think I agree ;) with Wolfram
on several points, even if I've arrived there via a different route. 

Some of this may sound strange to those who equate declarative 
with equational, so let me state in advance that I do not agree with 
that particular notion - expression-level programs, such as parsers 
built from parser combinators, can be just as declarative as equations.


However, I do agree that pattern matching has become a problem
in functional languages (I once knew an fpl where that part of the
language and implementation was of roughly the same complexity
as the whole rest of it), and Haskell is unfortunately no exception. 
The problem is not that there is syntactic sugar for pattern matching,
but that this isn't sugar coating at all - there is functionality hidden
in there that cannot be provided by the remainder of the language.

In other words, pattern matching and associated sugar become
part of Haskell's core, which thus becomes more complex,
without offering sufficient compensation in terms of expressiveness.

The particular issue at hand is not that case is modeled after let
rather than after lambda, but that pattern match failure and fall
through are built-in concepts that cannot be programmed in the 
language but have to be dealt with in specific syntactic forms. 
Following the old adage of simplicity through generality, I 
would prefer instead if we would have available language-
constructs for matchings, composition of matchings, and 
splicing of such rule sets back into expressions (\ in 
Wolfram's description). 

Then pattern match fall-through would be programmable,
pattern matching constructs could be composed as easily
as parser combinators (as an added bonus, getting closer
to first-class patterns), case, multi-equations, and do-notation 
would become true syntactic sugar, and Haskell's core 
would finally start to become simpler, for once.


My own awkward-looking translations were driven by having
found two tools in Haskell that come close to this ideal, even
if the syntax is awkward: the mapping of pattern-match failure
to fail in do-notation, and the use of fail msg = mzero in
MonadPlus. By these means, matchings (lambdas with patterns
and explicit match failure) can be represented as do-blocks:

   lhs->rhs == \x->do { lhs <- return x; return rhs }  (1)

and then be composed (the examples I gave moved the \s to 
the front instead and used mplus to compose matchings, but 
we could also lift the compositions to function-level instead), 
and finally spliced back (turning explicit into implicit match 
failure) using fromJust or suchlike. Adding pattern guards into 
this translation was straightforward - they simply go between 
the two returns.
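
a concrete, runnable instance of translation (1) in the Maybe monad
(illustration only, names invented):

    import Control.Monad
    import Data.Maybe (fromJust)

    -- translation (1) for the matching  Just n | n > 0 -> n*2,
    -- with the pattern guard between the two returns:
    m1 :: Maybe Int -> Maybe Int
    m1 = \x -> do { Just n <- return x; guard (n > 0); return (n*2) }

    -- composed with a default alternative via mplus, then spliced
    -- back into implicit-failure land using fromJust:
    whole :: Maybe Int -> Int
    whole x = fromJust (m1 x `mplus` return 0)

here, pattern-match failure in the do-block becomes fail, which is
Nothing in Maybe, so whole Nothing and whole (Just 0) give 0, while
whole (Just 3) gives 6.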


Now, instead of repeating Wolfram's arguments about case
and multi-equations, let's have a look at do-notation, as used
in (1): it is obvious that this part of the translation is the source
of the awkwardness I mentioned, as the right-hand side of (1)
has a lot more syntax noise than the left-hand side. What we
really want here is some way of lifting lambda-abstractions
so as to make potential pattern-match failures explicit and
handleable. Perhaps we can get rid of some of that syntax
noise? 


   \x->do { lhs <- return x; return rhs }
-->
   \x->(return x >>= \lhs->return rhs)
-->
   \x->(>>= (\lhs->return rhs)) (return x)
-->
   (>>= (return . (\lhs->rhs))) . return

hey, that's great! so the lifting we are looking for is simply

   lift match = (>>= (return . match)) . return

right? wrong, unfortunately. Looking more closely at the
translation of do-notation in 3.14 of the Report, we see
that it actually creates a new function by syntactic rather
than semantic manipulation (in fact mapping the problem
of fall-through back to a multi-equation first, then to fail), 
so we have no hope of reproducing the nice behaviour 
wrt pattern-match failure without using the do-notation, 
and all the syntax noise that implies.


I'm not sure whether Wolfram suggested only a simplification
of the specification of pattern matching or an actual reification
of the concepts of matching, composition of matching, and
splicing of matchings into multi-alternative lambdas. Personally,
I'd very much like to see the latter, as this issue frequently
confronts me, eg., when embedding domain-specific languages
and their patterns in Haskell.

When Haskell came into being, it started from the lambda
calculus, but nowadays, a substantial portion of Haskell
programs use various monads to capture program patterns.
If Haskell was designed today, monads would not be a 
late add-on with some syntax to be translated away, but 
they would be at the core of the language, with other 
features translated away into that generalised core. 

And one of the things that would make possible is to replace 
some previously built-in and non-composable notions, like 
pattern-match fall through and composition of rule alternatives, 
by a smaller, yet more 

Re: termination for FDs and ATs

2006-05-04 Thread Claus Reinke

see also:
http://www.haskell.org//pipermail/haskell-prime/2006-March/000847.html


1.  As Manuel explained, in the AT formulation it's possible to avoid
non-termination by suspending (leave unsolved) any equality constraint
of form (a = ty) where 'a' appears free in a type argument to an
associated type in 'ty'.


as both Manuel and myself have pointed out, ensuring that the occurs 
check covers type-functions as well is not only possible, but implied by
working on top of an HM type-system. whether to leave open constraints
that run foul of this check, or whether to reject them early, knowing that
they can't be fulfilled, is a separate matter. as Manuel explained, the AT
system leaves them open because the unification is semantic, ie. further
reduction of the type-functions in such equations might eliminate the
recursive variable references. as long as the type-functions remain
open, the system cannot know whether the constraints can be fulfilled
(analogously for an FD system).


2.  This solution is not the same as stopping after a fixed number N of
iterations.  It's more principled than that.  


indeed.


3.  We may thereby infer a type that can never be satisfied, so that the
function cannot be called.  (In Martin's vocabulary, the constraints
are inconsistent.)  Not detecting the inconsistency immediately means
that error messages may be postponed.  Adding a type signature would fix
the problem, though.


see above. forcing the type by a signature closes the set of type function
definitions that may be used (further extension is possible, but not visible
at the point of the signature), by which means delayed constraints may
be turned into errors (but note that without the signature/MR, there might
not be any error, just some type-function definition not yet visible).


5.  The effect is akin to dropping the instance-improvement CHR arising
from the corresponding FD.  But we can't drop the instance-improvement
CHR because then lots of essential improvement would fail to take place.
It is not obvious to me how to translate the rule I give in (1) into the
CHR framework, though doubtless it is possible.


the effect is that of conditionally disabling the instance-improvement CHR,
not dropping it completely. improvement CHRs introduce unifications, and
HM implies that unifications are guarded by occurs-checks, so the CHRs
for improvement should be guarded by occurs-checks. that should disable
these CHR in the same cases in which the occurs-check disables the
type-functions in the AT case.

note that the FD-CHR for the class also introduce a unification, hence
need to be guarded the same way.

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: collecting requirements for FDs

2006-04-13 Thread Claus Reinke



What other libraries should Haskell' support, and what are their
requirements?


useful initiative! will your collection be available anywhere?

may I suggest that you (a) ask on the main Haskell and library lists
for better coverage (I would have thought that the alternative Num
prelude suggestions might have some use cases), and (b) collect 
non-use cases as well (eg, where current implementations are 
buggy/incomplete/do different things, or where other reasons have 
   prevented Haskellers from using FDs so far)? I think trying to clean
   up the latter will be more effective than wading through dozens of
variations of the same working examples - you're looking for 
counter-examples to the current design, aren't you?


and just in case you haven't got these on your list already, here are 
some examples from earlier discussions on this mailing list:


- ticket #92 has module Data.Records attached to it.
   http://hackage.haskell.org/trac/haskell-prime/ticket/92
   I'd like to be able to use that in Haskell'. the library is useful in 
   itself (I've used its record selection and concatenation parts when 
   encoding attribute grammars), and I also suggested it as a good 
   testcase for Haskell' providing a sufficient (but cleaned-up) subset 
   of currently available features. but it is also an example of code that


   - works with GHC, but not with Hugs; one of those problems 
   I reported on hugs-bugs:

   http://www.haskell.org//pipermail/hugs-bugs/2006-February/001560.html

   and went through a few of the Hugs/GHC differences here 
   on this mailing list:

   http://www.haskell.org//pipermail/haskell-prime/2006-February/000577.html
   
   and used the Select example to motivate the need for relaxed
   coverage in termination checking:
   http://www.haskell.org//pipermail/haskell-prime/2006-February/000825.html

   I have since come to doubt that GHC really solves the issue,
   it just happens that its strategy of delaying problems until they may
   no longer matter works for this example; but one can construct other 
   examples that expose the problem in spite of this delayed complaining 
   trick. see my own attempts to show FD problems here:

   http://www.haskell.org//pipermail/haskell-prime/2006-February/000781.html

   or Oleg's recent example on haskell-cafe:
   http://www.haskell.org//pipermail/haskell-cafe/2006-April/015372.html
   
   while I didn't quite agree with his interpretation (see my answer
   to his message), he did manage to construct an example in which
   GHC accepts a type/program in violation of an FD.

   - requires complex workarounds, thanks to current restrictions,
   where the same could be expressed simply and directly without;
   (compare the code for Remove in Data.Record.hs: the one in 
comments vs the one I had to use to make GHC happy)


- things like a simple type equality predicate at the type class level
   run into problems with both GHC and Hugs. reported to both
   GHC and Hugs bugs lists as:
   http://www.haskell.org//pipermail/hugs-bugs/2006-February/001564.html

- the FD-visibility limitations strike not only at the instance level. 
   here is a simplified example of a problem I ran into when trying 
   to encode ATS in FDs (a variable in a superclass constraint that

   doesn't occur in the class head, but is determined by an FD on
   the superclass constraint):
   http://hackage.haskell.org/trac/ghc/ticket/714

- the HList library and associated paper also use and investigate
   the peculiarities of FDs, and variations on the TypeEq theme
   (it has both unpractical/portable and practical versions that 
make essential use of some limitations in GHC's type class

implementation to work around other of its limitations; it
demonstrates wonderfully why the current story needs to
be cleaned up!):
   http://homepages.cwi.nl/~ralf/HList/

hope that's the kind of thing you are looking for?-)

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-03-31 Thread Claus Reinke

This is the way it is right now in GHC: the default is safe, and safe
means both reentrant and concurrent.  This is for the reason you give:
the default should be the safest, in some sense.
..
So we can't have the default (unanotated) foreign call be something that
isn't required by the standard. 


why not? you'd only need to make sure that in standard mode,
no unannotated foreign declarations are accepted (or that a warning
is given).

claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Pre-emptive or co-operative concurrency (was: Concurrency)

2006-03-30 Thread Claus Reinke

GHC's SMP mode is truly preemptive, operations from multiple threads can
be arbitrarily interleaved.  So let's stop saying that all known
implementations are non-preemptive, please ;-)


but gladly, if that is the default!-) 

so if we take that hypothetical example of foreign exporting GHC's 
concurrency support, can we assume that the (IO a)s implemented in 
foreign code will be given their own OS thread when using that 
concurrency library? all of them, or only the non-atomic ones?


if, say, Hugs was to foreign import that library from GHC, its IO actions
wouldn't do much (GHC-side) allocation; and if Hugs was to import that
same library from YHC, it wouldn't do many (YHC-side) abstract machine
steps; etc.; we could try real time-slicing, but how would we suspend/restart
foreign code? so there doesn't seem to be much choice for integrating
foreign IO code into the schedule, other than giving it its own OS-thread;
of course, Hugs' IO actions may not be thread-safe, so that may not
be an option, either.

the point being: the FFI says something about how to integrate
foreign and Haskell memory management; should it also say something
about threadability of foreign code (wrt to scheduling, and wrt thread-safety)?

cheers,
claus

ps: Neil said:

If all Haskell' prime implementations depend on GHC the library,
then do we really have many Haskell' prime implementations, or just a
pile of wrappers around GHC?


are you implying that implementing external libraries in Haskell is
in any way inferior to implementing them in C?-)

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-03-30 Thread Claus Reinke

It is not like inserting yields needs to be done much at all since we have
progress guarantees, so we know the program is doing something and on
any blocking call that could potentially take a while, the library will
yield for you.


where do we get the progress guarantees from? do we need a 
yield-analysis? something that will automatically insert yields
in the code after every n atomic steps, and complain if it cannot 
infer that some piece of code is atomic, but cannot insert a yield 
either? how much of the burden do you want to shift from the
implementer to the programmer?

cheers,
claus
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-03-30 Thread Claus Reinke

I updated the ForeignBlocking wiki page with what I believe is the
current state of this proposal; see


didn't I mention that "concurrent" may be inappropriate and misleading,
and that I think it is bad practice to rely on the programmer annotating
the dangerous cases, instead of the safe cases?


wouldn't the safe approach be to assume that the foreign call may do 
anything, unless the programmer explicitly tells you about what things 
it won't do (thus taking responsibility).


cheers,
claus


http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/ForeignBlocking


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-03-30 Thread Claus Reinke
didn't I mention that "concurrent" may be inappropriate and misleading,
and that I think it is bad practice to rely on the programmer annotating
the dangerous cases, instead of the safe cases?


I think "dangerous" is a misleading term here. you are already using the
FFI, all bets are off. and it is not really dangerous to accidentally
hold up your VM when you didn't expect, it is more just a simple bug.


perhaps dangerous was too strong a term, but if programmers don't
annotate an ffi declaration, what is more likely: that they meant to state
a property of that function, or that they didn't mean to? 

if there is a way to avoid simple bugs by not making assumptions about 
undeclared properties, then I'd prefer that to be the default route. if, 
on the other hand, programmers do annotate the ffi declaration, then 
it is up to them to make sure that the function actually has the property 
they claim for it (even in such cases, Haskell usually checks the 
declaration, but that isn't an option here).


Unsafe or dangerous means potentially leading to undefined behavior,
not just incorrect behavior or we'd have to label 2 as unsafe because
you might have meant to write 3. :)


you mean your compiler won't catch such a simple mistake?-)

but, seriously, that isn't quite the same: if I write a Num, it's my 
responsibility to write the Num I meant, because the implementation
can't check that. but if I don't write a Num, I'd rather not have the
implementation insert one that'll make the code go fastest, assuming
that would always be my main objective! (although that would be
a nice optional feature!-)

wouldn't the safe approach be to assume that the foreign call may do 
anything, unless the programmer explicitly tells you about what things 
it won't do (thus taking responsibility).


I think the worse problem will be all the libraries that are only tested
on ghc that suddenly get very poor performance or don't compile at all
when attempted elsewhere.


- GHC and the other implementations should issue a warning for
   using non-standard or non-implemented features (that includes code
   that won't obviously run without non-standard features)
- if an implementation doesn't implement a feature, there is no way
   around admitting that, standard or not
- if adding valid annotations is necessary to make non-GHC
   implementations happy, then that's what programmers will have to do 
   if they want portable code; if such annotation would not be valid, we 
   can't pretend it is, and we can't pretend that other implementations 
   will be able to handle the code


- if only performance is affected, that is another story; different
   implementations have different strengths, and the standard shouldn't
   assume any particular implementation, if several are viable
- but: if certain kinds of program will only run well on a single 
   implementation, then programmers depending on that kind of program 
   will only use that single implementation, no matter what the standard 
   says (not all my Haskell programs need concurrency, but for those 
   that do, trying to fit them into Hugs is not my aim)



However, the 'nonreentrant' case actually is dangerous in that it could
lead to undefined behavior which is why that one was not on by default.


why not be consistent then, and name all attributes so that they are off 
by default, and so that implementations that can't handle the off case will
issue a warning at least?

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: FFI, safe vs unsafe

2006-03-29 Thread Claus Reinke

Malcolm correctly notes that when I say non-blocking I'm referring to
the behaviour from Haskell's point of view, not a property of the
foreign code being invoked.
In fact, whether the foreign code being invoked blocks or not is largely
immaterial.  The property we want to capture is just this:
 During execution of the foreign call, other Haskell threads
 should make progress as usual.


if that is really what you want to capture, the standard terminology 
would be asynchronous call (as opposed to synchronous call). 
hence all that separation between synchronous and asynchronous 
concurrent languages (so concurrent would not be a useful qualifier).


the only remaining ambiguity would be that concurrent languages
(eg, Erlang) tend to use asynchronous calls to mean that the 
_calling thread_ does not need to synchronise, whereas you 
want to express that the _calling RTS_ does not need to 
synchronise while the _calling thread_ does need to. 

which makes me wonder why one would ever want the RTS to 
block if one of its threads makes a call? if the RTS is sequential 
(with or without user-level threads), it can't do anything but 
synchronous foreign calls, can it? and if the RTS does support 
non-sequential execution, I can see few reasons for it to block 
other threads when one thread makes a foreign call.


I think what you're after is something quite different: by default,
we don't know anything about the behaviour of a foreign call, so
once we pass control to foreign, it is out of our hands until
foreign decides to return it to us. 

for sequential RTS, that's the way it is, no way around it. for 
non-sequential RTS, that need not be a problem: if the foreign 
call can be given its own asynchronous (OS-level) thread of 
control, it can take however long it needs to before returning, 
and other (user-level) threads can continue to run, 
asynchronously. but that means overhead that may not 
always be necessary.


so what I think you're trying to specify is whether it is safe for
the RTS to assume that the foreign call is just another primitive
RTS execution step (it will return control, and it won't take long
before doing so). the standard terminology for that is, I believe,
atomic action.

in other words, if the programmer assures the RTS that a foreign
call is atomic, the RTS is free to treat it as any other RTS step
(it won't block the current OS-level thread of control entirely, 
and it won't hog the thread for long enough to upset scheduling
guarantees). if, on the other hand, a foreign call is not annotated
as atomic, there is a potential problem: non-sequential RTS
can work around that, with some overhead, while sequential
RTS can at best issue a warning and hope for the best.

so my suggestion would be to make no assumption about
unannotated calls (don't rely on the programmer too much;),
and to have optional keywords atomic and non-reentrant.

[one might assume that an atomic call should never be 
permitted to reenter, so the annotations could be ordered
instead of accumulated, but such assumptions tend to
have exceptions]
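
under that suggestion, declarations might look as follows (purely
hypothetical surface syntax, shown only to illustrate the proposed
defaults; "nonreentrant" stands in for non-reentrant, since keywords
can't contain hyphens):

    -- no annotation: no assumptions, the RTS must be pessimistic
    foreign import ccall "do_things" doThings :: Int -> IO Int

    -- programmer takes responsibility: the call returns control
    -- quickly, so the RTS may treat it as any other RTS step
    foreign import ccall atomic "sin" c_sin :: Double -> Double

    -- additionally promises not to call back into Haskell
    foreign import ccall atomic nonreentrant "cos" c_cos :: Double -> Double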

cheers,
claus


It doesn't matter whether the foreign call blocks or not (although
that is a common use for this feature).  I'd rather call it
'concurrent', to indicate that the foreign call runs concurrently with
other Haskell threads.


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Pre-emptive or co-operative concurrency (was: Concurrency)

2006-03-29 Thread Claus Reinke

Pre-emption means that (1) threads have priority levels, and (2) that a
higher priority thread can steal the processor from a currently-running
lower priority thread, and (3) it can do so immediately it needs to,
without waiting for some safe synchronisation point.


obviously, cs-concepts are still taught differently around the globe..
I was taught that non-preemptive scheduling simply means that threads
will never be preempted, so they complete whatever they want to do,
then voluntarily return control. the opposite, preemptive scheduling,
allocates schedule resources to threads independent of their needs, 
solely based on meta-level properties (time slices, number of 
reductions, ..), so threads may be preempted by a context switch
in whatever they are doing.

there is no implication of priorities, nor is there an implication of
re-scheduling not happening at safe synchronisation points. it is
just that safety is from the scheduler's point of view (a reduction
step has completed), not from the thread's point of view (all the 
steps needed for a certain task have been completed).


so (at least in my background;-), non-preemptive scheduling implies
cooperative concurrency (if any of the threads does not play fair
with yields, the whole scheduling arrangement may break down),
and preemptive scheduling implies careful programming for another
reason (none of the threads may assume that it won't be interrupted
and resumed in the middle of some complex activity; which is why
atomic actions, transactions, STM, and the like are so important).

all of the concurrency implementations discussed so far seem to
be based on a mixture of preemptive and non-premptive scheduling.
context-switches happen only on specific events, which every 
thread will usually engage in, although it need not always do so:


1 only calls to yield
2 any calls to concurrency library api
3 any allocation

these differ merely in the level of cooperation required from 
threads in order to achieve the appearance of pre-emptive 
scheduling. each of them can be defeated by non-cooperative
thread code, so none of them is entirely preemptive. however,
the more possible thread events are permitted as context-switch
points, the closer we come to a preemptive scheduler, as there
is less and less potential for threads to be non-cooperative.

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-03-22 Thread Claus Reinke

  class Graph (g e v) where
  src :: e -> g e v -> v
  tgt :: e -> g e v -> v

  we associate edge and node types with a graph type by
  making them parameters, and extract them by matching.


If I understand correctly, this requires all graphs to be polymorphic in 
the types of edges and vertices. Thus, you cannot (easily) define a graph 
which provides, say, only boolean edges. Moreover, doesn't this require 
higher-order matching?


I've already answered the last question. as for polymorphism, all this
requires is a graph type parameterized by an edge and vertex
type (just as the SML solution, which got full marks in this category,
requires instantiations of the edge and vertex types in the graph structure).
I already gave an example of a graph instantiated with (Int,Int) edges 
and Int vertices. see below for a translation of the ATC paper examples



variant B: I've often wanted type destructors as well as constructors.
  would there be any problem with that?

  type Edge (g e v) = e
  type Vertex (g e v) = v

  class Graph g where
  src :: Edge g -> g -> Vertex g
  tgt :: Edge g -> g -> Vertex g


This suffers from the same problems as the previous variant. It also looks 
a lot like a special form of associated types. Could the AT framework be 
extended to support a similar form of type synonyms (in effect, partial 
type functions outside of classes)? 


it suffers as little as the previous variant. and it was meant to be a special
form, showing that the full generality of ATs as a separate type class 
extension is not required to solve that paper's issue. and the translation 
from type functions to FDs or ATs is entirely syntactic, I think, so it 
would be nice to have in Haskell', as long as at least one of the two is 
included.



Would

  instance Graph Int
-- methods left undefined

be a type error here?


yes, of course. instances still have to be instances of classes. in variation
A, the type error would be in the instance head, in variation B, it would
be in the method types (although it could backpropagate to the head).


  class Edge g e | g -> e
  instance Edge (g e v) e
  class Vertex g v | g -> v
  instance Vertex (g e v) v

  class (Edge g e,Vertex g v) => Graph g where
  src :: e -> g -> v
  tgt :: e -> g -> v

  (this assumes scoped type variables; also, current GHC,
   contrary to its documentation, does not permit entirely 
FD-determined variables in superclass contexts)


What are the types of src and tgt here? Is it

  src, tgt :: (Edge g e, Vertex g v, Graph g) => e -> g -> v


yes.

This does not seem to be a real improvement to me and, in fact, seems 
quite counterintuitive.


Roman


you're free to your own opinions, of course!-)

it is, however, probably as close as we can come within current Haskell,
and the shifting of Edge/Vertex to the right of the '=>' is a purely syntactic
transformation, even if it is a nice one.

and as you can see from the implementation below (I had to move the 
class methods out of the class to circumvent GHC's current typing problem, 
so no method implementations, only the types), it is sufficient to address the 
problem in that survey paper, and accounting for graphs with specific types 
is no problem (direct translation from ATC paper examples):


   *Main> :t \e->src e (undefined::NbmGraph)
   \e->src e (undefined::NbmGraph) :: GE2 -> GV2
   *Main> :t \e->src e (undefined::AdjGraph)
   \e->src e (undefined::AdjGraph) :: GE1 -> GV1

cheers,
claus

{-# OPTIONS_GHC -fglasgow-exts #-}

class Edge g e | g -> e
instance Edge (g e v) e

class Vertex g v | g -> v
instance Vertex (g e v) v

class Graph g
-- these should be class methods of Graph..
src, tgt :: (Edge g e,Vertex g v,Graph g) => e -> g -> v
src = undefined
tgt = undefined

-- adjacency matrix
data G1 e v = G1 [[v]]
data GV1 = GV1 Int
data GE1 = GE1 GV1 GV1
type AdjGraph = G1 GE1 GV1  -- type associations

instance Graph AdjGraph

-- neighbor map
data FiniteMap a b
data G2 e v = G2 (FiniteMap v v) 
data GV2 = GV2 Int

data GE2 = GE2 GV2 GV2
type NbmGraph = G2 GE2 GV2  -- type associations

instance Graph NbmGraph

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-03-21 Thread Claus Reinke
you're right about interactions in general. but do you think constructor 
classes specifically would pose any interaction problems with FDs?

You have to be more careful with unification in a higher-kinded setting.
I am not sure how to do that with CHRs.


to quote from the ATS paper: "just like Jones, we only need first-order
unification despite the presence of higher-kinded variables, as we require
all applications of associated type synonyms to be saturated."


variant A: I never understood why parameters of a class declaration
are limited to variables. the instance parameters just have
to match the class parameters, so let's assume we didn't
have that variables-only restriction.

class Graph (g e v) where
src :: e -> g e v -> v
tgt :: e -> g e v -> v

we associate edge and node types with a graph type by
making them parameters, and extract them by matching.


The dependency seems to be lost here.


what dependency?

the associated types have become parameters to the graph type,
so the dependency of association is represented by structural inclusion
(type constructors are really constructors, so even phantom types
would still be visible in the type construct). any instances of this class 
would have to be for types matching the form (g e v), fixing the type 
parameters.



variant B: I've often wanted type destructors as well as constructors.
would there be any problem with that?

type Edge (g e v) = e
type Vertex (g e v) = v

class Graph g where
src :: Edge g -> g -> Vertex g
tgt :: Edge g -> g -> Vertex g


Also no dependency and you need higher-order matching, which in general
is undecidable.


the dependency is still represented by the type parameters, as in the
previous case. and is this any more higher-order than what we have 
with constructor classes anyway? here's an example implementation
of the two destructors, using type classes with constructor instances:

   {-# OPTIONS_GHC -fglasgow-exts #-}

   import Data.Typeable

   data Graph e v = Graph e v

   class Edge g e | g -> e where edge :: g -> String
   instance Typeable e => Edge (g e v) e where
     edge g = show (typeOf (undefined::e))

   class Vertex g v | g -> v where vertex :: g -> String
   instance Typeable v => Vertex (g e v) v where
     vertex g = show (typeOf (undefined::v))

[the Typeable is only there so that we can see at the value level that 
the type-level selection works]


   *Main> edge (Graph (1,1) 1)
   "(Integer,Integer)"
   *Main> vertex (Graph (1,1) 1)
   "Integer"


variant C: the point the paper makes is not so much about the number
of class parameters, but that the associations for concepts
should not need to be repeated for every combined concept.
and, indeed, they need not be

class Edge g e | g -> e
instance Edge (g e v) e
class Vertex g v | g -> v
instance Vertex (g e v) v

class (Edge g e,Vertex g v) => Graph g where
src :: e -> g -> v
tgt :: e -> g -> v

(this assumes scoped type variables; also, current GHC,
 contrary to its documentation, does not permit entirely 
 FD-determined variables in superclass contexts)


You still need to get at the parameter somehow from a graph (for which
you need an associated type).


oh, please! are you even reading what I write? as should be clear from 
running the parts of the code that GHC does accept (see above), FDs
are quite capable of associating an edge type parameter with its graph.

all three seem to offer possible solutions to the problem posed in 
that paper, don't they?


Not really.


...


II. The other one is that if you use FDs to define type-indexed
types, you cannot make these abstract (ie, the representations
leak into user code).  For details, please see the Lack of
abstraction. subsubsection in Section 5 of
http://www.cse.unsw.edu.au/~chak/papers/#assoc

do they have to? if variant C above would not run into limitations
of current implementations, it would seem to extend to cover ATS:

class C a where
type CT a

instance C t0 where
type CT t0 = t1

would translate to something like:

class CT a t | a -> t
instance CT t0 t1

class CT a t => C a
instance CT t0 t1 => C t0

as Martin pointed out when I first suggested this on haskell-cafe,
this might lead to parallel recursions for classes C and their type
associations CT, so perhaps one might only want to use this to
hide the extra parameter from user code (similar to calling auxiliary
functions with an initial value for an accumulator).


That doesn't address that problem at all.


come again? CT expresses the type 

Re: the MPTC Dilemma (please solve)

2006-03-20 Thread Claus Reinke

welcome to the discussion!-)


(A) It's not as if every interesting program (or even the majority of
interesting programs) use(s) MPTCs.


well, I express my opinions and experience, and you express yours:-)

let's hope that Haskell' will accomodate both of us, and all the other
possible views and applications of Haskellers in general, to the extent 
that they are represented here.



(B) I don't think the time for which an extension has been around is
particularly relevant. 


oh yes, it is. don't get me wrong, we have ample proof that having 
been around for long does not imply good, necessary or even just 
well-understood. but it is certainly relevant, as it demonstrates
continued use.

haskell prime is an attempt to standardize current Haskell practice, 
and MPTCs and FDs have been part of that for a long time, so haskell
prime has to take a stand about them. Haskell98 could get away with 
closed eyes, claiming that those features were new then, just as GADTs 
and ATS are new today. haskell prime does not have that choice
any more.

so I see two choices:

- define current practice in MPTCs and FDs, then mark those then
   well-defined extensions as deprecated, to be removed in Haskell''.

   to do this, you'd need to provide a clear alternative to migrate to,
   with a simple and complete definition both of the feature set you
   are advocating, and its interactions with other features, and its
   relation to the features it is meant to replace.

- define current practice in MPTCs and FDs, find the simple story
   behind these features and their interactions with other features
   (as simple as we can make it, without the scaremongering), and 
   add that to Haskell'. 

   also define the initial versions of your alternative feature sets 
   and their interactions. leave it to Haskell'' to decide which of 
   the two to deprecate, if any.


either option needs a definition of both feature sets (I assume you're
arguing for associated type synonyms). we do have fairly simple
definitions of MPTCs and FDs, although I still think (and have been
trying to demonstrate) that the restrictions are too conservative. 

what we don't have are definitions of the interactions with other 
features (and the restrictions really get in the way here), such as 
overlap resolution, or termination proofs, but we are making 
progress there. what we also don't have is a simple definition 
of ATS, its interactions with other features, and a comparison 
of well-understood ATS with well-understood FDs (I only just 
printed the associated FD paper, perhaps that'll help), which 
could form the basis for a decision between the two.


so, imho, haskell prime can only define the current state, hopefully
with a better form of MPTCs/FDs and at least some preliminary 
form of ATS, and let practice over the next years decide whether 
haskellers will move from MPTCs/FDs to ATS. just as practice
seems to have decided that other features, eg, implicit parameters,
in spite of their initial popularity, have not recommended themselves
for any but the most careful and sparing use, and not as part of
the standard.


One of the big selling points of Haskell is that
it's quite well defined, and hence, its semantics is fairly well
understood and sane - sure, there are dark corners, but compared to
other languages of the same size, we are in good shape.  If we include
half-baked features, we weaken the standard.


we are not in a good shape. Haskell 98 doesn't even have a semantics.

current Haskell is so full of dark corners, odd restrictions and unexpected
feature interactions that I've been tempted to refer to it as the C++ of
functional languages. the question on the table is not wether to include
half-baked features of current Haskell in Haskell', but how to make sure
that we understand those features well enough to make an informed
decision. nothing I've seen so far indicates that it is impossible to come
up with a well-defined form of some of those features, including their 
interactions with other features. that doesn't come without some 
further work, so the haskell prime effort cannot just pick and choose,
but there's no reason to give up just yet.

and to keep what is good about Haskell, we need to think about
simplifying the features we have as our understanding of them improves,
in particular, we need to get rid of unnecessary restrictions (they
complicate the language), and we need to investigate feature interactions.
we weaken the standard if it has nothing to say about the problems
in current practice.


In fact, it's quite worrying that FDs have been around for so long and
still resisted a thorough understanding.


they don't resist. but as long as progress is example-driven and scary
stories about FDs supposedly being tricky and inherently
non-understandable are more popular than investigations of the issues, there
won't be much progress. please don't contribute to that hype.


you reply to a message that is about a month 

Re: the MPTC Dilemma (please solve)

2006-03-20 Thread Claus Reinke

As understand it, you've proposed changes in context reduction to
restore confluence:


yes.


What is your plan to deal with non-termination (e.g. examples 6 and 16
of the FD-CHR paper)?


I haven't read all the suggestions that Martin, you, and others have 
made in that area yet (still busy with overlaps), including those in the 
revised FD-CHR paper, so I can't make concrete suggestions yet, 
beyond those I've already posted here:


1 we all want terminating instance inference, but trying to assure that
   via restrictions is bound to be limiting (does GHC still have to build
   part of the hierarchical libs with -fallow-undecidable-instances?).

   I'm not opposed to it, but I'd like Haskell' to document current
   practice, in that we have the option to switch off termination checks
   whenever they are not able to see that our programs are terminating
   (available in Hugs and GHC, and all too often necessary).

2 that said, termination checks can do a lot, and it is certainly useful
   to know various methods of checking termination, as well as terminating
   examples beyond the reach of current termination checks.

   the old FD-CHR draft already discussed relaxed FD conditions,
   but was somewhat hampered because confluence checks seemed
   entangled with termination. with confluence problems out of the
   way, the restrictions can focus on termination. was there any other
   reason not to go for the relaxed FD conditions as a start?

3 in http://www.haskell.org//pipermail/haskell-prime/2006-February/000825.html

   I gave two examples that are terminating, but for which the
   current conditions are too restrictive. the first could be cured
   by taking smaller predicates into account, in addition to smaller
   types and smaller variable sets, and the second turned out to be 
   a special case of the relaxed coverage condition, I think. I run 
   into both problems all the time (the first is especially annoying as 
   it prevents calling out to simpler helper predicates..).


4 example 6, FD-CHR, is vector multiplication Mul. I argued that 
   it is wrong to call the declarations faulty and invalid just because
   there are some invalid calls. I also suggested one way in which
   the declarations can be permitted, while ruling out the faulty call:

   http://www.haskell.org//pipermail/haskell-prime/2006-March/000847.html

   basically, the idea is that FDs specify type-level functions, so
   unless we can demonstrate that those functions are non-strict, we
   need to rule out cyclic type-level programs that feed the range of
   an FD into one of its domain parameters, by a generalized occurs
   check.

   simply looking at the intersection of variables is easy to implement;
   that method is still too conservative (eg, if the range is simply a
   copy of the domain), but adding it to our repertoire of termination 
   checks definitely improves the situation, and is sufficient to permit
   the declarations in example 6, while ruling out the offending call.
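
   for readers without the paper at hand, a sketch of example 6 from
   memory (the Vec wrapper and method name are mine, so details may
   differ; needs MPTCs, FDs, flexible and undecidable instances):

   class Mul a b c | a b -> c where
       (.*.) :: a -> b -> c

   newtype Vec b = Vec [b]

   instance Mul a b c => Mul a (Vec b) (Vec c) where
       x .*. Vec bs = Vec (map (x .*.) bs)

   -- the faulty call: both branches of the conditional force the result
   -- of x .*. Vec [y] to have y's own type, so improvement demands that
   -- type to be Vec c, with c in turn determined from it - the range of
   -- the FD is fed back into its domain, and inference diverges:
   --
   --   f b x y = if b then x .*. Vec [y] else y

   a generalized occurs check would reject only this call, not the
   (perfectly sensible) declarations.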

5 example 16, FD-CHR, defines an instance that hides an increasing
   type behind an FD. my intuition on that one tells me that we are
   again ignoring the functional character of FDs (as we did in 4).

   instead of ruling out types determined entirely by FDs via the 
   bound variable restriction, as the paper suggests, we could 
   extend the termination check to collect information about 
   FDs. just think of type constructors as simple FDs and try 
   to treat real FDs similarly: 

   adding a constructor around a type variable in the context 
   means we cannot guarantee termination by reasoning about 
   shrinking types. determining a type variable in the context by
   putting it in the range of an FD means we cannot guarantee
   termination by reasoning about shrinking types, unless we
   have some conservative approximation of the relation 
   between domain and range of the FD to tell us so.


   the example case:

   class F a b | a -> b
   instance F [a] [[a]]

   clearly shows the range to be more complex than the domain,
   so we can account for that increase in complexity when we 
   see F t x in an instance context.


   if instead, we only had:

   class F a b | a -> b
   instance F [[a]] [a]

   the range would be less complex than the domain, so we could
   permit use of this, even though the bound variable condition
   would treat it the same way - forbid it.

there are whole yearly workshops dedicated to termination.
we shouldn't assume that we can reach any satisfactory solution
by a mere handful of restrictions. which means that we need to
add to our repertoire of termination checks, and to keep open 
the option of switching off those checks.


cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-03-20 Thread Claus Reinke
For example, AFAIK the CHR formalisation doesn't consider higher 
kinds (ie, no constructor classes).


you're right about interactions in general. but do you think constructor 
classes specifically would pose any interaction problems with FDs?



I don't care whether I do my case a favour.  I am not a politician.
There is only one reason that ATs exist: FDs have serious problems.


you don't solve problems by creating new features from scratch,
with a different theory/formalization to boot. you try to pinpoint the
perceived problems in the old feature, then either transform it to
avoid those problems, or demonstrate that such transformation is
not possible. otherwise, you have the nasty problem of relating 
your new feature/theory to the old one to demonstrate that you've
really improved matters.


Two serious problems have little to do with type theory.  They are more
like software engineering problems:

I. One is nicely documented in
   
http://www.osl.iu.edu/publications/prints/2003/comparing_generic_programming03.pdf


that paper isn't bad as far as language comparisons go, but it focusses
on a rather restricted problem, so I'm surprised that this was part of the
motivation to launch ATS: to reduce the number of parameters in 
combined concepts, we might as well associate types with each
other, instead of types with instances.

variant A: I never understood why parameters of a class declaration
   are limited to variables. the instance parameters just have
   to match the class parameters, so let's assume we didn't
   have that variables-only restriction.

   class Graph (g e v) where
   src :: e -> g e v -> v
   tgt :: e -> g e v -> v

   we associate edge and node types with a graph type by
   making them parameters, and extract them by matching.

variant B: I've often wanted type destructors as well as constructors.
   would there be any problem with that?

   type Edge (g e v) = e
   type Vertex (g e v) = v

   class Graph g where
   src :: Edge g -> g -> Vertex g
   tgt :: Edge g -> g -> Vertex g

variant C: the point the paper makes is not so much about the number
   of class parameters, but that the associations for concepts
   should not need to be repeated for every combined concept.
   and, indeed, they need not be

   class Edge g e | g -> e
   instance Edge (g e v) e
   class Vertex g v | g -> v
   instance Vertex (g e v) v

   class (Edge g e,Vertex g v) => Graph g where
   src :: e -> g -> v
   tgt :: e -> g -> v

   (this assumes scoped type variables; also, current GHC,
contrary to its documentation, does not permit entirely 
FD-determined variables in superclass contexts)


all three seem to offer possible solutions to the problem posed in 
that paper, don't they?
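
as a sanity check that variant C's machinery is usable today, here is a
minimal runnable sketch (the concrete graph type G is made up, and the
consumers are constrained instead of the Graph superclass, sidestepping
the limitation noted above; -fglasgow-exts provides the MPTCs, FDs and
flexible instances needed):

   data G e v = G [v] [(e,v,v)]

   class Edge g e | g -> e where
       edges :: g -> [e]
   instance Edge (G e v) e where
       edges (G _ es) = [ e | (e,_,_) <- es ]

   class Vertex g v | g -> v where
       vertices :: g -> [v]
   instance Vertex (G e v) v where
       vertices (G vs _) = vs

   -- consumers mention only g; e and v are found via the FDs, so the
   -- associations need not be repeated at every use site:
   size :: Edge g e => g -> Int
   size = length . edges

   order :: Vertex g v => g -> Int
   order = length . vertices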



   II. The other one is that if you use FDs to define type-indexed
   types, you cannot make these abstract (ie, the representations
   leak into user code).  For details, please see the "Lack of
   abstraction" subsubsection in Section 5 of
   http://www.cse.unsw.edu.au/~chak/papers/#assoc


do they have to? if variant C above would not run into limitations
of current implementations, it would seem to extend to cover ATS:

   class C a where
   type CT a

   instance C t0 where
   type CT t0 = t1

would translate to something like:

   class CT a t | a -> t
   instance CT t0 t1

   class CT a t => C a
   instance CT t0 t1 => C t0

as Martin pointed out when I first suggested this on haskell-cafe,
this might lead to parallel recursions for classes C and their type
associations CT, so perhaps one might only want to use this to
hide the extra parameter from user code (similar to calling auxiliary
functions with an initial value for an accumulator).


you reply to a message that is about a month old.


That's what re-locating around half of the planet does to your email
responsiveness...but the topic is still of interest and I got the
impression that your position is still the same.


definitely. I was just unsure how to react - if you really hadn't seen
the messages of the last month, it would be better to let you catch up
(perhaps via the mailing list archive). if you just took that message as
the natural place to attach your contribution to, there's no need to wait.


ATs are not about special syntax.  Type checking with ATs doesn't
use improvement, but rather a rewrite system on type terms interleaved
with unification.  This leads to similar effects, but seems to have
slightly different properties.


that's what I'm complaining about. if ATs were identified as a subset of
FDs with better properties and nicer syntax, it would be easy to compare.

Re: Suggestion: refine overlap handling for instance declarations

2006-03-16 Thread Claus Reinke
(a) we can't specify that two types in an instance are not equal; 

but we can use overlap resolution to ensure that equal types 
are handled by another, more specific instance

...
permit type inequalities as guards in instance declarations:

topdecls -> ..
 | 'instance' [ctxt '=>'] head [ '|' ineqs ] [body]

ineqs -> typevar '/=' type [ ',' ineqs ]


first, I need to point out an oversight of mine here: restricting the lhs
of inequalities to typevars was meant to simplify the semantics, but if
we do that, we also need disjunctions of inequalities (eg, the negation
of (a,b)==(Int,Bool) is (a/=Int || b/=Bool)). that is not an unnatural
requirement, but no longer simpler than just stating the full inequality
((a,b)/=(Int,Bool)). so I'm no longer sure which alternative to prefer?


I'm afraid things might be a bit more complex. First of all, we may


great, a second reader!-) about time that someone started to 
raise issues. that can only be helpful.


yes, there are different variations on equality and disequality. your
examples on equalities seem to hinge on the question of whether or
not to permit substitutions, so we'd get unification (substitutions on
both sides), matching (substitution on one side), equality (no 
substitutions). you also mention the possibility of asking for 
substitutions that make two types unequal (instead of asking for 
substitutions that make two types equal, then negating the result),
but I'll take that as a general remark (useful when one needs to
demonstrate that two terms are separable by substitution), not as 
something specifically relevant for the present context.


for negated equalities, the difference between permitting or not
permitting variable substitutions in the equality check is reversed:
unification equates more terms than syntactic equality, so negated
unification rejects more term pairs than negated syntactic equality.
however, we also need to take into account that in our current
context syntactic equality should cause residuation rather than
giving the answer false for terms involving variables: 

- consider a/=Int (that is: not (a=Int)). 

   with negated unification, this guard fails, because a could be 
   instantiated to Int, so a rule with this guard won't apply. but 
   if a is later instantiated to Bool by some FD, the guard 
   becomes Bool/=Int, which succeeds, so the rule is now 
   applicable. if a is instead instantiated to Int, the guard 
   becomes Int/=Int, which fails.


   with negated equality, this guard cannot be decided, so the
   decision is delayed (residuation). if a is later instantiated to
   Bool, the guard is woken up, and succeeds, so the rule is
   now applicable. if a is instead instantiated to Int, the guard
   is woken up, and fails, so the rule is no longer applicable.

- consider a/=b (that is: not (a=b)).  


   with negated unification, this guard fails, because either
   variable could be instantiated to the other, so a rule with this
   guard won't apply. if the variables are instantiated later, the
   guard will change, and rule applicability changes.

   with negated equality, this guard cannot be decided, so the
   decision is delayed. if a or b are instantiated later, the guard
   is woken up, and retried.

comparisons of more complicated structures reduce to these
and ground comparisons. so it looks as if there isn't that much
difference, but we need to be careful about when rules are
suspended/woken up, or fail and are retried, and when guards 
are decided. so we certainly need to be careful about which 
disequality to choose as default, even if we are only interested 
in replacing overlaps. 

I'm not yet decided about which one is the right default for 
our problem. (of course, one might want to permit others, but 
I'm only looking for one at the moment;-).
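
to pin down the operational difference between the two candidate
defaults, here is a toy model (entirely mine, just for illustration) of
the two readings of t1/=t2 over partially known types:

   import Control.Monad (foldM)

   data Type = TVar String | TCon String [Type] deriving (Eq,Show)
   type Subst = [(String,Type)]

   -- chase variable bindings in the current substitution
   walk :: Subst -> Type -> Type
   walk s t@(TVar v) = maybe t (walk s) (lookup v s)
   walk _ t          = t

   occurs :: Subst -> String -> Type -> Bool
   occurs s v t = case walk s t of
       TVar w    -> v == w
       TCon _ ts -> any (occurs s v) ts

   unify :: Subst -> Type -> Type -> Maybe Subst
   unify s a b = case (walk s a, walk s b) of
       (TVar v, TVar w) | v == w        -> Just s
       (TVar v, t) | not (occurs s v t) -> Just ((v,t):s)
       (t, TVar v) | not (occurs s v t) -> Just ((v,t):s)
       (TCon c ts, TCon d us)
           | c == d && length ts == length us
           -> foldM (\s' (t,u) -> unify s' t u) s (zip ts us)
       _ -> Nothing

   -- negated unification: the guard fails as soon as the two types
   -- *could* be made equal by instantiation
   negUnify :: Type -> Type -> Bool
   negUnify a b = unify [] a b == Nothing

   -- negated (residuating) equality: Nothing means "suspend the guard
   -- until the variables become more instantiated"
   negEq :: Type -> Type -> Maybe Bool
   negEq (TVar v) (TVar w) | v == w = Just False
   negEq (TVar _) _                 = Nothing
   negEq _        (TVar _)          = Nothing
   negEq (TCon c ts) (TCon d us)
       | c /= d || length ts /= length us = Just True
       | Just True `elem` rs              = Just True
       | all (== Just False) rs           = Just False
       | otherwise                        = Nothing
     where rs = zipWith negEq ts us

so negUnify (TVar "a") (TCon "Int" []) is False (the rule won't apply),
while negEq on the same pair is Nothing (the decision is delayed), just
as described above.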



variables. Is 'a' equal to 'b'? It depends. We may define two types
t1 and t2 dis-equal if in any possible interpretation (for any
assignment to type variables that may occur free in t1 and t2) the
types are syntactically dissimilar. Under this definition, 'a' is not
equal to 'b'. OTOH, we may say that two types are disequal if there
exists an interpretation (i.e., an assignment to type variables) that
makes the types dissimilar. Both definitions can be useful, in
different circumstances. 


these two variations appear to be the same, though?

The _full_ HList technical report discusses these issues in detail 
(Section 9 and Appendix D).


and many other issues relevant to the present discussion. I've 
already referred to it as being full of examples illustrating why
the current situation is unacceptable, and had been hoping that
the authors would be among those arguing in favour of searching
for a more consistent picture here and now, avoiding the need for 
such hacks. after all, that's what you wrote at the end of 
Appendix D, two years ago!-)


cheers,
claus

btw, said appendix D compares 


   typeCast :: 

Suggestion: refine overlap handling for instance declarations

2006-03-14 Thread Claus Reinke
overlapping instances with best-fit overlap resolution are useful 
because they give us an alternative way to write programs that 
we cannot write directly as long as current Haskell isn't expressive 
enough:


   (a) we can't specify that two types in an instance are not equal; 

   but we can use overlap resolution to ensure that equal types 
   are handled by another, more specific instance


   (b) we can't enable library users to add instances for their types,
   yet ensure that there will be some instance for every type;

   but we can use overlap resolution to provide a default
   instance, to be overridden by more specific instances in
   library and user code (see the sketch after this list)

   (c) we can't rely on mutual exclusion properties specified in instance
   contexts to be used to select the appropriate instance because
   Haskell only looks at instance heads;

   but we can sometimes wrap our types in extra constructors
   so that the mutual exclusion is reflected in more and less
   specific instance heads, with overlap resolution choosing
   the right one (this is a real pain, brittle against change, and
   not always possible..)

   (d) ..this seems to cover most uses discussed so far, but I don't
   claim this list to be complete. if you know of examples not 
   covered by a-c, please add them by reply!
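
to make (b) concrete, a minimal sketch of the pattern as it is written
today (the Pretty class is made up; needs -fallow-overlapping-instances
and -fallow-undecidable-instances):

   class Pretty a where
       pretty :: a -> String

   instance Show a => Pretty a where     -- the overridable default
       pretty = show

   instance Pretty Bool where            -- a more specific override
       pretty b = if b then "yes" else "no"

best-fit resolution picks the Bool instance for pretty True, and the
default for every other Show-able type - but only because overlap
handling is switched on for the whole module or session, which is what
the refinement below is meant to improve.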


there hasn't been any enthusiasm about changing (c) for Haskell',
which isn't to my liking, but nothing I can do much about. I would,
however, like to continue arguing in favour of refining overlap 
handling until we get at least some progress into Haskell' (and
addenda). so I propose to add something like the following 
to Haskell':


-- we know how to handle (a) [no overlap needed]:

syntax: 

   permit type inequalities as guards in instance declarations:

   topdecls -> ..
| 'instance' [ctxt '=>'] head [ '|' ineqs ] [body]

   ineqs -> typevar '/=' type [ ',' ineqs ]

semantics (sketch):

   - add inequality guards from an instance as guards to the 
   rules for that instance in the CHR

   - add inequality guards as attributes to the typevars
   after successful guard evaluation and rule commit

remarks:

   - we only need the extra guard syntax because of (c) above
   - we restrict inequalities to variables on lhs to simplify semantics
   - attributed variables are logic programming folklore:
   variable attributes are simply goals to be reexamined whenever
   the attributed variable is instantiated; we need them here to 
   ensure that after we have selected a CHR rule based on an 
   inequality, that inequality is not violated by later instantiations


-- we can make (b) more explicit:

instead of enabling overlapping instances and overlap resolution
on a per-module or per-session basis, we can specify which
instances we want to use only as defaults:

syntax:

   topdecls -> ..
| ['default'] 'instance' [ctxt '=>'] head [ '|' ineqs ] [body]

semantics (sketch):

   - default instances may be overridden by more specific instances
   including other default instances, but no such overlap is permitted
   unless there is a single most specific instance 
   - default instances only apply if there can be no more specific 
   instances; the easiest way to model this is by collecting all
   instances for the class, selecting those that are more specific
   than some default instance, and adding type inequality guards
   that exclude those instantiations from the rule for the 
   default instance

   - no other instances are affected by overlap resolution

remarks:

   - by declaring all instances as 'default', we could recover the
   current, indiscriminate overlap handling; but we hardly ever
   want that, as declaring default instances allows us to be
   more precise, limiting the impact of overlap handling
   - once we have type inequality guards in the language, we 
   can be a bit stricter about ruling out overlaps than GHC 
   is at the moment (see Simon's earlier example)

   - default instances with ground heads (no variables) can be
   treated like non-default instances (there can't be any more
   specific instances)
   - without further analysis, application of default instances has
   to be delayed until the whole program is visible; that does
   not prevent separate compilation, it only prevents early
   application of some instances (those marked as defaults);
   in other words, separately compiled modules are 
   parameterized by the set of more specific instances for
   their default instances
   - we don't want to go through all that trouble for every
   instance in every class, which is the motivation for being
   very specific about where we want overlaps to be 
   permitted and resolved
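
to illustrate, reusing the hypothetical Pretty class from the sketch
above (proposed syntax only - no current implementation accepts this):

   default instance Show a => Pretty a where
       pretty = show

   instance Pretty Bool where
       pretty b = if b then "yes" else "no"

   -- once the whole program is visible, the default behaves as if it
   -- carried an inequality guard excluding the more specific instance:
   --
   --   instance Show a => Pretty a | a /= Bool where pretty = show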


cheers,
claus

___
Haskell-prime 

Re: alternative translation of type classes to CHR (was: relaxed instance rules spec)

2006-03-13 Thread Claus Reinke

[still talking to myself..?]

all confluence problems in the FD-CHR paper, as far as they were 
not due to instances inconsistent with the FDs, seem to be due to 
conflicts between improvement and inference rules. we restore 
confluence by splitting these two constraint roles, letting inference 
and improvements work on constraints in separate roles, thus 
removing the conflicts.


I should have mentioned that the improved confluence obtained by
separating the dimensions of FD-based improvement and instance
inference buys us a lot more than just permitting more valid programs
(compared to the original, incomplete CHR):

- separating the two dimensions of inference and improvement leads
   to better confluence (implementations are no longer forced to
   iterate improvement before continuing inference; fewer conservative
   restrictions are needed in the static semantics of TC; more valid
   code can be accepted)

- better confluence guarantees that all improvement rules that apply will 
   be run eventually, which means that the new CHR is self-checking 
   wrt FD consistency! 

   [if consistency is violated, there are at least two instances with 
   different FD range types for the same FD domain types; that 
   means there will be two instance improvement rules with the 
   same lhs, but different equations on their rhs; if any constraint 
   arises that would run into the FD inconsistency by using one 
   of those improvement rules, the other will cause the derivation 
   to fail]


we can see this in action by looking at the relevant example of the
FD-CHR paper (last revised Feb 2006), section 5.1 Confluence,
example 5:

   class Mul a b c | a b -> c
   instance Mul Int Float Float
   instance Mul Int Float Int

the old CHR for this example (which violates FD consistency) is 
not confluent, allowing derivation of both c=Float and c=Int for
the constraint Mul Int Float c. the revised paper still claims that
consistency is intuitively necessary to guarantee confluence (it
also still claims that it isn't sufficient, referring to the example we
dealt with in the previous email).

but if we apply the new Tc2CHR translation, we obtain a
confluent CHR for the same example (there doesn't appear
to be a way to switch off the consistency check in GHC, so
I had to translate the two instances separately..):

   mul(A,B,C) <=> infer_mul(A,B,C), memo_mul(A,B,C).

   /* functional dependencies */
   memo_mul(A,B,C1), memo_mul(A,B,C2) ==> C1=C2.

   /* instance inference: */
   infer_mul(int,float,float) <=> true.
   infer_mul(int,float,int) <=> true.

   /* instance improvements: */
   memo_mul(int,float,C) ==> C=float.
   memo_mul(int,float,C) ==> C=int.

now, if we consider the problematic constraint again, and its two
initially diverging derivations, we see that the derivations can be
rejoined, exposing the inconsistency:

   mul(int,float,C)
<=> infer_mul(int,float,C), memo_mul(int,float,C)
[
==> infer_mul(int,float,C), memo_mul(int,float,C), C=float
<=> true, memo_mul(int,float,float), C=float
==> memo_mul(int,float,float), C=float, float=int
|
==> infer_mul(int,float,C), memo_mul(int,float,C), C=int
<=> true, memo_mul(int,float,int), C=int
==> memo_mul(int,float,int), C=int, int=float
]
<=> fail
  
this dynamic safety does not mean that we should drop the 
consistency check in the static semantics completely! but whereas
the old CHR translation _depends_ on the consistency check for 
safety, and is therefore stuck with it, the new translation gives us 
some manoeuvring space when we try to relax that check to 
account for the combination of overlap resolution and FDs.


cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: alternative translation of type classes to CHR (was: relaxed instance rules spec)

2006-03-13 Thread Claus Reinke

Thanks, Taral,

it is good to know that I'm not just writing for the archives!-)

a paper, yes, at some point (unless someone shoots a hole in my
suggestions first;), but at the moment, I'm more concerned with
keeping my hopes for Haskell' alive, and completing my case. 

when Haskell' was announced, most of us thought that the committee 
would just collect all those old and proven extensions like MPTC, 
FDs, overlapping instances, undecidable instances, more flexible 
instances, etc., figure out the common story behind them and weave 
all of that into a coherent new standard, leaving the newer extensions 
like GADTs, ATS, etc. for future standards. unfortunately, the idea 
that well-established popular extensions implied well-defined 
behaviour turned out to be an illusion, so unless we're doing the 
work now, we're not going to have the useful standard we wanted.


which makes it all the more important to have genuine discussions
here - there are so many extensions that have been proposed and
partially implemented over the years since Haskell 98, for which 
no one is even bothering to speak up on this list (generics in their
various forms and implementations? better support for faking 
dependent types? template meta-programming? a genuine type 
Dynamic, as in Clean? ..). I am a bit worried that many 
Haskellers appear to have given up listening here, let alone 
arguing for their interests. and with the extreme timeline the 
committee is insisting on, there just won't be time to wait for
the first draft and start complaining then.

I can't argue for all the features I'm missing in the discussions so
far, but I can try to help with a few of them, and hope that others
will wake up before the committee closes the doors.

you ask about effects on existing handling of FDs: I appreciate the 
work that has gone into FD-CHR, and into the refined conditions 
now implemented in GHC head, but I cannot accept them as the 
last word (for reasons explained in previous emails, the restrictions 
are too restrictive in practice, for real programs; eg. since the 
change, I suddenly have to use undecidable instances for 
instances that are obviously decidable, which kind of defeats the
purpose of that flag; as a minimum benchmark, I'd like to be
able to use the Data.Record.hs stuff, in its simple form, without
the hacks, in whatever Haskell' turns out to be - and currently,
we are far from passing that criterion). 
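
to make that complaint concrete, a hedged example of my own (nothing
to do with the actual Data.Record code): the list argument shrinks at
every step, so instance inference obviously terminates, yet the second
instance fails the coverage condition - n does not occur in the
instance head's domain types - and so needs the undecidable-instances
flag:

   data Z = Z
   data S n = S n
   data HNil = HNil
   data HCons e l = HCons e l

   class HLength l n | l -> n
   instance HLength HNil Z
   instance HLength l n => HLength (HCons e l) (S n)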

I hope I have now explained what I meant when I said that most 
of the confluence issues were due to the translation, not inherent 
in FDs, and I intend to use this groundwork for tackling the 
combination of FDs and overlap resolution, in the way explained 
informally in my early emails here. I also hope that this simpler 
basis might help implementors to simplify and gain more confidence
in their code bases (in which these features have grown over years,
in wild combinations with other experiments, often driven only by 
examples and counter-examples).


unfortunately, tracking down the reasons for why these conditions 
were considered necessary in this form has been a slow process, 
as has been trying to show that they might not be. so it really helps 
to know that I'm not the only one who expects more from Haskell'.


having the formal specification in the FD-CHR paper, and having 
some of it implemented in GHC, is one of the best examples of the 
Haskell' process actually producing useful deliverables, and could
set the example for the other aspects of Haskell'. so I can only 
encourage Haskellers to read the paper, and to try GHC head,
and see whether they can live with the suggested limitations and
formalizations. if not, raise your voice here, now!

cheers,
claus

- Original Message - 
From: Taral [EMAIL PROTECTED]

To: Claus Reinke [EMAIL PROTECTED]
Cc: haskell-prime@haskell.org
Sent: Monday, March 13, 2006 10:57 PM
Subject: Re: alternative translation of type classes to CHR (was: relaxed
instance rules spec)


On 3/13/06, Claus Reinke [EMAIL PROTECTED] wrote:

[still talking to myself..?]


This is all wonderful stuff! Are you perhaps planning to put it all
together into a paper?

What effect do you think this can have on existing algorithms to resolve FDs?

--
Taral [EMAIL PROTECTED]
Computer science is no more about computers than astronomy is about
telescopes.
   -- Edsger Dijkstra
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: overlapping instances and constraints

2006-03-11 Thread Claus Reinke

- Haskell would need to be a lot more specific about exactly where
context reduction takes place.  Consider
f xs x = xs == [x]
Do we infer the type (Eq a) => [a] -> a -> Bool?  Thereby committing to
a particular choice of instance?  Or do we (as GHC does) infer the type
(Eq [a]) => [a] -> a -> Bool, so that if f is applied at (say) type
Char, then an Eq [Char] instance would apply.  GHC is careful
to do the latter.


is there actually anything unusual about this example? you're saying that
there are at least two instances for Eq:

   instance Eq a => Eq [a]
   instance Eq [Char]

and best-fit overlap resolution demands that we never apply the first if
the second fits, too. we just can't apply any rules unless we have established
that there are no more specific rules.

GHC enables overlap resolution on a per-module basis, Hugs on a per-session
basis, so this means that we can never apply any rules unless they have ground
heads (no variables), or we've seen the whole program. which is somewhat of
an obstacle, suggesting that we would want to be more specific about enabling
overlap resolution, and use any trick we know to figure out when we no longer
have to wait for further, more specific instances (or be content to delay most 
of instance inference to the compilation of Main).


in "Functional Logic Overloading" (POPL 2002), Neubauer et al. suggested to
enable overlap resolution on a per-class basis, which seems sensible. even
enabling overlap on a per-instance basis might be worth looking into (that 
is, we would annotate those non-ground instance declarations which we 
want to be allowed to be overridden by more specific declarations 
elsewhere). with such finer control, all unannotated instances/classes 
might become fair game for earlier rule applications.


we can further limit the scope of overlaps by using the module system
to close a set of class instances (this is useful independent of overlaps,
as it allows us to infer when there will be no instance for a given 
constraint): a set of instances for a class is closed if the class is neither 
exported nor imported, and if, for any instances with instance contexts, 
the context constraints are closed as well. [if the class isn't available 
outside this module, there can't be any direct instances outside, and 
if the instance context constraints are closed as well, there can't be 
any indirect instances generated from outside (*)]. 


of course, simply annotating a class or class export as closed, and
letting the compiler ensure that there are no further direct instances, 
would be somewhat simpler. I'm not sure what to do about further
indirect instances in this case?

cheers,
claus

(*) what do I mean by indirect instances:
   module A () where { class Closed a; instance X a => Closed a }
   module B where { import A; instance X t }
   even if Closed is not exported, creating instances for X indirectly
   creates instances for Closed.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: alternative translation of type classes to CHR (was: relaxed instance rules spec)

2006-03-08 Thread Claus Reinke
a second oversight, in variation B: CHR rules are selected by matching, 
not by unification (which is quite essential to modelling the way type 
class inference works). this means that the idea of generating memo_
constraints for the instance fdis and relying on the class fdi rules to
use that information is not going to work directly. 


however, we can look at the intended composition of those fdi instance
rules with the fdi class rules, and specialize the latter when applied to 
the rhs of the former (assuming unification while doing so).


!!
the nice thing about this is that variation B now looks very much like
the original translation, differing only in the splitting of roles, without
any other tricks merged in. that means it should now be more obvious
why variation B is a modification of the original translation with better
confluence properties. 

all confluence problems in the FD-CHR paper, as far as they were 
not due to instances inconsistent with the FDs, seem to be due to 
conflicts between improvement and inference rules. we restore 
confluence by splitting these two constraint roles, letting inference 
and improvements work on constraints in separate roles, thus 
removing the conflicts.


= Tc2CHR alternative, with separated roles

   class C => TC a1..an | fd1,..,fdm

   where fdi is of the form: ai1..aik -> ai0

   -  TC a b <=> infer_TC a b, memo_TC a b, C. (two roles + superclasses)

   -  memo_TC a1..an, memo_TC th(b1)..th(bn) ==> ai0=bi0. (fdi)

where th(bij) | j>0 = aij
  th(bl)  | not (exists j>0. l==ij) = bl 


= Variation B (separate instance inference/FD improvement):

   instance C => TC t1..tn

   - infer_TC t1..tn <=> C.   (instance inference)

   - memo_TC th(b1)..th(bn) ==> ti0=bi0. (fdi instance improvement)

where th(bij) | j>0 = tij
  th(bl)  | not (exists j>0. l==ij) = bl 


=

in particular, the new CHRs for examples 14 and 18 (coverage violations,
hence not variable-restricted, hence confluence proof doesn't apply)
should now be confluent, because even after simplification, we can still use 
the class FDs for improvement.


here are the relevant rules for example 14:

   /* one constraint, two roles + superclasses */
   eval(Env,Exp,T) <=> infer_eval(Env,Exp,T), memo_eval(Env,Exp,T), true.

   /* functional dependencies */
   memo_eval(Env,Exp,T1), memo_eval(Env,Exp,T2) ==> T1=T2.

   /* instance inference: */
   infer_eval(Env,expAbs(X, Exp),to(V1, V2)) <=> eval(cons((X, V1), Env), Exp, V2).

   /* instance improvements: */
   memo_eval(Env_,Exp_,T_) ==> T_=to(V1, V2).

and the troublesome example constraints:

   eval(Env,expAbs(X,Exp),T1), eval(Env,expAbs(X,Exp),T2).
->
   infer_eval(Env,expAbs(X,Exp),T1), infer_eval(Env,expAbs(X,Exp),T2), 
   memo_eval(Env,expAbs(X,Exp),T1), memo_eval(Env,expAbs(X,Exp),T2).

[
-> [class FD first]
   infer_eval(Env,expAbs(X,Exp),T2), memo_eval(Env,expAbs(X,Exp),T2),
   T1=T2.
|
-> [instance improvement and simplification first]
   eval(cons((X,V11),Env),Exp,V12), eval(cons((X,V21),Env),Exp,V22), 
   memo_eval(Env,expAbs(X,Exp),T1), memo_eval(Env,expAbs(X,Exp),T2),
   T1=to(V11,V12), T2=to(V21,V22).
]

-> [rejoin inferences]
   eval(cons((X,V21),Env),Exp,V22), 
   memo_eval(Env,expAbs(X,Exp),T2),
   T1=T2, T2=to(V21,V22).
-> ..

cheers,
claus

ps I've only listed the updated variation B here, to limit confusion. if you 
   want the updated code and full text, you should be able to use

   darcs get http://www.cs.kent.ac.uk/~cr3/chr/

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: overlapping instances and constraints

2006-03-08 Thread Claus Reinke
there were a couple of issues Simon raised that I hadn't responded to in 
my earlier reply. since no-one else has taken them on so far, either, ..



- Haskell would need to be a lot more specific about exactly where
context reduction takes place.  Consider
f xs x = xs == [x]
Do we infer the type (Eq a) => [a] -> a -> Bool?  Thereby committing to
a particular choice of instance?  Or do we (as GHC does) infer the type
(Eq [a]) => [a] -> a -> Bool, so that if f is applied at (say) type
Char, then an Eq [Char] instance would apply.  GHC is careful
to do the latter.


my general idea about that would be never to commit unless we know it
is the only choice. which seems to be in line with what GHC is doing in
this case. of course, it follows that we'd like to be able to specify choices
unambiguously, to avoid delayed commits.


Concerning using the instance context, yes, it's attractive, but it
involves *search* which the current mechanism does not.  Presumably you
have in mind that the type system should commit only when there is
only one remaining instance declaration that can fit.  You need to be
very careful not to prune off search branches prematurely, because in a
traditional HM type checker you don't know what all the type are
completely.  And you need to take functional dependencies into account
during the search (but not irrevocably).   I have not implemented this
in GHC.  I don't know anyone who has.   I don't even know anyone who has
specified it.


search, yes, but with deterministic result (similar to HM inference). so 
the main issue is that we need to be able to perform inferences without 
committing to their conclusions, or setting up encapsulated inference 
processes with their own assumptions. which isn't surprising given that 
we're dealing with implications, or type class functions, where the usual 
proof rule is "if we can prove the conclusions assuming the prerequisites, 
then we have proven the implication". 

that may be substantially more complicated to implement, but is just what 
Prolog, or even simple HM type inference for functions, have been doing 
for a long time. and it is a pain to see the current situation, where Haskell 
implementations treat the conclusions as if there were no pre-requisites
(Haskell: these instances are overlapping; Programmer: no, they are not,
just look at the code!).

can we agree, at least in principle, that in the long term this needs to change?

since the general implementation techniques aren't exactly new, are there 
any specific reasons why they couldn't be applied to type classes? we'd
have a state for the constraint store, and a backtracking monad with 
deterministic result for the inference, just as we have for implementing 
HM inference. 

if we want a more efficient, more low-level implementation, we could 
use the WAM's idea of variable trails (proceed as if there was no 
search, but record all variable substitutions, so that we can undo them 
if it turns out that this branch fails). or is there a pragmatic issue with 
current implementations of those type classes, having grown out of 
simpler type class beginnings, and having grown so complex that they 
couldn't go in that direction without a major rewrite?
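
as a toy illustration of search with deterministic result (all names
are mine, and this is no-one's actual inference engine): explore every
candidate branch, but only commit when exactly one survives:

   import Data.Maybe (mapMaybe)

   -- 'try' runs one branch of the search; Nothing means the branch fails
   commitIfUnique :: (cand -> Maybe res) -> [cand] -> Maybe res
   commitIfUnique try cands = case mapMaybe try cands of
       [res] -> Just res   -- a single branch survives: commit to it
       _     -> Nothing    -- zero or several: don't commit (yet)

a real implementation would thread a constraint store through the
branches (or use a WAM-style variable trail to undo bindings), but the
commit discipline is the point here.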


in the short term, I'd be quite willing to aim for a compromise, where
we'd not look at all constraints in the context, but just at a few specific 
ones, for which we know that the search involved will be very shallow.

whether to do that via strictness annotations in contexts, as Bulat has
suggested, or by reserving a separate syntactic position for constraints
known to have shallow proofs, is another question.

the outstanding example of this would be type inequalities, which I'd
really like to see in Haskell', because they remove a whole class of
instance overlaps. and with FDs, one can build on that foundation.

I'm not sure I have a good handle on understanding when or how searches
could be hampered by incomplete types. naively, I'd expect residuation, ie,
delaying partially instantiated constraints until their variables are specific
enough to proceed with inference. I think CHR already does this. 

if that means that instance context constraints cannot be completely 
resolved without looking at concrete uses of those instances, then 
we'd have a problem, but no more than at the moment. and I suspect
that problem will not be a showstopper. on the contrary, it may help
to keep those searches shallow.

from my experience, it seems quite possible to arrange instance 
contexts in such a way that even such incomplete resolution will be 
sufficient to show that they ensure mutual exclusion of their instances 
(type inequality, type-level conditional, FDs, closed classes, ..). 
which would be all that was needed at that point.


once we start looking, we could probably find more ways to help
such incomplete inferences along. eg, if there was a built-in class
Fail a (built-in only so that the system could know there can be
no instances of it).. 

Re: Keep the present Haskell record system!

2006-03-06 Thread Claus Reinke

my own opinion is that this scheme is like classes - they can be
resolved at compile time in most real cases, but no one does it because
the code would be too large. if some function can accept any record which
has field 'a', then to use this function on records of different types
we need either to do specialization or use a scheme with non-constant
access time


for those who haven't seen it, the following paper explored the former
possibility with good success (at a time when type classes were 
still somewhat simpler:):


   Dictionary-free Overloading by Partial Evaluation
   Mark P. Jones, ACM SIGPLAN Workshop on Partial 
   Evaluation and Semantics-Based Program Manipulation, 
   Orlando, Florida, June 1994.

   http://www.cse.ogi.edu/~mpj/pubs/pepm94.html
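
for a present-day taste of the same idea, GHC's SPECIALIZE pragma asks
the compiler to generate a dictionary-free copy of an overloaded
function at a chosen type - a minimal sketch:

   sumSq :: Num a => [a] -> a
   sumSq = sum . map (\x -> x * x)
   {-# SPECIALIZE sumSq :: [Int] -> Int #-}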

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


partial type signatures/annotations/declarations..

2006-03-04 Thread Claus Reinke

when trying to look up the state of this proposal, I noticed:

- there seem to be two parallel versions (probably "signatures"
   is an older form, and all references ought to point to "annotations"
   instead?):

   http://hackage.haskell.org/trac/haskell-prime/wiki/HaskellExtensions
   points to:
   http://hackage.haskell.org/trac/haskell-prime/wiki/PartialTypeSigs
   http://hackage.haskell.org/trac/haskell-prime/ticket/40

   whereas

   http://hackage.haskell.org/trac/haskell-prime/report/9
   points to:
   http://hackage.haskell.org/trac/haskell-prime/wiki/PartialTypeAnnotations
   http://hackage.haskell.org/trac/haskell-prime/ticket/86

- the proposal focusses on using _ as a placeholder for unspecified 
   parts of a type or context, without discussing the alternative I'd favour:

   instead of introducing holes in types and contexts to leave
   parts of a declaration unspecified, why not use type subsumption?

   the idea being, that instead of declaring the precise type, one
   would declare an upper bound on the type, and the precise type
   inferred needs to be subsumed by the one declared. type and 
   context syntax would remain unchanged, but in addition to ::
   for precise type annotations, there'd be ::: (or <::, or whatever)
   to indicate the difference in intended semantics.

   pro: would easily allow for omission of type details or parts
   of context (a type with more context, or with more specific
   type components, is subsumed by the declaration)

   cons: as long as we only specify an upper bound, the inferred
   type could be more specific than we'd like (we can't say
   that we don't want context, or that some type variable
   must not be instantiated)
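
to make the two styles concrete (illustrative only - neither line is
valid in any current implementation):

   f :: _ -> Bool           -- the wiki proposal: holes for unspecified parts
   g ::: Eq a => a -> Bool  -- the alternative: ':::' declares an upper bound;
                            -- an inferred (Eq a, Ord a) => a -> Bool, or the
                            -- more specific Int -> Bool, would still satisfy it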

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: overlapping instances and constraints

2006-02-28 Thread Claus Reinke

instance C2 a b | a/=b


I was thinking it would be all kinds of useful if we had two predefined
classes

class Eq a b
class NEq a b

where Eq has instances exactly when its two types are equal and NEq has
instances exactly when its two types are not equal.


   class Eq a b
   instance Eq a a

   class NEq a b
   instance Fail a => NEq a a
   instance NEq a b

   class Fail all -- no instances

I think I first saw that class Fail trick in an HList talk. but having those
instances doesn't help if they are not used (eg, by following instance
constraints, to aid in overlap resolution, or to confirm FDs; or simply
because the system doesn't use the fact that Fail never has instances).
Even just extending Eq/NEq to type-level predicates (with a 3rd,
functionally dependent parameter) runs into trouble.
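
for reference, the HList-style encoding of the 3-parameter predicate,
reproduced from memory (so details may differ from the paper): the FD
can only be confirmed by routing the failure case through the TypeCast
indirection, which is exactly the kind of trouble meant above (needs
overlapping and undecidable instances):

   data HTrue  = HTrue
   data HFalse = HFalse

   class TypeEq a b r | a b -> r
   instance TypeEq a a HTrue
   instance TypeCast HFalse r => TypeEq a b r     -- overlapped default

   class TypeCast   a b   | a -> b, b -> a   where typeCast   :: a -> b
   class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
   class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
   instance TypeCast'  () a b => TypeCast a b   where typeCast x = typeCast' () x
   instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
   instance TypeCast'' () a a where typeCast'' _ x = x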

I'd prefer to extend the language so that those uses become expressible,
but for the short term, it'd be nice if the predicates _and_ their uses
were built-in. hence the special syntax to indicate that this predicate is
actually looked at when checking the instance.

cheers,
claus


Eq should be straightforward to implement, declaring any type
automatically creates its instances. (sort of an auto-deriving). NEq
might be more problematic as that would involve a quadratic number of
instances so its implementation might need to be more special. but
perhaps we can do with just 'Eq'.

   John

--
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime 


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-02-28 Thread Claus Reinke

You addressed this to me -- but I'm an advocate for rather conservative
extensions whereas you are calling for extensions that go beyond what
any current implementation can do.


generally, that may be true,-) but in this specific case, I was just asking 
for an effort to document the differences in current handling of extended
Haskell in hugs and ghc, by collecting test cases such as those I included. 

whether or not the haskell' process manages to help eradicate those 
differences is another matter, but collecting them seems a useful basis 
for decisions, hence a task rather than a proposal. I addressed that to 
you because (a) I was hoping this more moderate approach would be 
down your alley and (b) because you have a stake in this at ghc hq, 
and probably would want to collect the test cases for possible fixing.


I guess I will create the ticket myself, but if no committee member
or implementer has a stake in it, that won't do much good..


Anyway, there is already a ticket for overlapping instances, I think --
why don't you just add to that.


that might work. apart from the fact that I really, really hate the 
braindead wiki markup processor, especially when editing through
that tiny ticket change field instead of loading up text. I went through
that experience once, when Isaac suggested the same for my labels
proposal - I don't want to have to do that again.


If you send me Wiki-marked-up text I'll gladly paste it in for you.


perhaps I'll just restrict myself to attaching my example code to
some ticket (are guests allowed to update attachments?). will see..

thanks,
claus

| -Original Message-
| From: Claus Reinke [mailto:[EMAIL PROTECTED]
| Sent: 25 February 2006 15:33
| To: Simon Peyton-Jones
| Cc: haskell-prime@haskell.org
| Subject: Re: the MPTC Dilemma (please solve)
| 
| | Is the behaviour of GHC with -fallow-undecidable-instances (and

| | -fcontext-stack) well-understood and specifiable?
| I would not say that it's well-specified, no.
| 
| to start improving that situation, could we please have a task ticket
| for "document differences in unconstrained instance handling", then
| ask everyone to attach source examples showing such differences?
| [can guests attach code to task tickets?]
| 
| the hope being, of course, that implementations nominally providing
| the same feature will eventually converge on providing the same
| interpretation of all programs using that feature.
| 
| an example of the current oddities (at least they seem odd to me;):
| 
| both hugs and ghc claim to resolve overlapping instances in favour
| of the most specific instance declaration. both claim that functional
| dependencies uniquely determine the range types from the domain
| types. but they do not agree on which programs to accept when
| we try to combine best-match with FDs.
| 
| I've already given an example where ghc allows me to define
| record selection, while hugs complains that the overlap violates
| the FDs.
| 
| I reported that as a hugs bug, because I think the best-match
| resolution of overlaps should ensure that the FD violation cannot
| happen, so the code should be valid. there are different ways to
| interpret FDs (something to check, or something to use), but it
| seemed that ghc was doing the right thing there. thread start:
| 
| http://www.haskell.org//pipermail/hugs-bugs/2006-February/001560.html
| 
| but after further experimentation, I'm no longer sure that ghc
| is doing the right thing for the right reasons. here is a tiny example
| of one of the disagreements:
| 
| {- ghc ok
|    hugs "Instance is more general than a dependency allows" -}
| 
| class C a b | a -> b
| instance C a b
| 
| so what is ghc doing there? is it going to guarantee that b will
| always be uniquely determined?
| 
| {- ghc ok
|    hugs "Instance is more general than a dependency allows" -}
| 
| class C b | -> b where c :: b
| instance C b where c = error "b"
| 
| safely m = m `CE.catch` print
| main = do
|   safely $ print $ (c::Int)
|   safely $ print $ (c::Bool)
|   safely $ print [id,c]
| 
| oh, apparently not. unless b is uniquely determined to be universally
| quantified, and the instantiations happen after instance selection.
| 
| {- ghc ok
|    hugs "Instance is more general than a dependency allows" -}
| 
| class C b | -> b where c :: b
| instance C b where c = error "b"
| 
| class D a where d :: a -> String
| instance C a => D a where d a = "a"
| instance C Int => D Int where d a = "Int"
| 
| -- try at ghci prompt:  (d 1, d (1::Int))
| -- gives: ("a","Int")
| 
| so that parameter of C isn't all that unique. at least not long enough
| to influence instance selection in D.
| 
| comments?
| 
| cheers,
| claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: overlapping instances and constraints

2006-02-27 Thread Claus Reinke


[I'll only address some of your issues in this message, as they fall nicely
under the use of a feature I'd like to see anyway: type disequality
constraints]


- A program that type checks can have its meaning changed by adding an
instance declaration
- Similarly adding import M() can change the meaning of a program (by
changing which instances are visible)


yes, these are consequences of the "start with overlapping declarations, 
then avoid overlapping instances by selecting the most specific
declaration" approach.


we could avoid that, by using disequality constraints, as some other 
constraint logic systems have done. that way, for many examples, there 
wouldn't be any overlapping instances in the first place:


   class C a where c :: a -> String
   instance C [a] | a/=Char where c as = ..   -- dealing with most lists
   instance C String         where c s  = ..  -- special case for strings

[note that the special syntax for disequality constraints (borrowing from 
FDs) here is only needed as long as instance contexts are ignored;
otherwise, type disequality would just be a built-in binary type class,
and the instance would look like this:
instance (a/=Char) => C [a] where c as = ..
]

we could now rule out any overlapping instance declarations, while
still permitting instance declarations covering the gaps left by the
disequality constraints.

this should work well for uses of overlapping instances as local conditionals,
but it would rule out the popular pattern of extensible type case with default
rule. the latter explicitly depends on specifying a default instance that may
be replaced by more specific instances in future modules.

so we can avoid these issues for some use cases, and that may be worth
trying out as a first step, but if we want all use cases, it seems we will have
to live with those consequences.


- When exactly is overlap permitted?  Is this ok?
instance C a Int
instance C Bool b
Previously GHC rejected this, on the grounds that it could be ambiguous
when you came across (C Bool Int).  But now GHC accepts it, on the
grounds that (C Bool Char) is quite unambiguous.  


again, a consequence of the best-match rule. but note that in examples
like this, there are two levels of overlap: the first level is resolved by
the best-match rule, the second _remains_ overlapping. so GHC is faced
with the choice of rejecting the declarations outright because they _might_
run into this overlap, or waiting to see whether they _will_ run into it.

this could actually be cured by using disequality constraints:

   instance C a Int | a/=Bool
   instance C Bool b | b/=Int
--instance C Bool Int -- we can declare this if we want it

even without ruling out overlapping instance declarations, this excludes
the problematic case while permitting the unproblematic ones.

just as unification allows us to prefer specific type instances, disequality
allows us to avoid specific type instances, so we would be able to state
_only_ the first, resolvable, level of overlap in this example, without
having to deal with the by-product of the second, unresolvable, and
therefore possibly erroneous level of overlap.

the other issues you raise are just as important, but won't be addressed
as easily, so I leave them for later (and possibly for others;-).

cheers,
claus

ps I don't know whether additional references are helpful or needed,
   but google for "disequality constraints" or for "disunification" (see
   also "constructive negation").

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-02-27 Thread Claus Reinke


continuing the list of odd cases in type class handling, here is a
small example where overlap resolution is not taken into account
when looking at FDs. 

context: both hugs and ghc resolve instance overlaps in favour 
of the most specific declaration.


so the following works in both ghc (darcs, 25022006) and 
hugs (minhugs 20051031):


   {- both ghc and hugs accept without 3rd par and FD
  neither accepts with 3rd par and FD -}

   data T = T deriving Show
   data F = F deriving Show

   class TEQ a b {- tBool | a b -> tBool -} where teq :: a -> b -> Bool
   instance TEQ a a {- T -} where teq _ _ = True
   instance TEQ a b {- F -} where teq _ _ = False

   test = print (teq True 'c', teq True False)

and both print (False,True), so best-fit overlap resolution entirely 
determines which instance to choose! 

now, if we uncomment the third class parameter, they both complain 
(as expected, though about different things).


however, if we also uncomment the functional dependency, to fix 
the ambiguity wrt the 3rd parameter, both complain that this FD 
is in conflict/inconsistent with the instances!

as far as i understand it, the potential inconsistency should have 
been eliminated by the best-fit overlap resolution, so I submit this
is a bug (and unlike the earlier example I submitted to hugs-bugs,
this fails with both hugs and ghc; and it is less open to alternative
interpretations, I hope).

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


public/private module sections (was: Haskell-prime Digest, Vol 2, Issue 58)

2006-02-24 Thread Claus Reinke

I feel unkeen.


you will notice that I haven't actually proposed adopting this (yet:-);
neither did Simon M for his original version. so far, I had thought
Haskell's export/import language quite limited, but usable and simple.
so apart from fixing the asymmetries between export and import, and
adding a few missing features, I wasn't expecting much change.

So, I was surprised to see that following the route of sections and
dropping export lists altogether might actually simplify the language and
accommodate other proposed variations more easily than the current
system. it isn't often that we find such opportunities in language design.

yes, this would add one constraint on where to place definitions. but
grouping logically related definitions together is not quite what one 
might think anyway: aren't the definitions making up the interface
most strongly related, with the others just changeable auxiliaries?
and how do you flatten the graph of mutually related (according to
one of many possible criteria) definitions, without separating some
of them? even refactoring does not solve this - you can't rewrite
your code (even automatically) any time you want to take a different
view on your sources. 


in other words, you can only _partially_ support _one_ of many
relations by actual proximity. everything beyond falls firmly into
tool support for virtual proximity (creating useful views of your
sources on the fly, without changing the sources, as needed).

for instance, I really dislike the public/private modifier idea
because it splatters logically related items from the export interface
and the definition of said interface all over the source code, so I 
need a tool to gather the interface definition back together;-)


even with the current system, I constantly need tool support
to keep one auxiliary view on the module header while editing 
its body, and another auxiliary view on the definitions of any
imported items I might be using. and though I always start with 
related definitions close together, it usually doesn't take long 
before that fails for some reason or other (not to mention that 
my view of what should be related changes all the time 
depending on what I'm doing).


the other problem you mention is that the export section would contain
either code (rather than just names) or synonym definitions
(rather than just names). that is true, and I don't particularly like
this (especially the second bit), but I can't see yet to what extent 
that problem would bite in practice. code navigation should 
certainly not be an issue here (even the ageing vim supports tag
stacks, and ghc head has supported tag file generation for some
time now; hmm, that reminds me that we should have hugs-style
editor integration in ghci..).

as for Haddock, it seems to have won the fight for the documentation
niche, so it would be nice to have it available with every Haskell
installation. but generally, the availability of a specific tool is not a
prerequisite for aiming for a balance between language and tool
design. you just need commitment to building/distributing and
maintaining some tool to cover the issue (e.g., both ghci and hugs
support the :browse module command, which is about the
easiest way to extract the interface; and that has been used in
at least one editor mode, without needing Haddock).
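
for instance (a sketch - the exact output format differs between
implementations and versions):

   Prelude> :browse Data.Maybe
   catMaybes :: [Maybe a] -> [a]
   fromJust :: Maybe a -> a
   fromMaybe :: a -> Maybe a -> a
   ...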

what is a prerequisite is that the language definition does not
ignore tools, and that -apart from balancing the responsibilities
of language and tools- the definition provides foundations on
which tool building would be supported more easily than today.

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: public/private module sections (was: Haskell-prime Digest, Vol 2, Issue 58)

2006-02-24 Thread Claus Reinke

so:


not quite (though I believe that would be close to Simon M's idea).

in my modification, both map and length would move completely
into the export section, length# would stay in the local section.
both sections would just be module bodies, containing full
definitions, declarations, imports. to export anything, move it
to the export section, to hide anything, move it to the local 
section (and in case that wasn't clear, these would be two 
sections of the same module body, only distinguished by
whether or not their contents are exported).

cheers,
claus


-- |iterate function over list
map :: (a->b) -> [a] -> [b]

-- |find length of list
length :: [a] -> Int

private:

map f (x:xs) = f x : map f xs
map f [] = []

length xs = length# xs 0#

length# (x:xs) n# = length# xs (n# +# 1#)
length# [] n# = n#


and in order to see map's type or comment while i implement it, i
have to look at the other part of the file. i personally prefer to have
public/private modifiers on each function and to gather interface
documentation with tools like haddock


--
Best regards,
Bulat  mailto:[EMAIL PROTECTED]


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: public/private module sections (was: Haskell-prime Digest, Vol 2, Issue 58)

2006-02-24 Thread Claus Reinke

  i personally prefer to have
  public/private modifiers on each function and gather interface
  documentation by tools like haddock
 Me too.



having to type one of public or private at each
function site would get really tedious...


you mean as in public static void main(String[] args) { ..}
instead of main args = ..?-) there are such languages, and
I'm happy to say Haskell isn't one of them! 


also remember that you'd need to add public and private
to more than just function definitions:

public class C a 
   where

   public m1 :: a
   private m2 :: a - String

public infixl :@
private infixl :@@ -- internal applications

public data Expr a = public Var a
 | Expr a (public :@)  Expr a
 | Expr a (private :@@) Expr a
deriving (private Show, public Eq)

private data Rec a = public Rec{ private distance :: a
   , public x :: a
   , public y :: a}
   deriving (public Show)

private -- please, no!-)


the nice thing about Haskell syntax is that it is fairly quiet,
there isn't much that doesn't have to be there or could distract
from the essentials of the code. please, let's keep it that way.

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


public/private module sections (was Re[2]: Export lists in modules)

2006-02-23 Thread Claus Reinke

module M exports

class Eq a where
  (==) :: a -> a -> Bool

data T :: * -> *
f :: T -> Int
mkT :: Int -> T

where -- implementation below here
--


SM The main difference is that I'm doing away with parentheses, commas, and
SM export specifiers, and using layout and full declaration syntax instead.
SM (I don't really want to discuss this very rough idea any more though,
SM it's just a distraction, and I'm not sure I like it anyway).


let's have a closer look before we dump this again, shall we?-)

if you'd go one step further, you'd replace the public/private modifiers
with public/private module sections. I don't like the modifiers, and I'm 
uncertain about the intermediate form you suggested, but I might be 
able to live with two-section modules:


nothing is duplicated, the public and private items are clearly grouped.
if you like to make the public section look more like an interface, you 
only use short definitions in it (could this be made to work for data type
constructors/class members?).

another neat consequence not available with the other alternatives:
instances could be private (without having to be named), instead of 
flooding into importing modules. 

proper interfaces would still be nice (and useful!), but the general 
opinion seems to be that they're beyond Haskell' (what isn't, really?-).


about tools:
tools can relieve some of the pressure on the language design (which is
why I'm more in the 'a language definition should not ignore tools' camp
now!-).

but the wrong language design can make the tools' job awkward, so it 
may be useful to look at what a tool can/cannot do:


1 ides may let you browse interfaces, but they don't give a standard form
   of printing those interfaces with the modules they belong to, nor even
   a standard form of those interfaces..

2 ides let you see different locations in your sources in a single window
   (I use this whenever I need to modify the imports while editing in the
   middle of my module; I sometimes use this to see the definition of the
   data types I'm working on). that means that the link between definitions
   and import lists can be implicit, and browsing may bypass import lists/
   export interfaces entirely. that holds for both browsing and navigation.

3 ides can add/remove items from the imports remotely (HaRe does 
   this). they can also generate explicit export/import lists (even my old 
   Hugs.vim supported that), and add types in comments, but if they do 
   so, it is not clear what to do with comments already present (update 
   any types in them? what if they were meant to document alternative/
   old/coming versions..; leave any unidentified comments alone? then
   we'll duplicate type comments as soon as someone adds any text to
   the automatically generated non-interface..)

4 ides can use tooltips to tell you whether the stuff you're looking at
   is exported or not, but again, that won't usually make it into printouts.

look at the problem from this perspective, and you see that

- haskell98 fails 1/4, profits from (and really needs) 2, doesn't support 
   3 all that well
- public/private modifiers only help with 4, part of their job is covered 
   by 3 already
- public/private sections solve 1/4, may still use 2/3 


so, public/private sections seem to support all pragmatic issues, and
since they're still just haskell code, don't suffer from duplication/scoping/..
issues, either. in fact, we might end up simplifying the syntax by removing
export lists! but exposing only some data constructors of a data type 
would be awkward?


cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: public/private module sections (was Re[2]: Export lists in modules)

2006-02-23 Thread Claus Reinke
let's go through 5.2 Export Lists to see what would be missing 
if we tried to replace the export list with a separation of a module
into a public (exported) and a private (local) part:

---
   module M
   exports
      body
   where
      body
---

1. value, field name, or class method, whether declared in the 
   module body or imported, may be named by giving the name 
   of the value as a qvarid


   the easiest way to do that is to put the definition of that name
   or a synonym for it in the export section. to reexport names
   from other modules, import those names in the export section.
   
   that approach gets awkward if we only want to export some 
   field names of a data type.

   [ISSUE 1]

2. algebraic datatype T declared by a data or newtype declaration 
   - The form T names the type but not the constructors or 
   field names. 
   
 declare T itself in the local section, declare a type synonym
   for T in the export section (see the sketch after this list)


   - The form T(c1,...,cn), names the type and some or all of its
   constructors and field names. 

   declare T in the local section, and synonyms for T and for 
   some of its fields/constructors in the export section. 


   again, the latter is awkward. worse, it doesn't work when
   constructors are used in patterns. 
   [ISSUE 2]


   - The abbreviated form T(..) names the type and all its 
   constructors and field names 


   declare T in the export section

3. A type synonym T declared by a type declaration 


   declare T in the export section

4. A class C with operations f1,...,fn declared in a class declaration
   - The form C names the class but not the class methods. 


   declare C in the local section. declare a synonym for C in
   the export section. 

   (it is strange that Haskell 98 allows us to export a class 
   without its methods, yet without restricting its use; 
   instantiating such a class would only make sense if all 
   hidden methods had default definitions, wouldn't it?

   so perhaps the class synonym would only need to be
   one-sided: for use, not for adding instances?).

   - The form C(f1,...,fn), names the class and some or all of its methods. 

   declare C in the local section, declare a partial synonym for 
   C, including some of its methods, in the export section. 

   (again, it doesn't seem to make much sense to make that 
   more than a one-sided synonym; see previous point).


   - The abbreviated form C(..) names the class and all its methods

   declare C in the export section

5. The form module M names the set of all entities that are in scope 
   with both an unqualified name e and a qualified name M.e. 


   for re-exports M, import M in the export section.
   for the current module: ??
   the current module seems to need syntax at first, until we realize that 
   this case is already handled by the split into export/local section.


   for imports we don't want to re-export, import them in the local 
   section (so imports need to be allowed in two places, at the 
   beginning of the exported section and at the beginning of the 
   local section (but that seems to be no problem, more relaxed 
   versions of import have been suggested here).


   note that it is no longer possible to export a module M that
   has not been imported. so the description of that error in the 
   report can be removed.
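
to make the synonym technique from points 2 and 4 concrete, here is
a sketch in the proposed two-section syntax (the syntax itself is
hypothetical, of course):

   module M
   exports
      type T = T'          -- names the type, hides its constructors
      mkT :: Int -> T
      mkT = MkT'
   where
      data T' = MkT' Int   -- the real declaration stays private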


--- 
so far, so good: if we can resolve the issues mentioned above,
there is a lot of simplification in return:


   - no export lists
   (so the language definition becomes simpler, and if some 
   future standard tackles module interfaces, they won't have 
   to compete with overlaps between export lists and interfaces)


   - no need to duplicate features of import lists in export lists, 
   as import lists in export sections serve the same purpose 


   - less potential for errors

but that's not the end of the advantages: compared to other
proposals on the table, there is no duplication of type signatures,
nor of export information, and whether or not an item is
exported is directly visible from its presence in either section.
moreover:

6. (cf. 5.4 importing and exporting instances)

   to export an instance, declare it in the export section.
   to avoid exporting an instance, declare it in the local section.
   to import instances for re-export, import them in the export
   section.
   to import instances *without re-exporting* them, import
   them in the local section! (hooray!-)

   this is not a perfect success, however: we have selective
   export of instances, but not selective import - we have no 
   chance to import names from a module M without importing 
   its exported instances as well.

   [ISSUE 3]

the more I think about it, the more I like it (which probably
means that there is some ugly part of it that I'm not thinking
about;-). the outstanding 

Re: superclass implications

2006-02-22 Thread Claus Reinke

  class Monad m => MonadPlus m   if  ..oops..

if Monad m, then declare MonadPlus m as follows..


This gloss doesn't make sense. The act of declaration is a constant 
static property of the module, and cannot be conditional on the property 
of a variable. The module _always_ declares the class.


would be nice, wouldn't it? and since section 4.3.1 Class Declarations
skirts the issue, one might assume that it does (*). but if you look through
4.3.2 Instance Declarations, you'll find:

   1. .. In other words, T must be an instance of each of C's superclasses 
   and the contexts of all superclass instances must be implied by cx'. 


and

   If the two instance declarations instead read like this: 
   ...
   then the program would be invalid. 


in other words, whether or not the superclass instances exist does not
just affect whether or not the subclass instances exist, it affects whether
or not the instance declaration, and hence the whole program, is valid.
if you don't have any m for which Monad m holds, you won't be able
to declare any instances of MonadPlus m.

it doesn't matter whether you ever use those instances. this program
is simply not valid (but adding an A Int instance makes it so):


   class A x
   class A x => B x
   instance B Int 

(*) granted, the class declaration alone might still be considered valid, 
but you couldn't actually use it for anything, so I'm not sure that makes 
a difference. and whether or not the instance declaration is statically 
valid _is_ conditional on the existence of other instances.


it is this early/eager checking of superclass constraints that I find odd,
and different from all other constraint handling. it means that I can't use
superclass constraints to lift out common method constraints, because

   class ctxt => B x where {m1 :: t1; ..; mn :: tn}

is not equivalent to 


   class B x where {m1 :: ctxt => t1; ..; mn :: ctxt => tn}

[even if the conditions for variables in contexts would not rule that out]
whereas such lifting is possible for common constraints in instances.
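
concretely (a sketch; as noted above, the second form is not even
legal Haskell 98, because the context constrains only the class
variable):

   class Eq x => C x where m :: x -> x -> Bool
      -- Eq x is checked when an instance of C is declared

   class C' x where m' :: Eq x => x -> x -> Bool
      -- the Eq x obligation would only arise at uses of m'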

it also means that I have to provide superclass instances at the _point
of declaration_ of subclass instances - I can not defer that obligation
to the _point of use_.

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


superclass implications (was: The worst piece of syntax in Haskell)

2006-02-21 Thread Claus Reinke
Not quite the same complaint, but I've always been bothered by the 
inconsistent use of =>. I would prefer A => B to mean if A, then
B.


that keeps bugging me, too. but switching the implication is not going
to help (although others have proposed the same). here's how I keep 
my peace with that anomaly:


  class Monad m => MonadPlus m
   if MonadPlus m, then declare Monad m as follows


  instance Integral a => Eq (Ratio a)
   if Integral a, then Eq (Ratio a)

  foo :: (Monad m) => [m a] -> m [a]
   if Monad m, then foo :: [m a] -> m [a]

the problem is (methinks) that the superclass implication is interpreted
at a different time/phase than the others, and classical logic doesn't 
have that notion:


   1. check Monad m, to ensure that MonadPlus m is a valid declaration
   (here, we _check_ that MonadPlus m => Monad m)
   2. handle everything else; and since we know that we've done 1 first, we
   can now _use_ that MonadPlus m => Monad m as well


actually, it is worse: constraints in instances and types just affect the 
validity of the thing that follows them, whereas constraints in classes
affect the validity of the whole program. 

on the basis of which we can reason backwards: 


   - if the program was invalid, I wouldn't be doing this step
   - I'm doing this step, so the program is (still) valid
   - if the program is valid, so must be the Monad m declaration
   - if MonadPlus m is a valid declaration, there must be Monad m
   - hence, MonadPlus m => Monad m

so, the reasoning for superclass contexts is backwards, not the 
implications. I once argued that it would be quite natural to interpret 
the superclass implications in the same way as the other implications
(thus relaxing the constraint that 1 has to be checked globally before
the program can be assumed valid, hence permitting more programs
to be valid). it didn't convince the folks I showed it to, so that draft was
never even completed..

cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: proposal: standardize interface to Haskell' implementations

2006-02-20 Thread Claus Reinke

| (*) a standard haskell' api providing the commands of ghci/hugs
| style interactive systems would be a start, together with an
| annotated AST, parser/typer/pretty printer. more detailed
| specifications could be left for future revisions.
A reasonable suggestion, but I'm unsure what you actually have in mind.


what I have in mind are things to come, which would be quite
different from the initial steps we could reasonably expect Haskell'
to take. initially, a separate library may be an acceptable start; but
ultimately, I don't want two separate Haskell implementations 
shipped for each installation.


for the moment, I'd like the Haskell' committee to say 'this is useful',
and commit to making a start, then see how far we can get.

at the very least, Language.Haskell needs to be expanded on, 
to cope with modules, to provide type information, and to cope
with language extensions [also, one might want to check whether 
SYB-style traversals, which are so useful for annotated ASTs,
are permitted within the limitation of Haskell']. but even there, 
adding and maintaining a type/module system implementation 
would be more work than exposing the existing one, and the 
same goes if we want loading/evaluation as well.
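
for reference, a sketch of the kind of thing the current
Language.Haskell modules (the haskell-src library) already allow -
parse a module and pretty-print it back (assuming I remember the
API correctly):

   import Language.Haskell.Parser (parseModule, ParseResult(..))
   import Language.Haskell.Pretty (prettyPrint)

   main :: IO ()
   main = do
     src <- readFile "M.hs"
     case parseModule src of
       ParseOk m           -> putStrLn (prettyPrint m)
       ParseFailed loc err -> putStrLn (show loc ++ ": " ++ err)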


the difference between such an extended Language.Haskell and
the standardized interface I suggested is that the former is one
naive implementation of the latter. the difference between an 
extended Language.Haskell and the implementation by reflection
I had in mind is that the latter reuses the underlying Haskell
implementation to provide the same interface more efficiently
(even if that might involve translating between internal types 
and those in Language.Haskell+).


It is meta-programming because it allows Haskell programs to
operate on representations of Haskell programs. It may use
reflection to do so, if it permits Haskell programs access to
their own representations and to parts of the implementation 
they are running on. 

I'm not saying we should do it all perfectly for Haskell', but just make
a start that goes beyond the current Language.Haskell. For that
start, it may still be sufficient to leave most things in a library
(**), and it doesn't have to support everything GHC's API
does (though it does have to define implementation-independent
interfaces).

But ultimately, there will be ramifications for future language 
definitions (how to pass from programs to representations
and back? how to type these things? how to extend programs
at runtime? ... all the issues common to Template Haskell,
hs-plugins, and type Dynamic [as done in Clean, not the
poor man's version of Haskell]). Simon M adds extensible
data types to the list. I'm sure there's more, once we start
looking.

I find it interesting to note that the folks who claim this is
a library-only problem are willing to put up with lots of
non-Haskell' hacks, not to mention partially functioning
work-arounds for features that belong in the language
definition (a proper type Dynamic, for instance, with
support for polymorphism, and with a way to address
the issue of representations of types originating from 
separate programs).


I'd prefer to flush out these secret hacks hidden in so-called
libraries, and to call a language feature a language feature.

Cheers,
Claus

Ideals are like stars. You may never be able to reach them, 
but you can navigate by them.


(**) one of the attractive things about early Haskell reports
   was the combination of language definitions and libraries.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: the MPTC Dilemma (please solve)

2006-02-19 Thread Claus Reinke

I'm forwarding an email that Martin Sulzmann asked me to post on his
behalf.


thanks. well, that is one view of things. but it is certainly not the only one.

first, about those acrobatics: my type-class-level programs tend to
start out as straightforward logic programs over types - no acrobatics 
involved, easy to understand. the acrobatics start when I'm trying to 
fit those straightforward programs into the straitjackets imposed
by current systems - be they explicit restrictions or accidental
limitations. wrestling them leads to the need to introduce complex
work-arounds and implementation-specific tricks into the formerly
clean logic programs.
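
as an example of what I mean by a logic program over types, here is
the standard Peano-addition sketch (generic textbook code, not from
this thread; it needs the MPTC and FD extensions under discussion):

   data Z = Z
   data S n = S n

   class Add a b c | a b -> c              -- Add is a relation on types,
   instance Add Z b b                      -- defined by Prolog-style clauses
   instance Add a b c => Add (S a) b (S c)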

the restrictions tend to be artificial and full of odd corners, rather 
than smooth or natural, and the limitations tend to be the result of 
not thinking the problem through. so one could say that neither of
the levels proposed by Martin is ready for standardization. or, one
could be pragmatic and standardize both levels, well knowing that
neither is going to be the final word.

second, about the proposal to ignore overlapping instances for the
moment: that moment has long passed - iirc, undecidable and 
overlapping instances have been around for longer than the current
standard Haskell 98. they provide recursion and conditionals (via
best-match pattern-matching) for logic programming over types.
they are overdue for inclusion in the standard, and it is time to stop 
closing our eyes to this fact.


third, understanding FDs via CHRs is very nice, and a good start
that I hope to see expanded on rather sooner than later. but (a) 
there is a big discrepancy between the language studied there and 
current practice, so this work will have to catch on before it can
be applied; and (b) the main issue with unconstrained type-class-
level programming is not that it tackles a semi-decidable problem.

in fact, the usual Prolog system only provides an _incomplete_ 
implementation of almost the same semi-decidable problem. but it 
provides a well-defined and predictable implementation, so
programmers can live with that (with a few well-known exceptions).

the two problems are not quite the same, because we extract
programs from type-class-level proofs. so not only do we need
the result to be uniquely determined (coherence), we also need
the proof to be unambiguous (or we might get different programs
depending on the proof variant chosen).

but, and this is an important but: trying to restrict the permissible
programs so that neither incoherence nor ambiguity can arise in
the first place is only _one_ way to tackle the issue; the other
direction that needs to be explored is to give programmers control
enough to disambiguate and to restore coherence where needed.

so, I am quite happy with splitting the bigger problem into two
levels: restrictive programs corresponding to the current state of
art in formal studies of the problem, and unconstrained programs
catering for the current state of art in programming practice. just
as long as both levels are included in the standard, and both levels
receive the further attention they need.

as far as I understand, this should not pose a problem for 
implementors, as they usually implement the simpler unconstrained 
variant first, then add restrictions to conform to the standard.


cheers,
claus


- There's a class of MPTC/FD programs which enjoy sound, complete
 and decidable type inference. See Result 1 below. I believe that
 hugs and ghc faithfully implement this class.
 Unfortunately, for advanced type class acrobats this class of
 programs is too restrictive.

- There's a more expressive class of MPTC/FD programs which enjoy
 sound and complete type inference if we can guarantee termination
 by for example imposing a dynamic check.  See Result 2 below. Again,
 I believe that  hugs and ghc faithfully implement this class if they
 additionally implement a dynamic termination check.
 Most type class acrobats should be happy with this class I believe.

Let's take a look at the combination of FDs and well-behaved instances.
By well-behaved instances I mean instances which are non-overlapping and
terminating. From now on I will assume that instances must be well-behaved.
The (maybe) surprising observation is that the combination
of FDs and well-behaved instances easily breaks completeness and
decidability of type inference. Well, all this is well-studied.
Check out
[February 2006] Understanding Functional Dependencies via Constraint
Handling Rules by Martin Sulzmann, Gregory J. Duck, Simon Peyton-Jones
and Peter J. Stuckey
which is available via my home-page.

Here's a short summary of the results in the paper:

Result 1:
To obtain sound (we always have that), complete and decidable type inference
we need to impose
   - the Basic Conditions (see Sect4)
 (we can replace the Basic Conditions by any condition which guarantees
  that instances are well-behaved. I'm ignoring here
  super classes for simplicity)
  

Re: First class labels

2006-02-13 Thread Claus Reinke

there is now a ticket and a wiki page for this, #92:
http://hackage.haskell.org/trac/haskell-prime/ticket/92

we haven't had much discussion yet; in particular no one
has said 'yes, that makes sense' or 'no, we don't need that',
but Simon's questions have helped to clarify the initial message
a bit, and I thought I'd summarize things, especially as he's
away for a bit, and no one else seems to be interested in
record-related stuff for haskell'?


cheers,
claus

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: First class labels

2006-02-10 Thread Claus Reinke

Simon

thanks for the questions. I'll try to clarify.

[1]

...   For example, what does it mean to remove the need to
declare labels, make them identifiable as such in use? 


implicit label declarations, basically.

the code for records I posted depends on any two field labels being 
distinguishable by type. to get readable records, we want something
like typed constants, e.g.:

   data LabelX = LabelX deriving (..)

but all these declarations will be the same, apart from the label name,
so if we had syntactic means to see that something is a label, and the
only inhabitant of its type, there'd be no need for these declarations. 

for the sake of discussion, let us assume that we reserve a '#' prefix 
before identifiers and types to single them out as label-related. 
then we'd know (without explicit declaration) that


   #labelX :: #LabelX

why would that be interesting? that brings us to your second issue:

[2]
Then, the code that you enclosed appeared to show that you could 
do without any extension at all.


the code establishes a context for the proposal, nothing more.

the code demonstrates that there are record system variants that we
can implement without any new language extensions. but it could not
be used in practical, multi-module situations, because of the need to 
declare those label types.


if we have two modules, A and B (think ...OpenGL and some GUI
library), that both want to use records with fields named 'pointX', 
they'd both have to declare the field label as


data PointX = PointX deriving (..)

now, if we ever want to import A and B into the same module C
(think OpenGL windows in a GUI), we are in trouble, because we 
have two conflicting declarations for PointX. at the moment, that 
means that we have two ways out


   - either use qualified names in C: A.PointX and B.PointX;
   this is awkward, to say the least, and still doesn't let us
   identify what should be two instances of the same field 
   name, forcing upon us superfluous conversions


   - or modify the imports: introduce a new module PointX that
  declares PointX, and have both A and B import that;
   this is impractical: it breaks module composition, and there
   is no least upper bound in the import hierarchy where we
   can safely place our label declarations once and for all
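
to make the conflict concrete (module and field names invented for
illustration):

   module A where data PointX = PointX deriving (Eq, Show)
   module B where data PointX = PointX deriving (Eq, Show)

   module C where
   import A
   import B
   -- A.PointX and B.PointX are distinct, incompatible types here,
   -- although both are meant to denote the same field label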

which brings us to your final suggestion:
  
[3]

Records are a huge swamp with a very large number of possible variants
and design choices.  Perhaps you might gain more traction if you were
ruthlessly specific about what language changes you advocate, and what
benefits they would have (versus the existing situation).


I was trying to single out a minimal extension that might help to steer
around that swamp (which seems to be the undeclared intention for
Haskell'?), while still providing the means for making progress wrt 
a better record system for Haskell'. I was explicitly _not_ suggesting
that we build any new record system variant into the language.

the code shows by example what is possible without new extensions
while also highlighting issues that are not easily addressed without new
extensions (and making old extensions official parts of the language). 


the concrete proposal is to address one of these remaining issues,
namely how to identify record field labels as such. for that, I outlined
three options, although that outline perhaps wasn't concrete enough:

1. make label declarations unnecessary (e.g., #pointX :: #PointX)

2. make type sharing expressible (something like the sharing 
   constraints in Standard ML's module language, to allow you to 
   say when two declarations from different imports refer to the 
   same type)


3. introduce a least upper bound for shared label imports
   (so A and B could just 'import Data.Label(pointX)', which
would magically provide the shared declaration for pointX)

does that clear things up a bit? if anyone still has questions, 
please ask!-)


cheers,
claus

original message:
http://www.haskell.org//pipermail/haskell-prime/2006-February/000463.html

   
___

Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


things to throw away?

2006-02-03 Thread Claus Reinke

We must find *something* to throw away though! :-)
Simon


Indeed. One of the things I had been hoping for in Haskell'
was the removal of the many conservative restrictions put
into earlier definitions: they complicate the language definition,
restrict expressiveness, and have prompted various extensions.

- the mr (monomorphism restriction)
- the whole bunch of 'you can't do this (we think)' restrictions in type 
    classes and their instances, when nowadays we know that 
   type class instances are all about logical meta-programming 
    at the type level. non-decidability should still be optional, 
    but it should at least be standardised.

- ..

(btw, I hope I'm not misquoting, but I think it was Mark Jones
who said that permitting complex type parameters was more 
important than having multiple parameters in type classes - you 
can simulate multiple parameters by tupling)


anyway. Just as I was disturbed by the many not-yet-existing
features under discussion, I am worried about the new trend
of proposing not to include old friends (MPTC, concurrency,
functional dependencies, ..). If that should happen, Haskell'
will be just as irrelevant as Haskell98 was, before the FFI
addendum (how many Haskell98 programs were there that
did not use primitives?).

So I repeat my opinion: the committee should not limit itself
to a single, all-encompassing standard. There are things that
can and need to be standardised, for which we do not yet
know whether they should be frozen into _the_ standard
forever, and there are things that need to be standardized,
for which the standardization might take too long to match
the Haskell' timeline.

The established answer to such changeability in software is
to modularize, and the same should happen for the language
standard. I agree with Patryk here (I even like the idea of
abusing imports to specify language extensions in use, though
I would simply use a combination of imports and reserved
parts of the module hierarchy, without modifying the import
syntax at all). 

Perhaps we cannot have Concurrent Haskell in all Haskell' 
implementations, or perhaps Functional Dependencies will 
be replaced by something else in the future. But when I use 
either of them, I want to be able to write code that any 
supporting Haskell'+CH+FD implementation will understand 
and interpret the same way, and about which any 
non-supporting Haskell' implementation will be able to tell
me exactly what it is that it doesn't support (instead of giving 
obscure syntax errors). Scanning over the import lines and
reporting that no, sorry, we don't have Language.Haskell.
Extensions.Types.FancyRankN here should do the latter
quite nicely, and allows documenting the former in the same
way as libraries.
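
a sketch of what such import lines might look like (all module
names below are invented for illustration):

   module MyServer where

   import Language.Haskell.Extensions.Concurrency   -- Concurrent Haskell
   import Language.Haskell.Extensions.FunctionalDependencies
   import Data.Map (Map)                            -- ordinary imports as usual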

Cheers,
Claus

PS Someone suggested searching the libraries for features
   that are in use and should therefore be included in Haskell'.
   Another thing to look for are preprocessor directives
   protecting differences between implementations. Also,
   perhaps someone could write a simple program analyzer
   that people could run over their own code repositories
   to report features in use back here (perhaps based on
   the extended Haskell syntax parser)? You'll need something
   like this anyway, as part of moving code from Haskell98 
   and Haskell(GHC), ... to Haskell'.


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: what's the goal of haskell-prime?

2006-01-30 Thread Claus Reinke

No language can serve all of the people all of the time, but I think
we should just do our best with a single standard.  I think that the
complexity of multiple languages / layers / standards would not be
worth the payoff.


My original understanding of the Haskell' effort was that it was *not*
intended as going for Haskell 2, but rather as an update of Haskell 98.

In other words, the target is Haskell 2005:

- anything that was tried and tested by the end of 2005 is a potential
   candidate for inclusion in Haskell 2005. nothing else is.

this would necessarily exclude much of the discussion here, for which
I'd see only three ways out:

   - make an exception to rule one (bad, but occasionally needed)
   - ignore and leave for Haskell 2, whenever that might be (impractical)
   - standardise as an optional addendum to Haskell 2005, to lay the
   groundwork for Haskell 2010, and to narrow down on the more
    successful experiments (good, avoid ad-hoc Haskell 2 in favour 
   of incremental approximations)


and the third way seems the most likely to succeed. There'll always 
be Haskell xx+extensions (unless people stop experimenting) and some
extensions are good enough to be standardised (perhaps with options),
even if not yet good enough to be part of the current standard. Has 
the target changed, or was I misled to think of it this way?-)


btw, I'd find it hard to track discussion on a wiki/ticket system alone.
Could a member of the committee arrange for a Haskell'-weekly
message, please (similar to Haskell weekly, but collecting news headers
and links from haskell', wiki, trac, and internal committee discussions)?

cheers,
claus

ps. will haskell 98 support continue when the new standard comes 
   out, or will there always be 2 languages (standard and standard+)?


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Removal candidates in patterns

2006-01-26 Thread Claus Reinke

Olaf Chitil wrote:
I'd like to add one pattern to this list of removal candidates: k 
patterns, that is, numeric literals.


I was rather shocked when I first read this. And I certainly don't
like the argument from implementation difficulties in a certain tool!-)

I don't mind losing (n+k), not because it wasn't neat, but it looks like
a special case needing a more general solution, beyond Haskell''s scope.

I don't want to lose numeric literals in patterns! But, having recovered
from the first shock, and ignoring other people's hats, there may be a
few things that need cleaning up in that area (why do some patterns
work without Eq, some with? Should there be a Match class or 
something to pin down what can and what can't be matched how?..).



Let's remove higher order functions too, they are tricky to
implement. :)


it seems so, at least for pattern matching numeric literals; what is the
result of (f 1) and (g A) in the following code?

... -- some code omitted here

f 1 = True
f n = False

g A = True
g n = False

and should it depend on the context (types, instances, ..), or not?

run the following through ghci with and without the signature for f, 
and with either version of (==) for functions; and what happens if
we uncomment the Eq instance for D? is that all as expected?

cheers,
claus

---
module K where
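
-- note: the two instances below are not Haskell 98; with the ghc of
-- this era they need -fglasgow-exts, with hugs the -98 flag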

import Text.Show.Functions

instance Eq (a->b) where
 f == g = False
 -- f == g = True

instance Num b => Num (a->b) where
 fromInteger n = const $ fromInteger n

-- f :: Num b => (a->b) -> Bool
f 1 = True
f n = False

main = print $ (f 1,g A)

-

data D = A | B -- no Eq, but matching allowed

-- instance Eq D where a == b = False

g A = True
g n = False


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime