[Haskell] Post-doc positions at Portland State University

2011-02-16 Thread Mark P. Jones
The High-Assurance Systems Programming (HASP) project at Portland
State University in Portland, OR, USA, has openings for *two*
post-doctoral researchers to help design, develop, and apply a new
strongly typed, pure functional language for systems programming.
The Habit language derives from Haskell, with the addition of
features for efficient low-level programming.  Its compiler and
high-assurance runtime system (HARTS) extend the verified CompCert
compiler.  Possible demonstration projects include an L4-based
microkernel, lightweight Xen guest domains, and high-assurance
portable devices.  It is anticipated that one post-doc will
champion the certifying compiler and the other will champion the
demonstration project.  The HASP team currently consists of three
faculty (James Hook, Mark Jones, and Andrew Tolmach) and seven PhD
students.

Please see http://hasp.cs.pdx.edu/postdoc.html for more details.


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Type Family Relations

2009-01-04 Thread Mark P. Jones

Hi Thomas,

The specific problem you describe has a simple solution using
multiple parameter classes with syntactic sugar for functional
notation instead of type families [1]:

 class AddressOf h a | h - a, a - h  -- bijections
 class HeaderOf a h  | a - h, h - a
 instance HeaderOf (AddressOf h) h -- your axiom

By the way, without the syntactic sugar, the instance
declaration shown here is just:

 instance AddressOf h a = HeaderOf a h  -- also your axiom

Individual (address,header) pairs are documented with a
single instance, as in:

 instance AddressOf IPv4Header IPv4

And now the type system can derive HeaderOf IPv4 IPv4Header
automatically using your axiom.  But you won't be able to add
the invalid AppHeader example because it will conflict (either
with overlapping instances or with covering, depending on how
you approach it).  I like this approach because we only have to
give a single instance instead of writing a pair of declarations
that have to be checked for mutual consistency.

I conclude that perhaps your example doesn't illustrate the kind
of arbitrarily complex constraints you had in mind.  Maybe you
could elaborate?

All the best,
Mark

[1] http://web.cecs.pdx.edu/~mpj/pubs/fundeps-design.html

Thomas DuBuisson wrote:

Cafe,
I am wondering if there is a way to enforce compile time checking of
an axiom relating two separate type families.

Mandatory contrived example:


type family AddressOf h
type family HeaderOf a

-- I'm looking for something to the effect of:
type axiom HeaderOf (AddressOf x) ~ x

-- Valid:
type instance AddressOf IPv4Header = IPv4
type instance HeaderOf IPv4 = IPv4Header

-- Invalid
type instance AddressOf AppHeader = AppAddress
type instance HeaderOf AppAddress = [AppHeader]


So this is  a universally enforced type equivalence.  The stipulation
could be arbitrarily complex, checked at compile time, and must hold
for all instances of either type family.

Am I making this too hard?  Is there already a solution I'm missing?

Cheers,
Tom
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Origins of '$'

2008-12-08 Thread Mark P. Jones

Don Stewart wrote:

Which suggests that $ was already in the 1.0 report going to SIGPLAN.
Perhaps Paul or Simon could shed light on it? Anyone have the 1.0 report
lying around to check if it was in earlier?


As far as Haskell is concerned, the first report-ed occurrence
of the $ operator was in the Haskell 1.2 report dated March 1992.
I don't see any mention of the $ operator in either the 1.0 or
the 1.1 reports (April 1990 and August 1991, respectively).

The 1.0 report did define the following operator, which is a
variant of $:

  let ::  a - (a - b) - b
  let x k  =  k x

This was exported from the prelude, but its definition actually
appeared in the PreludeIO section of the report, hinting at the
main motivation for its introduction in support of continuation
based I/O.  (Monadic I/O didn't officially arrive until the 1.3
report in May 1996.)

But the let operator was quickly promoted to a keyword in
Haskell 1.1 with the introduction of let expressions, replacing
the where expressions that Haskell 1.0 had provided for local
definitions.  With the move to 1.1, where became part of the
syntax for definition right hand sides, able to scope across
multiple guards and no longer part of the expression syntax.

A little history can sometimes be fun.

All the best,
Mark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-16 Thread Mark P Jones

Martin Sulzmann wrote:
We're also looking for (practical) examples of multi-range functional 
dependencies


class C a b c | c - a b

Notice that there are multiple (two) parameters in the range of the FD.

It's tempting to convert the above to

class C a b c | c - a, c - b

but this yields a weaker (in terms of type improvement) system.


I agree with Iavor.

In fact, the two sets of dependencies that you have given here
are provably equivalent, so it would be decidedly odd to have
a type improvement system that distinguishes between them.

While you're looking for unusual examples of fundeps, you
might also want to consider things like:

  class C a b c | a - b, b - c

Note that this is subtly different from a - b c because

  {a - b, b - c}  |=  {a - b c}

while the reverse does not hold.  Instead of type classes,
I'll give you some more down-to-earth examples of relations
that satisfy these dependencies:

  {Paper, Conference, Year}
  {Professor, University, Country}
  {Person, FavoriteFood, FoodGroup}
  ...

For further insight, you might want to take a look at the theory
of relational databases to see how functional dependencies are
used there.  In that context, some sets of dependencies (such
as {a - b, b - c}) can be interpreted as indicators of bad
design.  This, in turn, might give you some ideas about the kinds
of dependencies you can expect to encounter in well-designed
Haskell code.  (Of course, you might still find examples in other
Haskell code of dependencies that would make a database person
wince :-)

All the best,
Mark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: type families and type signatures

2008-04-07 Thread Mark P Jones

The surprising thing about this example is the fact that
the definition of foo is accepted, and not the fact that
the definition of foo' is rejected.  At least in Manuel's
equivalent program using functional dependencies, both
functions have ambiguous types, and hence both would be
rejected.  It sounds like your implementation may be using
a relaxed approach to ambiguity checking that delays
ambiguity checking from the point of definition to the
point of use.  Although this is probably still sound, it
can lead to puzzling error diagnostics ...

All the best,
Mark

Tom Schrijvers wrote:

type instance Id Int = Int

foo :: Id a - Id a
foo = id

foo' :: Id a - Id a
foo' = foo Is this expected?


Yes, unfortunately, this is expected, although it is very 
unintuitive. This is for the following reason.


Huh? This sounds very wrong to me, simply because  foo  and  foo'  
have the very same type.


Type systems reject programs that don't go wrong. It's hard to 
understand on the basis of such a single program why it should be 
rejected. The problem is decidability. There is no algorithm that 
accepts all well-behaved programs and rejects all ill-behaved programs. 
There probably is an algorithm that accepts a particular program.


So far we haven't found an algorithm that accepts this example, that is 
decidable and sufficiently general to cover many other useful cases. 
This is our motivation for rejecting this program.


Consider it a challenge to find a better algorithm in the design space.

Cheers,

Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: [EMAIL PROTECTED]
url: http://www.cs.kuleuven.be/~toms/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: type families and type signatures

2008-04-07 Thread Mark P Jones

Hi Tom,

It seems we are thinking of different things.  I was referring to
the characterization of a type of the form P = t as being ambiguous
if there is a type variable in P that is not determined by the
variables in t; this condition is used in Haskell to establish
coherence (i.e., to show that a program has a well-defined semantics).

I don't know if you have defined/studied corresponding notions of
ambiguity/coherence in your framework.  Instead, I was referring to
what Manuel described as the equivalent problem using FDs:

  class IdC a b | a - b
  instance IdC Int Int

  bar :: IdC a b = b - b
  bar = id

  bar' :: IdC a b = b - b
  bar' = bar

In this program, both bar and bar' have ambiguous types (the
variable 'a' in each of the contexts is not uniquely determined
by the variable 'b' appearing on the right), and so *neither*
one of these definitions is valid.  This behavior differs from
what has been described for your approach, so perhaps Manuel's
example isn't really equivalent after all.

Technically, one could ignore the ambiguous type signature for
bar, because the *principal* type of bar (as opposed to the
*declared type*) is not ambiguous.  However, in practice, there
is little reason to allow the definition of a function with an
ambiguous type because such functions cannot be used in practice:
the ambiguity that is introduced in the type of bar will propagate
to any other function that calls it, either directly or indirectly.
For this reason, it makes sense to report the ambiguity at the
point where bar is defined, instead of deferring that error to
places where it is used, like the definition of bar'.  (The latter
is what I mean by delayed ambiguity checking.)

Hope that helps,
Mark


Tom Schrijvers wrote:

On Mon, 7 Apr 2008, Mark P Jones wrote:


The surprising thing about this example is the fact that
the definition of foo is accepted, and not the fact that
the definition of foo' is rejected.  At least in Manuel's
equivalent program using functional dependencies, both
functions have ambiguous types, and hence both would be
rejected.  It sounds like your implementation may be using
a relaxed approach to ambiguity checking that delays
ambiguity checking from the point of definition to the
point of use.  Although this is probably still sound, it
can lead to puzzling error diagnostics ...


On the algorithmic level I don't see a relaxed approach to ambiguity 
checking.


If alpha is a unifiction variable, you get for the first body

(Id a - Id a)  ~ (alpha ~ alpha)

where there is no ambiguity: the unique solution (modulo equality 
theory) is alpha := Id a.


For the second body you get

(Id a - Id a) ~ (Id alpha - Id alhpa)

This equation reduces to Id a ~ Id alpha. Our algorithm stops here. 
There is in general no single solution for alpha (*) in such an 
equation, as opposed to the above case.


I hope this clarifies our algorithm.

Cheers,

Tom

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: Re: RE: Extensible records: Static duck typing

2008-02-21 Thread Mark P Jones

Barney Hilken wrote:
I totally disagree. The great strength of Haskell is that, whenever 
important design decisions have been made, the primary consideration has 
not been practicality, but generality and mathematical foundation. When 
the Haskell committee first started work, many people said lazy 
evaluation was an academic curiosity: mathematically right, but far too 
inefficient for real programs. When Haskell adopted type classes, people 
said they were far too heavy a machinery to solve the relatively simple 
problems of equality, show and numbers. In each case the more general, 
abstract approach has shown enormous advantages in the long term. I'm 
sure the same will be true of associated types, which are a lot more 
complex than functional dependencies, but also more general, and more 
mathematical.


While I agree with your general argument, I wonder if you realize
that functional dependencies have a strong, general, and elegant
mathematical foundation that long predates their use in Haskell?
If you want even a brief glimpse, there's s short article at
http://en.wikipedia.org/wiki/Functional_dependencies that might
give you some ideas.  The mathematics of functional dependencies
plays an important role in the theory of relational databases.

I don't know what you consider as the mathematical foundations
for associated types, nor do I know why you consider that to be
either more general or more mathematical (whatever that means)
but I hope you'll enjoy the material on functional dependencies.

All the best,
Mark

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Re: [Haskell] Re: Re: RE: Extensible records: Static duck typing

2008-02-21 Thread Mark P Jones

[Redirecting to haskell-cafe]

Ben Franksen wrote:

TREX seems to be generally agreed to be too complicated to implement and
explain.


What evidence do you have for this?  Speaking as somebody who
implemented Trex for Hugs (and who also witnessed Ben Gaster
build an independent implementation), I'd have to disagree
about the first part of this.  As I recall, the implementation
was reasonably straightforward, and wasn't any more complicated
than other common type system extensions that I've worked on.
I'm also not sure why you think it is complicated to explain,
but I suppose that's even more subjective.

One of the most difficult things about the Trex implementation
was finding a syntax that meshed nicely with the rest of the
Haskell syntax.  This is likely to be a problem for any record
system extension of Haskell---unless you're prepared to accept
a more unconventional syntax---because many of the symbols that
you might want to use ({, }, ., |, \, for example) have already
been adopted for other purposes.  Ah, syntax!

All the best,
Mark
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-21 Thread Mark P Jones

Hi All,

Here are my responses to the recent messages, starting with some
summary comments:

- I agree with Martin that the condition I posted a few days ago is
  equivalent to the *refined* weak coverage condition in your
  paper.  The refined tag here is important---I missed it the
  first time and thought he was referring to the WCC at the start of
  Section 6.1, and I would strongly recommend against using that
  (unrefined) version.  I confess that I haven't read the paper
  carefully enough to understand why you included that first WCC at
  all; I don't see any reason why you would want to use that
  particular condition in a practical implementation, and I don't
  see any advantages to it from a theoretical perspective either.
  (Maybe you just used it as a stepping stone to help motivate the
  refined WCC?)

- I believe that the refined WCC (or the CC or even the original WCC
  if you really want) are sufficient to ensure soundness.  I don't
  agree that the (B) restriction to full FDs is necessary.

- I think it is dangerous to tie up soundness with questions about
  termination.  The former is a semantic property while the latter
  has to do with computability.  Or, from a different perspective,
  soundness is essential, while termination/decidability is only
  desirable.  I know that others have thought more carefully than
  I have about conditions on the form of class and instance
  declarations that will guarantee decidability and I don't have
  much to say on that topic in what follows.  However, I will
  suggest that one should start by ensuring soundness and then,
  *separately*, consider termination.  (Of course, it's perfectly
  ok---just not essential---if conditions used to ensure soundness
  can also be used to guarantee termination.)

Stop reading now unless you're really interested in the details!
I'm going to start by explaining some key (but hopefully
unsurprising) ideas before I respond in detail to the example
that Martin posted.

Interpreting Class, Instance, and Dependency Decls:
---
I'll start by observing that Haskell's class, instance, and
functional dependency declarations all map directly to formulas in
standard predicate logic.  The correspondence is straightforward, so
I'll just illustrate it with some examples:

  Declaration   Formula
  ---   ---
  class Eq a = Ord a   forall a. Ord a = Eq a
  instance Eq a = Eq [a]   forall a. Eq a = Eq [a]
  class C a b | a - b  forall a, b. C a b /\ C a c = b = c

This correspondence between declarations and formulas seems to be
very similar to what you did in the paper using CHRs.  I haven't
read the paper carefully enough to know what extra mileage and/or
problems you get by using CHRs.  To me, predicate logic seems more
natural, but I don't think that matters here---I just wanted to
acknowledge that essentially the same idea is in your paper.

In the following, assuming some given program P, I'll write I(P)
for the conjunction of all the instance declaration formulas in
P; C(P) for the conjunction of all the class declaration formulas;
and F(P) fo the conjunction of all the dependency declaration
formulas.

Semantics and Soundness:

We'll adopt a simple and standard semantics in which each
n-parameter type class is described by an n-place relation, that is,
a set of n-tuples of types.  Of course, in the most common, one
parameter case, this is equivalent to viewing the class as a set of
types.  Each tuple in this relation is referred to as an instance
of the class, and the predicate syntax C t1 ... tn is just a
notation for asserting that the tuple (t1, ..., tn) is an instance
of C.

We can assign a meaning to each of the classes in a given program P
by using the smallest model of the instance declaration formulas,
I(P).  (Existence of such a model is guaranteed; it is the union of
an increasing chain of approximations.)

What about the superclass properties in C(P)?  We expect to use
information from C(P) to simplify contexts during type inference.
For example, if we're using the standard Haskell prelude, then we
know that we can simplify (Eq a, Ord a) to (Ord a).  But how do we
know this is sound?  Haskell answers this by imposing restrictions
(third bullet in Section 4.3.2 of the report) on the use of
instance declarations to ensure that our (least) model of I(P) is
also a model of C(P).  This guarantees, for example, that every
type in the semantics of Ord will, as required, also be included
in the semantics of Eq.

That should all seem pretty standard, but I've included it here
to make the point that we need to something very similar when we
introduce functional dependencies.  Specifically, we want to be
able to use properties from F(P) to simplify/improve contexts
during type inference, so we have to ensure that this is sound.
If we follow the strategy that was used to ensure soundness of
C(P) in Haskell, 

[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-18 Thread Mark P Jones

[Sorry, I guess this should have been in the cafe ...]

Simon Peyton-Jones wrote:

The trouble is that
a) the coverage condition ensures that everything is well behaved
b) but it's too restrictive for some uses of FDs, notably the MTL library
c) there are many possibilities for more generous conditions, but
the useful ones all seem complicated

Concerning the last point I've dumped the current brand leader
for (c) into http://hackage.haskell.org/trac/ghc/ticket/1241#comment:15.

Better ideas for (c) would be welcome.


Let's take the declaration:  instance P = C t where ...
The version of the coverage condition in my paper [1] requires
that TV(t_Y) \subseteq TV(t_X), for each dependency (X-Y) \in F_C.
(I'm using the notation from the paper; let me know if you need more
help to parse it.)  This formulation is simple and sound, but it
doesn't use any dependency information that could be extracted from P.
To remedy this, calculate L = F_P, the set of functional dependencies
induced by P, and then expand the right hand side of the set inequality
above by taking the closure of TV(t_X) with respect to L.  In symbols,
we have to check that:

  TV(t_Y) \subseteq TV(t_X)^+_L, for each (X-Y) \in F_C.

I believe (but haven't formally proved) that this is sound; I don't
know how to make a more general coverage condition without losing
that.  I don't know if it's sufficient for examples like MTL (because
I'm not sure where to look for details of what that requires), but
if it isn't then I'd be very suspicious ...

All the best,
Mark

[1] http://www.cs.pdx.edu/~mpj/pubs/fundeps-esop2000.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-17 Thread Mark P Jones

Simon Peyton-Jones wrote:

| I am quite intrigued at the behaviour examplified in the attached module.
| It's true I am a newbie and probably don't quite get the whole consequence
| spectrum of -fallow-undecidable-instances, but why providing that dummy
| instance (commented out) get the thing to compile?

Sorry I must have missed this.  It's a nice example of the trickiness of
functional dependencies.  Here's what is happening.  First a very cut-down
version of your example:

class Concrete a b | a - b where
bar :: a - String

instance (Show a) = Concrete a b

wib :: Concrete a b = a - String
wib x = bar x

Now consider type inference for 'wib'. ...


Hold on a second!  There's a more serious problem here, before we
get to 'wib'.  The definition of class Concrete asserts that there
is a dependency from a to b.  In other words, it promises that, for
any a, there must be at most one b such that Concrete a b holds.
But then the following instance declaration says that Concrete a b
can be instantiated for *any* a and b, the only proviso being that a
is an instance of Show.  In particular, there is no functional
relationship between the parameters.  As such, these two
declarations are in direct conflict with one another!  To quote the
error message that Hugs produces, the Instance is more general than
a dependency allows.

I thought this must be a typo in your email, but then I discovered
that the ghci (6.6.1) installed on my machine accepts this code, at
least once the Concrete Bool Bool instance was added.  If the
instance declarations are not consistent with the functional
dependency, then improvement is unsound, and all bets are off!

Further experiments suggest that this behavior occurs only when 
the-fallow-undecidable-instances flag is specified.  But the reason you

need to check for consistency between instance declarations and
dependencies is to ensure soundness, not decidability.

I don't know if this was the problem in the original example, but
perhaps we should debug this cut down version first :-)

All the best,
Mark

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell] Haskell implementation of infixr and infixl/priorities

2004-10-25 Thread Mark P Jones
Hi Peter,

| I´m progarmming a parser for functional programs. Now
| I want to implement the infixL and infixR feature to increase
| the readability of the code. I would be very glad if anyone
| can send me some information about the implementation of
| this feature of  the Haskell parser or where I can find
| something about it.

The comment shown below is taken from the Hugs sources (the
file static.c) and might help to answer your questions.

Hope this helps,
Mark


/*
--
 * Dealing with infix operators:
 *
 * Expressions involving infix operators or unary minus are parsed as
 * elements of the following type:
 *
 * data InfixExp = Only Exp | Neg InfixExp | Infix InfixExp Op Exp
 *
 * (The algorithms here do not assume that negation can be applied only
once,
 * i.e., that - - x is a syntax error, as required by the Haskell report.
 * Instead, that restriction is captured by the grammar itself, given
above.)
 *
 * There are rules of precedence and grouping, expressed by two functions:
 *
 * prec :: Op - Int;   assoc :: Op - Assoc(Assoc = {L, N, R})
 *
 * InfixExp values are rearranged accordingly when a complete expression
 * has been read using a simple shift-reduce parser whose result may be
taken
 * to be a value of the following type:
 *
 * data Exp = Atom Int | Negate Exp | Apply Op Exp Exp | Error String
 *
 * The machine on which this parser is based can be defined as follows:
 *
 * tidy :: InfixExp - [(Op,Exp)] - Exp
 * tidy (Only a)  [] = a
 * tidy (Only a)  ((o,b):ss) = tidy (Only (Apply o a b)) ss
 * tidy (Infix a o b) [] = tidy a [(o,b)]
 * tidy (Infix a o b) ((p,c):ss)
 *  | shift  o p = tidy a ((o,b):(p,c):ss)
 *  | redo p = tidy (Infix a o (Apply p b c)) ss
 *  | ambig  o p = Error ambiguous use of operators
 * tidy (Neg e)   [] = tidy (tidyNeg e) []
 * tidy (Neg e)   ((o,b):ss)
 *  | nshift o   = tidy (Neg (underNeg o b e)) ss
 *  | nred   o   = tidy (tidyNeg e) ((o,b):ss)
 *  | nambig o   = Error illegal use of negation
 *
 * At each stage, the parser can either shift, reduce, accept, or error.
 * The transitions when dealing with juxtaposed operators o and p are
 * determined by the following rules:
 *
 * shift o p  = (prec o  prec p)
 *   || (prec o == prec p  assoc o == L  assoc p == L)
 *
 * red o p= (prec o  prec p)
 *   || (prec o == prec p  assoc o == R  assoc p == R)
 *
 * ambig o p  = (prec o == prec p)
 *(assoc o == N || assoc p == N || assoc o /= assoc p)
 *
 * The transitions when dealing with juxtaposed unary minus and infix
 * operators are as follows.  The precedence of unary minus (infixl 6) is
 * hardwired in to these definitions, as it is to the definitions of the
 * Haskell grammar in the official report.
 *
 * nshift o   = (prec o  6)
 * nred   o   = (prec o  6) || (prec o == 6  assoc o == L)
 * nambig o   = prec o == 6  (assoc o == R || assoc o == N)
 *
 * An InfixExp of the form (Neg e) means negate the last thing in
 * the InfixExp e; we can force this negation using:
 *
 * tidyNeg  :: OpExp - OpExp
 * tidyNeg (Only e)  = Only (Negate e)
 * tidyNeg (Infix a o b) = Infix a o (Negate b)
 * tidyNeg (Neg e)   = tidyNeg (tidyNeg e)
 *
 * On the other hand, if we want to sneak application of an infix operator
 * under a negation, then we use:
 *
 * underNeg  :: Op - Exp - OpExp - OpExp
 * underNeg o b (Only e)  = Only (Apply o e b)
 * underNeg o b (Neg e)   = Neg (underNeg o b e)
 * underNeg o b (Infix e p f) = Infix e p (Apply o f b)
 *
 * As a concession to efficiency, we lower the number of calls to syntaxOf
 * by keeping track of the values of sye, sys throughout the process.  The
 * value APPLIC is used to indicate that the syntax value is unknown.
 *
*/

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Syntax extensions (was: RE: The Future of Haskell discussion at the Haskell Workshop)

2003-09-11 Thread Mark P Jones
| We at GHC HQ agree, and for future extensions we'll move to 
| using separate options to enable them rather than lumping 
| everything into -fglasgow-exts.  This is starting to happen 
| already: we have -farrows, -fwith, -fffi (currently implied 
| by -fglasgow-exts).
| 
| Of course, if we change the language that is implied by 
| -fglasgow-exts now, we risk breaking old code :-)  Would folk 
| prefer existing syntax extensions be moved into their own 
| flags, or left in -fglasgow-exts for now?  I'm thinking of:
| 
|   - implicit parameters
|   - template haskell
|   - FFI
|   - rank-N polymorphism (forall keyword)
|   - recursive 'do' (mdo keyword)


Haskell gets pulled in many different directions to meet the needs
and whims of developers, researchers, and educators, among others.
For quite a long time, it seemed that the choice between Standard
Haskell 98 and Kitchen Sink Haskell with all the extras was
adequately dealt with using a single command line option.  Those
looking for the stability of Haskell 98 got what they wanted by
default, while the adventurers looking to play with all the new
toys just added an extra -fglasgow-exts or -98 or ... etc.

As the number of extensions grows (and the potential for unexpected
interactions), it is clear that we can't get by with that simple
scheme any more.  It's important that implementations continue to
provide the stable foundation, but people also need a more flexible
way to select extensions when they need them.

As a solution to that problem, the many-command-line-options
scheme described seems quite poor!  It's far too tool specific,
not particularly scalable, and somewhat troublesome from a software
engineering perspective.  We're not talking about a choice between
two points any more; there's a whole lattice of options, which, by
the proposal above might be controlled through a slew of tool-specific
and either cryptic or verbose command line switches.  Will you
remember which switches you need to give to compile your code for
the first time in two months?  How easy will it be to translate
those settings if you want to run your code through a different
compiler?  How much help will the compiler give you in tracking
down a problem if you forget to include all the necessary switches?
And how will you figure out what options you need to use when you
try to combine code from library X with code from library Y, each
of which uses its own interesting slice through the feature set?

I know that some of these problems can be addressed, at least in
part, by careful use of Makefiles, {-# custom pragmas #-}, and perhaps
by committing to a single tool solution.  But I'd like to propose
a new approach that eliminates some of the command line complexities
by integrating the selection of language extensions more tightly
with the rest of the language.

The main idea is to use the module system to capture information
about which language features are needed in a particular program.
For example, if you have a module that needs implicit parameters
Template Haskell, and TREX, then you'll indicate this by including
something like the following imports at the top of your code:

  import Extensions.Types.ImplicitParams
  import Extensions.Language.TemplateHaskell
  import Extensions.Records.TREX

Code that needs recursive do, O'Haskell style structs, rank-n
polymorphism, and multiple parameter classes might specify:

  import Extensions.Language.Mdo
  import Extensions.Records.Structs
  import Extensions.Types.RankN
  import Extensions.Types.Multiparam

Imports are always at the top of a module, so they're easy to
find, and so provide clear, accessible documentation.  (Don't
worry about the names I've picked here; they're intended to
suggest possibilities, but they're not part of the proposal.)

What, exactly is in those modules?  Perhaps they just provide
tool-specific pragmas that enable/disable the corresponding
features.  Or perhaps the compiler detects attempts to import
particular module names and instead toggles internal flags.
But that's just an implementation detail: it matters only to the
people who write the compiler, and not the people who use it.
It's the old computer science trick: an extra level of indirection,
in this case through the module system, that helps to decouple
details that matter to Haskell programmers from details that
matter to Haskell implementers.

Of course, code that does:

  import Extensions.Types.Multiparam

is not standard Haskell 98 because there's no such library in the
standard.  This is a good thing; our code is clearly annotated as
relying on a particular extension, without relying on the command
line syntax for a particular tool.  Moreover, if the implementers
of different tools can agree on the names they use, then code that
imports Extensions.Types.Multiparam will work on any compiler that
supports multiple parameter classes, even if the underlying
mechanisms for enabling/disabling those features are different.
When somebody 

RE: Case expressions, matching, and constants

2003-07-17 Thread Mark P Jones
Hi Alistair,

| I've just debugged a program that used a case expression, but 
| where I was trying to match on constants rather than literals.
| Here's a contrived example:
| 
|  module Main where
|  one = 1
|  two = 2
| 
|  test n =
|  case n of
|  one - one
|  two - two
|  _ - three
| 
|  main = putStrLn (test 2)

There's a 1992 tech report by Aitken and Reppy that provides a proposal
for doing this kind of thing (and a bit more) in the context of SML:

 ftp://ftp.research.bell-labs.com/dist/smlnj/papers/92-tr-aitken.ps

All the best,
Mark

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: seeking ideas for short lecture on type classes

2003-01-27 Thread Mark P Jones
Hi Norman,

| [looking for papers about type classes ...]
|   * Of all the many articles on the topic, which few might you
| recommend for beginners?

I wonder if my notes on Functional Programming with Overloading and
Higher-Order Polymorphism will be useful?  You can find them at:

  http://www.cse.ogi.edu/~mpj/pubs/springschool.html

They don't cover implementation aspects, but if your audience is more
interested in language/use than compilation issues, then I think they
might provide you with some reasonable starting material.

Hope this helps!
Mark

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: A problem about hGetContents

2003-01-19 Thread Mark P Jones
| I would like to use hGetContents just to retrieve the list of 
| the lines of a file, but if I code a function like:
| 
| linesFromFile :: FilePath - IO [String]
| linesFromFile f = do
|   h - openFile f ReadMode
|   l - hGetContents h
|   hClose h
|   return (lines l)
| 
| I obviously always get the empty list as a result. How should 
| I code the function?

Try the following:

  linesFromFile :: FilePath - IO [String]
  linesFromFile  = fmap lines . readFile

All the best,
Mark

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Best recursion choice for penultimax

2002-11-24 Thread Mark P Jones
Hi Mark,

| I have just implemented the function penultimax which takes a list
| of positive integers and produces the penultimate maximum, that is,
| the next biggest integer in the list after the maximum.  Eg:
| 
| penultimax [15,7,3,11,5] = 11

To your three implementations, let me add another two.  If you are
looking
for the smallest possible definition, consider the following:

  import List

  penultimax1 :: Ord a = [a] - a
  penultimax1  = head . tail . sortBy (flip compare)

In other words, to find the second largest, sort (in descending order,
which is why I use flip compare) and then extract the second element.
(You could also use (!!1), but I think that head . tail is nicer.)
Thanks to lazy evaluation, using sort in this way isn't as expensive
as you might think; because we ask only for the first two elements,
only a small part of the full sort computation will be needed.

A little more algorithmic sophistication leads to the following
alternative that can find the penultimax with only  n + log2 n
comparisons (approx), where n is the length of the list.

  penultimax :: Ord a = [a] - (a, a)
  penultimax  = tournament . map enter
   where enter x = (x, [])

 tournament [(x, xds)] = (x, maximum xds)
 tournament others = tournament (round others)

 round ((x,xds):(y,yds):others)
   | x=y  = (x, y:xds) : rest
   | otherwise = (y, x:yds) : rest
 where rest = round others
 round xs  = xs

The inspiration for this code is a knock-out tournament, treating
the values in the input list as teams.  To enter the competition,
each team is paired with the (initially) empty list of teams that it
has defeated.  In each round, we play the teams against each other in
pairs (if there are an odd number of teams, the last one gets a by
to the next round).  In each game, the team with the highest value
wins, and adds the opponent to its list of victories.  The tournament
concludes when only one team remains.  And here comes the clever
part:  the penultimax must be the largest entry in the victors list
of defeats because it would have won all of its games until, at some
point, being knocked out of the competition by the eventual winner.
And hence we need only scan that list for its maximum.

[I'm afraid I don't know who invented this---I learned about it while
teaching a class on algorithms---but the rendering above in Haskell
is mine, and could be buggy!]

Neat algorithm eh?  But be careful ...

| How do I work out which is best to use?  Is there
| one clear winner, or will they each have pros and
| cons?

Some quick tests with Hugs +s on a example list that I constructed
with 576 elements give food for thought:

  reductions cells
   my one liner  403511483
   tournament705312288
   your penultimax  1671520180
   your penultimax2  746610344
   your penultimax3  860513782

With the caveat that this is just one example (although others
I tried gave similar results), the conclusion seems to be that
my one liner is probably the winner, beating all of the others
in reductions, all but one of the others in space, and with the
simplest definition of all.  The fact that it is coded entirely
using prelude functions might also be a benefit if you use a
compile that provides fancy implementations or optimizations
for such functions.

My advice is that you should always start with the simplest
definition (i.e., the one that is easiest to code, easiest to
understand, and most easily seen to be correct).  You should not
worry about rewriting it in what you hope may be a more efficient
form unless you find later, by profiling or other means, that
its performance really is a problem.  (In which case, you'll be
able to collect some real, representative data against which
you can test and evaluate the alternatives.)  For starters,
a supposedly improved version might not actually be more
efficient (constant factors do matter sometimes!).  Moreover,
in attempting to optimize the code, you might instead break it
and introduce some bugs that will eventually come back and bite.

Hope this helps (or at least, is entertaining :-)

All the best,
Mark

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Functional dependencies and Constructor Classes

2002-11-18 Thread Mark P Jones
Hi Martin,

| The issue I want to raise is whether constructor classes are 
| redundant in the presence of FDs

No, they are not comparable.

Let fds = functional dependencies
ccs = constructor classes

Example of something you can do with ccs but not fds:

   data Fix f = In (f (Fix f))

Example of something you can do with fds but not ccs:

   class Collects e ce where ...   -- see fds paper for details
   instance Eq e = Collects e [e] where ...
   instance Eq e = Collects e (e - Bool) where ...

Your fds version of the Functor class is also incomparable with the
ccs version; the former will allow an expression like (map id 'a')
to be type checked, the latter will not, treating it instead as a
type error.  In this specific case you may regard the extra flexibility
provided by fds as a win for expressiveness.  On another occasion,
however, you may be disappointed to discover that you have delayed
the detection of a type error from the point where it was introduced.

There's a lot more that could be said about this, but I don't have
time to go into detail now.  Hopefully, I have at least answered your
basic question.  The approach you've suggested using fds reminds me
most directly of the work on Parametric Type Classes (PTC) by Chen,
Hudak and Odersky.  In my paper on Functional dependencies, I made
the following comment regarding that work:

 Thus, PTC provides exactly the tools that we need to define and
  work with a library of collection classes.  In our opinion, the
  original work on PTC has not received the attention that it
  deserves. In part, this may be because it was seen, incorrectly,
  as an alternative to constructor classes and not, more accurately,
  as an orthogonal extension.

I believe the same is true in this case.  Ccs and fds address
different problems.  They are complementary tools, each with their
own strengths and weaknesses.

All the best,
Mark

Refs: for those who want to follow along:

  Type Classes with Functional Dependencies
http://www.cse.ogi.edu/~mpj/pubs/fundeps.html

  Constructor Classes
http://www.cse.ogi.edu/~mpj/pubs/fpca93.html
(But read the JFP version instead if you can; it's
much better ...)

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: constraints inside classes

2002-02-22 Thread Mark P Jones

| Hugs and GHC (but not NHC) erroneously accept the following:
| 
| class Foo a where
| f :: Eq a = a - Bool

This behavior of Hugs is intentional, and is documented as a non-bug
on the Hugs web pages:  (right at the end)

  http://cvs.haskell.org/Hugs/pages/bugsandfeatures.htm

I don't know how the non bug status was determined, but perhaps
it reflects a view that the real bug is in the language report,
where a demonstrably unnecessary, and potentially limiting restriction
is imposed.

All the best,
Mark

___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs



RE: syntax...(strings/interpolation/here docs)

2002-02-14 Thread Mark P Jones

|   Usually, I wouldn't make such a fuss, but here documents are really
|   not some new and experimental feature. They're an old hat, and a
|   very useful hat. The only question is how to integrate them into the
|   rest of Haskell.

At a time when many developers pay careful attention to
internationalization (I18n), which requires careful
separation of program text (functionality) and user text
(display), a feature that instead ties them more closely
together seems to be a step in the wrong direction.

The same argument can be made with respect to the widely
recognized advantages of separating functionality (e.g.,
model/controller) from user interface (e.g., view).

Personally, I would prefer a Haskell without Here documents.
Instead of:

greeting = ...
main = putStrLn $ trim text

I would be happy to write:

main = displayResource greeting

leaving the system to locate the text of the greeting from
a separate (perhaps locale sensitive) source at runtime, and
to display it in an appropriate manner.  In other words, I'd
prefer to see the effort that might be spent on standardizing
and implementing Here documents instead directed to designing
and building I18n libraries (probably based on a more general
resource bundle mechanism).

Ultimately, it comes down to a question of what you're
trying to achieve.  Do you envisage Haskell as an elegant
scripting language that competes with perl and ruby for
quick but useful hacks?  Or do you think it might better
serve as a platform for writing significant user level
applications with fancy user interfaces and international
appeal?  Of course, the two are not mutually exclusive,
and the presence or absence of Here documents or resource
bundles will not force the language in either way.  But
it will determine the feel of the language, and so have
an impact on the way that it is perceived and used.

All the best,
Mark

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: Reference types

2002-02-05 Thread Mark P Jones

Hi Simon,

The one parameter scheme that you've described breaks down if you want
to generalize further and allow something like:

  class RefMonad r m where
new   :: a - m (r a)
read  :: r a - m a
write :: r a - a - m ()

  instance RefMonad IORef IO where ...
  instance RefMonad STRef ST where ...
  instance RefMonad Channel IO where ...-- note, this breaks the
  instance RefMonad MVar IO where ...   -- (m - r) dependency

  instance (RefMonad r m, MonadT t) = RefMonad r (t m) where ...
-- and this kills the
-- (r - m) dependency

[This is just an example, not a proposal.]

Note the complete lack of functional dependencies.  I really don't
think they are the right tool here.  Similar uses of fundeps have
appeared in some code for state monads; I don't think they are
appropriate there either.  Bidirectional dependencies are occasionally
useful, but, in general, it is also easy to overuse functional
dependencies (the same, I believe, is true for classes in general).
The simpler type structure you describe looks more appealing to me.

All the best,
Mark

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: n+k patterns

2002-01-29 Thread Mark P Jones

| On Tue, Jan 29, 2002 at 07:36:56AM -0800, Simon Peyton-Jones wrote:
|  The Haskell Report says of n+k patterns:
|  
|  A n+k pattern can only be matched against a value in 
|  the class Integral.
|  
|  This seems far too strong.   All that is needed are Ord (for the =)
|  and Num (for - and fromInteger), and indeed that's what GHC requires. 
|  Do Hugs or nhc actually require Integral?
| 
| Hugs demands Integral and the assumption that only one class is involved
| seems deeply entwined in the code.

Hugs demands Integral because that's what it was told to do to
follow the report.  So in that sense, yes, the code depends on
having only one class.  But it would be easy for someone to change
that.

Then again, if we're following the rules of minimal change for
Haskell 98, then I wouldn't have thought this was up for grabs.
(I'm thinking, for example, of the unnecessary same context
restriction on mutually recursive binding groups, which has more
practical impact, is very clearly a bug, and has not (AFAIK)
been fixed in Haskell 98.  Then there's David Wakeling's
generalized gap proposal, and ...)

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Precision of `Double's in Hugs

2002-01-27 Thread Mark P Jones

Hi Liyang,

| On Mon, Jan 14, 2002 at 12:39:48AM -0800, Mark P Jones wrote:
|  Hugs uses single precision by default because the implementation
|  using double precision relies on a hack whose behavior is not
|  assured in any way by the C language in which it is implemented.
| 
| I found this in src/unix/configure.in: (all commented out)
| 
| dnl Disabled for now because the plugin-code can't handle the change.
| dnl If you're not using plugins, you could turn this on manually
| dnl by setting USE_DOUBLE_PRECISION in options.h.in (before running
| dnl the configure script).
| dnl AC_ARG_ENABLE(double-precision, [  --enable-double-precision 
| use double precision arithmetic],  AC_DEFINE(USE_DOUBLE_PRECISION))
| 
| Could you clarify if this is the `hack' you were referring to? (And
| are there any more? I can live with it if this is the only
| disadvantage in enabling double precision support ...)

No, I don't know anything about this.  The hack I was referring
to has to do with the way that C unions are used to convert double
precision floats into pairs of ints (and back again).

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: Programming style question

2002-01-14 Thread Mark P Jones

Hi Adrian,

| Ah, now I see the issue seems to be closeley related to
| full lazy lambda lifting.

That's right ...

| Do (should) Haskell compilers do this, as a general rule?
| It all seems bit vague to me :-(

I don't think they do, and I'm not sure they should
because the transformation can, in some circumstances,
result in a space leak.  It would probably be better to
specify this aspect of the language semantics more
precisely in the language report but I think there are
some open problems with the theory that would need to be
addressed first.  (e.g., how do you give a semantics for
Haskell that reflects expected/required implementation
behavior will also being abstract enough in accounting
for space usage?)

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Precision of `Double's in Hugs

2002-01-14 Thread Mark P Jones

Hi Dave,

| Note that I'd recommend getting some clarification about this from either
| the current hugs team or Mark Jones (who are probably reading this list),
| as I can vaguely recall a couple of times over the last few years where
| Mark Jone has said `Hugs should not be used for serious numerical work'
| without saying whether this is fixed just by that #define.

There are two issues.  First, Gofer (from which Hugs was later
derived) was not written with numerical work in mind: I am wise
enough to know that great care must be taken in numerical work
to obtain accurate results, but I was not (nor am I still) wise
enough to know exactly what is required.  Frankly, numerical
computation was never a priority in the development of Gofer;
fitting it onto an old 8086 seemed challenge enough.  Second,
Hugs uses single precision by default because the implementation
using double precision relies on a hack whose behavior is not
assured in any way by the C language in which it is implemented.
It does work just fine on many machines, but I would be wary
about depending on it, and hence I wouldn't encourage its use
as the default.

Hope this helps!

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: Programming style question

2002-01-10 Thread Mark P Jones

Hi Adrian,

| If I have defined a function like this..
|   f args = blah args
| it could be re-written..
|   f = blah
| 
| I had always assumed the internal representation of
| these 2 definitions would be identical (and should
| yield identical code), but it appears that isn't so
| (with ghc at least). So..
| 
|  Is there a semantic difference between the two? 
|  Which form is likely to result in faster code?

There are several differences:

- The first will not be subject to the monomorphism
  restriction; the second will require an explicit
  type signature for f to avoid that fate.

- The second will compute a value of blah at most
  once, then cache the result for future use.  That
  could make a program run faster, but if the result
  of blah takes a lot of space, then it could result
  in a space leak.  The first might end up repeating
  the computation of blah each time f is called.

  For example, compare the evaluation of:

let f = (+) (sum [1..1000]) in (f 1, f 2)

  with:

let f x = (+) (sum [1..1000]) x in (f 1, f 2)

  (Hint: run it in Hugs on a slow computer with :set +s
  to see the difference, or replace 1000 with a bigger
  number :-)

  Denotationally, the two expressions are the same.
  (In other words, they both produce the same value.)
  But the example above shows an operational difference
  in some implementation.  (As far as I can tell, however,
  nothing in the language definition either guarantees or
  prevents such behavior.)

- There could be other differences in generated code and
  operational behavior, even when blah is an expression
  that cannot be further evaluated without an argument.
  In Hugs, for example, the definition:

f = \x y - (x,y)

  will be translated into:

f  = f'-- note the extra indirection here!
f' x y = (x,y)

  A compiler with more brains, of course, could do better,
  but the Haskell report doesn't specify which behavior
  you'll get in general.

Personally, I'd tend to let considerations other than
performance affect my choice.

For example, if I'd declared  f :: a - String - [(a, String)]
then I might use a definition like:

   f x s = [(x, s)]  -- two parameters in the type, so two
 -- parameters in the definition

But if the type signature was  f :: a - Parser a  and if I'd
defined:

   type Parser a = String - [(a, String)]

then I'd write the definition of f in the form:

   f x  =  \s - [(x, s)]   -- f is a function of one argument
-- that returns a parser as a result.

Just my 2 cents, however, ...

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: instance declarations

2001-12-10 Thread Mark P Jones

Hi Marcin,

|  There's no solid technical reason for this, but Haskell doesn't allow
|  it at the moment because there isn't an easy way to name an instance
|  declaration.
| 
| There is another problem: even if we created a syntax to name them,
| if they would not be exported by default then current programs would
| have to be changed.

You're right of course, although I consider this a pragmatic issue
rather than a technical problem:  I'm thinking of future languages
that are inspired by current Haskell standards but not constrained
by details of the current definition or existing codebase.

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: Pointers in Haskell??

2001-12-07 Thread Mark P Jones

|   Simon Peyton-Jones. The implementation of functional
|   programming languages. Prentice-Hall, 1987
|  
|  is this book could be made available online ? cos on amazon it seems
|  out of print.
|
| This book is already on-line at
| 
| http://research.microsoft.com/Users/simonpj/Papers/student.ps.gz
| 
| (It was noted on the Books and Tutorials link from haskell.org, which
| is usually the best place to start looking for something Haskell.)

That's a useful resource too, but it's not the book that the first
poster mentioned.  The earlier book was more advanced, more
research-oriented, and (in most respects) covered more material
than the later one (which was intended as an executable tutorial).

Personally, I honestly don't think I would have been able to put
Gofer together without many hours poring over Simon's 1987 book.
(Noting that I'm getting quite old (:-), perhaps I should explain
that Gofer was the predecessor of Hugs ...)  I'll have to take
extra special care of my well-worn copy now that the book is out
of print!

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Picky details about Unicode (was RE: Haskell 98 Report possible errors, part one)

2001-07-23 Thread Mark P Jones

| 2.2. Identifiers can use small and large Unicode letters ...

If we're picking on the report's handling of Unicode, here's
another minor quibble to add to the list.  In describing the
lexical syntax of operator symbols, the report uses:

   varsym- (symbol {symbol | :})_reservedop
   symbol- ascSymbol | uniSymbol
   uniSymbol - any Unicode symbol or punctuation

The last line seems to include more characters than I'd expect.
Specifically:

  ()[]{}  are punctuation (Unicode type Pe, Ps)
  `   is a symbol, modifier (Unicode type Sk)
  ':;,   are punctuation, other (Unicode type Po)
  _   is punctuation, connector (Unicode type Pc)

And, so, if I read the report correctly, I should be able to
define :-) as a consym and `div`, [], and hello as varsyms!
(Not to mention some altogether more bizarre choices!)

I guess the intention here is that:

  symbol  - ascSymbol | uniSymbol_special | _ | : |  | '

In fact, since all the characters in ascSymbol are either
punctuation or symbols in Unicode, the inclusion of ascSymbol
is redundant, and a better specification might be:

  symbol  - uniSymbol_special | _ | : |  | '

All the best,
Mark

P.S.  A caveat: I'm not a Unicode expert!  Perhaps Marcin can
advise ...


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: HUGS error: Unresolved overloading

2001-05-20 Thread Mark P Jones

Hi David,

| Can anyone shed some light on the following error? Thanks in advance.
| 
| isSorted :: Ord a = [a] - Bool
| isSorted [] = True
| isSorted [x] = True
| isSorted (x1:x2:xs)
| | x1 = x2  = isSorted (x2:xs)
| | otherwise = False

I'm branching away from your question, but hope that you might find some
additional comments useful ... the last equation in your definition can
actually be expressed more succinctly as:

  isSorted (x1:x2:xs) = (x1 = x2)  isSorted (x2:xs)

This means exactly the same thing, but, in my opinion at least, is much
clearer.  In fact there's a really nice way to redo the whole definition
in one line using standard prelude functions and no explicit recursion:

  isSorted xs = and (zipWith (=) xs (tail xs))

In other words: When is a list xs sorted?  If each element in xs is
less than or equal to its successor in the list (i.e., the corresponding
element in tail xs).

I know this definition may look obscure and overly terse if you're new
to either (a) the prelude functions used here or (b) the whole style of
programming.  But once you get used to it, the shorter definition will
actually seem much clearer than the original, focusing on the important
parts and avoiding unnecessary clutter.

I don't see many emails on this list about programming style, so this
is something of an experiment.  If folks on the list find it interesting
and useful, perhaps we'll see more.  But if everybody else thinks this
kind of thing is a waste of space, then I guess this may be the last
such posting!

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: argument permutation and fundeps

2001-05-10 Thread Mark P Jones

Hi Jeff,

| Without delving too deeply into your example, it looks like 
| you've bumped into a known bug in Hugs implementation of 
| functional dependencies.  You should try GHCI if you can - it 
| doesn't suffer from this bug.

Are there any plans to fix the bug in Hugs?  (And is there
anywhere that the bug is documented?)

All the best,
Mark


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: Scoped type variables

2001-05-07 Thread Mark P Jones

Hi Simon!

| This is the message that Marcin referred to, proposing a change in
| the semantics of scoped type variables.  I may just go ahead and
| implement it in GHC.   (The implementation is easy: delete a couple
| of lines; and I guess the same is so for Hugs.  The question is whether
| it's a desirable change.)

I think that it is an undesirable change.  For example, the type
theoretic and practical implications of the current semantics are
known quantities.  I'm not aware of similar foundations to support
the new semantics.  Nor have I seen any believable motivation for
the change.  For example:

| I must say that I agree with Marcin's point.  When we write
|   f x = e
| we mean that x is a name for whatever argument f is given.   Marcin
| argues that the same should apply for 
|   f (x :: (a,b)) = e
| Namely, that x is a name for the argument, a is a name for the type of
| the fst of x, and slly b.
| 
| Arguments in favour:
|   - It's more like term-variable pattern matching

Term variables in a pattern are binding occurrences, but type
variables are not.  Making the latter look more like the former
would appear to be a recipe for unnecessary confusion given that
that they are actually different.

(As an aside, pushing an argument like the above, you might also
argue that when we write f x = e we mean that f is a name for
whatever function x is passed to as an argument.  I'm sure we
can all agree here: that would definitely be slly!)

There's good precedent for the current semantics, both in practice
(e.g., Standard ML) and in theory (e.g., typed lambda-calculi).  It
would be a mistake to switch to an untested and unusual semantics,
especially without compelling motivating examples.

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Syntax for implicit parameters

2001-04-26 Thread Mark P Jones

| Marcin Kowalczyk ([EMAIL PROTECTED]) writes:
|  I would like to replace with and dlet with let. But SimonPJ
|  said he won't do it in ghc unless Hugs does it too, and Mark P Jones
|  said he won't do it in Hugs now (without deep reasons: no
|  people/hours to do that, and no plans to release next Hugs version
|  this year).
| 
| I'd really, really like to see a fresh release of Hugs ...

Noticing that Alastair listed my email address in his message, and
that Marcin called me by name in his comments above, I think it's
time for a gentle reminder: I don't work on Hugs any more!

In fact, as I announced at the time, I stopped working on Hugs in
January 2000.  After working on it for almost a decade, I figured
that it was time for me to move on.  That decision was not easy,
in part because of the time and energy that I have invested in
its development; because of the pleasure that your success stories
bring; and because of the dissappointment that I feel when I hear
from people who are frustrated by its limitations and weaknesses.
I am astonished, quite frankly, that Hugs is still in widespread
use today.  I never expected it to last this long, or to have come
along quite so far.  And yet, of course, it is still lacking in many
ways.  But the system has acquired a life of its own, and no longer
reflects my views, needs, or efforts as it once did.

I still retain an interest in Haskell, and (for now at least)
I continue to read (and sometimes reply to) messages on the Hugs
mailing lists.  And I do talk to Johan quite frequently about Hugs;
after all, his office is just a few doors away from mine.  (Johan
is the current maintainer of Hugs, although I sometimes think people
don't realize that it's only a small part of his work, and not a
full-time activity.)  I don't/won't/can't control Hugs as Marcin
suggests so it's not up to me to decide whether a change gets made
... nor am I the one that will make any changes.

[Incidentally, if I did control Hugs, I wouldn't make the suggested
change to dlet/with at this point.  Marcin says I have no deep
reasons ... Hmm, I don't know about deep, but I do have reasons
for this, both technical and pragmatic.  But I'm not going to go into
detail because I don't think it will serve any useful purpose, and
because, if I'm going to let go, then I really do have to let go ...]

Please do with Hugs as you all see fit!  Its future is in your hands
now, not mine!  I hope you'll all have fun with it!

All the best,
Mark


___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs



Functional Dependencies (Was RE: Dimensional analysis with fundeps)

2001-04-10 Thread Mark P Jones

Dear All,

| 1) What is a fundep?

Fundeps are "functional dependencies", which have long been used to
specify constraints on the tables used in relational databases.  In
the current context, people are using "fundeps" to refer to the way
that this idea has been adapted to work with multiple parameter type
classes.  My paper on this subject was presented at ESOP last year,
but the publishers asked me not to distribute the paper electronically
until a year after the conference.  Now that year has passed, and
I am happy to make the paper available:

  http://www.cse.ogi.edu/~mpj/pubs/fundeps-esop2000.pdf

Note that, for reasons of space, I wasn't able to include all the
technical details of the type system in the paper.  If you are
interested in such details, or you are looking for more background,
then you might want to check out my earlier work on "Simplifying
and Improving Qualified Types", which is available from:

  http://www.cse.ogi.edu/~mpj/pubs/improve.html

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Inferring from context declarations

2001-02-21 Thread Mark P Jones

| [One way to compile polymorphic code is to inline all uses of
| polymorphic values...]
| 
| It would require to keep bodies of all polymorphic functions in
| a form allowing instantiation in different modules. For example
| ghc already requires too much memory (even a hundred MB for large
| modules), and it leads to code bloat, so in the current state of
| technology it's not advisable to always compile out polymorphism.

Separate compilation is an issue here, but I'm not at all sure that
code bloat would be a problem in practice.  I say this on the basis
of some experiments that I did (a long time ago now), which you can
find described in my paper:

   Dictionary-free Overloading by Partial Evaluation
   http://www.cse.ogi.edu/~mpj/pubs/pepm94.html
   Be sure to get the (longer) Yale Technical report version,
   not the PEPM paper, which omits some specific material that
   is relevant to this discussion.

The primary purpose of this paper was to investigate what would happen
if overloading was implemented by generating specialized versions of
each overloaded function.  (In other words, by using a specialized form
of partial evaluation to eliminate the need for dictionaries.)  In every
single case that I tried, specialization resulted in *smaller* programs:
although some functions were duplicated, others---which had previously
sat unreferenced inside dictionaries---could now be identified (and
hence removed) as dead code.

I also experimented to see what kind of effect specialization might
have if used to eliminate polymorphism.  (You need to read the Technical
Report version of the paper for this.)  The results of that suggested
that the resulting programs might contain perhaps twice as many functions
as the original.  But note:

 1) Polymorphic functions tend to be quite small (intuitively, the
bigger the code for a function gets, the more constrained its
type becomes).  Because these are the functions whose bodies are
duplicated, a doubling in the number of functions may result in
something much less than a doubling in overall code size.

 2) Small functions are obvious candidates for inlining, so worries
about code bloat may not be realized because the code for many
of these functions is already being duplicated in existing,
inlining compilers.

 3) It may not be necessary to have distinct versions of the code
for a polymorphic function at every possible type; instead,
the implementations at different types could share the same
code.  For example, if, at the lowest level, integers and
list values are represented as 32 bit quantities, then the
implementations for the identity function on Int and for the
identity function on lists could well be the same.  In other
words, we might only need to index implementations on the size
of polymorphic type parameters, and not on the parameters
themselves.  This, I believe, would address the problems that
Lennart described involving polymorphic recursion.  It might
also help to solve the separate compilation issue.

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: kind inference question

2001-02-12 Thread Mark P Jones

Hi Bernie,

You ask why Haskell infers kinds for datatypes in dependency order.
As you point out, if Haskell tried instead to infer kinds for all of
the datatypes in a program at the same time, then it would sometimes
accept programs that are currently rejected.  For example:

|data C x = Foo (B x) (x Int)
|data B y = Bar 
| 
| According to the report (and ghc and hugs) this is ill-kinded. 
| The reason is that kinds must be inferred in dependency order,
| and when parts of a kind are not fully determined they default
| to * (star).
| ...
| However, an alternative kind inference algorithm might allow the
| above declarations, by performing kind inference of C and B together
| (effectively unifying the kind of x and the kind of y). This would
| result in:
| 
|kind (C) = (* - *) - *
|kind (B) = (* - *) - *
|
| I'm not suggesting that allowing this inference would be a good 
| idea, but I am wondering why haskell requires the dependency ordering. 
| Perhaps there are better examples that elucidate the motivation for
| dependency ordering.  Basically I'm just curious.

I think the motivation for using dependency ordering was that then
you base inferred kinds only the minimum amount of information that
is needed.  At the opposite end of the spectrum, you could, in general,
wait until the whole program (i.e., every single module) has been seen.
But then the kind that we infer for a given datatype definition would,
in general, depend on the context in which the definition appeared
(and hence vary from one program to the next) and not depend on the
datatype alone.  That would be very strange!

Should Haskell's currently conservative position be modified to take
into account all of the definitions in a given module?  Or perhaps
all of the definitions in a collection of mutually recursive modules?
Each of these pushes us closer to the problem I described previously.
For my money, they are not a good solution.  I think it would be better
to infer polymorphic kinds for datatype definitions instead of the
current "default to *" wart.

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Yet more on functional dependencies

2001-01-15 Thread Mark P Jones

| I am finding functional dependencies confusing.  (I suspect I am 
| not alone.)  Should the following code work?
| 
| class HasConverter a b | a - b where
|convert :: a - b
| 
| instance (HasConverter a b,Show b) = Show a where
|show value = show (convert value)

It's a separate issue.  There's no reason why a system using
functional dependencies *should* support this.  But it is an
attractive and useful extension that such a system would
probably *want* to include.  (i.e., it's a desirable feature,
not a requirement.)

I typed your example into Hugs on my machine and it seemed to
accept the syntax (which suggests that the implementation is
intended to allow this kind of thing).  But then it choked on
the definition with a curious error message that I suspect is
an indication of a bug in Hugs' treatment of functional
dependencies.  And, for that reason, I'm crossposting this
message to hugs-bugs.  [Let me take the opportunity to remind
good readers of these lists that it is now more than a year
since I retired from the joys of maintaining Hugs, so I'm not
planning to try and track down the source of this bug myself!]
Of course, it could be that my version of Hugs is out of date!

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Problem with functional dependencies

2001-01-04 Thread Mark P Jones

Hi Marcin,

| In particular, should the following be legal:
| 
| class C a b c | a - b c
| instance C [a] b b
| f:: C [a] b c = a
| f = undefined
| 
| ghc panics and Hugs rejects it.

No, it is not legal.  Even if you delete the definition of f, the code
is still not legal because the class and the instance declaration are
inconsistent.

The class declaration says that you want to define a three place relation
on types called C.  We can think of the entries in this relation as rows
in a table with columns headed a, b, and c:

   a  |  b  |  c
 C = -+-+--
  | |

Before we take a look at any instance declarations, this table is empty
(i.e., there are no rows).  But the functional dependency a - b c that
you have specified establishes a constraint that any data that gets added
to the table by subsequent instance declarations must satisfy.  It says
that, if two rows have the same type in the a column, then they must also
have the same types in the b and c columns; "the values of b and c are
uniquely determined by the value of a."

So here are two plausible instance declarations that you could use:

  instance C Int Int Char
  instance C (Maybe t) t Int

Notice that the second declaration here is really an instance scheme;
the presence of a variable "t" means that it introduces a whole family
of instances, one for each possible instantiation of the variable "t".

With these two instance declarations in hand, our table looks something
like the following:

 a | b |  c
 C = --+---+--
Int|   Int | Char
Maybe Int  |   Int | Int
Maybe Bool |   Bool| Int
Maybe Char |   Char| Int
Maybe [Int]|   [Int]   | Int
 Maybe (Maybe Int) | Maybe Int | Int
...|...| ...

Conceptually, of course, there are now infinitely many rows in the table,
so what you see here is just a small part of the relation.  But notice that
the data in the table is entirely consistent with the functional dependency
a - b c because no two rows have the same type in the a column.

Now consider the instance declaration that you have given:

  instance C [t] s s

Again, this is an instance scheme, generating one row in the table
for each possible instantiation of variables "t" and "s".  (To avoid
confusion with the names of the columns in C, I've chosen different
variable names from the ones you've used.)  For example, based on
this instance declaration, we would expect to continue adding rows
to the table along the following lines:

  a  |   b   |   c
 C = +---+---
[Int]|  Int  |  Int   t=Int, s=Int
[Int]|  Bool |  Bool  t=Int, s=Bool
[Bool]   |  Int  |  Int   t=Bool, s=Int
 ... |  ...  |  ...

I hope now that the problem is becoming clear: this instance declaration
is not consistent with the dependency; in the first two lines above, for
example, we see two rows that violate the specification because they have
the same value of "a", but different values for "b" and "c".

In summary, the class declaration and its associated dependency are not
consistent with the instance declaration.  If you really wanted the rows
described by the instance declaration to be part of the relation C, then
the dependency you have written is not valid.  If you really did want the
restriction captured by the dependency, then the instance declaration is
not valid.  Hugs can't tell which of these is the real source of the
problem, but it does report, correctly, that there is an inconsistency.

A little more generally, given the class declaration and the dependency
that you've specified, Hugs will not allow any instance declaration for
something of the form  C t1 t2 t3  if there are variables in t2 or t3
that do not appear in t1.  If this restriction were not enforced, then
it would again be possible for there to be multiple rows with the same
"a" value, but different "b" and "c" entries.

I noticed the same problem in one of the earlier examples that you sent
to the list:

| class Foo a b | a
| instance Foo Int [a]
| -- This is rejected by Hugs (with fundep a-b) but I would definitely
| -- accept it.

I hope that it is now clear why Hugs rejects this definition.

| I don't fully understand fundeps.

The specific point described above is actually discussed twice in my ESOP
paper, once informally, and once in a more general setting.  I encourage
you to take a look at that paper for more details.  If you're basing your
knowledge of fundeps on the (now quite outdated) note on my web page, or
on the section of the Hugs manual on which it was based, you may well have
some gaps to fill in.  I'm not too happy with the ESOP paper either; I
couldn't include as much technical material there as I wanted because of
limited 

RE: Problem with functional dependencies

2001-01-03 Thread Mark P Jones

| I think you can simplify the example.  Given
| 
|   class HasFoo a b | a - b where
| foo :: a - b
|   instance HasFoo Int Bool where ...
| 
| Is this legal?
|   f :: HasFoo Int b = Int - b
|   f x = foo x

The theoretical foundation for functional dependencies goes back to
the work I did on "Simplifying and Improving Qualified Types".
(Interested parties can find a 1994 report on this on my web pages;
email me if you need a pointer.)

According to that theory, the type above is a "principal satisfiable
type" for f, as is the more accurate Int - Bool: under the
satisfiability ordering described in the report, these two types
are (satisfiably) equivalent.  There is, therefore, no technical
reason why the function f could not be treated as having the
polymorphic type shown above.

On the other hand, from a practical perspective, one can argue that
the polymorphic type is misleading, obfuscating, and cumbersome:
Misleading because f doesn't really have a polymorphic type as the
declaration pretends; Obfuscating because it forces a reader to
study instance declarations that are not included in the type;
and Cumbersome because it includes an unnecessary (HasFoo Int b)
constraint that could be eliminated to produce a shorter, simpler
type.

So it comes down to a language design *decision* for which functional
dependencies, by themselves, do not force a particular choice.

- The current Hugs implementation does not allow the polymorphic
  type; the intention in that implementation was to infer more
  accurate, less complex types.  The idea here is to make programs
  easier for programmers to read, write, and understand.

- Marcin indicates that he would prefer the more relaxed approach
  that allows polymorphic types; he is writing a preprocessor that
  generates type signatures, and his task is easier if he doesn't
  have to worry about the "improvement" of class constraints.
  The idea here is to make programs easier for generators to read,
  write and manipulate.

Clearly, some compromise is needed because neither approach is right
for all purposes.  If we look to other aspects of the language for
inspiration, then the best way to deal with this is (probably):
  (i) to infer simpler types whenever possible, but
 (ii) to allow more polymorphic types when they are requested by
  means of an explicit type signature.
(Incidentally, in the interests of consistency, such a system should
also programmers to use types like  Num Int = Int - Bool.)

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Green Card for untyped lambda calculus?

2000-11-28 Thread Mark P Jones

Hi Elke,

| A possible (though probably unusual) characterization
| of list data structures is three functions
| 
| nil :: List a  
| cons:: a - List a - List a
| forlist :: b - (a - List a - b) - List a - b
|
| The implementation I'm interested in (one without 
| constructors) is:
| 
| nil  fornil forcons= fornil
| consx xs fornil forcons= forcons x xs
| forlist  fornil forcons ls = ls fornil forcons

For fans of the untyped lambda calculus, or for enthusiasts of
second order polymorphic lambda calculus, this characterization
of lists is not at all unusual.  But for those of us who spend
most of our time in the Hindley-Milner land, between those two
systems, it may well seem a little unusual.  If you're interested
in a technical understanding of the problems that occur here,
then you might find my old paper on "First-class Polymorphism
with Type Inference" (http://www.cse.ogi.edu/~mpj/pubs/fcp.html)
to be of interest.  It includes this representation of lists as
one example.  As Marcin has pointed out, you can code up these
examples in GHC and Hugs, providing you enable the right command
line settings.  (But beware that there are some differences in
syntax between the paper and the implementations.)

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Hugs and Linux

2000-11-13 Thread Mark P Jones

| Actually, the July2000 release hasn't been noted on
| http://haskell.org/hugs -- is there somewhere else I should be looking
| to keep track of it?

There is/was no July 2000 release of Hugs; Jeff put out a snapshot
from the CVS archive for the convenience of Redhat 7.0 users and
used the July 2000 tag to (try and) avoid confusion.

The most recent Hugs 98 release is still the February 2000 version,
with another release (bug fixes rather than substantial changes)
looking quite likely before the end of the year.

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Mutually recursive bindings

2000-11-05 Thread Mark P Jones

Hi Tom,

Thanks for an interesting example!

| For this code (an example from the Combined Binding Groups section of
| Mark Jones's "Typing Haskell in Haskell"):
| 
| f  :: Eq a = a - Bool
| f x = (x == x) || g True
| g y = (y = y) || f True
| 
| Haskell infers the type:
| g  :: Ord a = a - Bool
| but if the explicit type signature for f is removed, we get:
| f, g :: Bool - Bool

Let's take a closer look at this code.  Polymorphism, of course,
allows us to describe multiple functions with a single piece of
code ... which is what you can see here ... there are really two
versions of f and g: one pair that takes an argument of type "a"
and another that takes an argument of type "Bool".  Let's call
these functions f_a, g_a, f_Bool, g_Bool.  Then the definitions
above are morally equivalent to the following expansion:

  f_a x= (x == x) || g_Bool True
  g_a y= (y = y) || f_Bool True

  f_Bool x = (x == x) || g_Bool True
  g_Bool y = (y = y) || f_Bool True

Note here that f_Bool and g_Bool are mutually recursive.  We can
infer a monomorphic type of Bool - Bool for each of them.  The
definitions of f_a and g_a, however, are not recursive, and it
should be easy to see that we can infer the fully polymorphic
types for each one.

What we have here is an example of the way that duplicating
code can enhance Hindley-Milner style polymorphism, allowing
different copies to be assigned different types.

| So, why do both GHC and Classic Hugs accept the following program?
| ...
| fFix g x = (x == x) || g True
| gFix f y = (y = y) || f True
| 
| fMono x = fFix gMono x
| gMono y = gFix fMono y
| 
| f x = fFix gMono x
| g y = gFix fMono y

I hope this will be clear by now.  You've taken the transformation
that I described above one step further, abstracting out the common
pattern in the definitions of f_a and f_Bool into fFix, and of
g_a and g_Bool into gFix.  Your f, fMono are just my f_a and f_Bool,
while your g, gMono are just my g_a and g_Bool.  Although you've
turned the code back into a single, mutually recursive binding
group, the same basic argument applies.

| Would it be an outright win to have this done automatically?

In general, I think you need to know the types to determine what
transformation is required ... but you need to know the transformation
before you get the types.  Unless you break this loop (for example,
by supplying explicit type signatures, in which case the transformation
isn't needed), then I think you'll be in a catch-22 situation.

Why do you need the type to determine the transformation?  Here's another
example to illustrate the point:

h x = (x==x) || h True || h "hello"

This time, you need to make three copies of the body of h---one for a
generic "a", one for "Bool", and one for "String"---but I don't think
you could have known that 3 versions were needed (as opposed to 2 or 4,
say) without looking at the types.

Incidentally, the transformation can be understood by looking at a
version of the program that makes polymorphism explicit by passing
types as arguments (a la System F/polymorphic lambda-calculus), and
then generating specialized versions for each different argument type.
(The basic method parallels something I did quite a few years ago to
eliminate dictionaries from an implementation of Haskell-style
overloading; see my paper "Dictionary-free Overloading by Partial
Evaluation" for more details.)  This also throws up another issue;
with polymorphic recursion, you might need an *infinite* family of
specialized functions.  Consider the following example:

   r x = (x==x)  r [x],

whose translation to include type parameters would produce:

   r a x = (x==x)  r [a] [x].

(For simplicity, I'm pretending that (==) :: a - a - Bool, making
it fully polymorphic.  Adding type classes doesn't change anything
but obscures the real point.)  Now when you ask for r_a you'll get:

   r_a x = (x==x)  r_[a] [x]
   r_[a] x = (x==x)  r_[[a]] [x]
   r_[[a]] x = (x==x)  r_[[[a]]] [x]
   r_[[[a]]] x = (x==x)  r_a [x]
   ...

This reply has been longer than I'd intended; to anyone still reading,
I hope you had fun, and I hope this made sense!

All the best,
Mark


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: Num class

2000-10-18 Thread Mark P Jones

Hi Koen,

| If Show were not a super class of Num, the following program
| would generate an error:
| 
|   main = print 42
| 
| If Eq were not a super class, the following program would
| not work:
| 
|   main = print (if 42 == 42 then "koe" else "apa")
| 
| These programs are all fixed by inserting Show and Eq as
| super classes of Num. So that one does not even notice!

Your claims are incorrect.  Both of these examples type check
without any errors, and regardless of whether Show and Eq are
included as superclasses of Num.  It is easy to verify this
using "Typing Haskell in Haskell" (http://www.cse.ogi.edu/~mpj/thih);
I'll attach the script that I used for this below.  Put this in
the same directory as all the other .hs files and load it into
Hugs.  Then edit StdPrel.hs to remove the superclasses of cNum,
(replace [cEq, cShow] with []), and it will still work.

| For years I have wondered why the Num class has the Eq class
| and the Show class as super classes.
| 
| Because of this, I cannot make functions an instance of Num
| (because they are not in Eq or Show). Or a datatype
| representing an infinite amount of digits (because Eq would
| not make any sense).
| 
| Now I have found out the reason!

I don't think you have.

I do not know the reason either, but I suspect that it is largely
historical; when Haskell was first designed, the only types that
people wanted to put in Num were also equality and showable types.
By making Eq and Show superclasses of Num, types could sometimes
be stated more concisely, writing things like (Num a) = ... instead
of (Num a, Eq a, Show a) = ...

In the past ten years since the Haskell class hierarchy was, more or
less, fixed, we've seen several examples of types that don't quite
fit (Like functions, computable reals, etc. which might make sense
in Num but not in Eq).  A natural conclusion is that several of the
superclass relations between classes should be removed.  But realize
that there is an unavoidable compromise here: generality versus the
convenience of shorter types.  I suggest that there is no point on
the spectrum that would keep everybody happy all the time.

| It is of the defaulting mechanism of course!
| ...

Defaulting is a red herring in trying to understand why Show
and Eq are superclasses of Num.  Marcin has already pointed
out that your description of the Haskell defaulting mechanism
is not correct by quoting from the Haskell report.  You can
find another description, again based on the report, in the
thih paper.

| So I define a type class:
|   class Num a = Number a where
| convertToDouble   :: a - Double
| convertFromDouble :: Double - a
|... 
| All my library functions now have the shape:
|   libraryFunction :: Number a = ... a ...
| ...
| And now the bad thing... When I use "libraryFunction" on a
| numeric constant, such as 42, I get the error:
| 
|   ERROR "library.hs" (line 8): Unresolved overloading
|   *** Binding : main
|   *** Outstanding context : Number b
| 
| So here are my questions. Why does the default mechanism
| have this restriction? I know that the default mechanism is
| already broken (some desirable properties are destroyed) --
| what properties will be broken by lifting this restriction?

Defaulting only kicks in if (a) at least one class is numeric,
and (b) all classes are standard.  Number is not a standard
class (you just defined it yourself), so defaulting will not
apply.  Defaulting was designed to work in this way so that
(i) it would catch and deal with the most common problems
occurring with numeric literals, and (ii) it would not be used
too often; defaulting is in general undesirable because it
can silently change the semantics.  Again, defaulting is an
example of a compromise in the design of Haskell.  Ideally,
you'd do without it all together, but if you went that way,
you'd end up having to write more type information in your
programs.  And again, I don't suppose there is a universally
satisfactory point on this spectrum.

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Want to do a PhD or PostDoc?   Interested in joining PacSoft?   Let us know!


module SourceFortyTwo where

import Testbed
import HaskellPrims
import HaskellPrelude

-
-- Test Framework:

main :: IO ()
main  = test imports fortyTwo

saveList :: IO ()
saveList  = save "FortyTwo" imports fortyTwo

imports  :: [Assump]
imports   = defnsHaskellPrims ++ defnsHaskellPrelude

-
-- Test Program:

fortyTwo :: [BindGroup]
fortyTwo
 = map toBg
   [[("main", Nothing, [([], ap [evar "print", elit (LitInt 42)])])],
[("main'", Nothing,
 [([], ap [evar "print", 
   eif (ap [econst eqMfun, elit (LitInt 42), elit (LitInt 42)])

RE: type class

2000-10-09 Thread Mark P Jones

Hi Zhanyong,

| In Haskell, instances of a type class can only be well-formed type
| constructors ...
| Note there is no type constructor abstraction.
| 
| In practice, I found this rule too restrictive.

There are good reasons for the restrictions that were alluded to in
my constructor classes paper, and again in Typing Haskell in Haskell.
Some text from emails written when this topic came up previously is
attached to the end of this message.

Actually, the first part of the attached email deals with a different
problem (making Set an instance of Monad), but since that also came
up for discussion again quite recently, I don't think it will hurt to
include it again here.

| How about extending TC with a branch for abstraction:
| 
| TC ::= ...
|  | /\a. TC  -- abstraction
| 
| This is too powerful and will get out of control -- we surely don't want
| to give TC the full power of lambda-calculus.  So let's impose a
| restriction: in /\a.TC, a must occur free in TC *exactly once*.  This
| way, abstraction can only be used to specify with respect to which
| argument a partial application is.  (or I think so -- I haven't tried to
| prove it.)

My instinct (which perhaps somebody will prove incorrect) is that this will
not help.  Suppose, for example, that you needed to unify ([a],[b]) with f c
as part of the type inference process.  How would you solve this problem?
Alas, there are several different, and incompatible ways:

   ([a], [b]) =  (/\a. ([a],[b])) a
  =  (/\b. ([a],[b])) b
  =  (/\c. (c, [b])) [a]
  =  (/\d. ([a], d)) [b]
  =  (/\e. e) ([a], [b])

Note that the /\-terms in each of these examples satisfies your restriction.
So I don't think you'll be able to obtain most general unifiers or principal
types with this restriction.

In my opinion, Dale Miller's work on Higher-order patterns (introduced, I think
in about 1991, but I don't have references) would probably be the best starting
point for serious experimentation in this area.

Hope this helps,
Mark


-- From the archives: ---
Hi Michael,

| "...type synonyms must be fully applied".  I think the above
| example is a valid objection to this.

I'll append some text that I wrote on a previous occasion when somebody
asked why type synonyms couldn't be partially applied.  I hope that it
will help to explain why the restriction is not easy to lift, however
desirable it might be.  The example there was a little different, but
I'm sure that you'll see the correspondence.

| The other example of something that I want to declare as a monad, but
| which I can not is this:  Consider a type of collection of some sort that
| requires the types of the elements to be instances of some specific class.

This too is a problem that has come up quite a few times in the past.
As yet, I'm not sure that anyone has a definitive answer for it either,
although the work that John Hughes presented at the Haskell workshop on
Restricted Datatypes is perhaps the closest that anyone has come so far.
A general problem here is that there are differences between conventional
mathematics---where you can have sets of any type---and the mathematics of
programming languages---where interesting set datatypes can only be
constructed on types whose elements have, at least, an equality.  In Haskell
terms, mathematics has an equality function of type: forall a. a - a - Bool;
the same operator is available to mathematicians who reason about Haskell
programs.  But Haskell programmers have to make do with a more restrictive
operator of type forall a. Eq a = a - a - Bool.  (Which is not actually
an equality operator at all when you look at what's really going on; it's
just a kind of identity function or projection!)

All the best,
Mark
 
Here's the text I promised:

| I'd like to use monadic code on the following type
| type IOF b a = b - IO a
| The following seemed reasonable enough:
| instance Monad (IOF b) where ...
| But Hugs and GHC both object ...

The example is rejected because type synonyms can only be used if a
full complement of arguments has been given.  There are at least two
kinds of problem that can occur if you relax this restriction, but
both are related to unification/matching.

Suppose that we allow your definition.  And suppose that we also allow:
  instance Monad ((-) env) where ...
which is a perfectly reasonable thing to do (it's the reader monad).
Now what should we do when faced with the problem of unifying two
type expressions like:  m c  and  b - IO a ... Haskell unifies these
with the substitution:  {m +- ((-) b), c +- IO a}, but with your
instance decl, you might have preferred { m +- IOF b, c +- a }.
In other words, it's ambiguous, and the choice between these two could
change the semantics because you'll end up picking different instances
depending on which choice you make.

Or consider what you really mean when you write (IOF b) ... my guess
is that 

Typing Haskell in Haskell in HTML (was RE: Literate Programming)

2000-09-27 Thread Mark P Jones

Richard,

| for almost a year now, it has been on my list of things to do to read
| thih.  for reasons too detailed to get into now, except to say that I
| still use a 486 computer (sans printer) at home, I find reading dvi, ps
| and pdf inconvenient and tend to postpone reading them whereas I tend to
| read text and html docs right away.

So why didn't you ask almost a year ago instead of waiting all this time
to complain?  I haven't provided an HTML version before because I didn't
think that it would be useful, and nobody ever wrote to tell me otherwise
... until now.

For you, and for anyone else who would like an HTML version, please make
your way to the thih web page (http://www.cse.ogi.edu/~mpj/thih/) where
you'll find a freshly generated version of thih in HTML.  (Be aware that
the conversion to HTML may have introduced errors; I found and fixed one,
but perhaps there are more.  The pdf file is still the most strongly
recommended version for reading or printing.)

All the best,
Mark





RE: Literate Programming

2000-09-26 Thread Mark P Jones

Hi Koen,

I think that literate programming is a great idea, but I don't
think Haskell does it justice.  I'll suggest a simple solution
to your problem at the end of this message.  You can skip there
now if you want ... or else read on while I rant about this some
more and describe what seems to me to be a better solution to
your problem.

The literate programming conventions using leading ''s (also known
as "Bird tracks" or the "inverted comment convention") go back to
Orwell, a delightfully elegant language that I had the privilege to
use in my own introduction to functional programming.  (Alas, it was
never widely distributed.)  The requirement that there be a blank line
between text and code served two purposes.  The first was to catch
errors that (really do) occur when the leading "" on a line of
code has been forgotten.  The second was to introduce some visible
space between code and text that made the document easier to read
on a terminal screen.  This was the only commenting convention in
Orwell.  It was simple,  elegant, and quite different to anything
that I had seen before.  Some folks were not open to new ways of
doing things (one of the same reasons that FP does not get more
widely used perhaps?), but those of us who were found that it worked
very well.

The joys of literate programming were greatly diluted in Haskell and
its kitchen sink of commenting conventions.  And the later addition
of \begin{code}...\end{code}, for the benefit of LaTeX hackers, was
a further mistake based on a misunderstanding of literate programming.
In particular, the goal is not to have a single document that *does*
everything, serving simultaneously as plain text, LaTeX source, html,
postscript(!), etc.  The real goal is to have a single source document
that contains both program text and comments, and from which any of
the other forms that you want can be *generated*.

We do not (normally) write executable files directly, but instead use
a compiler or assembler to generate them from higher-level sources.
In the same way, I would prefer not to clutter my Haskell programs with
html or LaTeX junk that makes it harder to read, while also limiting
its portability.  I think you should write code as a clean, simple,
literate script.  And then write, as a separate but more general tool,
a utility that will convert those literate scripts into whatever form you
need.  For example, Such a tool might replace the blank line preceding
a "" line with a single "pre" tag, while a blank line after a "" line
would be replaced with "/pre".  Perhaps it trims out the leading ""s
too, because they're not needed now that you've moved to multiple fonts.

 You write:  You get:
 +-+ +--+
 | Blah blah blah  | | Blah blah blah   |
 | | | pre|
 |  foo = bar + zub   | |   foo = bar + zub|
 | | | /pre   |
 | and so on.  | | and so on.   |
 +-+ +--+

(Markup is often required for text too; use whatever seems appropriate
(and sufficiently general) for the application you have in mind.)

There was never any need whatsoever for \begin{code}...\end{code}; a
simple preprocessor would have served just as well for those who needed
it, without cluttering language specifications and implementations with
unnecessary junk!

Tools like this are not difficult to write.  Indeed, they make great
exercises for introductory functional programming classes.  Previous
versions of the Haskell report used to include the code for one such
tool in an appendix.  It was dropped in later versions, perhaps so that
it could be set as an exercise in FP classes (that was my only non-serious
comment in this email; in truth, I think it might have had more to do with
switching to monadic I/O and with reducing the length of the report).

I use exactly the kind of techniques described here in my work.  For
example, I use literate programming for the "Typing Haskell in Haskell"
project (http://www.cse.ogi.edu/~mpj/thih).  From a single source, I
generate LaTeX source (from which dvi, ps, and pdf forms can be obtained)
as well as two different executable versions (a single file version and
a multiple file version).  It all works pretty well.

Oh yes, the simple solution that I promised: try the command line
option -e, which turns off the test for non-blank lines next to
program code.  It's a short term hack, but it will do what you want.

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Long live literate programming!!  \begin{code} ... \end{code}?  Just say No!





RE: Help! Hugs type checking limit

2000-09-24 Thread Mark P Jones

[Let's take any further correspondence on this over into Hugs-bugs,
or at least into haskell-cafe ... thanks!]

| When I try to load the parser generated by Happy for a modest-sized
| language into Hugs I get:
| 
| Too many variables (16000) in type checker
| 
| I'm running Hugs 98 on a Power Macintosh.  I increased the constraint
| cutoff limit to no avail (which didn't surprise me).

Right, the constraint cutoff limit isn't a factor here.

| I am unable to increase the heap size (much) above its
| default 25 cells.  It sounds from the error message,
| though, that 16000 is simply a limit set in the
| Hugs interpreter.

No, it's not an intrinsic limit in Hugs.  The message appears
when a call to malloc (i.e., the memory allocation function
provided by the underlying C implementation) fails.

Perhaps there is a limit in the Mac version of Hugs on the
amount of memory that can be allocated to a program?  Perhaps
the folks that have worked on Mac Hugs can answer this.  Or
perhaps there is a way for you to change memory limits for
Hugs within the MacOS.

| Any ideas on how to make this combination of Happy-generated
| parser and Hugs work together?

Parser generators are good tools for stressing an implementation;
they tend to generate code that no human would ever write, often
with huge groups of mutually recursive functions.  In these days
of multimegabyte machines, the kind of problem you describe doesn't
happen too often any more.  But even on bigger machines, you'll
often see a noticeable pause when Hugs loads the source for a
generated parser.

All the best,
Mark






RE: help, classes!

2000-09-15 Thread Mark P Jones

Hi Kirstin,

| Surely this is obvious, but I cannot figure out how to properly deal with
| class constraints and monads. For instance, when trying 
| 
| instance Monad Set.Set where 
|   xs = f =  Set.unionSet (Set.map f xs)
|   return x =  Set.single x
|   fail s   =  Set.empty
| 
| hugs complains that it "Cannot justify constraints in instance member
| binding" for =. unionSet type is Eq a = Set (Set a) - Set a

This is a long standing problem in Haskell.  Off the top of my head, I
can think of a couple of papers that talk about ways of addressing it.
The first is John Hughes' paper on "Restricted Datatypes in Haskell"
from the 1999 Haskell Workshop:

http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps

The second is Simon Peyton Jones' paper on "Bulk types with class" from
the 1996 Glasgow FP Workshop:

http://research.microsoft.com/Users/simonpj/Papers/collections.ps.gz

Hope these references are of some help!

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD?  Interested in joining PacSoft?  Let us know!





RE: Empty classes of type variables which take an argument.

2000-09-11 Thread Mark P Jones

| The attached file is accepted by GHC 4.08 and Hugs 98.

Here is what was in the attached file:

   module Foo where

   class A a where
  foo :: a value - ()

   class A a = B a where
  toList :: a value - [value]

| However if you remove the declaration of "foo" (and for Hugs,
| the now unnecessary "where"  in the declaration of class A),
| both compilers complain.  It appears that in the absence of
| any information, GHC and Hugs assume that the subject of a
| class declaration takes no type parameters.  So how can I
| declare a trivial class for something which does take a type
| parameter? (Yes, I do have a reason for doing this . . .)

Your problem is with the kind of class A.

In the absence of any information about kinds, Haskell assumes
that the kind * was intended.  Thus a Haskell system sees:

  class A a

and deduces that A contains types.  You want it instead to
contain things of kind (* - *).  See Section 4.6 of the Haskell
report on Kind inference for more explanation.

What you really want here, I suspect, is a kind annotation on
the class parameter ... class A (a :: * - *).  Haskell 98 does
not support this.  It would be useful for datatypes too in some
cases.  The dummy "foo" member serves as an indirect kind annotation
in your original code.  Other ways to achieve a similar effect are
to declare an appropriate superclass (such as specifying that
instances of the class are also Functor or Monad instances).

All the best,
Mark





RE: frantk / overlapping instances

2000-09-01 Thread Mark P Jones

| does type-checking remain decidable (in general) for overlapping instances
| (:+o in hugs)? 

Type checking in Hugs (with -98, at least) isn't decidable,
either with or without overlapping instances!  But decidability
could be recovered by placing stronger syntactic requirements
on the form of class constraints that are allowed on the left
of the = sign in the first line of an instance declarations
(again, either with or without overlapping instances).  In other
words, the two things are pretty much independent.

Back in the old days of Gofer, the combination of overlapping
instances with another, more obscure feature of the type system
actually resulted in unsoundness.  So far as I know, very few
people ran in to this problem in practice, but it was there.
Hugs doesn't include the second feature (use of "instance
specifics"), so doesn't suffer from the same problem.

Incidentally, the jury is still out, as far as I know, on the
interaction of functional dependencies with overlapping instances.
The theoretical work that I've done on functional dependencies
does not consider the possibility of overlaps.  But the implementation
in Hugs does have some experimental extensions (mostly written by
Jeff Lewis) as a first attempt to explore this area.

All the best,
Mark





RE: unlines: the mystery of the trailing \n

2000-08-07 Thread Mark P Jones

Hi Sigbjorn,

| Here's a Prelude inconsistency that's been irking me once 
| in a while for a loong time - today it came up again, so here goes:
| 
|   unlines   ["a","b"]   == "a\nb\n"
|   unwords ["a","b"]   == "a b"
|
| [... unwords adds space between items, not at the beginning or end;
| unlines puts a newline after each item, including at the end ...]

I quite like the fact that the definition for unlines gives us laws
like:

  unlines (xs ++ ys) = unlines xs ++ unlines ys
  unlines . concat   = concat . map unlines

Of course, the fact that unwords doesn't add a terminating space
means that we don't get quite such nice laws for unwords ...

All the best,
Mark





RE: Fundeps

2000-06-25 Thread Mark P Jones

Hi Marcin,

| module M where
| 
| class Seq s a | s - a where
| m :: Seq s b = (a - b) - s a - s b

This combination of constructor classes and functional dependencies
looks very odd!  The dependency says that, if you pick a particular
implementation s of sequences, then there will be at most one choice
for the element type a.  I can have lists of Char or lists of Int,
but not both together!

Perhaps you wanted something more like the following:

  class Seq s a | s - a where
  m :: Seq t b = (a - b) - s - t

Note that the kinds have changed here; s and t are both of kind *,
not * - * as before.  The trouble with this version is that it
doesn't require/ensure that you've used the same form of sequence
for both s and t.  The parametric type classes folks had an answer
for this in the form of a constraint s ~ t requiring that the two
types had "the same outermost constructor", but I don't think it
was really a good solution either.

| instance Seq PS Char where
| m f (PS s) = PS (map f s)
| 
| This is not accepted by Hugs:

Rightly so.  With this definition, you've promised that the element
type corresponding to PS will be Char, which means that you can't
have a different element type in the returned collection!

Hope this helps!

All the best,
Mark





RE: Instance of Functor for functions of = 2 arguments

2000-06-24 Thread Mark P Jones

Hi Matt,

The problem you describe is not uncommon.  It's been a restriction
with constructor classes since they were first introduced.  In fact
it's actually the key to making constructor classes tractable.  You
might find that the following back issue from the Haskell list gives
you some more insight on this question.  (If the Haskell list had a
FAQ, this would surely be on it by now!):

   http://www.mail-archive.com/haskell@haskell.org/msg05356.html

(Thanks to Sven Panne who posted this URL in a previous message)

| I'm stuck.  I'd like to do this:
| 
| instance Functor (a-b-) where
| fmap g f x y = g (f x y)
| 
| ...but of course it has invalid syntax.  I can't think of a way to write
| (a-b-) in the ((-) a) form.  (a-b-c) would be ((-) a ((-) b c)).

One way to deal with this might be to collapse the two parameters of
type a and b into a pair of type (a,b) and then use the Functor
((a,b) -) instead of (a-b-).  Other than that, something like
your newtype code is the only real option.  Functional dependencies
can't help here either because Functor isn't (and shouldn't be)
defined as a multiple parameter class.

In fact you can start to see why the example you have here causes
problems if you think about how you would match up a type of the
form  a - b - c  with the application of a functor  f t.  There
are just too many choices!

   (a - b - c)
= (a -) (b - c)
= (a - b -) c
= Id (a - b - c)
= (\a. a - b - c) a
= ... etc ...

Hope this helps!

All the best,
Mark





RE: Library conventions

2000-06-23 Thread Mark P Jones

|   1) Hugs's error messages don't qualify names, so they become 
|  very difficult to read when you use this convention.
| ...
| ... #1 is the least important in theory, since it's fixable and
| implementation-dependent, but turned out for me to be the most
| important in practice; Hugs' atrocious behavior on this score has 
| caused me to disregard my own better judgement here for serious
| projects.

Despite the way that many people use it, and even with all the
changes that have been made to it, Hugs simply wasn't designed
for "serious projects".  It was intended for small projects,
and as a tool for education and research.  Using qualified names
in error messages can make things unnecessarily harder to read
in such contexts like that.  As an expert user working on large
projects, your perspective is different.  But I don't think the
design choices are as clear cut as you suggest.

Hugs is also quite old; it's core goes back nearly ten years!
With a more "modern" interface, we might solve the interface
dilemma by arranging for fully qualified names, types, etc. to
pop up in a "tooltip" when the user mouses over an identifier
in an error message.  Perhaps other people can suggest more
modest proposals for making intelligent choice of qualifying
prefixes that would fit in more directly with the existing
framework.

A final comment: don't forget that you have full access to the
source code for Hugs, and hence the opportunity not just to
identify weaknesses, but also to fix them, and to share the
results so that everyone benefits!  The same goes for all of
our Haskell systems, not just Hugs.  There just aren't enough
developers to get the job done on their own.  I'm convinced
that the only way we will ever have truly excellent tools is
by working on them together as a community.

All the best,
Mark





RE: negate and sections

2000-06-01 Thread Mark P Jones

Hi Jeff,

| You can write the section (+ x) to specify a function to add `x' to
| something. That's great, then you need to specify a function for 
| subtracting `x' from something.

This is why the "subtract" function is included in the Prelude:

Prelude map (subtract 1) [1..10]
[0,1,2,3,4,5,6,7,8,9]
Prelude

| No so, of course.  (- x) means `negate x'.  Bummer.  What an 
| unpleasant bit of asymmetry!

Of course, (-x) as a negative number probably has more of a precedent
than (-x) as the "subtract x" reading.  Some folks have suggested a
different notation for sections in which an underscore indicates the
position of missing arguments.  That would make (_-x) look different
from (-x), and from (_-_), but it doesn't look particularly pretty
either ...

All the best,
Mark





RE: Binary Search Tree debugging

2000-04-19 Thread Mark P Jones

Hi Andrew,

| Hey all.. I was wondering if somebody might offer me some assistance in
| trying to debug some code I wrote to check whether a tree is a binary
| search tree.. For some reason it always comes back as false! :(  Thanks
| much!

One of the great things about functional programming is the
opportunities that it brings to explore old problems from new
angles.  The code that you wrote is an attempt to describe a
test for binary search trees as a recursive algorithm, which
is the kind of implementation that you'd expect to see in an
imperative language.  As your experience has shown, this kind
of code can be hard to read, and hard to get right.  I'd
encourage you to try a different approach.  Consider the simple
question: When is a tree a binary search tree?  Answer: When its
leaves are arranged in increasing order.  Translating that idea
directly to Haskell gives:

  isBST :: Ord a = Tree a - Bool
  isBST  = increasing . leaves

where:

  leaves :: Tree a - [a]
  increasing :: Ord a = [a] - Bool

The idea here is that leaves and increasing are general purpose
functions that you can use to enumerate the leaves of a tree
from left to right, and to determine whether the values in a
list are increasing, respectively.  I'll leave the definitions
to you, but it shouldn't be too difficult: a simple two line
recursive definition suffices for leaves, while a mildly cunning
one-liner will get you to increasing (hint: think zipWith (=)).
The whole definition is shorter and (IMHO) simpler than the code
you wrote, while at the same time providing useful functions that
might find applications elsewhere in a larger program.

I hope that you'll find my comments here interesting.  Perhaps I
should explain that I responded to your message because it reminded
me of some similar examples that got me excited when I was first
learning a functional programming language.  I'd been an imperative
programmer for some time before then, but as I looked at those
examples, I began to see new ways of solving all kinds programming
problems.  And, in fact, many of those ideas are still useful to me
today, whatever language I happen to be using.

Enjoy!
Mark





RE: libraries for Integer

2000-04-19 Thread Mark P Jones

Hi Sergey,

| In what way the Haskell implementations may use the GMP library?
| (GNU Multi-Precision integers ?)

Hugs 98 doesn't use gmp at all.  For legal reasons (later rendered
irrelevant by changes to the Hugs license), Hugs used it's own
implementation of multi-precision integers.

| And there also exist other powerful libraries for Integer and for the
| number theory. Probably, some of them written in C. One could consider
| exploiting them in the Haskell implementation.

I guess that H/Direct would be the best way to take advantage of these
right now.

All the best,
Mark





RE: Dictionary arguments (was: ServiceShow ..)

2000-04-03 Thread Mark P Jones

| ... Keith's comments:
| ...
| remind me of a Mark Jones idea from a few years ago (PEPM'94; Lisp 
| Symbolic Computation 8, 3, 1995) to use partial evaluation to achieve
| overloading without dictionary arguments.
| 
| Is this idea sitting on a back burner somewhere or was it abandoned due
| to problems with separate compilation and code explosion?

I never saw a single program where code explosion was a problem.
Indeed, in every case, partial evaluation/specialization actually
resulted in *smaller* compiled programs.  To be honest, it wasn't
specialization itself that made programs smaller.  Instead, by
eliminating dictionaries, it made it easier to detect components
that weren't actually needed.  The elimination of this dead code
more than compensated for the increase caused by specialization.

Separate compilation is still an issue, I guess.

Another difficulty arises from the presence of polymorphic recursion
in Haskell.  Gofer didn't have that, so you could be sure that, even
if there was a code explosion, you could at least rely on getting a
finite program.  Consider, however, the following Haskell program:

   f  :: Eq a = a - Bool
   f x = (x==x)(f [x])

To evaluate an expression like (f True) using specialization, you
would have to implement an infinite family of versions of f with
type variable a ranging over Bool, [Bool], [[Bool]], ... etc.
There are a couple of ways that you might try and deal with this,
for example, using a hybrid of dictionary passing and specialization,
but it is one further complication to deal with ...

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!
 





RE: speed of compiled Haskell code.

2000-03-21 Thread Mark P Jones

| In that regard, I think the biggest problems remaining are the lack of a
| standard "fast" string type, and some remaining warts in hugs.  These are
| maybe easiest to see when you do something like "strace -c" on a hugs
| program and the comparable perl program.  So, in my naive version of
| "hello world", the hugs version generates 803 calls to 'lstat', 102 calls
| to 'stat', and a performance-killing 13 calls to 'write'; yup, that's
| one for every character. :-(  throw most of those out, and you're within
| shouting distance of perl.  And that would be something to shout about.

I see big problems in using Hugs as an example in discussions about the
speed of compiled code.  Hugs derives from Gofer, which was designed to
fit on a 16 bit machine with a fairly small memory (several times smaller
than the PDAs, digital cameras, and video cards in use today).  It was
also designed, from the beginning, to be an interactive system based
around a simple read-eval-print loop.  Performance was never the priority.
After all, there were a couple of other places you could turn for a compiler
if you did want performance, and those folks had spent a lot of time and
effort on building their systems.  Hugs was intended to complement rather
than compete with them.

Because interactivity was a goal, Hugs, by default, does indeed call
fflush after every character, which causes the repeated calls to write
that you see.  If it didn't do that, then programs on slow machines or
with expensive underlying computations might have behavior that is counter-
intuitive and confusing, especially to beginners.  We are, after all,
talking about a lazy language, and so you wouldn't think that you'd have
to wait for a whole line of output before you saw the first character.

Surprisingly, perhaps, the performance of Hugs turned out to be good enough
for many tasks, particularly on the machines that we use today, and so
tools like runhugs have become a viable option for some purposes.  But you
should always remember that Hugs came before runhugs, and that the default
distribution is tuned primarily for use as an interactive environment.

There is, in fact, a compile-time setting that you can use to prevent
Hugs from calling fflush after every character (I think it's FLUSHEVERY,
but you should check).  If you set that appropriately, then Hugs I/O
will (or should) run a little more quickly.

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!





RE: The return of the Void [Was: newtypes]

2000-03-19 Thread Mark P Jones

I'd like to respond to recent comments about `strange' newtype
definitions.  Definitions like

  newtype Void = Void Void

or variants using mutual recursion are, in my opinion, entirely
reasonable (even if they are perhaps almost as entirely useless
in practice).  I don't really think it has anything to do with
representations.

I start with the axiom that every type contains at least one
element: a bottom value.

In the case of a "data"type, all remaining elements of the type
are generated by applying its constructors to values of the
appropriate type.

In the case of a "newtype", all remaining elements of the type
are, once again generated by applying its constructor to a value
of the appropriate type.  The only difference here---apart from
allowing only a single constructor, N, and with only a single
argument---is that we expect N bottom = bottom.

So the values of type Void are just bottom, Void bottom,
Void (Void bottom), Void (Void (Void bottom)), ...  But, of
course, because Void bottom = bottom, it quickly follows that
all of these values are just bottom.  Thus bottom is the only
value of type Void.

For the purposes of type checking, "newtype"s are treated just
like "data"types.  For the backend, you can use exactly the
same representation for the argument of the newtype constructor
as you use for the "newtype" itself.  And then constructor
functions like N or Void (and their inverses, used in pattern
matches) can be translated into identity functions, many of
which can be removed by a trivial optimization:  id e = e.

All the best,
Mark




RE: Help! Is there a space leak here?

2000-02-22 Thread Mark P Jones

|   Both simulations run fine for small values of
|   counter "n" but fail badly when "n" becomes big,
|   40,000 say. I happened to have a presentation
|   on the subject of Hawk about two months ago, and
|   the audience was not much impressed when they saw
|   Hugs failing on such simple (in their view)
|   examples. I knew beforehand what would happen for
|   large "n" and I tried to restrict my presentation
|   to small n's, but unfortunately the audience was
|   very inquisitive. You see, those people are accustomed
|   to running their tests for hours a time, so it
|   was natural for them to ask for some big values
|   of n.
| 
|   Not a good publicity for Hugs, unfortunately.

Jan: I don't think you're being very fair.  Do you really
know that the failures you demonstrated were due to a bug
in Hugs?  Is it possible that they might have been caused
by bugs in the program that you were running?  Or perhaps
you simply didn't have Hugs configured with a big enough
heap for the size of the bigger problems that you tried?
If it truly was a bug in Hugs, I hope that you reported
it so that the Hugs maintainers could do something to fix
it?

Joe: As you've observed, the space behavior of Haskell
programs is often very subtle, and hard to understand.
I glanced quickly over your program but didn't see any
immediate signs of problems.  My first suggestion would
be that you try using the rudimentary heap profiler that
Hugs provides to see if this gives some insight into the
source of the leak.  (You'll probably need to be able
to recompile Hugs with profiling enabled for this.)
Failing that, it might be worth trying to put together
a complete example (program and data) that demonstrates
the problem.  I find it rather hard to think about examples
like this in the abstract.  Having code that I can actually
run, can make a big difference in situations like this.  (I'm
not actually promising that I'll have time to investigate
it myself, but I might, and so might some other readers on
this list.)

All the best,
Mark




RE: Haskell update on polymorphic objects

2000-02-09 Thread Mark P Jones

Your example suggests that update  was, perhaps, not the best choice
of terminology!

| data Foo a = Foo { foo :: a
|, bar :: Int 
|} deriving Show
| 
| up s t = s { foo = t }
| 
| Now, what should the type of the function 'up' be?
| 
|   up :: Foo a - a - Foo a
|   ** OR **
|   up :: Foo a - b - Foo b
| 
| Both Hugs and GHC give the later type, 
| ie. the updated object can have
| a different type than the parent object. 
| Pages 26,27 of the Haskell98 report does not
| say anything about polymorphic fields in records.

The typing is implied by the translation of update
expressions [*].  Your example translates to:

  up s t = case s of
  Foo f b - F t b

and the principal type of this is the second type that
you gave above (the one reported by GHC and Hugs).

| What do people think about this? Does it surprise anyone 
| else, or is this a known design decision?

The latter methinks, although it might not have been *well*
known before know :-)

All the best,
Mark

[*] From the beginning of Section 3, cut and pasted direct
from the pdf:

  In this section, we describe the syntax and informal
  semantics of Haskell expressions, including their
  translations into the Haskell kernel, where appropriate.
  Except in the case of let expressions, these translations
  preserve both the static and dynamic semantics.




RE: rounding in Haskell -- a bug in hugs

2000-02-09 Thread Mark P Jones

Hi John,

| This is a "bug" in hugs.
| 
| To illustrate the problem, the next floating point number after 
| 5.0 is 5.0047,
| which hugs also prints as 5.0. One might argue that to display it 
| as 5.005 would
| be misleading, since this number is the closest representable to 
| everything from 
| 5.003 to 5.007. On the other hand, to print only the 
| "certain" digits makes
| show many-to-one, with unfortunate consequences when show and read are
| used as a portable way to store data.
| 
| In fact, hugs' behaviour isn't consistent with the Haskell 98 report.
| ...
| This behaviour is actually documented in the hugs manual, under 
| "conformance with Haskell 98". So as a documented bug, it must be
| a feature -- but perhaps one which might be removed in a future version?

It looks like this might actually be a bug in the Hugs documentation:
the comment that you refer to should perhaps be removed.  Hugs used to differ
from what the Haskell report specified, which is when that comment was
added to the Hugs manual.  But as far as I can tell, the Haskell 98 report
eliminated this discrepancy by avoiding saying anything much about how
Floats or Doubles should be displayed.  The showFloat function from the
Numeric library could be used to implement the show function, but this
isn't required.  And while the Hugs behavior doesn't suit all users
(e.g., those who are interested in every last bit), it is arguably more
appropriate for others (which is why the designers of C chose it as
their default).

The Numeric.showFloat function is there for the more expert programmers
who care about the last few bits after the floating point.  That's the
function that a Haskell programmer should use if they need this kind of
functionality.  I can imagine a complaint along the lines of: but I can't
use Numeric.showFloat because I'm really using "show" on a more complex
data structure, and it doesn't use Numeric.showFloat.  Indeed.  But that's
an argument against the use of overloading, not about how numbers are
displayed.  An overloaded operator can only have one interpretation at
any given type, and if you find yourself with two useful operations, then
you have a problem.  Whichever you choose will be wrong for somebody.
(Possible solutions to this might be found in the use of implicit parameters,
or in a beefed up showsPrec function that takes more than an integer
precedence as its argument.)

The argument that show/read can be used as a portable way to store data
doesn't work for Haskell because the report does not guarantee a portable
implementation for Float and Doubles.  String representations are for
humans; for the benefit of machines, a binary representation would be
better.  And for portability, a portable binary representation is needed.

Finally, as I've said many times, Hugs is not the right language to choose
if you are interested in doing careful floating point work.  An example
that Jerzy mentioned recently was the fact that, by default, floating point
numbers in Hugs (both Float and Double) are represented by single precision
C floats.  Last time the language lawyers looked at this, the consensus
seemed to be that this was technically legal, but obviously not ideal.
Double precision floats can be supported in Hugs, but their implementation
requires a non-portable hack, and so they are turned off by default (much
to Jerzy's frustration, I'm afraid :-().

All the best,
Mark




Implementation of IO monad (was RE: Haskell Clean)

2000-01-24 Thread Mark P Jones

Simon Marlow writes:

| All known Haskell compilers implement the IO type as a function type,
| something like (World - (World, a)).  You can think of the monad 
| as just a convenient way to hide the passing around of the world token.
| 
| And because it is abstract, compilers are free to implement it 
| however they like.

I would refute Simon's first point (it depends on how you interpret
the word "compiler"), but strongly support his second.  In Hugs, I
did actually use a different implementation of the IO monad, based
on continuation passing, which looked something like:

   IO a  =  (a - Ans) - (IOError - Ans) - Ans

In other words, there is no explicit World, and a choice of two
continuations, one for success, and one for failure.  I haven't
looked to see if things have changed since the new team took over
Hugs; presumably they will because Hugs and GHC will have to agree
on what an IO a is if they're going to interoperate.  Nevertheless,
my point remains: you shouldn't think of an (IO a) value as a world
passing function, but as an abstract datatype that represents a
certain kind of computation, and which may admit more than one
implementation.

All the best,
Mark




RE: Type inference and binding groups

2000-01-19 Thread Mark P Jones

Hi Keith,

| Type inference for Haskell (as described in Mark Jones' paper _Typing 
| Haskell In Haskell_ and as performed by GHC) requires first splitting 
| groups of let bindings into strongly-connected components.  It then 
| assumes that all binders in such a component will be generalised over 
| the same vector of type variables.

It isn't strictly necessary to quantify all types in the same binding
group over the same vector of type variables.  And, in fact, I'd disagree
with your claim: the type checker in my paper does not quantify each type
over the same set of variables.  It is true that a single set of type
variables is passed to the quantify call in each case, but these are only
"candidates" for quantification.  Look more closely at the definition of
quantify and you'll see that it selects only a subset of those variables
(and potentially in a different order) when it builds the quantified type.

| What is the justification for this assumption?  Is it always the case 
| that in a strongly-connected component
| 
| let x1 = e1
| x2 = e2
| ...
| xn = en
| in
| e
| 
| if xi has the type forall a b c . ti for some monotype ti then xj must 
| have type forall a b c . tj for some monotype tj?  (modulo 
| permutations, of course)

No, here's a simple counterexample:

 m x y = n y  -- inferred type: a - b - c
 n y   = m undefined y-- inferred type: b - c

Of course, nothing stops you from quantifying over extra type vars if you
want.  Haskell doesn't provide this syntax, but you could treat n as a
function of type (forall a. forall b. forall c. b - c).  The extra
forall a quantifier doesn't do anything useful here, but it doesn't hurt
either.  In fact you can do this with any pair of types.  If A and B are
sets of type variables, then you can rewrite types (forall A. ta) and
(forall B. tb) in the form (forall C. ta) and (forall C. tb), respectively,
for any set of type variables C that includes the union of A and B.

Adding redundant quantifiers like this has some implications in an
implementation that translates to a System F style, type-passing
intermediate language ... which includes GHC and some ML compilers:
An extra quantified variable means an extra parameter in the
translation.

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!




RE: VisualHaskell

1999-12-15 Thread Mark P Jones

| Is there a way to make platform/vendor independent development
| tools for Haskell?

Of course!  That's what a number of us have been doing for some time.  But
platform/vendor specific tools, such as a plugin for Visual Studio or a new
editing mode for Emacs, are also valuable, especially if they make Haskell
more accessible either to new or existing audiences.

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!
 




RE: Constructor Classes

1999-12-08 Thread Mark P Jones

Hi Michael,

| "...type synonyms must be fully applied".  I think the above
| example is a valid objection to this.

I'll append some text that I wrote on a previous occasion when somebody
asked why type synonyms couldn't be partially applied.  I hope that it
will help to explain why the restriction is not easy to lift, however
desirable it might be.  The example there was a little different, but
I'm sure that you'll see the correspondence.

| The other example of something that I want to declare as a monad, but
| which I can not is this:  Consider a type of collection of some sort that
| requires the types of the elements to be instances of some specific class.

This too is a problem that has come up quite a few times in the past.
As yet, I'm not sure that anyone has a definitive answer for it either,
although the work that John Hughes presented at the Haskell workshop on
Restricted Datatypes is perhaps the closest that anyone has come so far.
A general problem here is that there are differences between conventional
mathematics---where you can have sets of any type---and the mathematics of
programming languages---where interesting set datatypes can only be
constructed on types whose elements have, at least, an equality.  In Haskell
terms, mathematics has an equality function of type: forall a. a - a - Bool;
the same operator is available to mathematicians who reason about Haskell
programs.  But Haskell programmers have to make do with a more restrictive
operator of type forall a. Eq a = a - a - Bool.  (Which is not actually
an equality operator at all when you look at what's really going on; it's
just a kind of identity function or projection!)

All the best,
Mark
 
Here's the text I promised:

| I'd like to use monadic code on the following type
| type IOF b a = b - IO a
| The following seemed reasonable enough:
| instance Monad (IOF b) where ...
| But Hugs and GHC both object ...

The example is rejected because type synonyms can only be used if a
full complement of arguments has been given.  There are at least two
kinds of problem that can occur if you relax this restriction, but
both are related to unification/matching.

Suppose that we allow your definition.  And suppose that we also allow:
  instance Monad ((-) env) where ...
which is a perfectly reasonable thing to do (it's the reader monad).
Now what should we do when faced with the problem of unifying two
type expressions like:  m c  and  b - IO a ... Haskell unifies these
with the substitution:  {m +- ((-) b), c +- IO a}, but with your
instance decl, you might have preferred { m +- IOF b, c +- a }.
In other words, it's ambiguous, and the choice between these two could
change the semantics because you'll end up picking different instances
depending on which choice you make.

Or consider what you really mean when you write (IOF b) ... my guess
is that you're thinking of it as adding a kind of lambda, so that

   IOF b = \a. a - IO b

This is appealing, but also means that we'd need to move up to higher-order
unification which is undecidable and non-unitary.  For example, now
we could match m c  to  b - IO a  in all kinds of interesting ways:

 b - IO a  =  (\b . b - IO a) b
=  (\a . b - IO a) a
=  (\z . b - z) (IO a)
=  (\z . b - IO a) Int
=  ...

Now we really have ambiguity problems to worry about!

Requiring type synonyms to be fully applied --- in effect, telling us
that a synonym is nothing more than an abbreviation, and has no other
consequences for the semantics --- seems like a nice way to avoid these
problems.


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!




RE: Existential types, save me now?

1999-11-22 Thread Mark P Jones

Hi Alex,

| Here's some of the threatened examples:
| 
|  data OrdFuncExist = OE (Ord a = Char - a)
|  data OrdListExist = OLE (Ord a = [a])

Perhaps this is a GHC/Hugs difference, but the syntax that you've
used here isn't permitted in Hugs ... and in old versions where it
might have been allowed, it would suggest *universal* quantification
rather than existential.

I just rewrote the definitions to fit with the Hugs syntax for
existentials as follows:

 data OrdFuncExist = forall a. Ord a = OE (Char - a)
 data OrdListExist = forall a. Ord a = OLE [a]

and now the rest of the code that you sent type checks without any
apparent problem, including emap, blah, and emax.

[We keep trying to argue that it makes sense to use the "forall" keyword
here (because the constructor function that is being defined really does
have a polymorphic type).  But perhaps this is a further indication that
the time for a "exists" keyword has come!]

Hope this helps!

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!




RE: Existential types, save me now?

1999-11-22 Thread Mark P Jones

Hi Alex,

| Here's some of the threatened examples:
| 
|  data OrdFuncExist = OE (Ord a = Char - a)
|  data OrdListExist = OLE (Ord a = [a])

Perhaps this is a GHC/Hugs difference, but the syntax that you've
used here isn't permitted in Hugs ... and in old versions where it
might have been allowed, it would suggest *universal* quantification
rather than existential.

I just rewrote the definitions to fit with the Hugs syntax for
existentials as follows:

 data OrdFuncExist = forall a. Ord a = OE (Char - a)
 data OrdListExist = forall a. Ord a = OLE [a]

and now the rest of the code that you sent type checks without any
apparent problem, including emap, blah, and emax.

[We keep trying to argue that it makes sense to use the "forall" keyword
here (because the constructor function that is being defined really does
have a polymorphic type).  But perhaps this is a further indication that
the time for a "exists" keyword has come!]

Hope this helps!

All the best,
Mark


[EMAIL PROTECTED]  Pacific Software Research Center, Oregon Graduate Institute
Looking for a PhD or PostDoc?  Interested in joining PacSoft?  Let us know!




RE: Default declarations

1999-11-03 Thread Mark P Jones

Sven,

You've already heard from nhc, hbc, and ghc, so here's the
perspective from Hugs-land to complete your set.

| What is the rationale for the second condition, i.e. why is no
| defaulting done when a user-defined class is involved? Example:

Defaulting is one of those places where Haskell makes an
uncomfortable compromise between theory and practice.
From a theoretical standpoint (e.g., providing a coherent
semantics for programs using type classes), there is no
justification at all for defaulting.  It is a wart in the
language design that lets a compiler make choices---and
significant ones too, because the can change the
semantics---without direct involvement from the programmer.
But, in practice, the overloading of numeric literals makes
it hard to get by without some defaults around.  This is
especially true in an interactive environment like Hugs,
where people expect to be able to type in 1+1 and get 2,
instead of some bizarre error message about unresolved
overloading.

So the rules for defaulting were designed to try and limit
the number of times that defaulting would be used, but
still allow it to kick in for the most common 1+1-style cases.
You could drop the restriction that you've mentioned.  You
could drop the first condition too that requires at least
one numeric class.  This wouldn't fundamentally break
anything.  It would just mean that the defaulting mechanism
kicks in more frequently, quietly making more decisions about
the semantics that it assigns to your programs.  Note that
the compilers have to do extra work to check these conditions;
it would be easy just to skip those checks if you wanted the
more liberal policy.

One day, there might be a better alternative to defaulting
that gives programmers much more control, and isn't just
restricted to numeric types (anyone for default monads?).
In the meantime, I prefer to use a compiler that enforces
the checks specified in the Haskell report, and so limits
the scope for defaulting in my programs.  It's true that,
once in a while, I have to work a little harder to specify
which instance is intended.  But I'm also grateful for the
opportunity that it gives me to participate more fully in
choosing the intended semantics for my program.

All the best,
Mark




RE: Announcement of AsmGofer (Gofer with state)

1999-10-13 Thread Mark P Jones

Hi Joachim,

| AsmGofer is an extension of TkGofer. TkGofer is an extension of
| Gofer in order to support graphical user interfaces. Gofer is a
| subset of Haskell.

I'd like to urge you to consider making the AsmGofer distribution
more publicly available.  I was shocked to find that I couldn't
just download a copy from your website.  Of course, nothing in the
conditions of distribution and use that are attached to Gofer requires
you to do this.  But a more liberal distribution of AsmGofer would make
it easier for people to find out about the system, and perhaps to adapt
some of the ideas to other systems, particularly Gofer derivatives like
Hugs.  Perhaps you have some specific reasons to be cautious about
making the release more widely available, but it surely can't be
something that is required by Siemens; after all, for better or
worse, the Gofer license doesn't allow commercial use.  (Fortunately,
we've learnt a bit about licensing since then, so Hugs doesn't have
this problem...)

All the best,
Mark







ANNOUNCE: Typing Haskell in Haskell Source Code

1999-10-05 Thread Mark P Jones

The source code for the current version of `Typing Haskell in Haskell'
is now available from:  http://www.cse.ogi.edu/~mpj/thih/  It is, of
course, written in Haskell 98, and has been tested and developed using
Hugs 98.

For those who don't know, `Typing Haskell in Haskell' is an attempt to
produce a formal specification for the Haskell type system in the form
of an executable Haskell program.  A paper describing this system was
presented at the Haskell workshop in Paris last Friday, and is also
available from the web page above.

I hope that `Typing Haskell in Haskell' will be a useful resource for
anyone interested in understanding the Haskell type system, or in
experimenting with new extensions, and I welcome any feedback.  I should
also mention that this is an ongoing project, and further revisions of
the specification/program are already planned or in progress.

All the best,
Mark







RE: tuple component functions

1999-09-16 Thread Mark P Jones

| As Haskell has the standard functions  fst, snd  to decompose  (a,b),
| maybe, it worths to provide also
|   tuple31, tuple31, tuple31,
|   ...
|   tuple51, tuple52, tuple53, tuple54, tuple55
| 
| for the tuples of  n = 3,4,5 ?

Simon PJ and I have written a proposal for Lightweight extensible
records in Haskell.  You can find the most recent version on Simon's
web page or in the link that Erik posted to the proceedings of the
upcoming Haskell workshop.  With the ideas described there, tuples
could be implemented as records, with labels chosen in some natural
way.  (This idea comes from SML.)  For example, you could treat
(e,f,g) as a short hand for {1=e, 2=f, 3=g}.  And, if you have a
tuple t with at least three components, then you can write t.1, t.2,
and t.3 to extract those values.  Notice that you don't have to say
anything about the size of the tuple; the type system will figure
that out for you, and complain as appropriate if you try to access
a component that isn't there.  I think this would be a more attractive
solution than introducing a new, potentially unbounded family of
somewhat awkwardly named projection operators..

All the best,
Mark






RE: Haskell Wish list: library documentation

1999-09-16 Thread Mark P Jones

|  * stToIO . This is often necessary for programs that do 
|stateful things as well as IO. A few years ago, having read
|all relevant papers, I was very perplexed by the problem of
|doing stateful things and IO at the same time.  Eventually I
|realised it is not possible to nest monads,
| 
| But it is possible! You just need to use a monadtransformer:

While the discussion about monad transformers etc. is interesting,
it might be worth pointing out that Hugs 98 already has stToIO.
It's a standard function of type:  ST s a - IO a  that is exported
from both the ST and LazyST libraries.  Tim already explained that
his wishlist was based on Hugs about a year ago.  But that predates
Haskell 98, Hugs 98, and another major effort that we made to
bring Hugs and GHC that little bit closer together.  But if you
haven't downloaded Hugs 98 already, I'd suggest waiting a little
longer for the next release.

All the best,
Mark






RE: Functional Dependencies

1999-09-14 Thread Mark P Jones

Hi Fermin,

| Should redundant dependencies trigger an error or a warning? I'd 
| say that if I'm writing some haskell code, I wouldn't mind if a
| redundancy is flagged as an error; most likely, it'd take a short
| time to fix. However, if someone is generating haskell automatically
| (maybe with Derive, PolyP, a GUI designer, ...), the easiest thing
| to do is to generate all the dependencies that will make the types
| correct, without trying to avoid redundancies. In this case,
| I think a warning is better.

Generating a warning instead of an error might indeed be
preferable.  But I disagree with your motivation.  Except in
cases where a language is specifically designed to be used as
a target such programs, I believe that implementations should
be optimized for the benefit of their human users.  For the
comparatively rare cases where someone is writing a program
that generates Haskell class definitions, including dependencies,
it surely isn't too much to expect them to add a line or two to
filter out any that are redundant?  After all, they are probably
having to work much harder just to pretty print the rest of the
text.

A better solution still would be to have a standardized library
called HaskellSyntax, which all of the code generating tools that
you describe (and more!) could use.  The module would export a
set of datatypes for describing the abstract syntax of Haskell
programs and an associated set of pretty printers.  (Or perhaps
the printer functions by themselves would be enough?)  The pretty
printers would take care of all those minor little issues like
deciding when parentheses were needed, when an operator should
be written with applicative or infix syntax, breaking long strings
across line boundaries, etc.  It could also take care of filtering
out redundant dependencies, for example, if this was considered
useful.  I think this would be a neat library to have around,
especially if the authors of tools like Derive, PolyP, H/Direct,
Happy, or anything else that generates Haskell code could be
persuaded to use it.  I hope somebody will take up the challenge!

All the best,
Mark






RE: Haskell Wish list: library documentation

1999-09-14 Thread Mark P Jones

In a previous message, I wrote:

| Some folks out there want to use Haskell to write real programs.  For
| them, there's Haskell 98.

To which Alex replied:

| To be clear, I am not an academic researcher.  I develop real world
| web sites.  I would really like to use Haskell for this process, but
| the tools and libraries are not yet there to get this done in an
| effective manner.

And Tim added:

| My interest in Haskell is also for writing real programs, some of 
| them business applications. I'm also interested in using it on web
| sites (and CGI applications).

So now I realize that my message could have been taken the wrong way.
I didn't mean to suggest that people writing real world programs
wouldn't (or shouldn't) use any of the various Haskell extensions
floating around.  Far from it!  Without you, there would be little
practical motivation to continue developing any of these extensions.
But "Haskell", as it is today doesn't yet provide those features, so
when you say that you'd like to use Haskell, what I think you really
mean is that you'd like to use what Haskell has the potential to be,
given further enhancements.  So perhaps I should have said: "Some
folks out there want to write programs in a stable language.
For them, there's Haskell 98."  For the rest, there are choices to be
made.  One person may decide that programming in "ghc" rather than
"Haskell" will suit them best.  Another may be more hesitant, but
hopeful that Haskell will soon become the language they want it to be.
And so on ...

To the specifics of Tim's comments:

| I agree. Computer science and other scientific applications tend 
| to be clever programs such as compilers, where data is well
| structured and processing is complex. But business applications
| typically have rather shallow processing, with lots of semi-arbitrary
| types of data records to be edited, moved around, and stored; handling
| these records often accounts for most of the code in a business
| application. In these cases polytypic programming techniques can save
| a lot of work.

I'd really like to know more about this, as might others with interests
in polytypic programming.  Could you provide some examples to illustrate
this?

I ask because most of the examples of polytypic programming that I have
seen are targeted at fancy data structures.  And that doesn't have to
mean much in this setting ... almost anything with recursion in it goes
beyond the kinds of things that I remember from the time that I've spent
writing business applications.  As I recall, much of the code that I
wrote back then spent its time editing and moving around record values,
which sounds much the same as what you've described.

| Also, the categorical prelude (not sure about PolyP) does not provide
| a way to get access to type/field names (this is not interesting from
| a CS POV, but would be very useful for automatically generated GUIs
| for editing records).

I don't understand why you think this wouldn't be interesting from a
CS point of view!  (Or did I just misunderstand the acronyms?)  Haskell
folks have known how to do this sort of thing for a long time (i.e.,
I remember discussions on the Haskell list about it, and that must
have been at least seven years ago now (eek!)), but it didn't make
it to the language or libraries, perhaps because nobody thought it
was likely to be used in practice!

| So far as I'm concerned, for practical purposes, the Haskell language
| is defined by what ghc compiles.

To avoid confusion then, I think you should really say that you are
programming in "ghc", or the ghc dialect of Haskell.

| I'd like to also use Hugs, for a more interactive development
| environment, but it shows little sign of ever being sufficiently 
| compatible (it is becoming increasing compatible in core aspects,
| but I want to use most of the features of ghc, and the benefit of
| having an interpreter is quickly lost in having to support two
| different language dialects).

"shows little sign"?  How big a sign would you like?!  Perhaps you
haven't read the plans for future development of Hugs and GHC, in
which case you won't know that the two systems are well on their
way to merging into one happy, unified bundle, with ghc's compiled
code working side by side with Hugs' interpreted code?  Take a look
at http://www.cse.ogi.edu/~mpj/Hugs98/news.html (also accessible
from the Hugs home page) if you want more details.  Currently, both
systems have things to offer that the other cannot provide, but folks
on both sides of the gap are working hard to bring them closer together.

All the best,
Mark






RE: Functional Dependencies

1999-09-12 Thread Mark P Jones

Hi Heribert,

Thanks for your feedback!

| at the end of section 2 of http://www.cse.ogi.edu/~mpj/fds.html you
| might want to mention that there is a "standard" work-around whenever a
| type constructor is needed but not available: Introduce a newtype.

Yes, an in fact this idea is mentioned at the end of the constructor
classes paper!).  But it doesn't always work, and, even when it does,
you have to deal with the extra newtype constructors in both types and
terms.  The BitSet example shows when it doesn't work ... your reworking
of that example depended on generalizing the collection type to add an
extra parameter.  But suppose that you didn't want to, or couldn't, make
that generalization.  For example, this might occur (and indeed, has
occurred in some of our experiments at OGI) in programs that package up
somebody else's code to be used via a class.  And if you can't generalize,
then you're stuck.

| For the hash-table example I am pretty sure that the work-around works
| as well, even though I could not figure out your intended
| implementation.

For the hash table example, one possible newtype encoding might be
as follows:

   newtype HTable bucket a = Array Int (bucket a)
   instance (Hashable a, Collects a b)
  = Collects a (HTable b a) where ...

But this has some problems too ... suppose (somewhat bizarrely in
this case perhaps) that you wanted to use a composition of two
collection types to build the bucket type.  The type of the
corresponding collection would be: Array Int (outer (inner a)).
This would work just fine with the functional dependencies version,
but for the constructor classes version you'd have to introduce
yet another newtype and instance:

   newtype Compose f g x = Comp (f (g x))
   instance (Collects a g, Collects (g a) f)
  = Collects a (Comp f g) where ...

And so on ...

| Of course this is just a work-around and does not make functional
| dependencies superfluous.

That's true.  There are plenty of things you can do with dependencies
that you can't do with constructor classes, and vice versa.

I picked the Collects example because I thought it would be something
that would be reasonably familiar, but perhaps it has the danger of
giving people the impression that functional dependencies are an
alternative to constructor classes.  That's unfortunate because they're
really pretty orthogonal.

All the best,
Mark






Problems with Haskell 98 Random Specification/Implementation

1999-09-11 Thread Mark P Jones

To those who use or know about random numbers and Haskell:

A couple of months ago, John Hughes sent me mail about a problem that
he had uncovered with the implementation of the Random library in Hugs.
He had been using the "split" function in an attempt to generate a
stream of random number generators, each of which he hoped would be
different from the others.  But instead he found that he actually
ended with many different copies of the *same* random number generator.
A disappointing, and frustratingly non-random result.

If you don't happen to recall, split is a member of the RandomGen class,
with type RandomGen g = g - (g,g); it takes a single random number
generator as its argument, and returns a pair of two new generators as
its result.  The only thing that the specification requires is that the
two generators returned are (a) distinct and (b) `independently robust'
from a statistical point of view.  To the best of my knowledge, the
implementation in Hugs meets this modest specification.  Sadly, assuming
only this specification, you cannot write the function that John was
looking for and be sure that it will generate more than two different
generators.

For example, the specification allows even the following trivial
implementation for split:  split _ = (g1, g2), where g1 and g2 are some
arbitrary but constant pair of distinct, and independently robust
generators.  With this implementation, you can split as often as you
want and you'll never get more that two generators.

Hugs and GHC (as far as I can tell) both use definitions of the form:

   split g = (g, f g)

for some function f.  (My understanding of the code in GHC is that it
uses an unsafe function for f, breaking referential transparency; I hope
the optimizer knows about this.)  Note that this definition returns the
argument as a result; the specification doesn't prohibit that; all it
requires is that the two results returned be distinct.  But if you try
to generate a list of n different generators using:

   take n (iterate (fst . split) g)

then you will be sorely disappointed; you might as well have written
replicate n g.  (On the other hand, if you were lucky enough to have
used (snd . split), instead of (fst . split), then you wouldn't have
noticed the problem ...)

I know very little about the mathematics or pragmatics of random
number generators, so I'm not sure that I know how to fix this
problem.  However, starting from this position of ignorance, I have
hacked up a new version of "split" for the standard "StdGen" that
will appear in the next release of Hugs (real soon now!).  Judging
from the tests that I've tried so far, it seems to work much
better than the old version.  That said:

 - Take care if you use Random.split in your programs, because it
   may not do what you expect.

 - There should probably be an errata about this for the Haskell 98
   library report ... if somebody can figure out what it should say.

 - If you use Hugs, be aware that the implementation of Random.split
   was hacked up by someone who has no way of justifying that
   implementation, beyond some simple experiments.

 - If you know something about the mechanics of random number
   generators, here's an area where Haskell specifications and
   implementations could benefit from your knowledge!

All the best,
Mark






Functional Dependencies

1999-09-11 Thread Mark P Jones

[Simon mentioned my work on `functional dependencies' in one of his
messages a couple of days ago, so I thought I'd better post an
explanation!]

A couple of months ago, I developed and implemented an extension to
Hugs that has the potential to make multiple parameter type classes
more useful.  The key idea is to allow class declarations to be
annoted with `functional dependencies'---an idea that has previously
received rather more attention in the theory of relational databases.
For example, we could use the following declaration as the starting
point for a simple `collection' class library:

   class Collects e ce | ce - e where
  empty  :: ce
  insert :: e - ce - ce
  member :: e - ce - Bool

The new part here is the clause "| ce - e", which tells us that the
"ce" parameter (the type of the collection) uniquely determines the
"e" parameter (the type of the elements).  Dependencies like this
can be used to avoid ambiguity problems; to obtain more accurate, and
less cluttered inferred types; and to allow more general sets of
instances than constructor classes.  If this has got your interest,
you can find more information at http://www.cse.ogi.edu/~mpj/fds.html.

Support for this extension is included in the imminent September 1999
release of Hugs 98.  In the meantime, Jeff Lewis, a colleague here at
OGI, has been putting together an implementation for GHC.  I've also
written a longer paper that gives more technical details, and explains
how this idea fits in with other work, but there are a few things I
want to do to that before it is ready for distribution, probably after
I get back from the Haskell Workshop.

I hope folks will find this useful.  A month or two back, somebody on
this list said that we had no `peer review' process ... people
just do something, then say "here it is, use it".  Well I've done
something ... here it is ... I hope that some of you will use it ...
and I do very much appreciate your feedback.  The peer review
starts today!

All the best,
Mark






RE: Implementation of type classes

1999-09-11 Thread Mark P Jones

Hi Heribert,

| The idea is that for every class assertion in the type of a variable,
| the variable gets an additional parameter that will be instantiated by a
| value representing the appropriate instance declaration. These values
| are tuples (let's call them "instance tuples") containing
| - one component for every superclass (viz. an instance tuple) and
| - one component for every class method (viz. a function with an
|   additional argument expecting the instance tuple)

This is exactly the scheme proposed by Wadler and Blott in their paper
"How to make ad-hoc polymorphism less ad-hoc"!  The things that you've
called "instance tuples" here are more commonly referred to as
"dictionaries", and so the translation that is used to implement
overloading by adding these extra dictionary parameters is known as
the "dictionary passing translation".  It is also the implementation
technique used in all current Haskell compilers/interepreters, as far
as I'm aware.  I'm not particularly proud of them, and they weren't
the main topic, but you can find a couple of chapters, going into
many of the gory details of all this in my thesis (Chaps. 7 and 8, to
be precise.

[Historical aside: I did, however, produce a version of Gofer at one
stage that didn't use dictionaries: the compiler still used a
dictionary passing translation internally, but then followed that
with a specialization phase so that no dictionary values were used
at run-time.  To my initial surprise, it actually resulted in smaller
compiled programs in every example that I tried.  Two caveats: (1) the
technique doesn't work well with separate compilation, and would need
a very fancy linker; (2) the introduction of polymorphic recursion in
Haskell means that there are programs for which the technique cannot
be used.  You can read more about this in my paper on "Dictionary-free
overloading".]

| - Does this work completely as intended or have I missed something, such
|   as strictness problems or the like?

Well I hope I've dealt with all this above.  One thing to note,
however: when you add extra arguments to a variable definition, you
can end up with a function where there wasn't one before, and that
has an impact on which computations get shared, and which don't.
So it can have an impact, and that was one of the original motivations
for the "dreaded monomorphism restriction", which is still there
lurking in the definition of Haskell 98.

| - Does this still work if more complex features of the type/class system
|   are used?

Yes to all the things you listed.  (Except your last point --- "dependent
types" --- to which I'd reply that I'm not exactly sure what you were
referring to ... perhaps the kind of work that Hongwei Xi has been
doing?  Or maybe to the kinds of things that you've seen in Lennart's
Cayenne?  Or perhaps something else.)

| - Can the transformed program be evaluated as efficiently as the
|   original one? (The original program gets an advantage if some class
|   constraints are resolved at compile time. But perhaps a similar effect
|   for the transformed program can be achieved with GHC's transformation
|   rules.)

The real question here is how can the program be executed at all
*without* doing the translation.  This is one of the criticisms
that has been made of Haskell's class mechanisms, and one of the
motivations for other proposals.  You might, for example, want to
take a first look at Odersky, Wadler, and Wehr's "A second look
at overloading" to see the kind of compromises that you might need
to make and the benefits that you might hope to gain.

| - Do type classes provide an advantage with respect to expressiveness of
|   types? For example, can the types of an API be formulated in a more
|   flexible way with classes as compared to the transformed style?

They shouldn't, in theory, but along the route to Haskell, some
additional expressiveness snuck in.  Dictionary components in
Haskell, for example, can have polymorphic types (think of fmap
in the Functor class), but the components of a tuple in a regular
Haskell type cannot.  Similarly, before Haskell went official and
allowed polymorphic recursion, you could still code it up by
hacking with classes.  You can restore the balance by adding
support for rank-2 polymorphism, as is done in Hugs and GHC.
Once that's don't you don't get an extra expressiveness by programming
with type classes, but it might still make some programs seem easier
to code (because you don't have to add the extra parameters yourself).

[Historical aside 2: when constructor classes were first introduced,
several people (myself included) wrote about the "Expressiveness
of constructor classes" ... it was only later that I realized how
much of that expressiveness came from rank-2 polymorphism instead
of higher-order kinds.  I wrote something about this in the tail
sections of my paper on "First-class polymorphism with type inference"
if you want to know more.]

| Most of this is probably well-known 

RE: Haskell on your Web page

1999-09-08 Thread Mark P Jones

Hi Havoc,

Thanks for your comments about Haskell, which Manuel forward to the
Haskell list.  Of the many points that you raised, one on which I
can offer some good news is the following:

| One concern that I have is the Artistic license on Hugs; this might make
| it hard to embed Hugs in other applications (such as the embedded Python,
| Perl, and Guile interpreters in the Gnumeric spreadsheet). An Artistic/GPL
| option like Perl might be nicer, or just the X-style license. I'm sure the
| Hugs authors are sick of hearing about this though.

We've been working on this, and the next release of Hugs is dropping
the Artistic license in favor of a more liberal license; essentially
the same license that is being used for GHC (in part to make the
integration of Hugs and GHC that little bit smoother).  In fact we'd
been planning for the new release of Hugs to go out in August; the
main reason for the delay is that we're waiting for the new license
to be approved.  (I expect this to go through any day now; we're just
trying to make sure that things are done properly, and we're not not
expecting any problems!)

All the best,
Mark






RE: Newbie question

1999-08-20 Thread Mark P Jones

| I have a little problem with the "getArgs" function in Hugs/GHC.
| Something like
|  
|  import System
| 
|  main = do argv - getArgs
| print argv
|  
| prints "[]" (the empty list) and not the list of arguments.
| Since this is definitely not the right behaviour, I must be
| making a mistake.
| I'm using GHC-4.03 and Hugs98 on Windows 98.

As it says in the Hugs manual:

  When using getArgs, only the stand-alone system passes arguments to the
  executing program. The interactive system always uses an empty argument
  list when runnning a program.

In other words, if you run a program from the interpreter, there isn't
any way to pass command line arguments (it certainly wouldn't make sense
to grab the command line arguments that were passed to Hugs, for example).
But command line arguments are available in programs executed using
runhugs.

If this is a problem, you might considering structuring your code something
like:

   main = do argv - getArgs
 begin argv

   begin :: [String] - IO ()
   begin  = ... rest of your code here ...

Now you can call begin during interactive sessions with an extra parameter
to pass in the command line args.

All the best,
Mark






RE: Question

1999-08-19 Thread Mark P Jones

| Actually, I have fond memories of Algol compilers that gave error
| messages pretty much as comprehensible as those above.  I guess the
| problem is that Haskell compilers are prepared by people who have more
| pressing tasks than repeating old work on user friendly error messages
| :-(

Jon's comments bring up an issue that's been on my mind
for some time now.  Although this isn't a direct reply
to Jon's message, I think it might still be a good time
to raise the topic.

One of the greatest disappointments to date of the move
to more liberal (i.e. free software) licenses for systems
like Hugs and GHC, is that it has done almost nothing to
stimulate contributions to the implementations themselves
from outside the immediate (and small) group of developers
concerned.  Compare this, for example, with the Linux
community where the number of external contributors is
often cited as one of the benefits of the development
model used there.  Of course, it may just be the size
of our community, and the subject area: there's a much
greater demand for operating systems than there is for
lazy functional language implementations, and there are
probably a lot more people with expertise in the former
than there are in the latter.  And we shouldn't discount
or forget the valuable contributions that quite a lot
of people already make to Haskell in other ways, by
answering questions on this or related lists, by using
Haskell to build interesting applications, and so on.
What I'd like to do is to stimulate more in the way of
contributions to the implementations.

So perhaps we should be more explicit: I'm sure that all of
us involved in developing Haskell systems would welcome
contributions from the community that will help to make the
tools better.  Better tools will benefit the whole community,
and will make them accessible and useful to a much wider
audience.

This doesn't mean that people shouldn't post bug reports
or gripes about the systems --- the poster may not know
how to fix the problems, but perhaps their message will
inspire somebody else to tackle it.  But I do think that
we need to move away from a "them and us"/"developer and user"
picture, and towards a more community oriented "us".

All the best,
Mark






RE: Question

1999-08-19 Thread Mark P Jones

| Ok my last post was a bit of a silly question on my behalf, but this
| has be stumped.
|  
| data BTree Integer = Leaf Integer | Node Integer (BTree Integer) (BTree Integer)
| ...
| can anyone tell me why I get this error when I compile this.
| ERROR "Btree.hs" (line 2): Illegal left hand side in datatype definition

Yes.  The parameters in a datatype definition are supposed to be type
variables, not type constants like Integer.  If you only need BTrees
with Integer values in them, then you don't need a parameter --- use:

  data BTree = Leaf Integer | Node Integer BTree BTree

and then mkTree will be a function of type Integer - BTree.

If you want a parameter, use:

  data BTree a = Leaf a | Node a (BTree a) (BTree a)

and then mkTree will be a function of type Integer - BTree Integer.
(or Num a = a - BTree a, in its most general form.)

All the best,
Mark






RE: Question

1999-08-19 Thread Mark P Jones

Hi Alex,

| Out of curiosity, how big is the user community?  How many downloads of
| the software?  How many are on this list?

I don't know the answers to any of these, but I think you're implying
"very small", and I'm sure you're right.  Perhaps you're also suggesting
that our community is too small for this kind of development model, and
again that may well be true.  What does this say about the future of
Haskell?

| Also, why are there so many Haskell compilers for so few users?
| There is really only one PD C compiler, GCC, and only one PD Perl
| interpreter, perl (or vice versa on the capitalization).
| Haskell has Hugs, GHC, NHC, HBC, for the core language ...

I think there are plenty of counterexamples here.  For example, we don't
just have Linux ... there's also NetBSD, FreeBSD, and GNU Hurd.  And for
C compilers, we also have egcs and lcc.  And there are several Java
compilers out there, both free and commercial.  But you're right again:
I'm sure that we fragment our small community to some degree by having
multiple implementations, perhaps without achieving critical mass.  One
positive effect of having multiple implementations is that it reduces
the element of risk: If Haskell was the product of one small group,
perhaps without clear funding or long term commitment to maintaining
it, then you'd probably have a harder time justifying any decision to
use it in a new project.  On the other hand, the differences between
implementations can also work against us.  The groups involved have been
actively working together to avoid such problems, but it's not easy.

| As an aside, the cost of this very powerful type system will probably be
| error messages that are probably incomprehsible to those not versed in
| category theory ...

I don't think that's a foregone conclusion.  Also, this is one of the
few areas in current Haskell systems where the developers can actually
justify the effort involved because it raises genuine and interesting
questions for research.  But note that the error messages that prompted
Jon's comment didn't have anything to do with sophisticated type systems.
Dealing with those kinds of things requires some hard work, but it isn't
research, and so it's hard to justify, at least in an academic context.

All the best,
Mark






Typing Haskell in Haskell

1999-08-18 Thread Mark P Jones

Over the past few months, I've been putting together a type checker for
Haskell that is also written in Haskell.  One of the goals of this
project was to obtain a program that was clear and concise enough to serve:

 - As a formal specification of the Haskell 98 type system;

 - As a testbed for experimenting with future extensions of the type
   system.

To meet these goals, the type checker must be consistent with the Haskell
community's interpretation of the Haskell report, and it should be presented
in whatever way (or ways) will best support future experimentation.  I am
therefore very keen to get feedback and suggestions on these or other points.

The type checker is currently available in the form of a 14 page paper that
can be downloaded from http://www.cse.ogi.edu/~mpj/thih/.  Various formats
are available, and a source code distribution will be added soon.  (It will
be distributed as free software.)

Please send comments directly to me ([EMAIL PROTECTED]) and I will summarize
to the list if appropriate.  I will be away next week, but look forward to
seeing any responses, either before I go, or when I return.

Thanks,
Mark






RE: Is their a *good* online tutorial and reference for Haskell?

1999-08-10 Thread Mark P Jones

| Is this why the PDF version of the Haskell report looks so strange? On my
| system (Win98 and Acrobat Reader 4.0) it looks like the baseline
| oscillates up and down between each letter. I find it very difficult to
| read.

I made a pdf version of the Haskell report using pdflatex; fans of
pdf can obtain a copy from http://www.cse.ogi.edu/~mpj/h98.pdf.
This version of the report includes hyperlinks and bookmarks, and
looks *much* better on screen than the version derived from Postscript.
It's even (slightly) smaller ... what a deal!  I've actually found this
version of the report easier to navigate than a printed version.  If
you've ever seen me referring to a section or page number in the
Haskell 98 report, you can be pretty sure that I tracked it down using
this pdf version.  I hope it will be as useful to other people!

Before anyone asks: I'm afraid that I haven't done a pdf version of the
library report.

Personal recommendations follow:
If you want to get the best out of the pdf version of the report, use
Adobe's free reader (http://www.adobe.com/prodindex/acrobat/readstep.html).
If you want to generate nice pdf documents from LaTeX source, use pdflatex,
which is part of the pdftex package available from CTAN (e.g. www.ctan.org).
If you use a Win32 PC, try miktex (www.miktex.de), which includes pdftex
and all the fonts that you need to use it, without further ado!

All the best,
Mark






Clarifying Defaults

1999-07-26 Thread Mark P Jones

I would like some clarification about the circumstances under which defaults
are applied.  The relevant section of the Haskell report is on P49, and
reads
as follows (the annotations are mine):


 "In situations where an ambiguous type is discovered, an ambiguous type
  variable is defaultable if at least one of its classes is a numeric class
 ~~~
  (that is, Num or a subclass of Num) and if all of its classes are defined
  in the Prelude or a standard library (Figures 6–7, pages 83–84 show the
  numeric classes, and Figure 5, page 77, shows the classes defined in the
  Prelude.) Each defaultable variable is replaced by the first type in the
  default list that is an instance of all the ambiguous variable’s classes.
  
  It is a static error if no such type is found."


I'm not sure what the underlined phrases mean: what are the classes of an
ambiguous variable?  I suspect that this language dates back to old versions
of Haskell that insisted on class constraints of the form (C a) in contexts,
where "C" is the name of a class and "a" is a type variable.  In that
setting,
the text above makes good sense and it is entirely reasonable to say that
"C"
is one of "a"'s classes.  But with Haskell 98, we can have constraints of
the
form C (a t1 ... tn) where "a" is still a type variable, but "t1", ..., "tn"
are arbitrary types.  Suppose that an ambiguous variable "v" appears in one
of those types ... is "C" still to be regarded as one of "v"'s classes?  If
so, then the second underlined part above doesn't make a lot of sense: if
"C"
is actually "Monad", for example, then no choice of "v" (necessarily of kind
*)
will be an instance of "Monad" (whose instances have kind * - *).  And even
if "C" is a class of kind *, then it seems quite illogical to require "C v"
to hold just because "v" appears ambiguously in some constraint of the form
"C (a v)".

Current versions of Hugs 98 avoid this problem by adding an extra, implicit
condition to the definition of defaulting, marked with * in the
following:

  If "v" is an ambiguous variable, and
  if "v" appears only in constraints of the form "C v", and *
  if one of "v"'s classes is numeric, and
  if all of "v"'s classes are defined in the prelude or std libraries, and
  if one of the default types is an instance of all those classes,
  then "v" can be defaulted.

Notice that this extra condition, *, is necessary to make sense of the
phrase "v"'s classes in subsequent lines, as I've outlined above.

I'd be grateful for any clarification.  Is there something wrong with my
interpretation of the report, the behavior of Hugs 98, the report itself,
or some subset of these three?

All the best,
Mark

PS.  If you like a concrete example where this makes a difference, here's
a simply (but admittedly bizarre) example that Hugs rejects, where others
might expect defaulting to kick in:

  foo x | c==c  = c  return x
  where c = return 0

The ambiguous type for this example is:

  (Monad a, Eq (a b), Num b) = c - a c







RE: Punning

1999-07-21 Thread Mark P Jones

Hi George,

I believe that punning was taken out of Haskell 98 because of some (IMO,
mistaken) concerns about renaming.  If memory serves me, somebody
had complained that punning `broke alpha-renaming'.  For example, you
can't treat (\x - C{x}) as being equivalent to (\y - C{y}) ... which
all seems perfectly reasonable to me, and in just the same way that you
wouldn't want (\x - "x") to be treated as being equivalent to (\y - "y").
The correct renaming of (\x - C{x}) is, of course, (\y - C{x=y}).

So I think it was a mistake to remove punning.  But that was what the
Haskell committee decided, so Hugs 98 doesn't allow punning in Haskell 98
mode.  It will, however, support punning in extended mode (run with the -98
flag).

Hope that helps!

Mark





RE: Haskell 98

1999-07-13 Thread Mark P Jones

Hi Simon,

| I'm a bit unsure about the wording of the proposed copyright notice, which
| a few people have asked for.  Who owns the copyright?  For lack of better
| I have nominated the editors, myself and John Hughes, but given very
| free-wheeling  permission to reproduce the report.  I hope that you find
| that acceptable.

You might want to model your wording on what the Scheme folks have done.
The Revised^5 Report on the Algorithmic Language Scheme, which is a nice
model for language definitions in other respects too, declares:

 "We intend this report to belong to the entire Scheme community, and so we
  grant permission to copy it in whole or in part without fee. In
particular,
  we encourage implementors of Scheme to use this report as a starting point
  for manuals and other documentation, modifying it as necessary."

Copyright is not explicitly assigned, but my understanding is that it
naturally
defaults to the editors and contributors, without further ado.  If you want
to
see it for yourself, visit: http://swissnet.ai.mit.edu/~jaffer/Scheme.html

This seems less restrictive, and hence more appealing than your current
proposal.

All the best,
Mark






RE: Deriving Enum

1999-07-11 Thread Mark P Jones

| To me, it seems unsatisfactory to have a solution to this pure 
| list problem with auxiliary functions relying on integers.
| It turns out to be a nice exercise to implement
| 
|  diagonalise :: [[a]] - [a]
| 
| without any reference to numbers.

Here's my definition of an integer free diagonalization function.
It is quite different to Wolfram's version because it doesn't
use higher-order functions as data.  However, as written, I think
it is a nice example of programming with higher-order functions,
and, in particular, using function composition to construct a
pipelined program:

 diag :: [[a]] - [a]
 diag  = concat . foldr skew [] . map (map (\x - [x]))
 where skew [] ys = ys
   skew (x:xs) ys = x : comb (++) xs ys

This uses an auxiliary function comb, which is like zipWith
except that it doesn't throw away the tail of one list when it
reaches the end of the other:

 comb:: (a - a - a) - [a] - [a] - [a]
 comb f (x:xs) (y:ys) = f x y : comb f xs ys
 comb f [] ys = ys
 comb f xs [] = xs

Notice that there is only one recursive call in this whole
program, and that's in the implementation of comb, not in diag!

How does it work?  Think of the input list of lists like this:

   [ [  a1,   a2,   a3, ... ],
 [  b1,   b2,   b3, ... ],
 [  c1,   c2,   c3, ... ],
 ... ]

Applying map (map (\x - [x])) to this replaces each element with
a singleton list:

   [ [  [a1],   [a2],   [a3], ... ],
 [  [b1],   [b2],   [b3], ... ],
 [  [c1],   [c2],   [c3], ... ],
 ... ]

Next, we use (foldr skew []) to skew the picture like this:

   [ [  [a1],   [a2],   [a3], ... ],
 [  [b1],   [b2],   [b3], ... ],
 [  [c1],   [c2],   [c3], ... ],
 ... ]

and concatenate the lists in each column:

   [ [a1],  [a2,b1],  [a3,b2,c1],  ... ]

(This is the key part, and you may need to think about it some more
to see how it actually works!)  Finally, the concat function flattens
this into a single list:

   [ a1, a2, b1, a3, b2, c1, ... ]

This works for any type and for any combination of finite and
infinite lists (provided, of course, that none of the lists or
tails are _|_ !)

Have fun!
Mark






RE: Strange lexical syntax

1999-07-01 Thread Mark P Jones

Hi Simon (again!)

| I just uncovered a couple of strange cases in the Haskell lexical syntax.
| If you're not especially bothered about such things, don't bother to read
| on!
|
| Quick quiz:  how many Haskell lexemes are represented by the following
| sequences of characters?
|
|   1)  M.x
|   2)  M.let
|   3)M.as
|   4)  M..
|   5)  M...
|   6)  M.!

Interesting examples!  However, I don't agree with your proposed fix, which
is to regard all of them as single lexemes.  Instead, I think that the time
has come to find another symbol for composition, and let `.' be a token all
by it's lonesome with the traditional reading of selection.  This wasn't an
option for Haskell 98, where it would have been unthinkable (I assume) to
loose (.) as function composition, but it's definitely an option to consider
for any future Haskell 2.  With this change, qualified names would be
handled
at the level of the context free grammar rather than the lexical syntax, and
the rules would be much easier to understand.  Your six examples would then
be  M . x  (3 tokens)  M . let (3 tokens, not a valid name)  M . as (same
again)
M ..   (2 tokens)  M .. . (3 tokens) M . ! (3 tokens)
I'm assuming that both . and .. would be special tokens, and that . would
not
otherwise be permitted in a symbol or identifier (except as a decimal
point!).

Look at the contortions that we're making to keep . for function
composition!
It has to go!  How about ; instead as a nice syntax for forward composition?
(Explicit layout?  What's that?  Maybe it should go too :-)

All the best,
Mark






RE: Overlapping instances?

1999-06-14 Thread Mark P Jones

Let me define some terms.  If pi and pi' are two class constraints,
then we say that pi and pi' are overlapping if S(pi) = S'(pi') for
some substitutions S and S'.  Thus C Int and C [a] do not overlap,
but C (a,Int) and C (Bool, a) do overlap.

As it says in the Hugs manual, overlapping instances are allowed,
providing that one of the instances in each overlapping pair is
strictly more specific than the other.  The definition: pi is
more specific than pi' if S(pi) = pi' for some substitution S.
Let us write this as  pi = pi'.  For example:

   C (Bool, Int) = C (a, Int) = C (a, b) = C a.

For pi to be *strictly* more specific than pi', written pi  pi',
we require that pi = pi', but that pi' /= pi.  For example, all
the inequalities above are strict:

   C (Bool, Int)  C (a, Int)  C (a, b)  C a.

Thus a program containing these four instance decls is acceptable.
The strict inequality gives us a simple rule for choosing which
instance to use in any given situation without ambiguity:  use
whichever is the closest match.  For example, the class constraints
C (Bool,Int), C (Char,Int), C (Bool,Bool), C Bool would be resolved
using the 1st, 2nd, 3rd, and 4th of the constraints above.

For an example of class constraints that satisfy pi = pi' but where
pi is not strictly more specific that pi', note that:

 C (a,b) = C (c,d) = C (a,b).

A pair of overlapping instances in which neither is more specific than
the other leaves an ambiguity: for example, given two instances C (a,Int)
and C (Bool,a), which should we choose to deal with an instance of the
form C (Bool,Int)?  Either alternative could apply, with potentially
different results depending on the definitions in each instance.  One
way to allow overlapping pairs like this *without* ambiguity, would be
to require that the programmer provide a further instance declaration
to cover precisely the overlapping cases.  For the time being, our
implementations take the simpler step of banning such overlaps altogether.

In the first pair of instances in the question that opened this mini-thread,
one instance was strictly more specific than the the other, hence the pair
was allowed.  In the second pair, there was an overlap, as illustrated by
Lars' example, and neither instance was more specific than the other.
Hence the pair of instances is rejected.

All the best,
Mark






RE: y2k compliance

1999-06-09 Thread Mark P Jones

Hi Hugo,

| I would like to know whether haskell compilers (hugs, hbc and lmlc) are
| fuly y2k compliant. Can anyone fill me in on this?

My understanding is that none of the current Haskell implementors
can afford to answer a question like this because none of us have
the developer, support, or legal resources that would be required
to back it up.  It must therefore be left to individuals or groups
of users to determine an appropriate notion of y2k compliance for
their own use, and to take responsibility for determining whether
the appropriate systems meet those standards.  We already provide
source and documentation to help with any such assessment, but we
cannot take it any further.  Of course, if someone does carry out
a y2k  evaluation on one or more Haskell systems, and chooses to
publish their results, then everybody can benefit.

All the best,
Mark






Announcing Hugs 98

1999-06-01 Thread Mark P Jones

__   __ __  __     ___ _
||   || ||  || ||  || ||__ Hugs 98: Based on the Haskell 98 Standard
||___|| ||__|| ||__||  __||Copyright (c) 1994-1999
||---|| ___||  World Wide Web: http://haskell.org/hugs
||   ||Report bugs to: [EMAIL PROTECTED]
||   || Version: May 1999  _


   We are pleased to announce a new release of Hugs, a Haskell
   interpreter and programming environment for developing cool
   Haskell programs.  Sources and binaries are freely available
   by anonymous FTP and on the World-Wide Web.  The release and
   supporting documents can be downloaded from the Hugs home page
   at: http://haskell.org/hugs

   This release is largely conformant with Haskell 98, including
   monad and record syntax, newtypes, strictness annotations, and
   modules.  In addition, it comes packaged with the libraries defined
   in the most recent version of the Haskell Library Report and with
   extension libraries that are compatible with GHC 3.0 and later.

   Additional features of the system include:

   o "Import chasing": a single module may be loaded, and Hugs will
 chase down all imports as long as module names are the same as
 file names and the files are found in the current path.

   o A simple GUI for Windows to facilitate program development.

   o Library extensions to support concepts such as concurrency,
 mutable variables and arrays, monadic parsing, tracing (for
 debugging), graphics, and lazy state threads.

   o A Win32 library for complete access to windows, graphics, and
 other important OS functionalities and a graphics library for
 easy access to Win32 graphics.

   o A "foreign interface" mechanism to facilitate interoperability
 with C.

   Hugs is best used as a Haskell program development system: it boasts
   extremely fast compilation, supports incremental compilation, and
   has the convenience of an interactive interpreter (within which one
   can move from module to module to test different portions of a
   program).  However, being an interpreter, it does not nearly match
   the run-time performance of, for example, GHC or HBC.

   Send email to [EMAIL PROTECTED] to join the hugs-users
   mailing list.  Bug reports should be sent to [EMAIL PROTECTED]
   Send email to [EMAIL PROTECTED] to subscribe to the
   hugs-bugs list.

   The home page for Hugs is at http://www.haskell.org/hugs.

--
 Hugs 98 is Copyright (c) Mark P Jones, Alastair Reid and the Yale Haskell
 Group 1994-99, and is distributed as Open Source software under the
 Artistic License; see the file "Artistic" that is included in the
 distribution for details.
--






RE: Randoms, arrays, monads etc.

1999-05-27 Thread Mark P Jones

It wasn't your main question, but I'd like to respond to one point that
you raised:

| (Works with ghc. Doesn't work with Hugs because of incompatible Random
| module (?).)

The Haskell 98 Random library changed quite significantly, and we didn't
get to do anything about it before the last pre-release of Hugs 98 in 
March.  I'm happy to report that the official release of Hugs 98 will
include an updated version of Random, so you should soon be able to get
your code running under Hugs too.

All the best,
Mark






RE: Contexts on data type declarations

1999-05-18 Thread Mark P Jones

Hi Phil!

|  (Mark says it was very tricky to implement what Hugs does)
| 
| I can't imagine why.  Perhaps Mark can explain?

I should clarify.  Arranging for constraints that appear in the types of
constructors to show up whenever that constructor is used --- whether in
an application or a pattern match --- was not at all tricky.  Indeed, it
seems very natural, and Hugs uses exactly the same code to deal with the
two cases.

The thing that did cause me grief was the section of code to calculate
the types of selectors.  There's no fundamental reason why it should
have been so difficult though.  After all, the rule for calculating the
context of a selector is pretty straightforward, at least in theory: you
just take the union of the contexts for all the constructors to which
that selector might be applied, and use that as the selector's context.
And if you look at the final version of the code, you'll not find any
hint of the battles I fought with it.  But it did cause me problems at
the time.  I don't honestly remember the details, but I think it had
more than a little to do with the task of matching up the constraints
from different constructors so that I could form the union, further
complicated by the interaction with rank 2 polymorphism (which is not
a Haskell 98 feature anyway).

I remember Lennart suggesting at the time that I was crazy to do it
this way anyway; he had just used the rules in the report to generate
code for selectors, and then used the normal type inference mechanisms
to figure out what the type should be.  I thought I could get better
error messages doing it my way, but perhaps I should have just taken
Lennarts advice!

All the best,
Mark







RE: Haskell Type System

1999-05-17 Thread Mark P Jones

| The biggest one is that I would like to be able to make [(ix,el)], [Pair
| ix el], and Array ix el all members of a Find class while should look
| something like this:
|
|   class Find c ix el where
| find :: ix - c - Maybe el
|
| without having to define a new type or introducing the possibility of
| unresolved overloading when the return type isn't explicitly known.
|
| I would _really_ like to know if
|
| 1) Am I correct in assuming this is not possible with current Haskell as
| implemented in Hugs and GHC (ie not just Haskell 98)?

If I understand the terms of your question correctly, then I believe that
you are correct.  For a more precise answer, I would need a more precise
question.

| 2) Does anyone care that this type of thing is not possible?

Yes, they do.  That said, I've often found that people are usually happy
simply to write something like:

   newtype Assocs i e = MkAssocs [(i,e)]

I know it's a bit of a pain to be forced to use the MkAssocs constructor,
but this kind of thing usually works pretty well in practice.  I don't think
you've explained why this solution isn't suitable for your purposes?

| 3) Does any one have a solution to this problem?

I believe that "parametric type classes", introduced by Chen, Hudak and
Odersky at the Lisp and FP conference in 1992 (but as yet unimplemented)
would provide a solution.  You can download their paper from Yale, but
I'm afraid I don't have the URL to hand right now.  (My own paper on
"Simplifying and Improving qualified types" from 1995 is also relevant,
but is more abstracr and has also not been implemented as yet in any general
form.)

| 4) What did you think of my idea for getting around this problem (and a
| lot more) in the post titled "Idea: Nameable type parameters"

Well I think you'd need to spell out your idea in a lot more detail before
anyone could give any firm conclusions.  All you have given us so far is
an example.  What exactly are you proposing in more general terms, and how
would it interact with type inference and modules?  I don't want to
discourage
you from fleshing out the proposal more fully, but I think you should pay
particular attention to these last two points.  For example, the essence
of your proposal seems to be a notation for defining functions on types,
and I suspect that you will have problems making unification work properly
in that setting.

I hope that this helps.

All the best,
Mark






RE: {-# rules

1999-05-03 Thread Mark P Jones

I've seen a couple of messages now about Simon's proposal for
a RULES mechanism in Haskell, but it's clear that I've missed
several of the messages, including the original proposal.  I
suspect this is a result of recent changes in the way that the
list is handled, which should be resolved by now.  Unfortunately,
the missing articles are also not available from the archive at
haskell.org.

[Aside: As a general comment to all readers of the Haskell mailing
list, perhaps I can suggest:  if you've posted something to the list
within the last two weeks, and it hasn't received the kind of
response that you were expecting, then please consider reposting,
as many of us may have missed it the first time round.  Thanks!]

Returning to the main subject of this posting ... the idea of adding
rules to class declarations has been around for a long time.  In fact
the original Wadler and Blott paper that introduced type classes
(How to make ad-hoc polymorphism less ad-hoc) hints briefly at this
exact idea in its conclusions.  Later, in the closing section of my
own paper on Computing with Lattices (JFP 2, 4, 1992), I wrote about
this in a little more detail, and observed that extending the syntax
of Haskell to include rules would allow rules to be type checked and/or
fed as input to a proof checker.

Shortly after that, in a joint report with Luc Duponcheel on `Composing
Monads' (http://www.cse.ogi.edu/~mpj/pubs/composing.html), we used an
uninterpreted === operator to state the monad laws directly in Haskell
notation.  The trick here was to define:

   data Law a   -- Uninterpreted data type
   (===)   :: a - a - Law a
   x === y  = error "uncomputable equality"

Now we can state laws such as the following:

   mapId  :: Functor f = Law (f a - f a)
   mapId   = fmap id === id

   mapCompose :: Functor f = (b - c) - (a - b) - Law (f a - f c)
   mapCompose f g  = fmap f . fmap g === fmap (f . g)

Note that free variables in these laws are represented by variables on
the left hand side, with names like mapId and mapCompose serving as names
for each rule.  Because these are treated as normal Haskell definitions,
they are also subjected to the same process of type checking and type
inference.  I've written out explicit type signatures for these rules,
but they could also have been inferred from just the definition.  There
is still a distinct benefit in having a compiler type check laws like this,
even if you take them no further, for example as a hint to optimizers or
as input to proof checkers.  Note also that by embedding the laws in
Haskell, we get consistency in the type system ... Haskell has constructor
classes, and so this shows up in the laws: the above laws are intended to
hold for each instance of the "Functor" class, as reflected in the types.

Several observations:

 - It might be nice to hide the === operator and the law datatype inside
   a compiler so that they are truly uninterpreted.  Also, one might add
   a few additional operators and connectives.

 - It might be nice to use a syntax that distinguishes laws from regular
   function definitions.  If this is done, then it will be easier for a
   Haskell compiler will be able to identify laws as dead code, and for a
   Haskell--ProofChecker tool to identify the laws.  The syntax that I
   played with looked like the following:

 MapCompose f g  =  fmap f . fmap g === fmap (f . g)

   This doesn't cause any serious parsing conflicts ... it doesn't even
   require any new input tokens!

 - I think it is a mistake to use a syntax that embeds laws inside some
   special comment pragma notation.  Treating laws as proper language
   objects has several advantages, not least being the ability to name
   the laws, and control uses of that name (such as import or export from
   a module) using precisely the same mechanisms that we have in the rest of
   the language.

 - Incorporating proofs into a script means defining an interpretation for
   the Law datatype and associated operators.  One could imagine having
   several implementations of these, each targeted at a different prover
   and/or logic.  Lennart Augustsson has experimented with at least one
   approach to this.

 - Since I wrote about this in my Computing with Lattices paper, I have
   realized that it is a mistake to think about laws as being *part of*
   a class declaration.  Laws should be allowed anywhere that a normal
   declaration is permitted.  There are several reasons for this:

   * Laws are useful for values that are not overloaded, and hence
 do not get defined in a class declaration.

   * Most laws describe *interactions* between several operators, and not
 just the properties of one operator.  So the MapCompose law, for
 example, is about the interaction of fmap and (.) and could equally
 well be placed with the definition of (.), or even in a separate
 module of useful laws.

   * The values in a class declaration have top-level scope, and 

RE: Hugs Error, Legal Haskell 98 Code?

1999-05-01 Thread Mark P Jones

| Is this legal Haskell 98 Code:
| 
| module Test where
| 
| class A a where
|   foo :: a - a
| 
| class (A a) = B a where
|   boo :: a - a
| 
|   foo a = a
| 
| GHC compiles it just fine but the latest vesion of Hugs98 gives me
|   ERROR "test.hs" (line 10): No member "foo" in class "B"
| 
| So I was wondering if giving default definitions for base member in a
| derived class is legal is Haskell 98.

No, I don't think it is.  See page 45 of the report for the details.

| If it is not legal is it a
| proposed extension that Hugs will eventually support?

There aren't any plans to support this kind of thing.  In fact it's
not exactly clear what the semantics for examples like this should
be.  Would you allow the definitions of classes A and B to be
in separate modules?  And would that mean that different defaults
might be applied in different modules?  If so, that would almost
certainly break the coherence property that is required to ensure
that overloading has a well-defined meaning.

I wonder what semantics GHC uses here, and how it is implemented.
Or perhaps it just treats the definition of foo as a local definition
with a scope that is restricted to the body of the second class
declaration.

All the best,
Mark






  1   2   >