Re: Fractional/negative fixity?

2006-11-10 Thread Ben Rudiak-Gould

[EMAIL PROTECTED] wrote:

I think that computable real fixity levels are useful, too.


Only finitely many operators can be declared in a given Haskell program. 
Thus the strongest property you need in your set of precedence levels is 
that given arbitrary finite sets of precedences A and B, with no precedence 
in A being higher than any precedence in B, there should exist a precedence 
higher than any in A and lower than any in B. The rationals already satisfy 
this property, so there's no need for anything bigger (in the sense of being 
a superset). The rationals/reals with terminating decimal expansions also 
satisfy this property. The integers don't, of course. Thus there's a benefit 
to extending Haskell with fractional fixities, but no additional benefit to 
using any larger totally ordered set.
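The density property is easy to exhibit directly. A small sketch (the function name `between` is made up for illustration): given finite sets of existing rational precedences, with everything in the first set strictly below everything in the second, produce a new precedence strictly between them.

```haskell
-- A sketch (the name `between` is invented): given finite sets of
-- precedences `los` and `his`, with every element of `los` strictly
-- below every element of `his`, produce a precedence strictly between
-- them. This is the density property the rationals provide and the
-- integers lack.
between :: [Rational] -> [Rational] -> Rational
between []  []  = 0
between los []  = maximum los + 1
between []  his = minimum his - 1
between los his = (maximum los + minimum his) / 2
```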


-- Ben

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Fractional/negative fixity?

2006-11-10 Thread Ben Rudiak-Gould
I'm surprised that no one has mentioned showsPrec and readsPrec. Anything 
more complicated than negative fixities would require their interfaces to be 
changed.
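For reference, the precedence parameter in those interfaces is an Int, which is why negative fixities still fit but fractional ones would not. A minimal sketch of how showsPrec threads an integer precedence today (the Expr type is made up for illustration):

```haskell
infixl 6 :+:
data Expr = Lit Int | Expr :+: Expr

-- showsPrec's first argument is an Int precedence context; this is the
-- interface that would have to change for anything richer than
-- (possibly negative) integer fixities.
instance Show Expr where
  showsPrec d (Lit n)   = showsPrec d n
  showsPrec d (a :+: b) = showParen (d > 6) $
    showsPrec 6 a . showString " + " . showsPrec 7 b
```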


-- Ben



Re: Class System current status

2006-05-12 Thread Ben Rudiak-Gould

Johannes Waldmann wrote:

class ( Show p, ToDoc i, Reader b, ToDoc b, Measure p i b )
=> Partial p i b | p i -> b  where ...   -- (*)

(*) A funny visual aspect of FDs is the absurd syntax.
On the left of |, the whitespace is (type arg) application,
but on the right, it suddenly denotes sequencing (tupling)


I think it's fine. The p i b on the left is effectively a tuple also. It 
could be a tuple---i.e. the MPTC syntax could be Partial (p,i,b) and it 
would still make sense.


The class declaration syntax is totally screwy anyway. Functional 
dependencies are constraints, and should be grouped with the typeclass 
constraints, but instead they're on opposite sides of the head. Plus the => 
implication is backwards. And the method declarations are also constraints. 
We oughta have


class Partial p i b where
  Foo p
  (p,i) -> b
  grok :: p -> i -> b

or

class Partial p i b | Foo p, p i -> b where
  grok :: p -> i -> b

or something. But I'm not proposing anything of the sort. I'm in favor of 
standardizing the syntax we've got. Syntax changes are disruptive, and I 
don't think they're justified unless they free useful syntax for another 
use, which this wouldn't.
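For comparison, the syntax we've got, as GHC accepts it today (the instance is a made-up toy, just to exercise the class):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}

-- The standard-candidate syntax: superclass constraints left of =>,
-- functional dependencies right of |, methods in the body.
class Show p => Partial p i b | p i -> b where
  grok :: p -> i -> b

-- A toy instance (invented for illustration): keyed lookup.
data Table = Table deriving Show
instance Partial Table [(String, Int)] (String -> Maybe Int) where
  grok _ assocs = \k -> lookup k assocs
```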


-- Ben



Re: deeqSeq proposal

2006-04-05 Thread Ben Rudiak-Gould

Andy Gill wrote:

- [various reasons for deepSeq]


You left out the one that most interests me: ensuring that there are no 
exceptions hiding inside a data structure.



deepSeq :: a -> b -> b


This ties demand for the (fully evaluated) normal form of an expression to 
demand for the WHNF of a different expression, which is a bit weird. I think 
it's cleaner for the primitive to be your "strict", which ties demand for 
the normal form of an expression to demand for the WHNF of the same 
expression. In fact I'd argue that deepSeq should not be provided at all 
(though of course it can be defined by the user). The analogy with seq is a 
bit misleading---deepSeq is a lot less symmetric than seq. The expressions 
(x `deepSeq` y `deepSeq` z) and (strict x `seq` strict y `seq` z) are 
equivalent, but only the latter makes it clear that z doesn't get fully 
evaluated.
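The relationship is easy to state with a user-level class. A sketch (today this role is played by NFData in the deepseq package; here the class and instances are made up to follow the thread):

```haskell
-- A user-level sketch of full evaluation. `strict` ties demand for the
-- normal form of x to demand for x itself; deepSeq is then a derived
-- form, and (x `deepSeq` y) behaves like (strict x `seq` y).
class NF a where
  rnf :: a -> ()

instance NF Int where
  rnf x = x `seq` ()

instance NF a => NF [a] where
  rnf []     = ()
  rnf (x:xs) = rnf x `seq` rnf xs

strict :: NF a => a -> a
strict x = rnf x `seq` x

deepSeq :: NF a => a -> b -> b
deepSeq x y = rnf x `seq` y
```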


-- Ben



Re: Strict tuples

2006-03-22 Thread Ben Rudiak-Gould

John Meacham wrote:

ghc's strictness analyzer is pretty darn good. If
something is subtle enough for the compiler not to catch it, then the
programmer probably won't right off the bat either.


Even the best strictness analyzer can't determine that a function is strict 
when it really isn't. The main point of strictness annotations, I think, is 
to actually change the denotational semantics of the program.



strictness does not belong in the type system in general. strictness
annotations are attached to the data components and not type components
in data declarations because they only affect the desugaring of the
constructor, but not the run-time representation or the types in
general. attaching strictness info to types is just the wrong thing to
do in general I think.


Your argument seems circular. Haskell 98 strictness annotations are just 
sugar, but they didn't *have* to be. You can say that f is strict if f _|_ = 
_|_, or you can say it's strict if its domain doesn't include _|_ at all. 
One feels more at home in the value language (seq, ! on constructor fields), 
the other feels more at home in the type language (! on the left of the 
function arrow, more generally ! on types to mean lack of _|_).


-- Ben



Re: Strict tuples

2006-03-22 Thread Ben Rudiak-Gould

Bulat Ziganshin wrote:

Taral wrote:
T I don't see that more optimization follows from the availability
T of information regarding the strictness of a function result's
T subcomponents.

ghc uses unboxed tuples just for such sort of optimizations. instead
of returning possibly-unevaluated pair with possibly-unevaluated
elements it just return, say, two doubles in registers - a huge win


Mmm, not quite. Unboxed tuples are boxed tuples restricted such that they 
never have to be stored on the heap, but this has no effect on semantics at 
all. A function returning (# Double,Double #) may still return two thunks.


-- Ben



Re: small extension to `...` notation

2006-03-08 Thread Ben Rudiak-Gould

Philippa Cowderoy wrote:

On Wed, 8 Mar 2006, Doaitse Swierstra wrote:

 xs `zipWith (+)` ys


There is one problem with this: it doesn't nest [...]


Another problem is that it's not clear how to declare the fixity of these 
things. Should they always have the default fixity? Should they be required 
to have the form `ident args` and use the fixity of `ident`? Neither 
approach seems very clean.


-- Ben



Re: relaxed instance rules spec (was: the MPTC Dilemma (please solve))

2006-03-07 Thread Ben Rudiak-Gould

John Meacham wrote:

On Thu, Mar 02, 2006 at 03:53:45AM -, Claus Reinke wrote:

the problem is that we have somehow conjured up an infinite
type for Mul to recurse on without end! Normally, such infinite
types are ruled out by occurs-checks (unless you are working
with Prolog III;-), so someone forgot to do that here. why?
where? how?


Polymorphic recursion allows the construction of infinite types if I
understand what you mean.


No, that's different. An infinite type can't be written in (legal) Haskell. 
It's something like


type T = [T]
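(That synonym is itself rejected, since type synonyms can't be recursive; wrapping the recursion in a constructor is what makes it expressible. A small sketch, with made-up names:)

```haskell
-- `type T = [T]` is rejected, but a newtype names the same recursive
-- structure legally: the constructor breaks the occurs-check cycle.
newtype T = T [T]

-- A toy consumer: nesting depth of such a value.
depth :: T -> Int
depth (T ts) = 1 + maximum (0 : map depth ts)
```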

-- Ben



Re: overlapping instances and constraints

2006-03-01 Thread Ben Rudiak-Gould

Niklas Broberg wrote:

Ben Rudiak-Gould wrote:

Are there uses of overlapping
instances for which this isn't flexible enough?


Certainly!


Hmm... well, what about at least permitting intra-module overlap in Haskell' 
(and in GHC without -foverlapping-instances)? It's good enough for many 
things, and it's relatively well-behaved.



instance (Show a) => IsXML a where
 toXML = toXML . show

The intention of the latter is to be a default instance unless another
instance is specified.


I can see how this is useful, but I'm surprised that it's robust. None of 
the extensions people have suggested to avoid overlap would help here, clearly.


Are there uses of overlapping instances for which the single-module 
restriction isn't flexible enough, but extensions that avoid overlap are 
flexible enough?


-- Ben



Re: Export lists in modules

2006-02-23 Thread Ben Rudiak-Gould

Malcolm Wallace wrote:

An explicit interface would be useful for many purposes besides
machine-checked documentation.  For instance, it could be used to
eliminate the hs-boot or hi-boot files used by some compilers when
dealing with recursive modules.


Why *does* ghc require hs-boot files? What can be gleaned from an hs-boot 
file that couldn't be expressed in the corresponding hs file? For example, 
why doesn't ghc simply require that at least one module in a recursive group 
contain an explicit export list mentioning only explicitly typed symbols?


-- Ben



Capitalized type variables (was Re: Scoped type variables)

2006-02-23 Thread Ben Rudiak-Gould

I wrote:

What I don't like is that given a signature like

x :: a -> a

there's no way to tell, looking at it in isolation, whether a is free or 
bound in the type.  [...]


Here's a completely different idea for solving this. It occurs to me that 
there isn't all that much difference between capitalized and lowercase 
identifiers in the type language. One set is for type constants and the 
other for type variables, but one man's variable is another man's constant, 
as the epigram goes. In Haskell 98 type signatures, the difference between 
them is precisely that type variables are bound within the type, and type 
constants are bound in the environment.


Maybe scoped type variables should be capitalized. At the usage point this 
definitely makes sense: you really shouldn't care whether the thing you're 
pulling in from the environment was bound at the top level or in a nested 
scope. And implicit quantification does the right thing.


As for binding, I suppose the simplest solution would be explicit 
quantification of the capitalized variables, e.g.


f :: forall N. [N] -> ...
f x = ...

or

f (x :: exists N. [N]) = ...

Really, the latter binding should be in the pattern, not in the type 
signature, but that's trickier (from a purely syntactic standpoint).


What do people think of this? I've never seen anyone suggest capitalized 
type variables before, but it seems to make sense.


-- Ben



Re: Module System

2006-02-22 Thread Ben Rudiak-Gould

Simon Marlow wrote:

there's a lack of modularity in the current
design, such that renaming the root of a module hierarchy requires
editing every single source file in the hierarchy.  The only good reason
for this is the separation between language and implementation.


I don't see how this is related to implementation. Surely all the language 
spec has to say is that the implementation has some unspecified way of 
finding the code for a module given only its canonical name, along with (if 
desired) a way of expanding a glob to a list of canonical names. Then the 
module namespace reform boils down to rules for converting partial names 
into canonical names. I can't see how any useful functionality in the module 
system could depend on a particular way of organizing code on disk.


-- Ben



Re: The worst piece of syntax in Haskell

2006-02-22 Thread Ben Rudiak-Gould

Ashley Yakeley wrote:

  foo :: (Monad m) => [m a] -> m [a]
  instance Integral a => Eq (Ratio a)
  class Monad m => MonadPlus m


I think the most consistent (not most convenient!) syntax would be

   foo :: forall m a. (Monad m) => [m a] -> m [a]
   instance forall a. (Integral a) => Eq (Ratio a) where {...}
   class MonadPlus m. (Monad m) <= {...}

There's implicit forall quantification in instance declarations. It's 
currently never necessary to make it explicit because there are never type 
variables in scope at an instance declaration, but there's no theoretical 
reason that there couldn't be. There's no implicit quantification in class 
declarations---if you added a quantifier, it would always introduce exactly 
the type variables that follow the class name. I think it's better to treat 
the class itself as the quantifier. (And it's more like existential 
quantification than universal, hence the <= instead of =>.)


As far as syntax goes, I like

   foo :: forall m a | Monad m. [m a] -> m [a]
   class MonadPlus m | Monad m where {...}

but I'm not sure what to do about the instance case, since I agree with the 
OP that the interesting part ought to come first instead of last.


-- Ben



Re: ExistentialQuantifier

2006-02-16 Thread Ben Rudiak-Gould

Ross Paterson wrote:

I don't think the original name is inappropriate: the feature described
is certainly existential quantification, albeit restricted to
alternatives of data definitions.


I think that "existential quantification" should mean, well, existential 
quantification, in the sense that term is used in logic. I don't like the 
idea of using it for the feature currently implemented with "forall" in 
front of the data constructor, given that these type variables are 
universally quantified in the data constructor's type. How about changing 
the name to "existential product types" or "existential tuples"? I would 
even suggest "boxed types", but that's already taken.


-- Ben



Re: exported pattern matching

2006-02-09 Thread Ben Rudiak-Gould

Philippa Cowderoy wrote:
Myself I'm of the view transformational patterns (as described in 
http://citeseer.ist.psu.edu/299277.html) are more interesting - I can't 
help wondering why they were never implemented?


Maybe because of tricky semantics. I'm not quite sure what

case x of
  (y,z)!f -> ...
where f _ = (z,3)

should desugar to.

-- Ben



Re: Restricted Data Types

2006-02-07 Thread Ben Rudiak-Gould

John Hughes wrote:

That means that the Monad class is not allowed to declare

   return :: a -> m a

because there's no guarantee that the type m a would be well-formed. The 
type declared for return has to become


   return :: wft (m a) => a -> m a


I'm confused. It seems like the type (a -> m a) can't be permitted in any 
context, because it would make the type system unsound. If so, there's no 
reason to require the constraint (wft (m a)) to be explicit in the type 
signature, since you can infer its existence from the body of the type (or 
the fields of a datatype declaration).


Okay, simplify, simplify. How about the following:

For every datatype in the program, imagine that there's a class declaration 
with the same name. In the case of


data Maybe a = ...

it's

class Maybe a where {}

In the case of

data Ord a => Map a b = ...

it's

class Ord a => Map a b where {}

It's illegal to refer to these classes in the source code; they're only for 
internal bookkeeping.


Now, for every type signature in the program (counting the type signatures 
of data constructors, though they have a different syntax), for every type 
application within the type of the form ((tcon|tvar) type+), add a 
corresponding constraint to the type context. E.g.


singleton :: a -> Set a

becomes (internally)

singleton :: (Set a) => a -> Set a

and

fmapM :: (Functor f, Monad m) => (a -> m b) -> f a -> m (f b)

becomes

fmapM :: (Functor f, Monad m, m b, f a, m (f b), f b) =>
 (a -> m b) -> f a -> m (f b)

You also have to do this for the contexts of type constructors, I guess, 
e.g. data Foo a = Foo (a Int) becomes data (a Int) => Foo a = Foo (a Int).


Now you do type inference as normal, dealing with constraints of the form 
(tvar type+) pretty much like any other constraint.


Does that correctly handle every case?

-- Ben



Scoped type variables

2006-02-07 Thread Ben Rudiak-Gould
Simon PJ thinks that Haskell' should include scoped type variables, and I 
tend to agree. But I'm unhappy with one aspect of the way they're 
implemented in GHC. What I don't like is that given a signature like


x :: a -> a

there's no way to tell, looking at it in isolation, whether a is free or 
bound in the type. Even looking at the context it can be hard to tell, 
because GHC's scoping rules for type variables are fairly complicated and 
subject to change. This situation never arises in the expression language or 
in Haskell 98's type language, and I don't think it should arise in Haskell 
at all.


What I'd like to propose is that if Haskell' adopts scoped type variables, 
they work this way instead:


   1. Implicit forall-quantification happens if and only if the type does
  not begin with an explicit forall. GHC almost follows this rule,
  except that forall. (with no variables listed) doesn't suppress
  implicit quantification---but Simon informs me that this is a bug
  that will be fixed.

   2. Implicit quantification quantifies over all free variables in the
  type, thus closing it. GHC's current behavior is to quantify over
  a type variable iff there isn't a type variable with that name in
  scope.

Some care is needed in the case of class method signatures: (return :: a -> 
m a) is the same as (return :: forall a. a -> m a) but not the same as 
(return :: forall a m. a -> m a). On the other hand the practical type of 
return as a top-level function is (Monad m => a -> m a), which is the same 
as (forall m a. Monad m => a -> m a), so this isn't quite an exception 
depending on how you look at it. I suppose it is a counterexample to my 
claim that Haskell 98's type language doesn't confuse free and bound 
variables, though.


If rule 2 were accepted into Haskell' and/or GHC, then code which uses 
implicit quantification and GHC scoped type variables in the same type would 
have to be changed to explicitly quantify those types; other programs 
(including all valid Haskell 98 programs) would continue to work unchanged. 
Note that the signature x :: a, where a refers to a scoped type variable, 
would have to be changed to x :: forall. a, which is potentially 
confusable with x :: forall a. a; maybe the syntax forall _. a should be 
introduced to make this clearer. The cleanest solution would be to abandon 
implicit quantification, but I don't suppose anyone would go for that.


With these two rules in effect, there's a pretty good case for adopting rule 3:

   3. Every type signature brings all its bound type variables into scope.

Currently GHC has fairly complicated rules regarding what gets placed into 
scope: e.g.


f :: [a] -> [a]
f = ...

brings nothing into scope,

f :: forall a. [a] -> [a]
f = ...

brings a into scope,

f :: forall a. [a] -> [a]
(f,g) = ...

brings nothing into scope (for reasons explained in [1]), and

f :: forall a. (forall b. b -> b) -> a
f = ...

brings a but not b into scope. Of course, b doesn't correspond to any type 
that's accessible in its lexical scope, but that doesn't matter; it's 
probably better that attempting to use b fail with a "not available here" 
error message than that it fail with a "no such type variable" message, or 
succeed and pull in another b from an enclosing scope.


There are some interesting corner cases. For example, rank-3 types:

f :: ((forall a. a -> X) -> Y) -> Z
f g = g (\x -> exp)

should a denote x's type within exp? It's a bit strange if it does, since 
the internal System F type variable that a refers to isn't bound in the same 
place as a itself. It's also a bit strange if it doesn't, since the 
identification is unambiguous.


What about shadowing within a type:

f :: forall a. (forall a. a -> a) -> a

I can't see any reason to allow such expressions in the first place.

What about type synonyms with bound variables? Probably they should be 
alpha-renamed into something inaccessible. It seems too confusing to bring 
them into scope, especially since they may belong to another module and this 
would introduce a new notion of exporting type variables.


I like rule 3 because of its simplicity, but a rule that, say, brought only 
rank-1 explicitly quantified type variables into scope would probably be 
good enough. Rules 1 and 2 seem more important. I feel like a new language 
standard should specify something cleaner and more orthogonal than GHC's 
current system.


-- Ben

[1] http://www.mail-archive.com/glasgow-haskell-users@haskell.org/msg09117.html



Re: Java-like

2006-02-07 Thread Ben Rudiak-Gould

Bulat Ziganshin wrote:

{-# OPTIONS_GHC -fglasgow-exts #-}
main = do return "xx" >>= ((\x -> print x) :: Show a => a -> IO ())
main2 = do return "xx" >>= (\(x :: (forall a . (Show a) => a)) -> print x)
main3 = do (x :: forall a . Show a => a) <- return "xx"
           print x

in this module, only main compiles ok


The other two need "exists" rather than "forall", which isn't supported by 
GHC. As written, they say that x can produce a value of any type that's an 
instance of Show, but the value you're binding to x has type String, which 
can only produce a value via Show String.


-- Ben



Re: Restricted Data Types

2006-02-07 Thread Ben Rudiak-Gould

Simon Peyton-Jones wrote:

Another reasonable alternative is

data Set a = Eq a => Set (List a)

The type of member would become
member :: a -> Set a -> Bool
(with no Eq constraint).


John Hughes mentions this in section 5.2 of the paper, and points out a 
problem: a function like (singleton :: a -> Set a) would have to construct a 
dictionary out of nowhere. I think you need an Eq constraint on singleton, 
which means that you still can't make Set an instance of Monad.


-- Ben



Re: Bang patterns

2006-02-06 Thread Ben Rudiak-Gould
Pursuant to a recent conversation with Simon, my previous post to this 
thread is now obsolete. So please ignore it, and see the updated wiki page 
instead.


-- Ben



Re: Restricted Data Types

2006-02-06 Thread Ben Rudiak-Gould

Jim Apple wrote:

Have we considered Restricted Data Types?

http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps


I'd never seen this paper before. This would be a really nice extension to 
have. The dictionary wrangling looks nasty, but I think it would be easy to 
implement it in jhc. John, how about coding us up a prototype? :-)


-- Ben



Re: FilePath as ADT

2006-02-05 Thread Ben Rudiak-Gould

Marcin 'Qrczak' Kowalczyk wrote:

Encouraged by Mono, for my language Kogut I adopted a hack that
Unicode people hate: the possibility to use a modified UTF-8 variant
where byte sequences which are illegal in UTF-8 are decoded into 
U+0000 followed by another character.


I don't like the idea of using U+0000, because it looks like an ASCII 
control character, and in any case has a long tradition of being used for 
something else. Why not use a code point that can't result from decoding a 
valid UTF-8 string? U+FFFF (EF BF BF) is illegal in UTF-8, for example, and 
I don't think it's legal UTF-16 either. This would give you round-tripping 
for all legal UTF-8 and UTF-16 strings.


Or you could use values from U+DC00 to U+DFFF, which definitely aren't legal 
UTF-8 or UTF-16. There's plenty of room there to encode each invalid UTF-8 
byte in a single word, instead of a sequence of two words.
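That scheme was later standardized elsewhere (in Python, as PEP 383's "surrogateescape" error handler). A sketch of just the escape mapping, with made-up names; a full decoder would apply escapeByte only to bytes that fail UTF-8 decoding:

```haskell
import Data.Char (chr, ord)
import Data.Word (Word8)

-- Each undecodable byte b maps to the lone surrogate U+DC00+b. Lone
-- surrogates never occur in valid UTF-8 or UTF-16, so the original
-- byte string round-trips.
escapeByte :: Word8 -> Char
escapeByte b = chr (0xDC00 + fromIntegral b)

unescapeByte :: Char -> Maybe Word8
unescapeByte c
  | n >= 0xDC00 && n <= 0xDCFF = Just (fromIntegral (n - 0xDC00))
  | otherwise                  = Nothing
  where n = ord c
```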


A much cleaner solution would be to reserve part of the private use area, 
say U+109780 through U+1097FF (DBE5 DF80 through DBE5 DFFF). There's a 
pretty good chance you won't collide with anyone. It's too bad Unicode 
hasn't set aside 128 code points for this purpose. Maybe we should grab some 
unassigned code points, document them, and hope it catches on.


There's a lot to be said for any encoding, however nasty, that at least 
takes ASCII to ASCII. Often people just want to inspect the ASCII portions 
of a string while leaving the rest untouched (e.g. when parsing 
--output-file=¡£ª±ïñ¹!.txt), and any encoding that permits this is good 
enough.



Alternatives were:

* Use byte strings and character strings in different places,
  sometimes using a different type depending on the OS (Windows
  filenames would be character strings).

* Fail when encountering byte strings which can't be decoded.


Another alternative is to simulate the existence of a UTF-8 locale on Win32. 
Represent filenames as byte strings on both platforms; on NT convert between 
UTF-8 and UTF-16 when interfacing with the outside; on 9x either use the 
ANSI/OEM encoding internally or convert between UTF-8 and the ANSI/OEM 
encoding. I suppose NT probably doesn't check that the filenames you pass to 
the kernel are valid UTF-16, so there's some possibility that files with 
illegal names might be accessible to other applications but not to Haskell 
applications. But I imagine such files are much rarer than Unix filenames 
that aren't legal in the current locale. And you could still use the 
private-encoding trick if not.


-- Ben



Re: strict Haskell dialect

2006-02-04 Thread Ben Rudiak-Gould

Chris Kuklewicz wrote:

Weak uses seq to achieve WHNF for its argument 


newtype Weak a = WeakCon {runWeak :: a}
mkWeak x = seq x (WeakCon x)
unsafeMkWeak x = WeakCon x


This doesn't actually do what you think it does. mkWeak and unsafeMkWeak are 
the same function.


mkWeak 123 = seq 123 (WeakCon 123) = WeakCon 123
unsafeMkWeak 123 = WeakCon 123
mkWeak _|_ = seq _|_ (WeakCon _|_) = _|_
unsafeMkWeak _|_ = WeakCon _|_ = _|_

To quote John Meacham:

| A quick note,
| x `seq` x
| is always exactly equivalent to x. the reason being that your seq
| would never be called to force x unless x was needed anyway.
|
| I only mention it because for some reason this realization did not hit
| me for a long time and once it did a zen-like understanding of seq
| (relative to the random placement and guessing method I had used
| previously) suddenly was bestowed upon me.

I remember this anecdote because when I first read it, a zen-like 
understanding of seq suddenly was bestowed upon /me/. Maybe it should be in 
the docs. :-)
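The claim is easy to check with exceptions (a sketch; `throws` is a made-up helper): forcing either wrapper of _|_ throws, because WeakCon is a newtype, so WeakCon x is just x at runtime.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- The thread's definitions, verbatim.
newtype Weak a = WeakCon { runWeak :: a }

mkWeak, unsafeMkWeak :: a -> Weak a
mkWeak x = seq x (WeakCon x)
unsafeMkWeak x = WeakCon x

-- True iff forcing x to WHNF throws (i.e. x is bottom).
throws :: a -> IO Bool
throws x = do
  r <- try (evaluate x)
  case r of
    Left e  -> do let _ = (e :: SomeException)
                  return True
    Right _ -> return False
```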


-- Ben



Re: The dreaded M-R

2006-01-31 Thread Ben Rudiak-Gould

John Meacham wrote:

interestingly enough, the monomorphism restriction in jhc actually
should apply to all polymorphic values, independently of the type class
system.

x :: a 
x = x 


will transform into something that takes a type parameter and is hence
not shared.


Interesting. I'd been wondering how you dealt with this case, and now it 
turns out that you don't. :-)



I doubt this will cause a problem in practice since there
aren't really any useful values of type forall a . a other than bottom.


It could become an issue with something like

  churchNumerals :: [(a -> a) -> (a -> a)]
  churchNumerals = ...

Maybe you could use a worker-wrapper transformation.

  churchNumerals' :: [(T -> T) -> (T -> T)]
  churchNumerals' = ...

  churchNumerals :: [(a -> a) -> (a -> a)]
  churchNumerals = /\ a . unsafeCoerce churchNumerals'

The unsafeCoerce is scary, but it feels right to me. There is something 
genuinely unsavory about this kind of sharing, in Haskell or any other ML 
dialect. At least here it's out in the open.


-- Ben
