Re: bug in language definition (strictness)

2009-08-06 Thread Peter Gammie

On 06/08/2009, at 10:59 PM, Simon Marlow wrote:

On 06/08/2009 13:49, Thomas Davie wrote:

On 6 Aug 2009, at 14:37, Nils Anders Danielsson wrote:


On 2009-08-06 11:08, Malcolm Wallace wrote:
yet, because of the definition of $!, this applies the constructor to
its arguments right-to-left instead of the intuitive left-to-right.


I do not think that there is a bug: x `seq` y `seq` e has the same
denotation as y `seq` x `seq` e.


Not if one considers the kind of bottom one receives:

undefined `seq` error "it exploded" `seq` e will print Prelude.undefined

while

error "it exploded" `seq` undefined `seq` e will print Error: it exploded


There's only one kind of bottom in Haskell 98.  And even with the  
imprecise exceptions extension, both expressions still have the same  
denotation - they denote the same set of exceptions, one of which is  
non-deterministically picked when the program is run.
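A minimal sketch of that semantics, assuming GHC and the imprecise-exceptions extension: the program below reliably catches *some* exception from the seq chain, but which one it observes is not specified, and may vary with compiler and optimisation level.

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- Force the seq chain under `try`.  The expression denotes the set
  -- of exceptions {undefined, "it exploded"}; which one escapes is
  -- picked non-deterministically at run time.
  r <- try (evaluate (undefined `seq` error "it exploded" `seq` ()))
         :: IO (Either SomeException ())
  case r of
    Left e  -> putStrLn ("caught: " ++ show e)
    Right _ -> putStrLn "no exception"
```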


If the FFI Addendum is considered part of Haskell 98, then we have  
unsafePerformIO, and so an appeal to denotational equivalence is not  
sufficient. When grafting a pure interface onto a notionally-pure  
library (specifically a BDD library), I often used seq to get these  
effects buried in pure values under control.
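A hypothetical sketch of the trap being described (an IORef log stands in for the BDD library's hidden state; all names here are invented for illustration): when effects hide behind unsafePerformIO, seq guarantees *that* both values are forced before the result is returned, but not in which order, so the observed effect order can differ between compilers.

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical hidden state standing in for a library's internals.
{-# NOINLINE traceLog #-}
traceLog :: IORef [String]
traceLog = unsafePerformIO (newIORef [])

-- A "pure" value that records an effect when it is forced.
{-# NOINLINE effectful #-}
effectful :: String -> Int -> Int
effectful tag n = unsafePerformIO $ do
  modifyIORef' traceLog (++ [tag])
  pure n

main :: IO ()
main = do
  let x = effectful "x" 1
      y = effectful "y" 2
  -- Both x and y are forced before the sum is returned, but the
  -- order in which they are forced is unspecified.
  print (x `seq` y `seq` (x + y))
  readIORef traceLog >>= print  -- ["x","y"] or ["y","x"]
```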


I also think the principle of least surprise is clearly violated here.

cheers
peter

--
http://peteg.org/

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: bug in language definition (strictness)

2009-08-06 Thread Simon Marlow

On 06/08/2009 14:20, Peter Gammie wrote:

On 06/08/2009, at 10:59 PM, Simon Marlow wrote:

On 06/08/2009 13:49, Thomas Davie wrote:

On 6 Aug 2009, at 14:37, Nils Anders Danielsson wrote:


On 2009-08-06 11:08, Malcolm Wallace wrote:

yet, because of the definition of $!, this applies the constructor to
its arguments right-to-left instead of the intuitive left-to-right.


I do not think that there is a bug: x `seq` y `seq` e has the same
denotation as y `seq` x `seq` e.


Not if one considers the kind of bottom one receives:

undefined `seq` error "it exploded" `seq` e will print Prelude.undefined
while
error "it exploded" `seq` undefined `seq` e will print Error: it exploded


There's only one kind of bottom in Haskell 98. And even with the
imprecise exceptions extension, both expressions still have the same
denotation - they denote the same set of exceptions, one of which is
non-deterministically picked when the program is run.


If the FFI Addendum is considered part of Haskell 98, then we have
unsafePerformIO, and so an appeal to denotational equivalence is not
sufficient. When grafting a pure interface onto a notionally-pure
library (specifically a BDD library), I often used seq to get these
effects buried in pure values under control.


That sounds like a very dangerous use of seq and unsafePerformIO to me!

The presence of unsafePerformIO doesn't change the meaning of the rest 
of Haskell.  You can use it to write programs that don't behave 
according to the denotational semantics if you want, but if you do that 
it's considered an unsafe use of unsafePerformIO.


What semantics would you like Haskell to have, in which (x `seq` y `seq` 
e) and (y `seq` x `seq` e) are not equal?



I also think the principle of least surprise is clearly violated here.


I do have some sympathy with that.  The fact that we're having this 
discussion is evidence that something is wrong - and indeed it took 
quite a while before we noticed that seq doesn't actually enforce a 
sequential evaluation order.  And seq was originally introduced to fix 
space leaks, but sometimes it can't be used for this because it doesn't 
provide enough operational guarantees (I haven't seen such cases myself, 
but Malcolm W tells me it really happens).
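For reference, the classic space-leak use of seq mentioned above, in a minimal sketch: forcing the accumulator of a left fold at each step keeps it evaluated, instead of building a long chain of suspended additions.

```haskell
-- Sum a list in constant space by forcing the accumulator with seq.
-- Without the seq, `go` builds a chain of (+) thunks proportional to
-- the list length -- the kind of space leak seq was introduced to fix.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in acc' `seq` go acc' xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```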


I'm against the change originally proposed in this thread, because on 
its own it doesn't make any difference.  But by all means let's think 
about ways to restore the lack of surprise, but which hopefully don't 
curtail the compiler's ability to optimise, or run programs in parallel.


Cheers,
Simon


Re: bug in language definition (strictness)

2009-08-06 Thread Malcolm Wallace
What semantics would you like Haskell to have, in which (x `seq` y  
`seq` e) and (y `seq` x `seq` e) are not equal?


I can easily imagine that (x `seq` y `seq` e) might have *two*  
semantic denotations:  bottom (Exception: stack overflow), and e.  And  
I would like to be able to choose which one I get (please).  This is  
the declared purpose of seq, namely to improve performance by  
avoiding unneeded laziness.


Regards,
Malcolm



Re: bug in language definition (strictness)

2009-08-06 Thread Ross Paterson
On Thu, Aug 06, 2009 at 03:33:40PM +0100, Malcolm Wallace wrote:
 What semantics would you like Haskell to have, in which (x `seq` y
 `seq` e) and (y `seq` x `seq` e) are not equal?
 
 I can easily imagine that (x `seq` y `seq` e) might have *two*
 semantic denotations:  bottom (Exception: stack overflow), and e.
 And I would like to be able to choose which one I get (please).
 This is the declared purpose of seq, namely to improve performance
 by avoiding unneeded laziness.

There is no stack overflow in the denotational semantics.  Either you
would get an answer with a bigger stack (and that's the denotation),
or you wouldn't (in which case the value is bottom).  As Simon said,
the denotational semantics of seq is very limited for specifying
performance.


Re: bug in language definition (strictness)

2009-08-06 Thread Thomas Davie


On 6 Aug 2009, at 17:34, Mathias Stearn wrote:

On Thu, Aug 6, 2009 at 11:00 AM, Ross Paterson <r...@soi.city.ac.uk> wrote:

There is no stack overflow in the denotational semantics.


There is currently no denotational semantics simulator with an
infinite stack. Until there is, Haskell programs will have to be
executed on lowly physical computers, with all the limitations that
implies. Can we please make things work well on them?


When writing a document that defines the denotation of values, no.   
Perhaps when considering operational semantics, but that's an entirely  
different matter.


Bob


Re: bug in language definition (strictness)

2009-08-06 Thread Simon Marlow

On 06/08/2009 15:33, Malcolm Wallace wrote:

What semantics would you like Haskell to have, in which (x `seq` y
`seq` e) and (y `seq` x `seq` e) are not equal?


I can easily imagine that (x `seq` y `seq` e) might have *two* semantic
denotations: bottom (Exception: stack overflow), and e. And I would like
to be able to choose which one I get (please). This is the declared
purpose of seq, namely to improve performance by avoiding unneeded
laziness.


I'm afraid I don't really comprehend what you're getting at.  What do 
you mean by an expression having two semantic denotations, and how would 
you like to choose which one you get?  And I'm baffled by the mention of 
stack overflow, where does that come in?


Cheers,
Simon



Re: NoMonomorphismRestriction

2009-08-06 Thread Isaac Dupree

The paper makes the (somewhat radical) case for not generalising local bindings 
at all; which would at a stroke remove most of the issues of the MR.  (We'd 
still need to think about the top level.)

We'd love to know what any of you think of the idea.


I read the paper (except section 5 which is very technical).

I like that it makes
(let x = ... in ...)
behave the same as
(\x -> ...) (...)
. Understanding how to respond to type inference and error messages is 
hard enough without having additional differences in innocent-looking 
code.  Do you think my hope is reasonable that not-generalizing could 
lead to better error messages?  I don't quite understand the issues[*], 
but I suspect that not-generalizing would at least make *me* less 
confused when fixing error messages because there are fewer different 
typechecker behaviors to think about.  I guess it's still possible to 
use explicit type-signatures to make let-bindings polymorphic, in a way 
that is difficult or impossible for lambda or case? (I guess for lambda, 
it would require making the lambda into a rank-2 function, though I'm 
not sure how to do that syntactically.)
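A sketch of both halves of that guess (names invented for illustration; the lambda case needs GHC's RankNTypes): an explicit signature can restore polymorphism for a let-binding, while a lambda-bound variable can only be used polymorphically if the function receiving it is given a rank-2 type.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A let-binding made polymorphic via an explicit signature:
lengths :: ([Int], [Bool]) -> (Int, Int)
lengths (xs, ys) =
  let len :: [a] -> Int
      len = length
  in (len xs, len ys)

-- The lambda-bound analogue: the consuming function must demand a
-- rank-2 (polymorphic) argument -- this is where the extra syntax goes.
apply2 :: (forall a. [a] -> Int) -> ([Int], [Bool]) -> (Int, Int)
apply2 f (xs, ys) = (f xs, f ys)

main :: IO ()
main = do
  print (lengths ([1, 2, 3], [True]))        -- (3,1)
  print (apply2 length ([1, 2, 3], [True]))  -- (3,1)
```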


[*] e.g., the gmapT / rank-2 example confuses me; would it work if 
(...blah...) were passed directly to gmapT without the let?


Also, does it happen to solve the 2^n worst-case typechecking?

-Isaac