Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Jon Fairbairn
On 2006-02-04 at 21:15GMT Brian Hulley wrote:
 Stefan Holdermans wrote:
 
  Brian wrote:
 
  I think the mystery surrounding :: and : might have been that
  originally people thought type annotations would hardly ever be
  needed whereas list cons is often needed, but now that it is
  regarded as good practice to put a type annotation before every top
  level value binding, and as the type system becomes more and more
  complex (eg with GADTs etc), type annotations are now presumably far
  more common than list cons so it would be good if Haskell Prime
  would swap these operators back to their de facto universal
  inter-language standard of list cons and type annotation
  respectively.
 
  I don't think Haskell Prime should be about changing the look and
  feel of the language.
 
 Perhaps it is just a matter of aesthetics about :: and :, but I really feel 
 these symbols have a de-facto meaning that should have been respected and 
 that Haskell Prime would be a chance to correct this error. However no doubt 
 I'm alone in this view so fair enough

Not exactly alone; I've felt it was wrong ever since we
argued about it for the first version of Haskell. : for
typing is closer to common mathematical notation.

But it's far too late to change it now.

 - it's just syntax after all

It is indeed.

  Jón

-- 
Jón Fairbairn  Jon.Fairbairn at cl.cam.ac.uk


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sat, Feb 04, 2006 at 07:02:52PM -0500, [EMAIL PROTECTED] wrote:
 G'day all.

Hello!

 Quoting Tomasz Zielonka [EMAIL PROTECTED]:
 
  Probably it was anticipated that right associative version will
  be more useful. You can use it to create a chain of transformations,
  similar to a chain of composed functions:
 
  (f . g . h) x   =   f $ g $ h $ x
 
 Of course, if $ were left-associative, it would be no less useful here,
 because you could express this chain thusly:
 
 f . g . h $ x

OK, I can be persuaded to use this style. I like function composition
much more than $ :-)

 This is the way that I normally express it.  Partly because I find
 function application FAR more natural than right-associative application,
 and partly because I'm hedging my bets for Haskell 2 just in case the
 standards committee wakes up and notices that the associativity of $ is
 just plain wrong and decides to fix it. :-)

Is there any chance that Haskell' will change the definition of $ ?

Well, if there is any moment where we can afford introducing backward
incompatible changes to Haskell', I think it's now or never!

 In fact, I'll go out on a limb and claim that ALL such uses of $ are
 better expressed with composition.  Anyone care to come up with a
 counter-example?

The only problem I see right now is related to change locality. If I
have a chain like this:

f x y .
g x $
z

and I want to add some transformation between g and z I have to
change one line and insert another

f x y .
g x .
h x y $
z

With right-associative $ it would be only one line-add. Probably not a
very strong argument.

  But of course, left associative version can also be useful. Some
  time ago I used a left associative version of the strict application
  operator, which I named (!$).
 
 In fact, I think it's much MORE useful, and for precisely the reason
 that you state: it makes strict application much more natural.

Agreed.
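
For reference, a minimal sketch of what such a left-associative strict application operator could look like; the name (!$) is taken from the message above, while the exact fixity and definition here are assumptions:

    -- A left-associative, strict counterpart of ($): the argument is forced
    -- with `seq` before the function is applied, so a chain like
    --   f !$ x !$ y
    -- parses as (f !$ x) !$ y, i.e. curried application with each argument
    -- evaluated first.
    infixl 0 !$

    (!$) :: (a -> b) -> a -> b
    f !$ x = x `seq` f x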

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 02:27:45AM +, Ben Rudiak-Gould wrote:
 No one has mentioned yet that it's easy to change the associativity of $ 
 within a module in Haskell 98:
 
 import Prelude hiding (($))
 
 infixl 0 $
 f$x = f x
 
 or, for the purists,
 
 import Prelude hiding (($))
 import qualified Prelude (($))
 
 infixl 0 $
 ($) = (Prelude.$)

But that would break Copy & Paste between modules! ;-)
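
For concreteness, a small self-contained module exercising the left-associative version above (the example values and main are ours, not from the thread):

    import Prelude hiding (($))

    infixl 0 $
    f $ x = f x

    -- With infixl, a chain of ($) associates to the left, so this parses as
    -- (max $ 1) $ 2, i.e. ordinary curried application max 1 2.
    example :: Int
    example = max $ 1 $ 2

    main :: IO ()
    main = print example   -- prints 2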

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Jon Fairbairn wrote:

Brian Hulley wrote:

snip


Not exactly alone; I've felt it was wrong ever since we
argued about it for the first version of Haskell. : for
typing is closer to common mathematical notation.

But it's far too late to change it now.


- it's just syntax after all


Well I'm reconsidering my position that it's just syntax. Syntax does 
after all carry a lot of semiotics for us humans, and if there are centuries 
of use of : in mathematics that are just to be discarded because someone 
in some other language decided to use it for list cons then I think it makes 
sense to correct this.


It would be impossible to get everything right first time, and I think the 
Haskell committee did a very good job with Haskell, but just as there can be 
bugs in a program, so there can also be bugs in a language design, and an 
interesting question is how these can be addressed.


For example, in the Prolog news group several years ago, there was also a 
discussion about changing the list cons operator, because Prolog currently 
uses '.' which is much more useful for forming composite names - something 
which I also think has become a de-facto inter-language standard. Although 
there was much resistance from certain quarters, several implementations of 
Prolog had in fact changed their list cons operator (list cons is hardly 
ever needed in Prolog due to the [Head|Tail] sugar) to reclaim the dot for 
its proper use.


My final suggestion if anyone is interested is as follows:

1) Use ':' for types
2) Use ',' instead of ';' in the block syntax so that all brace blocks can 
be replaced by layout if desired (including record blocks)
3) Use ';' for list cons. ';' is already used for forming lists in natural 
language, and has the added advantage that (on my keyboard at least) you 
don't even need to press the shift key! ;-)


Regards, Brian.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

The only problem I see right now is related to change locality. If I
have a chain like this:

   f x y .
   g x $
   z

and I want to add some transformation between g and z I have to
change one line and insert another

   f x y .
   g x .
   h x y $
   z

With right-associative $ it would be only one line-add. Probably not a
very strong argument.


How about:

 f x y
 . g x
 $ z

then you only need to add the line

 . h x y

This is similar to how people often format lists:

a =
 [ first
 , second
 , third
 ]

Regards, Brian.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread John Hughes

 Quoting Paul Hudak [EMAIL PROTECTED]:

 Actually, one of the main reasons that we chose (:) is that that's what
 Miranda used.  So, at the time at least, it was not entirely clear what
 the de facto universal inter-language standard was.


Phil Wadler argued for the ML convention at the time, and wrote a document
containing a fair amount of sample code to illustrate what it would look
like. We noticed something surprising: instead of (x:xs) and the like, Phil
had consistently written (x :: xs) -- note the extra spaces. Somehow, using
the longer operator name led him to feel spaces were needed around it. That
in turn made his lines longer, encouraged him to split definitions across
lines, and so on. When I read the thing, I realised after a while that I 
was skipping all the code fragments -- because they just looked too big and
complicated to take in during a quick reading. It was at least partly that
experience that convinced us that using :: for cons would impose a small
cost, but a real one, on readability. It may seem trivial, but the sum of
many such decisions is significant. The story does illustrate the 
importance of actually trying out syntactic ideas and seeing how they play--one can be
surprised by the result.

 I don't think Haskell Prime should be about changing the look and
 feel of the language.

It's about consolidating the most important extensions into the standard,
isn't it? Changes that break existing code should be very, very well
motivated--if porting code to Haskell Prime is too difficult, people just
won't do it.

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Lennart Augustsson

John Hughes wrote:

  Quoting Paul Hudak [EMAIL PROTECTED]:
 
  Actually, one of the main reasons that we chose (:) is that that's what
  Miranda used.  So, at the time at least, it was not entirely clear what
  the de facto universal inter-language standard was.
 

Phil Wadler argued for the ML convention at the time, and wrote a document
containing a fair amount of sample code to illustrate what it would look
like. We noticed something surprising: instead of (x:xs) and the like, Phil
had consistently written (x :: xs) -- note the extra spaces. Somehow, using
the longer operator name led him to feel spaces were needed around it. That
in turn made his lines longer, encouraged him to split definitions across
lines, and so on. When I read the thing, I realised after a while that I 
was skipping all the code fragments -- because they just looked too big and
complicated to take in during a quick reading. It was at least partly that
experience that convinced us that using :: for cons would impose a small
cost, but a real one, on readability. It may seem trivial, but the sum of
many such decisions is significant. The story does illustrate the 
importance of actually trying out syntactic ideas and seeing how they play--one can be
surprised by the result.


And at the time I agreed with you.  But now I'm older and wiser(?).
I now think :: for type signatures was a bad mistake.
I don't use lists very much.  They are not the right data structure
for many things.  So : is not as common as :: in my code.
I checked a small sample of code, about 2 lines of Haskell.
It has about 1000 uses of ':' and 2000 of '::'.

In my opinion all the special syntactic sugar for lists should go
away.  I don't think lists are special enough to motivate it.

But this is not what Haskell' is about.  It's supposed to be some
modest extensions to Haskell.  Not designing a new perfect language.

-- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread John Hughes

Lennart Augustsson wrote:

 I now think :: for type signatures was a bad mistake.
 I don't use lists very much.  They are not the right data structure
 for many things.  So : is not as common as :: in my code.
 I checked a small sample of code, about 2 lines of Haskell.
 It has about 1000 uses of ':' and 2000 of '::'.

Just for interest, I analysed some of my code. Obviously my style is
quite different to yours--my type specialiser of 3,500 lines has 240
conses, and only 22 occurrences of '::'. I seem to be using '::'  a bit more
lately, though, which I suspect is due to using classes much more.
I also checked the Agda source code, about 14,000 lines, with
about 500 occurrences of cons and 640 of '::'. I think the only conclusion
one can draw is that style varies.

 In my opinion all the special syntactic sugar for lists should go
 away.  I don't think lists are special enough to motivate it.

What, no list comprehensions??

I'd disagree--sequencing is special, and lists represent it directly.
Don't forget, also, that lists are also much more prevalent in beginners'
code--and nice notation for beginners helps get people started on
Haskell.

 But this is not what Haskell' is about.  It's supposed to be some
 modest extensions to Haskell.  Not designing a new perfect language.

Right!

John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Lennart Augustsson

John Hughes wrote:

What, no list comprehensions??


No.  I think the do notation is good enough.




I'd disagree--sequencing is special, and lists represent it directly.
Don't forget, also, that lists are also much more prevalent in beginners'
code--and nice notation for beginners helps get people started on
Haskell.


I don't really see what's so much better about writing
[x1,x2,x3,x4,x5] than x1:x2:x3:x4:x5:[].
When I've explained lists to beginners I've just found it
annoying and hard to explain why there are two ways of
writing lists.  And why only lists have this special syntax.

-- Lennart

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 10:45:50AM -0500, Lennart Augustsson wrote:
 I don't really see what's so much better about writing
 [x1,x2,x3,x4,x5] than x1:x2:x3:x4:x5:[].
 When I've explained lists to beginners I've just found it
 annoying and hard to explain why there are two ways of
 writing lists.  And why only lists have this special syntax.
 
   -- Lennart

But if you remove the [...] syntax, there will be more :'s in
people's code. You are working against yourself here ;-)

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 01:14:42PM -, Brian Hulley wrote:

How about:

 f x y
 . g x
 $ z

then you only need to add the line

 . h x y


But then you have a problem when you when you want to add something
at the beginning ;-) With right-assoc $ adding at both ends is OK.


This is similar to how people often format lists:

a =
 [ first
 , second
 , third
 ]


I am one of those people, and I am slightly annoyed when I have to
add something at the beginning of the list. I even went so far that
when I had a list of lists, which were concatenated, I've put an
empty list at front:

   concat $
   [ []
   , [...]
   , [...]
   .
   .
   .
   ]


Just in case you are interested, in the preprocessor I'm writing, I would 
write these examples as:


   (.) #
  f x y
  g x
  h x y
   $ z

and
a = #[
   first
   second
   third

where exp # {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) 
...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...](exp # 
block and exp # block are the right and left associative versions 
respectively and the special # sugar allows a layout block to be started if 
it occurs at the end of a line)


This allows me to avoid having to type lots of syntax eg repeating the '.' 
all the time and focus on the semantics...


Regards, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Paul Hudak

Bulat Ziganshin wrote:

LA> In my opinion all the special syntactic sugar for lists should go
LA> away.  I don't think lists are special enough to motivate it.

i have a proposal (not for Haskell', of course) of using the : and []
syntax for a general notion of traversable collections:


Minor point, perhaps, but I should mention that : is not special syntax 
-- it is a perfectly valid infix constructor.  [] and all its variants, 
however, are special syntax.
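
To put Paul's point in code: any operator starting with a colon is an ordinary infix constructor that users can define for their own types, just as the Prelude defines (:) for lists. A small made-up example (the type and names are ours):

    -- A user-defined infix constructor, analogous to (:) for lists.
    infixr 5 :+:

    data Pair a b = a :+: b
      deriving Show

    swapPair :: Pair a b -> Pair b a
    swapPair (x :+: y) = y :+: x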


  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 04:36:44PM -, Brian Hulley wrote:
 Just in case you are interested, in the preprocessor I'm writing, I would 
 write these examples as:
 
(.) #
   f x y
   g x
   h x y
$ z
 
 and
 a = #[
first
second
third
 
 where exp # {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) 
 ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...](exp # 
 block and exp # block are the right and left associative versions 
 respectively and the special # sugar allows a layout block to be started if 
 it occurs at the end of a line)

Well... I care about change locality and the like, but I'm not sure
I would use such syntax (as a means of communication between
programmers). Perhaps that's because I am not used to it and it looks
alien. But it's rather because I still put readability first.

 This allows me to avoid having to type lots of syntax eg repeating the '.' 
 all the time and focus on the semantics...

At some point you (the programmer) are going to do the work of a
compression program ;-)

There is some limit to terseness. Haskell's syntax is quite concise, but
it could be even more. Why it isn't? Because it would cease to resemble
the mathematical notation, it would cease to be readable. Well, even
Haskell could be more readable, but there's also some point where
further investment in concise lexical syntax doesn't pay off. I am not
sure that's the situation here, but... think about it.

PS. One wonders why you don't take the lisp way with a good lisp editor?
Aren't you designing lisp without parentheses? ;-)

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Brian Hulley wrote:

Brian Hulley wrote:

Robin Green wrote:

snip
So simply make strictness the default and have laziness annotations
(for arguments), instead of making laziness the default and having
strictness annotations.


Where would you put these laziness annotations?
If you put them in the function declaration eg as in:

if' :: Bool -> ~a -> ~a -> a   [corrected]

presumably you'd want the compiler to pass the args as thunks instead
of evaluated values. However this means that all args to every
function would have to be passed as thunks, even though for strict
functions these thunks would immediately be evaluated. The problem is
that there is no way for the compiler to optimize out the thunk
creation / evaluation step because it occurs across the black box
of a function call, thus we wouldn't get the same efficiency as in a
language such as ML where no thunks are created in the first place.


I'm just s slow!!! ;-) Of course the laziness info would now be
part of the function's type so the compiler would be able to generate
the correct code to prepare thunks or evaluated values before calling
the function. So your idea of laziness annotations for args would
give the best of both worlds :-)


For an eager language, a state monad could perhaps be defined by

 data ST m a = ST ~(m -> (m,a))

and the other operations would work as normal without any additional 
annotations. (?)


I must admit I'm a bit confused as to why the strictness annotations in 
Haskell (and Clean) are only allowed in data declarations and not function 
declarations, since it seems a bit random to have to guess which args can be 
evaluated strictly at the call site although it of course gives flexibility 
(eg to use (+) strictly or lazily). The type system doesn't prevent someone 
from writing (>>) m0 $! m1 even though the author of (>>) may have been 
relying on m1 being lazily evaluated... (?)
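
A tiny example of the ($!) point in ordinary Haskell (the definitions are ours): the author of const relies on its second argument never being demanded, yet a caller can still force it from outside, and nothing in the types prevents this:

    lazyOk :: Int
    lazyOk = const 1 undefined                 -- fine: the second argument is never evaluated

    forcedFromOutside :: Int
    forcedFromOutside = const 1 $! undefined   -- throws when demanded: ($!) forces the argument first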


For an eager language, it would seem that lazy annotations would have to be 
allowed as part of a function's type so that if' could be implemented. Does 
anyone know of a type system that incorporates lazy annotations, and/or how 
these would be propagated?


What would the signature of a lazy map function be?

map :: (~a -> ~b) -> ~[a] -> ~[b]
map :: (a -> b) -> ~[~a~] -> ~[b~]

   etc etc - quite a puzzle!!!

Thanks, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Aaron Denney
On 2006-02-05, Brian Hulley [EMAIL PROTECTED] wrote:
 Jon Fairbairn wrote:
 Brian Hulley wrote:
 snip

 Not exactly alone; I've felt it was wrong ever since we
 argued about it for the first version of Haskell. : for
 typing is closer to common mathematical notation.

 But it's far too late to change it now.

 - it's just syntax after all

 Well I'm reconsidering my position that it's just syntax. Syntax does 
 after all carry a lot of semiotics for us humans, and if there are centuries 
 of use of : in mathematics that are just to be discarded because someone 
 in some other language decided to use it for list cons then I think it makes 
 sense to correct this.

 It would be impossible to get everything right first time, and I think the 
 Haskell committee did a very good job with Haskell, but just as there can be 
 bugs in a program, so there can also be bugs in a language design, and an 
 interesting question is how these can be addressed.

 For example, in the Prolog news group several years ago, there was also a 
 discussion about changing the list cons operator, because Prolog currently 
 uses '.' which is much more useful for forming composite names - something 
 which I also think has become a de-facto inter-language standard. Although 
 there was much resistance from certain quarters, several implementations of 
 Prolog had in fact changed their list cons operator (list cons is hardly 
 ever needed in Prolog due to the [Head|Tail] sugar) to reclaim the dot for 
 its proper use.

 My final suggestion if anyone is interested is as follows:

 1) Use ':' for types
 2) Use ',' instead of ';' in the block syntax so that all brace blocks can 
 be replaced by layout if desired (including record blocks)
 3) Use ';' for list cons. ';' is already used for forming lists in natural 
 language, and has the added advantage that (on my keyboard at least) you 
 don't even need to press the shift key! ;-)

 Regards, Brian.

If anything, using ',' for block syntax and ';' for lists is backwards.
',' is used for generic lists in English, whereas ';' is used for
separating statements or lists.

But I like the current syntax just fine.

-- 
Aaron Denney
--

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Bulat Ziganshin
Hello Brian,

Saturday, February 04, 2006, 4:50:44 AM, you wrote:

 One question is how to get some kind of do notation that would
 work well in a strict setting.
 The existing do notation makes use of laziness in so far as the
 second arg of >> is only evaluated when needed. Perhaps a new
 keyword such as go could be used to use >>= instead ie:

BH> If strictness was the default (eg if the language were ML not Haskell), then
BH> in

BH>   putStr "hello" >> putStr (show 1)

BH> both args to >> would be evaluated before >> was called. Thus putStr (show
BH> 1) would be evaluated before the combined monad is actually run, which would
BH> be wasteful if we were using a monad with a >> function that only runs the
BH> rhs conditionally on the result of the lhs.
BH> If Haskell were a strict language I think an equivalent for the do notation
BH> would have to lift everything (except the first expression) and use >>=
BH> instead of >>.

it seems that you misunderstand the monads (or may be i misunderstand :)

each and every monadic operation is a function! type IO a is really
RealWorld -> (RealWorld,a) and the same for any other monad. concept
of the monad by itself means carrying hidden state from one monadic
operation to the next. that allows to _order_ monadic operations plus
this state used for zillions other things, including state, logs,
fails and so on, so on
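
For readers following along, here is a toy version of the state-passing view described above; the names are ours and this is only an illustration of the idea, not how GHC actually implements IO:

    -- A value of type Fake a is a function from a "world" to a new world plus
    -- a result; (>>=) threads the world from one action to the next, which is
    -- what fixes the order in which the actions happen.
    newtype World  = World Int                       -- stand-in for RealWorld
    newtype Fake a = Fake { runFake :: World -> (World, a) }

    instance Functor Fake where
      fmap f (Fake m) = Fake (\w -> let (w', a) = m w in (w', f a))

    instance Applicative Fake where
      pure x              = Fake (\w -> (w, x))
      Fake mf <*> Fake mx = Fake (\w -> let (w1, f) = mf w
                                            (w2, x) = mx w1
                                        in (w2, f x))

    instance Monad Fake where
      return       = pure
      Fake m >>= k = Fake (\w -> let (w', a) = m w in runFake (k a) w')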


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 05:18:55PM -, Brian Hulley wrote:
 I must admit I'm a bit confused as to why the strictness annotations in 
 Haskell (and Clean) are only allowed in data declarations and not function 
 declarations

Clean does allow strictness annotations in function types.

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 01:10:24PM -, Brian Hulley wrote:
 2) Use ',' instead of ';' in the block syntax so that all brace blocks can 
 be replaced by layout if desired (including record blocks)

Wouldn't it be better to use ';' instead of ',' also for record syntax?

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Bulat Ziganshin wrote:

Hello Brian,

Saturday, February 04, 2006, 4:50:44 AM, you wrote:


One question is how to get some kind of do notation that would
work well in a strict setting.
The existing do notation makes use of laziness in so far as the
second arg of >> is only evaluated when needed. Perhaps a new
keyword such as go could be used to use >>= instead ie:



If strictness was the default (eg if the language were ML not
Haskell), then in



 putStr "hello" >> putStr (show 1)



both args to >> would be evaluated before >> was called. Thus
putStr (show 1) would be evaluated before the combined monad is
actually run, which would be wasteful if we were using a monad with
a >> function that only runs the rhs conditionally on the result of
the lhs.
If Haskell were a strict language I think an equivalent for the do
notation would have to lift everything (except the first expression)
and use >>= instead of >>.


it seems that you misunderstand the monads (or may be i misunderstand
:)

each and every monadic operation is a function! type IO a is really
RealWorld -> (RealWorld,a) and the same for any other monad. concept
of the monad by itself means carrying hidden state from one monadic
operation to the next. that allows to _order_ monadic operations plus
this state used for zillions other things, including state, logs,
fails and so on, so on


exp1 >> exp2 in a strict setting would force exp1 to be evaluated to a 
monad, exp2 to be evaluated to a monad, then these monads to be combined 
using >> into another monad, which at some later point would actually be 
run. But it is this eager evaluation of exp2 into the rhs monad that is the 
problem, because in the example above, (show 1) would be evaluated during 
the evaluation of (putStr "hello" >> putStr (show 1)) whereas in Haskell it 
would only be evaluated when the combined monad is actually run (because it 
is only at this point that Haskell actually creates the combined monad from 
the thunk).
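
A rough way to observe the lazy behaviour described above in today's Haskell (Debug.Trace and the example strings are ours, purely for illustration): the argument of the second putStr is only evaluated when the combined action is run, not when it is built.

    import Debug.Trace (trace)

    -- Building this action does not evaluate (show 1); the trace message only
    -- appears once the action is run and putStr demands its argument.
    combined :: IO ()
    combined = putStr "hello" >> putStr (trace "<forcing show 1>" (show 1))

    main :: IO ()
    main = do
      putStrLn "combined has been built, but nothing forced yet"
      combined                 -- now "hello", the trace message and "1" appear
      putStrLn ""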


Regards, Brian.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 01:10:24PM -, Brian Hulley wrote:

2) Use ',' instead of ';' in the block syntax so that all brace
blocks can be replaced by layout if desired (including record blocks)


Wouldn't it be better to use ';' instead of ',' also for record syntax?


I thought of this also, but the nice thing about using commas everywhere is 
that it is consistent with tuples and lists:


   [a,b,c]
   (a,b,c)
   {a,b,c}

I admit it takes some getting used to to write:

   map f (h;t) = f h;map f t

but you couldn't use commas in tuple syntax if they were also used as list 
cons.


Also, I'm using

   test :{Eq a, Show a} a -> ()

instead of

   test :: (Eq a, Show a) => a -> ()

and the comma here is particularly nice because it suggests a set, which is 
exactly what the context is.


Regards, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Ben Rudiak-Gould

Paul Hudak wrote:
Minor point, perhaps, but I should mention that : is not special syntax 
-- it is a perfectly valid infix constructor.


But Haskell 98 does treat it specially: you can't import Prelude hiding 
((:)), or rebind it locally, or refer to it as Prelude.:. In fact I've 
always wondered why it was done this way. Can anyone enlighten me? Of course 
it might be confusing if it were rebound locally, but no more confusing than 
the fact that [f x | x <- xs] is not the same as (map f xs).


It might be kind of nice if the list type were actually defined in the 
Prelude as


data List a = Nil | a : List a

and all of the special [] syntax defined by a desugaring to this (entirely 
ordinary) datatype, e.g. [1,2] -> 1 Prelude.: 2 Prelude.: Prelude.Nil.


-- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Ben Rudiak-Gould

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 01:14:42PM -, Brian Hulley wrote:

How about:

 f x y
 . g x
 $ z


But then you have a problem when you when you want to add something
at the beginning ;-)


How about:

id
. f x y
. g x
$ z

-- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 05:18:55PM -, Brian Hulley wrote:

I must admit I'm a bit confused as to why the strictness annotations
in Haskell (and Clean) are only allowed in data declarations and not
function declarations


Clean does allow strictness annotations in function types.


Thanks for pointing this out - I must admit I had only taken a very quick 
look at Clean (I was overwhelmed by the complicated type system) but now 
I've found the place in the Clean book that describes strictness annotations 
for function types so I must look into this a bit more.


If I wanted to write a 3d computer game in Haskell (or Clean), would lazy 
evaluation with strictness annotations lead to as fast a program as eager 
evaluation with lazy annotations for the same amount of programming effort? 
And would the result be as fast as an equivalent program in C++ or OCaml or 
MLton? If so, there would obviously be no point wasting time trying to 
develop an eager dialect of Haskell (or Clean).


I wonder if current compilation technology for lazy Haskell (or Clean) has 
reached the theoretical limits on what is possible for the compiler to 
optimize away, or if it is just that optimization has not received so much 
attention as work on the type system etc?


Regards, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Tomasz Zielonka
On Sun, Feb 05, 2006 at 06:58:15PM +, Ben Rudiak-Gould wrote:
 Tomasz Zielonka wrote:
 But then you have a problem when you when you want to add something
 at the beginning ;-)
 
 How about:
 
 id
 . f x y
 . g x
 $ z

Yes, I've thought about it. You are using a neutral element of .,
just like I used [] as a neutral element of ++ (or concat).

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Compiling hdirect on windows with COM support

2006-02-05 Thread Marc Weber

On Fri, Feb 03, 2006 at 07:55:30AM +0100, Gracjan Polak wrote:
Hi,
I would be iterested in seeing what you have done. And maybe helping
in getting it to work.
I did not find the examples :) Many links seem to be broken on HDirect
page.

Hi. I have finished. It took some more time than expected ;-)

Just install cygwin, and run this script. It will download parts of
fptools via cvs (so you need cvs and GNU make from cygwin)

Greetings, Marc

--- compilehdirect.sh ---

#!/bin/sh
# author: Marc Weber
# [EMAIL PROTECTED]

# configure fptools

WORKDIR=${WORKDIR:-myfptoolsdirectory} # you can use "" here because cvs will checkout the folder fptools; everything will be done in that..
CREATEZIPSTODEBUGTHISSCRIPT= # "" = no, "yes" = yes, for debugging purposes.

# hdirect lib info (will replace old file due to packaging info changes)
hdirectlibpkginfo="name: \"hdirect\"  \n\
import-dirs: \"\${hd_libdir}\" \n\
library-dirs: \"\${hd_libdir}\" \n\
hs-libraries: \"HShdirect\" \n\
include-dirs: \"\${hd_libdir}\" \n\
depends: \"base\", \"haskell98\" \n\
exposed:True \n\
exposed-modules:\"HDirect\",\"Pointer\" "
#hdirect comlib cabal pkg info:
hdirectcomlibpkginfo="name: com \n\
import-dirs: \"\${hd_imp}\" \n\
library-dirs: \"\${hd_lib}\" \n\
hs-libraries: \"HScom\" \n\
extra-libraries: \"kernel32\", \n\
\"user32\", \n\
\"ole32\", \n\
\"oleaut32\", \n\
\"advapi32\", \n\
\"HScom\" \n\
include-dirs: \"\${hd_inc}\" \n\
depends: \"base\", \n\
\"haskell98\" \n\
exposed:True \n\
exposed-modules: \"Com\", \n\
\"TypeLib\", \n\
\"ComPrim\", \n\
\"Automation\", \n\
\"AutoPrim\", \n\
\"WideString\", \n\
\"StdTypes\", \n\
\"StdDispatch\", \n\
\"ComServ\", \n\
\"Connection\", \n\
\"SafeArray\" "


confirm() { echo -e "$1 \n press return to proceed, Strg-C to exit"; read; }
info() { echo; echo "$1"; echo "$1" | sed -e 's/./-/g' ; }
die() { echo "$1" ; exit; }
question() { echo -e "$1 \n [y]es, [n]o"; read d; if [[ $d = y ]]; then true ; else false; fi; }
dozip() { [ -z "$CREATEZIPSTODEBUGTHISSCRIPT" ] || zip -r ../$1.zip . || die "couldn't create zip" ; }
doziphd() { [ -z "$CREATEZIPSTODEBUGTHISSCRIPT" ] || zip -r ../../$1.zip . || die "couldn't create zip"; }

confirm "this script isn't well tested yet .. run it .. Read the code first because you are running it on your risk"
echo " you will need cvs, GNU make from cygwin. I've used ghc-6.4.1 from Win Installer"
confirm "Be sure to have your ghc in win PATH environment variable, else everything will work till compiling hdirect ;-) if you can run ghc.exe from cygwin shell everything should be fine."


[ -d "$WORKDIR" ] || mkdir "$WORKDIR" || die "couldn't create working directory"
cd "$WORKDIR" || die "couldn't cd to working directory"

if [[ ! -d fptools ]]; then
#if :; then

    question "Shall I download happy (say yes and add --enable-src-tree-happy to configure if you don't have happy)" && coHappy=1
    question "Shall I download alex (say yes and add --enable-src-tree-alex to configure if you don't have alex)" && coAlex=1
    info "checking out fptools top directory"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co -l fptools || die "cvs checkout of fptools top directory failed"
    info "checking out fptools/glafp-utils (used by hdirect)"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/glafp-utils || die "cvs checkout of fptools top directory failed"
    info "checking out ghc"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/ghc || die "cvs checkout of fptools top directory failed"
    info "checking out hdirect"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/hdirect || die "cvs checkout of hdirect failed"
    info "checking out fptools/mk (needed by hdirect make)"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co -l fptools/mk || die "cvs checkout of fptools top directory failed"
    info "checking out ghc/mk (else hdirect/make will complain about ghc/mk/paths.mk)"
    cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/ghc/mk || die "cvs checkout of ghc/mk failed"
    if [ ! -z "$coHappy" ]; then
        info "checking out happy (else hdirect/make will complain about ghc/mk/paths.mk)"
        remhappy="add --enable-src-tree-happy"
        cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/happy || die "cvs checkout of happy failed"
    fi
    if [ ! -z "$coAlex" ]; then
        info "checking out alex (else hdirect/make will complain about ghc/mk/paths.mk)"
        remalex="add --enable-src-tree-alex"
        cvs -d:pserver:[EMAIL PROTECTED]:/cvs co fptools/alex || die "cvs checkout of alex failed"
    fi
    cd fptools
    dozip AfterCO
    cd ..
else
    info "omitting cvs co because fptools already exists"
fi

cd fptools

if [ -f configure ]; then info "omitting autoreconf (configure already exists)";
else  info "calling autoreconf"; autoreconf || die "autoreconf failed" ;
fi

if [ -f 

Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Ben Rudiak-Gould wrote:

Paul Hudak wrote:

Minor point, perhaps, but I should mention that : is not special
syntax -- it is a perfectly valid infix constructor.


 snip
... but no more confusing than the fact that [f x | x <- xs] is
not the same as (map f xs).


Can you explain why? On page 258 of Paul Hudak's book The Haskell School of 
Expression he states that do x <- xs; return (f x) is equivalent to [f x | x 
<- xs] which is clearly just map f xs


I can't find anything wrong with the example in the book but perhaps I've 
missed something?


Regards, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Chris Kuklewicz
Brian Hulley wrote:
 Ben Rudiak-Gould wrote:
 Paul Hudak wrote:
 Minor point, perhaps, but I should mention that : is not special
 syntax -- it is a perfectly valid infix constructor.

  snip
 ... but no more confusing than the fact that [f x | x <- xs] is
 not the same as (map f xs).
 
 Can you explain why? On page 258 of Paul Hudak's book The Haskell
 School of Expression he states that do x <- xs; return (f x) is
 equivalent to [f x | x <- xs] which is clearly just map f xs
 
 I can't find anything wrong with the example in the book but perhaps
 I've missed something?

He may mean that if you *redefine* the operator Prelude.((:)) then the
desugaring and other steps may end up binding the old or the new (:) and no
longer be identical.  This is touched on in

http://www.haskell.org/ghc/docs/6.4.1/html/users_guide/syntax-extns.html#rebindable-syntax

In particular, if you redefine Monad, then [ f x | x <- xs ] and do {x <- xs; return
x} may no longer mean the same thing.
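
For reference, the Haskell 98 translation of the single-generator case, written out by hand (the helper names are ours), shows why the two coincide for ordinary lists:

    -- [ f x | x <- xs ] desugars (essentially) to concatMap (\x -> [f x]) xs,
    -- which for the list monad is the same function as map f.
    comprehensionStyle :: (a -> b) -> [a] -> [b]
    comprehensionStyle f xs = concatMap (\x -> [f x]) xs

    mapStyle :: (a -> b) -> [a] -> [b]
    mapStyle = map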
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Bill Wood
On Sun, 2006-02-05 at 13:49 +0100, Tomasz Zielonka wrote:
   . . .
 and I want to add some transformation between g and z I have to
 change one line and insert another
 
 f x y .
 g x .
 h x y $
 z

 With right-associative $ it would be only one line-add. Probably not a
 very strong argument.

Maybe stronger than you think.  I know that one of the arguments for
making ";" a C-style delimiter rather than a Pascal-style separator is
that adding a new statement at the end of a series is error-prone -- one
tends to forget to add the ";" in front of the new statement (and one
reason Pascal syntax included the null statement was so that "s1;"
would parse as "s1; null", making ";" a de facto delimiter).

Editing ease matters more than a little.

 -- Bill Wood


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Jan-Willem Maessen


On Feb 5, 2006, at 2:02 PM, Brian Hulley wrote:


...
I wonder if current compilation technology for lazy Haskell (or  
Clean) has reached the theoretical limits on what is possible for  
the compiler to optimize away, or if it is just that optimization  
has not received so much attention as work on the type system etc?


I would answer resoundingly that there is still a good deal to  
learn / perfect in the compilation technology, but there's been a  
lack of manpower/funding to make it happen.


-Jan-Willem Maessen



Regards, Brian.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Brian Hulley

Tomasz Zielonka wrote:

On Sun, Feb 05, 2006 at 04:36:44PM -, Brian Hulley wrote:

Just in case you are interested, in the preprocessor I'm writing,
I would write these examples as:

   (.) #
  f x y
  g x
  h x y
   $ z

and
a = #[
   first
   second
   third

where exp # {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a
... ) ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...]
(exp # block and exp # block are the right and left associative
versions respectively and the special # sugar allows a layout block
to be started if it occurs at the end of a line)


Well... I care about change locality and the like, but I'm not sure
I would use such syntax (as a means of communication between
programmers). Perhaps that's because I am not used to it and it looks
alien. But it's rather because I still put readability first.


It is true that it looks quite alien at first, but consider that it allows 
you to use longer identifiers for function names (because they now only need 
to be written once) which could actually enhance readability eg


  Prelude.compose #
 f x y
 g x
 h x y
  $ z

so perhaps people would start using more real words instead of obscure 
symbols like =+= etc. Also, the less use of infix notation the better, 
because every infix symbol requires the reader to search for the fixity 
declaration then try to simulate a precedence parser at the same time as 
grappling with the semantics of the code itself. The #, # notation solves 
this problem by making the sugared associativity immediately visible, and 
the use of layout further enhances the direct visual picture of what's 
happening.


Anyway it's just an idea I thought I'd share- I'm sure there's no danger of 
it ever ending up in a future Haskell... ;-)


Regards, Brian. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Paul Hudak

Ben Rudiak-Gould wrote:

Paul Hudak wrote:
Minor point, perhaps, but I should mention that : is not special 
syntax -- it is a perfectly valid infix constructor.


But Haskell 98 does treat it specially: you can't import Prelude hiding 
((:)), or rebind it locally, or refer to it as Prelude.:. In fact I've 
always wondered why it was done this way. Can anyone enlighten me?


I think that originally it was because various primitives were defined 
(via Translations in the Haskell Report) in terms of lists.  But with 
qualified imports I'm also not sure why this is necessary.


Of course it might be confusing if it were rebound locally, but no more 
confusing than the fact that [f x | x <- xs] is not the same as (map f xs).


It's not?  Hmmm... why not?  (At one time list comprehensions were 
another way to write do notation -- i.e. they were both syntactic sugar 
for monads -- in which case these would surely be different, but that's 
not the case in Haskell 98, as far as I know.)


  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread Paul Hudak

Chris Kuklewicz wrote:

Brian Hulley wrote:

Ben Rudiak-Gould wrote:

... but no more confusing than the fact that [f x | x <- xs] is
not the same as (map f xs).


Can you explain why? On page 258 of Paul Hudak's book The Haskell
School of Expression he states that do x <- xs; return (f x) is
equivalent to [f x | x <- xs] which is clearly just map f xs

I can't find anything wrong with the example in the book but perhaps
I've missed something?


He may mean that if you *redefine* the operator Prelude.((:)) then the
desugaring and other steps may end up binding the old or the new (:) and no
longer be identical.  This is touched on in

http://www.haskell.org/ghc/docs/6.4.1/html/users_guide/syntax-extns.html#rebindable-syntax

In particular, if you redefine Monad, then [ f x | x <- xs ] and do {x <- xs; return
x} may no longer mean the same thing.


Right, but the original question is whether or not [f x | x <- xs] is 
the same as map f xs.  My book's been out for six years and no one has 
mentioned this issue, so if it's a problem I'd like to know why so that 
I can add it to my Errata list!


Thanks,  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why is $ right associative instead of left associative?

2006-02-05 Thread John Meacham
On Sun, Feb 05, 2006 at 06:50:57PM +, Ben Rudiak-Gould wrote:
 Paul Hudak wrote:
 Minor point, perhaps, but I should mention that : is not special syntax 
 -- it is a perfectly valid infix constructor.
 
 But Haskell 98 does treat it specially: you can't import Prelude hiding 
 ((:)), or rebind it locally, or refer to it as Prelude.:. In fact I've 
 always wondered why it was done this way. Can anyone enlighten me? Of 
 course it might be confusing if it were rebound locally, but no more 
 confusing than the fact that [f x | x <- xs] is not the same as (map f xs).
 
 It might be kind of nice if the list type were actually defined in the 
 Prelude as
 
 data List a = Nil | a : List a
 
 and all of the special [] syntax defined by a desugaring to this (entirely 
 ordinary) datatype, e.g. [1,2] -> 1 Prelude.: 2 Prelude.: Prelude.Nil.

it would probably be simpler just to declare [] to be a data
constructor. that is what jhc does, it parses the same as any
capitalized name. so you can do

import Prelude hiding([])

data Foo a = [] | Foo | Bar

and list syntax desugars into whatever (:) and [] are in scope.

similarly, (x,y) is just sugar for (,) x y and (,) is a standard data
constructor and can be hidden, redefined, etc just like any other one.
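
In expressions, standard Haskell already treats the tuple constructor this way, even though it cannot be hidden or rebound there; a quick example (the function name is ours):

    -- (,) is an ordinary prefix data constructor for pairs, so the tuple
    -- syntax (x, y) is just sugar for (,) x y.
    swapWithPrefixConstructor :: (a, b) -> (b, a)
    swapWithPrefixConstructor (x, y) = (,) y x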

John

-- 
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread Cale Gibbard
On 05/02/06, Lennart Augustsson [EMAIL PROTECTED] wrote:
 John Hughes wrote:
  What, no list comprehensions??

 No.  I think the do notation is good enough.


 
  I'd disagree--sequencing is special, and lists represent it directly.
  Don't forget, also, that lists are also much more prevalent in beginners'
  code--and nice notation for beginners helps get people started on
  Haskell.

 I don't really see what's so much better about writing
 [x1,x2,x3,x4,x5] than x1:x2:x3:x4:x5:[].
 When I've explained lists to beginners I've just found it
 annoying and hard to explain why there are two ways of
 writing lists.  And why only lists have this special syntax.

 -- Lennart

Lists have special syntax because they're the lazy-functional
counterpart to loops. They're quite a fundamental structure,
regardless of what other data types we may have at our disposal, and I
think that lots of special support is reasonable. Loops in imperative
languages often get all kinds of special syntax support, and I don't
think it's too far off-base to give lists special syntax accordingly.

That said, I'd *really* like to see monad comprehensions come back,
since they align better with the view that monads are container types,
dual to the view that monads are computations, which is supported by
the do-syntax. This view is actually much easier to teach (in my
experience). Giving lists a little extra syntax is nice, but giving
things unnecessarily restrictive types seems to be the point at which
I'd consider it going too far.

I haven't thought this out too carefully, but perhaps in order to give
the brackets and commas syntax some more weight, the syntax
x1:x2:x3:[] (or x1:x2:x3:Nil ?) could be used solely as a list, but
[x1,x2,x3] would be the corresponding element in any MonadPlus -- this
would be quite handy in a lot of the cases which I care about
(nondeterministic computation). It would also mesh perfectly with
monad comprehensions.
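
Written out by hand in today's Haskell, the generalised bracket sketched above is just a fold of mplus over the returned elements (the function name is ours):

    import Control.Monad (MonadPlus, mplus, mzero)

    -- A generalised [x1, x2, x3]: each element is injected with return and
    -- the results are joined with mplus.
    fromBracket :: MonadPlus m => [a] -> m a
    fromBracket = foldr (mplus . return) mzero

    -- e.g. fromBracket [1,2,3] :: [Int]     gives [1,2,3]
    --      fromBracket [1,2,3] :: Maybe Int gives Just 1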

I'd also like to mention that although my background is in pure
mathematics, I had no trouble whatsoever adjusting to :: meaning has
type. A colon is commonly inserted in mathematics between the name of
a function and a depiction of the domain and codomain with an arrow
between them, but I wouldn't think of that as formal syntax per-se.
Also, it's not centuries-old as mentioned, but only about 50 years old
-- I believe it started with the use of arrows in category theory.
Before then, people mostly stated the types of functions in words, or
it was left completely implicit, and they still often do both of
those. Also, it is only used for functions and doesn't apply to values
in any set or concrete categorical object. The notation x : S to mean
x is an element of S is not in widespread common use.

The use and context in mathematics is sufficiently different that I
don't see it as a concern that Haskell be the same in this regard.

The aesthetic reason that I like :: for has type and : for cons is
that it's far more common that type signatures occur on a line by
themselves, whereas conses when needed are often needed in bunches on
the same line.

Not that I'm suggesting that we change things, but as an example, I
actually wouldn't mind typing out has type for type declarations
(though the symbolic form is awfully nice when things must be
annotated in-line, because it looks more like a separator rather than
some random words -- syntax colouring could make up for that though),
whereas I would likely mind a larger symbol for list cons. The amount
of typing isn't the concern, it's how it actually looks on the page,
and where it occurs in use.

The wishy-washy informal reasoning is that cons is like a small bit of
punctuation between the elements of a list -- semantically a comma,
whereas a type annotation is actually saying something. When reading
code aloud, you might not even say 'cons', and if you do say
something, it'll probably be something fairly short and
punctuation-like whereas for a type annotation, you're almost
certainly going to say 'has type' or 'is of type', which seems
structurally 'larger' to me, and perhaps deserves a bigger, more
noticeable representation on the page.

 - Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread Cale Gibbard
On 05/02/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:

 On Feb 5, 2006, at 2:02 PM, Brian Hulley wrote:

  ...
  I wonder if current compilation technology for lazy Haskell (or
  Clean) has reached the theoretical limits on what is possible for
  the compiler to optimize away, or if it is just that optimization
  has not received so much attention as work on the type system etc?

 I would answer resoundingly that there is still a good deal to
 learn / perfect in the compilation technology, but there's been a
 lack of manpower/funding to make it happen.

 -Jan-Willem Maessen


Besides, haven't you heard of the full-employment theorem for
compiler-writers? ;) To paraphrase it: For every optimising compiler,
there is one which does better on at least one program.

In any event, if compilers which preserve non-strict semantics aren't
producing programs from naively written code which are even as fast as
the corresponding hand-tuned C+Assembly programs, then there's still
plenty of room for improvement. It just takes a lot of time,
resources, and effort, as mentioned, to make it (or more reasonable
approximations to it) happen.

Perhaps some small amount of overhead will always be needed to
implement programs with non-strict semantics, (short of solving the
halting problem) but I think that with a lot of hard work, this is
something which could be squeezed down a lot, perhaps to the point of
being negligible. (It's already negligible for many, perhaps even most
applications, on modern hardware.)

I think that as programming languages become higher level, one has
more and more fun opportunities to optimise that it would be much more
difficult to locate and attempt in lower level languages. For example,
knowing that a piece of code is a 'map' or 'foldr' operation,
algebraic rules can be applied at the higher levels, performing
fusion. This part has been done to some extent, but perhaps there are
much deeper things which could be done at that level.
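
One concrete instance of such an algebraic rule is the classic map/map fusion law, which GHC-style rewrite rules can state directly (a sketch only; GHC's own list-fusion rules are organised differently):

    {-# RULES
    "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}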

At a lower level, special optimisers could be used in native code
generation, which would take advantage of the fact that there will be
no state or limited state to carry around and no real side effects
(potentially limiting the loads and stores and operating system calls
one would have to do). One might choose an appropriate scheduler which
handled code differently based on the particular higher-order function
in which it was wrapped, since different structures of computation put
different kinds of strain on any given processor.

Anyway, don't ever fool yourself into thinking that any
otherwise-reasonable language is somehow inherently slow. There's
always potential for a better implementation.

 - Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is $ right associative instead of left associative?

2006-02-05 Thread John Hughes

Cale Gibbard wrote:


That said, I'd *really* like to see monad comprehensions come back,
since they align better with the view that monads are container types,
dual to the view that monads are computations, which is supported by
the do-syntax. This view is actually much easier to teach (in my
experience). Giving lists a little extra syntax is nice, but giving
things unnecessarily restrictive types seems to be the point at which
I'd consider it going too far.

 



The trouble with monad comprehensions was that it became far too easy to 
write ambiguous programs, even when you thought you were just working 
with lists. Haskell overloading works really nicely *as long as there's 
a judicious mixture of overloaded and non-overloaded functions*, so that 
the overloading actually gets resolved somewhere. Overload too many 
things, and you end up having to insert type annotations in the middle 
of expressions instead, which really isn't nice.


Lists are special, not least because they come very early in a Haskell 
course--or, in my case, in the first ever programming course my students 
have ever taken. Getting error messages about ambiguous overloading when 
they are still trying to understand what comprehension notation means 
(without even the concept of a for-loop to relate it to) is very 
destructive. And this is in the case where the code is otherwise 
type-correct--the kind of error message you would get by trying to 
append a number to a monad comprehension doesn't bear thinking about!


The class system is already something of an obstacle in teaching, 
because you have to mention it in the context of arithmetic (even if you 
tell students always to write monomorphic type signatures, they still 
see classes mentioned in error messages). After all, that is surely why 
Helium doesn't have it. I find classes manageable for arithmetic, even 
if students do take some time to learn to distinguish between a class 
and a type (or a type and a value, for that matter!). But it's a relief 
that list programs, at least, have simple non-overloaded types. List 
functions provide an opportunity to introduce polymorphism in a simple 
context--it's much easier to understand why (++) should have the type 
[a] -> [a] -> [a], than to start talking about MonadPlus m => m a -> m a 
-> m a.


There is a lot to learn in Haskell, especially in the type and class 
system. It's an advantage if you don't have to learn it all at once--if 
you can master lists and list comprehensions without exposure to monads 
(which are a much harder concept). We should never forget that beginners 
have somewhat different needs from expert programmers--and those needs 
are also important. If we want Haskell to be used for first programming 
courses (and it's a big advantage to catch 'em early), then there needs 
to be a learning path into the language that goes quite gently. 
Monomorphic lists help with that.


We did consider more aggressive defaulting to address the ambiguity 
problems with monad comprehensions--defaulting Monad to lists, for 
example, or user-declared defaulting rules--but this introduces yet more 
complexity without really addressing the problem of keeping types simple 
for beginners, so the idea was abandoned.


John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[2]: strict Haskell dialect

2006-02-05 Thread John Meacham
On Sun, Feb 05, 2006 at 05:18:55PM -, Brian Hulley wrote:
 I must admit I'm a bit confused as to why the strictness annotations in
 Haskell (and Clean) are only allowed in data declarations and not function
 declarations, since it seems a bit random to have to guess which args can
 be evaluated strictly at the call site although it of course gives
 flexibility (eg to use (+) strictly or lazily). The type system doesn't
 prevent someone from writing (>>) m0 $! m1 even though the author of (>>)
 may have been relying on m1 being lazily evaluated... (?)

It is because a data declaration is defining the form of the data, which
includes both its representation and the type of its constructors. the
strictness annotations affect its representation (or at least its
desugaring) but not its type. The strictness of the fields is not
reflected in the type. A function declaration is just declaring the type
of the function, where strictness is not reflected either just like in
data types.


another way you can think of it is that for

data Foo = Bar !Int !Char

the bangs aren't being associated with the Int and Char types, but rather
the Bar data constructor. However, the syntax is a little confusing in
that it makes the bangs look as though they were part of the types of
the constructor arguments.
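
Concretely, under Haskell 98 semantics the bangs make the constructor itself force those fields, roughly as if every use of Bar went through a seq-ing wrapper like the one below (the name bar is ours):

    data Foo = Bar !Int !Char

    -- Applying Bar behaves roughly like this: the Int and Char fields are
    -- evaluated to WHNF when the Foo value is constructed.
    bar :: Int -> Char -> Foo
    bar x y = x `seq` y `seq` Bar x y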

John

--
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe