Re: Negation

2010-02-13 Thread John Launchbury
I don't think this is a bug. I do not expect to be able to unfold a definition 
without some syntactic issues. For example,

two = 1+1
four = 2 * two

but naive unfolding fails: four = 2 * 1 + 1, which parses as (2 * 1) + 1. In general, we
expect to have to parenthesize things when unfolding them.
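As a sketch (the parenthesized versions are added for illustration, not part of the original exchange), both unfoldings are fine once the unfolded expression is wrapped in parentheses:

  two, four :: Int
  two  = 1 + 1
  four = 2 * (1 + 1)                 -- unfolding of "two", parenthesized

  increment' :: Int -> Int
  increment' = ((let x = 1 in x) +)  -- unfolding of "one" (from the message below), parenthesized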

John


On Feb 13, 2010, at 11:56 AM, Simon Marlow wrote:

 On 09/02/10 21:43, S. Doaitse Swierstra wrote:
 Once we start discussing syntax again it might be a good occasion to
 reformulate/make more precise a few points.
 
 The following program is accepted by the Utrecht Haskell Compiler (here
 we took great effort to follow the report closely ;-} instead of
 spending our time on n+k patterns), but not by GHC or Hugs.
 
 module Main where
 
 -- this is a (rather elaborate) definition of the number 1
 one = let x=1 in x
 
 -- this is a definition of the successor function using section notation
 increment = ( one + )
 
 -- but if we now unfold the definition of one we get a parser error in GHC
 increment' = ( let x=1 in x + )
 
 Now that *is* an interesting example.  I had no idea we had a bug in that 
 area. Seems to me that it ought to be possible to fix it by refactoring the 
 grammar, but I haven't tried yet.
 
 Are there any more of these that you know about?
 
 Cheers,
   Simon



Re: Proposal: change to qualified operator syntax

2009-07-09 Thread John Launchbury

Simon,

Thanks for summarizing the arguments about qualified operators. I  
continue to be a strong advocate of this approach. IMO, we either have  
to do this, or disallow . (dot) as an operator, and I think you are  
right that it is too late to do the latter.


In the proposed implementation of this change, you say:

Prelude.(=) has to be a single lexeme, because there's no way to  
lift the syntax of qualified names into the context-free grammar. So  
this forces us to move the syntax for parenthesized operators and  
`..` down to the lexical syntax


with the consequent behavior that `---` and (---) are not dual.

Can you regain duality by *also* providing a definition of `---` and  
(---) at the level of the context free grammar?


John


John Launchbury   | galois |   (503)626-6616 x104


On Jul 9, 2009, at 4:22 AM, Simon Marlow wrote:


On 08/07/2009 23:06, k...@cas.mcmaster.ca wrote:

Simon Marlow replied to Henning Thielemann:

   Prelude.= just doesn't look like an infix operator.  The point of
   infix operators is that they are a lightweight notation, but they lose
   that advantage when qualified.  The qualified operator proposal gives
   you a nice rule of thumb to rely on when parsing code in your head: if
   it begins with a letter, it is not infix.  The advantages of this
   shouldn't be underestimated, IMO.

Actually, I am another supporter of qualified infix operators.

I see on

   http://hackage.haskell.org/trac/haskell-prime/wiki/QualifiedOperators

that with the new proposal, I would have to write

  `Prelude.(=)`


You don't *have* to write that, you can use the prefix form.  The  
argument that this proposal makes is that when you have to qualify  
an operator, it has lost most of the advantages of being infix.
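For reference, a small sketch (an illustrative example, not from the thread) of the forms already available in Haskell 98; the proposal under discussion changes how the infix qualified form is written:

  import qualified Data.List

  diffPrefix, diffInfix :: [Int]
  diffPrefix = (Data.List.\\) [1,2,3] [2]   -- prefix use of a qualified operator
  diffInfix  = [1,2,3] Data.List.\\ [2]     -- infix use of a qualified operator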


I frequently run into situations where that would be extremely useful.



the M.. M... M debacle

I don't think that problems arising from a single character
should outlaw a whole lexical category.
Better outlaw that character! ;-)


Dot is a particularly troublesome character, owing to the decision  
to use it for qualified operators back in Haskell 1.3.  It's really  
too late to change that now, sadly.



Back to the original argument:

   Prelude.= just doesn't look like an infix operator.

I think that

  `Prelude.(=)`

doesn't really look like an infix operator either.


It does begin with a `, just like `Data.List.map`, or `fmap`.  So in  
that sense it is more like an infix operator than Prelude.=.


Anyway, thanks for all the comments in this thread.  I've tried to  
summarise the pros/cons on


http://hackage.haskell.org/trac/haskell-prime/wiki/QualifiedOperators

Please let me know if I've missed anything.  The committee will  
review the arguments when we make final decisions.


I realise this change is not a clear-cut win.  So few things are.   
It's a question of balancing the advantages against the  
disadvantages, and reasonable people are very likely to differ.


Cheers,
Simon




Re: Status of Haskell Prime Language definition

2007-10-15 Thread John Launchbury

Hi Robert,

At the recent Haskell workshop, I stood up and gave the following  
summary (approximately):


Up to now, the Haskell' effort has been mostly about exploring the  
possibilities, to find out what could be in Haskell', and to scope  
out what it might mean. We've now reached the stage where we want to  
do the opposite, namely trying to pin down what we definitely want to  
have in the standard, and what it should look like in detail. I've  
set aside a chunk of my own time this fall to help coordinate the  
activity, write text etc. I'm hoping that things should be pretty  
clear by early next year.


I have spoken with CUP and JFP about publishing the standard as a  
special issue of JFP and as a book, and they are interested. The  
strawman timeline for that is early next summer.


Hope this helps,
John


On Oct 11, 2007, at 9:34 PM, Robert Will wrote:


Hi all,

When I first discovered Haskell' I was really excited to hear that
many of the individual extensions that are already used by many people
are going to be put together into one coherent next release.

I have read the archive of the Haskell Prime Mailing list for all of
2007 as well as a lot of pages on Haskell.org and in the Haskell Prime
Wiki, yet the most recent status report that I found is the one in the
wiki from September 2006.
(http://hackage.haskell.org/trac/haskell-prime/wiki/Status')

Could someone please summarize the current status and planned time
line for Haskell'?

thanks a lot,
Robert

--
Skype: robert.will



John Launchbury   | galois |   (503)626-6616 x104




Re: State monads don't respect the monad laws in Haskell

2002-05-16 Thread John Launchbury

Yes. Let me be clear.

It is not the fact that `seq` operates on functions that breaks foldr/build:
it is the fact that `seq` claims to be parametrically polymorphic when in
fact it is not. The parametricity result is weakened to the point that the
foldr/build proof no longer applies, and a counter example can be
constructed, viz.

  head = foldr const undefined
  one = build (\c n -> n `seq` c 1 n)

  result = head one

The definition of one looks fine, as the body of build is sufficiently
polymorphic, but that is only because `seq` is lying.
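Working through it (a sketch, assuming the usual definitions build g = g (:) [] and the foldr/build rule foldr k z (build g) ==> g k z):

  resultWithout, resultWith :: Int
  -- without the rule, applying build by hand: one = [] `seq` (1 : []) = [1]
  resultWithout = foldr const undefined ((\c n -> n `seq` c 1 n) (:) [])   -- = 1
  -- with the rule: foldr const undefined (build g) ==> g const undefined
  resultWith    = (\c n -> n `seq` c 1 n) const undefined                  -- = undefined

so the rewrite turns a defined result into bottom.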

John

 I watched with interest the discussion about the ravages of `seq`.
 
 In the old days, we protected uses of `seq` with a type class, to keep it at
 bay. There was no function instance (so no problem with the state monads, or
 lifting of functions in general), and the type class prevented interference
 with foldr/build.
 ...
 
 Just a further remark: During discussion with Olaf about consequences of
 `seq` for foldr/build, respectively his type-inference based
 deforestation, I had the impression that there could very well be a
 function instance for `seq` not interfering with shortcut deforestation,
 provided this instance is restricted in the following way:
 
 class Eval a where seq :: a -> b -> b
 
 instance Eval d => Eval (c -> d)
 
 I have no idea what would be a semantic justification for this instance
 declaration, except that it allows use of `seq` on at least some
 function types, but seems to outlaw all the critical cases where use of
 `seq` falsifies the foldr/build-rule.
 
 
 --
 Janis Voigtlaender




Re: State monads don't respect the monad laws in Haskell

2002-05-15 Thread John Launchbury

I watched with interest the discussion about the ravages of `seq`.

In the old days, we protected uses of `seq` with a type class, to keep it at
bay. There was no function instance (so no problem with the state monads, or
lifting of functions in general), and the type class prevented interference
with foldr/build.

However... 

In defining Haskell 98 we made a conscious decision not to guard `seq` with
a type class. The reasoning was as follows:

 + Space control is an important phase of programming, and `seq` provides
   a powerful mechanism for tweaking a program, to improve its space
   behavior. It should therefore be as easy as possible to experiment
   with adding `seq` in various places, including on function closures
   (a small sketch follows this list).

 - Haskell already had somewhat muddied semantics for types (both sums
   and products are lifted, and bottom is present in every type). It
   seemed as though a little more muddying was not a serious affair.
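A minimal sketch of that first point (an illustrative example, not part of the original message): forcing a function-valued thunk with `seq` so that the partial application is built once, before the traversal.

  sumScaled :: (Int -> Int) -> [Int] -> Int
  sumScaled f xs =
    let g = f . (* 2)              -- a function closure
    in  g `seq` sum (map g xs)     -- seq used at a function type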

Did we make the right decision? For Haskell 98: Yes. And I believe we made
it with our eyes open.

Will the next version of Haskell have something better? I hope so, but I
fear not. 

First, I don't see much active research on what Haskell would look like with
true unpointed types -- there are lots of practical issues to be addressed;
second, the prevalence of things like unsafePerformIO (slogan: USPIO is NOT
Haskell) seems to be growing, not diminishing, and again I don't see much
active research on how to gain its benefits while containing its damage.

True declarative programming is a delicate flower.

Review the history of the 80's and early 90's. It was the commitment of the
lazy FP community to true declarativeness that led us all to reject proposal
after proposal for state and efficient arrays, until finally Eugenio and
Phil led us to the land of monads. Finally, as a result of the burning need
we all felt, and of the high standards we all demanded, we could be as
imperative as the best of them, without compromising the mathematical nature
of the language. If this history teaches us anything, it should urge us all,
and the new young blood in particular, not to go for quick fixes, or to
compromise on the dream of a truly declarative language.

Haskell 98 is a great language. The best on the planet today. But it's not
perfection. Let's confront the problems face to face, and find the right way
forward!

John


 And I'd really much rather we cleaned up the semantics of
 seq---or better yet, fixed the problems with lazy evaluation which
 make seq necessary in the first place.
 
 A general question: What is seq useful for, other than efficiency?
 
 seq can create a new, strict definition of a function from an existing
 non-strict function.




Re: Implicit Parameters

2002-02-05 Thread John Launchbury


 
 My questions are: Were the designers of the implicit
 parameters paper aware of this problem when they wrote the
 paper? If so, they probably did not think this was a big
 problem. Do people in general think this is a problem?

We certainly were aware. It is a problem, and a big one. The monomorphism
restriction (MR) was (barely) acceptable in Haskell 98 because at least the
final value returned by the program was not changed by this kludge kicking
in. But, as we point out in the paper, implicit parameters and the MR are
simply incompatible. One of them has to go.
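A minimal sketch of the clash, in today's GHC notation (ImplicitParams; the example is illustrative, not from the paper):

  {-# LANGUAGE ImplicitParams #-}
  example :: Int
  example =
    let ?x = 1 in
    let y = ?x            -- under the MR, y :: Int, so ?x is captured here
    in let ?x = 2 in y    -- 1 with the MR, 2 without it

With the restriction, y is monomorphic and the result is 1; without it, y keeps the (?x :: Int) constraint and reads ?x at its use, giving 2 -- the kludge changes the value the program returns.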

As John Hughes intimated, this debate is part of a much larger issue as to
how Haskell handles type schemes versus types, and implicit parameters show
that type schemes can arise from causes other than polymorphism.

In the long term, should Haskell maintain a distinction between types and
type schemes? Between call-by-name and call-by-need? Should type schemes be
permitted everywhere? If so, should inference do its best and simply report
when ambiguities arise? Etc. etc.

I think the time has come for us to address these types of questions from a
fundamental basis, not simply as fixes to the existing infrastructure.
Otherwise we'll never be able to budge from the sludge of the kludge...

John






Re: n+k patterns

2002-01-30 Thread John Launchbury

I strongly disapprove of n+k patterns from a whole-language taste
perspective, so I am most unkeen to broaden their scope. Because they are
such a language kludge already it simply doesn't make sense to try to reason
rationally about what the best answer for them is. It's like putting
lipstick on a chicken.

If anything, we should have restricted them to the very simplest case
covered in the early textbooks, i.e. just Int.

John

 | hbc is on the Integral side, if that counts. :-)
 | Just because ghc doesn't follow the spec isn't a good reason
 | to change the spec. :-)
 
 I absolutely didn't say that!  All I'm saying is
 
 * Two of the four impls have to change regardless
 * The change is non-destabilising on the rest of the report
 * So we should think what the best answer is
 
 I argued that (Num a, Ord a) makes most sense to me.
 You argued that (Integral a) was a conscious choice (something I
 don't remember but I'm sure you're right), and is the right one anyway.
 
 I'd be interested to know what others think.  If there's any doubt,
 we'll stay with Integral.
 
 Simon
 





Re: fixity for (\\)

2001-01-22 Thread John Launchbury

I agree with Koen: \\ is list subtraction and we're all used to subtraction being left 
associative.
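To spell out the difference (an added sketch):

  import Data.List ((\\))

  leftAssoc, rightAssoc :: [Int]
  leftAssoc  = ([1,2,3] \\ [2]) \\ [3]   -- left-associative reading:  [1]
  rightAssoc = [1,2,3] \\ ([2] \\ [3])   -- right-associative reading: [1,3]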

John.


Jon Fairbairn wrote:
 
 On Wed, 17 Jan 2001, Koen Claessen wrote:
  I propose that it gets the following fixity:
 
infixl 5 \\
 
 Unless it's common usage outside of Haskell, I oppose!
 
 Getting
 
   List> [1,2,3]\\[2]\\[3]
ERROR: Ambiguous use of operator "(\\)" with "(\\)"
 
 at compile time does no harm, but getting [1] instead of
 [1,3] _at run time_ does do harm.
  Jón
 






Haskell job opportunities

2000-11-16 Thread John Launchbury

Hi,

A discussion thread earlier this year asked about the existence (or otherwise) of jobs 
involving programming in Haskell and other FP languages. Well, they really do exist, 
and we have some openings. If you're interested you can find out more at 
http://www.galconn.com.

John.




Re: Results: poll: polymorphic let bindings in do

2000-06-06 Thread John Launchbury

Koen,

 If a language has the property that in one place, one can
 use a "let" block to define polymorphic bindings, and in
 another place one can only use it for monomorphic bindings,
 then I think that is bad language design.

I don't think that's a very accurate description. The "let" in "do"
is a bit of a different beast from the usual "let..in" construct.
For a start, it doesn't have an "in" keyword. This is not a trivial
point: "let"s are guaranteed to introduce polymorphism only
within the block introduced by "in". Uses of variables (functions) 
bound by a "let" block are a priori monomorphic within the rest of that
block. In practice, of course, the strongly-connected-components pass
will reconfigure big let-blocks into many separate blocks, and so
increase the amount of polymorphism. In a "do" block, however, it's
a lot harder to do the reconfiguration, so we would be stuck with
the by-default monomorphic behavior. (The current use of "let" in
"do" does some simplistic reconfiguration -- introducing an "in"
keyword to obtain polymorphism -- which does not work in the recursive
case).
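A small sketch of the current behaviour (an illustrative example): the "let" in a "do" acts as if it had an "in" covering the remaining statements, so f below can be used at two types there.

  demo :: IO ()
  demo = do let f = \x -> [x]
            print (f True, f 'a')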

 On the other hand, recursive do-notation is obviously a
 great feature that many (at least I) have been wanting for
 some time now. If that feature implies monomorphic let
 bindings in do-notation, then there is clearly a trade-off
 to make. But don't use the argument "nobody is using this
 feature" to make the language design less consistent.

The argument is really to make the design *simpler* because
nobody is using the feature. The consistency or otherwise is
an artifact of the "let" keyword being asked to do double duty,
and that's the point at which we need to ask about consistency.

John.




Re: Results: poll: polymorphic let bindings in do

2000-06-06 Thread John Launchbury

Jeff,

 Isn't that a bit of a dodgy argument?  I don't know of any papers on `in'
 polymorphism, but many about `let' polymorphism.  If I see `let', I expect
 polymorphism, and I'm not going to go searching for an `in'.  

Not true (or if true, misguided). Consider the type rule for "let". The
generalization only takes place in the context for the body of the "let" (the bit
after the "in"), and not in the context of the declarations themselves. This is
why extra work is needed to achieve polymorphic recursion, that is, where "let" 
declarations are indeed polymorphic in their own scope. In the absence of an
explicit type annotation the use would be monomorphic. In the absence of let
restructuring, the same would apply to any other uses of the variable (function)
in the same binding group.
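A sketch of the binding-group point (an added example, not from the message): f and g below are mutually recursive, so they form a single binding group. Without the signatures, g is used monomorphically inside the group (at Bool and at Char) and the program is rejected; the explicit signatures give polymorphic recursion and make it typecheck.

  f :: a -> (a, Bool, Char)
  f x = (x, g True, g 'a')

  g :: a -> a
  g y = case f y of (y', _, _) -> y'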

 It's been proposed before that we have a separate notation for monomorphic
 bindings.  I'd like to see you adopt this approach, rather than having `let'
 in `do' behave fundamentally differently.  This example also provides nice
 example and motivation for introducing a monomorphic let notation.

This could make sense if done throughout the language. Our proposal is intended
to be much more modest.

 The ability to bind multiple things at once (and have the corresponding
 increase in polymorphism) is a nice feature of a modern `let' - but it's not
 the primary characteristic.  The let in do could easily bind multiple things
 as well (by allowing `in').  The currently syntax is just an aesthetic choice
 isn't it?

The current "let" in "do" already can bind multiple things. The reason for omitting
the "in" is explicitly to avoid introducing another scope level (requiring a new "do",
a new indent, etc.). How one expects it to behave depends on the implicit 
assumptions made about all the variables being bound in the "do". Currently, 
even though "do" gives the appearance of introducing many variables all at the same
declaration level, it actually introduces a nested sequence of variable bindings,
each possibly shadowing bindings earlier in the sequence. In this current setting
it makes sense for "let" in "do" to be thought of having an implicit "in", and so
its bound variable is polymorphic in the remainder of the "do". In a recursive "do"
(as in a "let"), all the newly bound variables are actually considered to be in
a concurrent scope -- as suggested by the layout. In this setting, any variables bound
by "let" are also part of the same scope as the others -- we have one big binding
group -- hence they are monomorphic when used by other declarations in the same
binding group.

John.




Re: Show class on ADT with function

2000-05-10 Thread John Launchbury

Note that these have omitted the behavior of the function on bottom, so even on a
finite domain, the description is not complete.
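For instance (an added sketch), the instance quoted below prints these two functions identically, although they differ on bottom:

  f1, f2 :: Bool -> Int
  f1 _ = 1                                     -- f1 undefined = 1
  f2 b = case b of { True -> 1; False -> 1 }   -- f2 undefined = undefined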

john.


[EMAIL PROTECTED] wrote:
 
 George writes:
  There is no problem with Showing functions with finite domains.
  For example, try:
 
  module ShowFun where
  instance (Show a) => Show (Bool -> a) where
     show f = show ((f True), (f False))
  instance (Show a) => (Show (Int -> a))
 
 Why stop there?  Eq and Read too, though they do become tricky at Int -> Int.
 
 
 Ian Stark http://www.dcs.ed.ac.uk/home/stark
 LFCS, Division of Informatics, The University of Edinburgh, Scotland






Re: core dumps when making use of IORefs

1999-09-14 Thread John Launchbury


 I don't know any way to make unsafePerformIO type-safe without imposing
 some drastic or complicated restriction.  Something in the back of
 my mind tells me that John Launchbury has another example of
 type unsafety induced by unsafePerformIO but I can't remember what;
 so I'm cc'ing him.

Just an expansion of the same idea. UnsafePerformIO allows us to write a cast function 
with type
  cast :: a -> b
I put the code for this in my ICFP paper this year.
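The folklore construction is roughly this (a sketch, not necessarily the exact code from the paper):

  import Data.IORef
  import System.IO.Unsafe (unsafePerformIO)

  ref :: IORef a
  ref = unsafePerformIO (newIORef undefined)
  {-# NOINLINE ref #-}   -- make sure there is only one shared reference

  cast :: a -> b
  cast x = unsafePerformIO (writeIORef ref x >> readIORef ref)

The polymorphic reference is written at one type and read at another, so the type system is circumvented.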

I don't know of any fix.

John.



Re: typing error?

1999-07-19 Thread John Launchbury

Jan Brosius wrote:

 I have read the online postscript paper " Lazy Functional State Threads".
 I think the typing (page 4)
 
 f :: MutVar s a -> MutVar s a
 
 is wrong, since the definition of  f  is :
 
 f v = runST ( newVar v `thenST` \w ->
   readVar w)
 

I think you mean page 10...

The typing given for f is an instance of the more general typing you gave. The intent 
was to show that the s-types were not special in any way, and that state threads could 
manipulate state structures belonging to other threads. There are less contrived 
examples which are not mere instances of more general polytypes.
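In today's notation (Control.Monad.ST and Data.STRef, assuming the obvious translation of the paper's MutVar/thenST) the function reads:

  import Control.Monad.ST
  import Data.STRef

  f :: STRef s a -> STRef s a
  f v = runST (newSTRef v >>= readSTRef)

The result type mentions only the outer s, not the state thread introduced by runST, so runST's polymorphic type is satisfied.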

John





Here's a puzzle to fry your brains ....

1999-04-28 Thread John Launchbury

Folks,

My student Levent Erkok and I have been playing about with the idea of mutually 
recursive bindings in the do notation. Mostly it's clear how they should behave, but 
we struggle with the following example. I would love to hear some (considered) 
opinions about this.

John.



test :: [Int]
test = do (x,z) <- [(y,1),(2*y,2), (3*y,3)]
          Just y <- map isEven [z .. 2*z]
          return (x+y)

isEven x = if even x then Just x else Nothing





Re: Standard Haskell

1998-09-09 Thread John Launchbury

I think I favor "20th century Haskell" myself :-)

Hassett wrote:
 
 On 9/8/98 5:10 PM, Andrew Rock wrote
 
 If Standard Haskell is meant to be a stable target for texts and the like,
 why not Haskell-Ed (for Education), perhaps with a version indication like
 Haskell-Ed-98.
 
 Unfortunately, this carries the risk that the uninformed may think that
 the
 language was named after Eddie "That's a very nice sweater, Mrs. Cleaver"
 Haskell.
 
 - Jim Hassett





Re: A new view of guards

1997-04-29 Thread John Launchbury

I love Simon's suggestion. It gives me all the right vibes. And <- seems to
me to be the right connective to use.

At the risk of beating my hobby horse, let's not think of <- solely in
terms of monads. It is certainly appropriate there, but it is also
appropriate for lists when thought of purely as a bulk data type, and I
think it's appropriate here also.

Simon's syntax also provides a viable alternative to @ patterns

  f x @ (Just 3) y = e

  f x y | Just 3 <- x = e

though it is slightly less convenient in certain contrived examples (but
very much less ad hoc).

John.







My home page

1995-09-21 Thread John Launchbury


Folks,

Soon after offering my home page as a source of the paper Phil
mentioned the other day, we had a disk go bad which led to

  403 Forbidden
  Your client does not have permission to get URL /~jl from this server.

The problem has now been fixed, so you should be able to get through.

John.







Re: Haskell 1.3

1995-09-11 Thread John Launchbury


I would like to respond to John's note. My response is largely positive,
though I disagree with a couple of points.

However, it is an independent question whether or not strictness annotations
should be applicable to function types. And this is where I disagree with
the committee. To quote `Introducing Haskell 1.3',

Every data type, except ->, is a member of the Data class.

In other words, in Haskell 1.3

FUNCTIONS ARE NOT FIRST-CLASS CITIZENS

I cannot agree here. Functions are not members of the equality class either,
but that does not demote them to second class citizens. However, John may be
right in suggesting that people will become more reluctant to use functions
as values if they cannot force their evaluation.

I see a very great cost in such a philosophical change, and I do not see
that the arguments against strictly evaluating function values are so very
compelling.

  Implementation difficulties? hbc has provided it for years, and
  even under the STG machine is the problem so very much harder than handling
  shared partial applications correctly?

I haven't checked hbc, but I would be interested if someone would confirm
that function strictify works properly. It didn't use to in LML.

  Semantic difficulties? The semantics of lifted function spaces are
  perfectly well defined. OK, so it's not the exponential of a CCC --- but
  Haskell's tuples aren't the product either, and I note the proposal to
  change that has fallen by the wayside.

This is probably an important point. I see there being value in two sorts
of functions: lifted and non-lifted (or equivalently boxed and unboxed).
A lifted function may be expressed as a computation which delivers a function,
just like lifted integers are computations which deliver integers. Under this
view it would be entirely in keeping with the rest of Haskell for the standard
functions to be lifted, and to leave open the possibility in the future of
introducing unlifted functions.

So here's my proposal: change `Introducing Haskell 1.3' to read

Every data type, including ->, is a member of the Data class.

I am inclined to agree. Is there a problem then that every type is in Data?
Not at all. The Data class indicates that forcing has been used in the
body of an expression. This is valuable information that is exposed in
the type.

John.







Re: Do functions exist?

1993-11-19 Thread John Launchbury


John says we can't go from a function to its concrete representation,
i.e. E -> [E] -- OK. He hints that implementations are from concrete
representations to real functions, i.e. [E] -> E. I disagree profoundly.

I'm not surprised you disagree. I hinted no such thing. Implementations
manipulate representations.

So to manipulate concrete representations of functions in a functional
language we have to implement functions all over again? But don't we already
have concrete representations of functions in the original functional language
implementation? All (all?) I'm proposing is to print them out.

Yes we do, and it's fine for the *implementation* to print such things out.
But it is NOT fine to cross the dividing line of (1) the implementation, and
(2) the language it implements. As I believe I have made abundantly clear
a showFun function which exists *within* the language destroys many of our
reasoning properties.

Incidentally, my point about not bothering to evaluate functional
programs whose final values are functions was serious. Presumably,
people don't generally write programs that return functions as
final values? Maybe functions are the only types in functional languages
which are not 1st class objects - full orthogonality doesn't
stretch to printing functions. And we call this functional programming?

This is silly (excuse my frustration). There are lots of things you can't
do with all sorts of types. You can't apply a list to a value and hope it
makes sense, for example. I don't see it as surprising that there are
things you cannot do with functions as a consequence of their nature.

Let's go back to the lambda calculus again. Here are some lambda terms
which we may hope exist within the calculus. I state their defining
property and whether
such a term exists or not. The properties are universally quantified over
ALL lambda terms E.

  a    E   =   E       Yes
  b   [E]  =  [E]      Yes
  c   [E]  =   E       Yes
  d    E   =  [E]      NO

It's not that I am being restrictive, or hooked on semantics, or anything
like that. It is simply a matter that no such lambda term exists. Try to
write one if you like.

Now of course, when we look on the calculus *from the outside* then we can
see into the internal structure of terms. So looking from the outside gives
a richer
view than looking in. So as I said before, it *is* ok for an implementation
to provide information about the structure of closures etc., but it must
not be done via a showFun function within the language.

John.






Re: Function denotations

1993-11-18 Thread John Launchbury


Greg's argument about function denotations is valid up to a point. However,
when optimising functional programs we rely on *denotational* equivalence
to justify substitution, not just beta equivalence. So for example, we
can prove that

  foldr (:) [] = id :: [a] -> [a]

in the sense that given any input, they both produce the same output. This
form of equality is called *extensionality* and we make use of such facts
very frequently when deriving programs, or when proving them correct. Note
that neither side reduces to the other.

Thus, if we provided a function

  showFun :: (Int -> Int) -> String

say, that returned a string representing the function then any denotational
equivalence based on extensionality goes out the window. A big loss, to put
it mildly! The problem is that the string is then available for use by the
rest of the program.

On the other hand, if we provided a system call (a request or whatever)
of the form

data Request = ...
 | ShowFun (Int -> Int)

then any problems become much more contained, as it is only the *external
world* that would witness non-extensionality.

John.





module Main

1993-11-17 Thread John Launchbury


I have a followup question to Lennart's motivated by the same
IOHCC entry.

The Haskell report states that in an abbreviated module, the
header is assumed to be

  module Main where

which makes all the identifiers exportable (and so the monomorphism
restriction could bite the unwary). I tend to think that the header
should be the simpler

  module Main(main) where

so that only main is exported. It seems to me that the only value of
having the abbreviated form is for small, single-module programs, so
main is the only thing that needs to be exported. In a multi-module
program it seems to me to be bad practice not to be explicit about all
module headers. At least in the case when Main defines stuff used by
other modules this proposal will require the header to be explicit.

John.

PS. I don't know how others out there feel about the Obfuscated Haskell
competition that Lennart is running, but I am now convinced it's a
great idea. It is exactly these programs which test the corners of
the language definition. Perhaps some of them should go into Ian
Holyer's validation suite! Just in preparing my entry I hit a bug
in glhc (with regard to defaults), and the program's failure under
hbc provoked Lennart's question.





Lifted Stuff

1993-11-05 Thread John Launchbury


Some people have referred to semantic issues and Abramsky and Ong's work
when contributing to the lifted/unlifted debate. I think it would be fair
to summarise them as follows.

Pro lifting
===

The simplest possible lambda calculus has lifting in its canonical model as
soon as "laziness" is introduced. Otherwise, non-strict constructors could
not be modelled in the calculus.

When actual non-strict constructors are provided, therefore, essentially
nothing new has been added. The same models work beautifully. Thus in my
POPL 93 paper about lazy semantics, I used lifted function spaces to obtain
a direct correspondence between operational and denotational semantics that
held *everywhere*. Contrast this with Purushothaman and Seaman's ESOP 92
paper where they used unlifted function spaces, and there the
correspondence was only true at base types.

Summary: operational and denotational models "fit" with one another if
function spaces are lifted (and products too by implication).


Con Lifting
===

If our goal is to improve equational reasoning then, as many people have
pointed out, more (interesting) equations hold when function and product
spaces are not lifted.

Furthermore, the natural semantics of unboxed types is that of domains
without bottoms (FPCA 91), so if Haskell is ever likely to move (in the
future) in the direction of accepting unboxed types into its definition,
then it would make sense not to introduce possibly unnecessary liftings
elsewhere.

John.





Liftings

1993-11-05 Thread John Launchbury


More on liftings:

In our FPCA 91 paper, Simon and I came to the conclusion that the "proper"
way to give semantics to data declarations was, as follows. If

  data T = A U V | B W

then the model for T (written here T*) is

  T* = ((U* x V*) + W*)_\bot

where the x is pure domain product, the + is disjoint union (i.e. not smash
sum, not separated sum, but a union which results in a domain without a least
element), and _\bot is lifting.

The form of bottomless types is now obvious:

  bottomless S = A U V | B W

is modelled by

  S* = (U* x V*) + W*

What this means is that constructors really are playing three roles in standard
Haskell: they are formed as a composition of lifting and injection into a sum,
and introduce a (true) product. I believe we ought to be very cautious
about introducing special cases for single constructors, for example, which
do not generalise neatly.

John.





Haskell 1.3 (n+k patterns)

1993-10-12 Thread John Launchbury


I feel the need to be inflamatory:

  I believe n+k should go.

There are lots of good reasons why they should go, of course. The question
is: are there any good reasons why they should stay? My understanding is
that the only reason they are advocated is that they make teaching
induction easier. I don't believe it. I teach an introductory FP course
including induction. I introduce structural induction directly, and the
students have no problem with it. When I have tried to talk to individuals
about natural number induction using (n+k) patterns, then the problems
start. Because they are so unlike the form of patterns they have become
used to, they find all sorts of difficulties. What if n is negative? Ah yes,
well it can't be. Why not? It just can't. Etc.

Let's throw them out.

John.





Re: haskell

1993-10-12 Thread John Launchbury



It sounds to me as if the problem is with negative numbers.
So, one more time ...  What about the *natural* numbers?
Doesn't anyone else program with these?  (Maybe just occasionally? :-)

The problem is only partly to do with naturals. Having these would
certainly improve matters but I suspect the Num class is big enough
already. However, some other problems of (n+k) patterns are
 * syntax: + is the only nonconstructor constant allowed in patterns.
 * generality: mathematics commonly uses (bn+k) patterns, but Haskell doesn't
   allow this. It has one series of special cases only.
 * construction: laws were thrown out of Miranda because of the confusion
   created when something built one way, was taken apart another way. All our
   patterns *except (n+k)* have the property that the deconstruction reverses
   construction. If I build (a:as) then take it apart with pattern (x:xs) then
   x is a, and xs is as. If I build (a+b) then take it apart with pattern
   (n+1) then I cannot deduce that n is a (nor that 1 is b).
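   A small illustration of that last point (an added sketch, using the old
   (n+k) syntax, so given only as a comment):

     -- f (n+1) = n
     --
     -- f (0 + 1) is 0, but f (2 + (-1)) is also 0: matching recovers
     -- "the argument minus one", not the n used to build the value.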

John.





re ADTs etc.

1993-10-05 Thread John Launchbury


I think there is another problem with having strict constructors. It messes
up parametricity even more than fix does. There are two reasons why this
would be a shame:

* Parametricity is cool. It lets you prove lots of interesting theorems
  for very little work. These theorems help with program transformation
  and the like.

* Some compilers use parametricity. In particular, the justification for
  cheap deforestation method (foldr-build) comes from parametricity. If
  parametricity is weakened too much the transformation may become unsafe.

One way to introduce strictness is to use overloading and have a class
Strict with an operation
  strict :: a -> a
defined for each type in the class (not including functions unless their
semantics changes, nor unlifted products if they get introduced). Then a
strict constructor would have a class restriction and these would provide
the standard mediation for parametricity.

John.






Re: Arrays and Assoc

1993-10-05 Thread John Launchbury



But I think we can have the cake and eat it too, if we get rid of the
restriction (which I never liked) that operators beginning with : must be a
constructor: just define 
a := b = (a,b)

Unfortunately that won't work if := had been used in patterns. I think
backward compatibility is an issue. The standard technique of supporting
Assoc but with compiler warnings will probably have to be used.

---

I'm not exactly sure what you mean here. It is already possible to define 
arrays by self-reference in Haskell.

Haskell arrays are strict in the indices. That is, the whole of the
defining list is consumed and the indices examined before the array becomes
available. Thus, a recursive array definition in which the *index
calculation* depends on the earlier part of the array gives bottom. The
current definition allows for a recursive definition so long as it is only
the values of the array elements which depend on the array. This is not
always sufficient.
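For example (an added sketch, in current library notation):

  import Data.Array

  fibs :: Array Int Integer
  fibs = listArray (0, 10)
           [ if i < 2 then 1 else fibs ! (i-1) + fibs ! (i-2) | i <- [0 .. 10] ]

Here the element values refer back to the array and that is fine; an index expression that needed earlier elements of the same array would force the whole association list first, and so would be bottom.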

---

Let me just remind people what the LML arrays does:

example:
lmlarray 1 3 f list = 
array [ 1:= f [ x | (1,x) <- list],
        2:= f [ x | (2,x) <- list],
        3:= f [ x | (3,x) <- list]
  ]
where array is like the ordinary Haskell array constructor function.
 ...
It seems to me that it is a bit more general to apply f to the entire
list accumulated at each index, rather than as an operator for foldr.

If you want the list you can supply (:) and []. If not, you supply the
operations, and the intermediate list never gets built.
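In current library terms (a sketch), accumArray is the "supply the operations" version of this: the operator is folded in as the associations arrive, and supplying (flip (:)) and [] recovers the collect-a-list behaviour.

  import Data.Array

  counts :: [(Int, Int)] -> Array Int Int
  counts = accumArray (+) 0 (1, 3)

  collect :: [(Int, Int)] -> Array Int [Int]
  collect = accumArray (flip (:)) [] (1, 3)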

John.