[Haskell] Postdoctoral Research Assistant position at University of Edinburgh

2021-12-27 Thread Philip Wadler
The School of Informatics, University of Edinburgh invites applications for
a postdoctoral research assistant on the SAGE project (Scoped and Gradual
Effects), funded by Huawei. The post will be for eighteen months.

We seek applicants at an international level of excellence.  The School of
Informatics at Edinburgh is among the strongest in the world, and Edinburgh
is known as a cultural centre providing a high quality of life.

If interested, please contact Prof. Philip Wadler.
Full details are here.
Applications close at 5pm on 14 January 2022.

The Opportunity:

In the early 2000s, Plotkin and Power introduced an approach to
computational effects in programming languages known as *algebraic effects*.
This approach has attracted a great deal of interest and further
development, including the introduction by Plotkin and Pretnar of *effect
handlers*. Several research languages, such as Eff, Frank, Koka, and Links,
support the use of algebraic effects. A few languages, including Multicore
OCaml and WebAssembly, support specific algebraic effects, and we may see
general algebraic effects make their way into mainstream languages in the
near future.

The proposed research has two focusses: (a) a simpler version of static
scoping for algebraic effect handlers, and (b) gradual types for algebraic
effect systems.

The former is a question of how to match an effect with its corresponding
handler. This issue arises when two effects have the same name, and you
expect the handler for one but invoke the handler for the other. Some
current solutions, like tunnelling and instance variables, are based on
lexical scoping. However, despite this insight, tunnelling and instance
variables are more complex than lexical binding of a variable to a value.
In particular, the type system needs to track that an effect does not
escape the scope of its surrounding handler (reminiscent of the funarg
problem with dynamic variable binding). Can we do better?
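
The capture problem is easiest to see in miniature. Below is a toy
free-monad encoding of a single effect in Haskell; all names are
illustrative and not from the SAGE project. Because the innermost handler
answers every operation in its extent, an operation can end up answered by
a handler other than the one the programmer intended, which is what
lexically scoped handlers aim to rule out:

```haskell
-- A computation is either a result, or an "ask" operation awaiting
-- an Int answer from some enclosing handler.
data Eff a = Pure a | Ask (Int -> Eff a)

instance Functor Eff where
  fmap f (Pure a) = Pure (f a)
  fmap f (Ask k)  = Ask (fmap f . k)

instance Applicative Eff where
  pure = Pure
  Pure f <*> x = fmap f x
  Ask k  <*> x = Ask (\n -> k n <*> x)

instance Monad Eff where
  Pure a >>= f = f a
  Ask k  >>= f = Ask (\n -> k n >>= f)

ask :: Eff Int
ask = Ask Pure

-- A handler answers every Ask it can see with a fixed value.
handle :: Int -> Eff a -> a
handle _ (Pure a) = a
handle n (Ask k)  = handle n (k n)

-- Handlers scope dynamically: each Ask is answered by the nearest
-- enclosing handler, so nesting a second handler changes which
-- operations go where.
example :: Int
example = handle 1 $ do
  x <- ask                -- answered by the outer handler
  let y = handle 2 ask    -- the inner handler captures this Ask
  pure (x + y)
```

Here `example` evaluates to 3: the outer handler answers the first `ask`
with 1, the inner one answers the second with 2.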

The latter is a question of how to integrate legacy code that lacks typed
effects with new code that supports typed effects. Usually, integration of
dynamically and statically typed languages is best explored in the context
of gradual types, which support running legacy code and new code in tandem.
To date there is only a tiny amount of work in the area of gradual types
for effects, and important aspects of algebraic effects, such as effect
handlers, have yet to be addressed. Overall, work to date on effect
handlers is often complex, and attention is required to where ideas can be
simplified, which is the ultimate objective of this project.
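
The boundary such a system must manage can be sketched with Data.Dynamic:
legacy values travel untyped and are cast into typed code at the boundary.
This is only a cartoon of gradual typing (casts on values, not on effect
signatures), with illustrative names:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- "Legacy" data with no static types: everything is Dynamic.
legacy :: [Dynamic]
legacy = [toDyn (1 :: Int), toDyn "two", toDyn (3 :: Int)]

-- Typed "new" code casts at the boundary; values of the wrong
-- type fail the cast and are dropped rather than crashing.
ints :: [Int]
ints = [ n | d <- legacy, Just n <- [fromDynamic d] ]
```

A gradual type system makes such casts implicit and, for effects, would
have to insert the analogous checks on effect signatures.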

Your skills and attributes for success:

• PhD (or close to completion) in programming languages or a related field.
• Expertise and experience in relevant areas, including lambda calculus,
type systems, and compilation.
• Ability to implement prototype language processors in OCaml, Haskell, or
a related language.
• Excellent written and oral communication, analytical, and time
management skills.
• Ability to collaborate with a team.


• Published research in programming languages or a related field,
preferably in top-tier conferences.
• Experience in presenting at conferences or seminars.
• Knowledge of effect systems or gradual type systems.
• Experience with a proof assistant, such as Agda or Coq.
• Strong programming and software engineering skills.

.   \ Philip Wadler, Professor of Theoretical Computer Science,
.   /\ School of Informatics, University of Edinburgh
.  /  \ and Senior Research Fellow, IOHK
. http://homepages.inf.ed.ac.uk/wadler/
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
Haskell mailing list

Re: [Haskell] Haskell tutors required!

2020-09-09 Thread Philip Wadler
Yes, you've located last year's syllabus! Go well, -- P


On Tue, 8 Sep 2020 at 21:20, Howard B. Golden wrote:

> Hi, I am intrigued by the opportunity, but I wonder if I am up to the
> task. To help me decide, I have this question: Is the syllabus for the
> course similar to the most recent presentation (
> https://www.learn.ed.ac.uk/webapps/blackboard/content/listContent.jsp?course_id=_73477_1_id=_3873901_1
> )?
> Best regards,
> Howard

[Haskell] Haskell tutors required!

2020-09-03 Thread Philip Wadler
Edinburgh is one of a few places in the world that teaches functional
programming to first-year students, specifically Haskell.

This year, our Haskell course for first-years will be delivered
remotely. Last year's class had 400 students; this year we expect
more. In the past we've had trouble locating enough high-quality
tutors. The unique situation offers us a new possibility: recruit
tutors from elsewhere. Please get in touch if:

(a) You might yourself be interested.

(b) You know of folk who might be interested or could forward an
announcement to a suitable mailing list.

Informatics 1 - Introduction to Computation (INF1A/FP)

Role: Tutor

Description: Tutors required from week 1 to week 11 (Monday 21
September -- Friday 4 December) to assist in weekly one-hour
ten-person tutorial sessions to review and mark solutions to exercises
prepared by students in advance of the tutorial, to facilitate group
work, and to answer questions about the material covered in the
course. Tutors will be paid for 3 hours/week: 1 for preparation, 1 for
marking, and 1 for the online tutorial session (£15.52/hour * 3 hours
* 11 weeks).  In the past, we have had trouble recruiting a sufficient
number of capable tutors. This year, we will take advantage of remote
delivery to recruit tutors from both within and without Edinburgh.
Tutorials are expected to be on Thursdays and Fridays each
week. Wadler will monitor tutorials, and expects to be in a position
to write letters of recommendation after the course, if required.

Skills: Good knowledge of functional programming, preferably in
Haskell, and of basic finite state machines and propositional logic;
such as might be had by a 3rd or 4th year undergraduate or a
postgraduate student. Ability to communicate enthusiasm for the
subject and encourage students new to university to engage in and
contribute to tutorials.  Tutors need to supply appropriate
encouragement and support for all students, from those who are
struggling to those who are coping easily and want to learn more.

To apply: Send email to wad...@inf.ed.ac.uk. Include one paragraph on
your background and why you would do a good job teaching the course.
Please include any relevant documentation, such as a copy of your

I look forward to hearing from you! Go well, -- P


[Haskell] Haskell, Then and Now. Got Questions? Ask them here!

2020-07-03 Thread Philip Wadler
IOHK Cardano Virtual Summit continues. Today's sessions include:

*16.00 (UK time) Fri 3 Jul* *Haskell, then and now: What is the future
for **functional
programming languages?* Prof Simon Peyton-Jones, Prof John Hughes, Prof
Philip Wadler, Dr Kevin Hammond, Dr Duncan Coutts.

You can submit questions via Reddit
Register for the summit here <https://cardanosummit.iohk.io/>. You can log
in with your registered username and password here

See you there! Go well, -- P


[Haskell] Cardano Virtual Summit 2020

2020-07-02 Thread Philip Wadler
Cardano Virtual Summit 2020 takes place today and tomorrow. Some talks that
may be of interest:

*16.00 Thu 2 Jul* *An overview of IOHK research* Prof Aggelos Kiayias, Prof
Elias Koutsoupias, Prof Alexander Russell, Prof Phil Wadler.

*18.30 Thu 2 Jul* *Architecting the internet: what I would have done *
*differently... *Vint Cerf, Internet pioneer and Google internet
evangelist, Prof Aggelos Kiayias, panel moderated by Prof Philip Wadler.

*20.00 Thu 2 Jul* *Functional smart contracts on Cardano* Prof Philip
Wadler, Dr Manuel Chakravarty, Prof Simon Thompson.

*16.00 Fri 3 Jul* *Haskell, then and now: What is the future for **functional
programming languages?* Prof Simon Peyton-Jones, Prof John Hughes, Prof
Philip Wadler, Dr Kevin Hammond, Dr Duncan Coutts.

You need to register in advance. You can do so here:


[Haskell] Lecturer/Senior Lecturer/Reader in Programming Languages for Trustworthy Systems in Edinburgh LFCS

2019-12-10 Thread Philip Wadler
Lecturer/Senior Lecturer/Reader in Programming Languages for Trustworthy
Systems in Edinburgh LFCS


The University of Edinburgh seeks to appoint a Lecturer/Senior
Lecturer/Reader in Programming Languages for Trustworthy Systems.  An ideal
candidate will be able to contribute and complement the expertise of the
Programming Languages & Foundations Group which is part of the Laboratory
for Foundations of Computer Science (LFCS).

The successful candidate will have a PhD, an established research agenda
and the enthusiasm and ability to undertake original research, to lead a
research group, and to engage with teaching and academic supervision, with
expertise in at least one of the following:


   Practical systems verification: e.g. for operating systems, databases,
   compilers, distributed systems

   Language-based verification: static analysis, verified systems / smart
   contract programming, types, SAT/SMT solving

   Engineering trustworthy software: automated/property-based testing, bug
   finding, dynamic instrumentation, runtime verification

We are seeking current and future leaders in the field.

Applications from individuals from underrepresented groups in Computer
Science are encouraged.

Appointment will be full-time and open-ended.

The post is situated in the Laboratory for Foundations of Computer Science,
the Institute in which the School's expertise in functional programming,
logic and semantics, and theoretical computer science is concentrated.
Collaboration relating to PL across the School is encouraged and supported
by the School's Programming Languages Research Programme, to which the
successful applicant will be encouraged to contribute. Applicants whose
PL-related research aligns well with particular strengths of the School,
such as machine learning, AI, robotics, compilers, systems, and security,
are encouraged to apply and highlight these areas of alignment.

All applications must contain the following supporting documents:

• Teaching statement outlining teaching philosophy, interests and plans

• Research statement outlining the candidate’s research vision and future
plans

• Full CV (resume) and publication list

The University job posting and submission site, including detailed
application instructions, is at this link:


Applications close at 5pm GMT on January 31, 2020. This deadline is firm,
so applicants are exhorted to begin their applications in advance.

Shortlisting for this post is due early February with interview dates for
this post expected to fall in early April 2020. Feedback will only be
provided to interviewed candidates. References will be sought for all
shortlisted candidates. Please indicate on your application form if you are
happy for your referees to be contacted.

Informal enquiries may be addressed to Prof Philip Wadler.

Lecturer Grade: UE08 (£41,526 - £49,553)

Senior Lecturer or Reader Grade: UE09 (£52,559 - £59,135)

The School is advertising a number of positions, including this one, as
described here:


About the Laboratory for Foundations of Computer Science

As one of the largest Institutes in the School of Informatics, and one of
the largest theory research groups in the world, the Laboratory for
Foundations of Computer Science combines expertise in all aspects of
theoretical computer science, including algorithms and complexity,
cryptography, database theory, logic and semantics, and quantum computing.
The Programming Languages and Foundations group includes over 25 students,
researchers and academic staff, working on functional programming, types,
verification, semantics, software engineering, language-based security and
new programming models. Past contributions to programming languages
research originating at LFCS include Standard ML, the Edinburgh Logical
Framework, models of concurrency such as the pi-calculus, and foundational
semantic models of effects in programming languages, based on monads and
more recently algebraic effects and handlers.

About the School of Informatics and University of Edinburgh

The School of Informatics at the University of Edinburgh is one of the
largest in Europe, with more than 120 academic staff and a total of over
500 post-doctoral researchers, research students and support staff.
Informatics at Edinburgh rated highest on Research Power in the most recent
Research Excellence Framework. The School has strong links with industry,
with dedicated business incubator space and well-established enterprise and
business development programmes. The School of Informatics has recently
established the Bayes Centre for Data Technology, which provides a locus for
fruitful multi-disciplinary work, including a range of compa

[Haskell] University of Edinburgh Chancellor's Fellowships

2018-02-13 Thread Philip Wadler
Dear All,

The University of Edinburgh is offering a variety of Chancellor's Fellowships.
These are essentially lectureships with an initial fellowship period of
five years. Some fellowships may be in Programming Languages as part of the
Digital Technologies area, and suitable candidates are strongly encouraged
to apply. The deadline is March 12th; please see:


for further information. Feel free to contact me if you have questions.
Yours, -- P


Re: Two Proposals

2011-10-19 Thread Philip Wadler
 Yes... not quite so well with the “by” variants.  What would we say?

   then group initsBy by x

This problem occurs independent of group, does it not?

  then sortBy by x

The solution is to rename

  sort --> sortByDefault
  sortBy --> sort

but that's hardly appealing.  Might be better to live with the
problem.  Any ideas?

 I don’t think we can steal “group” as a keyword -- it’s a function exported
 by Data.List, and I don’t think the benefit justifies the cost.

So you are saying you prefer 'then group' to 'group'?

Yours,  -- P

.\ Philip Wadler, Professor of Theoretical Computer Science
./\ School of Informatics, University of Edinburgh
/  \ http://homepages.inf.ed.ac.uk/wadler/


Glasgow-haskell-users mailing list

Fwd: Two Proposals

2011-10-12 Thread Philip Wadler
George,  Thanks very much for this.  I like your suggestion, which
fits the logical structure perfectly; and you've suggested a neat way
around the ugliness of 'group groupBy'.  I also note that if we aren't
so worried about not introducing new keywords, that 'then group' could
become 'group'.  Yours,  -- P

On Mon, Oct 10, 2011 at 11:21 PM, George Giorgidze giorgi...@gmail.com wrote:
 A quick thought that came to me after hoogling [a] -> [[a]].
 The first four functions in the search result are named after what they
 return (noun in plural) rather than what they do (verb). I am talking about
 inits, permutations, subsequences and tails.
 So I thought the following syntax might work as well if (as it is already
 common) grouping functions are named after  what they return.
 then   f
 then   f by e
 then group f
 then group f by e
 For example the following code fragments read well:
 then group inits
 then group permutations
 then group subsequences
 then group tails
 Here we use the special identifier group as a verb.
 I have not told you about the fifth result of the hoogling, the groupWith
 function. The following really looks ugly:
 then group groupWith by e
 But following the aforementioned naming convention the groupWith function
 could as well be named as equals. Now this reads well:
 then group equals by e
 Cheers, George
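
For comparison, the variant of this design that eventually shipped as
GHC's TransformListComp extension names the grouping function explicitly
with `using`; a small sketch (not the exact syntax under discussion above):

```haskell
{-# LANGUAGE TransformListComp #-}
import GHC.Exts (groupWith)

-- Group 1..6 by parity. After the `then group` qualifier, each
-- bound variable (here n) is rebound to the list of values in
-- its group; groupWith sorts by the key before grouping.
byParity :: [[Int]]
byParity = [ n
           | n <- [1 .. 6]
           , then group by even n using groupWith ]
```

With this syntax, `byParity` yields the odd group followed by the even
group, since False sorts before True.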


[Haskell] elementary tracing for Haskell

2004-10-11 Thread Philip Wadler
Are there any tools for beginning programmers that give traces of 
Haskell programs?  I want something like the following.  Given the definition

sum [] = 0
sum (x:xs) = x + sum xs
sum [1,2,3]
should yield this trace
sum [1,2,3]
1 + sum [2,3]
1 + 2 + sum [3]
1 + 2 + 3 + sum []
1 + 2 + 3 + 0
1 + 2 + 3
1 + 5
6
I know there are advanced tools like Hat, but (unless I'm missing 
something) they don't yield anything as naive as the above, and it's 
naive users I'm trying to help here.  -- P
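
For what it's worth, one low-tech stopgap is to instrument the function by
hand with Debug.Trace, which prints each recursive call to stderr (calls
only, not the full rewriting steps sketched above):

```haskell
import Debug.Trace (trace)

-- Hand-instrumented sum: prints "sum [1,2,3]", "sum [2,3]", ...
-- as evaluation unfolds, then returns the ordinary result.
sum' :: [Int] -> Int
sum' xs = trace ("sum " ++ show xs) (go xs)
  where
    go []     = 0
    go (y:ys) = y + sum' ys
```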

PS.  Google search on Haskell trace yields lots on Hat, plus the following:

The Mystery of Brigadier General Harry L. Haskell
... we have developed a short biography of General Haskell and been 
able to trace the
 locations where he lived, and, possibly, how the medal came into my 
family. ...
 family.phelpsinc.com/branches/haskell/ - 28k - Cached - Similar pages

Philip Wadler, Professor of Theoretical Computer Science
School of Informatics, University of Edinburgh
JCMB, Mayfield Road, Edinburgh EH9 3JZ SCOTLAND
+44 131 650 5174 http://homepages.inf.ed.ac.uk/wadler/

Re: HaskellDoc?

2000-03-17 Thread Philip Wadler

Volker suggests using SGML/DSSSL for documentation.  If one were to
take this route, I think XML/XSLT would be a more sensible combination.

for a quick intro to XML.  Runciman and Wallace at York have a nice
XML library for Haskell.

Volker (or anyone else reading this), can you tell me where to find a
user manual for DSSSL?  I've never been able to locate one.  I'd also
like an article for SIGPLAN or JFP (or both) describing DSSSL; if you
think you can write one, please let me know.  (For those of you who
don't know, DSSSL is based on Scheme.)

Cheers,  -- P

ICFP 2000: new submission deadline

2000-02-27 Thread Philip Wadler

 In order to maintain the same deadline as our sister conference PPDP,
we are postponing our submission deadline by two weeks.
 Our apologies for the change.

 ICFP 2000:  International Conference on Functional Programming
Montreal, Canada; 18--20 September 2000
  (associated with PLI 2000/PPDP 2000)

  13.00 EST (18.00 UTC), 15 MARCH 2000
  (previously 13.00 EST, 1 March 2000)

   Call and submission details online

ICFP 2000

2000-02-22 Thread Philip Wadler

Below is the call for ICFP 2000.  You may prefer to view it on the web:

The submission deadline is fast approaching: 1 March 2000!
Two invited speakers have accepted:
Carl Seger, Intel
Benjamin Pierce, University of Pennsylvania
A third is in the works.

This year we are experimenting with two special classes of submissions,
Application letters (Applets) and Functional Pearls, to emphasize both the
utility and the beauty of functional programming.  If you have a nifty
application or an elegant program that never quite seemed to fit the conference
mold, then this is your year.  And of course, we also want submissions that
fulfill the traditional model of a crisp new research result.  See you in
Montreal!  -- P

Call for Papers
ICFP 2000:  International Conference on Functional Programming
Montreal, Canada; 18--20 September 2000
   (associated with PLI 2000)



ICFP 2000 seeks original papers on the full spectrum of the art, science, and
practice of functional programming.  The conference invites submissions on all
topics ranging from principles to practice, from foundations to features, and
from abstraction to application.  The scope covers all languages that encourage
programming with functions, including both purely applicative and imperative
languages, as well as languages that support objects and concurrency.  Papers
setting new directions in functional programming are particularly encouraged.
Topics of interest include, but are not limited to, the following:

 FOUNDATIONS: formal semantics, lambda calculus, type theory, monads,
 continuations, control, state, effects.

 DESIGN: modules and type systems, concurrency and distribution, components
 and composition, relations to object-oriented and logic programming,
 multiparadigm programming.

 IMPLEMENTATION: abstract machines, compile-time and run-time optimization,
 just-in-time compilers, memory management, foreign-function and component
 interfaces.

 TRANSFORMATION AND ANALYSIS: abstract interpretation, partial evaluation,
 program transformation, theorem proving, specification and verification.

 APPLICATIONS: scientific and numerical computing, symbolic computing and
 artificial intelligence, systems programming, databases, graphic user
 interfaces, multimedia programming, web programming.

 EXPERIENCE: FP in education and industry, ramifications on other
 paradigms and computing disciplines.

The conference also solicits two special classes of submissions, application
letters and functional pearls, described below.

Application Letters (Applets)

Especially in industry, users of functional languages may be fully occupied
writing functional programs, and may lack the time to write a full paper
describing their work. Thus attendees often hear only from those developing
functional languages --- the users are too busy using them. In order to attract
greater participation from users, the conference solicits application letters
describing experience using functional languages to solve real-world
problems. Such papers might typically be about six pages (any length up to
twelve pages is fine), and may be judged by interest of the application and
novel use of functional languages as opposed to a crisp new research result.

Functional Pearls

Program committees traditionally expect a paper to make a contribution of a
certain size. Ideas that are small, rounded, and glow with their own light may
have a number of venues, but conferences are not typically among them. (Among
the outlets have been columns such as Bentley's Programming Pearls in
Communications of the ACM, Rem's Small Programming Exercises in Science of
Computer Programming, and Barendregt's Theoretical Pearls and Bird's Functional
Pearls in the Journal of Functional Programming.) As an experiment, this year
the conference invites papers that develop a short functional program. Such
papers might typically be about six pages (any length up to twelve pages is
fine), and may be judged by elegance of development and clarity of expression as
opposed to a crisp new research result.

Submission guidelines

Authors should submit a 100-200 word abstract and a full paper. Submissions
should be no more than 12 pages in standard ACM conference format: two columns,
nine point font on ten point baseline, each column 20pc (3.33in) wide and 54pc (9in)
tall with a column gutter of 2pc (0.33in). Submissions that do not meet these
guidelines will not be considered. Suitable style files for LaTeX, Word, and
Word Perfect are provided by the ACM at

Re: drop take [was: fixing typos in Haskell-98]

2000-01-25 Thread Philip Wadler

Chris,  I admit your argument about symmetry is attractive.
If you could put forward a concrete application, on a par with
the `break into blocks' application given earlier, you would
likely sway me.  -- P
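
For reference, the `break into blocks' application mentioned above can be
written with take and drop as follows (a sketch):

```haskell
-- Split a list into consecutive blocks of (at most) n elements,
-- using take for the current block and drop for the rest.
blocks :: Int -> [a] -> [[a]]
blocks _ [] = []
blocks n xs = take n xs : blocks n (drop n xs)
```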

Re: Contexts on data type declarations

1999-05-17 Thread Philip Wadler

 So doesn't my proposal give you precisely what you want, and no more?

Because my interest is as much in documentation as in constraints on
how the program executes (I admit your method achieves the latter).
If I write, say,

  data Eq a => Assoc a b

then I'd like the signatures to ensure that the first argument of Assoc
is a member of class Eq wherever it appears.  Your technique guarantees
this if Assoc is used in a constructor, but not if used in a destructor.

   But when you take a constructor *apart*, the invariant must hold
   by construction: you couldn't have built the thing you are taking
   apart unless invariant held.  So enforcing the invariant again is
   redundant; and in addition it pollutes the type of selectors.

Your `redundant pollution' is exactly the effect I want to achieve!
Clearly, tastes differ.  Hope this clarifies,  -- P

Re: Market Penetration of FP and Haskell

1999-03-04 Thread Philip Wadler

"S. Alexander Jacobson" [EMAIL PROTECTED] writes:

 So, if they are making sure that all platforms have a scheme interpreter,
 why not make the spec language agnostic.  i.e. allow the user to use any
 language that can be compiled/translated into scheme (by code written in

Your assumption is incorrect.  There is no Scheme interpreter in XSL.

I should have been clearer in my earlier comment: XSL is a special
purpose language with a highly specialised structure (as Alexander
notes, it is a little like production rules).  It is not as general as
either Haskell or Scheme.  DSSSL, unlike XSL, was embedded in Scheme
and provided both the specialised structure and general-purpose Scheme
computation.  I'd be very happy if XSL was as nice as Scheme!

 So the user would be able to say:
 <transformation language="http://www.haskell.org/haskell2scheme">

There is an extension mechanism in XSL to allow part of the transformation
to be done in another language, roughly similar to the above.

-- P

Re: Market Penetration of FP and Haskell

1999-03-02 Thread Philip Wadler

"S. Alexander Jacobson" [EMAIL PROTECTED] writes:

 There is a campaign going on to separate XSL (eXtensible style
 language) into two separate languages
 1. a transformation language
 2. a style language
 I suggest that now is the opportunity to make a convincing case that the
 transformational language should be a functional programming language.
 HoF, Laziness and statelessness are ideal properties for such a language.
 The existing XSL spec was somewhat influenced by scheme, but a
 transformational language has stronger consistency requirements and should
 be typechecked so Haskell seems like a good choice.

I have bad news, good news, bad news, and good news.

Bad news: The proposal to split XSL refers to the current working
draft, which already contains a transformation language (Section 2)
and a style language (Section 3).  Preliminary versions of the
transformation language are already in wide use (including Internet
Explorer 5 by Microsoft; LotusXSL by IBM Alphaworks; and xt, by James
Clark, editor of the XSL document).  So it's way too late, there is no
chance of getting the W3C to switch to Haskell.

Good news: The charter requires that XSL be `a declarative language',
and in fact the transformation part of XSL is a functional language.
It is largely based on DSSSL, the stylesheet language for SGML, an ISO
standard based on Scheme.  (I was gobsmacked when I first discovered
Scheme was part of an ISO standard for document production!)

Bad news: XSL is not nearly as nice as Haskell.  Sigh.

Good news: Malcolm Wallace and Colin Runciman at York are working
on a Haskell library for manipulating XML.  If the transformation and
style languages are separated (which seems likely to happen), then that
will make it easier for users to substitute Haskell for the transformation
part of XSL, if they choose.

-- Phil Wadler (member of the XSL working group)

PS.  If anyone has any suggestions about *small and reasonable* changes
to the XSL working draft that would be an improvement, please let me know.

XSL working draft: http://www.w3.org/TR/WD-xsl
XSL W3C home page: http://www.w3.org/Style/XSL/

Bell Labs, Lucent Technologies  http://www.cs.bell-labs.com/~wadler
600 Mountain Ave, room 2T-402   office: +1 908 582 4004
Murray Hill, NJ 07974-0636 fax: +1 908 582 5857
USA   home: +1 908 626 9252

Re: MonadZero (concluded)

1998-11-06 Thread Philip Wadler

class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b

  fail :: String -> m a
  fail s = error s

IO.fail becomes IO.ioError

Looks good.

class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a

Why is this here?  It doesn't need to be in the prelude.  Just
leave it for the user to define (and then the user may pick
better names, like Ringad, zero, and +).  -- P
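
For context, the role of fail in Monad is that a refutable pattern in
do-notation desugars to a call to it. A sketch in the list monad, where
fail yields [] (in today's GHC, fail has since moved to the separate
MonadFail class, but the desugaring is the same):

```haskell
-- Each pair that fails to match (x, 0) invokes fail, which for
-- the list monad is [], silently discarding that element.
firsts :: [(Int, Int)] -> [Int]
firsts ps = do
  (x, 0) <- ps
  return x
```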

mfail -> fail

1998-11-05 Thread Philip Wadler

Here's an even better idea: replace mfail with fail.
It is, after all, the fail of the IO monad!

  Option 4'': Monad ((>>=), return, fail)

-- P

Re: mfail -> fail

1998-11-05 Thread Philip Wadler

Ralph writes,

  Unfortunately not, fail of IO fame has type
  IOError -> IO a
  and not
  String -> m a
  as Monad's new member.
  Here's my suggestion:
  o  Monad ((>>=), return, fail)
  o  throw (thanks Frank) instead of fail for the IO operation
  o  raise for Haskell-2's exception mechanism (raise and handle)?

Oops, good point.  `throw' in IO works fine; or if we don't want
to steal such a general name, `ioerror' seems sensible.  -- P

Re: Default Default

1998-11-05 Thread Philip Wadler

John Peterson writes

  Int is really not a nice
  datatype and we shouldn't be allowing it to crop up unexpectedly.

Yes, but if we believed that, we should use Integer, not Int, for the default.

You are right, beginners may stub their toe on factorial.  On the other
hand, with the default set to Integer, beginners may stub their toe
on bad performance.  Anyone familiar with, say, C, will be unsurprised
by the need to switch Int to Integer to get factorial to work, but will
be very surprised by the need to switch Integer to Int to get decent
performance.  -- P
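
Both toes can be stubbed side by side (a sketch; a 64-bit Int is assumed
for the overflow remark):

```haskell
-- At Integer, factorial is exact; at Int it silently wraps once
-- the result exceeds the machine word (25! > 2^63 on 64-bit).
factInt :: Int -> Int
factInt n = product [1 .. n]

factInteger :: Integer -> Integer
factInteger n = product [1 .. n]
```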

Re: Haskell 98 progress

1998-11-04 Thread Philip Wadler

  - The simple-context restriction.  My difficulty here is that there
is just no good place to stop.  The 'type variable in head position'
option is attractive, but it's unimplemented, untested, and it's
neither the status quo nor a final destination.  On the other hand,
we *believe* it's technically ok and restores principal types.

It seems to me restoring principal types is worth it.  Didn't we vote
for this change at ICFP?  I thought the `slippery slope' argument you
give here was applied to something else, namely, allowing more general types
in instance declarations.  I argued for that change too, but you and some
others were unconvinced.  -- P

Re: MonadZero (concluded?)

1998-11-04 Thread Philip Wadler

There is no need to have both `mzero' and `mfail' in every monad.
Just have `mfail'.  Leave `zero' and `plus' to MonadPlus.  This should
make Eric partially happy.  It also means one can simply write

instance Monad [] where
   ...return, (>>=), (>>) as before...
   mfail s = []

rather than

instance Monad [] where
   ...return, (>>=), (>>) as before...
   mfail s = mzero
   mzero = []

The names `mzero' and `mfail' are horrible.  I like Ralph's suggestion
to change `fail' to `raise' in the IO monad, and use `fail' for
`mfail'.  If that doesn't work, try something else, but please
pick names that have a simple meaning in English (as with `return')
not monsters like `mzero' and `mfail'.  -- P
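The shape being proposed can be sketched with a hypothetical standalone class (`MyMonad`, `myFail`, etc. are illustrative names chosen here so the sketch loads without clashing with today's Prelude, where failure has since moved to MonadFail):

```haskell
-- Hypothetical rendering of the proposal: one failure method with a
-- default, so an instance need override only a single line.
class MyMonad m where
  myReturn :: a -> m a
  myBind   :: m a -> (a -> m b) -> m b
  myFail   :: String -> m a
  myFail s = error s          -- default: halt, reporting the message

instance MyMonad [] where
  myReturn x = [x]
  myBind m k = concatMap k m
  myFail _   = []             -- the single extra line
</imports>
```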

Re: MonadZero

1998-11-03 Thread Philip Wadler

Simon says,

  Here are the two proposals I suggested in
   1.Fix up the current version.
   use MonadZero for do expressions with *irrefutable* patterns
   (instead of *unfailable* patterns as now)
   2.Nuke MonadZero altogether.
   add mfail :: m a  to Monad instead

Sorry, I don't understand option 2, can you please explain?

Simon also says,

  But (1) really sticks in my craw.  How can we explain this:
  f :: Monad m => m (a,b) -> m a
  f m1 = do { x <- m1; return (fst x) }
  g :: MonadZero m => m (a,b) -> m a
  g m1 = do { (a,b) <- m1; return a }
  h :: Monad m => m (a,b) -> m a
  h m1 = do { ~(a,b) <- m1; return a }
  ... the type differences between g and f,h are really hard to justify.

Yes.  But these subtle differences exist independent of MonadZero.

f x  =  [fst x]
g (a,b)  =  [a]
h ~(a,b) =  [a]

Here it turns out f and h are equivalent and g differs, just as above.
All that differs with MonadZero is that the type is affected, as well
as the semantics.  Eric and others point out that this may be a good
thing: better to find out about the difference at compile-time than
at run-time.
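The three list versions can be checked directly; the difference only shows up when the argument is undefined, since h's lazy pattern delays the match (a sketch, using the definitions above):

```haskell
-- f and h never inspect the pair before producing a list cell, so
-- both yield a one-element list even for an undefined argument;
-- g's match is strict, so g undefined diverges.
f :: (a, b) -> [a]
f x = [fst x]

g :: (a, b) -> [a]
g (a, b) = [a]

h :: (a, b) -> [a]
h ~(a, b) = [a]
```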

I want to make a different plea: keep the language design consistent!
Yes, the difference between f, g, h is a wart, but let's have one wart
repeated, rather than two different warts.

-- P

Re: MonadZero

1998-11-03 Thread Philip Wadler

Simon says, 

   Sorry, I don't understand option 2, can you please explain?

* Eliminate MonadZero
* Add 'mfail :: m a' to Monad, with a suitable default decl
* Every do expression has a type in Monad

I must be dense this morning, as I'm still in the dark.  What is the
intended meaning of `mfail'?  If `mfail' is `mzero', why change the
name?  What is the suitable default declaration?  What, if anything,
does `mfail' have to do with `do' expressions?  -- P

Re: MonadZero

1998-11-03 Thread Philip Wadler

Thanks for the further explanation.

  On reflection there probably shouldn't be a default declaration for mzero.
  Any compiler for Haskell must do *something* if a method is
  called for which there is neither an explicit declaration in 
  the instance, nor a default method.  ...  Leaving out the default
  method would let a compiler halt execution reporting

"Pattern match failure in `do' expression at line 39 of Foo.hs"

  which is what we want.

Well, you can't always give such a message, because the monad might be
unresolved at the point the `do' appears (as in all your examples
where MonadZero had to appear in the scope).  And your message looks
odd, since the problem is that mzero is undefined.  But these are
implementation issues.

I see the attraction of adding `mzero' to Monad.  The chief
disadvantage is that it makes the class hierarchy less expressive.  Is
there anyone out there who values the separation of Monad and
MonadZero for purposes other than `do' notation?  If not, then I
don't oppose this option.

I like Ralph's suggestion to use `fail' instead of `mzero', and
to change `fail' in IO to `raise', if that doesn't break too many
existing programs.

-- P

Re: Haskell 98

1998-10-26 Thread Philip Wadler

 Consider the function
   t :: T a => T a -> T a
 I think that it's far from clear what each of the T's mean!
 Worse, in Haskell 2 we'll also have
   t :: T T => T a -> T a
 In (T T) one is a class and the other is a type constructor.

Let's leave the language as it is: class names and type names must be
distinct.  Rationale: the above examples show that a change might have
questionable consequences.  Recall that the change requires adding the
label `class' to signatures, but not adding the label `type'; it feels
more like a convenient hack than a well-rounded design.  Rather than
make a questionable change now, let's leave it for Haskell 2 and get it
right there.  -- P

Re: Functional DSP

1998-10-24 Thread Philip Wadler

I was planning to try similar things you have mentioned
on your page, such as interfacing to "the fastest FFT in
the West" (FFTW) via GreenCard, ...

By the way, FFTW, although written in C, was generated by a functional
program written in Caml.  You can find more details on the web page

Functional Programming in the Real World

-- P

Int vs. Integer

1998-10-07 Thread Philip Wadler

Hi Simon, This really seems to have touched off a firestorm!  I think
the meta-argument made by Simon T (and by you) is sensible, so I don't
object to maintaining the status quo.  But you implied that changing
the libraries led to `a ball of spaghetti', and I don't see that, can
you please explain?  I'm not arguing to change the decision, I just
want to know what problems (if any) you spotted.  Cheers, -- P

Re: Int vs Integer

1998-10-06 Thread Philip Wadler

Simon proposes not to change the prelude to support Integer,
giving this argument:

  The more this topic gets discussed the more I think that it's like
  other Prelude things: you pull on one thing and the plate of
  spaghetti ends up in your lap.

I'm confused.  I thought at the ICFP Haskell meeting we agreed
on the following types:

length :: Integral i => [a] -> i
take, drop :: Integral i => i -> [a] -> [a]
showsPrec :: (Show a, Integral i) => i -> a -> ShowS

and so on.  These have the advantage that they work with either Int or
Integer.  If Haskell implicitly imposes `default (Integer,Double)'
then the old efficiency level can be recalled by simply adding the
line `default (Int,Float)' to your program.  The only problem I recall
was what types to give to member of class Floating, and that looked so
odd that it looked to need changing regardless of what we did with Int
and Integer.
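The effect of the implicit default can be sketched in one module (a hedged illustration; `fact25` is a name invented here):

```haskell
-- With Integer first in the default list, an ambiguous numeric type
-- resolves to Integer, so the result is exact rather than wrapped.
default (Integer, Double)

fact25 :: String
fact25 = show (product [1..25])  -- the literals default to Integer
```

Swapping in `default (Int, Float)` recovers the old behaviour, exactly as the proposal describes.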

Simon argues that it is not so bad to retain Int over Integer, saying

  I don't think Chris's "plain ordinary punters" will notice.  They
  don't write programs with lists longer than 2^31.

I think this argument is misleading.  The key to the above proposal is
that it makes it easy to write programs that use either Int or Integer
uniformly.  With Simon's proposal, if you wish to use Integer then you
will need to introduce coercions from Int to Integer.  This makes life
hard for any punter who wishes to use Integer as a default, or for any
punter who uses Int as a default and gets overflow (admittedly, the
overflow is likely to be on something other than a list length, but so

I'm not against adopting a conservative design if there is a problem,
but then Simon ought to explain what the problem is.

Cheers,  -- P

Re: Int vs Integer

1998-09-15 Thread Philip Wadler

Simon PJ writes:

 I think we should
   - generalise length, splitAt, drop, take, replicate
   - generalise toEnum, fromEnum
   - leave (!!) alone (arrays must fit in an address space)
 I don't know what to do about RealFloat. Sigh

If the goal is to make life easy for beginners (and to make it easy to
avoid overflow errors), shouldn't we generalise *all* uses of Int, so
that a beginner can use Integer uniformly everywhere, and need never
know about Int?

Even given this principle, it's not clear what to do about RealFloat,
which has both Int and Integer: generalize both to the same type
variable, generalize each separately, generalize Int only, or replace
Int by Integer? -- my guess is the first is correct.

Uniformly generalizing Int strikes me as the right thing to do,
although I hope before we commit to it someone will test it in advance,
say on a subset of the Nofib suite.  -- P
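Today's Data.List provides exactly such Integral-polymorphic variants, which gives a feel for the proposal (a sketch; `howMany` and `firstFew` are names invented here):

```haskell
import Data.List (genericLength, genericTake)

-- length and take at Integer instead of Int:
howMany :: [a] -> Integer
howMany = genericLength

firstFew :: Integer -> [a] -> [a]
firstFew = genericTake
```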

Re: type errors

1998-06-30 Thread Philip Wadler

  You're right.  The restriction is excessive.  Thanks for pointing
  this out.  Probably we should only require that at least one
  of the class variables is constrained.

Why even require this?  (All x) => x -> x uses the class `All'
which restricts its argument not one whit.  -- P
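The point can be made concrete with a hypothetical empty class (`All` and `stillId` are illustrative names; the class has no methods and so constrains nothing):

```haskell
-- A class with no methods restricts its argument not one whit,
-- yet it is perfectly legal in a context.
class All a
instance All Int
instance All Bool

stillId :: All a => a -> a
stillId x = x
```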

Re: Teaching Haskell

1998-06-24 Thread Philip Wadler

  Speaking of this, when is someone going to write an fp book that teaches
  all the interesting stuff?  Monads, advanced use of classes, advanced types
  in general, exceptions, etc... [here's an idea: after comprehending the
  book, the student should be able to follow the discussion on this mailing
  list without difficulty]
  In my opinion, such a book should assume the reader has already been
  through Bird and Wadler or whatever, is familiar with the lambda calculus,
  and has written substantial programs.  Please, no more "introduction to fp"

How about second editions of `Intro to FP'?  Richard has revised Bird
and Wadler, which is now just Bird.  It includes type classes and
monads, and should be in your bookstore round about now.  -- P

Bell Labs, Lucent Technologies  http://www.cs.bell-labs.com/~wadler
600 Mountain Ave, room 2T-402   office: +1 908 582 4004
Murray Hill, NJ 07974-0636 fax: +1 908 582 5857
USA   home: +1 908 626 9252
   ``There may, indeed, be other applications of the system
than its use as a logic.''  -- A. Church, 1932

Re: Wadler's prettier printer

1998-05-14 Thread Philip Wadler

Rossberg writes,

  thinking about whether the pretty printer proposed by Wadler requires
  some changes to be efficient in a strict language, I stumbled over the
  the last case defining `flatten':
  flatten (x :<|> y) = flatten x
  I wonder why it is necessary here to recurse on x. The only point where a
  doc (x :<|> y) is constructed is in the function `group':
  group z   = flatten z :<|> z
  So the x above is always flat already. Wouldn't the equation
  flatten (x :<|> y) = x
  suffice? Doing recursion here seems to be unnecessary overhead. In
  particular, it prevents structure sharing between alternatives when
  grouping, because flatten rebuilds the whole doc tree (which might be
  more of a problem without laziness).

  Am I missing something?

You're not the one who missed something.  I didn't spot this
optimization because I had in mind the case where the user might use
:<|> directly.  If we disallow this, your tricky optimisation is quite
sensible.  As it happens, it doesn't seem to improve time or space by
much, at least for the lazy version.  I include modified code below.

It's wonderful to have clever people reading and commenting on
my code.  Thanks!  -- P

Bell Labs, Lucent Technologies  http://www.cs.bell-labs.com/~wadler
600 Mountain Ave, room 2T-402   office: +1 908 582 4004
Murray Hill, NJ 07974-0636 fax: +1 908 582 5857
USA   home: +1 908 626 9252

-- Pretty printer based on grouping
-- as in March 1998 version of `A prettier printer'
-- Philip Wadler, March 1998

-- Modified version based on suggestion of Andreas Rossberg
-- Philip Wadler, May 1998

-- Two optimized lines, marked below, exploit the invariant that
-- the left hand argument of :| must be result of applying flatten.

infixr 5 :<|>
infixr 6 :<>
infixr 6 <>
data  DOC=  NIL
 |  DOC :<> DOC
 |  NEST Int DOC
 |  TEXT String
 |  LINE
 |  DOC :<|> DOC
data  Doc=  Nil
 |  String `Text` Doc
 |  Int `Line` Doc
nil  =  NIL
x <> y   =  x :<> y
nest i x =  NEST i x
text s   =  TEXT s
line =  LINE

group (x :<|> y) =  x :<|> y-- *** remove line for unoptimized
group x  =  flatten x :<|> x
flatten NIL  =  NIL
flatten (x :<> y)=  flatten x :<> flatten y
flatten (NEST i x)   =  NEST i (flatten x)
flatten (TEXT s) =  TEXT s
flatten LINE =  TEXT " "
flatten (x :<|> y)   =  x  -- *** replace by (flatten x) for unoptimized
layout Nil   =  ""
layout (s `Text` x)  =  s ++ layout x
layout (i `Line` x)  =  '\n' : copy i ' ' ++ layout x
copy i x =  [ x | _ <- [1..i] ]

best w k x   =  be w k [(0,x)]
be w k []=  Nil
be w k ((i,NIL):z)   =  be w k z
be w k ((i,x :<> y):z)   =  be w k ((i,x):(i,y):z)
be w k ((i,NEST j x):z)  =  be w k ((i+j,x):z)
be w k ((i,TEXT s):z)=  s `Text` be w (k+length s) z
be w k ((i,LINE):z)  =  i `Line` be w i z
be w k ((i,x :<|> y):z)  =  better w k (be w k ((i,x):z)) (be w k ((i,y):z))
better w k x y   =  if fits (w-k) x then x else y
fits w x | w < 0 =  False
fits w Nil   =  True
fits w (s `Text` x)  =  fits (w - length s) x
fits w (i `Line` x)  =  True
pretty w x   =  layout (best w 0 x)

-- Utilities

space   =  text " "
x </> y =  x <> line <> y
x <+> y =  x <> space <> y
par x   =  text "(" <> x <> text ")"

stack   =  foldr1 (</>)
strip   =  foldr1 (<+>)

-- Testing

data  Term   =  Term String [Term]

prTerm (Term n [])  =  text n
prTerm (Term n ts)  =  par (group (nest 2 (stack (text n : map prTerm ts))))

szTerm (Term n ts)  =  length n + length ts + sum (map szTerm ts)

mkTerm 0 i  =  Term (mkName i) []
mkTerm (d+1) i  =  Term (mkName i) (map (mkTerm d) (randoms i))

mkName i=  [ 'x' | j <- [1..i] ]

randoms i   
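As a quick check of the combinators, here is a compact, self-contained subset (restated with the angle-bracket operators the archive stripped, and simplified; not the full code above) together with a small example:

```haskell
-- Minimal core of the March 1998 pretty printer, enough to run `pretty`.
infixr 6 :<>
infixr 5 :<|>

data DOC = NIL | DOC :<> DOC | NEST Int DOC | TEXT String | LINE
         | DOC :<|> DOC
data Doc = Nil | Text String Doc | Line Int Doc

group :: DOC -> DOC
group x = flatten x :<|> x

flatten :: DOC -> DOC
flatten NIL        = NIL
flatten (x :<> y)  = flatten x :<> flatten y
flatten (NEST i x) = NEST i (flatten x)
flatten (TEXT s)   = TEXT s
flatten LINE       = TEXT " "
flatten (x :<|> _) = flatten x

-- Pick, at each choice point, the first alternative that fits.
best :: Int -> Int -> DOC -> Doc
best w0 k0 d = be w0 k0 [(0, d)]
  where
    be _ _ [] = Nil
    be w k ((_, NIL) : z)      = be w k z
    be w k ((i, x :<> y) : z)  = be w k ((i, x) : (i, y) : z)
    be w k ((i, NEST j x) : z) = be w k ((i + j, x) : z)
    be w k ((_, TEXT s) : z)   = Text s (be w (k + length s) z)
    be w _ ((i, LINE) : z)     = Line i (be w i z)
    be w k ((i, x :<|> y) : z) =
      better w k (be w k ((i, x) : z)) (be w k ((i, y) : z))
    better w k x y = if fits (w - k) x then x else y
    fits w _ | w < 0  = False
    fits _ Nil        = True
    fits w (Text s x) = fits (w - length s) x
    fits _ (Line _ _) = True

layout :: Doc -> String
layout Nil        = ""
layout (Text s x) = s ++ layout x
layout (Line i x) = '\n' : replicate i ' ' ++ layout x

pretty :: Int -> DOC -> String
pretty w x = layout (best w 0 x)

-- Example: a group fits on one line when there is room, else breaks.
doc :: DOC
doc = group (TEXT "hello" :<> LINE :<> TEXT "world")
```

With this, `pretty 80 doc` renders on one line and `pretty 5 doc` breaks at the LINE.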

Re: Design Methodologies

1998-02-02 Thread Philip Wadler

  What I am looking for are the formal design methods and tools
  supporting those methods that lead one through the task
  of decomposing a problem (possibly specified in a requirements
  document) into a design that can be used as the starting point
  for coding with a functional language. The documentation
  should not only cover the engineering task of (using generic terms)
  top-level-design, intermediate-design, detailed-design,
  unit-codingtest, testintegration,
  requirements-testingsell-off, etc, but also how one
  manages such a project.

Good question.  Both Erlang and Natural Expert have been widely used
in industry, and both have training courses that cover at least some
of what you want.  For a quick intro to these, see


and follow the pointers from there.  (This paper also contains just
what you don't want, a list of serious applications of FP.  :-)

Your quote represents what I suspect is a widely held view.

  "... functional approaches do not permit you to create
  an abstract notion of system state that can be modeled.
  Functional languages are great for people to think about
  problems (especially recursion), but they are no good for 
  large software systems.  It's not that people haven't looked
  at them in this context; they don't work."

This claim is belied by the success of Erlang and Natural Expert.
Also, there are scores of functional programs containing over 50K
lines.  While not large in software engineering terms, these go well
beyond merely being used to `think about problems (especially
recursion)'.  If the quote is not confidential, I'd appreciate if
you could give a source for it.

Yours,  -- P

Bell Laboratories   http://www.cs.bell-labs.com/~wadler
Lucent Technologies office: +1 908 582 4004
600 Mountain Ave, room 2T-402  fax: +1 908 582 5857
Murray Hill, NJ 07974-0636  USA   home: +1 908 626 9252

Dave MacQueen: POPL call for participation

1997-11-19 Thread Philip Wadler

Phil,  Could you send the following POPL 98 Call for Participation message to
the Haskell mailing list for me?  Thanks,  Dave


Twenty-fifth Annual ACM Symposium on Principles of Programming Languages
   January 19 - 21, 1998, San Diego, California


POPL '98 is the 25th in the series that started in Boston in 1973.  In
honor of this anniversary, we will have special invited talks from
John Reynolds (CMU), Gerard Berry (Ecole des Mines/INRIA), and
Mark Wegman (IBM), as well as a regular invited talk by Peter Lee (CMU).

The Fifth International Workshop on Foundations of Object Oriented
Languages (FOOL 5) will be held in conjunction with POPL '98 on
January 17 - 18, 1998.


Important Dates

December 17, 1997 -  expiration of special POPL hotel rates
December 31, 1997 -  deadline for advance registration discount

Important URLs


   General POPL '98 information, including the advance program,
   registration and hotel.


   Information about FOOL 5.


   Online conference registration form.


   San Diego Marriott Mission Valley Hotel web page.  Information
   on travel and facilities.




Invited Talk: 9:00-10:00
Where Theory and Practice Meet: POPL Past and Future
John Reynolds, Carnegie Mellon University.

Session 1: 10:30-12:30.
Chaired by Neil Jones, University of Copenhagen.

Higher-Order UnCurrying
John Hannan and Patrick Hicks

Alias Analysis of Executable Code
Saumya Debray, Robert Muth and Matthew Weippert

Escape Analysis: Correctness Proof, Implementation and Experimental Results
Bruno Blanchet

Data Flow Analysis is Model Checking of Abstract Interpretations
David A. Schmidt

Session 2: 14:00-16:00.
Chaired by Atsushi Ohori, Kyoto University.

Bridging the Gulf: a Common Intermediate Language for ML and Haskell
Simon Peyton Jones, John Launchbury, Mark Shields and Andrew Tolmach

Correctness of Monadic State: An Imperative Call-by-Need Calculus
Zena M. Ariola and Amr Sabry

Functional Representation of Data Structures with a Hole
Yasuhiko Minamide

From System F to Typed Assembly Language
Greg Morrisett, David Walker, Karl Crary, and Neal Glew

Session 3: 16:30-18:30.
Chaired by Jeanne Ferrante, University of California, San Diego.

Maximal Static Expansion
Denis Barthou, Albert Cohen and Jean-François Collard

Array SSA Form and its Use in Parallelization
Kathleen Knobe and Vivek Sarkar

Putting Pointer Analysis to Work
Rakesh Ghiya and Laurie J. Hendren

Edge Profiling versus Path Profiling: The Showdown
Thomas Ball, Peter Mataga and Mooly Sagiv


Invited Talk: 9:00-10:00.
Global Trends in Flow Analysis
Mark Wegman, IBM.

Session 4: 10:30-12:30.
Chaired by Martin Odersky, University of South Australia.

A Type System for Java Bytecode Subroutines
Raymie Stata and Martín Abadi

Java-light is Type-Safe -- Definitely
Tobias Nipkow and David von Oheimb

Classes and Mixins
Matthew Flatt, Shriram Krishnamurthi and Matthias Felleisen

Manufacturing Cheap, Resilient, and Stealthy Opaque Constructs
Christian Collberg, Clark Thomborson and Douglas Low

Session 5: 14:00-16:00.
Chaired by Nevin Heintze, Bell Laboratories, Lucent Technologies.

From Polyvariant Flow Information to Intersection and Union Types
Jens Palsberg and Christina Pavlopoulou

Inference of Polymorphic and Conditional Strictness Properties
Thomas P. Jensen

Fast Interprocedural Class Analysis
Greg DeFouw, David Grove and Craig Chambers

Path-Sensitive Value-Flow Analysis
Rastislav Bodík and Sadun Anik

Session 6: 16:30-18:30.
Chaired by Xavier Leroy, INRIA, Rocquencourt.

Local Type Inference
Benjamin C. Pierce and David N. Turner

Static Typing for Dynamic Messages
Susumu Nishimura

Second-order Unification and Type Inference for Church-style Polymorphism
Aleksy Schubert

Dynamic Typing as Staged Type Inference 
Mark Shields, Tim Sheard and Simon Peyton Jones

Invited Talk: 20:30-21:30.
From Principles to Programming Languages
Gérard Berry, École des Mines and INRIA, Sophia-Antipolis.


Re: Damas-Milner, Hindley-Milner, ...

1997-10-15 Thread Philip Wadler

  Briefly, what is the difference between Damas-Milner and Hindley-Milner

None.  They are different names for the same thing.

The actual algorithm was discovered a number of times.  Hindley
wrote a history of it, which I don't have to hand.  He notes that
it had already been implemented on a computer in the 1950s!
I believe he attributes it to Curry.  It was also rediscovered by
a number of others, including Jim Morris, then a PhD student at MIT.

Hindley wrote down a concise description, based on logic.  Milner,
independently, rediscovered the algorithm, apparently not based on
logic.  Later, Milner and his PhD student Damas reformulated the
algorithm in a much more accessible way, with the connection to logic
made explicit.

Milner's key contribution -- and very important it was, too -- was to
add the syntactic form `let x=e0 in e1'.  Without this, the algorithm
infers principal types, but does not seriously exploit polymorphism.

-- P

Re: Monads and Linear Logic

1997-09-03 Thread Philip Wadler

 In general laymen's terms, what are the performance and expressiveness
 issues in comparing monads with linear objects?

You may want to glance at the latest version of `How to Declare an
Imperative' (Computing Surveys, to appear), which explicitly makes
this comparison.  It's available via my home page.  -- P

Bell Laboratories http://cm.bell-labs.com/cm/cs/who/wadler/
Lucent Technologies office: +1 908 582 4004
600 Mountain Ave, Room 2T-304  fax: +1 908 582 5857
Murray Hill, NJ 07974-0636  USA   home: +1 908 626 9252

Re: Evaluating Haskell

1997-08-27 Thread Philip Wadler

 I have not followed the Pizza mailing list (how do you get Pizza in the

You can find out about Pizza (and join the mailing list) at any of the
web sites.  If you are interested in functional programming and Java,
please have a gander!  -- P

Pizza Sites

More information on Pizza is available at the following web sites
(which mirror each other):

  University of South Australia
  Bell Labs, Lucent Technologies
  Loyola University, Chicago
  Tulane University, New Orleans
  University of Karlsruhe

Bell Laboratories http://cm.bell-labs.com/cm/cs/who/wadler/
Lucent Technologies office: +1 908 582 4004
700 Mountain Ave, Room 2T-304  fax: +1 908 582 5857
Murray Hill, NJ 07974-0636  USA   home: +1 908 626 9252


GC (was: Stumped without mutation... )

1997-08-12 Thread Philip Wadler

   I think that Java might be slow, just because of this automatic garbage

I can't let this chestnut go unchallenged.  Let's not bandy about
claims without evidence.  GC is perfectly fine for building into phone
switches (via Erlang) so it can't be all that slow. -- P

Re: pattern guards + monads

1997-05-15 Thread Philip Wadler

 Now, for monads, Phil Wadler writes a law:
m >>= \x -> k x ++ h x  =  m >>= k ++ n >>= k
 in which 'h' appears on the lhs but not the rhs, and 'n' on the rhs but
 not the lhs.  ... perhaps the equation should read as follows?
 m >>= \x -> k x ++ h x  =  m >>= k ++ m >>= h

Yes, of course it should.  Thanks for the correction.  -- P

Re: Monads, Functors and typeclasses

1997-05-14 Thread Philip Wadler

 Maybe we should do it as follows:
 At the moment you make an instance of a class that has default definitions
 for one of its superclasses, and there is no corresponding instance for
 that class, we implicitly insert the specified default instance. If
 there are more instances to choose from at that point, we report an error. 

If I read this properly, the intent is to make the meaning of the
program depend on the order in which it is compiled.  A good design
will make the meaning of a program depend only on the program text.
-- P

Journal of Functional Programming

1997-05-02 Thread Philip Wadler

Below are the contents of recent issues of the Journal of Functional
Programming.  Contents for all issues, and abstracts for the last
three years, can be found at:


The site also contains information on how to submit, and how you or
your library can subscribe.  -- Philip Wadler, Editor-in-Chief, JFP

JFP 6.4 July 1996

Representing demand by partial projections
John Launchbury and Gebreselassie Baraki

New dimensions in heap profiling
Colin Runciman and Nicklas Rojemo

Benchmarking implementations of functional languages with 'Pseudoknot',
a float-intensive benchmark
Pieter Hartel et al

Functional Pearl: The third homomorphism theorem
Jeremy Gibbons

JFP 6.5 September 1996

A syntactic theory of type generativity and sharing
Xavier Leroy

LambdaV, a calculus of explicit substitutions which preserves strong
normalisation
Zine-el-Abidine Benaissa, Daniel Briaud, Pierre Lescanne and Jocelyne
Rouyer-Degli

Pi-Red+: An interactive compiling graph reduction system for an applied
lambda calculus
Dietmar Gartner and Werner E. Kluge

A simple proof of the undecidability of inhabitation in Lambda-P
Marc Bezem and Jan Springintveld

JFP 6.6 November 1996

The Bologna Optimal Higher-order machine
Andrea Asperti, Cecilia Giovannetti and Andrea Naletto

A positive supercompiler
M. H. Sorensen, R. Gluck and N. D. Jones

Optimal purely functional priority queues
Gerth Stolting Brodal and Chris Okasaki

JFP 7.1 January 1997

A foundation for actor computation
Gul A. Agha, Ian A. Mason, Scott F. Smith and Carolyn L. Talcott

First-order functional languages and intensional logic
P. Rondogiannis and W.W. Wadge

On the effectiveness of functional language features: NAS benchmark FT
J. Hammes, S. Sur and W. Bohm

Call for papers: Special issue on Theorem Provers and Functional
Programming

Bell Laboratories http://cm.bell-labs.com/cm/cs/who/wadler/
Lucent Technologies office: +1 908 582 4004
700 Mountain Ave, Room 2T-304  fax: +1 908 582 5857
Murray Hill, NJ 07974-0636  USA   home: +1 908 626 9252

Re: pattern guards and guarded patterns

1997-04-30 Thread Philip Wadler

Tommy Thorn writes:

 My only worry is that Haskell (if not already) is turning into the C++
 of functional languages, ie. feature upon feature.  Haskell is already
 quite a mouthful to learn.

This worries me too.  Not least is the problem that <- is not a
particularly natural syntax for this feature, as others have noted.
(Though I admit the connection with qualifiers is tempting.)

Simon PJ writes:

 Re (b) we write
   f (x:xs) [] = e1
   f (x:xs) (y:ys) = e2
 when you might think we should find a syntax to allow
   f (x:xs) [] = e1
(y:ys) = e2
 But we write out the common prefix and expect the compiler to spot the
 common pattern.  Similarly I think we should rely on the compiler to spot
 common prefixes of guards.
   f x y | (x:xs) <- xs,
    [] <- ys
 = e1
 | (x:xs) <- xs,
   (y:ys) <- ys
 = e2
 The only extra difficulty is that the RHS of the pattern guard can be an

I have to say, this looks pretty ugly to me, and I'm not sure I would
be happy trusting it to an optimiser.

It is arguably a mistake that Haskell does not allow some way of
capturing commonality in pattern matching.  As I recall, we very
nearly added the syntax

f (x:xs) [] = e1
'  ' (y:ys) = e2

and refrained only because we wanted a conservative design.  Simon's
suggestion strikes me as far from conservative, so perhaps it is worth
revisiting more general issues of pattern matching.

-- P
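The feature under discussion did eventually enter the language as Haskell 2010 pattern guards; a sketch, with names invented here (`lookupBoth`):

```haskell
-- Each comma-separated guard either binds a pattern from an
-- expression or tests a Boolean; the alternative fails (and the
-- next one is tried) if any pattern does not match.
lookupBoth :: [(String, Int)] -> String -> String -> Int
lookupBoth env v1 v2
  | Just x <- lookup v1 env
  , Just y <- lookup v2 env
  = x + y
  | otherwise = 0
```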

Re: global type inference

1997-02-25 Thread Philip Wadler

 I think the report has it about right.
 * A conforming implementation of Haskell 1.4 must support mutually recursive
   modules.  That is, a collection of individually legal mutually recursive
   modules is a legal Haskell program.
 * The Report recognises that implementations available in the foreseeable
   future are likely to require the programmer to supply extra type
   information to support separate compilation of mutually recursive modules.
   For example, the implementation may require exported functions to be
   equipped with type signatures.

Why muddle implementation with language design?  Pick a design that
we know everyone can implement -- e.g., exported functions must have
type declarations -- and stick to that.  When the state of implementations
improve, the specification for Haskell 1.5 can change accordingly.  -- P

Re: global type inference

1997-02-25 Thread Philip Wadler

 This isn't muddling implementation with language design.  The language design
 says mutual recursion is OK. A particular implementation supporting separate
 compilation will typically require a variety of "help", such as a Makefile
 with accurate dependencies.  Requiring type signatures, or interface files,
 or whatever is just another implementation specific thing to support
 separate compilation.

The language design should at least say what is a legal Haskell file,
and what is not.  As much as possible, it should lay out exactly what
I need to do to write a program that will work on any Haskell compiler
that conforms to the definition.  It seems to me that the language
definition should go further than it does on this point, though
perhaps reasonable people may disagree.  -- P

Re: Haskell 1.3

1996-03-08 Thread Philip Wadler

 It looks ugly, but we could say that a data declaration does not 
 have to have any constructors:
   data Empty =
-- Lennart

I agree that the best way to fix this is to have a form of data
declaration with no constructors, but I'm not keen on the syntax you
propose.  How about if we allow the rhs of a data declaration to be
just `empty', where `empty' is a keyword?

data Empty = empty

-- P

Haskell 1.3

1996-03-07 Thread Philip Wadler

Congratulations to all involved on Haskell 1.3!  I especially like the
introduction of qualified names and the attendant simplifications.
Here are some small suggestions for further improvement.

Suggestion: the introduction should draw attention to the fact that
interface files are no longer part of the language.  Such a wondrous
improvement should not go unremarked!

ISO Character Set
Suggestion:  Add a one-page appendix, giving the mapping between
characters and character codes.

Fields and records
Suggestion: Use = to bind fields in a record, rather than <-.
I concur with Thomas Hallgren's argument that <- should be reserved for
comprehensions and for `do'.  SML has already popularised the = syntax.

Suggestion: Use the SML syntax, `#field' to denote the function that
extracts a field.  Then there is no possibility of accidentally
shadowing a field name with a local variable.  Just as it is a great
aid to the readability of Haskell for constructors to be lexically
distinguished from functions, I predict it will also be a great aid for
field extractors to be lexically distinguished from functions.

(Alternative suggestion: Make field names lexically like constructor
names rather than like variable names.  This again makes shadowing
impossible, and still distinguishes fields from functions, though now
field extractors and constructors would look alike.)

The empty type
Suggestion: Include among the basic types of Haskell a type `Empty'
that contains no value except bottom.

It was a dreadful oversight to omit the empty type from Haskell,
though it took me a long time to recognise this.  One day, I bumped
into the following example.  I needed the familiar type

data  Tree a  =  Null | Leaf !a | Branch (Tree a) (Tree a)

instantiated to the unfamiliar case `Tree Empty', which has `Null' and
`Branch' as the only possible constructors.

One can simulate the empty type by declaring

data  Empty = Impossible

and then vowing never to use the constructor `Impossible'.  But by
including `Empty' in the language, we support a useful idiom and
(perhaps more importantly) educate our users about the possibility of
an algebraic type with no constructors.

It would be folly to allow only non-empty lists.  So why do we allow
only non-empty algebraic types?
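GHC later made exactly this possible: a data declaration with no constructors (standard since Haskell 2010). A sketch of the `Tree Empty` example from the text:

```haskell
{-# LANGUAGE EmptyDataDecls #-}  -- implied by Haskell 2010 and later

-- An algebraic type with no constructors, as the text requests.
data Empty

data Tree a = Null | Leaf a | Branch (Tree a) (Tree a)

-- At type `Tree Empty', Leaf is unusable: Null and Branch are the
-- only ways to build a (non-bottom) value.
t :: Tree Empty
t = Branch Null (Branch Null Null)

nulls :: Tree a -> Int
nulls Null         = 1
nulls (Leaf _)     = 0
nulls (Branch l r) = nulls l + nulls r
```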

The infamous (n+1) patterns
Suggestion:  Retain (n+1) patterns.

If Haskell was a language for seasoned programmers only, I would
concede that the disadvantages of (n+1) patterns outweigh the
advantages.

But Haskell is also intended to be a language for teaching.
The idea of primitive recursion is powerful but subtle.  I believe
that the notation of (n+1) patterns is a great aid in helping students
to grasp this paradigm.  The paradigm is obscured when recursion over
naturals appears profoundly different than recursion over any other
datatype.

For instance, I believe students benefit greatly by first seeing

power x 0   =  1
power x (n+1)   =  x * power x n

and shortly thereafter seeing

product []  =  1
product (x:xs)  =  x * product xs

which has an identical structure.  By comparison, the definition

power x 0   =  1
power x n | n > 0   =  x * power x (n-1)

completely obscures the similarity between `power' and `product'.

As a case in point, I cannot see a way to rewrite the Bird and Wadler
text without (n+1) patterns.  This is profoundly disappointing,
because now that Haskell 1.3 is coming out, it seems like a perfect
time to do a new edition aimed at Haskell.  The best trick I know is
to define

data Natural = Zero | Succ Natural

but that doesn't work because one must teach recursion on naturals and
lists before one introduces algebraic data types.  Bird and Wadler
introduces recursion and induction at the same time, and that is one
of its most praised features; but to try to introduce recursion,
induction, and algebraic data types all three at the same time would
be fatal.

Now, perhaps (n+1) patterns introduce a horrible hole in the language
that has escaped me; if so, please point it out.  Or perhaps no one
else believes that teaching primitive recursion is important; if so,
please say.  Or perhaps you know a trick that will solve the problem
of how to rewrite Bird and Wadler without (n+1) patterns; if so,
please reveal it immediately!

Otherwise, I plead, reinstate (n+1) patterns.

Yours,  -- P

Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science    http://www.dcs.glasgow.ac.uk/~wadler
University of Glasgow  office: +44 141 330 4966
Glasgow G12 8QQ   fax: +44 141 330 4913
SCOTLAND home: +44 141 357 0782

Re: 1.3 PreludeList `By' functions' restricted type

1995-12-21 Thread Philip Wadler

Alistair,  I agree with Mike Gunter on this one.  -- P

 Mike Gunter writes:
The By functions in the 1.3 PreludeList (elemBy, deleteBy, etc.)
all take ``equality'' predicates with type a -> a -> Bool.  Most of
these functions (with no change in definition that I see in a quick
check) could take predicates with type a -> b -> Bool (with a
corresponding change in the type of the other arguments).  For
example, I find myself wanting to use
  elemBy :: (a -> b -> Bool) -> a -> [b] -> Bool
  elemBy eq x = any (eq x)
while the draft prelude has
  elemBy :: (a -> a -> Bool) -> a -> [a] -> Bool
  elemBy eq x = any (eq x)
The one downside that I can see (I've not thought about this long) of
such a generalization is a few problems (caught during type-checking)
when one forgets which predicate argument comes from the key and which
from the list.  What are the others?
 I thought briefly of doing this but decided against it for several
 reasons.  These are weak and somewhat interconnected - so they may
 well yield to a few minutes intelligent thought (sadly, I'm incapable
 of intelligent thought at the moment - I can't get to the coffee shop
 for the snow and Yale coffee is undrinkable).
 1) The less general signature makes it a simple generalisation of the
existing functions.  In particular, if Haskell allowed these type
synonyms, we could have:
  type Eq a  = a -> a -> Bool  -- illegal: can't use same name for
  type Ord a = a -> a -> Bool  --  type and class
  elem   :: Eq a => a -> [a] -> Bool
  elemBy :: Eq a -> a -> [a] -> Bool
 2) The semantics of these functions is somewhat subtle (ie tedious to 
informally state/remember) if the predicate is not an equivalence.
Allowing the more general signatures encourages use of non-equivalences.
 3) Where an equivalence is not being used (eg every time the predicate
has most general type "a -> b -> Bool"), a name like "elemBy" doesn't
really capture what is going on.
The obvious fix is of course to come up with better names - but what??
And, once we've provided the more general versions with different
names, should we throw in the elemBy, deleteBy, etc anyway for 
consistency with nubBy and friends?
 [Incidentally, the current mood of the Haskell committee seems to be
 to put these in libraries as part of an (often ignored) effort to slim
 Haskell down - or at least reduce its rate of growth.]
 Alastair Reid
 Yale Haskell Project
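As a footnote: the generalised signature Mike asks for typechecks with no change to the body.  A sketch (the name and heterogeneous use are illustrative, not the 1.3 prelude):

```haskell
-- The generalised signature: the key and the list elements may have
-- different types, so the "equality" predicate can be any relation.
elemBy :: (a -> b -> Bool) -> a -> [b] -> Bool
elemBy eq x = any (eq x)

main :: IO ()
main = do
  -- homogeneous use, as in the draft prelude
  print (elemBy (==) 3 [1, 2, 3 :: Int])
  -- heterogeneous use: look up a key against first components of pairs
  print (elemBy (\k (k', _) -> k == k') (2 :: Int) [(1, "one"), (2, "two")])
```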

Re: Haskell 1.3 (newtype)

1995-09-19 Thread wadler

Sebastian suggests using some syntax other than pattern
matching to express the isomorphism involved in a newtype.
I can't see any advantage in this.

Further, Simon PJ claims that if someone has written

data Age = Age Int
foo (Age n) = (n, Age (n+1))

that we want to be able to make a one-line change

newtype Age = Age Int

leaving all else the same: in particular, no need to add
twiddles, and no changes of the sort Sebastian suggests.
I strongly support this!  (No, Simon and I are not in collusion;
indeed, we hardly ever talk to each other!  :-)

Cheers,  -- P

Re: Haskell 1.3 (newtype)

1995-09-13 Thread wadler

Well, I'm glad to see I provoked some discussion!

Simon writes:

   Lennart writes:
   | So if we had
   |data Age = Age !Int
   |foo (Age n) = (n, Age (n+1))
   | it would translate to
   |foo (MakeAge n) = (n, seq MakeAge (n+1))
   | [makeAge is the "real" constructor of Age]
   Indeed, the (seq MakeAge (n+1)) isn't eval'd till the second
   component of the pair is.  But my point was rather that foo
   evaluates its argument (MakeAge n), and hence n, as part of its
   pattern matching.  Hence foo is strict in n.

Why should foo evaluate its argument?  It sounds to me like
Lennart is right, and I should not have let Simon lead me astray!

I think its vital that users know how to declare a new isomorphic
datatype; it is not vital that they understand strictness declarations.
Hence, I favor that

newtype Age = Age Int
data Age = Age !Int

be synonyms, but that both syntaxes exist.

This is assuming I have understood Lennart correctly, and that

foo (Age n) = (n, Age (n+1))
foo' a = (n, Age (n+1)) where (Age n) = a

are equivalent when Age is declared as a strict datatype. Unlike
Sebastian or Simon, I believe it would be a disaster if for a newtype
one had to distinguish these two definitions.

Cheers,  -- P

Re: Haskell 1.3 (lifted vs unlifted)

1995-09-12 Thread wadler

To the Haskell 1.3 committee,

Two choices in the design of Haskell are:
Should products be lifted?
Should functions be lifted?
Currently, the answer to the first is yes, and to the second is no.
This is ad hoc in the extreme, and I am severely embarrassed that I did
not recognise this more clearly at the time we first designed Haskell.

Dear committee, I urge you, don't repeat our earlier mistakes!  John
Hughes makes a compelling case for yes; and mathematical cleanliness
makes a compelling case for no.  I slightly lean toward yes. (John is a
persuasive individual!)  But unless someone presents a clear and clean
argument for answering the two questions differently, please answer
them consistently.

If both questions are answered yes, then there is a choice as to
whether or not to have a Data class.  Indeed, there are two choices:
Should polymorphic uses of seq be marked by class Data?
Should polymorphic uses of recursion be marked by class Rec?
John Launchbury and Ross Paterson have written a beautiful paper urging
yes on the latter point; ask them for a copy.  Here, I have a mild
preference to answer both questions no, as I think the extra
complication is not worthwhile.  But again, please answer them consistently.

Cheers,  -- P

Re: Haskell 1.3 (newtype)

1995-09-12 Thread wadler

The design of newtype appears to me incorrect.

The WWW page says that declaring

newtype Foo = Foo Int

is distinct from declaring

data Foo = Foo !Int

(where ! is a strictness annotation) because the former gives

case Foo undefined of Foo _ -> True  =  True

and the latter gives

case Foo undefined of Foo _ -> True  =  undefined.

Now, on the face of it, the former behaviour may seem preferable.  But
trying to write a denotational semantics is a good way to get at the
heart of the matter, and the only way I can see to give a denotational
semantics to the former is to make `newtype' define a LIFTED type, and
then to use irrefutable pattern matching.  This seems positively weird,
because the whole point of `newtype' is that it should be the SAME as
the underlying type.

By the way, with `newtype', what is the intended meaning of

case undefined of Foo _ -> True ?

I cannot tell from the summary on the WWW page.  Defining `newtype'
in terms of `datatype' and strictness avoids any ambiguity here.

Make newtype equivalent to a datatype with one strict constructor.
Smaller language, more equivalences, simpler semantics, simpler
implementation.  An all around win!
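(Haskell 98 eventually kept the behaviour argued against here, so the two declarations are observably different today.  A small demonstration, with names of my own choosing:)

```haskell
newtype N = N Int   -- constructor erased at run time
data    D = D !Int  -- strict field: D undefined is already bottom

-- Matching N _ forces nothing, because the newtype constructor does
-- not exist at run time, so this returns True even on undefined.
newCase :: Bool
newCase = case N undefined of N _ -> True

-- The analogous case for D would diverge: D undefined equals bottom,
-- and the match forces it.  We therefore only run the newtype side.
main :: IO ()
main = print newCase
```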

Cheers,  -- P


1995-08-29 Thread wadler

   From: Anuradha L [EMAIL PROTECTED]
   I have recently started using gofer 2.30 which provides monadic IO
   but unfortunately I don't quite know how to use it.
   May I know some tutorial sources for monadic IO?
   L. Anuradha
   Dept. of Computer Science,
   University of Poona,
I've just completed a new tutorial on this subject,

How to Declare an Imperative.

This will be presented as an invited talk at the
International Logic Programming Symposium in December 1995.
You can access it from my web page.  Other relevant papers
on monads are there as well:

Imperative Functional Programming
Monads for Functional Programming
The Essence of Functional Programming
Comprehending Monads

Cheers, -- P

Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science    http://www.dcs.glasgow.ac.uk/~wadler
University of Glasgow   phone: +44 141 330 4966
Glasgow G12 8QQ   fax: +44 141 330 4913
SCOTLAND home: +44 141 357 0782

Lectureships at Glasgow

1995-03-16 Thread wadler

Please circulate the following to anyone who might find it of interest.

Functional programming, formal methods, and type theory are among the
`departmental interests' mentioned in the advertisement.  The money for
the one-year temporary lectureship is coming from an industrial
research grant that I secured; so strong candidates in these areas have
a fair chance for at least one of the posts.  Any exceptionally strong
candidate would be considered for the permanent lectureship, regardless
of research area.

Please write to me if you have any questions.  Cheers,  -- P

Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science   tel: +44 141 330 4966
University of Glasgow fax: +44 141 330 4913
Glasgow G12 8QQ, SCOTLANDhome: +44 141 357 0782

 University of Glasgow

   Lectureships in Computing Science

The University invites applications for one permanent and two temporary
lectureships in the Department of Computing Science. This Department is
top-rated for both teaching and research. 

Applicants should possess the enthusiasm and ability to fit into a
dynamic academic environment, and should show considerable promise in
both teaching and research. Applicants for the permanent lectureship
should also have substantial experience in both teaching and research.

In 1995-96 the Department will launch a new MSc course in Advanced
Information Systems, covering databases, information retrieval, and
multimedia. For two of the appointments, preference will be given to
applicants who can demonstrate expertise in at least one of these areas. 
Strong applications in other areas that match departmental interests are
also welcomed.

The permanent lecturer will be appointed on the Lecturer A or B scale
(£14,756-25,735 p.a.), at a point determined by age, experience, and
ability. The temporary lecturers will be appointed on the Lecturer A
scale (£14,756-19,326 p.a.). One temporary appointment will be tenable
for three years in the first instance, the other for one year in the
first instance. If the applicants are strong enough, consideration will
be given to increasing the number and length of these appointments.

Informal enquiries may be made to the Head of Department, Dr David Watt,
e-mail [EMAIL PROTECTED] Information about the Department may be found
from its WWW page http://www.dcs.gla.ac.uk/.

Further particulars may be obtained from the Academic Personnel Office,
University of Glasgow, Glasgow G12 8QQ, Scotland (tel. 0141 330 5161, fax
0141 330 4921). Applications (three copies from UK applicants, one copy
from overseas applicants) must be sent to this address by 21 April 1995. 
Each application must be accompanied by a curriculum vitae, list of
publications, and names and addresses of three referees. In all
correspondence please quote reference number 8675.  

Re: Strictness and Unlifted Products

1993-11-10 Thread wadler

In all of this, I neglected to mention *why* I think unlifted tuples
are a good idea.  I've given various reasons, but not the real one.

The real one is: Embarrassment.  I wrote an implementation of linear
logic in Haskell.  It took a while before I discovered why my
implementation got into a loop: I needed to add ~ to my tuple
patterns.  One reason that I haven't distributed my implementation to
the LL community is that I quail at the thought of explaining why those
~'s have to be there.

In the end, lifted vs. unlifted types is a subtle distinction which
advanced users will have to cope with.  But I would greatly prefer a
language in which such subtleties can be ignored as long as possible.
In particular, I'm not especially happy with Warren's proposal to
include ~ and ! in `data' declarations, despite its uniformity.
Declaring a new type isomorphic to an existing type is so basic that it
should not require knowledge of esoterica like ~.

Cheers,  -- P

Re: Lifted Tuples

1993-11-08 Thread wadler

It might help to recall some terminology from domain theory.  I assume
here that all domains are pointed, i.e., have a bottom element.  (John
Launchbury's note refers to a semantics with unpointed domains).

1.  Products, a \times b.

These are unlifted products, with (\bot,\bot)=\bot, and with
(a,b) \neq \bot  if  a \neq \bot  or  b \neq \bot.

2.  Smash products, a \otimes b.

These are unlifted products, with (a,\bot) = \bot and (\bot,b) = \bot.

3.  Lifted products, (a \times b)_\bot.

These are what Haskell has currently.

Observe that 1 and 2 satisfy the law

(a,(b,c)) = ((a,b),c)

However, only 1 satisfies the law

(a,b) -> c  =  a -> b -> c.

(So consider yourself corrected, Paul! :-) Cheers,  -- P

Laws of unlifted tuples

1993-11-05 Thread wadler

Simon notes that lifted functions prevent certain optimisations, and
then Joe wonders if lifting tuples prevents optimisations.  Arvind has
already answered this question.  Unlifted tuples satisfy the type isomorphism

(a,(b,c))  =  (a,b,c)

which is heavily used for optimising Id, but is invalid if tuples are
lifted as in Haskell.  Furthermore, only unlifted tuples satisfy the
type isomorphism

a -> b -> c  =  (a,b) -> c

Just for sentimental reasons, I would like this last to hold in a
language named for Haskell Curry.  -- P
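(The isomorphism in question is witnessed by curry and uncurry; with lifted tuples it holds only up to bottom, since curry f cannot distinguish _|_ from (_|_, _|_).  A sketch, with an illustrative function of my own:)

```haskell
-- The tupled form: (Int, Int) -> Int
plus :: (Int, Int) -> Int
plus (x, y) = x + y

-- The curried form: Int -> Int -> Int, obtained via curry
plus' :: Int -> Int -> Int
plus' = curry plus

main :: IO ()
main = do
  print (plus (1, 2))          -- via the tupled form
  print (plus' 1 2)            -- via the curried form
  print (uncurry plus' (3, 4)) -- and back again
```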

Re: Lifted functions

1993-11-04 Thread wadler

  * My taste is that isomorphic possibly-abstract types should get a new kind
   of type declaration (newtype, isotype, or whatever), perhaps defined to be
   exactly equivalent to an algebraic data type with just one strict
   constructor.  But I don't feel strongly about this.
   (At least we appear to be agreed on the idea that there should
   be an explicit constructor.)

Our tastes agree here.
   * I'd misunderstood Phil's position about liftedness.  He advocates
- unlifted functions
- lifted algebraic data types (all of them)
- unlifted tuples (just tuples, not all single-constr types)

Not quite.  I advocate separate declarations for lifted constructors
and unlifted constructors.  For lifted constructors, we have the
traditional `data' declaration, exactly as before.  For unlifted
constructors we have

newtype  T a_1 ... a_k  =  C t_1 ... t_n

where the a's are type variables, the t's are types, T is the
new type name, and C is the name of the single constructor.  (It's
semantically impossible to have an unlifted sum type, which is why
there's only one constructor.)

Note that `newtype' yields Simon's isomorphic type declaration as a
special case when n=1.

I furthermore advocate that tuples be unlifted, i.e., defined by
`newtype' rather than `data'.  (I'm not especially enamored of the name
`newtype'; any better suggestions?)
   I have never, never been tripped up by the liftedness of tuples, but the
   argument that ``we are prepared to pay for laziness so why not this too''
   has a certain masochistic charm.  I'll try the effect on performance of
   making all tuple-matching lazy in the nofib suite.

Thanks Simon!
   * Phil remarks that pattern-matching in let/where is implicitly twiddled. 
   That's true, and it is inconsistent; but while we might fix it for tuples,
   we can't fix it for general algebraic data types --- so is it important to
   fix it for tuples...?
Fair point.  The only way to make the correspondence perfect is to ban
lifted constructors in `let' and `where', allowing only unlifted constructors.
This yields a neater language, but is probably too radical a change for
Haskell 1.3.

Cheers,  -- P

Re: Strictness

1993-10-29 Thread wadler

I've separated this from my previous note, because it's about the
precise question of strictness annotations rather than the more general
question of laws.

I would rather tell someone that to define a new type exactly
isomorphic to an old type they need to write

newtype  Type = Constructor typeexp

than tell them that they need to write

data  Type = Constructor !typeexp

The latter smacks too much of magic.  This is clearly a matter of
taste, where reasonable people may disagree; I'm just putting forward
my preference.

Paul's suggestion that ! annotations in the type should change the
meaning of a value worries me.  It is bound to make reasoning more
difficult.  Paul, do you have a formal specification of how the type
annotations should affect the semantics?  It would be more in the
spirit of Haskell to require annotations in the function definition,
from which the annotations in the type can be inferred.

Cheers,  -- P

Re: Strictness

1993-10-29 Thread wadler

If Lennart was asking, `Shall we make laws a paramount design feature
of Haskell, and therefore go for unlifted tuples, unlifted functions,
and no n+k or literal patterns', my answer would be `let's go for it'.

But I suspect what Lennart is really asking is `Shall we ignore laws,
have lifted tuples and functions, have literal patterns, but drop n+k
patterns because I never use them'.  That's a fine argument based on
what Lennart uses (which is a reasonable criterion), but has nothing to
do with laws.

Cheers,  -- P

Re: Strictness

1993-10-29 Thread wadler

Paul writes,
   I think it's important to realize that laws aren't being entirely
   lost -- they're just being weakened a (wee) bit, in the form of
   carrying an extra constraint.  For example, eta conversion:
 \x -> f x  =  f
   must simply be modified slightly:
 \x -> f x  =  f    if f /= _|_

Indeed.  Notice that there is a similar difference between call-by-need
and call-by-value beta:

(\x -> u) t  =  u[t/x]  call-by-need

(\x -> u) t  =  u[t/x]  if  t /= _|_    call-by-value

But here we seem to think the difference is important.  So why is
it important for beta, but unimportant for eta?  Or are we deluding
ourselves in thinking it is important?
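(A concrete Haskell rendering of why the eta side condition matters: an eta-expanded bottom is not bottom, and seq can tell them apart.  A sketch:)

```haskell
f :: Int -> Int
f = undefined          -- f is bottom at function type

g :: Int -> Int
g = \x -> f x          -- eta-expansion of f: a lambda, hence not bottom

-- seq g s returns s, because g is already a value (a closure);
-- seq f s would diverge instead: the side condition f /= _|_ fails.
main :: IO ()
main = putStrLn (seq g "eta-expanded f is defined")
```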

Cheers,  -- P

Re: Recursive type synonyms

1993-10-19 Thread wadler

John notes that recursive type synonyms may lead to less comprehensible
error messages.  Good point!

As John says, experimentation would be a good idea.  Any takers?
Cheers,  -- P

Re: Lifted products

1993-10-05 Thread wadler

Oops!  I should have underlined in my last message where I wrote
`newtype' instead of `datatype'.  As a result, Simon seems to have
completely misunderstood my proposal.  Sorry about that.

Simon seems to think I am proposing that if one writes

datatype  T a_1 ... a_k = C t_1 ... t_n

that one gets unlifted tuples.  I am *not* proposing this.  What I
propose is that if one writes

newtype  T a_1 ... a_k = C t_1 ... t_n

then one gets unlifted tuples.  I'm not stuck on the keyword
`newtype', anything other than `datatype' will do.

Simon writes of my true proposal (which he mistakenly labels an
alternative) `I like it not'.  But doesn't say why.  In particular, he
seems not to have hoisted on board that my proposal is just a
*generalisation* of his proposal to write

newtype  T a_1 ... a_k = C t.

to declare a type isomorphic to an existing type.

In particular, if one wants to create a type `New a' isomorphic to an
existing type, Simon would write (by his latest proposal)

datatype  Data a = New a = MakeNew a

whereas I would write

newtype  New a = MakeNew a

So my alternative is simpler in some ways.

Simon also notes that strictness declarations don't seem sensible
for unlifted constructors.  Indeed.  Ban them.  (Again, this is an
argument against something I never proposed.)

I think Simon's other points about ~ patterns are spurious.  But I
don't want to rebut them, because now that I've pointed out that he
misunderstood my proposal, perhaps he no longer holds to them.  Simon
(or anyone else), if you have further arguments against what I *did*
propose, please raise them again and I'll answer.

All in the spirit of a quest for the perfect Haskell!  -- P

Re: ADTs and strictness

1993-10-05 Thread wadler

Gerald Ostheimer notes that in Abramsky and Ong's lazy lambda calculus
that (\x -> bottom) differs from bottom.  That's correct.

But just because they call it `lazy' doesn't mean that it really is
the essence of laziness.  I prefer to use the more neutral name `lifted
lambda calculus' for their calculus.

An example of a perfectly good lazy language in which neither products
nor functions are lifted is Miranda (a trademark of Research
Software Limited).

Hope this clarifies things,  -- P

Recursive type synonyms

1993-10-05 Thread wadler

While we are proposing things, here's a further suggestion.
The great thing about this suggestion is that it only *removes*
a restriction, and makes the language definition simpler.
It is fully backward compatible.

The suggestion is:

Remove the restriction that type synonym
declarations must not be recursive.

In other words, one could write things like

type  Stream a  =  (a, Stream a)

which is equivalent to the type (a, (a, (a, ...))).

The only reason we included the restriction at the time was

(a)  it makes unification easier to implement
(b)  it was more standard
(c)  there didn't seem any compelling reason *not*
 to include the restriction.

Guy Steele has since pointed out several compelling examples
where it would be *much* easier not to have the restriction,
and I've encountered a few myself.  Let's trash it!

The obvious way to go is for someone to implement it first, to
make sure it's not difficult.  Mark Jones, have you tried this
in Gofer yet?

Cheers,  -- P

Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science    tel: +44 41 330 4966
University of Glasgow  fax: +44 41 330 4913
Glasgow G12 8QQ, SCOTLAND

Re: ADTs in Haskell

1993-10-04 Thread wadler

Lennart makes the point that a different generalisation of Simon's
suggestion, namely strict constructors, would be more useful.  Perhaps.

A point in favour of my suggestion is that unlifted products have
an easy semantic definition.  What, precisely, is the definition of
a constructor strict in a specified field?  In particular, how do
you define it, and implement it, if that field has a functional
type?  Remember, in Haskell function types are unlifted, so we
should have  (\_ -> bottom) = bottom.

Regardless, I'd like to urge Lennart, Simon, and Paul to perform
experiments to determine the relative costs of lifted and unlifted
products.  We pretty much agreed we ought to perform such an
experiment when we decided on lifted products -- now is the time
to do it.  Lennart writes:

   It would be very easy to measure, but I'm not quite sure what
   you want to measure.  Just unlifted tuples?  In a well-written
   program this would make little difference since it's bad software
   engineering to use tuples, so I wouldn't expect big programs to
   use them a lot (you should define a new type instead of using a 
   tuple since that gives better documentation and types).
   Or should we try and unlift all types with a single constructor?

Yes: try it with all single-constructor types.  Going to unlifted
products should be more expensive: performing an experiment that
de-lifted all single-constructor types would put an upper bound on
the cost.
Cheers,  -- P
Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science    tel: +44 41 330 4966
University of Glasgow  fax: +44 41 330 4913
Glasgow G12 8QQ, SCOTLAND


Re: ADTs in Haskell

1993-10-03 Thread wadler

The idea that all ADT's must be `lifted' appears clunkier and clunkier
over time, so I support Simon's suggestion.

Note that Simon's suggestion can be easily generalised.  Our current
declaration form is

datatype  T a_1 ... a_k = C t_1 ... t_n | ... 

Simon suggests we add

newtype  T a_1 ... a_k = C t

with the (semantic) difference that  C bottom = bottom,  whereas if we
had used the corresponding datatype then  C bottom /= bottom.  The
generalisation is to instead add

newtype  T a_1 ... a_k = C t_1 ... t_n

with the difference that  C bottom ... bottom = bottom,  whereas if we
had used the corresponding datatype then  C bottom ... bottom /=
bottom.  In other words, the generalisation of Simon's idea is unlifted products.

Thus I propose:

(1)  We generalise the `newtype' construction as outlined above.

(2)  We make products unlifted rather than lifted.  That is, instead of

datatype (a_1, ..., a_n)  =  MakeTuple_n a_1 ... a_n

we have

newtype  (a_1, ..., a_n)  =  MakeTuple_n a_1 ... a_n

This makes programs more defined, so all valid programs still run.

My main reason for making the suggestion is that just as lifted ADT's
appear clunkier and clunkier over time, so do lifted products.  There
have been a number of times I have been forced to write

f ~(x,y) = ...

which is rather obscure.  I was especially galled to realise that in
language called Haskell, the main equation of Currying does not hold.
That is, the types

(a,b) -> c    and    a -> b -> c

are not isomorphic.  They would be with unlifted products.

Our main reason for avoiding unlifted products before was that we
wanted to avoid complexity.  Simon has rightly pointed out that we
need this complexity anyway.  We ought to make the most of it.

A secondary reason was that we were afraid of the potential cost
of unlifted products.  We now have the tools in place to measure
such costs.  Simon, Lennart, Paul, would this be difficult to do?

Cheers,  -- P

Professor Philip Wadler            [EMAIL PROTECTED]
Department of Computing Science    tel: +44 41 330 4966
University of Glasgow  fax: +44 41 330 4913
Glasgow G12 8QQ, SCOTLAND

Re: Defining Haskell 1.3 - Committee volunteers wanted

1993-09-27 Thread wadler

Three cheers for Brian, for his work to keep Haskell a
living and growing entity.

I propose as a touchstone for 1.3 that they should only look
at extensions that have been incorporated in one or more
Haskell implementations.  Hence the following are all good
candidates for 1.3's scrutiny:

Monadic IO
Strict data constructors
Prelude hacking
Standardizing annotation syntax

But the following is not:

Records (naming field components)

If someone actually implemented records, then after we
had some experience with the implementation it would
become a suitable 1.3 candidate.

A further thing which 1.3 should look at is:

ISO Standardisation

The credit for this suggestion should go to Paul Hudak,
but I heartily endorse it.

Cheers,  -- P

Re: Polymorphic recursive calls possible via type classes

1993-07-28 Thread wadler

Joe writes:
| Phil Writes:
| |  The closest one can come is
| |
| |class G a where
| |g :: a -> Bool
| |g x = g [x]
| |
| |instance G Int
| |instance G [Int]
| |instance G [[Int]]
| |...
| |
| |which requires an infinite number of instance declarations.
| Can't this be written as follows?
|   instance G Int
|   instance (G a) => G [a]
| Now, this is still an infinite number of instances, though not
| declarations, so the point still holds that it can't be monomorphized.

I don't think this works.  The declaration `instance G Int' is valid
only if `instance G [Int]' holds, and this in turn requires that
`instance G [[Int]]' holds, and so on.  I haven't tried it in Haskell,
but it should either be illegal or cause the type-checker to enter an
infinite loop.  (Hmmm!  Maybe someone should try it ...)  Cheers,  -- P

Re: Polymorphic recursive calls possible via type classes

1993-07-28 Thread wadler

Joe writes:
   I forgot to mention that I did try it with hbc:

   ... transcript of working program omitted ...

That's surprising!  I'd be interested to know if `maptrie' works,
since in that case it makes sense to run the program, as well
as type it.  Cheers,  -- P

Re: Polymorphic recursive calls possible via type classes

1993-07-28 Thread wadler

(An addition to Konstantin and Mark's remarks.)

The great thing about Milner's polymorphism is that it came for free:
adding a function declaration does not make a function more efficient.
This is true for the usual implementations of functional languages,
where all values are pointers and polymorphism is cheap.  As we move to
new optimisations, such as unboxing, this is no longer the case: naive
implementations of polymorphism either don't work (one can't have
polymorphism over unboxed types in Glasgow Haskell), or are not free,
or require less naive implementation techniques (generating one
function body for each monomorphic instance at which a polymorphic
function is used).

The sad thing about Haskell's type classes is that they did not come
for free: adding a function declaration could make an overloaded
function an order of magnitude more efficient.  This was true for the
naive implementation of type classes, where dictionaries are passed at
run time.  But with a less naive technique, again involving monomorphic
function instances, type classes are free, in the sense that adding a
declaration does not make the program more efficient.  Mark Jones and
the Glasgow group have both experimented with such definitions.
(Indeed, they are what I had in mind when type classes were first
suggested, and it is an embarassment that they have taken so long to
come into existence.)

So, we have come full circle.  Sophisticated compiling techniques,
based on translating polymorphism to monomorphism, are required to
make Milner-style polymorphism free in the presence of optimisations
such as unboxing, and are required to make Haskell type classes free.

Now, it is a fact of the Milner type system that every polymorphic
program has a monomorphic equivalent.  The proof is easy, just replace
every occurrence of `let x = t in u' by `u[t/x]' (that is, `u' with `t'
substituted for `x').  Of course, the latter may lead to code
explosion, but by creating only one instance of `t' for each
type at which `x' is used, this explosion usually remains manageable.

However, for the extended type system that allows polymorphism in
recursion, this is no longer the case -- my thanks to Lennart
Augustsson for pointing this out.  The counter-example (similar
to one of Mark's) is:

g :: a -> Bool
g x  =  g [x]

This function is silly, as it never terminates, but there are less
silly examples; see below.  Note that the trick for translating
polymorphic recursion into type classes (as described by Konstantin) no
longer works here.  The closest one can come is

class G a where
g :: a -> Bool
g x = g [x]

instance G Int where
instance G [Int] where
instance G [[Int]] where

which requires an infinite number of instance declarations.

Incidentally, there has been a lot of work done on Milner's type
system extended to allow polymorphic recursion; this system is
sometimes called ML+.  One motivating example -- which I first
heard from Milner -- is:

data  Trie x = Atom x | List [Trie x]

maptrie :: (a -> b) -> Trie a -> Trie b
maptrie f (Atom x)    =  Atom (f x)
maptrie f (List xts)  =  List (map (maptrie f) xts)

(This is similar to Mark's example with functor Y.)
The function `maptrie' is untyped in ML, but typed in ML+.

So the bottom line is that Konstantin's trick gives you some of
the power of ML+, but not all of it; in particular, it does not
give sufficient power to define `maptrie'.  On the other hand,
both ML and Haskell type classes admit the useful compiling
technique of transforming polymorphism to monomorphism; ML+ does not.

Cheers,  -- P

Philip Wadler  [EMAIL PROTECTED]
Department of Computing Science    tel: +44 41 330 4966
University of Glasgow              fax: +44 41 330 4913
Glasgow G12 8QQ, SCOTLAND

Re: n+k patterns

1993-05-18 Thread wadler

You are quite right.  I'd forgotten about local rebinding,
because I feel that all local rebinding should be disallowed.
Anyone want to start a movement to eliminate local
rebinding?  (1/2 :-)  Cheers,  -- P

- Begin Included Message -

From [EMAIL PROTECTED] Tue May 18 14:56:37 1993
Date: Tue, 18 May 93 15:55:42 +0200
From: Lennart Augustsson [EMAIL PROTECTED]
Subject: Re: n+k patterns

> Both (>=) and (-) belong to classes defined in PreludeCore,
> and hence cannot be rebound.  This was a deliberate decision,
> made in order to turn your point into a non-problem.
It's true that things from PreludeCore cannot be rebound on the top level,
but they can be rebound locally.  So the problem I state does occur.
OK, I think that (>=), (-), etc. should refer to those from PreludeCore,
but it doesn't say anywhere.
Another question along the same lines: What if (+) has been rebound?
Are n+k patterns still allowed?

-- Lennart

- End Included Message -

Re: n+k patterns

1993-05-18 Thread wadler

Both (>=) and (-) belong to classes defined in PreludeCore,
and hence cannot be rebound.  This was a deliberate decision,
made in order to turn your point into a non-problem.

Long live (n+k)!  -- P

- Begin Included Message -

From [EMAIL PROTECTED] Mon May 17 21:33:41 1993
From: Lennart Augustsson [EMAIL PROTECTED]
Subject: n+k patterns
Date: Mon, 17 May 93 22:25:03 +0200

Could those in charge of the formal semantics of Haskell (yes,
that's you folks in Glasgow!) tell me what the meaning of n+k patterns is?

In the report it says that

case e0 of { x+k -> e; _ -> e' }

translates to

if e0 >= k then let { x' = e0-k } in e[x'/x] else e'

Which >= and - does this refer to?

What if they have been locally rebound?  E.g.

let x - y = x ++ f y
where f 0 = []
  f (n+1) = f n
in  [] - 0

Does the translated - still refer to the method in PreludeCore or
to the - in scope?

-- Lennart

PS. I'd like to start the "Ban n+k patterns"-movement, any followers?

- End Included Message -
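The translation in the report, quoted above, can be imitated with an ordinary guard; a sketch (the function name is mine; the comparison and subtraction are the Prelude's (>=) and (-), and n+k patterns were later removed from the language, so this uses only plain patterns):

```haskell
-- What  pred' (n+1) = n  means under the report's translation:
-- test with (>=), bind with (-).
pred' :: Int -> Int
pred' x | x >= 1    = let n = x - 1 in n
        | otherwise = 0    -- the fall-through alternative e'
```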

Re: Stupid Haskell question

1993-02-23 Thread wadler

Given your correction, I think that the type declaration

  data FooTree a b  =  Leaf b | Node a (FooTree a b) a (FooTree a b)

will handle things only a little less neatly than the use of
wrappers, and will allow you to use type inference in much the
way you wish.  What do you think?  Cheers,  -- P

Re: Stupid Haskell question

1993-02-23 Thread wadler

Guy,  You write,

   But now suppose that some of the annotations refer to FooTree items
   (another thing I forgot to say).  I suppose I could do
data FooTree a b  =  Leaf b | Node a (FooTree a b)
a (FooTree a b)
[FooTree a b]
   including a list of goodies that the annotations could then refer to,
   but I'm quickly losing my abstractions--the annotations are spread
   out rather than being single items.

I don't understand.  Can't you handle this as follows?

 data FooTree a b  =  Leaf b | Node a (FooTree a b) a (FooTree a b)
 data Annote a b c =  MkAnnote Info (FooTree (Annote a b c) b) c

and then infer that what you are acting on is a

FooTree (Annote a b c) b.

The c field can be omitted, or it can be instantiated to additional
annotation information later on.
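A self-contained sketch of this encoding (Info is taken to be String here, since the message leaves it abstract; the example value and traversal are mine):

```haskell
type Info = String

data FooTree a b  = Leaf b | Node a (FooTree a b) a (FooTree a b)
data Annote a b c = MkAnnote Info (FooTree (Annote a b c) b) c

-- A tree whose annotation slots carry Annote values, i.e. a
-- FooTree (Annote a b c) b as described above:
example :: FooTree (Annote a Int ()) Int
example = Node (MkAnnote "note" (Leaf 1) ()) (Leaf 2)
               (MkAnnote "more" (Leaf 3) ()) (Leaf 4)

-- Collect the (unannotated) leaf values, left to right.
leaves :: FooTree a b -> [b]
leaves (Leaf b)       = [b]
leaves (Node _ l _ r) = leaves l ++ leaves r
```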

I'm sure you can hack around the problem, but it's not yet clear
to me that there isn't an elegant solution.  Cheers,  -- P

Re: Stupid Haskell question

1993-02-22 Thread wadler


I agree that the report should be updated to express the restriction we
really have in mind.  Simon: as editor, this is your bailiwick!

I also think it's neat that you seem to have found a use for cyclic
unification.  This is definitely an impetus to extend the language to
include cyclic types.  (I don't expect we'll do this for a while
though.  You might consider modifying the Glasgow Haskell compiler to
include this yourself -- it may not be too difficult.)

However, I am confused by some of your example.  You want to use a data
structure like

  data FooTree a = Leaf a | Node (FooTree (Wrapper a)) (FooTree (Wrapper a))
  data Wrapper a = Annotation a Int

This seems to add an additional level of annotation at every
Node.  Something zero nodes deep has zero annotations, something
five nodes deep has five annotations.  Is this really what you want?

Because of the way type inference works in the Hindley-Milner system,
it is impossible to write a function that will act on values of the
type (FooTree a) as defined above.  (This is independent of the
additional complications you mention.) The reason is that every
instance of a function in a recursive definition must have the same
type as the function being defined.  But to define a function on
(FooTree a) you need a recursive instance of type (FooTree (Wrapper
a)).  Mycroft first suggested a type system that would allow such
functions to be typed, but I think it is still an open question as to
whether an inference algorithm exists for the type system.  (There was
a paper published that claimed an algorithm, but it was later withdrawn
as incorrect.)

So I hope you don't really need the types above, because then
we can't help you, even in the monomorphic case!

But would the simpler type perhaps work?

  data FooTree a b  =  Leaf b | Node a (FooTree a b) a (FooTree a b)

where a ranges over annotation, and b over leaf values?  Cheers,  -- P
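A small check of the suggested type (the traversal function is mine, not from the message):

```haskell
data FooTree a b = Leaf b | Node a (FooTree a b) a (FooTree a b)

-- Collect the annotations, left to right.
annotations :: FooTree a b -> [a]
annotations (Leaf _)        = []
annotations (Node a l a' r) = a : annotations l ++ a' : annotations r
```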

Re: Stupid Haskell question

1993-02-22 Thread wadler

Guy asks the following (non-stupid) Haskell question, which I reply to
below.  The question points out an area in the Haskell report that
seems to be unclear; and a place where it might be worthwhile to change
the design to be less conservative but more uniform.

Guy's question:

- Begin Included Message -

From [EMAIL PROTECTED] Thu Feb 18 17:21:56 1993
From: Guy Steele [EMAIL PROTECTED]
Date: Thu, 18 Feb 93 12:20:44 EST
Subject: Re: Stupid Haskell question

Haskell theoretically allows recursive datatypes.  But the following
example does not work (he said innocently).

module Rec where

data Unary a = Zero | Successor a

f :: Unary z -> [Unary z]
f x = [x, Successor x]

I think that the compiler ought to deduce the restriction
x::q  where  q = Unary q.  It ought to be okay for q to be Unary q
because "an algebraic datatype intervenes" (Haskell report, 4.2.2).

But the Glasgow compiler says

"/users/lang1/gls/Haskell/monads/Rec.hs", line 6:
Type variable "a" occurs within the type "Unary a".
In a list expression: [x, Successor x]

and the Chalmers compiler says

"/users/lang1/gls/Haskell/monads/Rec.hs", line 6, [63] unify1 (occurence) a
and Unary a
 in  (:) A1_f ((:) (Successor A1_f) ([]))
 in f

Now everything is okay if I write

module Rec where

data Unary a = Zero | Successor (Unary a)

f :: Unary z -> [Unary z]
f x = [x, Successor x]

but I have reasons in my actual code (which is hairy--this is a
stripped-down example) not to force the data type to be recursive,
but to let the type analysis deduce it where necessary.  Am I foolish
to expect this?


- End Included Message -

Phil's response:


Haskell requires that `an algebraic datatype intervenes' in order that
all types can be written as a finite tree.  The type you refer to,

q where q = Unary q,

is an infinite tree (though a finite graph).  If we intended to allow
such infinite solutions, we wouldn't need the restriction to algebraic
datatypes at all.  This suggests we should clear up the wording in the
Haskell report, so I've forwarded your question and this response to
the Haskell mailing list.

Why not allow cyclic types (i.e., any type expressible as a
finite graph)?  It turns out there is a unification algorithm
that works for finite graphs, so this is in theory possible.
But the intent of Haskell was to be a conservative design, so
we stuck with what we were familiar with.  Your example of
a place where cyclic types are useful provides an impetus
to step into the less familiar but more uniform territory.
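Short of cyclic types, one can make a datatype intervene explicitly, so that the solution is again a finite tree; a sketch of this standard workaround (the newtype and function names are mine, not from the exchange):

```haskell
data Unary a = Zero | Successor a

-- Tying off the infinite type  q = Unary q  with an explicit datatype:
newtype Q = Q (Unary Q)

-- Guy's f, with the wrapping made explicit at the recursive position.
f :: Unary Q -> [Unary Q]
f x = [x, Successor (Q x)]
```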

Cheers,  -- P

Re: Multiple parameters to classes

1992-04-16 Thread Philip Wadler

| I am part of a group working on a new language for Microwave hardware
| design (MHDL).  As it turns out, we are incorporating almost all of
| Haskell 1.2 into it.

If there is anything written about this language, please send a pointer
to this mailing list; and please keep us apprised of your progress.
It's always good to know of applications of functional programming.

| We cannot do this in Haskell, and we cannot do this in Gofer either
| because all type variables in the class signature have to be mentioned
| in all the method signatures.

The restriction you mention in Gofer ("all type variables in the class
signature have to be mentioned in all the method signatures") might be
avoided by simply using a separate class to define each operator.
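In modern GHC the suggestion can be sketched with the MultiParamTypeClasses extension, which postdates this message (all names are illustrative): each operator gets its own class, so a method need not mention every class variable.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- One class per operator: `coerce' mentions both variables,
-- while `twice' lives in a class that mentions only `a'.
class Coerce a b where
  coerce :: a -> b

class Twice a where
  twice :: a -> a

instance Coerce Int Bool where
  coerce = (/= 0)

instance Twice Int where
  twice = (* 2)
```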

| | Aside:
| | The fact that ti _must_ mention u1, but may or may not mention u2,
| | ..., uk is the significant change from Gofer (and Haskell, which
| | only allows one type variable in the class declaration).  We think
| | that this would allow disambiguation with the "ease" of Haskell, but
| | still allow us the power of multiple class parameters in class
| | declarations.

This will allow easy disambiguation if you require that each instance
of the class differs in the first parameter.  But if you allow

instance  DemoClass Int Char  where ...
instance  DemoClass Int Bool  where ...

then disambiguation will not be so easy.

| Our question to the gurus is the following:  is this too much to ask?
| Are we going to mess up the typing philosophy of Haskell with this?

The only way to get a sure answer to these questions is to write a
formal specification of your system, and to convince yourself that
you have a type inference algorithm that is sound and complete with
respect to this system (e.g., yields principal types).  I strongly
urge you to complete some sort of formal specification before you begin
implementing.

The GRASP team at Glasgow is putting the finishing touches on a paper
that formally describes type classes as implemented in Haskell, which
you may find helpful.  We will post a pointer to this paper when it's
done.  I hope Mark Jones and Martin Odersky will also post pointers to
their recent work in this area.

Cheers,  -- P

Re: Expressiveness of functional language I/O

1992-03-19 Thread Philip Wadler

Evan writes:

| Consider a polymorphic continuation request such as the following:
| someRequest :: T a -> (U a -> Dialogue)

and then shows this does not work in the stream model, unless one has
existential types.

Quite true, but note that there are *no* polymorphic IO functions!
At least, not for any `true' polymorphic function satisfying Reynolds'
parametricity theorem.  See Reynolds' classic 1984 paper, or my
`Theorems for Free'.

On the other hand, something like

someRequest :: (Text a) => T a -> (U a -> Dialogue)

certainly is reasonable.  Colin Runciman has noted that this
makes existential types doubly desirable in the presence of
type classes, and Odersky and Laufer have argued something similar.

Cheers,  -- P

Re: Array Indexing

1992-03-02 Thread Philip Wadler

! It is the responsibility of the programmer to enforce bounds
! checking when required in non-derived instances of class @Ix@.
! An implementation is not required to check that an index
! lies within the bounds of an array when accessing that array.

The above is fine by me, but here is a more concise wording, which may
do as well.

! It is the responsibility of the programmer to enforce bounds
! checking in non-derived instances of class @Ix@, if it is desired.

Cheers,  -- P

Re: The Great Integer Division Controversy

1992-02-24 Thread Philip Wadler

I gather the change you propose makes `div` consistent with
SML and Miranda, and uses `quo` for what Scheme calls `quotient`.
Sounds good to me.  -- P
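For the record, the two operations differ only on operands of mixed sign; a quick check using today's names, where `quot' presumably corresponds to the proposed `quo':

```haskell
-- div rounds toward negative infinity (SML/Miranda-style), while
-- quot truncates toward zero (Scheme's `quotient').
divExample, quotExample :: Int
divExample  = (-7) `div`  2   -- -4
quotExample = (-7) `quot` 2   -- -3
```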

exit in PreludeIO

1992-02-23 Thread Philip Wadler

The function "exit" in PreludeIO would suit its purpose better if:
(a)  It wrote to "stderr" rather than "stdout",
(b)  It followed the error message with a newline.
Easy and quick to change!  What do people think?  (Especially Paul
and Joe.)  -- P

functional language implementations

1992-02-18 Thread Philip Wadler

Here is my current list of Haskell implementations.
If anyone has corrections or additions, please send them to me.

Cheers,  -- P

HASKELL, Sun-3 and Sun-4
Glasgow Haskell
by Cordy Hall, Kevin Hammond, Will Partain, Simon Peyton Jones, Phil Wadler
compiler, implemented in Haskell, runs stand-alone
Contact: Will Partain, University of Glasgow

Haskell Compiler Distribution
Attn: Will Partain
Department of Computing Science
University of Glasgow
Glasgow  G12 8QQ

HASKELL, Sun-3, Sun-4, and Apollo
Yale Haskell
by Paul Hudak, John Peterson, and others
compiler, implemented in T, a dialect of Scheme; runs on top of T
Contact: John Peterson, Yale University

John Peterson
Yale University
Department of Computer Science
New Haven, CT 06520

HASKELL, Sun-3, Sun-4, and others
Chalmers Haskell-B compiler
by Lennart Augustsson
Contact: Lennart Augustsson, Chalmers University

GOFER, Sun-3, Sun-4, and PC's
Gofer is a dialect of Haskell
by Mark Jones
interpreter, implemented in C
Contact: Mark Jones, Oxford University

Mark Jones
Programming Research Group
11 Keble Road
Oxford, OX1 3QD


(n+k) patterns

1992-02-12 Thread Philip Wadler

(n+k) patterns are still causing difficulties in the syntax.

I would be happy with the following:
On a lhs, only allow (n+k) patterns in arguments.


n+1 = ...

is ALWAYS a definition of (+), but

f (n+1) = ...
case m of {n+1 -> ...}

are still allowed.  Would that make life easier?  -- P

Re: Literate comments

1992-02-06 Thread Philip Wadler


We seem to be converging here.  I prefer Simon's suggestion of
an appendix to Paul's suggestion of a Section 1.6, but am not
too bothered either way.  Will also try to work in Simon's note
about file extensions.  I am happy to leave the question of how
to format the Prelude in Joe's lap.  I'll go ahead and draft the
required page.  Cheers,  -- P

Literate comments

1992-02-05 Thread Philip Wadler

Despite Paul having ruled on the matter, the debate still rages, with
about half saying that > is wonderful, and half saying it is awful.

Under these circumstances, Paul's ruling seems right: it is pointless
to legislate > as part of the language.

On the other hand, the current circumstance is that the main
implementations do all use the > convention.  It seems reasonable to
say this in the report -- we should at least tell other implementors
what the current implementors have settled upon.  This would also let
Joe use the convention in presenting the standard prelude, as he says
he would like to do.

To be precise: I propose an additional chapter of the report, labeled
`Literate comments' and no more than one page long, that states a
convention for providing literate comments, notes that it is NOT part
of Haskell but is supported by existing implementations, and mentions
that the Prelude is presented with this convention in the report.  I
volunteer to draft the page.

Paul, as syntax honcho you should rule on this.  Cheers,  -- P

PS:  Wadler's Law:
The emotional intensity of debate on a language feature
increases as one moves down the following scale:
Lexical syntax,