The Haskell 1.3 compiler NHC13 is now available

1996-11-09 Thread Thomas Hallgren at home

Version 0.0 of NHC13, Nearly a Haskell 1.3 Compiler, by Niklas Rojemo,
is now available for download from

ftp://ftp.cs.chalmers.se/pub/haskell/nhc13

It has the following features

- Compiles Haskell 1.3
- Supports Fudgets
- Supports several kinds of heap profiles:
producer
constructor
retainer
life-time
biographical
combinations of the above

Although NHC13 0.0 is probably not yet to be regarded as a mature
Haskell 1.3 compiler, it may still be of interest since it provides
some new kinds of heap profiles not found in any other Haskell 1.3
compiler. Finding space leaks or other undesired space behaviour using
(combinations of) retainer and biographical profiles can be much
simpler than with the traditional producer/constructor profiles.

Heap profiling also works for Fudgets programs.

The commands to use are

nhc13       the compiler
nhc13make   a version of hbcmake for nhc13
nhc13xmake  to compile Fudgets programs
hp2graph    to convert heap profiling output to PostScript

Manual pages with more details are included in the distributions.

Recent papers on heap profiling are

   Niklas Rojemo and Colin Runciman: "Lag, drag, void and use -
heap profiling and space-efficient compilation revisited".
In the proceedings of ICFP'96.

   Colin Runciman and Niklas Rojemo: "Two-pass heap profiling: a matter
of life and death". In the proceedings of IFL'96.

These are available from

ftp://ftp.cs.chalmers.se/pub/users/rojemo/icfp96.ps.gz
ftp://ftp.cs.chalmers.se/pub/users/rojemo/ifl96.ps.gz



Niklas Rojemo
Thomas Hallgren







Haskell 1.3 Libraries Available for Comment

1996-10-28 Thread Kevin Hammond

The current draft of the Haskell 1.3 Libraries is now available for public
comment at

ftp://ftp.dcs.st-and.ac.uk/pub/haskell/lib-28-Oct-96.{ps,dvi}

in either PostScript or DVI format (HTML will follow).

The document defines the required libraries for conforming Haskell 1.3
implementations:

Ratio     -- Rationals, as in Haskell 1.2
Complex   -- Complex Numbers, ditto
Ix        -- Indexing Operations, ditto
Array     -- Array operations, ditto
List      -- Old and new list operations
Maybe     -- Operations on the Maybe type
Char      -- Operations on characters, mainly character-kind (isLower etc)
Monad     -- Monadic utility functions
IO        -- More advanced Input/Output
Directory -- Operations on directories
System    -- Operating system interaction (system, getEnv, exit etc.)
Time      -- Date and Time
Locale    -- Local conventions (date/time only at present)
CPUTime   -- CPU Time usage
Random    -- Random number generation on Integer
Bit       -- Bit manipulation
Natural   -- Fixed-precision natural numbers
Signed    -- Fixed-precision signed numbers

Most of the comments that have been made on previous versions have been
acted upon.  If you have read previous versions of the library, you may
notice the omission of the Posix library.  I intend to revise this and
make it available as an optional library in the near future.

Please send comments on these libraries either to me, or to the Haskell
Committee ([EMAIL PROTECTED]) by November 30th 1996
(I will take late comments into account as far as possible, but may
need to delay these for future reviews of the libraries).  Assuming
normal levels of change, I aim to have this version of the libraries
stabilised by the end of the year.

Our long-term goal is to provide a repository for these libraries at
Glasgow, which will allow new libraries to be contributed and existing
ones to be worked on remotely.  The repository should be mirrored at
Yale, Chalmers, and perhaps elsewhere.  To help future-proof these
libraries, we are considering adopting an SGML document standard,
probably based on that used for ML '96.  I hope to release details of
this at the same time as the libraries are stabilised.

Regards,
Kevin

--
Division of Computer Science,          Tel: +44-1334 463241 (Direct)
School of Mathematical                 Fax: +44-1334 463278
 and Computational Sciences,           URL: http://www.dcs.st-and.ac.uk/~kh/kh.html
University of St. Andrews, Fife, KY16 9SS.








ANNOUNCE: Glasgow Haskell 2.01 release (for Haskell 1.3)

1996-07-26 Thread Simon L Peyton Jones


 The Glasgow Haskell Compiler -- version 2.01
 

We are pleased to announce the first release of the Glasgow Haskell
Compiler (GHC, version 2.01) for *Haskell 1.3*.  Sources and binaries
are freely available by anonymous FTP and on the World-Wide Web;
details below.

Haskell is "the" standard lazy functional programming language; the
current language version is 1.3, agreed in May, 1996.  The Haskell
Report is online at
http://haskell.cs.yale.edu/haskell-report/haskell-report.html.

GHC 2.01 is a test-quality release, worth trying if you are a gung-ho
Haskell user or if you are keen to try the new Haskell 1.3 features.
We advise *AGAINST* relying on this compiler (2.01) in any way.  We
are releasing our current Haskell 1.2 compiler (GHC 0.29) at the same
time; it should be pretty solid.

If you want to hack on GHC itself, then 2.01 is for you.  The release
notes comment further on this point.

What happens next?  I'm on sabbatical for a year, and Will Partain
(the one who really makes GHC go) is leaving at the end of July 96 for
a Real Job.  So you shouldn't expect rapid progress on 2.01 over the
next 6-12 months.  

The Glasgow Haskell project seeks to bring the power and elegance of
functional programming to bear on real-world problems.  To that end,
GHC lets you call C (including cross-system garbage collection),
provides good profiling tools, and concurrency and parallelism.  Our
goal is to make it the "tool of choice for real-world applications".

GHC 2.01 is substantially changed from 0.26 (July 1995), as the new
version number suggests.  (The 1.xx numbers are reserved for further
spinoffs from the Haskell-1.2 compiler.)  Changes worth noting
include:

  * GHC is now a Haskell 1.3 compiler (only).  Virtually all Haskell
1.2 modules need changing to go through GHC 2.01; the GHC
documentation includes a ``crib sheet'' of conversion advice.

  * The Haskell compiler proper (ghc/compiler/ in the sources) has
been substantially rewritten and is, of course, Much, Much,
Better.  The typechecker and the "renamer" (module-system support)
are new.

  * Sadly, GHC 2.01 is currently slower than 0.26.  It has taken
all our cycles to get it correct.  We fondly believe that the
architectural changes we have made will end up making 2.0x
*faster* than 0.2x, but we have yet to substantiate this belief;
sorry.  Still, 2.01 (built with 0.29) is quite usable.

  * GHC 2.01's optimisation (-O) is not nearly as good as 0.2x, mostly
because we haven't taught it about cross-module information
(arities, inlinings, etc.).  For this reason, a
2.01-built-with-2.01 (bootstrapped) is no fun to use (too slow),
and, sadly, that is where we would normally get .hc (intermediate
C; used for porting) files from... (hence: none provided).

  * GHC 2.01 is much smarter than 0.26 about when to recompile.  It
will abort a compilation that "make" thought was necessary at a
very early stage, if none of the imported types/classes/functions
*that are actually used* have changed.  This "recompilation
checker" uses a completely different interface-file format than
0.26.  (Interface files are a matter for the compilation system in
Haskell 1.3, not part of the language.)

  * The 2.01 libraries are not "split" (yet), meaning you will end up
with much larger binaries...

  * The not-mandated-by-the-language system libraries are now separate
from GHC (though usually distributed with it).  We hope they can
take on a "life of their own", independent of GHC.

  * All the same cool extensions (e.g., unboxed values), system
libraries (e.g., Posix), profiling, Concurrent Haskell, Parallel
Haskell,...

  * New ports: Linux ELF (same as distributed as GHC 0.28).

Please see the release notes for a complete discussion of What's New.

To run this release, you need a machine with 16+MB memory (more if
building from sources), GNU C (`gcc'), and `perl'.  We have seen GHC
2.01 work on these platforms: alpha-dec-osf2, hppa1.1-hp-hpux9,
sparc-sun-{sunos4,solaris2}, mips-sgi-irix5, and
i386-unknown-{linux,solaris2,freebsd}.  Similar platforms should work
with minimal hacking effort.  The installer's guide gives a full
what-ports-work report.

Binaries are distributed in `bundles', e.g. a "profiling bundle" or a
"concurrency bundle" for your platform.  Just grab the ones you need.

Once you have the distribution, please follow the pointers in
ghc/README to find all of the documentation about this release.  NB:
preserve modification times when un-tarring the files (no `m' option
for tar, please)!

We run mailing lists for GHC users and bug reports; to subscribe, send
mail to [EMAIL PROTECTED]; the msg body should be:

subscribe glasgow-haskell-which Your Name [EMAIL PROTECTED]

Please send bug reports about

Haskell 1.3 - what's it all about?

1996-05-16 Thread Magnus Carlsson

Maybe you have seen some mail lately on this list about something
called "Haskell 1.3", and wondered 

What is this "Haskell 1.3" anyway?,
Can I buy it?,
or
Do I have it?

By compiling and running the following two-module Haskell program, you
will at least get an answer to the last question.

-- Put in M.hs ---

module M where data M = M M | N ()

-- Put in Main.hs 

import M
main = interact (const (case (M.N) () of M (N ()) -> "No\n"; N () -> "Yes\n"))

---

Magnus & Thomas






Haskell 1.3 Report is finished!

1996-05-15 Thread peterson-john

The Haskell 1.3 Report is now complete.  A web page with the entire
report and other related information is at:
http://haskell.cs.yale.edu/haskell-report/haskell-report.html

This new report adds many new features to Haskell, including monadic
I/O, standard libraries, constructor classes, labeled fields in
datatypes, strictness annotations, an improved module system, and many
changes to the Prelude.  The Chalmers compiler, hbc, supports most
(all?) of the new 1.3 features.  The Glasgow compiler will soon be
upgraded to 1.3.  A new version of Hugs (now a combined effort between
Mark Jones and Yale) will be available later this summer.

A postscript version of the report is available at 
ftp://haskell.cs.yale.edu/pub/haskell/report/haskell-report.ps.gz.
This file should be available at the other Haskell ftp areas soon.

   John Peterson
   [EMAIL PROTECTED]
   Yale Haskell Project






Status of Haskell 1.3

1996-05-07 Thread peterson-john

The Haskell 1.3 report is nearly done.  The text of the report is
complete - I'm working on indexing and web pages.  We also have an
initial cut at the Library Report.  If you are interested in seeing
the new report on the web, look at

http://haskell.cs.yale.edu/haskell-report/haskell-report.html

We expect the report will be complete in another week - the web page
will have the latest information and I will be announcing to
comp.lang.functional.

No implementations of 1.3 are available yet, but we expect all the
major Haskell systems to conform to the new report soon.
Announcements will be made to this list.

Although the report is stable, the related web pages are still under
construction.  Please have patience!

  John Peterson
  Yale Haskell Project






Haskell 1.3

1996-04-22 Thread Frank Christoph

  I thought there was an April 19 deadline...?  Have there been some
last-minute problems?

--
Frank Christoph Next Solution Co.   Tel: 0424-98-1811
[EMAIL PROTECTED]  Fax: 0424-98-1500






Re: Preliminary Haskell 1.3 report now available

1996-03-08 Thread Fergus Henderson


Thomas Hallgren [EMAIL PROTECTED] writes:

 In the syntax for labeled fields (records) the symbol <- is chosen
 as the operator used to associate a label with a value in
 constructions and patterns:
[...]
 According to a committee member, there were no convincing reasons
 why <- was chosen. Other symbols, like = and := were also considered.

I support Thomas Hallgren's suggestion that `=' be used instead.
Another reason, in addition to the two he mentioned, is that the `<-'
symbol is very unintuitive when used for pattern matching, because the
arrow is in the *opposite* direction to the data-flow.  I find this
very confusing.

-- 
Fergus Henderson  | Designing grand concepts is fun;
[EMAIL PROTECTED]   | finding nitty little bugs is just work.
http://www.cs.mu.oz.au/~fjh   | -- Brooks, in "The Mythical Man-Month".
PGP key fingerprint: 00 D7 A2 27 65 09 B6 AC  8B 3E 0F 01 E7 5D C4 3F






Re: Haskell 1.3

1996-03-08 Thread Philip Wadler


 It looks ugly, but we could say that a data declaration does not 
 have to have any constructors:
 
   data Empty =
 
-- Lennart

I agree that the best way to fix this is to have a form of data
declaration with no constructors, but I'm not keen on the syntax you
propose.  How about if we allow the rhs of a data declaration to be
just `empty', where `empty' is a keyword?

data Empty = empty

-- P






Re: Haskell 1.3

1996-03-08 Thread Lennart Augustsson



 Suggestion: Include among the basic types of Haskell a type `Empty'
 that contains no value except bottom.
Absolutely!  But I don't think it should be built in
(unless absolutely necessary).

It looks ugly, but we could say that a data declaration does not 
have to have any constructors:

data Empty =

   -- Lennart

PS. There are other ways of getting empty types, but they are
all convoluted, like

data Empty = Empty !Empty






Re: Haskell 1.3

1996-03-08 Thread Magnus Carlsson


Philip Wadler writes:
  
   It looks ugly, but we could say that a data declaration does not 
   have to have any constructors:
   
  data Empty =
   
  -- Lennart
  
  I agree that the best way to fix this is to have a form of data
  declaration with no constructors, but I'm not keen on the syntax you
  propose.  How about if we allow the rhs of a data declaration to be
  just `empty', where `empty' is a keyword?
  
   data Empty = empty
  
  -- P

I would like to propose an alternative that in my view has both good
syntax, and does not introduce a new keyword:

   data Empty

/Magnus






Re: Haskell 1.3

1996-03-08 Thread Ron Wichers Schreur

Lennart Augustsson wrote:

 It looks ugly, but we could say that a data declaration does not 
 have to have any constructors:
 
   data Empty =

Philip Wadler responded:

 I'm not keen on the syntax you propose.  How about if we allow the
 rhs of a data declaration to be just `empty', where `empty' is a
 keyword?

 data Empty = empty

Another suggestion is to omit the equal sign, as in

  data Empty


Cheers,

Ronny Wichers Schreur
[EMAIL PROTECTED]








Haskell 1.3, monad expressions

1996-03-08 Thread smk


Suggestion:

add another form of statement for monad expressions:

stmts -> ...
      |  if exp

which is defined for MonadZero as follows:

do {if exp ; stmts} = if exp then do {stmts}
                             else zero

Based on this, one can define list comprehensions by

[ e | q1,...,qn ] = do { q1' ; ... ; qn'; return e }

where either  qi' = if qi  (whenever qi is an exp)
or  qi' = qi  (otherwise).
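The desugaring above can be tried in modern Haskell, where the role of the proposed `if exp` statement is played by `guard` from Control.Monad (a MonadZero-style operation); this is a sketch in today's notation, not the 1.3 syntax itself:

```haskell
import Control.Monad (guard)

-- [ (x,y) | x <- [1..4], y <- [1..4], x < y ] written as a do-block
-- over the list monad; guard plays the role of the proposed `if exp'.
pairs :: [(Int, Int)]
pairs = do
  x <- [1 .. 4]
  y <- [1 .. 4]
  guard (x < y)
  return (x, y)
```

Here `guard False` is `zero` (the empty list), so a failing condition cuts off the rest of the block, exactly as in the definition above.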

--
Stefan Kahrs






Re: Preliminary Haskell 1.3 report now available

1996-03-07 Thread Lennart Augustsson



I always favoured `=' over `<-', but I don't care much.

-- Lennart






Re: Preliminary Haskell 1.3 report now available

1996-03-07 Thread Thomas Hallgren


First, I am happy to see that Haskell 1.3, with its many valuable
improvements over Haskell 1.2, is finally getting ready,
but I also have a comment:

In the syntax for labeled fields (records) the symbol <- is chosen
as the operator used to associate a label with a value in
constructions and patterns:

data Date = Date {day, month, year :: Int}

today = Date{day <- 11, month <- 10, year <- 1995}

According to a committee member, there were no convincing reasons
why <- was chosen. Other symbols, like = and := were also considered.


Here are some (in my opinion) good reasons for using = instead of <- :

1. In ordinary declarations, :: is used to specify the type of a name
   and = is used to specify its value:

day, month, year :: Int
day = 11; month = 10; year = 1995

   so for consistency I think the same notations should be used
   inside record values:

data Date = Date {day, month, year :: Int}
date :: Date
date = Date {day = 11, month = 10, year = 1995}

2. The <- symbol is used also in list comprehensions and the new
   monad syntax ('do'):

[ 2*x | x <- [1..10] ]


do c <- getChar; putChar c

   In these uses of <- the name on the lhs does not have the same
   type as the expression on the rhs (above, x::Int, but [1..10]::[Int]
   and c::Char but getChar::IO Char). The value that the lhs name
   (or, indeed, pattern) is bound to is "extracted" from the value
   of the rhs expression. This is very different from what happens
   with field labels, so a difference in syntax is motivated.
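For comparison, this is how the example reads with `=` (the syntax that later versions of Haskell did in fact adopt); a small self-contained sketch, with `nextYear` added purely for illustration:

```haskell
data Date = Date { day, month, year :: Int }

today :: Date
today = Date { day = 11, month = 10, year = 1995 }

-- Record update syntax uses = as well, so construction, update and
-- ordinary declarations all bind values with the same symbol.
nextYear :: Date -> Date
nextYear d = d { year = year d + 1 }
```

With this convention, `::` always gives a type and `=` always gives a value, inside and outside record braces alike.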


Sadly, I suspect it would be difficult to convince the committee to
change their minds about this at this late stage, but I am sure it
would be even more difficult to change it for a later version of
Haskell...

Regards,

Thomas Hallgren






Haskell 1.3

1996-03-07 Thread Philip Wadler


Congratulations to all involved on Haskell 1.3!  I especially like the
introduction of qualified names and the attendant simplifications.
Here are some small suggestions for further improvement.


Interfaces
~~
Suggestion: the introduction should draw attention to the fact that
interface files are no longer part of the language.  Such a wondrous
improvement should not go unremarked!


ISO Character Set
~
Suggestion:  Add a one-page appendix, giving the mapping between
characters and character codes.


Fields and records
~~
Suggestion: Use = to bind fields in a record, rather than <-.
I concur with Thomas Hallgren's argument that <- should be reserved for
comprehensions and for `do'.  SML has already popularised the = syntax.

Suggestion: Use the SML syntax, `#field' to denote the function that
extracts a field.  Then there is no possibility of accidentally
shadowing a field name with a local variable.  Just as it is a great
aid to the readability of Haskell for constructors to be lexically
distinguished from functions, I predict it will also be a great aid for
field extractors to be lexically distinguished from functions.

(Alternative suggestion: Make field names lexically like constructor
names rather than like variable names.  This again makes shadowing
impossible, and still distinguishes fields from functions, though now
field extractors and constructors would look alike.)


The empty type
~~
Suggestion: Include among the basic types of Haskell a type `Empty'
that contains no value except bottom.

It was a dreadful oversight to omit the empty type from Haskell,
though it took me a long time to recognise this.  One day, I bumped
into the following example.  I needed the familiar type

data  Tree a  =  Null | Leaf !a | Branch (Tree a) (Tree a)

instantiated to the unfamiliar case `Tree Empty', which has `Null' and
`Branch' as the only possible constructors.

One can simulate the empty type by declaring

data  Empty = Impossible

and then vowing never to use the constructor `Impossible'.  But by
including `Empty' in the language, we support a useful idiom and
(perhaps more importantly) educate our users about the possibility of
an algebraic type with no constructors.

It would be folly to allow only non-empty lists.  So why do we allow
only non-empty algebraic types?
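Haskell did eventually allow exactly this: a `data` declaration with no constructors (standard since Haskell 2010; older GHCs need the EmptyDataDecls extension). A sketch of the `Tree Empty` example, with the strictness annotation on `Leaf` dropped and a `size` function added purely for illustration:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- An algebraic type with no constructors: its only value is bottom.
data Empty

data Tree a = Null | Leaf a | Branch (Tree a) (Tree a)

-- A Tree Empty can only be built from Null and Branch; Leaf is
-- unusable because no (non-bottom) value of type Empty exists.
t :: Tree Empty
t = Branch Null (Branch Null Null)

size :: Tree a -> Int
size Null         = 1
size (Leaf _)     = 1
size (Branch l r) = 1 + size l + size r
```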


The infamous (n+1) patterns
~~~
Suggestion:  Retain (n+1) patterns.

If Haskell was a language for seasoned programmers only, I would
concede that the disadvantages of (n+1) patterns outweigh the
advantages.

But Haskell is also intended to be a language for teaching.
The idea of primitive recursion is powerful but subtle.  I believe
that the notation of (n+1) patterns is a great aid in helping students
to grasp this paradigm.  The paradigm is obscured when recursion over
naturals appears profoundly different than recursion over any other
structure.

For instance, I believe students benefit greatly by first seeing

power x 0   =  1
power x (n+1)   =  x * power x n

and shortly thereafter seeing

product []  =  1
product (x:xs)  =  x * product xs

which has an identical structure.  By comparison, the definition

power x 0   =  1
power x n | n > 0   =  x * power x (n-1)

completely obscures the similarity between `power' and `product'.

As a case in point, I cannot see a way to rewrite the Bird and Wadler
text without (n+1) patterns.  This is profoundly disappointing,
because now that Haskell 1.3 is coming out, it seems like a perfect
time to do a new edition aimed at Haskell.  The best trick I know is
to define

data Natural = Zero | Succ Natural

but that doesn't work because one must teach recursion on naturals and
lists before one introduces algebraic data types.  Bird and Wadler
introduces recursion and induction at the same time, and that is one
of its most praised features; but to try to introduce recursion,
induction, and algebraic data types all three at the same time would
be fatal.
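The trick itself is easy enough to write down (the pedagogical objection above still stands); a sketch, where `fromInt` is a hypothetical helper, partial for negative arguments:

```haskell
data Natural = Zero | Succ Natural

-- Primitive recursion over Natural, structurally identical to
-- the recursion in product over lists.
power :: Int -> Natural -> Int
power _ Zero     = 1
power x (Succ n) = x * power x n

-- Hypothetical helper; loops for negative inputs.
fromInt :: Int -> Natural
fromInt 0 = Zero
fromInt n = Succ (fromInt (n - 1))
```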

Now, perhaps (n+1) patterns introduce a horrible hole in the language
that has escaped me; if so, please point it out.  Or perhaps no one
else believes that teaching primitive recursion is important; if so,
please say.  Or perhaps you know a trick that will solve the problem
of how to rewrite Bird and Wadler without (n+1) patterns; if so,
please reveal it immediately!

Otherwise, I plead, reinstate (n+1) patterns.


Yours,  -- P

---
Professor Philip Wadler[EMAIL PROTECTED]
Department of Computing Sciencehttp://www.dcs.glasgow.ac.uk/~wadler
University of Glasgow  office: +44 141 330 4966
Glasgow G12 8QQ   fax: +44 141 330 4913
SCOTLAND home: +44 141 357 0782

Re: Preliminary Haskell 1.3 report now available

1996-03-07 Thread Lennart Augustsson



I always favoured `=' over `<-', but I don't care much.

-- Lennart






Re: Preliminary Haskell 1.3 report now available

1996-03-07 Thread alms



 Thomas Hallgren [EMAIL PROTECTED] writes:
 
  In the syntax for labeled fields (records) the symbol <- is chosen
  as the operator used to associate a label with a value in
  constructions and patterns:
 [...]
  According to a committee member, there were no convincing reasons
  why <- was chosen. Other symbols, like = and := were also considered.
 
 I support Thomas Hallgren's suggestion that `=' be used instead.
 Another reason, in addition to the two he mentioned, is that the `<-'
 symbol is very unintuitive when used for pattern matching, because the
 arrow is in the *opposite* direction to the data-flow.  I find this
 very confusing.
 

Indeed, a couple of reasons I find convincing myself:
1 - SML uses '=' too, therefore it is one less problem for people
moving to/from SML/Haskell.
2 - The '<-' notation always reminds me of list comprehensions,
e.g. at first sight if I see an expression like
R{v <- [1..10]}
I could think v is an integer (taken from [1..10]) when it is actually a list.
The following expression is also confusing:
[R{v <- [1..x]} | x <- [1..10]]
(defines a list of records)
An expression using records on the rhs of the '|' should be even more
interesting (and useful for obfuscated Haskell competitions).  The same
applies for records with fields defined with list comprehensions.
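With `=` (as eventually adopted) the same expression is unambiguous, since `<-` then only ever means a comprehension or do binding; a sketch, with the record type `R` made concrete for illustration:

```haskell
data R = R { v :: [Int] }

-- A list of records, each holding a list field: = binds the field,
-- <- binds the comprehension variable, so they cannot be confused.
rs :: [R]
rs = [ R { v = [1 .. x] } | x <- [1 .. 10] ]
```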

Andre.

Andre SantosDepartamento de Informatica
e-mail: [EMAIL PROTECTED] Universidade Federal de Pernambuco
http://www.di.ufpe.br/~alms CP 7851, CEP 50732-970, Recife PE Brazil






Preliminary Haskell 1.3 report now available

1996-03-06 Thread peterson-john



Announcing a preliminary version of the Haskell 1.3 report.

The Haskell 1.3 report is nearly complete.  All technical issues
appear to be resolved and the report is nearly ready.  The report
will be finalized April 19.  Any comments must be submitted by
April 15.  We do not anticipate making any serious technical changes
to the current version.

The report is being made available both on the web and as a .dvi file.

A summary of changes made in the Haskell 1.3 report can be found in

http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

This has pointers to the html version of the report.

The dvi file is available via anonymous ftp at
ftp://haskell.cs.yale.edu/pub/haskell/report/new-report.dvi.gz

Send comments or questions to [EMAIL PROTECTED]









Haskell 1.3?

1996-03-05 Thread Tommy Thorn


Quoting from "Introducing Haskell 1.3" (http://www.cs.yale.edu/
HTML/YALE/CS/haskell/haskell13.html):

 "The final version of the Haskell 1.3 is expected to be complete in
  January, 1996."

Does anyone know what happened?

Regards, Tommy
-- 
 "When privacy is outlawed, only outlaws will have privacy."
  -- Phil Zimmerman






Haskell 1.3 nearly ready

1995-12-12 Thread peterson-john


The Haskell 1.3 effort is nearly complete.  Although a new report is
not yet complete, all proposed changes to the language as well as the
new Prelude are now available for public comment.  These documents are
available on the web at

http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

Any feedback is appreciated!  A new report should be ready soon.

  John Peterson
  [EMAIL PROTECTED]
  Yale Haskell Project





Re: Haskell 1.3: modules & module categories

1995-10-02 Thread Johannes Waldmann


  With present Haskell modules, it seems that `with'
  automatically comes with `use' and clutters up your namespace.
  That's why you sometimes need re-naming when importing.

Sorry, I missed that one. Manuel pointed out that with/use 
is already contained in the `qualified names'-proposal.
When I'm comparing Haskell to Ada, it seems that basically

import Foo            =  with Foo; use Foo;
import qualified Foo  =  with Foo;

Still I'd like to have Ada's `use' on its own, as in

with Text_Io;
package Foo is
  ...
  procedure Bar is
use Text_Io; 
  begin
...
  end;
  ...
end Foo;

And while we're at it, what about
- nested modules
- with possibly private sub-modules
similar to the Ada(-95) things.

-- 
Johannes Waldmann, Institut f\"ur Informatik, UHH, Jena, D-07740 Germany,
(03641)  630793  [EMAIL PROTECTED]  http://www.minet.uni-jena.de/~joe/
... In the next issue: Working in a radio factory - Friendship with the
son of an air force general - The KGB watches the American's every move -
Alarming grounds for suspicion at the rabbit hunt - Unhappily in love
with a red-haired Jewish woman





Haskell 1.3: modules & module categories

1995-09-30 Thread Manuel Chakravarty


Hi!

Talking to a friend, who is project manager in a software company, about
modules for Haskell, he made two comments that may be of interest to the
current discussion.

(1) With regard to the idea of 99% hand-written interfaces (just mark everything
that should go into the interface in a combined interface/implementation
file) that I proposed and that was supported by Peter, my friend pointed
out that this could make multiple implementations for one interface a bit
more labour. You basically have to guarantee that the interfaces extracted
out of the combined file for version one and version two of the
implementation are equal, i.e., the interface is duplicated in both
versions.

Still, I find this less onerous than having a separate implementation and
interface for three reasons: (1) the common case is one implementation for
one interface (better shift the labour to the occasional case); (2) in the
Modula-2 style there is also some duplication of code (procedure/function
signatures); and (3) in the case of two implementations for one interface
you have to deal with issues of consistency between the versions anyway.

(2) He pointed out that it is desirable to be able to restrict the access to
some modules in a way that the compiler can control when a group of people
is working in one module hierarchy. To illustrate this, assume that we
classify the modules into different levels of abstraction, say, three
levels: 

  level 3 modules
        |
        v
  level 2 modules
        |
        v
  level 1 modules

Now the modules in level 2 may use the modules from level 1; the modules
from level 3 may use the modules from level 2, but *not* the modules from
level 1---I think it is clear that such a case is rather frequent.  Such
access control may be easy to achieve when it is possible to deny the
people working on level 3 access to the interfaces of level 1 (e.g.,
don't give them the interfaces, or use UNIX file permissions).  But this
may often not be possible, for instance because some people are working
on modules in both level 2 and level 3.  So, we would like some way to
tell the compiler simply not to allow modules from level 3 to import
(directly) modules from level 1.

Actually, C++ has a rather ad-hoc solution (are you surprised?) to this
problem: `friends'.  An object may be a friend of another object; then,
that object can access (private) fields that are not visible to other,
non-friend objects.  The problem here is that the object providing some
service has to specify all its friends, by name.  If it is required to add a
new friend, the used object has to be changed.  Consider, in our example
hierarchy, that you want to split some existing level 2 module into two
modules; using friends, this requires changing modules in level 1, which
is obviously bad.

Now, what about the following idea?  Each module is an element of a module
category.  Such categories are named and each module states to which
category it belongs.  Furthermore, a module lists all categories whose
members may import it.  In the example, we have three categories, say,
Level1, Level2, and Level3.  All modules in Level1 allow themselves to be
imported from Level2, and all modules in Level2 allow themselves to be
imported from Level3.  This prevents imports of modules of category Level1
from modules in category Level3, and is easy for the compiler to check.
Splitting a module does not require any changes in underlying categories.

Cheers,

Manuel






Re: Haskell 1.3 (newtype)

1995-09-19 Thread wadler


Sebastian suggests using some syntax other than pattern
matching to express the isomorphism involved in a newtype.
I can't see any advantage in this.

Further, Simon PJ claims that if someone has written

data Age = Age Int
foo (Age n) = (n, Age (n+1))

that we want to be able to make a one-line change

newtype Age = Age Int

leaving all else the same: in particular, no need to add
twiddles, and no changes of the sort Sebastian suggests.
I strongly support this!  (No, Simon and I are not in collusion;
indeed, we hardly ever talk to each other!  :-)

Cheers,  -- P





Re: Haskell 1.3 (newtype)

1995-09-13 Thread wadler


Well, I'm glad to see I provoked some discussion!

Simon writes:

   Lennart writes:
   
   | So if we had
   | 
   |data Age = Age !Int
   |foo (Age n) = (n, Age (n+1))
   | 
   | it would translate to
   | 
   |foo (MakeAge n) = (n, seq MakeAge (n+1))
   | 
   | [makeAge is the "real" constructor of Age]
   
   Indeed, the (seq MakeAge (n+1)) isn't eval'd till the second
   component of the pair is.  But my point was rather that foo
   evaluates its argument (MakeAge n), and hence n, as part of its
   pattern matching.  Hence foo is strict in n.

Why should foo evaluate its argument?  It sounds to me like
Lennart is right, and I should not have let Simon lead me astray!

I think its vital that users know how to declare a new isomorphic
datatype; it is not vital that they understand strictness declarations.
Hence, I favor that

newtype Age = Age Int
data Age = Age !Int

be synonyms, but that both syntaxes exist.

This is assuming I have understood Lennart correctly, and that

foo (Age n) = (n, Age (n+1))
foo' a = (n, Age (n+1)) where (Age n) = a

are equivalent when Age is declared as a strict datatype. Unlike
Sebastian or Simon, I believe it would be a disaster if for a newtype
one had to distinguish these two definitions.
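In the Haskell that emerged, newtype matching indeed does not force the argument, while a data match does, so the two definitions above do coincide for a newtype. A small sketch of the observable difference (AgeN and AgeD are illustrative names):

```haskell
newtype AgeN = AgeN Int   -- newtype: the constructor is a no-op at runtime
data    AgeD = AgeD Int   -- data: pattern matching forces the argument

ageN :: AgeN -> Int
ageN (AgeN _) = 0         -- does NOT force its argument

ageD :: AgeD -> Int
ageD (AgeD _) = 0         -- forces its argument to head normal form
```

Here `ageN undefined` yields 0, while `ageD undefined` is bottom.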

Cheers,  -- P
   





Re: Haskell 1.3 (newtype)

1995-09-13 Thread Simon L Peyton Jones



Lennart writes:

| So if we had
| 
|   data Age = Age !Int
|   foo (Age n) = (n, Age (n+1))
| 
| it would translate to
| 
|   foo (MakeAge n) = (n, seq (n+1) (MakeAge (n+1)))
| 
| [makeAge is the "real" constructor of Age]
| 
| Now, surely, seq does not evaluate its first argument when the
| closure is built, does it?  Not until we evaluate the second component
| of the pair is n evaluated.

Indeed, the seq (n+1) (MakeAge (n+1)) isn't eval'd till the second component
of the pair is.  But my point was rather that foo evaluates its argument
(MakeAge n), and hence n, as part of its pattern matching.  Hence
foo is strict in n.

Sebastian writes:

| Is it really a good idea to extend the language simply to allow foo and 
| foo' to be equivalent? The effect of foo' can still be achieved if Age is 
| a strict data constructor:
| 
|   data Age = Age !Int
|
|   foo'' :: Age -> (Int, Age)
|   foo'' a = (n, Age (n+1)) where (Age n) = a
| 
| and compilers are free (obliged?) to represent a value of type Age by an
| Int.

Indeed, it's true that foo'' does just the right thing.  Furthermore, I
believe it's true that given the decl

data T = MkT !S

the compiler is free to represent a value of type T by one of type S (no
constructor etc).

Here are the only real objections I can think of to doing "newtype" via a
strict constructor. None are fatal, but they do have a cumulative effect.

1. It requires some explanation... it sure seems a funny way to
   declare an ADT!

2. The programmer would have to use let/where bindings to project values
from the new type to the old, rather than using pattern matching.  Perhaps
not a big deal.

3. We would *absolutely require* to make (->) an instance of Data.  It's
   essential to be able to get

data T = MkT !(Int -> Int)

4. We would only be able to make a completely polymorphic "newtype" if
we added a quite-spurious Data constraint, thus:

data Data a => T a = MkT !a

(The Data is spurious because a value of type (T a) is going to be
represented by a value of type "a", and no seqs are actually going to be
done.)

5.  We would not be able to make a newtype at higher order:

data T k = MkT !(k Int)

because there's no way in the language to say that (k t) must be in class
Data for all t.  

[This is a somewhat subtle restriction on where you can put strictness
annotations, incidentally, unless I've misunderstood something.]
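(Objection 5 is worth a concrete illustration. With newtype as later adopted, rather than the strict-constructor encoding, wrapping an application of a type variable is unproblematic, since no Data-style constraint on (k t) is ever needed. The sketch below is illustrative modern Haskell, not part of the original message.)

```haskell
-- Sketch: a "newtype at higher order", which the strict-constructor
-- encoding cannot express (there is no way to demand Data (k t) for all t).
newtype T k = MkT (k Int)

unT :: T k -> k Int
unT (MkT x) = x

main :: IO ()
main = print (unT (MkT [1, 2, 3]) :: [Int])   -- prints [1,2,3]
```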


Simon
   





Re: Haskell 1.3 (Bounded;fromEnum;type class synonyms)

1995-09-12 Thread reid



Dear Sverker Nilsson,

Thanks for your message - interesting ideas and interesting questions.
[I'm copying the reply to the Haskell mailing list in case anyone
wishes to support your suggestions.]

First, one of Haskell's annoying features is that the scope of a type
variable in a type signature or instance heading only extends over the
signature.  So, when you want to write:

 instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
 enumFrom c = map fromInt [toInt c .. toInt (maxVal :: a)]

It doesn't work (because the "a" isn't in scope during the
declarations) - you have to use "asTypeOf" instead:

 instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
 enumFrom c = map fromInt [toInt c .. toInt (maxVal `asTypeOf` c)]


While developing something like the proposed "Bounded" class, 
you introduced separate classes for minVal and maxVal observing:

 Something having a minimum value, in my view, didn't necessarily
 imply it would have a maximum value.

Yes, perfectly true.  The best example is that there's a minimal list
(the empty list) but even though there's a maximal Char (say), there's
no maximal list of characters.  

Our primary motivation for adding Bounded is to clean up the
{min,max}{Char,Int} situation and make the derived Enum instances
slightly more regular (similar in spirit to your definitions above).
For this purpose, insisting on having both a min and a max isn't a
problem.

However, for other purposes, having one bound but not the other is
certainly possible and maybe useful.  

(I agree that defining a bogus instance in which "minVal" (say) is
defined but "maxVal" is undefined or has a bogus value is at least
untidy and at worst a bug waiting to happen.  I tried (and failed) to
get the Text instance of (a - b) removed from the Prelude for this
reason.)


The major disadvantage of separating the two is that it introduces
even more classes.

If you read the preludechanges document carefully, you'll see that
(even at this late stage) these are only proposed changes.  Glasgow
argue that it's hard enough to keep Ix and Enum separate in your mind
- adding another can only worsen things.


You were then surprised and disturbed to find that this isn't legal
Haskell:

 class (MinVal a, MaxVal a) => Bounded a
 
 instance Bounded T where
maxVal = T3
minVal = T1

There was a proposal to make this legal.  As far as I know, there's no
technical problems here - I guess it just got forgotten about (or the
proposer decided that Haskell 1.3 had too many changes in it already!)


 * Should Bounded be derived from Ord?
 
 The Bounded class that was suggested for Haskell 1.3 was derived from
 Ord. Myself playing with similar things I derived MinVal and MaxVal
 from nothing - I thought this more general. Maybe the reason for
 having Bounded derived from Ord was to imply that its functions shall
 satisfy certain laws, probably as being min/max as defined by the
 ordering functions in Ord. But as I don't see how this can be
 guaranteed by deriving Bounded from Ord, I would think that it could
 as well be standalone (or derived from something like MinBound and
 MaxBound if possible); for more generality and less dependency between
 the classes in the system.

Yes, the sole reason is that it seemed tidier to specify Ord -
without knowing which comparison is being used, it doesn't make much
sense to say you have a "maximum value".

 For example, the new proposal says:
 
  ...
  Programmers are free to define a class for partial orderings; here, we
  simply state that Ord is reserved for total orderings.
 
 That seems to imply also that a programmer should not use Bounded on
 types that have no total ordering. I believe this might be an unnecessary
 restriction.

It certainly looks that way.

  The names fromEnum and toEnum are misleading since
  their types involve both Enum and Bounded.  We couldn't face writing 
  fromBoundedEnum and toBoundedEnum.  Suggestions
  welcome. 
 
 Maybe names like ToInt and FromInt could be used for this?

 How about the following, assuming the proposed diff and succ functions:
 
 class (Bounded a, Enum a) => ToInt a where toInt :: a -> Int [...]
 class (Bounded a, Enum a) => FromInt a where fromInt :: Int -> a [...]

These names look good.  Three _minor_ concerns:

1) It introduces even more standard classes to confuse programmers
   with.  Why allow the programmer to override them?

2) Several implementations have added a non-standard method 

 fromInt :: Int -> a

   to the Num class to avoid unnecessary uses of fromInteger.

   However, I think most normal uses would work unchanged if "fromInt"
   had type:

 fromInt :: (Bounded a, Enum a) => Int -> a

3) There is a weak tradition of putting the name of the class into the
   name of the method.

   This tradition is often broken when it would get in the way of a
   good name.

Action:

1) I'll remove

Re: Haskell 1.3 (newtype)

1995-09-12 Thread Sebastian Hunt



On Tue, 12 Sep 1995, Lennart Augustsson wrote:

 The posted semantics for strict constructors, illustrated by this example
 from the Haskell 1.3 post, is to insert seq.
 
  data R = R !Int !Int
  
  R x y = seq x (seq y (makeR x y)) -- just to show the semantics of R
 
 So if we had
 
   data Age = Age !Int
   foo (Age n) = (n, Age (n+1))
 
 it would translate to
 
   foo (MakeAge n) = (n, seq (n+1) (MakeAge (n+1)))
 
 [makeAge is the "real" constructor of Age]

I had assumed (as Simon seems to) that the semantics of pattern matching 
against a strict constructor would accord with the following:

1.  matching a simple pattern involves evaluating the expression being 
matched to the point that its outermost constructor is known

2.  for strict constructors this must result in the annotated
constructor argument(s) being evaluated

From what Lennart says, this is not the intended semantics. So what *is* 
the intended semantics?

Sebastian Hunt






Re: Haskell 1.3 (lifted vs unlifted)

1995-09-12 Thread smk


John Hughes mentioned a deficiency of Haskell:
  OK, so it's not the exponential of a CCC --- but 
  Haskell's tuples aren't the product either, and I note the proposal to 
  change that has fallen by the wayside. 

and Phil Wadler urged to either lift BOTH products and functions,
or none of them.

My two pence:
If functions/products are not products and exponentials of a CCC, you
should aim for the next best thing: an MCC, a monoidal closed category.
But Haskell's product isn't even monoidal:

There is no type I such that A*I and A are isomorphic.
The obvious candidate (in a lazy language) would be
the empty type 0, but A*0 is not isomorphic to A but to the lifting of A.

Another problem: the function space  A*B -> C  should be naturally
isomorphic to  A -> (B -> C).  What does the iso look like?
One half is the obvious curry function:

curry f x y = f(x,y)

But what is the other half?  Apparently, it should be either

uncurry1 f (x,y) = f x y

or

uncurry2 f (~(x,y)) = f x y

Which one is right depends on which one establishes
the isomorphism.  Consider the definition

f1 (x,y) = ()

Now:
uncurry1 (curry f1) undef =
undef =
f1 undef

while on the other hand:
uncurry2 (curry f1) undef =
curry f1 (p1 undef, p2 undef) =
f1(p1 undef,p2 undef) =
() =/=
f1 undef

This suggests that uncurry2 is wrong and uncurry1 is right, but for

f2 (~(x,y)) = ()

the picture is just the other way around.
BTW  It doesn't help to employ "seq" in the body of curry.
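(Stefan's example can be checked directly in a modern compiler. The try/evaluate scaffolding below is only there to observe bottom safely; it is an illustrative sketch, not part of the original message.)

```haskell
import Control.Exception (SomeException, evaluate, try)

-- The two candidate inverses of curry, as in the message above.
uncurry1 :: (a -> b -> c) -> (a, b) -> c
uncurry1 f (x, y) = f x y          -- forces the pair constructor

uncurry2 :: (a -> b -> c) -> (a, b) -> c
uncurry2 f ~(x, y) = f x y         -- irrefutable: never forces the pair

f1 :: (a, b) -> ()
f1 (_, _) = ()

main :: IO ()
main = do
  r1 <- try (evaluate (uncurry1 (curry f1) undefined))
          :: IO (Either SomeException ())
  r2 <- try (evaluate (uncurry2 (curry f1) undefined))
          :: IO (Either SomeException ())
  putStrLn (either (const "bottom") (const "()") r1)  -- bottom
  putStrLn (either (const "bottom") (const "()") r2)  -- ()
```

Running it shows uncurry1 (curry f1) diverging on undefined while uncurry2 (curry f1) returns (), exactly the asymmetry discussed above.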


Looks rather messy.
Can some of this be salvaged somehow?

--
Stefan Kahrs





Re: Haskell 1.3 (Bounded;fromEnum;type class synonyms)

1995-09-12 Thread Sverker Nilsson


* Playing around, learning the basics, reinventing the wheel...

I had been playing around with some classes, primarily to learn for
myself, being new to the Haskell language, when I got the report on
the current status of Haskell 1.3. The classes I had played with had
some similarities to some of the proposals for the new prelude, yet I
had made it in a quite different way. Trying to combine the two
styles, I ran into an unexpected problem. This problem I am naive
enough to believe could be solved by a simple language extension. 

Using Gofer, I had made some classes that could be used for
implementing ordering and other things for enumeration (data T=T1 |
T2 | T3) types but not restricted to those. I made 4 minimal classes
with just 1 function in each.  (I thought this would be most general.
Something having a minimum value, in my view, didn't necessarily
imply it would have a maximum value.) So:

class FromInt a where
    fromInt :: Int -> a

class ToInt a where
    toInt :: a -> Int

class MaxVal a where
    maxVal :: a

class MinVal a where
    minVal :: a

-- I then used this as follows:

data T = T1 | T2 | T3

instance ToInt T where
    toInt e = case e of
        T1 -> 1
        T2 -> 2
        T3 -> 3

instance Eq T where
a == b = toInt a == toInt b

instance Ord T where
    a <= b = toInt a <= toInt b

-- And so on. The MaxVal and MinVal classes also where used to make a generic
-- implementation of a bounded Enum class, generalizing how it was made in the
-- Gofer prelude for Char:

instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
    enumFrom c        = map fromInt [toInt c .. toInt (maxVal `asTypeOf` c)]
    enumFromThen c c' = map fromInt [toInt c,
                           toInt c' .. toInt (lastVal `asTypeOf` c)]
      where lastVal = if c' < c then minVal else maxVal


-- This worked to my great delight! And I had began to learn the basics
-- of the type system in Haskell. My only problem was that I had to use
-- (maxVal `asTypeOf` c) instead of (maxVal::a). I believe the reason
-- for this might be clear when I learn more. Somebody have a clue?
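-- (Editorial sketch: the (maxVal :: a) problem is one of scoping; the `a`
-- in a signature does not scope over the body in Haskell.  Modern GHC's
-- ScopedTypeVariables extension, with an explicit forall, gives the
-- behaviour expected here.  The code below is illustrative, not Haskell 1.3.)

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

class MaxVal a where
    maxVal :: a

instance MaxVal Int where
    maxVal = maxBound

-- The forall brings `a` into scope in the body, so the annotation
-- that failed above now works without asTypeOf.
lastOf :: forall a. MaxVal a => a -> a
lastOf _ = (maxVal :: a)

main :: IO ()
main = print (lastOf (0 :: Int))
```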

*   Running into a problem: type class synonyms are not synonymous?

Then, I got the report on the developments of Haskell 1.3 and began to read
it with great curiosity. I then found the Bounded class, containing
corresponding functions to MinVal and MaxVal. A question then occured to me:
Why not have separate classes as I had done? Would not that perhaps be more
general, increasing the possibilities for reuse? (Without having to stub out
one of minBound or maxBound if you use it for a type without one of them.)
On the other hand, I saw the convenience of having both minBound and maxBound
in the same class, decreasing the number of classes that have to be mentioned
in various cases. But I thought, then, why not derive the Bounded class
from MinVal and MaxVal - would not that then be equivalent? So I tried

class (MinVal a, MaxVal a) => Bounded a   -- This was allowed, but then...

instance Bounded T where
maxVal = T3
minVal = T1

-- That didn't work! (Gofer said: ERROR "tst.gs" (line 45): No member
 "maxVal" in class "Bounded") Maybe I had done something wrong, or Gofer
does not allow something that would be allowed in Haskell? I suspect
however that I am simply not supposed to do this in either Haskell or Gofer...

Instead I had to use two separate instantiations, exactly as before
I declared the Bounded class:

instance MinVal T where
minVal = T1
instance MaxVal T where
maxVal = T3

This seems to be somewhat unnecessary, wouldn't it be quite possible
for a compiler to transform the instantiation of Bounded to the two
instantiations of MinVal and MaxVal?

Maybe this would be a useful development of Haskell?

*   Should Bounded be derived from Ord?

The Bounded class that was suggested for Haskell 1.3 was derived from
Ord. Myself playing with similar things I derived MinVal and MaxVal
from nothing - I thought this more general. Maybe the reason for
having Bounded derived from Ord was to imply that its functions shall
satisfy certain laws, probably as being min/max as defined by the
ordering functions in Ord. But as I don't see how this can be
guaranteed by deriving Bounded from Ord, I would think that it could
as well be standalone (or derived from something like MinBound and
MaxBound if possible); for more generality and less dependency between
the classes in the system.

For example, the new proposal says:

 ...
 Programmers are free to define a class for partial orderings; here, we
 simply state that Ord is reserved for total orderings.

That seems to imply also that a programmer should not use Bounded on
types that have no total ordering. I believe this might be an unnecessary
restriction.

*   Can toInt be fromEnum and toEnum fromInt?

New functions fromEnum and toEnum were p

Re: Haskell 1.3 (newtype)

1995-09-12 Thread Simon L Peyton Jones



Phil writes:


| By the way, with `newtype', what is the intended meaning of
| 
|   case undefined of Foo _ -> True ?
| 
| I cannot tell from the summary on the WWW page.  Defining `newtype'
| in terms of `datatype' and strictness avoids any ambiguity here.
| 
| Make newtype equivalent to a datatype with one strict constructor.
| Smaller language, more equivalences, simpler semantics, simpler
| implementation.  An all around win!

I believe it would be a mistake to do this!  Consider:

newtype Age = Age Int

foo :: Age -> (Int, Age)
foo (Age n) = (n, Age (n+1))

Now, we intend that a value of type (Age Int) should be represented by
an Int.  Thus, apart from the types involved, the following program should
be equivalent:

type Age' = Int

foo' :: Age' -> (Int, Age')
foo' n = (n, n+1)

So is foo' strict in n? No, it isn't.  What about foo?  If newtype is just a
strict data constructor, then it *is* strict in n.

Here's what I wrote a little while ago:

"This all very well, but it needs a more formal treatment.  As it happens, I
don't think it's difficult.  In the rules for case expressions (Figs 3 & 4 in
the 1.2 report) we need to say that the *dynamic* semantics of

case e of { K v -> e1; _ -> e2 }
is
let v = e in e1

if K is the constructor of a "newtype" declaration.
(Of course this translation breaks the static semantics.)

Similarly, the dynamic semantics of (K e) is just that of "e", if
K is the constructor of a "newtype" decl."

Does that make the semantics clear, Phil?

Simon






Re: Haskell 1.3 (lifted vs unlifted)

1995-09-12 Thread wadler


To the Haskell 1.3 committee,

Two choices in the design of Haskell are:
Should products be lifted?
Should functions be lifted?
Currently, the answer to the first is yes, and to the second is no.
This is ad hoc in the extreme, and I am severely embarrassed that I did
not recognise this more clearly at the time we first designed Haskell.

Dear committee, I urge you, don't repeat our earlier mistakes!  John
Hughes makes a compelling case for yes; and mathematical cleanliness
makes a compelling case for no.  I slightly lean toward yes. (John is a
persuasive individual!)  But unless someone presents a clear and clean
argument for answering the two questions differently, please answer
them consistently.

If both questions are answered yes, then there is a choice as to
whether or not to have a Data class.  Indeed, there are two choices:
Should polymorphic uses of seq be marked by class Data?
Should polymorphic uses of recursion be marked by class Rec?
John Launchbury and Ross Paterson have written a beautiful paper urging
yes on the latter point; ask them for a copy.  Here, I have a mild
preference to answer both questions no, as I think the extra
complication is not worthwhile.  But again, please answer them
consistently.

Cheers,  -- P





Re: Haskell 1.3 (newtype)

1995-09-12 Thread wadler


The design of newtype appears to me incorrect.

The WWW page says that declaring

newtype Foo = Foo Int

is distinct from declaring

data Foo = Foo !Int

(where ! is a strictness annotation) because the former gives

case Foo undefined of Foo _ -> True  =  True

and the latter gives

case Foo undefined of Foo _ -> True  =  undefined.


Now, on the face of it, the former behaviour may seem preferable.  But
trying to write a denotational semantics is a good way to get at the
heart of the matter, and the only way I can see to give a denotational
semantics to the former is to make `newtype' define a LIFTED type, and
then to use irrefutable pattern matching.  This seems positively weird,
because the whole point of `newtype' is that it should be the SAME as
the underlying type.

By the way, with `newtype', what is the intended meaning of

case undefined of Foo _ -> True ?

I cannot tell from the summary on the WWW page.  Defining `newtype'
in terms of `datatype' and strictness avoids any ambiguity here.

Make newtype equivalent to a datatype with one strict constructor.
Smaller language, more equivalences, simpler semantics, simpler
implementation.  An all around win!

Cheers,  -- P










Re: Haskell 1.3

1995-09-11 Thread John Launchbury


I would like to respond to John's note. My response is largely positive,
though I disagree with a couple of points.

However, it is an independent question whether or not strictness annotations
should be applicable to function types. And this is where I disagree with
the committee. To quote `Introducing Haskell 1.3',

Every data type, except (->), is a member of the Data class.

In other words, in Haskell 1.3

FUNCTIONS ARE NOT FIRST-CLASS CITIZENS

I cannot agree here. Functions are not members of the equality class either,
but that does not demote them to second class citizens. However, John may be
right in suggesting that people will become more reluctant to use functions
as values if they cannot force their evaluation.

I see a very great cost in such a philosophical change, and I do not see
that the arguments against strictly evaluating function values are so very
compelling.

  Implementation difficulties? hbc has provided it for years, and
  even under the STG machine is the problem so very much harder than handling
  shared partial applications correctly?

I haven't checked hbc, but I would be interested if someone would confirm
that function strictify works properly. It didn't use to in LML.

  Semantic difficulties? The semantics of lifted function spaces are
  perfectly well defined. OK, so it's not the exponential of a CCC --- but
  Haskell's tuples aren't the product either, and I note the proposal to
  change that has fallen by the wayside.

This is probably an important point. I see there being value in two sorts
of functions: lifted and non-lifted (or equivalently boxed and unboxed).
A lifted function may be expressed as a computation which delivers a function,
just like lifted integers are computations which deliver integers. Under this
view it would be entirely in keeping with the rest of Haskell for the standard
functions to be lifted, and to leave open the possibility in the future of
introducing unlifted functions.

So here's my proposal: change `Introducing Haskell 1.3' to read

Every data type, including (->), is a member of the Data class.

I am inclined to agree. Is there a problem then that every type is in Data?
Not at all. The Data class indicates that forcing has been used in the
body of an expression. This is valuable information that is exposed in
the type.

John.







Re: Haskell 1.3

1995-09-11 Thread John Hughes




Let me make one more attempt to persuade the committee to change the way
strictness annotations are to be introduced.

First of all, let's recognise that strictness annotations and the seq
function are of enormous importance; this is a vital extension to the
language, not a small detail. Space debugging consists to quite a large
extent of placing applications of seq correctly, and we all know what
dramatic effects space debugging has been able to achieve. The strictness
features are going to be very heavily used in the future.

Recording uses of polymorphic strictness annotations using class Data has
both advantages and disadvantages. A big disadvantage is that curing a space
bug may change the types of many functions in many modules, which at the
least may require a lot of recompilation. The programmer who likes to state
the type of each function will be especially hard hit, of course, which will
unfortunately discourage such a style. But class Data seems to be vital for
cheap deforestation, which is such an important optimisation as to outweigh
the disadvantages.

However, it is an independent question whether or not strictness annotations
should be applicable to function types. And this is where I disagree with
the committee. To quote `Introducing Haskell 1.3',

Every data type, except (->), is a member of the Data class.

In other words, in Haskell 1.3

FUNCTIONS ARE NOT FIRST-CLASS CITIZENS

To design a functional language today, in which this is true, is in my view
deeply mistaken. In the past, I've argued that it will be very frustrating
for those programmers who do discover they need to apply seq to a function
in order to cure a space bug, to find that they are unable to do so. Even
more seriously, programmers weighing up a choice of representation for an
abstract datatype, choosing between a representation as a function or as a
`Data' type, will know that if they choose the function then problems with
space debugging may lurk in the future. Excluding (->) from class Data is a
step away from true `functional' programming towards a style in which
higher-order functions are just a kind of macro.

I see a very great cost in such a philosophical change, and I do not see
that the arguments against strictly evaluating function values are so very
compelling. 

  Implementation difficulties? hbc has provided it for years, and
  even under the STG machine is the problem so very much harder than handling
  shared partial applications correctly? 

  Semantic difficulties? The semantics of lifted function spaces are 
  perfectly well defined. OK, so it's not the exponential of a CCC --- but 
  Haskell's tuples aren't the product either, and I note the proposal to 
  change that has fallen by the wayside. 

  Weaker strictness analysis? I'd like to hear the effect quantified. How
  much slower will Haskell 1.3 run if function spaces are lifted in the
  semantics? Will it be measurable? I'm prepared to pay a few percent.

So here's my proposal: change `Introducing Haskell 1.3' to read

Every data type, including (->), is a member of the Data class.

John Hughes





Haskell 1.3 Prelude changes

1995-09-09 Thread John C. Peterson


Changes to the Haskell 1.3 Prelude

The following changes have been proposed (or accepted) for Haskell 1.3.


* Reorganize the Ord class  
* Add succ and diff to Enum 
* Add new class "Bounded"   
* Add strictness annotation to Complex and Ratio
* Use Int in take, drop and splitAt 
* Add replicate, lookup, curry and uncurry  
* Move functions into libraries 
* Non-overloaded versions of PreludeList functions 
* Numeric Issues
* Simplify lex  
* Add undefined 
* Monad Class   

 
 Changes to Ord

In Haskell 1.2, two comparisons are required to do a "three way branch":

if x == y then ...
else if x < y then ...
else ...

Even a standard two way branch can be inefficient - here's the 
default definition of "<" in the standard prelude:

x < y = x <= y && x /= y

Instead of defining a <= operator which returns just two values, it
is almost as easy to define an operator which returns three different
values:

   case compare x y of
EQ -> ...
LT -> ...
GT -> ...

The constructors EQ, LT, and GT belong to
a new type: Ordering.
In addition to this efficiency problem, many uses of Ord such as 
sorting or operations on ordered binary trees assume total ordering.
The compare operation formalizes this concept: it can not
return a value which indicates that its arguments are unordered.
Programmers are free to define a class for partial orderings; here, we
simply state that Ord is reserved for total orderings.
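(The pay-off of compare is clearest in exactly the ordered-binary-tree setting mentioned above: one three-way branch per node instead of two Bool tests. The tree type and functions below are an illustrative sketch, not part of the proposal.)

```haskell
-- Sketch: the proposed compare in use for ordered binary trees.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r) = case compare x v of
  LT -> Node (insert x l) v r   -- one comparison decides all three cases
  EQ -> t
  GT -> Node l v (insert x r)

toList :: Tree a -> [a]
toList Leaf = []
toList (Node l v r) = toList l ++ [v] ++ toList r

main :: IO ()
main = print (toList (foldr insert Leaf [3, 1, 2, 3 :: Int]))  -- [1,2,3]
```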

Proposed Changes:

 * Add a new type:

data Ordering = LT | EQ | GT  deriving (Eq,Ord,Ix,Enum,Bounded,Text)

 * Delete comment in definition of class Ord which explains how
   to define min and max for both total and partial orders.
 * Change definition of Ord to

class Ord a where
   compare :: a -> a -> Ordering
   (<), (<=), (>=), (>) :: a -> a -> Bool
   max, min :: a -> a -> a
   -- circular default definition:
   -- either <= or compare must be explicitly provided
 x <  y = compare x y == LT
 x <= y = compare x y /= GT
 x >  y = compare x y == GT
 x >= y = compare x y /= LT
 compare x y
   | x == y    = EQ
   | x <= y    = LT
   | otherwise = GT
 max x y = case compare x y of
LT -> y
_  -> x
 min x y = case compare x y of
LT -> x
_  -> y

 * Change definitions of Ord instances in PreludeCore.  At present,
   Ord instances define the "=" method.  These should be deleted and
   replaced by definitions of the "compare" method. 
 * Add this sentence to Appendix E:

   "The operator compare is defined so as to compare its arguments
lexicographically (with earlier constructors in the datatype 
declaration counting as smaller than later ones) returning
LT, EQ and GT (respectively) as the first argument is strictly
less than, equal to and strictly greater than the second argument
(respectively)."


   The methods <, <=, >, >= could be removed from Ord and turned into 
   ordinary overloaded functions.  For efficiency, these could be
   specialized; the GHC specialize pragma allows an explicit definition
   of a function at a particular overloading:

  Specialize (<=) :: Int -> Int -> Bool = primLeInt





Add succ and diff to Enum



Haskell 1.2 provides very limited facilities for operating on
enumerations.  The following elementary operations must be implemented
in an obscure and inefficient manner, if at all:

 * Get the next value in enumeration: (\ x -> head [x..])
 * Get the previous value in enumeration: no reasonable way
 * Get the n'th value in enumeration: [C0..] !! (n - 1)
 (where C0 is first in enumeration)
 * Find where a value occurs in an enumeration: lookup (zip [C0..] [0..]) x 

Proposed changes:

 * Add two new methods to Enum:

   succ :: Int -> a -> a
   diff :: a -> a -> Int

Informally, given an enumeration:

 data T = C0 | C1 | ... Cm

we have:

 diff Ci Cj = i - j
 succ x Ci = C(i+x),  if 0 <= i+x <= m

For example, given the datatype and function:

 data Colour = Red | Orange | Yellow | Green | Blue | Indigo | Violet

 toColour :: Int -> Colour
 toColour i = succ i Red

we would have:

 toColour 0 = Red
 toColour 1 = Orange
 ...
 toColour 6 = Violet
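(In Haskell as finally standardized, succ and diff arrived in different clothing: fromEnum, toEnum, and a one-argument succ. The proposal's toColour can be sketched today as follows; succN and diffN are illustrative names, not standard functions.)

```haskell
-- Sketch of the proposed succ/diff via the Enum methods Haskell adopted.
data Colour = Red | Orange | Yellow | Green | Blue | Indigo | Violet
  deriving (Show, Eq, Enum, Bounded)

succN :: Enum a => Int -> a -> a
succN n x = toEnum (fromEnum x + n)   -- proposal's succ :: Int -> a -> a

diffN :: Enum a => a -> a -> Int
diffN x y = fromEnum x - fromEnum y   -- proposal's diff :: a -> a -> Int

main :: IO ()
main = do
  print (succN 2 Red)        -- Yellow
  print (diffN Violet Red)   -- 6
```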


 * Change definitions of Enum instances:

 instance Enum Char where
   succ = primCharSucc
   diff = primCharDiff
   enumFrom = boundedEnumFrom maxChar
   enumFromThen = boundedEnumFromThen minChar maxChar
  
 boundedEnumFrom hi x | x 

 * Change description of derived instances of Ix for enumerations and Enum:

   Given the enumeration:

 data c => T u1 ... uk = K1 | ... | Kn deriving (C1,...,Cm)

succ i Cj returns C(i+j) if 0 <= i+j <= n-1; the derived instance is of the form ... => Enum a where
enumFrom = boundedEnumFrom Kn
enumFromThen = boundedEnumFromThen K1 Kn

and the derived Ix instance is de

Changes in Haskell 1.3

1995-09-09 Thread John C. Peterson



Introducing Haskell 1.3

This new version of the Haskell Report adds many new features to the
Haskell language.  In the five years since Haskell has been available
to the functional programming community, Haskell programmers have
requested a number of new language features.  Most of these features
have been implemented and tested in the various Haskell systems and we
are confident that all of these additions to Haskell address a real
need on the part of the community.  This revision to the Haskell
report is much more substantial than previous ones: many significant
additions are being made.  We have also streamlined some aspects
of Haskell, eliminating features which have been little used and
complicate the language.

The final version of the Haskell 1.3 is expected to be complete in
October, 1995.  A preliminary version of the report will be available
soon.  All significant changes to the Haskell language, as well as
their motivation, are described here.  We are still open to comments
and suggestions; please send mail to [EMAIL PROTECTED]
regarding Haskell 1.3.  I will be happy to answer any questions or
forward mail to either the Haskell mailing list or the 1.3 committee,
as appropriate.  Information about the design of Haskell 1.3 and other
proposed extensions to Haskell is available on the web at 

http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

There will be some minor incompatibilities with Haskell 1.2.  These
should not be serious and implementors are encouraged to provide a
Haskell 1.2 compatibility mode.



Overview


Haskell 1.3 introduces the following major features:
 * Standardized libraries and a reduced prelude
 * Constructor classes (as in Gofer) 
 * Monadic I/O 
 * Strictness annotations in type definitions  
 * Simple records  
 * A new type mechanism   
 * Special monad syntax (`do') 
 * Qualified names 
 * All names are now redefinable
 * The character set has been expanded to ISO-8859-1

Many other smaller changes to Haskell 1.2 have also been made.  A
complete description of new, changed, and eliminated features follows.


Prelude Changes

Haskell 1.3 will make a number of minor changes to the standard prelude.
Many prelude functions will be moved to libraries, reducing the size
of the Haskell core language.  These changes will be described separately.


Standard Libraries


As Haskell has grown, many informal libraries of useful functions have
been created.  In Haskell 1.3, we have decided to standardize a set of
libraries to accompany the core language.  Some of the functions
formerly in the prelude are now in libraries, decreasing the size of
the core language and giving the user more names in the default
namespace.  We are dividing the Haskell report into two separate
documents: a language report and a library report.  The prelude, now a
little smaller, will be described in the language report.  The library
report will continue to evolve after the 1.3 language report is complete.
We have moved much of the I/O, complex and rational arithmetic, many
lesser used list functions, and arrays to the libraries and also
developed a number of completely new libraries.  An initial Haskell
library report will be available at the same time as the 1.3 language
report.



Constructor Classes


We have observed that many programmers choose Gofer over Haskell in
order to use Gofer's constructor classes.  Since constructor
classes are well understood, widely used, and easily implemented, we
have added them to Haskell.  Briefly, constructor classes
remove the restriction that types be `first order'.  That is, `T
a' is a valid Haskell type, but `t a' is not since
`t' is a type variable.
Constructor classes increase the power of the class system.  For
example, this class definition uses constructor classes: 

  class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    return :: a -> m a

Here, the type variable `m' must be instantiated to a polymorphic data
type, as in

  instance Monad [] where
    f >>= g = concat (map g f)
    return x = [x]

No changes to the expression language are necessary; constructor
classes are an extension of the type language only.
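As an editor's sketch (modern syntax, not text from the proposal), the
list instance above combines with the `do' syntax mentioned in the
overview like this:

```haskell
-- A sketch using the Prelude's list Monad instance, which behaves like
-- the instance shown above (f >>= g = concat (map g f); return x = [x]).
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]        -- draw x from the first list
  y <- [10, 20]      -- for each x, draw y from the second
  return (x, y)      -- collect every combination

main :: IO ()
main = print pairs   -- prints [(1,10),(1,20),(2,10),(2,20)]
```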

Constructor classes require an extra level of type information called
`kinds'.  Before type inference, the compiler must perform kind
inference to compute a kinding for each type constructor.  Kinds are
much simpler than types and are not ordinarily noticed by the programmer.
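To make the kind discipline concrete, here is an editor's sketch (not
from the proposal); the class name `Container' and its methods are
invented so as not to clash with anything in the Prelude:

```haskell
-- A sketch of a constructor class with its kinds spelled out.
-- Kind of a plain type:        Int  :: *
-- Kinds of type constructors:  []   :: * -> *
--                              (,)  :: * -> * -> *
--                              (->) :: * -> * -> *

class Container f where          -- f must have kind * -> *
  empty  :: f a
  insert :: a -> f a -> f a

instance Container [] where      -- [] :: * -> *, so this is well-kinded
  empty  = []
  insert = (:)

instance Container Maybe where   -- Maybe :: * -> *
  empty  = Nothing
  insert x _ = Just x

main :: IO ()
main = print (insert (1 :: Int) empty :: [Int])   -- prints [1]
```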

The changes to Haskell required to support constructor classes are:

 * The syntax of types includes type application.
 * Built-in types have names: [] for lists, (->)
 for arrow, and (,) for tuples.  Using type application,
 the type `(,) a b' is identical to `(a,b)'.
 * Type constructors (but not type synonyms) can be partially applied.
 * Type variables in interface files may be annotated with a kind.
 This will not affect any type

Re: Haskell 1.3 Draft Report

1995-05-19 Thread David Bakin


Hi.  For the TeX-impaired, is there any chance of sticking postscript files
on an ftp site?  Thanks!  -- Dave

A draft of the Haskell 1.3 report is available by FTP from
ftp.dcs.glasgow.ac.uk [130.209.240.50] in

   pub/haskell/report/draft-report-1.3.dvi.gz  [Report]
   pub/haskell/report/draft-libraries-1.3.dvi.gz   [Libraries]

Highlights include:

   Monadic I/O
   A split into prelude and libraries, with qualified names
   Strict data types
   Some minor syntactic revisions

We are planning to revise this and release it in time for FPCA '95.
There will definitely be additional prelude and library changes;
including several new libraries.

Feedback is welcome and will be taken into account when revising the
report, but please remember that we will be very busy over the next few
weeks (I am also away for the next two weeks!).  Please mail typos and minor
notes on syntax etc. to me; substantive comments should be sent to
[EMAIL PROTECTED]

Regards,
Kevin




--
Dave Bakin       How much work would a work flow flow      #include
510-922-5678     if a work flow could flow work?           std/disclaimer.h






Haskell 1.3 Draft Report

1995-05-19 Thread kh


A draft of the Haskell 1.3 report is available by FTP from
ftp.dcs.glasgow.ac.uk [130.209.240.50] in

pub/haskell/report/draft-report-1.3.dvi.gz  [Report]
pub/haskell/report/draft-libraries-1.3.dvi.gz   [Libraries]

Highlights include:

Monadic I/O
A split into prelude and libraries, with qualified names
Strict data types
Some minor syntactic revisions

We are planning to revise this and release it in time for FPCA '95.
There will definitely be additional prelude and library changes;
including several new libraries.

Feedback is welcome and will be taken into account when revising the
report, but please remember that we will be very busy over the next few
weeks (I am also away for the next two weeks!).  Please mail typos and minor
notes on syntax etc. to me; substantive comments should be sent to
[EMAIL PROTECTED]

Regards,
Kevin






Prelude and Library Issues in Haskell 1.3

1995-02-09 Thread Alastair Reid



Currently, the Haskell language does not mention any libraries or
facilities for using them.  The standard prelude is meant to serve as
a library but it lacks many important features.  All Haskell
implementations have begun to haphazardly include various libraries.
However, these libraries have not yet been standardized across the
different implementations and cannot always be used in a portable
manner.

We have produced a document which discusses some of the issues
involved in designing a standard Haskell library and describes what we
think the library should look like.  We welcome any comments or
suggestions the Haskell community cares to make.


The document is available in postscript format by anonymous ftp:

  /pub/haskell/yale/libs.ps
  on
  haskell.cs.yale.edu

and over the web:

  http://www.cs.yale.edu/HTML/YALE/CS/HyPlans/reid-alastair/libs/libs.html


Alastair Reid and John Peterson
Yale Haskell Project




Re: New Haskell 1.3 I/O Definition

1994-12-16 Thread Will Partain


Kevin Hammond writes: "We have attempted ... to consider portability
issues very carefully."

But we may have missed something.  For example, I don't think anyone
has actually *seen* a "Win32 Programmer's Reference Manual" -- i.e.,
the programming interface for most of the world's computers :-( -- and
something may have been overlooked.

If you are an "expert" about some particular system, *please* give
this I/O proposal a good reading!  Does the proposal make sense for
the system in question?  Could it be sort-of-plausibly implemented?
Your feedback will really help.

Haskell is not just for Unix boxes!  I can say this because I am as
Unix-centric as they come :-)

Will

Disclaimer -- not taking credit for others' efforts: I did none of the
Real Work on this I/O proposal.




Re: Haskell 1.3

1993-11-23 Thread kh


Ian Holyer writes:
 To go back to the debate on instances, here is a concrete proposal for 
 handling instances in Haskell 1.3:

I can see what you're doing, but I dislike the idea of no longer being
able to define instances local to a module.  This limits my choice of
class and type names, and may cause problems when importing libraries
defined by other users.  For global (exported) instances your rules
make sense (a variant of these was considered at one point) with the
caveats marked below.
 
   1) A C-T instance can be defined in any module in which C and T are 
  in scope.

Fine, in conjunction with 5 and 2 or similar constraints.
 
   2) A C-T instance defined in module M is in scope in every module which
  imports from M, directly or indirectly.  (If C or T are not in scope, a
  module just passes the instance on in its interface).

You need to ignore local C-T instances (i.e. those where a class C or
type T is defined locally and not exported), otherwise mayhem could
result.  Local instances will now also cause problems if there is a
global C-T instance defined in any importing module.

The interface is problematic if a new class with local name C or type
with local type T is defined (or both!), especially if there is a
(local) C-T instance.  Getting round this would involve being much more
explicit about global names in interface files (e.g. an M1.C-M2.T
instance).  There is also potential name capture of type, class, or
operator names by the importing module, which would require 
additional checking of imported interfaces (something we would like to
avoid for efficiency reasons).

   3) A C-T instance may be imported more than once via different routes,
  provided that the module of origin is the same.

This implies annotating instances with their module of origin, as
you note below.
 
   4) If an application of an overloaded function is resolved locally, the
  relevant instance must be in scope.

...a relevant instance must be in scope...
   ^

   5) There must be at most one C-T instance defined in the collection of
  modules which make up any one program (global resolution occurs in Main).

There should be at most one global C-T instance defined (otherwise you
lose the ability to create local types with instances)...  You also
shouldn't specify where resolution takes place.  Link resolution is
much faster...

 I would like to see the origin of instances in interface files.  My preference
 from an implementer's point of view would be something like:
 
interface M1 where                interface M3 where
import M2 (C(..))          or     import M2 (C(..))
import M3 (T(..),fT)              type T = ...
instance C T where f = fT         instance C T where f = fT
 
 The name fT is invented while compiling M3 and passed around in interface
 files, but not exported from them into implementation modules.  As well as
 specifying the origin of the instance, it gives the code generator something
 to link to. 

This really isn't a problem for an implementation.  We can always link to a
hidden name derived from the unique C-T combination.  Introducing magic
names in an interface sounds like a *very bad* idea -- you might well 
accidentally capture a user- or Prelude-defined name.  For example,

class From a where
from :: Int -> [a] -> a

instance From Int where
from = ...

introduces fromInt in the interface, which will clash with the Prelude
name.

  interface M1 where
  import M2(C(..))
  import M3(T(..))
  import M4(instance M2.C M3.T)

is probably closer to what's required.

Regards,
Kevin





Haskell 1.3

1993-11-11 Thread ian


Here is another suggestion for Haskell 1.3.

The current restriction that instances must be defined either in the class
module or the type module is painful.  If a module defining an abstract type
contains a class definition, it may be impossible to define an instance in the
module defining the type (eg, it may be pre-defined in the prelude) and to put
it in the module defining the class would be breaking into the abstraction
(the module may not be mine, and I may not have source access to it).  If the
only reason for the restriction is that instances don't have names to control
their import/export, I suggest dropping the restriction and allowing one or
both of the following forms for controlling export of instances:

   module M (... (==) ...) where
   instance Eq T where ...

   module M (... Eq(..) ...) where
   instance Eq T where ...

The first means "export all the instances of (==) defined in this module" and
the second means "export all the instances of the Eq methods defined in this
module" (allowed even though the module does not define the Eq class, but
merely extends it).  This doesn't allow separate instances to be
distinguished, but I can live with that; I don't want this to get heavy.

There would be an incompatibility with Haskell 1.2: if there is an explicit
export list, and the list does not mention a method/class, then instances of
that method/class are not exported.

Incidentally, I think the class and module systems both have some nasty
problems (eg Warren Burton's recent comments) and that both need a more
thorough redesign for Haskell 2.0.

Ian[EMAIL PROTECTED],   Tel: 0272 303334




Re: Haskell 1.3 [instances]

1993-11-11 Thread Will Partain


   Ian Holyer writes:

   The current restriction that instances must be defined either in
   the class module or the type module is painful.

LISTEN TO THIS MAN!  Trying to use the module system in (what we
imagined to be) a sensible way on the Glasgow Haskell compiler [which
is written in Haskell] has been a nightmare.  Take a pile of
mutually-dependent modules, add the "instance virus" [instances go
with the class or type, and you can't stop them...], and you have
semi-chaos.  All attempts to have export/import lists that "show
what's going on" have been undermined by having to add piles of cruft
to keep instancery happy.

I would go for either of the following not-thought-through choices:

* Instances travel with the *type*, not the class.  99% of the time,
  this is what we want.  If your instance isn't going, add an explicit
  export of the type constructor.  Possibly have a special case for
  instances of user-defined classes for Prelude types...

* Make it so that imported instances whose class/type is out-of-scope
  may be silently ignored (i.e., an exception to the closure rule).

  For example, if I write "import Foo" and Foo's interface includes
  "instance Wibble Wobble" and none of my "imports" happen to bring
  "Wibble" (or "Wobble") into scope, then a compiler may drop this
  instance silently.  It is not an error.  (Of course, if you try to
  *use* such an instance, you will get an error downstream.)

Of course, something that involves new syntax/extra machinery would
also be fine.

Will

PS: Get rid of "default" declarations, too.  No-one uses them. (Hi,
Kevin!)




Wishlist for Haskell 1.3

1993-10-27 Thread Van Snyder


I would like to put two rather prosaic things into Haskell 1.3.  They almost
fall into the "syntactic sugar" class, but they would make my life easier.

The first is that I would like to see arrays be a class instead of whatever
they are.  I wanted to construct a subclass of arrays that were constrained
to have lower bounds equal to one, but after fooling around for some time I
just gave up.  Maybe it's easy, and I just don't know the right way to hold
my mouth.  I would also like to be able to construct a sub-class of one-
dimensional array that is a vector, and a sub-class of two-dimensional
array that is a matrix, and overload "*" to mean "inner product".

The second thing I would like is an array section notation.  In many operations
of linear algebra, one needs to view a matrix sometimes as an array of row
vectors, and sometimes as an array of column vectors.  This arose in development
of a function that implements Crout's method to factor a matrix.  (Crout's
method is especially attractive for functional languages because each element
of the factor is written exactly once.  That is not the case with Gauss-like
methods.)  I ended up writing three functions, one that computes the inner
product of two vectors, another that computes the inner product of a row and
column of a single matrix, and another that computes the inner product of a
row of one matrix with a column of another.  Others would need functions that
compute the inner product of a vector with a row or column of a matrix.  It
would be easier to write one function that computes the inner product of two
vectors, and create vectors out of pieces of a matrix by using a section
notation.  For example, I might write a Crout reduction with no pivoting as:

lu = array b
([(i,1) := a!(i,1) | i <- [1..m]] ++
 [(1,j) := a!(1,j)/a!(1,1) | j <- [2..n]] ++
 [(i,j) := (a!(i,j) - dot lu!(i,1..j-1) lu!(1..j-1,j))
   | i <- [2..m], j <- [2..i]] ++
 [(i,j) := (a!(i,j) - dot lu!(i,1..i-1) lu!(1..i-1,j)) / lu!(i,i)
   | i <- [2..m], j <- [i+1..m]])

where ((_,_),(m,n)) = b = bounds a.
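For reference, the `dot' helper assumed in the fragment above might
look like this (an editor's sketch over plain lists; the fragment
applies it to array sections, which Haskell does not actually have):

```haskell
-- Sketch of the assumed inner-product helper; the name `dot' and the
-- list representation are illustrative only -- the code above applies
-- it to proposed array sections such as lu!(i,1..j-1).
dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)

main :: IO ()
main = print (dot [1, 2, 3] [4, 5, 6 :: Int])   -- prints 32
```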

BTW, I have developed a Crout reduction that uses pivoting, but I _think_
it's hitting something that's a little too strict -- the run-time system
insists there's a black hole, but if I run the code "by hand" I'm always able
to find an order such that data are available -- there aren't any circular
dependencies on un-computed data.  Maybe somebody can tell me where I've
gone wrong, or recommend a change in Haskell 1.3 to cope with the problem if
it's real.  The Crout reduction with pivoting follows.  If anybody wants to try
it through the compiler, you'll need a test harness, which I'll be happy to
send, but I don't think I ought to waste net bandwidth to post it.

It's also unfortunate that array bounds were defined to be ((array of low
bounds),(array of high bounds)) [I know the arrays are really tuples] instead
of array of tuples (low,high).  The latter could be used with the inRange
function from the Prelude, while the former cannot.  But it'd probably be
_really_ hard on a lot of people to change this now.

Best regards,
Van Snyder
-- Crout reduction with pivoting --

module Croutp (croutp) where
-- Crout method for LU factorization

import Linalg (ipamaxc,rcdot2)
-- ipamaxc a j p m n returns the index x of the element of p(m..n) such that
--a!(p!(x),j) is the element of column j of a having the largest
--absolute value.
-- rcdot2 a b i j m n computes the inner product of a(i,m..n) and b(m..n,j)

-- croutp takes matrix a and returns l and u factors in one matrix lu.
-- performs pivoting.
-- calculates values of lu from values of a and lu.

croutp :: (RealFrac v) => Array (Int,Int) v -> (Array (Int,Int) v,
   Array Int (Array Int Int), Array Int Int, Array Int Int)
croutp a = if k==1 && l==1 && m<=n then (lu,p,mx,mk)
  else error "crout: lower bounds not 1 or #rows > #columns"
  where
  b = bounds a
  ((k,l),(m,n)) = b

--t :: (RealFrac v) => Array (Int,Int) v
  t = array ((1,1),(m,m))
   ([let k = p!1!i in (k,1) := a!(k,1) | i <- [1..m]] ++
[let k = p!s!i
 in (k,s) := a!(k,s) - rcdot2 t lu k s 1 (s-1)
| s <- [2..m], i <- [s..m]])

--p :: Array Int (Array Int Int)
  p = array (1,m)
   ([1 := array (1,m) [i := i | i <- [1..m]]]++
[s := let u = s-1
  k = mk!u in
  if u == k then p!u
  else p!u // [u := mx!u, k := (p!u)!u] | s <- [2..m]])

--mk :: Array Int Int
--With the first definition of mk active, run-time insists there's a black hole.
--With the second, things work, but the function does no pivoting.
  mk = array (1,m) [s := ipamaxc t s (p!s) s m | s <- [1..m]]
--mk = array (1,m) [s := s | s <- [1..m]]

  mx :: Array Int Int
  mx = array (1,m) [s := (p!s)!(mk!s) | s <- [1..m]]

--lu :: (RealFrac v) => Array (Int,Int) v
  lu = array b
   ([(s,j) := t!(mx!s,j) | s 

Haskell 1.3 (n+k patterns)

1993-10-12 Thread John Launchbury


I feel the need to be inflammatory:

  I believe n+k should go.

There are lots of good reasons why they should go, of course. The question
is: are there any good reasons why they should stay? My understanding is
that the only reason they are advocated is that they make teaching
induction easier. I don't believe it. I teach an introductory FP course
including induction. I introduce structural induction directly, and the
students have no problem with it. When I have tried to talk to individuals
about natural number induction using (n+k) patterns, then the problems
start. Because they are so unlike the form of patterns they have become
used to they find all sorts of difficulties. What if n is negative. Ah yes,
well it can't be. Why not. It just can't. etc.

Let's throw them out.

John.





Re: Haskell 1.3 (n+k patterns)

1993-10-12 Thread Lennart Augustsson



jl writes:
 I feel the need to be inflammatory:
 
   I believe n+k should go.
Again, I agree completely.  Let's get rid of this horrible wart
once and for all.  It's a special case that makes the language
more difficult to explain and implement.  I've hardly seen any
programs using it so I don't think backwards compat is a problem.
Anyone who thinks this change will cause them more than 10
minutes' work, please speak up.

-- Lennart




Haskell 1.3

1993-10-12 Thread ian


I hope that Haskell 1.3 will clean up the report, and maybe even the language,
and not just add features.  Recent work at Bristol has raised the following
points; I apologise for any which are well known already.


  o The layout rule that says that an implicit block can be terminated by the
surrounding construct (ie whenever an `illegal' token is found) is painful.
It forces layout processing to be intertwined with parsing, which (eg)
rules out the design of a language-sensitive editor based on matching
tokens rather than full parsing.  It can also make it difficult to report
syntax errors precisely.  There is little problem when the surrounding
construct is a multi-token one, as in:

   pair = (case n of 1->42, 43)

but pathological cases such as the following (all legal!) cause problems:

   a = n where n = 42 ; ; b = 43   -- terminated by second `;'
   c = case x of 1->y where {y=44} where {x=1} -- ditto by second `where'
   d = case 1 of 1->44 :: Int + 1  -- ditto by `+'

Is it not possible to find some better convention which rules these out
and allows layout processing to be carried out separately from parsing?


  o The expression 4/2/1 is illegal according to section 5.7 of the report
(division operators are not associative), but legal according to the fixity
declarations in appendix A.2 (infixl).  Existing compilers differ.
Also :% is missing from the table in 5.7.


  o Section 2.4 doesn't make it clear that decimal points are (presumably) the
one and only exception to the longest lexeme rule of section 2.3, which
explicitly says that no lookahead is required.  This exception is needed to
make expressions such as [1..n] legal.  Presumably, the rest of the
numeric literal syntax follows the longest lexeme rule, so that (f 1.2e)
is reported as an incomplete literal rather than accepted as (f 1.2 e).


  o Definitions such as (f x) = ... or (x # y) = ... are illegal (although
existing compilers allow them).  This prevents, for example, the
following natural definition of the composition (dot) operator:

   (f . g) x  =  f (g x)

Is this restriction intentional?


  o The situation with unary minus is still confused.  Expressions such as
(2 + -3) are technically illegal, although accepted by current compilers.
Also, it is not entirely clear from sections 3.3 and 3.4 whether (2-) is
legal (presumably meaning (\n->2-n)).  Also, the definition -42 = 42 is
legal (patdefs do not exclude minus patterns), and accepted by current
compilers, although it is meaningless.


  o The form (`div`) is illegal, even though it looks very natural in
definitions such as

   ops = [(+),(-),(`div`),(`mod`)]

This seems to be against the general policy of allowing any meaningful
expression in any suitable context.


  o There is a general inconsistency of language in the report.  A notable case
is that the functions associated with a class are variously called
methods, operations, or operators.  The last of these is surely wrong.


  o A number of other minor matters are raised by the tests available by
anonymous ftp from ftp.cs.bris.ac.uk, directory /pub/functional/brisk.


Ian[EMAIL PROTECTED],   Tel: 0272 303334




Re: Defining Haskell 1.3 - Committee volunteers wanted

1993-09-27 Thread wadler


Three cheers for Brian, for his work to keep Haskell a
living and growing entity.

I propose as a touchstone for 1.3 that they should only look
at extensions that have been incorporated in one or more
Haskell implementations.  Hence the following are all good
candidates for 1.3's scrutiny:

Monadic IO
Strict data constructors
Prelude hacking
Standardizing annotation syntax

But the following is not:

Records (naming field components)

If someone actually implemented records, then after we
had some experience with the implementation it would
become a suitable 1.3 candidate.

A further thing which 1.3 should look at is:

ISO Standardisation

The credit for this suggestion should go to Paul Hudak,
but I heartily endorse it.

Cheers,  -- P




Defining Haskell 1.3 - Committee volunteers wanted

1993-09-20 Thread Brian Boutel



Joe Fasel, John Peterson and I met recently to discuss the next step in
the evolution of Haskell.

While there are some big issues up ahead, (adding Gofer-like constructor
classes, for example), these should be considered for the next major
revision, Haskell 2.0.

For now, we want to be less ambitious, and produce a definition of
Haskell 1.3.

Topics on the agenda include:

Monadic IO
Strict data constructors
Records (naming field components)
Prelude hacking
Standardizing annotation syntax

We think the best way to proceed is to call for volunteers to form 
a new committee to do the work on this.

So, who's interested?

--brian





Re: Defining Haskell 1.3 - Committee volunteers wanted

1993-09-19 Thread A.


I'm probably not expert enough to be on the committee. However, I have a 
suggestion.  The syntax description of Haskell is hard to read. One reason
is that one repeatedly has to look in the index to find out where some
nonterminal is defined.   If the page number of the definition of each 
nonterminal were written in, say, the right hand margin for each use, then
it would be easier to decipher things. A disadvantage might be added clutter.

   Don