The Haskell 1.3 compiler NHC13 is now available
Version 0.0 of NHC13, Nearly a Haskell 1.3 Compiler, by Niklas Rojemo, is now
available for download from

    ftp://ftp.cs.chalmers.se/pub/haskell/nhc13

It has the following features:

  - Compiles Haskell 1.3
  - Supports Fudgets
  - Supports several kinds of heap profile:
      - producer
      - constructor
      - retainer
      - life-time
      - biographical
      - combinations of the above

Although NHC13 0.0 is probably not yet to be regarded as a mature Haskell 1.3
compiler, it may still be of interest since it provides some new kinds of heap
profiles not found in any other Haskell 1.3 compiler. Finding space leaks or
other undesired space behaviour using (combinations of) retainer and
biographical profiles can be much simpler than with the traditional
producer/constructor profiles. Heap profiling also works for Fudgets programs.

The commands to use are:

    nhc13       the compiler
    nhc13make   a version of hbcmake for nhc13
    nhc13xmake  to compile Fudgets programs
    hp2graph    to convert heap profiling output to PostScript

Manual pages with more details are included in the distributions.

Recent papers on heap profiling are:

    Niklas Rojemo and Colin Runciman: "Lag, drag, void and use -- heap
    profiling and space-efficient compilation revisited". In the
    proceedings of ICFP'96.

    Colin Runciman and Niklas Rojemo: "Two-pass heap profiling: a matter
    of life and death". In the proceedings of IFL'96.

These are available from:

    ftp://ftp.cs.chalmers.se/pub/users/rojemo/icfp96.ps.gz
    ftp://ftp.cs.chalmers.se/pub/users/rojemo/ifl96.ps.gz

Niklas Rojemo
Thomas Hallgren
Haskell 1.3 Libraries Available for Comment
The current draft of the Haskell 1.3 Libraries is now available for public
comment at

    ftp://ftp.dcs.st-and.ac.uk/pub/haskell/lib-28-Oct-96.{ps,dvi}

in either PostScript or DVI format (HTML will follow). The document defines
the required libraries for conforming Haskell 1.3 implementations:

    Ratio      -- Rationals, as in Haskell 1.2
    Complex    -- Complex numbers, ditto
    Ix         -- Indexing operations, ditto
    Array      -- Array operations, ditto
    List       -- Old and new list operations
    Maybe      -- Operations on the Maybe type
    Char       -- Operations on characters, mainly character-kind (isLower etc.)
    Monad      -- Monadic utility functions
    IO         -- More advanced input/output
    Directory  -- Operations on directories
    System     -- Operating system interaction (system, getEnv, exit etc.)
    Time       -- Date and time
    Locale     -- Local conventions (date/time only at present)
    CPUTime    -- CPU time usage
    Random     -- Random number generation on Integer
    Bit        -- Bit manipulation
    Natural    -- Fixed-precision natural numbers
    Signed     -- Fixed-precision signed numbers

Most of the comments that have been made on previous versions have been acted
upon. If you have read previous versions of the library, you may notice the
omission of the Posix library. I intend to revise this and make it available
as an optional library in the near future.

Please send comments on these libraries either to me, or to the Haskell
Committee ([EMAIL PROTECTED]), by November 30th 1996 (I will take late
comments into account as far as possible, but may need to delay these for
future reviews of the libraries). Assuming normal levels of change, I aim to
have this version of the libraries stabilised by the end of the year.

Our long-term goal is to provide a repository for these libraries at Glasgow,
which will allow new libraries to be contributed and existing ones to be
worked on remotely. The repository should be mirrored at Yale, Chalmers, and
perhaps elsewhere.
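As a small taste of what code written against these libraries looks like, here
is a sketch using functions from the Maybe and Char modules listed above. (The
sketch imports them under the `Data.*' names used by modern GHC so that it
runs today; the function names are the same.)

```haskell
import Data.Maybe (mapMaybe)          -- the 1.3 draft calls this module Maybe
import Data.Char  (isLower, toUpper)  -- ...and this one Char

-- Keep only the lower-case letters of each word, capitalised;
-- words with no lower-case letters are dropped via mapMaybe.
shout :: [String] -> [String]
shout = mapMaybe capitalise
  where
    capitalise w = case filter isLower w of
                     "" -> Nothing
                     s  -> Just (map toUpper s)

main :: IO ()
main = print (shout ["abc", "XYZ", "dEf"])
```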
To help future-proof these libraries, we are considering adopting an SGML
document standard, probably based on that used for ML '96. I hope to release
details of this at the same time as the libraries are stabilised.

Regards,
Kevin

--
Division of Computer Science,     Tel: +44-1334 463241 (Direct)
School of Mathematical            Fax: +44-1334 463278
 and Computational Sciences,      URL: http://www.dcs.st-and.ac.uk/~kh/kh.html
University of St. Andrews, Fife, KY16 9SS.
ANNOUNCE: Glasgow Haskell 2.01 release (for Haskell 1.3)
The Glasgow Haskell Compiler -- version 2.01

We are pleased to announce the first release of the Glasgow Haskell Compiler
(GHC, version 2.01) for *Haskell 1.3*. Sources and binaries are freely
available by anonymous FTP and on the World-Wide Web; details below.

Haskell is "the" standard lazy functional programming language; the current
language version is 1.3, agreed in May 1996. The Haskell Report is online at

    http://haskell.cs.yale.edu/haskell-report/haskell-report.html

GHC 2.01 is a test-quality release, worth trying if you are a gung-ho Haskell
user or if you are keen to try the new Haskell 1.3 features. We advise
*AGAINST* relying on this compiler (2.01) in any way. We are releasing our
current Haskell 1.2 compiler (GHC 0.29) at the same time; it should be pretty
solid. If you want to hack on GHC itself, then 2.01 is for you. The release
notes comment further on this point.

What happens next? I'm on sabbatical for a year, and Will Partain (the one who
really makes GHC go) is leaving at the end of July 96 for a Real Job. So you
shouldn't expect rapid progress on 2.01 over the next 6-12 months.

The Glasgow Haskell project seeks to bring the power and elegance of
functional programming to bear on real-world problems. To that end, GHC lets
you call C (including cross-system garbage collection), provides good
profiling tools, and supports concurrency and parallelism. Our goal is to make
it the "tool of choice for real-world applications".

GHC 2.01 is substantially changed from 0.26 (July 1995), as the new version
number suggests. (The 1.xx numbers are reserved for further spinoffs from the
Haskell-1.2 compiler.) Changes worth noting include:

* GHC is now a Haskell 1.3 compiler (only). Virtually all Haskell 1.2 modules
  need changing to go through GHC 2.01; the GHC documentation includes a
  ``crib sheet'' of conversion advice.

* The Haskell compiler proper (ghc/compiler/ in the sources) has been
  substantially rewritten and is, of course, Much, Much, Better.
  The typechecker and the "renamer" (module-system support) are new.

* Sadly, GHC 2.01 is currently slower than 0.26. It has taken all our cycles
  to get it correct. We fondly believe that the architectural changes we have
  made will end up making 2.0x *faster* than 0.2x, but we have yet to
  substantiate this belief; sorry. Still, 2.01 (built with 0.29) is quite
  usable.

* GHC 2.01's optimisation (-O) is not nearly as good as 0.2x's, mostly because
  we haven't taught it about cross-module information (arities, inlinings,
  etc.). For this reason, a 2.01-built-with-2.01 (bootstrapped) compiler is no
  fun to use (too slow), and, sadly, that is where we would normally get .hc
  (intermediate C; used for porting) files from... (hence: none provided).

* GHC 2.01 is much smarter than 0.26 about when to recompile. It will abort a
  compilation that "make" thought was necessary at a very early stage, if none
  of the imported types/classes/functions *that are actually used* have
  changed. This "recompilation checker" uses a completely different
  interface-file format than 0.26's. (Interface files are a matter for the
  compilation system in Haskell 1.3, not part of the language.)

* The 2.01 libraries are not "split" (yet), meaning you will end up with much
  larger binaries...

* The not-mandated-by-the-language system libraries are now separate from GHC
  (though usually distributed with it). We hope they can take on a "life of
  their own", independent of GHC.

* All the same cool extensions (e.g., unboxed values), system libraries (e.g.,
  Posix), profiling, Concurrent Haskell, Parallel Haskell, ...

* New ports: Linux ELF (same as distributed as GHC 0.28).

Please see the release notes for a complete discussion of What's New.

To run this release, you need a machine with 16+MB memory (more if building
from sources), GNU C (`gcc'), and `perl'. We have seen GHC 2.01 work on these
platforms: alpha-dec-osf2, hppa1.1-hp-hpux9, sparc-sun-{sunos4,solaris2},
mips-sgi-irix5, and i386-unknown-{linux,solaris2,freebsd}.
Similar platforms should work with minimal hacking effort. The installer's
guide gives a full what-ports-work report.

Binaries are distributed in `bundles', e.g. a "profiling bundle" or a
"concurrency bundle" for your platform. Just grab the ones you need.

Once you have the distribution, please follow the pointers in ghc/README to
find all of the documentation about this release. NB: preserve modification
times when un-tarring the files (no `m' option for tar, please)!

We run mailing lists for GHC users and bug reports; to subscribe, send mail to
[EMAIL PROTECTED]; the msg body should be:

    subscribe glasgow-haskell- Your
Haskell 1.3 - what's it all about?
Maybe you have seen some mail lately on this list about something called
"Haskell 1.3", and wondered: What is this "Haskell 1.3" anyway? Can I buy it?
Do I have it? By compiling and running the following two-module Haskell
program, you will at least get an answer to the last question.

--- Put in M.hs ---
module M where
data M = M M | N ()

--- Put in Main.hs ---
import M
main = interact (const (case (M.N) () of
                          M (N ()) -> "No\n"
                          N ()     -> "Yes\n"))
---

Magnus & Thomas
Haskell 1.3 Report is finished!
The Haskell 1.3 Report is now complete. A web page with the entire report and
other related information is at:

    http://haskell.cs.yale.edu/haskell-report/haskell-report.html

This new report adds many new features to Haskell, including monadic I/O,
standard libraries, constructor classes, labeled fields in datatypes,
strictness annotations, an improved module system, and many changes to the
Prelude.

The Chalmers compiler, hbc, supports most (all?) of the new 1.3 features. The
Glasgow compiler will soon be upgraded to 1.3. A new version of Hugs (now a
combined effort between Mark Jones and Yale) will be available later this
summer.

A PostScript version of the report is available at

    ftp://haskell.cs.yale.edu/pub/haskell/report/haskell-report.ps.gz

This file should be available at the other Haskell ftp areas soon.

John Peterson
[EMAIL PROTECTED]
Yale Haskell Project
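Several of the headline features interact: constructor classes are what let
the monadic operations (and the new `do' syntax) work at any monad rather than
just IO. A minimal sketch of that interaction, using nothing beyond the
Prelude (it also runs under modern GHC):

```haskell
-- Integer division that fails cleanly, in the Maybe monad.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Because Monad is a constructor class, `do' works for Maybe
-- exactly as it does for IO.
mean :: [Int] -> Maybe Int
mean xs = do
  s <- Just (sum xs)
  s `safeDiv` length xs

main :: IO ()
main = print (mean [1, 2, 3], mean [])
```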
Re: Status of Haskell 1.3
> No implementations of 1.3 are available yet, but we expect all the
> major Haskell systems to conform to the new report soon.

While this is strictly true, hbc 0..0 (announced in comp.lang.functional a few
days ago) is almost Haskell 1.3; the only differences are some minor Prelude
and Library ones.

-- Lennart

PS. There are binaries for more platforms available now.
Status of Haskell 1.3
The Haskell 1.3 report is nearly done. The text of the report is complete -
I'm working on indexing and web pages. We also have an initial cut at the
Library Report. If you are interested in seeing the new report on the web,
look at

    http://haskell.cs.yale.edu/haskell-report/haskell-report.html

We expect the report will be complete in another week - the web page will have
the latest information, and I will be announcing it to comp.lang.functional.

No implementations of 1.3 are available yet, but we expect all the major
Haskell systems to conform to the new report soon. Announcements will be made
to this list.

Although the report is stable, the related web pages are still under
construction. Please have patience!

John Peterson
Yale Haskell Project
Haskell 1.3
I thought there was an April 19 deadline...? Have there been some last-minute
problems?

-- Frank Christoph
   Next Solution Co.      Tel: 0424-98-1811
   [EMAIL PROTECTED]      Fax: 0424-98-1500
Re: Haskell 1.3
We are still in the middle of a bunch of minor last-minute changes. While the
technical aspects of Haskell 1.3 are stable, we're still fiddling with the
prelude and the wording of the report. We've now set a `final' final release
date of May 1. As before, the working version of the report is available via
the web at

    http://haskell.cs.yale.edu/haskell-report/haskell-report.html

More importantly, there is a lot of work going on to get the implementations
ready. I hope people will be able to start using Haskell 1.3 soon after we
release the report.

John Peterson
[EMAIL PROTECTED]
Yale Haskell Project
Haskell 1.3, monad expressions
Suggestion: add another form of statement for monad expressions:

    stmts -> ... if exp

which is defined for MonadZero as follows:

    do {if exp; stmts} = if exp then do {stmts} else zero

Based on this, one can define list comprehensions by

    [ e | q1,...,qn ] = do { q1'; ...; qn'; return e }

where either qi' = if qi (whenever qi is an exp) or qi' = qi (otherwise).

-- Stefan Kahrs
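The proposed statement is essentially what later Haskell provides as `guard'
for MonadPlus. A sketch of the desugaring in those later terms (here `guard',
from Control.Monad, plays the role of the proposed `if exp' statement; the
name `pairs' is my own):

```haskell
import Control.Monad (guard)

-- The comprehension [ x*y | x <- [1..3], y <- [1..3], x < y ]
-- desugared along the lines suggested above:
pairs :: [Int]
pairs = do
  x <- [1 .. 3]
  y <- [1 .. 3]
  guard (x < y)          -- corresponds to the proposed: if (x < y)
  return (x * y)

main :: IO ()
main = print (pairs == [x * y | x <- [1 .. 3], y <- [1 .. 3], x < y])
```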
Re: Haskell 1.3
Lennart Augustsson wrote:

> It looks ugly, but we could say that a data declaration does not
> have to have any constructors:
>
>     data Empty =

Philip Wadler responded:

> I'm not keen on the syntax you propose. How about if we allow the
> rhs of a data declaration to be just `empty', where `empty' is a
> keyword?
>
>     data Empty = empty

Another suggestion is to omit the equal sign, as in

    data Empty

Cheers,
Ronny Wichers Schreur
[EMAIL PROTECTED]
Re: Haskell 1.3
Philip Wadler writes:

> > It looks ugly, but we could say that a data declaration does not
> > have to have any constructors:
> >
> >     data Empty =
> >
> > -- Lennart
>
> I agree that the best way to fix this is to have a form of data
> declaration with no constructors, but I'm not keen on the syntax you
> propose. How about if we allow the rhs of a data declaration to be
> just `empty', where `empty' is a keyword?
>
>     data Empty = empty
>
> -- P

I would like to propose an alternative that in my view both has good syntax
and does not introduce a new keyword:

    data Empty

/Magnus
Re: Haskell 1.3
> It looks ugly, but we could say that a data declaration does not
> have to have any constructors:
>
>     data Empty =
>
> -- Lennart

I agree that the best way to fix this is to have a form of data declaration
with no constructors, but I'm not keen on the syntax you propose. How about if
we allow the rhs of a data declaration to be just `empty', where `empty' is a
keyword?

    data Empty = empty

-- P
Re: Haskell 1.3
> Suggestion: Include among the basic types of Haskell a type `Empty'
> that contains no value except bottom.

Absolutely! But I don't think it should be built in (unless absolutely
necessary). It looks ugly, but we could say that a data declaration does not
have to have any constructors:

    data Empty =

-- Lennart

PS. There are other ways of getting empty types, but they are all convoluted,
like

    data Empty = Empty !Empty
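The convoluted encoding in the PS does work: the strict field means any
attempt to build a value diverges, so bottom is the sole inhabitant. A sketch
(the function name `vacuous' is my own, chosen to show that a function out of
such a type may promise any result):

```haskell
-- The only way to build an Empty is from another Empty, strictly,
-- so no value can ever be constructed: bottom is the sole inhabitant.
data Empty = Empty !Empty

-- Hence a function out of Empty can claim any result type at all;
-- it can never actually be applied to a constructed value.
vacuous :: Empty -> a
vacuous (Empty e) = vacuous e

main :: IO ()
main = putStrLn (maybe "no value" (const "impossible") (Nothing :: Maybe Empty))
```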
Re: Preliminary Haskell 1.3 report now available
Thomas Hallgren <[EMAIL PROTECTED]> writes:

> In the syntax for labeled fields (records) the symbol <- is chosen
> as the operator used to associate a label with a value in
> constructions and patterns: [...]
> According to a committee member, there were no convincing reasons
> why <- was chosen. Other symbols, like = and := were also considered.

I support Thomas Hallgren's suggestion that `=' be used instead. Another
reason, in addition to the two he mentioned, is that the `<-' symbol is very
unintuitive when used for pattern matching, because the arrow points in the
*opposite* direction to the data-flow. I find this very confusing.

--
Fergus Henderson             | Designing grand concepts is fun;
[EMAIL PROTECTED]            | finding nitty little bugs is just work.
http://www.cs.mu.oz.au/~fjh  | -- Brooks, in "The Mythical Man-Month".
PGP key fingerprint: 00 D7 A2 27 65 09 B6 AC 8B 3E 0F 01 E7 5D C4 3F
Haskell 1.3
Congratulations to all involved on Haskell 1.3! I especially like the
introduction of qualified names and the attendant simplifications. Here are
some small suggestions for further improvement.

Interfaces
~~~~~~~~~~
Suggestion: the introduction should draw attention to the fact that interface
files are no longer part of the language. Such a wondrous improvement should
not go unremarked!

ISO Character Set
~~~~~~~~~~~~~~~~~
Suggestion: Add a one-page appendix, giving the mapping between characters and
character codes.

Fields and records
~~~~~~~~~~~~~~~~~~
Suggestion: Use = to bind fields in a record, rather than <-. I concur with
Thomas Hallgren's argument that <- should be reserved for comprehensions and
for `do'. SML has already popularised the = syntax.

Suggestion: Use the SML syntax, `#field', to denote the function that extracts
a field. Then there is no possibility of accidentally shadowing a field name
with a local variable. Just as it is a great aid to the readability of Haskell
for constructors to be lexically distinguished from functions, I predict it
will also be a great aid for field extractors to be lexically distinguished
from functions. (Alternative suggestion: Make field names lexically like
constructor names rather than like variable names. This again makes shadowing
impossible, and still distinguishes fields from functions, though now field
extractors and constructors would look alike.)

The empty type
~~~~~~~~~~~~~~
Suggestion: Include among the basic types of Haskell a type `Empty' that
contains no value except bottom.

It was a dreadful oversight to omit the empty type from Haskell, though it
took me a long time to recognise this. One day, I bumped into the following
example. I needed the familiar type

    data Tree a = Null | Leaf !a | Branch (Tree a) (Tree a)

instantiated to the unfamiliar case `Tree Empty', which has `Null' and
`Branch' as the only possible constructors. One can simulate the empty type by
declaring

    data Empty = Impossible

and then vowing never to use the constructor `Impossible'.
But by including `Empty' in the language, we support a useful idiom and
(perhaps more importantly) educate our users about the possibility of an
algebraic type with no constructors. It would be folly to allow only non-empty
lists. So why do we allow only non-empty algebraic types?

The infamous (n+1) patterns
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suggestion: Retain (n+1) patterns.

If Haskell were a language for seasoned programmers only, I would concede that
the disadvantages of (n+1) patterns outweigh the advantages. But Haskell is
also intended to be a language for teaching. The idea of primitive recursion
is powerful but subtle. I believe that the notation of (n+1) patterns is a
great aid in helping students to grasp this paradigm. The paradigm is obscured
when recursion over naturals appears profoundly different from recursion over
any other structure.

For instance, I believe students benefit greatly by first seeing

    power x 0     = 1
    power x (n+1) = x * power x n

and shortly thereafter seeing

    product []     = 1
    product (x:xs) = x * product xs

which has an identical structure. By comparison, the definition

    power x 0         = 1
    power x n | n > 0 = x * power x (n-1)

completely obscures the similarity between `power' and `product'.

As a case in point, I cannot see a way to rewrite the Bird and Wadler text
without (n+1) patterns. This is profoundly disappointing, because now that
Haskell 1.3 is coming out, it seems like a perfect time to do a new edition
aimed at Haskell. The best trick I know is to define

    data Natural = Zero | Succ Natural

but that doesn't work because one must teach recursion on naturals and lists
before one introduces algebraic data types. Bird and Wadler introduces
recursion and induction at the same time, and that is one of its most praised
features; but to try to introduce recursion, induction, and algebraic data
types all three at the same time would be fatal.

Now, perhaps (n+1) patterns introduce a horrible hole in the language that has
escaped me; if so, please point it out.
Or perhaps no one else believes that teaching primitive recursion is
important; if so, please say so. Or perhaps you know a trick that will solve
the problem of how to rewrite Bird and Wadler without (n+1) patterns; if so,
please reveal it immediately! Otherwise, I plead: reinstate (n+1) patterns.

Yours, -- P

---
Professor Philip Wadler          [EMAIL PROTECTED]
Department of Computing Science  http://www.dcs.glasgow.ac.uk/~wadler
University of Glasgow            office: +44 141 330 4966
Glasgow G12 8QQ                  fax: +44 141 330 4913
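For concreteness, here are the message's two recursions in runnable form.
Since (n+1) patterns were ultimately dropped from later versions of Haskell,
`power' is written in the guard style criticised above; `myProduct' is renamed
to avoid shadowing the Prelude's `product', and the Natural workaround is
included as well.

```haskell
-- Primitive recursion over the naturals, guard style (the (n+1)
-- pattern form is not accepted by later Haskell versions):
power :: Int -> Int -> Int
power _ 0 = 1
power x n | n > 0 = x * power x (n - 1)

-- The structurally identical recursion over lists:
myProduct :: [Int] -> Int
myProduct []       = 1
myProduct (x : xs) = x * myProduct xs

-- The workaround mentioned above: naturals as an algebraic type,
-- where the (n+1) pattern becomes an ordinary constructor pattern.
data Natural = Zero | Succ Natural

powerN :: Int -> Natural -> Int
powerN _ Zero     = 1
powerN x (Succ n) = x * powerN x n

main :: IO ()
main = print (power 2 10, myProduct [1 .. 5], powerN 3 (Succ (Succ Zero)))
```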
Re: Preliminary Haskell 1.3 report now available
I always favoured `=' over `<-', but I don't care much. -- Lennart
Re: Preliminary Haskell 1.3 report now available
> Thomas Hallgren <[EMAIL PROTECTED]> writes:
>
> > In the syntax for labeled fields (records) the symbol <- is chosen
> > as the operator used to associate a label with a value in
> > constructions and patterns:
> [...]
> > According to a committee member, there were no convincing reasons
> > why <- was chosen. Other symbols, like = and := were also considered.
>
> I support Thomas Hallgren's suggestion that `=' be used instead.
> Another reason, in addition to the two he mentioned, is that the `<-'
> symbol is very unintuitive when used for pattern matching, because the
> arrow is in the *opposite* direction to the data-flow. I find this
> very confusing.

Indeed. A couple of reasons I find convincing myself:

1 - SML uses '=' too, therefore it is one less problem for people moving
    to/from SML/Haskell.

2 - The '<-' notation always reminds me of list comprehensions, e.g. at first
    sight if I see an expression like

        R{v <- [1..10]}

    I could think v is an integer (taken from [1..10]) when it is actually a
    list. The following expression is also confusing:

        [R{v <- [1..x]} | x <- [1..10]]

    (it defines a list of records). An expression using records on the rhs of
    the '|' should be even more interesting (and useful for obfuscated Haskell
    competitions). The same applies to records with fields defined with list
    comprehensions.

Andre.

Andre Santos                 Departamento de Informatica
e-mail: [EMAIL PROTECTED]    Universidade Federal de Pernambuco
http://www.di.ufpe.br/~alms  CP 7851, CEP 50732-970, Recife PE Brazil
Re: Preliminary Haskell 1.3 report now available
First, I am happy to see that Haskell 1.3, with its many valuable improvements
over Haskell 1.2, is finally getting ready, but I also have a comment.

In the syntax for labeled fields (records) the symbol <- is chosen as the
operator used to associate a label with a value in constructions and patterns:

    data Date = Date {day, month, year :: Int}
    today = Date{day <- 11, month <- 10, year <- 1995}

According to a committee member, there were no convincing reasons why <- was
chosen. Other symbols, like = and :=, were also considered. Here are some (in
my opinion) good reasons for using = instead of <- :

1. In ordinary declarations, :: is used to specify the type of a name and = is
   used to specify its value:

       day, month, year :: Int
       day = 11; month = 10; year = 1995

   so for consistency I think the same notations should be used inside record
   values:

       data Date = Date {day, month, year :: Int}
       date :: Date
       date = Date {day = 11, month = 10, year = 1995}

2. The <- symbol is also used in list comprehensions and the new monad syntax
   (`do'):

       [ 2*x | x <- [1..10] ]
       do c <- getChar; putChar c

   In these uses of <- the name on the lhs does not have the same type as the
   expression on the rhs (above, x::Int, but [1..10]::[Int]; and c::Char, but
   getChar::IO Char). The value that the lhs name (or, indeed, pattern) is
   bound to is "extracted" from the value of the rhs expression. This is very
   different from what happens with field labels, so a difference in syntax is
   motivated.

Sadly, I suspect it would be difficult to convince the committee to change
their minds about this at this late stage, but I am sure it would be even more
difficult to change it for a later version of Haskell...

Regards,
Thomas Hallgren
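As it happens, later versions of Haskell adopted exactly the `=' syntax argued
for here, for construction, pattern matching, and update alike. A sketch that
runs under modern GHC:

```haskell
-- Labeled fields bound with `=', as proposed above.
data Date = Date { day, month, year :: Int }
  deriving Show

today :: Date
today = Date { day = 11, month = 10, year = 1995 }

-- Field labels double as selector functions, and `=' is also
-- used in record update.
main :: IO ()
main = do
  print (day today, month today, year today)
  print (today { year = 1996 })
```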
Preliminary Haskell 1.3 report now available
Announcing a preliminary version of the Haskell 1.3 report.

The Haskell 1.3 report is nearly complete; all technical issues appear to be
resolved. The report will be finalized April 19, and any comments must be
submitted by April 15. We do not anticipate making any serious technical
changes to the current version.

The report is being made available both on the web and as a .dvi file. A
summary of changes made in the Haskell 1.3 report can be found in

    http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

This has pointers to the HTML version of the report. The dvi file is available
via anonymous ftp at

    ftp://haskell.cs.yale.edu/pub/haskell/report/new-report.dvi.gz

Send comments or questions to [EMAIL PROTECTED]
Haskell 1.3?
Quoting from "Introducing Haskell 1.3"
(http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html):

    "The final version of the Haskell 1.3 is expected to be complete
    in January, 1996."

Does anyone know what happened?

Regards,
Tommy
--
"When privacy is outlawed, only outlaws will have privacy."
        -- Phil Zimmerman
Changes in Haskell 1.3 since last posting
If you have looked at the Haskell 1.3 material previously, the main difference
is that all issues regarding records have finally been resolved. In addition,
a number of smaller changes have been made:

  - `module M' is used in export lists instead of `M..'
  - The text of the new Prelude is now included.
  - The Bounded class has been added to the Prelude.
  - A few more monadic functions for imperative-style programming have been
    added.
  - The descriptions of the new features have often been clarified and
    elaborated.

John
Haskell 1.3 nearly ready
The Haskell 1.3 effort is nearly complete. Although the new report is not yet
finished, all proposed changes to the language as well as the new Prelude are
now available for public comment. These documents are available on the web at

    http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

Any feedback is appreciated! A new report should be ready soon.

John Peterson
[EMAIL PROTECTED]
Yale Haskell Project
Re: Haskell 1.3: modules & module categories
> Date: Mon, 2 Oct 1995 05:53:44 -0400
> Reply-To: [EMAIL PROTECTED]
> From: Manuel Chakravarty <[EMAIL PROTECTED]>
>
> >> To me, one of the most regrettable characteristics of
> >> the Algolic family of languages is the tendency of the
> >> compiler to turn into a giant black box of facilities
> >> open only to an elite minority of compiler hackers, which
> >> then begins inexorably sucking the entire programming
> >> support environment down its event horizon.
> >>
> >> I would much prefer that the concept of "compiler" in this
> >> sense did not exist, and that instead one had a nicely
> >> factored translation toolset wide open to the application
> >> programmer. Lisp and Forth begin to approach this ideal.
>
> Would you mind divulging the identity of your hilarious correspondent?

I got the impression that his original mail was distributed to the whole
Haskell mailing list. Anyway, I append it to this message.

Cheers,
Manuel

P.S.: As it seems that there are a number of people who didn't get the mail I
responded to, I CC this to the whole mailing list. Sorry for any duplicates.

---
Date: Sat, 30 Sep 1995 10:09:08 -0700
From: [EMAIL PROTECTED] (Jeff Prothero)
To: [EMAIL PROTECTED]
Subject: Re: Haskell 1.3: modules & module categories
Cc: [EMAIL PROTECTED]

Manuel Chakravarty <[EMAIL PROTECTED]> writes:

| [...] it is desirable to be able to
| restrict the access to some modules in a way that the
| compiler can control when a group of people is working
| in one module hierarchy. To illustrate this, assume
| that we classify the modules into different levels of
| abstraction, say, three levels: [...]

To me, one of the most regrettable characteristics of the Algolic family of
languages is the tendency of the compiler to turn into a giant black box of
facilities open only to an elite minority of compiler hackers, which then
begins inexorably sucking the entire programming support environment down its
event horizon.
I would much prefer that the concept of "compiler" in this sense did not
exist, and that instead one had a nicely factored translation toolset wide
open to the application programmer. Lisp and Forth begin to approach this
ideal.

At the least, it would be very nice if the compiler could be kept distinct
enough from the rest of the programming support environment that it doesn't
begin sucking what sound to me like logically separate project management
concerns (above) into its orbit. Would it be possible to define an interface
which allows the above sort of "Not if you're a left-handed programmer and
it's Tuesday" restrictions to be separately implemented and kept out of the
core language?

(To my mind, one of the successes of C -- as distinct from C++, say -- is that
it clearly defined what was and wasn't the task of the compiler, and stuck to
its guns, resulting in that very rare bird: an Algolic language with a stable
language definition and compiler.)
Resend: Why is Haskell 1.3 taking so long
[Due to an upgrade in Yale's mail system, this message (and a few others)
didn't get through to the Yale half of the Haskell mailing list - so I'm
resending. I guess both halves of the list have already seen Kevin Hammond's
response that it's not true... so it goes. -- Alastair]

Chris Dornan ([EMAIL PROTECTED]) asks:

> Has anybody considered limiting the 1.3 upgrade to the new
> much-improved I/O libraries?

There are several committee members who'd love to do just that. My (personal)
view on why we're not limiting the upgrade to just the I/O libraries is that
(most of) the other new features are necessary to properly support the I/O
libraries.

* The I/O monad seemed to be on the point of stealing the names >>=, >> and
  return for its exclusive use. Adding constructor classes avoided that.

* The LibTime module will benefit significantly from the addition of records
  (and may well benefit from the addition of strictness annotations). We could
  omit LibTime from Haskell 1.3 and produce Haskell 1.4 sometime next year
  with records and LibTime - but we'd rather get all the changes over with in
  one go.

We also felt unhappy about encouraging programmers to use monads without
providing some syntactic sugar to make them palatable. (Old-time monad hackers
have become used to the "gzinto" style of programming that goes with it - but
it's not much fun for beginning programmers or those who have to teach it.)

We could omit newtype at this stage, but we chose to add it in anticipation of
defining further standard libraries. The current design makes good use of
newtype to preserve abstraction within implementations and to enable the use
of type/constructor classes (you can't define an instance for a type synonym).

We could also omit other changes (deletion of n+k patterns, various prelude
changes, ...) but I don't think it would speed the process up.
It might break less code - we'd certainly welcome comments on which changes
will break a lot of code (especially if the benefit from changing it seems
insignificant).

Alastair Reid
Yale Haskell Project
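The point about type synonyms is easy to demonstrate: an instance can be
attached to a `newtype' but not to a synonym, which is what lets a library
keep a type abstract while still placing it in a class. A minimal sketch (the
class `Pretty' and the names here are my own invention, not from the 1.3
libraries):

```haskell
-- A distinct type wrapping Int, with no runtime cost.
newtype Age = Age Int

-- A synonym would not work here:
--   type AgeSyn = Int
--   instance Pretty AgeSyn ...   -- rejected: AgeSyn is just Int

class Pretty a where
  pretty :: a -> String

instance Pretty Age where
  pretty (Age n) = show n ++ " years"

main :: IO ()
main = putStrLn (pretty (Age 42))
```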
Re: Haskell 1.3: modules & module categories
Date: Mon, 2 Oct 1995 05:53:44 -0400
Reply-To: [EMAIL PROTECTED]
From: Manuel Chakravarty <[EMAIL PROTECTED]>

> To me, one of the most regrettable characteristics of
> the Algolic family of languages is the tendency of the
> compiler to turn into a giant black box of facilities
> open only to an elite minority of compiler hackers, which
> then begins inexorably sucking the entire programming
> support environment down its event horizon.
>
> I would much prefer that the concept of "compiler" in this
> sense did not exist, and that instead one had a nicely
> factored translation toolset wide open to the application
> programmer. Lisp and Forth begin to approach this ideal.

Would you mind divulging the identity of your hilarious correspondent?

David
Re: Haskell 1.3: modules & module categories
> > With present Haskell modules, it seems that `with'
> > automatically comes with `use' and clutters up your namespace.
> > That's why you sometimes need re-naming when importing.

Sorry, I missed that one. Manuel pointed out that with/use is already
contained in the `qualified names' proposal. When I'm comparing Haskell to
Ada, it seems that basically

    import Foo            =  with Foo; use Foo;
    import qualified Foo  =  with Foo;

Still, I'd like to have Ada's `use' on its own, as in

    with Text_Io;
    package Foo is
       ...
       procedure Bar is
          use Text_Io;
       begin
          ...
       end;
       ...
    end Foo;

And while we're at it, what about

  - nested modules
  - with possibly private sub-modules

similar to the Ada(-95) things.

--
Johannes Waldmann, Institut für Informatik, UHH, Jena, D-07740 Germany,
(03641) 630793, [EMAIL PROTECTED], http://www.minet.uni-jena.de/~joe/

... In the next issue: As a worker in a radio factory - friendship with the
son of a Luftwaffe general - the KGB watches the American's every step -
alarming suspicions during the rabbit hunt - unhappily in love with a
red-haired Jewess
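For comparison, here is the Haskell side of the correspondence in runnable
form (Data.List merely stands in for an arbitrary library module; the mapping
to Ada's with/use is as sketched above):

```haskell
import qualified Data.List        -- Ada: with Data_List;
import qualified Data.List as L   -- with, plus a shorter prefix
import Data.List (sort)           -- with + use, for this one name

main :: IO ()
main = do
  print (Data.List.sort [3, 1, 2 :: Int])  -- fully qualified, as after `with'
  print (L.nub [1, 1, 2 :: Int])           -- renamed prefix
  print (sort "bca")                       -- unqualified, as after `use'
```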
Re: Haskell 1.3: modules & module categories
Has the Ada solution been properly considered? What I really like about Ada packages is that you have `with' and `use' as separate operations (on namespaces). Typical (simplified) examples are:

    Put_Line ("Foo.");            -- won't work

    with Text_Io;
    Text_Io.Put_Line ("Foo.");    -- will work

    with Text_Io;
    Put_Line ("Foo.");            -- won't work

    with Text_Io; use Text_Io;
    Put_Line ("Foo.");            -- will work

    use Text_Io;
    Put_Line ("Foo.");            -- won't work

That is, `with Bar' makes module Bar's namespace accessible, but prefixed with that module's name. On the other hand, `use Bar' adds `Bar.' to a set of default prefixes that are tried when looking up names from then on. If an ambiguity arises, the compiler complains. You may resolve this by using the prefixed name. You can only `use' what you have `with'ed, and all `with's have to go at the very start of a module, so you (or a configuration management system) can easily check on what packages your code depends.

With present Haskell modules, it seems that `with' automatically comes with `use' and clutters up your namespace. That's why you sometimes need re-naming when importing.

(As I'm mostly using Gofer/Hugs, you may imagine that I'm not so sure about Haskell modules. However, I _do_ like the Ada solution. Please correct me if the above is basically wrong or inapplicable.)

-- Johannes Waldmann, Institut f\"ur Informatik, UHH, Jena, D-07740 Germany, (03641) 630793 [EMAIL PROTECTED] http://www.minet.uni-jena.de/~joe/
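[For comparison: the qualified-names proposal under discussion gives Haskell something close to Ada's `with'. A qualified import brings names into scope only under the module prefix, while an unqualified (or selective) import plays the role of `use'. A small sketch against today's standard libraries - the module names are modern, not 1.3-era:]

```haskell
import qualified Data.List as L   -- like Ada's `with': names visible only as L.sort etc.
import Data.Char (toUpper)        -- like a selective `use': toUpper is unqualified

demo :: [Int]
demo = L.sort [3, 1, 2]           -- must say L.sort; a bare `sort' is not in scope

shout :: String -> String
shout = map toUpper
```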
Re: Haskell 1.3: modules & module categories
> To me, one of the most regrettable characteristics of
> the Algolic family of languages is the tendency of the
> compiler to turn into a giant black box of facilities
> open only to an elite minority of compiler hackers, which
> then begins inexorably sucking the entire programming
> support environment down its event horizon.
>
> I would much prefer that the concept of "compiler" in this
> sense did not exist, and that instead one had a nicely
> factored translation toolset wide open to the application
> programmer. Lisp and Forth begin to approach this ideal.

I am a bit puzzled about this statement. I used to think about Lisp environments in just the same way that you characterize the compilers for Algol-style languages. The typical Common Lisp environment is one big engine with thousands of features, and it takes a rather long time to get to the status of an experienced user. Maybe it is a matter of familiarity with either style of environment.

> Would it be possible to define an interface which
> allows the above sort of "Not if you're a left-handed
> programmer and it's Tuesday" restrictions to be
> separately implemented and kept out of the core
> language?

I am not sure if you can separate these issues, but there is one important requirement. There must not be an easy or even moderately difficult way to circumvent the restrictions. As they say, there is always a bad programmer in your team.

Cheers, Manuel
Re: Haskell 1.3: modules & module categories
If you could email me the piece of code you are having this problem with, I can look at it and try to see what is wrong. -- Ming
Haskell 1.3: modules & module categories
Hi!

Talking to a friend, who is a project manager in a software company, about modules for Haskell, he made two comments that may be of interest to the current discussion.

(1) With regard to the idea of 99% hand-written interfaces (just mark everything that should go into the interface in a combined interface/implementation file) that I proposed and that was supported by Peter, my friend pointed out that this could make multiple implementations for one interface a bit more labour. You basically have to guarantee that the interfaces extracted out of the combined file for version one and version two of the implementation are equal, i.e., the interface is duplicated in both versions. Still, I find this less onerous than having a separate implementation and interface, for three reasons: (1) the common case is one implementation for one interface (better to shift the labour to the occasional case); (2) in the Modula-2 style there is also some duplication of code (procedure/function signatures); and (3) in the case of two implementations for one interface you have to deal with issues of consistency between the versions anyway.

(2) He pointed out that it is desirable to be able to restrict access to some modules in a way that the compiler can control when a group of people is working in one module hierarchy. To illustrate this, assume that we classify the modules into different levels of abstraction, say, three levels:

    level 3 modules
          |
          v
    level 2 modules
          |
          v
    level 1 modules

Now the modules in level 2 may use the modules from level 1; the modules from level 3 may use the modules from level 2, but *not* the modules from level 1---I think it is clear that such a case is rather frequent. Such access control may be easy to achieve when it is possible to deny the people working on level 3 access to the interfaces of level 1 (e.g., don't give them a copy of the interface, or use UNIX file permissions).
But this may often not be possible, for instance because some people are working on modules in both level 2 and level 3. So, we would like some way to specify that the compiler simply does not allow modules from level 3 to (directly) import modules from level 1.

Actually, C++ has a rather ad-hoc solution (are you surprised?) to this problem: `friends'. An object may be a friend of another object; the friend can then access (private) fields that are not visible to other, non-friend objects. The problem here is that the object providing some service has to specify all its friends by name. If a new friend has to be added, the used object has to be changed. Consider, in our example hierarchy, that you want to split some existing level 2 module into two modules; using friends, this requires changing modules in level 1, which is obviously bad.

Now, what about the following idea? Each module is an element of a module category. Such categories are named, and each module states to which category it belongs. Furthermore, a module lists all categories whose members may import it. In the example, we have three categories, say, Level1, Level2, and Level3. All modules in Level1 allow themselves to be imported from Level2, and all modules in Level2 allow themselves to be imported from Level3. This prevents modules in category Level3 from importing modules in category Level1, and it is easy for the compiler to check. Splitting a module does not require any changes in underlying categories.

Cheers, Manuel
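[The check Manuel describes is easy to state operationally. A toy model in Haskell - all names invented for illustration - of the rule a compiler or build tool would apply to each import edge:]

```haskell
-- A toy model of module categories: each category lists which
-- categories may import its modules (Manuel's example hierarchy).
type Category = String

allowedImporters :: Category -> [Category]
allowedImporters "Level1" = ["Level2"]
allowedImporters "Level2" = ["Level3"]
allowedImporters _        = []

-- May a module in category `importer' import one in category `imported'?
mayImport :: Category -> Category -> Bool
mayImport importer imported = importer `elem` allowedImporters imported
```

[Splitting a level 2 module changes nothing here: the rule mentions only categories, never module names, which is exactly the advantage over C++ friends.]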
Re: Haskell 1.3
JL writes, A formal treatment of parametricity in the presence of overloading has not been written up (Eric Meijer has talked of doing so). The problem with writing it up is that it's too simple: it reduces to a single observation, namely that the parametricity theorem coming from an overloaded type is the regular parametricity theorem that arises after performing the dictionary expansion. You can find this observation in Section 3.4 of the original `Theorems for Free'. So it is written up! -- P
Re: Haskell 1.3 (newtype)
Sebastian suggests using some syntax other than pattern matching to express the isomorphism involved in a newtype. I can't see any advantage in this.

Further, Simon PJ claims that if someone has written

    data Age = Age Int

    foo (Age n) = (n, Age (n+1))

then we want to be able to make a one-line change

    newtype Age = Age Int

leaving all else the same: in particular, no need to add twiddles, and no changes of the sort Sebastian suggests. I strongly support this! (No, Simon and I are not in collusion; indeed, we hardly ever talk to each other! :-)

Cheers, -- P
Re: Haskell 1.3 (newtype)
In a recent message Sebastian Hunt suggests a solution to the 'newtype' problem. Let me recall another approach which can cure several things at a time (probably introducing new problems, though). Some time ago Mark Jones wrote a paper "From Hindley-Milner Types to Modular Structures". He suggested introducing record types like

    type Point = {x, y: Real}

If we define records to be unlifted then the types Int and {int: Int} will be isomorphic and there is no reason to introduce a special 'newtype' syntax. There is a problem with class instances - type synonyms are not allowed there. Maybe the restriction could be relaxed to allow types defined as structures to be instantiated.

It's worth noting that with Mark's ideas the records of the 1.3 proposal can be replaced by something more general - another step forward.

I like neither 'newtype' nor the records of Haskell 1.3. Both mean a lot of syntax with little semantics. Even if Mark's ideas seem premature at this stage, it's worth working on them rather than introducing some bad syntax to be withdrawn from the language later. Removing even the worst syntax from a language is always a painful process, vide the n+k patterns.

Rysiek

PS. I've written the above without Mark's permission. Sorry, Mark, it was too difficult for me to wait...
Re: Changes in Haskell 1.3 (Qualified names)
The proposed change should not break too much code. The `.' is only treated specially after a constructor. This will break:

    tstpatp9 = (P_cons P_write.P_cons P_read) P_empty_list

because of the `P' in P_write, but this will be fine:

    proc_emulator f = prm_em.f

Since constructors appear relatively infrequently in compositions, this should be a rather painless change to the syntax. We did consider other punctuation besides `.', but almost everything is taken or is visually unappealing; `.' has a certain amount of tradition behind it.

John
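[This is essentially the rule that was adopted: `.' glued directly to a capitalised (module) prefix lexes as a qualified name, while a dot between two lower-case names is still composition even without spaces. A small sketch using a present-day module name:]

```haskell
import qualified Data.Char as C

-- C.toUpper (no spaces) is a qualified name; the second `.',
-- between `C.toUpper' and `reverse', is plain composition.
shout :: String -> String
shout = map C.toUpper . reverse
```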
Re: Haskell 1.3 (newtype)
I think that the following points have emerged from the recent discussion about the proposed newtype declaration:

1) Pattern matching against strict constructors will result in functions which are strict in the annotated constructor argument. For example:

    data T = A !Int

    f :: T -> Bool
    f (A n) = True

results in f such that

    f (A undefined) = undefined

This results in a loss of referential transparency because, before we can replace f (A e) by True, we must check that e is not undefined. Of course, a transformation in this direction can only make a program more defined, so perhaps we should be more worried by the fact that it would be unsafe to replace True by f (A e), in general.

2) It would be wrong to define the proposed

    newtype N = B G

declaration as being equivalent to

    data N = B !G

because a) it would restrict its use to G such that !G makes sense, and b) programmers would be tripped up by the strict pattern matching semantics above.

On reflection, I think I agree with point 2, so if the proposed newtype syntax is adopted it will need to be given a semantics independent of that of strict constructors, as Simon has described.

On the other hand, it also seems to me that the reason this discussion started is that the proposed newtype syntax hijacks the constructor syntax for a new purpose: the definition of an isomorphism. Since the declaration

    newtype N = B G

means "let N be isomorphic to G and let B :: G -> N be one half of the isomorphism", it is clear that B is not a constructor in the usual sense at all. In Haskell and its antecedents (though not in the general setting of term rewriting systems) it is well established that for

    f (B e) = ...

to be legal, B must be a constructor (with the consequence that f is strict). The proposed newtype syntax is a significant departure from this. Would it really be so inconvenient if pattern matching couldn't be used for the iso from G to N? How about a syntax which made both halves of the isomorphism explicit?
    newtype in :: N <-> G :: out

The example from the earlier postings would then be rendered as

    newtype in :: Arg <-> Int :: out

    foo :: Arg -> (Int, Arg)
    foo a = (n, in (n + 1))
      where n = out a

Implementations would be free to implement in and out as id

    foo a = (n, id (n + 1))
      where n = id a

and then magic them away:

    foo a = (a, a + 1)

Sebastian Hunt
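[Sebastian's explicit two-sided isomorphism is close to what record syntax over a newtype later provided: the constructor is one half of the iso and the field selector the other. A sketch - `In'/`out' stand in for his `in'/`out', since `in' is a Haskell keyword and constructors must be capitalised:]

```haskell
newtype Arg = In { out :: Int }   -- In :: Int -> Arg, out :: Arg -> Int

foo :: Arg -> (Int, Arg)
foo a = (n, In (n + 1))
  where n = out a                 -- projection via `out', no pattern match
```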
Re: Changes in Haskell 1.3 (Qualified names)
I just had a look at the proposed changes for Haskell 1.3 again, being stuck in what I was otherwise trying to do (more of that later, maybe), when I found the qualified names:

> Qualified names are defined in the lexical syntax. Thus,
> `Foo.a' and `Foo . a' are quite different. No whitespace is permitted in
> a qualified name. Symbols may also be qualified: `Prelude.+' is
> an operator which can be used in exactly the same manner as `+'.

Does this mean spaces must be put around `.' when used as an operator? Or will the parser make use of semantic information to distinguish qualified names from function composition when `.' is not surrounded by whitespace?

I checked some of the code I had written in a functional language (Miranda) to see how I had used the function composition operator. I found I really never put spaces around it. And it was also one of the commonest operators I used... maybe the 3rd most commonly used altogether. Maybe this style will be broken in Haskell 1.3?

    tstpatp9 = (P_cons P_write.P_cons P_read) P_empty_list

    proc_emulator f = prm_em.f

I guess one of the reasons for me not using spaces around `.' (which I with some regularity do around other operators) is that I am not _used_ to seeing spaces around `.'. That is because previously I have used it in C without spaces around it. Spaces around it are allowed in C though: `.' is just an operator like `+' or `->' in C.

Maybe some other operator could be used for qualified names, maybe some operator that could also be used for record selection? If some existing operator must be "overloaded", why not use one that is (presumably) less popular than `.', for example `~'. (Does this ever occur like a~b ?)

Best regards, Sverker Nilsson

S.Nilsson Computer System AB
Ekholmsv.28B
S-582 61 LINKOPING
Re: Haskell 1.3 (newtype)
Phil says:

| I think its vital that users know how to declare a new isomorphic
| datatype; it is not vital that they understand strictness declarations.
| Hence, I favor that
|
|     newtype Age = Age Int
|     data Age = Age !Int
|
| be synonyms, but that both syntaxes exist.
|
| This is assuming I have understood Lennart correctly, and that
|
|     foo (Age n) = (n, Age (n+1))
|     foo' a = (n, Age (n+1)) where (Age n) = a
|
| are equivalent when Age is declared as a strict datatype. Unlike
| Sebastian or Simon, I believe it would be a disaster if for a newtype
| one had to distinguish these two definitions.

I agree that it is rather undesirable for them to differ. If someone had declared a *non-strict* version like this:

    data Age = Age Int

    foo (Age n) = (n, Age (n+1))

(where foo is patently non-strict in n), and then just wanted to say "do away with the Age constructor", I'd like it to be a one-line change (data --> newtype), rather than also having to add a twiddle to every pattern match:

    foo ~(Age n) = (n, Age (n+1))

[which is equivalent to using a where binding]

In effect, newtype could be explained as (a) a data decl with a !, and (b) adding a ~ to every pattern match. This is a hard one to call: which version actually requires least explanation?!

Simon
Records in Haskell 1.3
Hi all,

I'd like to make a few comments on the proposal for simple records in Haskell 1.3.

* The possibility of having polymorphic field types has been left out of the proposal. Polymorphic field types essentially bring second-order polymorphism into the language, by allowing function arguments to have polymorphic types (wrapped in a record). IMHO, this is an important extension to the language and adds very little complexity to the typechecker.

* The operator := was chosen for field initialisation and update, when the equals sign (=) would (almost) be sufficient. It seems the only reason we cannot use equals is the new syntax for anonymous updates:

    f = pointX := 1

which is equivalent to

    f = \p -> p with pointX := 1

It seems a good compromise to give up this abbreviation and use the equals sign for field update.

* There was some discussion before the proposal of having an explicit field selection operator (such as '.', '->', or '#') so that the namespace for field names can be kept separate from the general function/value namespace. For the record, I still prefer '->' (pun intended :-)

Cheers, Simon

-- Simon Marlow [EMAIL PROTECTED]
Research Assistant http://www.dcs.gla.ac.uk/~simonm/
finger for PGP public key
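[For reference, the field syntax Haskell eventually adopted did settle on the plain equals sign for both construction and update, much as argued above; selection uses ordinary function application rather than a dedicated operator. A minimal sketch in later-standard Haskell (the Point fields are invented):]

```haskell
data Point = Point { pointX :: Int, pointY :: Int }

-- Update uses `=' inside braces; no separate `:=' operator is needed.
east :: Point -> Point
east p = p { pointX = pointX p + 1 }
```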
Re: Haskell 1.3 (newtype)
On Wed, 13 Sep 1995 [EMAIL PROTECTED] wrote:

> Well, I'm glad to see I provoked some discussion! ...
> Why should foo evaluate its argument? It sounds to me like
> Lennart is right, and I should not have let Simon lead me astray! ...
> This is assuming I have understood Lennart correctly, and that
>
> foo (Age n) = (n, Age (n+1))
> foo' a = (n, Age (n+1)) where (Age n) = a
>
> are equivalent when Age is declared as a strict datatype. Unlike
> Sebastian or Simon, I believe it would be a disaster if for a newtype
> one had to distinguish these two definitions.

I don't see how these two can be equivalent, unless a special case is made in the semantics for data types with a single constructor when the constructor happens to be strict. Consider

    data G = F Int | D !Int

    f :: G -> Bool
    f (D _) = True
    f (F _) = False

If Lennart is right about foo, doesn't it follow that f (D undefined) = True? In which case, since D is strict, we have

    f undefined = f (D undefined) = True

and so, by monotonicity of f, f v = True for all v and, in particular,

    f (F e) = True

This can't be right, surely?

Sebastian
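[For the record, the semantics that stuck (and that GHC implements) dissolves this paradox at the other end: a strict field is forced when the constructor is *applied*, so `D undefined' is already undefined before f ever matches on it, and the premise f (D undefined) = True fails. A self-contained check, taking the declaration with the strict field on D (which is what the argument needs); only the non-diverging cases are asserted:]

```haskell
data G = F Int | D !Int   -- D's field is strict, F's is not

f :: G -> Bool
f (D _) = True
f (F _) = False

-- f (F undefined) = False : F's field is lazy, so it is never forced.
-- f (D undefined) diverges: the ! forces the field when D is applied,
-- so D undefined = undefined and no equation ever matches.
```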
Re: Haskell 1.3 (newtype)
Well, I'm glad to see I provoked some discussion!

Simon writes:

    Lennart writes:
    | So if we had
    |
    |     data Age = Age !Int
    |     foo (Age n) = (n, Age (n+1))
    |
    | it would translate to
    |
    |     foo (MakeAge n) = (n, seq MakeAge (n+1))
    |
    | [makeAge is the "real" constructor of Age]

    Indeed, the (seq MakeAge (n+1)) isn't eval'd till the second component
    of the pair is. But my point was rather that foo evaluates its argument
    (MakeAge n), and hence n, as part of its pattern matching. Hence foo is
    strict in n.

Why should foo evaluate its argument? It sounds to me like Lennart is right, and I should not have let Simon lead me astray!

I think its vital that users know how to declare a new isomorphic datatype; it is not vital that they understand strictness declarations. Hence, I favor that

    newtype Age = Age Int
    data Age = Age !Int

be synonyms, but that both syntaxes exist.

This is assuming I have understood Lennart correctly, and that

    foo (Age n) = (n, Age (n+1))
    foo' a = (n, Age (n+1)) where (Age n) = a

are equivalent when Age is declared as a strict datatype. Unlike Sebastian or Simon, I believe it would be a disaster if for a newtype one had to distinguish these two definitions.

Cheers, -- P
Re: Haskell 1.3 (newtype)
Lennart writes:

| So if we had
|
|     data Age = Age !Int
|     foo (Age n) = (n, Age (n+1))
|
| it would translate to
|
|     foo (MakeAge n) = (n, seq MakeAge (n+1))
|
| [makeAge is the "real" constructor of Age]
|
| Now, surely, seq does not evaluate its first argument when the
| closure is built, does it? Not until we evaluate the second component
| of the pair is n evaluated.

Indeed, the (seq MakeAge (n+1)) isn't eval'd till the second component of the pair is. But my point was rather that foo evaluates its argument (MakeAge n), and hence n, as part of its pattern matching. Hence foo is strict in n.

Sebastian writes:

| Is it really a good idea to extend the language simply to allow foo and
| foo' to be equivalent? The effect of foo' can still be achieved if Age is
| a strict data constructor:
|
|     data Age = Age !Int
|
|     foo'' :: Age -> (Int, Age)
|     foo'' a = (n, Age (n+1)) where (Age n) = a
|
| and compilers are free (obliged?) to represent a value of type Age by an
| Int.

Indeed, it's true that foo'' does just the right thing. Furthermore, I believe it's true that given the decl

    data T = MkT !S

the compiler is free to represent a value of type T by one of type S (no constructor etc).

Here are the only real objections I can think of to doing "newtype" via a strict constructor. None are fatal, but they do have a cumulative effect.

1. It requires some explanation... it sure seems a funny way to declare an ADT!

2. The programmer would have to use let/where bindings to project values from the new type to the old, rather than using pattern matching. Perhaps not a big deal.

3. We would *absolutely require* to make (->) an instance of Data. It's essential to be able to get

    data T = MkT !(Int -> Int)

4.
We would only be able to make a completely polymorphic "newtype" if we added a quite-spurious Data constraint, thus:

    data Data a => T a = MkT !a

(The Data is spurious because a value of type (T a) is going to be represented by a value of type "a", and no seqs are actually going to be done.)

5. We would not be able to make a newtype at higher order:

    data T k = MkT !(k Int)

because there's no way in the language to say that (k t) must be in class Data for all t. [This is a somewhat subtle restriction on where you can put strictness annotations, incidentally, unless I've misunderstood something.]

Simon
Re: Haskell 1.3 (Bounded;fromEnum;type class synonyms)
Dear Sverker Nilsson,

Thanks for your message - interesting ideas and interesting questions. [I'm copying the reply to the Haskell mailing list in case anyone wishes to support your suggestions.]

First, one of Haskell's annoying features is that the scope of a type variable in a type signature or instance heading only extends over the signature. So, when you want to write:

> instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
>   enumFrom c = map fromInt [toInt c .. toInt (maxVal :: a)]

it doesn't work (because the "a" isn't in scope during the declarations) - you have to use "asTypeOf" instead:

> instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
>   enumFrom c = map fromInt [toInt c .. toInt (maxVal `asTypeOf` c)]

While developing something like the proposed "Bounded" class, you introduced separate classes for minVal and maxVal, observing:

> Something having a minimum value, in my view, didn't necessarily
> imply it would have a maximum value.

Yes, perfectly true. The best example is that there's a minimal list (the empty list) but, even though there's a maximal Char (say), there's no maximal list of characters.

Our primary motivation for adding Bounded is to clean up the {min,max}{Char,Int} situation and make the derived Enum instances slightly more regular (similar in spirit to your definitions above). For this purpose, insisting on having both a min and a max isn't a problem. However, for other purposes, having one bound but not the other is certainly possible and maybe useful. (I agree that defining a bogus instance in which "minVal" (say) is defined but "maxVal" is undefined or has a bogus value is at least untidy and at worst a bug waiting to happen. I tried (and failed) to get the Text instance of (a -> b) removed from the Prelude for this reason.)

The major disadvantage of separating the two is that it introduces even more classes.
If you read the preludechanges document carefully, you'll see that (even at this late stage) these are only proposed changes. Glasgow argue that it's hard enough to keep Ix and Enum separate in your mind - adding another class can only worsen things.

You were then surprised and disturbed to find that this isn't legal Haskell:

> class (MinVal a, MaxVal a) => Bounded a
>
> instance Bounded T where
>   maxVal = T3
>   minVal = T1

There was a proposal to make this legal. As far as I know, there are no technical problems here - I guess it just got forgotten about (or the proposer decided that Haskell 1.3 had too many changes in it already!)

> * Should Bounded be derived from Ord?
>
> The Bounded class that was suggested for Haskell 1.3 was derived from
> Ord. Myself playing with similar things I derived MinVal and MaxVal
> from nothing - I thought this more general. Maybe the reason for
> having Bounded derived from Ord was to imply that its functions shall
> satisfy certain laws, probably as being min/max as defined by the
> ordering functions in Ord. But as I don't see how this can be
> guaranteed by deriving Bounded from Ord, I would think that it could
> as well be standalone (or derived from something like MinBound and
> MaxBound if possible); for more generality and less dependency between
> the classes in the system.

Yes, the sole reason is because it seemed tidier to specify Ord - without knowing which comparison is being used, it doesn't make much sense to say you have a "maximum value".

> For example, the new proposal says:
>
> > ...
> > Programmers are free to define a class for partial orderings; here, we
> > simply state that Ord is reserved for total orderings.
>
> That seems to imply also that a programmer should not use Bounded on
> types that have no total ordering. I believe this might be an unnecessary
> restriction.

It certainly looks that way.

> > The names fromEnum and toEnum are misleading since
> > their types involve both Enum and Bounded.
> > We couldn't face writing
> > fromBoundedEnum and toBoundedEnum. Suggestions
> > welcome.
>
> Maybe names like ToInt and FromInt could be used for this?
>
> How about the following, assuming the proposed diff and succ functions:
>
> class (Bounded a, Enum a) => ToInt a where toInt :: a -> Int [...]
> class (Bounded a, Enum a) => FromInt a where fromInt :: Int -> a [...]

These names look good. Three _minor_ concerns:

1) It introduces even more standard classes to confuse programmers with. Why allow the programmer to override them?

2) Several implementations have added a non-standard method

    fromInt :: Int -> a

to the Num class to avoid unnecessary uses of fromInteger. However, I think most n
Re: Haskell 1.3 (newtype)
On Tue, 12 Sep 1995, Lennart Augustsson wrote:

> The posted semantics for strict constructors, illustrated by this example
> from the Haskell 1.3 post, is to insert seq.
>
> > data R = R !Int !Int
> >
> > R x y = seq x (seq y (makeR x y)) -- just to show the semantics of R
>
> So if we had
>
>     data Age = Age !Int
>     foo (Age n) = (n, Age (n+1))
>
> it would translate to
>
>     foo (MakeAge n) = (n, seq MakeAge (n+1))
>
> [makeAge is the "real" constructor of Age]

I had assumed (as Simon seems to) that the semantics of pattern matching against a strict constructor would accord with the following:

1. matching a simple pattern involves evaluating the expression being matched to the point that its outermost constructor is known

2. for strict constructors this must result in the annotated constructor argument(s) being evaluated

From what Lennart says, this is not the intended semantics. So what *is* the intended semantics?

Sebastian Hunt
Re: Haskell 1.3 (Bounded;fromEnum;type class synonyms)
* Playing around, learning the basics, reinventing the wheel...

I had been playing around with some classes, primarily to learn for myself, being new to the Haskell language, when I got the report on the current status of Haskell 1.3. The classes I had played with had some similarities to some of the proposals for the new prelude, yet I had made them in a quite different way. Trying to combine the two styles, I ran into an unexpected problem. This problem I am naive enough to believe could be solved by a simple language extension.

Using Gofer, I had made some classes that could be used for implementing ordering and other things for enumeration (data T = T1 | T2 | T3) types, but not restricted to those. I made 4 minimal classes with just 1 function in each. (I thought this would be most general. Something having a minimum value, in my view, didn't necessarily imply it would have a maximum value.) So:

    class FromInt a where fromInt :: Int -> a
    class ToInt a where toInt :: a -> Int
    class MaxVal a where maxVal :: a
    class MinVal a where minVal :: a

    -- I then used this as follows:

    data T = T1 | T2 | T3

    instance ToInt T where
      toInt e = case e of
        T1 -> 1
        T2 -> 2
        T3 -> 3

    instance Eq T where
      a == b = toInt a == toInt b

    instance Ord T where
      a <= b = toInt a <= toInt b

    -- And so on. The MaxVal and MinVal classes were also used to make a
    -- generic implementation of a bounded Enum class, generalizing how it
    -- was made in the Gofer prelude for Char:

    instance (FromInt a, ToInt a, MinVal a, MaxVal a) => Enum a where
      enumFrom c = map fromInt [toInt c .. toInt (maxVal `asTypeOf` c)]
      enumFromThen c c' =
        map fromInt [toInt c, toInt c' .. toInt (lastVal `asTypeOf` c)]
        where lastVal = if c' < c then minVal else maxVal

    -- This worked to my great delight! And I had begun to learn the basics
    -- of the type system in Haskell. My only problem was that I had to use
    -- (maxVal `asTypeOf` c) instead of (maxVal::a). I believe the reason
    -- for this might be clear when I learn more. Somebody have a clue?
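[The single-method classes above work in standard Haskell essentially unchanged; the `asTypeOf` trick really is all that is needed to pin down the type of maxVal without scoped type variables. A minimal runnable sketch - the helper upTo is invented for illustration, standing in for the enumFrom instance:]

```haskell
class ToInt  a where toInt  :: a -> Int
class MaxVal a where maxVal :: a

data T = T1 | T2 | T3

instance ToInt T where
  toInt T1 = 1
  toInt T2 = 2
  toInt T3 = 3

instance MaxVal T where
  maxVal = T3

-- Enumerate the toInt images from a given element up to the type's maximum.
-- (maxVal `asTypeOf` c) fixes maxVal's type from c, no annotation needed.
upTo :: (ToInt a, MaxVal a) => a -> [Int]
upTo c = [toInt c .. toInt (maxVal `asTypeOf` c)]
```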
* Running into a problem: type class synonyms are not synonymous?

Then, I got the report on the developments of Haskell 1.3 and began to read it with great curiosity. I then found the Bounded class, containing functions corresponding to MinVal and MaxVal. A question then occurred to me: Why not have separate classes as I had done? Would not that perhaps be more general, increasing the possibilities for reuse? (Without having to stub out one of minBound or maxBound if you use it for a type without one of them.) On the other hand, I saw the convenience of having both minBound and maxBound in the same class, decreasing the number of classes that have to be mentioned in various cases. But I thought, then, why not derive the Bounded class from MinVal and MaxVal - would not that then be equivalent? So I tried

    class (MinVal a, MaxVal a) => Bounded a

    -- This was allowed, but then...

    instance Bounded T where
      maxVal = T3
      minVal = T1

That didn't work! (Gofer said: ERROR "tst.gs" (line 45): No member "maxVal" in class "Bounded")

Maybe I had done something wrong, or Gofer does not allow something that would be allowed in Haskell? I suspect however that I am simply not supposed to do this in either Haskell or Gofer... Instead I had to use two separate instantiations, exactly as before I declared the Bounded class:

    instance MinVal T where minVal = T1
    instance MaxVal T where maxVal = T3

This seems somewhat unnecessary; wouldn't it be quite possible for a compiler to transform the instantiation of Bounded into the two instantiations of MinVal and MaxVal? Maybe this would be a useful development of Haskell?

* Should Bounded be derived from Ord?

The Bounded class that was suggested for Haskell 1.3 was derived from Ord. Myself playing with similar things, I derived MinVal and MaxVal from nothing - I thought this more general.
Maybe the reason for having Bounded derived from Ord was to imply that its functions shall satisfy certain laws, probably as being min/max as defined by the ordering functions in Ord. But as I don't see how this can be guaranteed by deriving Bounded from Ord, I would think that it could as well be standalone (or derived from something like MinBound and MaxBound if possible); for more generality and less dependency between the classes in the system.

For example, the new proposal says:

> ...
> Programmers are free to define a class for partial orderings; here, we
> simply state that Ord is reserved for total orderings.

That seems to imply also that a programmer should not use Bounded on types that have no total ordering. I believe this might be an unnecessary restrict
Re: Haskell 1.3 (lifted vs unlifted)
John Hughes mentioned a deficiency of Haskell:

    OK, so it's not the exponential of a CCC --- but Haskell's tuples
    aren't the product either, and I note the proposal to change that
    has fallen by the wayside.

and Phil Wadler urged to either lift BOTH products and functions, or none of them.

My two pence: If functions/products are not products and exponentials of a CCC, you should aim for the next best thing: an MCC, a monoidal closed category. But Haskell's product isn't even monoidal: there is no type I such that A*I and A are isomorphic. The obvious candidate (in a lazy language) would be the empty type 0, but A*0 is not isomorphic to A but to the lifting of A.

Another problem: the function space A*B -> C should be naturally isomorphic to A -> (B -> C). What does the iso look like? One half is the obvious curry function:

    curry f x y = f (x, y)

But what is the other half? Apparently, it should be either

    uncurry1 f (x, y) = f x y

or

    uncurry2 f (~(x, y)) = f x y

Which one is right depends on which one establishes the isomorphism. Consider the definition

    f1 (x, y) = ()

Now:

    uncurry1 (curry f1) undef = undef = f1 undef

while on the other hand:

    uncurry2 (curry f1) undef = curry f1 (p1 undef) (p2 undef)
                              = f1 (p1 undef, p2 undef)
                              = ()  =/=  f1 undef

This suggests that uncurry2 is wrong and uncurry1 is right, but for

    f2 (~(x, y)) = ()

the picture is just the other way around. BTW it doesn't help to employ "seq" in the body of curry. Looks rather messy. Can some of this be salvaged somehow?

-- Stefan Kahrs
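[Both candidate halves can be tried directly in Haskell; only the lazy variant lets an undefined pair round-trip through curry. A self-contained sketch (curry' is primed to avoid clashing with the Prelude's curry; only the terminating case is asserted):]

```haskell
curry' :: ((a, b) -> c) -> a -> b -> c
curry' f x y = f (x, y)

uncurry1, uncurry2 :: (a -> b -> c) -> (a, b) -> c
uncurry1 f (x, y)  = f x y   -- forces the pair to expose its constructor
uncurry2 f ~(x, y) = f x y   -- lazy pattern: the pair is never inspected

f1 :: (a, b) -> ()
f1 (x, y) = ()

-- uncurry2 (curry' f1) undefined = ()  : the rebuilt pair matches f1's pattern
-- uncurry1 (curry' f1) undefined diverges: the pattern forces the pair itself
```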
Re: Haskell 1.3 (newtype)
Simon, I think you're mistaken. Simon writes:

> newtype Age = Age Int
>
> foo :: Age -> (Int, Age)
> foo (Age n) = (n, Age (n+1))
>
> Now, we intend that a value of type (Age Int) should be represented by
> an Int. Thus, apart from the types involved, the following program should
> be equivalent:
>
> type Age' = Int
>
> foo' :: Age' -> (Int, Age')
> foo' n = (n, n+1)
>
> So is foo' strict in n? No, it isn't. What about foo? If newtype is just a
> strict data constructor, then it *is* strict in n.

The posted semantics for strict constructors, illustrated by this example from the Haskell 1.3 post, is to insert seq:

> data R = R !Int !Int
>
> R x y = seq x (seq y (makeR x y)) -- just to show the semantics of R

So if we had

   data Age = Age !Int
   foo (Age n) = (n, Age (n+1))

it would translate to

   foo (MakeAge n) = (n, seq (n+1) (MakeAge (n+1)))

[MakeAge is the "real" constructor of Age.] Now, surely, seq does not evaluate its first argument when the closure is built, does it? Not until we evaluate the second component of the pair is n evaluated. The other behaviour of strict constructors would worry me, since we would lose referential transparency. I'm not opposing newtype, but an ordinary datatype with one constructor with one strict argument is very similar. The only way to distinguish them (and it is debatable if this is what you want) is like this:

   data T = T !Int
   f (T _) = True

   newtype T' = T' Int
   f' (T' _) = True

Now we get

   f undefined ==> undefined
   f' undefined ==> True

-- Lennart
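Lennart's distinguishing example can be checked directly in a modern GHC; a minimal sketch:

```haskell
data T = T !Int        -- ordinary datatype, one strict field
newtype T' = T' Int    -- newtype: the match is a no-op at runtime

f :: T -> Bool
f (T _) = True         -- matching any data constructor forces the argument

f' :: T' -> Bool
f' (T' _) = True       -- no constructor exists at runtime; nothing is forced
```

In GHC, `f undefined` diverges (a data-constructor match forces its argument, strict field or not), while `f' undefined` yields True.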
Re: Haskell 1.3 (newtype)
On Tue, 12 Sep 1995, Simon L Peyton Jones wrote:

> Phil writes:
> | Make newtype equivalent to a datatype with one strict constructor.
> | Smaller language, more equivalences, simpler semantics, simpler
> | implementation. An all around win!
>
> I believe it would be a mistake to do this! Consider:
>
> newtype Age = Age Int
>
> foo :: Age -> (Int, Age)
> foo (Age n) = (n, Age (n+1))
>
> Now, we intend that a value of type (Age Int) should be represented by
> an Int. Thus, apart from the types involved, the following program should
> be equivalent:
>
> type Age' = Int
>
> foo' :: Age' -> (Int, Age')
> foo' n = (n, n+1)

Is it really a good idea to extend the language simply to allow foo and foo' to be equivalent? The effect of foo' can still be achieved if Age is a strict data constructor:

   foo'' :: Age -> (Int, Age)
   foo'' a = (n, Age (n+1))
     where (Age n) = a

and compilers are free (obliged?) to represent a value of type Age by an Int. It might even be rather confusing if foo were not strict, given that it appears to pattern match on its argument. (Of course, you could equally argue that it is confusing that

   case Foo undefined of Foo _ -> True  =  undefined

in a lazy language, but that can't be helped if strict constructors are allowed - unless some lexical distinction is introduced, e.g. strict constructor names must start with `!'.) Why not keep things simple and, as Ryszard Kubiak suggests, abandon the newtype syntax altogether?

Sebastian Hunt
Re: Haskell 1.3 (newtype)
In a recent message Phil Wadler argues:

> ...
> Make newtype equivalent to a datatype with one strict constructor.
> Smaller language, more equivalences, simpler semantics, simpler
> implementation. An all around win!

I strongly agree with Phil, and suggest that because of these equivalences the extra syntax for 'newtype' simply be omitted. It doesn't make sense to have syntax with so little semantic significance. Regards, Rysiek
Re: Haskell 1.3 (newtype)
Simon offers a compelling reason to make newtype distinct from a strict datatype with one constructor. And a semantics to boot! I withdraw my objection. -- P

PS. The informal explanation might be modified to explain why newtype must be distinct from a strict datatype.

   strict datatype:   case Foo undefined of Foo _ -> True  =  undefined
   newtype:           case Foo undefined of Foo _ -> True  =  True

The latter must be the right thing to do (as pointed out by Simon) because removing Foo should not change the meaning:

   case undefined of _ -> True  =  True

Cheers, -- P
Re: Haskell 1.3 (newtype)
Phil writes:

| By the way, with `newtype', what is the intended meaning of
|
|   case undefined of Foo _ -> True ?
|
| I cannot tell from the summary on the WWW page. Defining `newtype'
| in terms of `datatype' and strictness avoids any ambiguity here.
|
| Make newtype equivalent to a datatype with one strict constructor.
| Smaller language, more equivalences, simpler semantics, simpler
| implementation. An all around win!

I believe it would be a mistake to do this! Consider:

   newtype Age = Age Int

   foo :: Age -> (Int, Age)
   foo (Age n) = (n, Age (n+1))

Now, we intend that a value of type Age should be represented by an Int. Thus, apart from the types involved, the following program should be equivalent:

   type Age' = Int

   foo' :: Age' -> (Int, Age')
   foo' n = (n, n+1)

So is foo' strict in n? No, it isn't. What about foo? If newtype is just a strict data constructor, then it *is* strict in n. Here's what I wrote a little while ago:

"This is all very well, but it needs a more formal treatment. As it happens, I don't think it's difficult. In the rules for case expressions (Figs 3 & 4 in the 1.2 report) we need to say that the *dynamic* semantics of

   case e of { K v -> e1; _ -> e2 }

is

   let v = e in e1

if K is the constructor of a "newtype" declaration. (Of course this translation breaks the static semantics.) Similarly, the dynamic semantics of (K e) is just that of "e", if K is the constructor of a "newtype" decl."

Does that make the semantics clear, Phil? Simon
Re: Haskell 1.3 (newtype)
The design of newtype appears to me incorrect. The WWW page says that declaring

   newtype Foo = Foo Int

is distinct from declaring

   data Foo = Foo !Int

(where ! is a strictness annotation) because the former gives

   case Foo undefined of Foo _ -> True  =  True

and the latter gives

   case Foo undefined of Foo _ -> True  =  undefined.

Now, on the face of it, the former behaviour may seem preferable. But trying to write a denotational semantics is a good way to get at the heart of the matter, and the only way I can see to give a denotational semantics to the former is to make `newtype' define a LIFTED type, and then to use irrefutable pattern matching. This seems positively weird, because the whole point of `newtype' is that it should be the SAME as the underlying type.

By the way, with `newtype', what is the intended meaning of

   case undefined of Foo _ -> True ?

I cannot tell from the summary on the WWW page. Defining `newtype' in terms of `datatype' and strictness avoids any ambiguity here. Make newtype equivalent to a datatype with one strict constructor. Smaller language, more equivalences, simpler semantics, simpler implementation. An all around win! Cheers, -- P
Re: Haskell 1.3 (lifted vs unlifted)
To the Haskell 1.3 committee, Two choices in the design of Haskell are:

   Should products be lifted?
   Should functions be lifted?

Currently, the answer to the first is yes, and to the second is no. This is ad hoc in the extreme, and I am severely embarrassed that I did not recognise this more clearly at the time we first designed Haskell. Dear committee, I urge you, don't repeat our earlier mistakes! John Hughes makes a compelling case for yes; and mathematical cleanliness makes a compelling case for no. I slightly lean toward yes. (John is a persuasive individual!) But unless someone presents a clear and clean argument for answering the two questions differently, please answer them consistently.

If both questions are answered yes, then there is a choice as to whether or not to have a Data class. Indeed, there are two choices:

   Should polymorphic uses of seq be marked by class Data?
   Should polymorphic uses of recursion be marked by class Rec?

John Launchbury and Ross Paterson have written a beautiful paper urging yes on the latter point; ask them for a copy. Here, I have a mild preference to answer both questions no, as I think the extra complication is not worthwhile. But again, please answer them consistently. Cheers, -- P
Re: Haskell 1.3
Let me make one more attempt to persuade the committee to change the way strictness annotations are to be introduced. First of all, let's recognise that strictness annotations and the seq function are of enormous importance; this is a vital extension to the language, not a small detail. Space debugging consists to quite a large extent of placing applications of seq correctly, and we all know what dramatic effects space debugging has been able to achieve. The strictness features are going to be very heavily used in the future.

Recording uses of polymorphic strictness annotations using class Data has both advantages and disadvantages. A big disadvantage is that curing a space bug may change the types of many functions in many modules, which at the least may require a lot of recompilation. The programmer who likes to state the type of each function will be especially hard hit, of course, which will unfortunately discourage such a style. But class Data seems to be vital for cheap deforestation, which is such an important optimisation as to outweigh the disadvantages.

However, it is an independent question whether or not strictness annotations should be applicable to function types. And this is where I disagree with the committee. To quote `Introducing Haskell 1.3',

   Every data type, except ->, is a member of the Data class.

In other words, in Haskell 1.3

   FUNCTIONS ARE NOT FIRST-CLASS CITIZENS

To design a functional language today, in which this is true, is in my view deeply mistaken. In the past, I've argued that it will be very frustrating for those programmers who do discover they need to apply seq to a function in order to cure a space bug, to find that they are unable to do so. Even more seriously, programmers weighing up a choice of representation for an abstract datatype, choosing between a representation as a function or as a `Data' type, will know that if they choose the function then problems with space debugging may lurk in the future.
Excluding (->) from class Data is a step away from true `functional' programming towards a style in which higher-order functions are just a kind of macro. I see a very great cost in such a philosophical change, and I do not see that the arguments against strictly evaluating function values are so very compelling.

   Implementation difficulties? hbc has provided it for years, and even
   under the STG machine is the problem so very much harder than handling
   shared partial applications correctly?

   Semantic difficulties? The semantics of lifted function spaces are
   perfectly well defined. OK, so it's not the exponential of a CCC ---
   but Haskell's tuples aren't the product either, and I note the proposal
   to change that has fallen by the wayside.

   Weaker strictness analysis? I'd like to hear the effect quantified. How
   much slower will Haskell 1.3 run if function spaces are lifted in the
   semantics? Will it be measurable? I'm prepared to pay a few percent.

So here's my proposal: change `Introducing Haskell 1.3' to read

   Every data type, including ->, is a member of the Data class.

John Hughes
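The space-debugging role that Hughes assigns to seq can be illustrated with the classic accumulator leak (a sketch in modern Haskell; sumLeaky and sumSeq are invented names):

```haskell
-- Without seq, the accumulator builds a chain of suspended additions
-- proportional to the list length.
sumLeaky :: [Int] -> Int
sumLeaky = go 0
  where go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

-- Placing one seq forces the accumulator at each step, keeping the
-- computation in constant space.
sumSeq :: [Int] -> Int
sumSeq = go 0
  where go acc []     = acc
        go acc (x:xs) = let acc' = acc + x in acc' `seq` go acc' xs
```

Both compute the same sum; only the second runs in constant space without optimisation. This is exactly the kind of fix that becomes unavailable when the value to be forced is a function and seq does not apply to it.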
Re: Haskell 1.3
I would like to respond to John's note. My response is largely positive, though I disagree with a couple of points.

> However, it is an independent question whether or not strictness annotations
> should be applicable to function types. And this is where I disagree with
> the committee. To quote `Introducing Haskell 1.3',
>
>   Every data type, except ->, is a member of the Data class.
>
> In other words, in Haskell 1.3
>
>   FUNCTIONS ARE NOT FIRST-CLASS CITIZENS

I cannot agree here. Functions are not members of the equality class either, but that does not demote them to second class citizens. However, John may be right in suggesting that people will become more reluctant to use functions as values if they cannot force their evaluation.

> I see a very great cost in such a philosophical change, and I do not see
> that the arguments against strictly evaluating function values are so very
> compelling.
>
>   Implementation difficulties? hbc has provided it for years, and even
>   under the STG machine is the problem so very much harder than handling
>   shared partial applications correctly?

I haven't checked hbc, but I would be interested if someone would confirm that strictification of functions works properly. It didn't use to in LML.

>   Semantic difficulties? The semantics of lifted function spaces are
>   perfectly well defined. OK, so it's not the exponential of a CCC ---
>   but Haskell's tuples aren't the product either, and I note the proposal
>   to change that has fallen by the wayside.

This is probably an important point. I see there being value in two sorts of functions: lifted and non-lifted (or equivalently boxed and unboxed). A lifted function may be expressed as a computation which delivers a function, just as lifted integers are computations which deliver integers. Under this view it would be entirely in keeping with the rest of Haskell for the standard functions to be lifted, and to leave open the possibility in the future of introducing unlifted functions.
> So here's my proposal: change `Introducing Haskell 1.3' to read
>
>   Every data type, including ->, is a member of the Data class.

I am inclined to agree. Is there a problem then that every type is in Data? Not at all. The Data class indicates that forcing has been used in the body of an expression. This is valuable information that is exposed in the type. John.
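In GHC as it stands today, function spaces are lifted and seq does apply to functions, so the view of a lifted function as "a computation which delivers a function" can be observed directly (a small sketch; the names are invented):

```haskell
-- A function value whose outer "computation" diverges: forcing it with
-- seq raises the error, even though the function is never applied.
bottomFun :: Int -> Int
bottomFun = error "not yet a function"

-- seq accepts function arguments: this forces f to weak head normal
-- form before returning g.
forceFun :: (Int -> Int) -> (Int -> Int) -> (Int -> Int)
forceFun f g = f `seq` g
```

`forceFun (+1) (*2) 10` yields 20, while `forceFun bottomFun (*2) 10` raises the error; the distinction only makes sense because the function space is lifted.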
Haskell 1.3 Prelude changes
Changes to the Haskell 1.3 Prelude

The following changes have been proposed (or accepted) for Haskell 1.3:

* Reorganize the Ord class
* Add succ and diff to Enum
* Add new class "Bounded"
* Add strictness annotations to Complex and Ratio
* Use Int in take, drop and splitAt
* Add replicate, lookup, curry and uncurry
* Move functions into libraries
* Non-overloaded versions of PreludeList functions
* Numeric issues
* Simplify lex
* Add undefined
* Monad class

Changes to Ord

In Haskell 1.2, two comparisons are required to do a "three way branch":

   if x == y then ... else if x < y then ... else ...

Even a standard two way branch can be inefficient - here's the default definition of "<" in the standard prelude:

   x < y = x <= y && x /= y

Instead of defining a <= operator which returns just two values, it is almost as easy to define an operator which returns three different values:

   case compare x y of
     EQ -> ...
     LT -> ...
     GT -> ...

The constructors EQ, LT, and GT belong to a new type: Ordering. In addition to this efficiency problem, many uses of Ord such as sorting or operations on ordered binary trees assume total ordering. The compare operation formalizes this concept: it cannot return a value which indicates that its arguments are unordered. Programmers are free to define a class for partial orderings; here, we simply state that Ord is reserved for total orderings.

Proposed changes:

* Add a new type:

   data Ordering = LT | EQ | GT
     deriving (Eq, Ord, Ix, Enum, Bounded, Text)

* Delete the comment in the definition of class Ord which explains how to define min and max for both total and partial orders.
* Change the definition of Ord to:

   class Ord a where
     compare              :: a -> a -> Ordering
     (<), (<=), (>=), (>) :: a -> a -> Bool
     max, min             :: a -> a -> a

     -- circular default definition:
     -- either <= or compare must be explicitly provided
     x < y   = compare x y == LT
     x <= y  = compare x y /= GT
     x > y   = compare x y == GT
     x >= y  = compare x y /= LT

     compare x y | x == y    = EQ
                 | x <= y    = LT
                 | otherwise = GT

     max x y = case compare x y of LT -> y; _ -> x
     min x y = case compare x y of LT -> x; _ -> y

* Change the definitions of Ord instances in PreludeCore. At present, Ord instances define the "<=" method. These should be deleted and replaced by definitions of the "compare" method.

* Add this sentence to Appendix E: "The operator compare is defined so as to compare its arguments lexicographically (with earlier constructors in the datatype declaration counting as smaller than later ones), returning LT, EQ and GT (respectively) as the first argument is strictly less than, equal to, and strictly greater than the second argument (respectively)."

The methods >, >=, <, <= could be removed from Ord and turned into ordinary overloaded functions. For efficiency, these could be specialized; the GHC specialize pragma allows an explicit definition of a function at a particular overloading:

   Specialize (<=) :: Int -> Int -> Bool = primLeInt

Add succ and diff to Enum

Haskell 1.2 provides very limited facilities for operating on enumerations. The following elementary operations must be implemented in an obscure and inefficient manner, if at all:

* Get the next value in the enumeration: (\ x -> head [x..])
* Get the previous value in the enumeration: no reasonable way
* Get the n'th value in the enumeration: [C0..] !! (n - 1) (where C0 is first in the enumeration)
* Find where a value occurs in an enumeration: lookup (zip [C0..] [0..]) x

Proposed changes:

* Add two new methods to Enum:

   succ :: Int -> a -> a
   diff :: a -> a -> Int

Informally, given an enumeration:

   data T = C0 | C1 | ...
Cm, we have:

   diff Ci Cj = i - j
   succ x Ci | 0 <= i+x && i+x <= m = C(i+x)

For example, given the datatype and function:

   data Colour = Red | Orange | Yellow | Green | Blue | Indigo | Violet

   toColour :: Int -> Colour
   toColour i = succ i Red

we would have:

   toColour 0 = Red
   toColour 1 = Orange
   ...
   toColour 6 = Violet

* Change the definitions of Enum instances:

   instance Enum Char where
     succ         = primCharSucc
     diff         = primCharDiff
     enumFrom     = boundedEnumFrom maxChar
     enumFromThen = boundedEnumFromThen minChar maxChar

   boundedEnumFrom hi x | x

* Change the description of derived instances of Ix for enumerations and Enum: Given the enumeration:

   data c =
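Both proposals above can be tried out against today's Prelude (a sketch: the primed names avoid clashing with the standard succ, the Suit type is invented, and succ'/diff' are approximated via fromEnum/toEnum rather than the primitives named in the proposal):

```haskell
data Suit = Clubs | Diamonds | Hearts | Spades
  deriving (Eq, Show, Enum, Bounded)

-- An Ord instance given by defining compare alone; the comparison
-- operators and max/min then follow from the class defaults, as the
-- proposal intends.
instance Ord Suit where
  compare x y = compare (fromEnum x) (fromEnum y)

-- The proposed Enum extensions, sketched with fromEnum/toEnum.
succ' :: Enum a => Int -> a -> a
succ' n x = toEnum (fromEnum x + n)

diff' :: Enum a => a -> a -> Int
diff' x y = fromEnum x - fromEnum y
```

Here compare Clubs Spades is LT, succ' 2 Clubs is Hearts, and diff' Spades Clubs is 3.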
Changes in Haskell 1.3
Introducing Haskell 1.3

This new version of the Haskell Report adds many new features to the Haskell language. In the five years since Haskell has been available to the functional programming community, Haskell programmers have requested a number of new language features. Most of these features have been implemented and tested in the various Haskell systems, and we are confident that all of these additions to Haskell address a real need on the part of the community. This revision of the Haskell report is much more substantial than previous ones: many significant additions are being made. We have also streamlined some aspects of Haskell, eliminating features which have been little used and which complicate the language.

The final version of the Haskell 1.3 report is expected to be complete in October, 1995. A preliminary version of the report will be available soon. All significant changes to the Haskell language, as well as their motivation, are described here. We are still open to comments and suggestions; please send mail to [EMAIL PROTECTED] regarding Haskell 1.3. I will be happy to answer any questions or forward mail to either the Haskell mailing list or the 1.3 committee, as appropriate. Information about the design of Haskell 1.3 and other proposed extensions to Haskell is available on the web at

   http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

There will be some minor incompatibilities with Haskell 1.2. These should not be serious, and implementors are encouraged to provide a Haskell 1.2 compatibility mode.

Overview

Haskell 1.3 introduces the following major features:

* Standardized libraries and a reduced prelude
* Constructor classes (as in Gofer)
* Monadic I/O
* Strictness annotations in type definitions
* Simple records
* A newtype mechanism
* Special monad syntax (`do')
* Qualified names
* All names are now redefinable
* The character set has been expanded to ISO-8859-1

Many other smaller changes to Haskell 1.2 have also been made.
A complete description of new, changed, and eliminated features follows.

Prelude Changes

Haskell 1.3 will make a number of minor changes to the standard prelude. Many prelude functions will be moved to libraries, reducing the size of the Haskell core language. These changes will be described separately.

Standard Libraries

As Haskell has grown, many informal libraries of useful functions have been created. In Haskell 1.3, we have decided to standardize a set of libraries to accompany the core language. Some of the functions formerly in the prelude are now in libraries, decreasing the size of the core language and leaving more names in the default namespace free for the user. We are dividing the Haskell report into two separate documents: a language report and a library report. The prelude, now a little smaller, will be described in the language report. The library report will continue to evolve after the 1.3 language report is complete. We have moved much of the I/O, complex and rational arithmetic, many lesser-used list functions, and arrays to the libraries, and have also developed a number of completely new libraries. An initial Haskell library report will be available at the same time as the 1.3 language report.

Constructor Classes

We have observed that many programmers use Gofer instead of Haskell in order to use Gofer's constructor classes. Since constructor classes are well understood, widely used, and easily implemented, we have added these to Haskell. Briefly, constructor classes remove the restriction that types be `first order'. That is, `T a' is a valid Haskell type, but `t a' is not, since `t' is a type variable. Constructor classes increase the power of the class system.
For example, this class definition uses constructor classes:

   class Monad m where
     (>>=)  :: m a -> (a -> m b) -> m b
     return :: a -> m a

Here, the type variable `m' must be instantiated to a polymorphic data type, as in

   instance Monad [] where
     f >>= g  = concat (map g f)
     return x = [x]

No changes to the expression language are necessary; constructor classes are an extension of the type language only. Constructor classes require an extra level of type information called `kinds'. Before type inference, the compiler must perform kind inference to compute a kinding for each type constructor. Kinds are much simpler than types and are not ordinarily noticed by the programmer. The changes to Haskell required to support constructor classes are:

* The syntax of types includes type application.
* Built-in types have names: [] for lists, (->) for arrow, and (,) for tuples. Using type application, the type `(,) a b' is identical to `(a,b)'.
* Type constructors (but not type synonyms) can be partially applied.
* Type variabl
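To illustrate the generality, the same class can be instantiated at another unary type constructor (a sketch using primed names so it can stand alone beside today's Prelude):

```haskell
-- The Monad class from the text, with primed names to avoid clashes.
class Monad' m where
  (>>==)  :: m a -> (a -> m b) -> m b
  return' :: a -> m a

instance Monad' [] where
  f >>== g  = concat (map g f)
  return' x = [x]

-- Maybe is another type constructor of the right kind (* -> *):
instance Monad' Maybe where
  Nothing >>== _ = Nothing
  Just x  >>== g = g x
  return' x      = Just x
```

For example, `Just 3 >>== (return' . (+1))` yields Just 4; kind inference assigns both [] and Maybe the kind * -> *, so both are legal instantiations of `m'.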
Haskell 1.3
Presenting Haskell 1.3

Haskell is a general-purpose, purely functional programming language incorporating many recent innovations in programming language research, including higher-order functions, non-strict semantics, static polymorphic typing, user-defined algebraic datatypes, pattern-matching, list comprehensions, a module system, and a rich set of primitive datatypes, including lists, arrays, arbitrary- and fixed-precision integers, and floating-point numbers.

Version 1.3 of the Haskell Report is nearly complete. The text of the new report is not quite finished, but the Haskell Committee is ready to unveil the proposed changes to Haskell. Most issues have been decided, although a few points are still being debated. We are open to comments on the new report and welcome any input from the functional programming community. Haskell 1.3 contains a number of significant changes to the language which we would like to expose for discussion. We have constructed a web page describing the changes proposed for Haskell 1.3. The URL is

   http://www.cs.yale.edu/HTML/YALE/CS/haskell/haskell13.html

For those unable to access this web page, a plaintext version of the same page will also be posted. Please ignore those messages if you can access the web pages (which will be updated as we progress). We hope to have the new report completed in October. John Peterson, [EMAIL PROTECTED], Yale Haskell Project
Haskell 1.3 Draft Report
A draft of the Haskell 1.3 report is available by FTP from ftp.dcs.glasgow.ac.uk [130.209.240.50] in

   pub/haskell/report/draft-report-1.3.dvi.gz    [Report]
   pub/haskell/report/draft-libraries-1.3.dvi.gz [Libraries]

Highlights include:

   Monadic I/O
   A split into prelude and libraries, with qualified names
   Strict data types
   Some minor syntactic revisions

We are planning to revise this and release it in time for FPCA '95. There will definitely be additional prelude and library changes, including several new libraries. Feedback is welcome and will be taken into account when revising the report, but please remember that we will be very busy over the next few weeks (I am also away for the next two weeks!). Please mail typos and minor notes on syntax etc. to me; substantive comments should be sent to [EMAIL PROTECTED]. Regards, Kevin
Re: Haskell 1.3 Draft Report
Hi. For the TeX-impaired, is there any chance of sticking PostScript files on an ftp site? Thanks! -- Dave

> A draft of the Haskell 1.3 report is available by FTP from
> ftp.dcs.glasgow.ac.uk [130.209.240.50] in
>
>   pub/haskell/report/draft-report-1.3.dvi.gz    [Report]
>   pub/haskell/report/draft-libraries-1.3.dvi.gz [Libraries]
> [...]
> Regards,
> Kevin

-- Dave Bakin                                        510-922-5678
How much work would a work flow flow if a work flow could flow work?
Prelude and Library Issues in Haskell 1.3
Currently, the Haskell language does not mention any libraries or facilities for using them. The standard prelude is meant to serve as a library, but it lacks many important features. All Haskell implementations have begun to haphazardly include various libraries. However, these libraries have not yet been standardized across the different implementations and cannot always be used in a portable manner. We have produced a document which discusses some of the issues involved in designing a standard Haskell library and describes what we think the library should look like. We welcome any comments or suggestions the Haskell community cares to make. The document is available in PostScript format by anonymous ftp:

   /pub/haskell/yale/libs.ps on haskell.cs.yale.edu

and over the web:

   http://www.cs.yale.edu/HTML/YALE/CS/HyPlans/reid-alastair/libs/libs.html

Alastair Reid and John Peterson, Yale Haskell Project
Re: New Haskell 1.3 I/O Definition
Kevin Hammond writes: "We have attempted ... to consider portability issues very carefully." But we may have missed something. For example, I don't think anyone has actually *seen* a "Win32 Programmer's Reference Manual" -- i.e., the programming interface for most of the world's computers :-( -- and something may have been overlooked. If you are an "expert" about some particular system, *please* give this I/O proposal a good reading! Does the proposal make sense for the system in question? Could it be sort-of-plausibly implemented? Your feedback will really help. Haskell is not just for Unix boxes! I can say this because I am as Unix-centric as they come :-) Will Disclaimer -- not taking credit for others' efforts: I did none of the Real Work on this I/O proposal.
New Haskell 1.3 I/O Definition
The revised monadic I/O definition is now available for comment at http://www.dcs.gla.ac.uk/~kh/Haskell1.3/IO.html You should access this using Mosaic or another WWW browser. There is no PostScript version yet. We have tried to address all comments which were sent to us, and have made significant changes to the previous design, but we haven't followed every single suggestion (one goal is a reasonably compact and simple design!). We have attempted to bomb-proof this version as far as possible, and to consider portability issues very carefully. The design should give most programmers most of the basic I/O functionality they need. The definition has been implemented in the Glasgow Haskell compiler, and the new version should be released before the end of the year. Unless major problems are identified, we intend this to be the final definition of I/O for Haskell 1.3. Note that some references are still missing. These will be supplied over the next few days. To save wasted bandwidth, please check the design rationale to see if your question has been answered before mailing haskell or haskell1.3! Regards, Kevin and Andy [on behalf of the other authors and members of the Haskell 1.3 committee]
Re: Request for comments on the Haskell 1.3 I/O Proposal
Some quick comments on the Haskell 1.3 proposal.

(1) In the design of Haskell 1.0, the type IOError was a bit of a guess. It wasn't clear whether it defined too many or too few error classes; it might even have been better just to replace IOError by the type String. By now, we have more experience, and I think the Haskell 1.3 designers should reconsider the design of IOError in the light of this experience.

(2) For similar reasons, it's not clear to me whether the various enumerated types connected with file handling, or the lack of an enumerated type for interrupts, are good choices. It would be helpful to have some guiding principle, e.g., `make available the facilities provided by Unix'. If, say, that's the principle we go for, then channels and files should not be distinguished.

(3) It is clear why the definition of IO in terms of IOPrim is in the document, but I am afraid that it may also be confusing, since of course IO is simply a primitive type and need not be implemented in terms of IOPrim. I think I'd rather see a very dry description, with the definition of IO in terms of IOPrim relegated to a separate section on motivation.

Cheers, -- P
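For comparison, the treatment that later Haskell systems settled on keeps IOError abstract and classifies errors through predicates rather than a fixed enumeration; a sketch using today's System.IO.Error (the file path and function name are invented):

```haskell
import System.IO.Error (catchIOError, isDoesNotExistError)

-- Attempt to read a file, distinguishing one error class by predicate
-- rather than by pattern matching on an enumerated IOError.
readOrDefault :: FilePath -> String -> IO String
readOrDefault path def =
  readFile path `catchIOError` \e ->
    if isDoesNotExistError e
      then return def          -- the recoverable case we care about
      else ioError e           -- re-raise anything else
```

Keeping the type abstract sidesteps the "too many or too few error classes" problem: new classes can be added as predicates without breaking existing code.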
Request for comments on the Haskell 1.3 I/O Proposal
Haskell users, As part of the effort to produce version 1.3 of the Haskell report, we (a group of Haskell users and implementors) have drafted a proposal for a portable form of monadic I/O in Haskell. The current version is available at URL

   http://www.cl.cam.ac.uk/users/adg/io.html

and via FTP (with a few extra typos!) at

   ftp://ftp.cl.cam.ac.uk/pub/adg-haskell-io-940718.ps.gz

We would be very grateful for any comments you have on the proposal. We intend to finalise the I/O mechanism this autumn. Please send any comments you have on the proposal as soon as possible, but in any case by OCTOBER 16, to [EMAIL PROTECTED]. If you send a comment we will log it at URL

   http://www.cl.cam.ac.uk/users/adg/IO-Comments/io-comments.html

Thanks for your help, Andy Gordon (for Haskell 1.3 Committee)

Phone: (+44) 223 334411    University of Cambridge Computer Laboratory,
Fax:   (+44) 223 334679    New Museums Site, CAMBRIDGE CB2 3QG, UK.
Email: [EMAIL PROTECTED]
Web:   http://www.cl.cam.ac.uk/users/adg/
Re: Status of Haskell 1.3 and pH ?
Recently David Goblirsch asked the following on the Haskell mailing list:

> What is the status of the Haskell 1.3 and pH efforts announced in this
> group several months ago? Are there separate mailing lists for these
> efforts?

pH development has been continuing both at MIT (in my group) and at Cambridge Research Lab, DEC. We are collaborating with Lennart Augustsson, and Nikhil at DEC is collaborating with Simon Peyton Jones. I will briefly describe the pH effort at MIT. There are two Id compiler substrates at MIT. One is written in Lisp and the other in Id. The Lisp version has been the main workhorse for years, while the Id-in-Id compiler is still in development. The targets of both these compilers were various dataflow machines, but now considerable effort is being spent to retarget them to Unix workstations and commercial parallel machines. The pH compiler will be based on the Lisp substrate. There are several components in this development work:

1. A translator from Id to pH. This work is almost complete for the functional subset of Id. We have been running functional programs translated from Id to pH on various Haskell compilers. We plan to release some of these Haskell programs in the near future.

2. Extensions to the Haskell front end to accept the full pH syntax, and to the Haskell type checker to accept imperative types in pH. Lennart is doing most of this work.

3. New desugaring modules for comprehensions, to take full advantage of I-structures and parallelism.

4. Connecting the Haskell/pH front end to the middle part of the Id compiler written in Lisp. No significant changes are expected in the middle part or the code generator of the Id compiler to tackle pH.

Work is continuing on 2, 3 and 4. So far we have the code to handle the functional part of pH almost ready, and are able to compile some small example programs, although a lot more engineering effort is still expected. If all goes well we may have pH running by the summer of 1994.
So far only a few technical problems have been encountered, all of which have to do with arrays and the Haskell prelude. Please feel free to contact me for further information.

Arvind
Re: Status of Haskell 1.3 and pH ?
David Goblirsch writes:

> What is the status of the Haskell 1.3 and pH efforts announced in this
> group several months ago?

We're considering a concrete proposal for Haskell 1.3 I/O at the moment. We plan to make this public once the committee has had time to comment. I haven't seen anything from the pH camp recently, but this could be due to poor communications!

> Are there separate mailing lists for these
> efforts?

Yes, but the Haskell 1.3 list is really for "internal" communication between committee members. If you send email to me then I'll make sure it's forwarded, or if you post to the Haskell list then everyone (committee and others) will have a chance to comment. There is a public pH mailing list. Send mail to [EMAIL PROTECTED] if you'd like to join it.

Regards,
Kevin
Status of Haskell 1.3 and pH ?
What is the status of the Haskell 1.3 and pH efforts announced in this group several months ago? Are there separate mailing lists for these efforts? david goblirsch
Re: Haskell 1.3
Ian Holyer writes:

> To go back to the debate on instances, here is a concrete proposal for
> handling instances in Haskell 1.3:

I can see what you're doing, but I dislike the idea of no longer being able to define instances local to a module. This limits my choice of class and type names, and may cause problems when importing libraries defined by other users. For global (exported) instances your rules make sense (a variant of these was considered at one point), with the caveats marked below.

> 1) A C-T instance can be defined in any module in which C and T are
>    in scope.

Fine, in conjunction with 5 and 2 or similar constraints.

> 2) A C-T instance defined in module M is in scope in every module which
>    imports from M, directly or indirectly. (If C or T are not in scope, a
>    module just passes the instance on in its interface).

You need to ignore local C-T instances (i.e. those where a class C or type T is defined locally and not exported), otherwise mayhem could result. Local instances will now also cause problems if there is a global C-T instance defined in any importing module. The interface is problematic if a new class with local name C or a type with local name T is defined (or both!), especially if there is a (local) C-T instance. Getting round this would involve being much more explicit about global names in interface files (e.g. an M1.C-M2.T instance). There is also potential name capture of type, class, or operator names by the importing module, which would require additional checking of interface imports (something we would like to avoid for efficiency reasons).

> 3) A C-T instance may be imported more than once via different routes,
>    provided that the module of origin is the same.

This implies annotating instances with their module of origin, as you note below.

> 4) If an application of an overloaded function is resolved locally, the
>    relevant instance must be in scope.

...a relevant instance must be in scope...
> 5) There must be at most one C-T instance defined in the collection of
>    modules which make up any one program (global resolution occurs in Main).

There should be at most one *global* C-T instance defined (otherwise you lose the ability to create local types with instances). You also shouldn't specify where resolution takes place: link-time resolution is much faster.

> I would like to see the origin of instances in interface files. My preference
> from an implementers point of view would be something like:
>
>     interface M1 where                     interface M3 where
>     import M2 (C(..))              or      import M2 (C(..))
>     import M3 (T(..),fT)                   type T = ...
>     instance C T where f = fT              instance C T where f = fT
>
> The name fT is invented while compiling M3 and passed around in interface
> files, but not exported from them into implementation modules. As well as
> specifying the origin of the instance, it gives the code generator something
> to link to.

This really isn't a problem for an implementation. We can always link to a hidden name derived from the unique C-T combination. Introducing magic names in an interface sounds like a *very bad* idea -- you might well accidentally capture a user- or Prelude-defined name. For example,

    class From a where
      from :: Int -> [a] -> a

    instance From Int where
      from = ...

introduces fromInt in the interface, which will clash with the Prelude name. Something like

    interface M1 where
    import M2 (C(..))
    import M3 (T(..))
    import M4 (instance M2.C M3.T)

is probably closer to what's required.

Regards,
Kevin
Haskell 1.3
To go back to the debate on instances, here is a concrete proposal for handling instances in Haskell 1.3:

1) A C-T instance can be defined in any module in which C and T are in scope.

2) A C-T instance defined in module M is in scope in every module which imports from M, directly or indirectly. (If C or T are not in scope, a module just passes the instance on in its interface.)

3) A C-T instance may be imported more than once via different routes, provided that the module of origin is the same.

4) If an application of an overloaded function is resolved locally, the relevant instance must be in scope.

5) There must be at most one C-T instance defined in the collection of modules which make up any one program (global resolution occurs in Main).

This retains the Global Instance Property, except that instead of instances having universal significance over all programs, they have global significance over any one program. An instance can be redefined by replacing one module by another. There is still no easy way to redefine Prelude instances, except perhaps by selective inclusion of individual Prelude modules, but you can define instances for Prelude types which don't already have them.

These rules for overloaded functions and instances seem to me to be closely analogous to those for polymorphic functions and types. If an application of a polymorphic function is resolved locally, the relevant type must be in scope. If globally, it may be applied to any type defined in the program as a whole.

There is a small related issue. It is highly desirable (whether the instance rules are changed or not) to be able to tell from an interface file which original module an instance was defined in. Current rules and practice seem not to allow for this. You see interfaces such as:

    interface M1 where                     interface M3 where
    import M2 (C(..))              or      import M2 (C(..))
    import M3 (T(..))                      data T = ...
    instance C T                           instance C T
    ...                                    ...

In either case, you can't tell whether the instance is defined in M2 or M3.
I would like to see the origin of instances in interface files. My preference from an implementers point of view would be something like:

    interface M1 where                     interface M3 where
    import M2 (C(..))              or      import M2 (C(..))
    import M3 (T(..),fT)                   type T = ...
    instance C T where f = fT              instance C T where f = fT

The name fT is invented while compiling M3 and passed around in interface files, but not exported from them into implementation modules. As well as specifying the origin of the instance, it gives the code generator something to link to. The report already allows types to be imported into an interface but not exported into the implementation module (5.3.2, para 4), so it is not much of a stretch to do the same for functions.

By the way, there is another situation where interfaces seem insufficient. There is no official ruling on whether the original names of constructors or class methods should be specified in an interface. For example, given:

    module M1 where
    data T = A | B
    ...

    module M2 (M1..) where
    import M1 (T(..), ...) renaming (B to C)

the following interface for M2 is possible:

    interface M2 where
    import M1 (T(..), ...) renaming (B to C)
    data T = A | C

The constructor has been renamed, but you can't tell that from the interface. (Some compilers even leave the renaming clause out.) It is true that a compiler can manage on this information alone, but it is poor documentation of what has happened. I think that the standard should insist that renaming be explicit in interfaces. This can be done with current syntax, e.g.:

    interface M2 where
    import M1 (T(A,B), ...) renaming (B to C)
    data T = A | C

However, it would be safe to allow, in interface files only, an import to specify a subset of the constructors (at least those that are renamed). The full set appears later anyway. All this applies to class methods as well.

Ian    [EMAIL PROTECTED], Tel: 0272 303334
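The C-T instance question debated above can be made concrete with a small sketch. Under the proposal, the class would live in one module ("M2"), the type in another ("M3"), and the instance could be defined in any module importing both; everything is collapsed into a single module here so the sketch compiles, and all names (Pretty, Point) are hypothetical, not from the original messages:

```haskell
-- A "C-T instance" in miniature: class Pretty plays the role of C,
-- type Point plays the role of T. The interface-file question is how
-- an importer learns which module such an instance originated in.

class Pretty a where
  pretty :: a -> String

data Point = Point Int Int

instance Pretty Point where
  pretty (Point x y) = "(" ++ show x ++ "," ++ show y ++ ")"
```

With separate modules, this is exactly the "orphan instance" situation that later Haskell implementations still warn about.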
Re: Haskell 1.3 [instances]
Ian Holyer writes:

> The current restriction that instances must be defined either in the
> class module or the type module is painful.

LISTEN TO THIS MAN! Trying to use the module system in (what we imagined to be) a sensible way on the Glasgow Haskell compiler [which is written in Haskell] has been a nightmare. Take a pile of mutually-dependent modules, add the "instance virus" [instances go with the class or type, and you can't stop them...], and you have semi-chaos. All attempts to have export/import lists that "show what's going on" have been undermined by having to add piles of cruft to keep instancery happy.

I would go for either of the following not-thought-through choices:

* Instances travel with the *type*, not the class. 99% of the time, this is what we want. If your instance isn't going, add an explicit export of the type constructor. Possibly have a special case for instances of user-defined classes for Prelude types...

* Make it so that imported instances whose class/type is out of scope may be silently ignored (i.e., an exception to the closure rule). For example, if I write "import Foo" and Foo's interface includes "instance Wibble Wobble" and none of my imports happen to bring "Wibble" (or "Wobble") into scope, then a compiler may drop this instance silently. It is not an error. (Of course, if you try to *use* such an instance, you will get an error downstream.)

Of course, something that involves new syntax/extra machinery would also be fine.

Will

PS: Get rid of "default" declarations, too. No-one uses them. (Hi, Kevin!)
Haskell 1.3
Here is another suggestion for Haskell 1.3.

The current restriction that instances must be defined either in the class module or the type module is painful. If a module defining an abstract type contains a class definition, it may be impossible to define an instance in the module defining the type (e.g., it may be pre-defined in the prelude), and to put it in the module defining the class would be breaking into the abstraction (the module may not be mine, and I may not have source access to it).

If the only reason for the restriction is that instances don't have names to control their import/export, I suggest dropping the restriction and allowing one or both of the following forms for controlling export of instances:

    module M (... (==) ...) where
    instance Eq T where ...

    module M (... Eq(..) ...) where
    instance Eq T where ...

The first means "export all the instances of (==) defined in this module" and the second means "export all the instances of the Eq methods defined in this module" (allowed even though the module does not define the Eq class, but merely extends it). This doesn't allow separate instances to be distinguished, but I can live with that; I don't want this to get heavy.

There would be an incompatibility with Haskell 1.2: if there is an explicit export list, and the list does not mention a method/class, then instances of that method/class are not exported.

Incidentally, I think the class and module systems both have some nasty problems (e.g. Warren Burton's recent comments) and that both need a more thorough redesign for Haskell 2.0.

Ian    [EMAIL PROTECTED], Tel: 0272 303334
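As a small runnable illustration of which side of the restriction bites (class and method names here are hypothetical): a module that defines its own class may freely give instances for Prelude types, because the class is local. The painful case Ian describes is when *neither* the class nor the type is defined in the module at hand:

```haskell
-- Legal under the class-module-or-type-module rule: the class Size is
-- defined here, so instances for pre-existing types Bool and [a] are
-- allowed. An instance where both class and type come from elsewhere
-- is what the rule forbids.
class Size a where
  size :: a -> Int

instance Size Bool where
  size _ = 1

instance Size [a] where
  size = length
```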
Wishlist for Haskell 1.3
I would like to put two rather prosaic things into Haskell 1.3. They almost fall into the "syntactic sugar" class, but they would make my life easier.

The first is that I would like to see arrays be a class instead of whatever they are. I wanted to construct a subclass of arrays that were constrained to have lower bounds equal to one, but after fooling around for some time I just gave up. Maybe it's easy, and I just don't know the right way to hold my mouth. I would also like to be able to construct a subclass of one-dimensional array that is a vector, and a subclass of two-dimensional array that is a matrix, and overload "*" to mean "inner product".

The second thing I would like is an array section notation. In many operations of linear algebra, one needs to view a matrix sometimes as an array of row vectors, and sometimes as an array of column vectors. This arose in development of a function that implements Crout's method to factor a matrix. (Crout's method is especially attractive for functional languages because each element of the factor is written exactly once. That is not the case with Gauss-like methods.) I ended up writing three functions: one that computes the inner product of two vectors, another that computes the inner product of a row and column of a single matrix, and another that computes the inner product of a row of one matrix with a column of another. Others would need functions that compute the inner product of a vector with a row or column of a matrix. It would be easier to write one function that computes the inner product of two vectors, and create vectors out of pieces of a matrix by using a section notation.
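The single shared inner product asked for above can be sketched over Haskell arrays (using the Data.Array module where arrays live in later Haskell standards; the names dot and row are hypothetical, not from the message):

```haskell
import Data.Array

-- Inner product of two one-dimensional arrays with matching bounds:
-- the one function the author would like to write once.
dot :: Num a => Array Int a -> Array Int a -> a
dot u v = sum [u ! i * v ! i | i <- range (bounds u)]

-- Extract part of row i of a matrix as a vector. This is the kind of
-- helper a section notation like m!(i,lo..hi) would make unnecessary.
row :: Array (Int, Int) a -> Int -> (Int, Int) -> Array Int a
row m i (lo, hi) = listArray (lo, hi) [m ! (i, j) | j <- [lo .. hi]]
```

A column selector would be symmetric; with sections, dot alone would cover all five variants the author lists.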
For example, I might write a Crout reduction with no pivoting as:

    lu = array b ([(i,1) := a!(i,1) | i <- [1..m]] ++
                  [(1,j) := a!(1,j)/a!(1,1) | j <- [2..n]] ++
                  [(i,j) := (a!(i,j) - dot lu!(i,1..j-1) lu!(1..j-1,j))
                   | i <- [2..m], j <- [2..i]] ++
                  [(i,j) := (a!(i,j) - dot lu!(i,1..i-1) lu!(1..i-1,j)) / lu!(i,i)
                   | i <- [2..m], j <- [i+1..m]])
         where b = bounds a
               ((_,_),(m,n)) = b

BTW, I have developed a Crout reduction that uses pivoting, but I _think_ it's hitting something that's a little too strict -- the run-time system insists there's a black hole, but if I run the code "by hand" I'm always able to find an order such that data are available -- there aren't any circular dependencies on un-computed data. Maybe somebody can tell me where I've gone wrong, or recommend a change in Haskell 1.3 to cope with the problem if it's real. The Crout reduction with pivoting follows. If anybody wants to try it through the compiler, you'll need a test harness, which I'll be happy to send, but I don't think I ought to waste net bandwidth to post it.

It's also unfortunate that array bounds were defined to be ((array of low bounds),(array of high bounds)) [I know the arrays are really tuples] instead of an array of tuples (low,high). The latter could be used with the inRange function from the Prelude, while the former cannot. But it'd probably be _really_ hard on a lot of people to change this now.

Best regards,
Van Snyder

Crout reduction with pivoting:

    module Croutp (croutp) where
    -- Crout method for LU factorization

    import Linalg (ipamaxc,rcdot2)

    -- ipamaxc a j p m n returns the index x of the element of p(m..n) such that
    --   a!(p!(x),j) is the element of column j of a having the largest
    --   absolute value.
    -- rcdot2 a b i j m n computes the inner product of a(i,m..n) and b(m..n,j)

    -- croutp takes matrix a and returns l and u factors in one matrix lu.
    -- Performs pivoting.
    -- Calculates values of lu from values of a and lu.
    croutp :: (RealFrac v) => Array (Int,Int) v
           -> (Array (Int,Int) v, Array Int (Array Int Int),
               Array Int Int, Array Int Int)
    croutp a =
        if k==1 && l==1 && m<=n
          then (lu,p,mx,mk)
          else error "crout: lower bounds not 1 or #rows > #columns"
      where
        b = bounds a
        ((k,l),(m,n)) = b

        --t :: (RealFrac v) => Array (Int,Int) v
        t = array ((1,1),(m,m))
                  ([let k = p!1!i in (k,1) := a!(k,1) | i <- [1..m]] ++
                   [let k = p!s!i in (k,s) := a!(k,s) - rcdot2 t lu k s 1 (s-1)
                    | s <- [2..m], i <- [s..m]])

        --p :: Array Int (Array Int Int)
        p = array (1,m)
                  ([1 := array (1,m) [i := i | i <- [1..m]]] ++
                   [s := let u = s-1
                             k = mk!u
                         in if u == k then p!u
                            else p!u // [u := mx!u, k := (p!u)!u]
                    | s <- [2..m]])

        --mk :: Array Int Int
        -- With the first definition of mk active, run-time insists there's
        -- a black hole. With the second, things work, but the function does
        -- no pivoting.
        mk = array (1,m) [s := ipamaxc t s (p!s) s m | s <- [1..m]]
        --mk = array (1,m) [s := s | s <- [1..m]]
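The "black hole" symptom described above is what an implementation reports when some array entry's value depends, directly or through the pivot permutation, on itself while being computed. Lazy arrays do support entry-wise self-reference as long as the dependency graph is acyclic, as this minimal sketch shows (modern Data.Array syntax; this is an illustration of the mechanism, not the author's code):

```haskell
import Data.Array

-- A self-referential lazy array: each entry may read earlier entries
-- of the array being defined. This is well-defined because the
-- dependencies are acyclic; a genuine cycle (an entry whose value
-- requires itself) is what a "black hole" error reports.
fibs :: Int -> Array Int Integer
fibs n = a
  where
    a = listArray (0, n) [f i | i <- [0 .. n]]
    f 0 = 0
    f 1 = 1
    f i = a ! (i - 1) + a ! (i - 2)
```

In the croutp code the suspect cycle runs through t, p and mk, which are mutually recursive: a black hole appears if evaluating mk!u ever demands an entry of t or p that in turn demands mk!u itself.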
Re: Haskell 1.3 trivia
> How about removing the `where' from `module...where' and
> `interface...where' ...
>
> The reason we used the "module...where" convention is to allow for
> multiple modules to be included in one "file". Your proposal is
> workable, but requires saying something extra about what terminates
> a module. I agree, however, that having to write "where" all the
> time is a pain (I still forget to put it in sometimes!), so perhaps
> you could complete the proposal with the wording required to say when
> a module ends.

I suggest:

*) Remove the paragraph about top-level indenting from 1.5.

*) Change 5.2:

    script  -> module1 module2 ...                                    (n>=1)
    module  -> { [header ; body] [;] } | { header[;] } | { body[;] }
    header  -> { [moddecl ; impfix] [;] } | { moddecl[;] } | { impfix[;] }
    impfix  -> { [impdecls ; fixdecls] [;] } | { impdecls[;] } | { fixdecls[;] }
    moddecl -> `module' modid [exports]
    body    -> topdecls

   NB The form { ... [[fixdecls ;] topdecls [;]] } in the current syntax is inconsistent in disallowing {;}, which other blocks in the syntax allow.

*) Change the text of 5.2 to correspond, and in particular change the second paragraph to:

    If the first lexeme in a module is not a {, then the layout rule applies
    for the top level of the module. Several modules may appear in one
    script. Each module ends when the `module' keyword of the next is
    encountered. An abbreviated form of module is permitted, which omits
    the moddecl. If this is used, the moddecl is assumed to be
    `module Main'. An abbreviated module may not appear in the same script
    as some unabbreviated modules.

   NB The first paragraph of 5.2 *already* uses the term body for the topdecls alone, in contradiction to the current syntax.

*) Do the same for interfaces in 5.3 (we don't want modules and interfaces in the same file, do we?) and change B.4, B.5, B.6 to match.

Ian
Haskell 1.3 trivia
A couple of minuscule suggestions for Haskell 1.3:

How about removing the `where' from `module...where' and `interface...where', so that these become ordinary topdecls like the rest? This would mean that the convention about topdecls not having to be indented would no longer be an ugly exception, it would be more consistent with implicit main programs which have no introductory `where', and it would be more consistent with the fact that the natural break between the header and body of a module comes after the fixity declarations.

In PreludeText, why not rename readDec as readInt (so that it matches showInt), and rename the current readInt as readRadix or something?

Ian    [EMAIL PROTECTED], Tel: 0272 303334
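For readers without the Prelude to hand, the readDec under discussion parses a decimal prefix in the usual ReadS style, returning the value together with the unconsumed input. A small sketch of its use (readDec now lives in the Numeric module of later Haskell standards; parsePort is a hypothetical name):

```haskell
import Numeric (readDec)

-- readDec reads the longest decimal-numeral prefix of the input,
-- returning [(value, rest)] on success and [] on failure.
parsePort :: String -> Maybe (Int, String)
parsePort s = case readDec s of
  [(n, rest)] -> Just (n, rest)
  _           -> Nothing
```

The proposed renaming would not change this behaviour, only make the name line up with showInt.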
Re: Haskell 1.3 trivia
> How about removing the `where' from `module...where' and
> `interface...where' so that these become ordinary topdecls like the rest.
> This would mean that the convention about topdecls not having to be
> indented would no longer be an ugly exception, it would be more consistent
> with implicit main programs which have no introductory `where', and it
> would be more consistent with the fact that the natural break between the
> header and body of a module comes after the fixity decls.

The reason we used the "module...where" convention is to allow for multiple modules to be included in one "file". Your proposal is workable, but requires saying something extra about what terminates a module. I agree, however, that having to write "where" all the time is a pain (I still forget to put it in sometimes!), so perhaps you could complete the proposal with the wording required to say when a module ends.

-Paul

---
Professor Paul Hudak
Department of Computer Science
Yale University
P.O. Box 208285
New Haven, CT 06520-8285
(203) 432-4715
[EMAIL PROTECTED]
Re: Haskell 1.3 (n+k patterns)
jl writes:

> I feel the need to be inflammatory:
>
> I believe n+k should go.

Again, I agree completely. Let's get rid of this horrible wart once and for all. It's a special case that makes the language more difficult to explain and implement. I've hardly seen any programs using it, so I don't think backwards compatibility is a problem. Anyone who thinks this change will cause them more than 10 minutes of work, please speak up.

-- Lennart
Haskell 1.3 (n+k patterns)
I feel the need to be inflammatory:

I believe n+k should go.

There are lots of good reasons why they should go, of course. The question is: are there any good reasons why they should stay? My understanding is that the only reason they are advocated is that they make teaching induction easier. I don't believe it. I teach an introductory FP course including induction. I introduce structural induction directly, and the students have no problem with it. When I have tried to talk to individuals about natural-number induction using (n+k) patterns, the problems start. Because they are so unlike the form of patterns the students have become used to, they find all sorts of difficulties. What if n is negative? Ah yes, well it can't be. Why not? It just can't. Etc.

Let's throw them out.

John.
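For readers who have not met the feature under debate: an (n+k) pattern matches a non-negative integer of the form n+k and binds n, and its plain replacement is an ordinary guard. A minimal sketch of both forms (the n+k version is shown only in a comment, since the point of the thread is its removal):

```haskell
-- With n+k patterns one would write:
--   fact 0     = 1
--   fact (n+1) = (n+1) * fact n
-- The same function without n+k patterns, using an ordinary guard;
-- the negative case the students worry about must now be handled
-- explicitly rather than silently failing to match.
fact :: Integer -> Integer
fact 0 = 1
fact n
  | n > 0     = n * fact (n - 1)
  | otherwise = error "fact: negative argument"
```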
Haskell 1.3
I hope that Haskell 1.3 will clean up the report, and maybe even the language, and not just add features. Recent work at Bristol has raised the following points; I apologise for any which are well known already.

o The layout rule that says that an implicit block can be terminated by the surrounding construct (i.e. whenever an `illegal' token is found) is painful. It forces layout processing to be intertwined with parsing, which (e.g.) rules out the design of a language-sensitive editor based on matching tokens rather than full parsing. It can also make it difficult to report syntax errors precisely. There is little problem when the surrounding construct is a multi-token one, as in:

      pair = (case n of 1->42, 43)

  but pathological cases such as the following (all legal!) cause problems:

      a = n where n = 42 ; ; b = 43                  -- terminated by second `;'
      c = case x of 1->y where {y=44} where {x=1}    -- ditto by second `where'
      d = case 1 of 1->44 :: Int + 1                 -- ditto by `+'

  Is it not possible to find some better convention which rules these out and allows layout processing to be carried out separately from parsing?

o The expression 4/2/1 is illegal according to section 5.7 of the report (division operators are not associative), but legal according to the fixity declarations in appendix A.2 (infixl). Existing compilers differ. Also, :% is missing from the table in 5.7.

o Section 2.4 doesn't make it clear that decimal points are (presumably) the one and only exception to the longest-lexeme rule of section 2.3, which explicitly says that no lookahead is required. This exception is needed to make expressions such as [1..n] legal. Presumably, the rest of the numeric literal syntax follows the longest-lexeme rule, so that (f 1.2e) is reported as an incomplete literal rather than accepted as (f 1.2 e).

o Definitions such as (f x) = ... or (x # y) = ... are illegal (although existing compilers allow them).
  This prevents, for example, the following natural definition of the composition (dot) operator:

      (f . g) x = f (g x)

  Is this restriction intentional?

o The situation with unary minus is still confused. Expressions such as (2 + -3) are technically illegal, although accepted by current compilers. Also, it is not entirely clear from sections 3.3 and 3.4 whether (2-) is legal (presumably meaning (\n->2-n)). Also, the definition -42 = 42 is legal (patdefs do not exclude minus patterns), and accepted by current compilers, although it is meaningless.

o The form (`div`) is illegal, even though it looks very natural in definitions such as

      ops = [(+),(-),(`div`),(`mod`)]

  This seems to be against the general policy of allowing any meaningful expression in any suitable context.

o There is a general inconsistency of language in the report. A notable case is that the functions associated with a class are variously called methods, operations, or operators. The last of these is surely wrong.

o A number of other minor matters are raised by the tests available by anonymous ftp from ftp.cs.bris.ac.uk, directory /pub/functional/brisk.

Ian    [EMAIL PROTECTED], Tel: 0272 303334
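Some of the section and unary-minus points above can be checked directly in an implementation. A quick sketch (my own illustration, not text from the report; the names sub2, minus3 and ops are hypothetical):

```haskell
-- (2-) is a left section of binary minus, i.e. \n -> 2 - n;
-- this is the form whose legality the report leaves unclear.
sub2 :: Int -> Int
sub2 = (2 -)

-- (-3) in expression position is the literal negative three, never a
-- right section; `subtract` is the standard way to get \n -> n - 3.
minus3 :: Int -> Int
minus3 = subtract 3

-- (`div`) by itself is indeed illegal, but the same list can be
-- written with the operators in plain prefix form:
ops :: [Int -> Int -> Int]
ops = [(+), (-), div, mod]
```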
Re: Defining Haskell 1.3 - Committee volunteers wanted
Three cheers for Brian, for his work to keep Haskell a living and growing entity.

I propose as a touchstone for 1.3 that the committee should only look at extensions that have been incorporated in one or more Haskell implementations. Hence the following are all good candidates for 1.3's scrutiny:

    Monadic IO
    Strict data constructors
    Prelude hacking
    Standardizing annotation syntax

But the following is not:

    Records (naming field components)

If someone actually implemented records, then after we had some experience with the implementation it would become a suitable 1.3 candidate.

A further thing which 1.3 should look at is:

    ISO Standardisation

The credit for this suggestion should go to Paul Hudak, but I heartily endorse it.

Cheers,
-- P
Defining Haskell 1.3 - Committee volunteers wanted
Joe Fasel, John Peterson and I met recently to discuss the next step in the evolution of Haskell. While there are some big issues up ahead (adding Gofer-like constructor classes, for example), these should be considered for the next major revision, Haskell 2.0. For now, we want to be less ambitious, and produce a definition of Haskell 1.3. Topics on the agenda include:

    Monadic IO
    Strict data constructors
    Records (naming field components)
    Prelude hacking
    Standardizing annotation syntax

We think the best way to proceed is to call for volunteers to form a new committee to do the work on this. So, who's interested?

--brian
Re: Defining Haskell 1.3 - Committee volunteers wanted
I'm probably not expert enough to be on the committee. However, I have a suggestion. The syntax description of Haskell is hard to read. One reason is that one repeatedly has to look in the index to find out where each nonterminal is defined. If the page number of the definition of each nonterminal were written in, say, the right-hand margin at each use, then it would be easier to decipher things. A disadvantage might be added clutter.

Don