cvs head, winxp, make fails to make target `HSwin32.o', needed by `all'.
I'm trying to build today's cvs ghc on a winxp box, without trying anything unusual. The build fails and I am rather lost in the makefiles - any suggestions? Cheers, Claus

$ autoreconf
$ ./configure --host=i386-unknown-mingw32 --with-gcc=c:/mingw/bin/gcc
$ make 2>&1 | tee make.log

the last segments of make.log:

..
===fptools== Finished recursively making `all' for ways: p ...
PWD = /cygdrive/c/fptools/fptools/hslibs/hssource
==fptools== make all - --unix -wr; in /cygdrive/c/fptools/fptools/hslibs/win32
..
c:/mingw/bin/gcc -mno-cygwin -O -DTARGET_GHC -I../../ghc/includes -c spawnProc.c -o spawnProc.o
rm -f libHSwin32.a libHSwin32.a.tmp
(echo GDITypes_stub_ffi.o Win32Bitmap_stub_ffi.o Win32Brush_stub_ffi.o Win32Clip_stub_ffi.o Win32Control_stub_ffi.o Win32DLL_stub_ffi.o Win32Dialogue_stub_ffi.o Win32File_stub_ffi.o Win32Font_stub_ffi.o Win32Graphics2D_stub_ffi.o Win32HDC_stub_ffi.o Win32Icon_stub_ffi.o Win32Key_stub_ffi.o Win32MM_stub_ffi.o Win32Menu_stub_ffi.o Win32Misc_stub_ffi.o Win32NLS_stub_ffi.o Win32Palette_stub_ffi.o Win32Path_stub_ffi.o Win32Pen_stub_ffi.o Win32Process_stub_ffi.o Win32Region_stub_ffi.o Win32Registry_stub_ffi.o Win32Resource_stub_ffi.o Win32SystemInfo_stub_ffi.o Win32Types_stub_ffi.o Win32WinMessage_stub_ffi.o Win32Window_stub_ffi.o WndProc.o diatemp.o dumpBMP.o errors.o finalizers.o spawnProc.o Win32Dialogue_stub.o Win32Window_stub.o GDITypes_stub_ffi.o Win32Bitmap_stub_ffi.o Win32Brush_stub_ffi.o Win32Clip_stub_ffi.o Win32Control_stub_ffi.o Win32DLL_stub_ffi.o Win32Dialogue_stub_ffi.o Win32File_stub_ffi.o Win32Font_stub_ffi.o Win32Graphics2D_stub_ffi.o Win32HDC_stub_ffi.o Win32Icon_stub_ffi.o Win32Key_stub_ffi.o Win32MM_stub_ffi.o Win32Menu_stub_ffi.o Win32Misc_stub_ffi.o Win32NLS_stub_ffi.o Win32Palette_stub_ffi.o Win32Path_stub_ffi.o Win32Pen_stub_ffi.o Win32Process_stub_ffi.o Win32Region_stub_ffi.o Win32Registry_stub_ffi.o Win32Resource_stub_ffi.o Win32SystemInfo_stub_ffi.o Win32Types_stub_ffi.o Win32WinMessage_stub_ffi.o
Win32Window_stub_ffi.o WndProc.o diatemp.o dumpBMP.o errors.o finalizers.o spawnProc.o ; /usr/bin/find GDITypes_split StdDIS_split Win32_split Win32Bitmap_split Win32Brush_split Win32Clip_split Win32Control_split Win32DLL_split Win32Dialogue_split Win32File_split Win32Font_split Win32Graphics2D_split Win32HDC_split Win32Icon_split Win32Key_split Win32MM_split Win32Menu_split Win32Misc_split Win32NLS_split Win32Palette_split Win32Path_split Win32Pen_split Win32Process_split Win32Region_split Win32Registry_split Win32Resource_split Win32Spawn_split Win32SystemInfo_split Win32Types_split Win32WinMessage_split Win32Window_split -name '*.o' -print) | xargs C:/cygwin/bin/ar q libHSwin32.a
: libHSwin32.a
make[2]: *** No rule to make target `HSwin32.o', needed by `all'.  Stop.
make[1]: *** [all] Error 1
make[1]: Leaving directory `/cygdrive/c/fptools/fptools/hslibs'
make: *** [build] Error 1

___ Glasgow-haskell-bugs mailing list Glasgow-haskell-bugs@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
url problem, missing GLUT, and building package ghc
perhaps some items got lost from my last mail:

- url problem: on http://www.haskell.org/ghc/ , Mailing Lists points to the non-existent http://www.haskell.org/ghc/docs/latest/html/users_guide/introduction-ghc.html#MAILING-LISTS-GHC

- missing GLUT: the building guide might want to mention that one has to add GLUT to the MSYS (in section 13.4), or configure will exclude the corresponding package from the build. In particular, the ghc 6.4 msi bundle I got seems to lack the GLUT-2.0 package.

- package ghc: I now seem to have a ghc from cvs head, and a package ghc in ghc/compiler. From Simon's comment, it seems that building the package will not depend on a compiler build, so the package is presumably built with the pre-installed ghc (6.4), but from the sources of the cvs head. Is that correct, meaning that I should register the package with the pre-installed ghc?

Cheers, Claus

old message: http://www.haskell.org//pipermail/glasgow-haskell-bugs/2005-March/004778.html
[Haskell] Change of editors for HCA Report
I am happy to report that the half-yearly editions of the - Haskell Communities and Activities Report (next edition: May 2004) http://www.haskell.org/communities/ - are going to continue as planned, thanks to Arthur van Leeuwen, Universiteit Utrecht, who has kindly agreed to take on the May 2004 edition, and Andres Loeh, also at Utrecht, who has offered to take over as editor starting from November 2004. For those of you who haven't heard of these reports before, have a look at the web site - you should find them interesting reading. They come out twice a year (May/November, since November 2001), with the goal of helping to improve the communication between the increasingly diverse groups, projects, and individuals working on, with or inspired by Haskell. The idea of these reports is simple: Every six months, a call goes out to all of you on the Haskell mailing list to contribute brief summaries of your own area of work. Many of you respond (eagerly, unprompted, and well in time for the deadline;-) to the call. The editor then collects all these into a single report and feeds it back to this very mailing list. A big thanks to Arthur and Andres for volunteering to keep these reports alive, and I hope that everyone else will support their work at least as well as you have supported mine (just continue to do good work, and be ready to talk about it). It has certainly been an interesting time, and I look forward to reading my first HCA report from the other side!-) So please make sure to reserve some time in April for writing a brief summary of your favourite Haskell activities (Arthur will send his call for contributions in the not too distant future). Happy Haskelling, Claus -- Computing Laboratory University of Kent http://www.cs.kent.ac.uk/~cr3/ ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
ghc 6.2 gets confused about Main.hi reuse
When trying to build HaRe with ghc 6.2 (builds fine with ghc 6.0.1), we encountered a long list of strange error messages of the kind:

...
*** Compiling Main:
compile: input file pfe_client.hs
*** Checking old interface for Main:
Failed to load interface for `MapDeclMBase':
Could not find interface file for `MapDeclMBase'
locations searched:
  /proj/haskell/lib/ghc-6.2/imports/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/text/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/data/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/util/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/posix/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/concurrent/MapDeclMBase.hi
  /proj/haskell/lib/ghc-6.2/hslibs-imports/lang/MapDeclMBase.hi
...

It turns out that ghc 6.2 seems to get confused about the reuse of Main.hi: we build two different executables (client/server) in the same directory, from the same set of modules, and ghc traditionally ignores the -o directive and names the interface file Main.hi. ghc 6.0.1 had no trouble with this, but for ghc 6.2, I have to delete Main.hi before building the second executable, to avoid the `Failed to load interface' errors mentioned above (btw, that was the complete list of locations searched, as given in the error message - note that it does *not* include the locations where ghc has put the interface files for the current project). As I've got a work-around (remove Main.hi), this isn't critical, but I'd be interested to learn what happened between 6.0.1 and 6.2, and others who get this kind of error message might be interested to know what to look for. Cheers, Claus
Re: Problem with ghc on Windows ME
It's called 'raw' because it is supposed to get the arguments through *unmodified* to the called program. No file globbing, no escape stuff, nothing.

That's exactly what I'm worried about: it seems that rawSystem is *not* passing the arguments unmodified, but tries to compensate for windows interpretation, from a unix perspective (*). As long as raw means unmodified, I just have the problem of finding the appropriate documentation for the system I'm working on (after all, that's why it's in System). But if there are two interpreters fighting with each other for some unstable balance, things do not look so promising. For instance, I can't use rawSystem in ghc-6.x because there are already two different versions in circulation, and a third one planned. So, instead of simplifying my code, I'd have to triplicate it, and add version testing, which adds preprocessing..

Therefore, my suggestion would be to keep the rawSystem from ghc-6.0.1 (which doesn't seem to do any interpretation?), and to provide a system-specific escape function:

  System.Cmd.escape :: String -> String -> String
  -- (System.Cmd.escape chars string) escapes occurrences of
  -- chars in string, according to convention on the current
  -- system

If really necessary, there could be a convenience function somewhat like:

  -- try to do the right thing
  System.Cmd.rawSystem' :: String -> [String] -> IO ExitCode
  System.Cmd.rawSystem' path args =
    rawSystem $ concat (path:[' ':(escape "\\\"" a) | a <- args])

Since the implementation of rawSystem in the current binary release appears to be buggy, there is still a chance to declare the original behaviour as the intended one and have the same code work in all versions of ghc-6 that may be installed out there..
Claus (*) I can't think of any case where I could pretend that a rawSystem call on unix could be the same as one on any other system - not with all the system variations around on windows alone (mingw ghc/cygwin ghc/international variants of windows/different versions of windows/..). So what I really need is a way to specify different code, depending on what system I'm on (which seems to be the purpose of System.Info). And for that it doesn't really help me if I have to think about what a unix system call would do while writing my windows system call - it is well-meant, but it just adds complications. ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
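To make the suggestion in the message above concrete, here is a minimal, self-contained sketch of the proposed escape function. The backslash-escaping convention shown is only one plausible assumption; the whole point of the proposal is that a real System.Cmd.escape would implement whatever quoting convention the current system actually uses.

```haskell
-- Hypothetical sketch of the proposed System.Cmd.escape:
-- backslash-escape every occurrence of one of the given characters.
-- (The escaping convention is an assumption; a real implementation
-- would follow the conventions of the system at hand.)
escape :: String -> String -> String
escape chars = concatMap esc
  where esc c | c `elem` chars = ['\\', c]
              | otherwise      = [c]

main :: IO ()
main =
  -- escape backslashes and double quotes in an argument string
  putStrLn (escape "\\\"" "say \"hello\\world\"")
```

With such a function available, the rawSystem' convenience wrapper suggested above is just a one-liner composing escape with rawSystem.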
Re: Problem with ghc on Windows ME
No, the effect is that the arguments are passed unmodified to the program. The implementation of rawSystem might have to do some compensation under the hood (eg. on Windows), but that's not visible to the client.

As I've learned to interpret the uncompensated arguments, I'd prefer a rawSystem without compensation, on the grounds that it'll work as anything else on this system, with more backslashes, but only one possible source of bugs instead of two. But that's just my preference.

This is an unfortunate situation, granted. ghc-6.0.x had a version of rawSystem that was not very raw: on Windows there was a layer of translation between rawSystem and the invoked program, and on Unix it didn't even allow you to pass any arguments to the program. We thought this was wrong, and decided to make rawSystem truly raw.

The translation layer is standard on Windows, isn't it? Of course, rawSystem needs to be in a form that works on Unix as well.

Unfortunately we got it slightly wrong in 6.2. 6.2.1 will be better (I hope), and in the meantime we can offer an implementation of rawSystem that you can use locally to work around the differences - how does that sound?

In released software, I'm using only system so far, so won't be affected negatively. But I still haven't managed to work around the works-in-win98/fails-in-winXP problem I mentioned, and have so far avoided trying rawSystem because of the version problem. If you can offer a workaround to the version problem, I'll try whether rawSystem is any help in my case. Generally, it'd be great if working code would less often break with new releases (oh, and a portable popen2, while we're at it!-).

I don't think it's necessary to do any of this, if rawSystem works as it's intended. But I may have misunderstood your intention...

My intention was to get access both to the raw rawSystem and to the compensating rawSystem. Exporting both would be a simpler option. Cheers, Claus PS.
On Windows98, System.system always returns ExitSuccess. Is there a way of fixing that, or at least returning ExitFailure instead of ExitSuccess, to alert unsuspecting testers to the problem?
[Haskell] ANNOUNCE: HaRe, the Haskell Refactorer, version 0.2; and a workshop
Dear Haskellers, as part of our project on Refactoring Functional Programs http://www.cs.kent.ac.uk/projects/refactor-fp/ we are pleased to announce the availability of HaRe 0.2 (also known as HaRe 27/01/2004 ;-), a snapshot of our Haskell Refactorer prototype. The major changes since HaRe 0.1 (apart from numerous bug-fixes, a clean-up of the Emacs binding, and initial support for literate Haskell files) are that all refactorings are now module-aware and can thus be used in multi-module settings. You can get HaRe 0.2 via http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html Please see the README.txt for build/use instructions and known issues, and let us know about any problems, bugs, suggestions, or additional platforms you can confirm as working: our project address at kent.ac.uk is refactor-fp. The catalogue describing the refactorings implemented in HaRe has been updated and is included in the doc/ directory. This means that our basic platform is now fairly complete (type-awareness is the next big step), and that in future we can hopefully focus a bit more on extending the range of refactorings, and the range of our collaborations. To this end, we are organising a one-day workshop here in Canterbury, on Monday, 09 February 2004: http://www.cs.kent.ac.uk/projects/refactor-fp/workshop.html If you'd be interested in participating, please let us know. Happy Refactoring! The HaRe Team (Huiqing Li, Claus Reinke, Simon Thompson) project email: refactor-fp (at kent.ac.uk) -- Background: Refactoring is the process of changing the structure of programs without changing their functionality, i.e., refactorings are meaning-preserving program transformations that implement design changes. For more details about refactoring, about our project and for background on HaRe, see our project pages and the papers/presentations/catalogue/demo/etc. available there, especially our contribution to last year's Haskell Workshop. 
HaRe - the Haskell Refactorer: HaRe is our prototype tool supporting a first few basic refactorings for Haskell 98 (see README.txt for known issues and limitations). It is implemented as a separate refactoring engine (on top of Programatica's Haskell frontend and Strafunski's generic traversal strategy library), with small scripting frontends that call this engine from either Vim or Emacs. The refactoring engine itself has been seen to build (with ghc-6.0.1) and run on most flavours of Windows (cygwin needed to build) and on Suns (binutils recommended to build), so we expect it to build and work on other unix-like platforms with almost no changes. In other words, we've tried to make sure that most of you should be able to build and use HaRe from your favourite OS/editor.

Currently supported refactorings:

  removeDef       : remove an unused definition
  duplicateDef    : duplicate a definition under a new name
  liftToTopLevel  : move a local definition to top level
  liftOneLevel    : move a local definition one level up
  demote          : move a definition local to point of use
  rename          : rename an identifier
  introNewDef     : turn expression into use of new definition
  unfoldDef       : replace use of identifier by right-hand side
  addOneParameter : add parameter to definition
  rmOneParameter  : remove unused parameter from definition
  generaliseDef   : turn expression on rhs of definition into new parameter of that definition

A series of screenshots illustrating some of the tasks one might want to accomplish with these refactorings can be found via the HaRe page (see above for URL).

Caveats (see also README.txt): Please keep in mind that this is a prototype, so we do not recommend using it on your production sources just yet. Just play with it to get an idea of tool-supported refactoring in Haskell, and send us your feedback and bug-reports.
Our goal is to develop this into a tool that many of you will find indispensable for Haskell development, and while we won't be able to follow every suggestion, we've got about 1.5 more years in which to work towards this goal!-)

History: Functionally, HaRe 0.1 was still roughly the snapshot you'd seen at the Haskell workshop, packaged up for relative ease of build/use, but unaware of types and modules, and all refactorings only working on a single module. It had some annoying issues that plagued some of our Emacs users, didn't work at all with literate Haskell files, and had several other minor problems. HaRe 0.2 has not added refactorings, but all refactorings have now been modified to take Haskell's module system into account. This means that a single refactoring may affect multiple modules in a given project (e.g., renaming an exported function should trigger corresponding renamings in all
ANNOUNCE: HCA Report (5th edition, November 2003)
On behalf of the many contributors, I am happy to announce that the - Haskell Communities and Activities Report (5th edition, November 2003) http://www.haskell.org/communities/ - is now available from the Haskell Communities home page in several formats: in PDF (for those who haven't noticed: that format is not just for printing, but also for online viewing, with working links and table of contents) or, for those who have problems with the PDF, in HTML (using John's secret weapon yet again) and Postscript. A big thanks here to everyone who contributed, be it by making sure that there are so many interesting activities to report on, or by sending in descriptions to make sure that we can read about these activities! I hope you will find it as interesting to read as we did. For those of you who haven't heard of these reports before (and have filtered all previous calls into the spam-folder..), the first edition of the HCA Report was released in November 2001, with the goal of helping to improve the communication between the increasingly diverse groups, projects, and individuals working on, with or inspired by Haskell. The idea of these reports is simple: Every six months, a call goes out to all of you on the Haskell mailing list to contribute brief summaries of your own area of work. Many of you respond (eagerly, unprompted, and well in time for the deadline;-) to the call. The editor then collects all these into a single report and feeds it back to this very mailing list. And when we try for the next update in six months, you might want to add your own work, project, research area or group as well. So, please, put that item into your diary now! --- End of April 2004: target deadline for contributions to the May 2004 edition of the HCA Report --- It has become clear that many Haskellers who work on interesting projects no longer have the time to follow the Haskell mailing list closely and may thus miss the calls for contribution. 
If you are a member, user or friend of such projects, please point them to the current edition, and invite them to register with me for a simple email-reminder in the middle of April (and no, you can't register anyone else:). Of course, they'll still have to act on that reminder, but perhaps we can extend our reach this way.. Enjoy (and communicate;-)! Claus Reinke -- Computing Laboratory University of Kent http://www.cs.kent.ac.uk/~cr3/ PS. Please note the preface: as announced at this year's Haskell workshop, there'll be a change of editor after this edition. Andres Loeh has kindly offered to take over starting with the November 2004 edition, but we are still looking for someone to take on the May 2004 edition. If you're interested, please get in touch with me!
Re: Marshalling functions was: Transmitting Haskell values
Is [marshaling functions] something absolutely impossible in Haskell and by what reason? Just because of strong typing (forgive my stupidity ;)? Or are there some deeper theoretical limitations?

If you're interested in some recent work here, have a look at Clean (similar enough to Haskell), specifically their work on first-class I/O. I gave a reference in another thread: http://www.haskell.org/pipermail/haskell-cafe/2003-October/005243.html Then follow the references. orthogonal persistence is another nice search key..

The big theoretical issue is whether it would provide an Eq or Show instance for -> by the backdoor. Careful API design could avoid the worst of this.

once you can do IO, you can inspect the GHC heap for equality, or map memory into Haskell data structures, so that's not really new ..

What's the problem with a Show instance for ->? (Or, put another way: what makes the current Show instance in the Prelude harmless compared to a fuller implementation?)

have you tried to show a function?-) an equality based on that would be very rough, a proper equality would be undecidable, and there are whole worlds of other equalities in between. Cheers, Claus
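For concreteness, the kind of 'harmless' Show instance under discussion can be sketched as below (the placeholder text is an assumption): every function shows as the same constant string, so nothing about a function's identity or behaviour leaks out - which is exactly why it is useless as a basis for equality.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A trivial Show instance for functions: every function is shown as
-- the same placeholder, so no intensional information escapes.
instance Show (a -> b) where
  showsPrec _ _ = showString "<function>"

main :: IO ()
main = do
  print (+ 1)      -- shown as <function>
  print (map not)  -- also shown as <function>: indistinguishable
```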
Special Invitation :-) HCA Report (November 2003)
Dear GHC/GPH/GDH users and developers, in case you haven't seen the calls for contributions on the main Haskell list: the Haskell community hopes to hear from you about all the interesting stuff you've been brewing over the last six months, not to mention the even more interesting stuff in the pipeline!-)

Simon PJ has sent an update for GHC itself, but there have been several interesting developments, here on the list, or on workshops, etc. during the last six months, which are not yet represented/documented in the draft of our report. Here's a non-exclusive list of topics that are not yet included:

- Simon M.'s usual batch (he's away;-):
  - Concurrent Haskell
  - Alex/Happy
  - Haddock
- hierarchical libraries (the libraries list members are taking care of this one)
- Robert Ennals' new ghc debugger
- Wolfgang Thaller's bound threads work
- the whole parallel and distributed Haskell story..
- etc., etc., etc. ..

Check the topics and contributors list to see what still needs filling in: http://www.haskell.org/communities/topics.html

- Haskell Communities and Activities Report (November 2003 edition) http://www.haskell.org/communities/ -

Could everybody send in their individual reports ** by the end of THIS week ** ? Please?-) - Thanks, Claus (current editor)
REMINDER - Contributions to HCA Report (November 2003 edition)
Dear Haskeller, *you* are still invited to let the Haskell community know what you've been doing with Haskell recently! Remember, in spite of its printable formats, this is still basically an email survey:

- just hit the reply button now, and take a few minutes to compose your contribution about your own recent Haskell work and play (ASCII-format is fine, LaTeX is fine - please do not send HTML..).

- if you know of someone else doing interesting stuff with Haskell (perhaps even at work?-), who is not yet confirmed on our topics and contributors list below, let them know about this survey, and encourage them to contribute (please cc me, so that I know what contributions to expect)

As you can see from the topics and contributors page for our Haskell Communities Activities Reports, at: http://www.haskell.org/communities/topics.html both confirmations from well-known contributors and offers from new contributors are still a bit sparse at this stage, although we're making progress (I'm delighted to see that most contributors send their text with their confirmation - thanks!-).

- Haskell Communities and Activities Report (November 2003 edition) http://www.haskell.org/communities/ -

Could everybody send in their individual reports ** by the end of THIS week ** ? Please?-) - Thanks, Claus (current editor)

--
Haskell Communities and Activities Report (November 2003 edition)
Contributions are due in by the end of THIS week!
http://www.haskell.org/communities/
Re: Calling an external command from a Haskell program
The function system works fine in Hugs except on windows where DOS limitations cause the function to always return ExitSuccess. (ghc suffers from the same problem on Windows.)

Actually, that is not quite correct: ghc seems to suffer from this problem only on older Windows versions (such as Windows 98), whereas Hugs seems to have the bug also on newer Windows versions (such as XP), at least in its current binary release.. I reported the ghc/win98 problem earlier, below is what happens on Windows XP. So the bug seems even worse in Hugs than in GHC? Claus

PS. why the differences in default access to standard modules? should GHC be more restrictive there?

---
[GHCi banner] GHC Interactive, version 6.0.1, for Haskell 98. http://www.haskell.org/ghc/ Type :? for help.
Loading package base ... linking ... done.
Prelude> System.system "fail" >>= print
'fail' is not recognized as an internal or external command,
operable program or batch file.
ExitFailure 1
Prelude>
---
[Hugs banner] Hugs 98: Based on the Haskell 98 standard. Copyright (c) 1994-2002. World Wide Web: http://haskell.org/hugs Report bugs to: [EMAIL PROTECTED] Version: Nov 2002
Haskell 98 mode: Restart with command line option -98 to enable extensions
Reading file C:\Program Files\Hugs98\lib\Prelude.hs:
Hugs session for:
C:\Program Files\Hugs98\lib\Prelude.hs
Type :? for help
Prelude> :a System
Reading file C:\Program Files\Hugs98\lib\System.hs:
Hugs session for:
C:\Program Files\Hugs98\lib\Prelude.hs
C:\Program Files\Hugs98\lib\System.hs
System> system "fail" >>= print
'fail' is not recognized as an internal or external command,
operable program or batch file.
ExitSuccess
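For anyone wanting a stand-alone version of the test shown in the transcripts above, here is a minimal sketch. It uses System.Process, where system lives in current GHCs; in the ghc-6.x era under discussion the import would be System.Cmd instead.

```haskell
import System.Process (system)

main :: IO ()
main = do
  -- ask the shell to exit with a non-zero status; a correctly working
  -- implementation of system must report this as ExitFailure, not
  -- ExitSuccess (the bug discussed above)
  ec <- system "exit 1"
  print ec
```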
Re: Calling an external command from a Haskell program
If you want access to its I/O streams as well, you can use Posix.popen, which is not standard Haskell 98, I think, but it's in GHC.

Even worse, it is not portable (AFAICT)! I'm worried by the tendency towards Posix, at a time when, e.g., GHC by default no longer supports this on Windows (went missing in the move from cygwin to mingw). Some also thought Posix overkill, and wanted a portable replacement for the most-commonly-wanted functionality subset..

Could someone in the know please summarise the state of Posix-support in Haskell implementations in general? I've cc-ed to the libraries list - perhaps someone there could comment on the state of portable replacements for Posix functionality, especially for popen and friends? I seem to recall a very promising discussion about portable and flexible ways to start subprocesses a while ago - has that led to any concrete libraries? Cheers, Claus
HaRe 0.1 updates, and ghc-6.0.1
Since releasing version 0.1 of the Haskell Refactorer HaRe, we've continued to update the HaRe snapshots on our webpage every now and then, in response to your feedback and bug reports. http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html

We do not usually announce every new snapshot here, but there have been several requests for making HaRe build with ghc-6.0.1, and, since last week, the snapshots include the necessary workarounds. Apart from several smaller things, the current snapshot (20/10/2003) also includes a fix for one rather substantial bug in introNewDef and generaliseDef; String and Char literals should also no longer cause problems (see README.txt).

So, if this has been keeping you from playing with HaRe, you might want to check again (and please keep those experience/bug reports and suggestions coming!-). Happy Refactoring! The HaRe Team (Huiqing Li, Claus Reinke, Simon Thompson)
Call for Contributions - HCA Report (November 2003 edition)
Dear Haskellers, once again, we set out to get an overview of what is going on in all things Haskell, and *you* are invited to help with this effort! Please contribute to the forthcoming fifth edition of our Haskell Communities Activities Report http://www.haskell.org/communities/

Submission deadline: 31 October 2003 (please send your contributions to me, in plain-ASCII or LaTeX format, keeping HCA Report in the subject line)

The Haskell Communities Activities Reports are a bi-annual, bird's-eye overview of Haskell development over the last 6 months, and perhaps an outlook over concrete plans for the next 6 months. If you have only recently joined the Haskell world, have a look through the May 2003 edition - it provides a useful overview, starting points and links that may answer many a FAQ.

The current plan is to get contributions in by the end of October, and to get the collective report out early next month (you will find this an excellent opportunity to update your webpages, get out pending releases, announce new projects, summarize recent developments in sub-communities for all Haskellers, etc. ;-).

The general idea is to update all existing summaries (these reports are really about *recent/current* activities), to drop any topics that haven't had any activity for two consecutive 6-month periods, and to add any new developments or topics for which no-one contributed summaries to the previous edition, while trying to keep the whole from blowing up (last time, we ended up with about 30 pages).

Looking forward to your contributions, Claus (current editor)

--
Haskell Communities and Activities Report (November 2003 edition)
All contributions are due in by the end of October 2003!
http://www.haskell.org/communities/

- topics New suggestions for current hot topics, activities, projects, ..
are welcome - especially with names and addresses of potential contacts - but here is a non-exclusive list of likely topics (see also http://www.haskell.org/communities/topics.html ): General Haskell developments; Haskell implementations; Haskell extensions; Standardization and documentation; Haskell tutorials, how-tos and wikis; Organisation of Haskell tool and library development; Haskell-related projects and publications; new research, fancy tools, long-awaited libraries, cool applications; Feedback from specialist mailing lists to the Haskell community as a whole; Haskell announcements; .. all (recent) things Haskell.

Announcements: if you've announced anything new on the Haskell list over the last six months, you'll want to make sure that is reflected in this edition!

Project pings: if you're maintaining a Haskell tool/library/.., you'll want to let everyone know that it is still alive and actively maintained, even if there have been no new additions (and even more so if there have been new developments).

Tutorials: if you've fought with some previously undocumented corner of Haskell, and have been kind enough to write down how you did manage to build that networking program, or if you've written a tutorial about some useful programming techniques/patterns, this is your opportunity to spread the word (btw, short, topic-specific, hands-on tutorials that only show how to achieve a certain practical task would do a lot to make things easier for new Haskellers - Erlang and Perl folks seem to be good at this kind of thing, so why not have a similar effort for Haskell?).

Applications: if you've been working quietly, using Haskell for some interesting project or application (commercial or otherwise), you might want to let others know about what you're using Haskell for, and about your experiences using the existing tools and libraries; are you using Haskell on your job?
There was an interesting thread about using Haskell for non-Haskell things not too long ago - why not write a sentence or two about your use of Haskell for our report?

Feedback: if you're on one of the many specialist Haskell mailing lists, you'll want to report on whatever progress has been made there (GUI API discussions, library organisation, etc.).

If you're unsure whether a contact for your area of work has come forward yet, have a look at the report's potential topics page, or get in touch with me. I've contacted last time's contributors, hoping they will volunteer to provide updates of their reports, and will update the contacts on the topics page fairly regularly. But where you don't yet see contacts listed for your own subject of interest, you are very welcome to volunteer, or to remind your local community/project team/mailing list/research group/etc. that they really ought to get their act together and let the Haskell community as a whole know what they are up to.
odd interactions (was: IO behaves oddly if used nested)
[moved to haskell-cafe]

> The odd thing is in the conceptual explanation. If I give a description
> of some function f x = y in Haskell, I expect that some program f x is
> reduced to y and the result is given back (possibly printed). A good
> story to sell to students. This is almost everywhere the case, except
> for the IO monad.

Indeed. Although, for the benefit of your students, you'll want to separate printing and reduction, even in the first case. The important thing to realise here (and to pass on to your students) is that input/output interactions and functional program reductions are two conceptually very different things, and that this difference is independent of which of the many functional I/O systems you use. So you cannot use the functional reduction explanation for I/O as well.

Once upon a long ago, I had to add I/O to a purely functional reduction system as part of my PhD work, and to clarify things for myself, I tried to express the various available functional I/O systems in the common framework of transformation rules (program reduces to program, state transforms into state). The difference then simply became one of context-free versus context-sensitive transformations. [summarised in chapter 3 of my old thesis http://www.cs.kent.ac.uk/people/staff/cr3/publications/phd.html - perhaps you can find some useful suggestions there?]

The differences between character streams, request/response streams, result continuations, monadic I/O, or even uniqueness-typed environment passing are merely differences in how the context-free functional program reductions are embedded in a context-sensitive environment of I/O devices. They each have their pros and cons (the chapter tries to outline a logical development between the systems), and they all try to embed functional reductions into their I/O context in such a way that doing I/O does not look too foreign.
So, each of the functional I/O systems lets you play down the difference between context-sensitive input/output and functional reductions to some extent, but in each of the systems, you quickly run into trouble if you try to take the similarities too far.

> 3. Hmm, feels like math, looks like math, ahah! is math!
>    (designers and thinkers)

Making the distinction between context-free and context-sensitive transformations explicit should help your group 3, as it gives them an operational semantics view of what their programs mean, what they are supposed to do, and where that differs from what they expect.

The usual questions of group 3:

> * Why is an IO a evaluated if I am not interested in its result?
>   (opposite to the f x = y lazy behaviour)

main is evaluated because you asked the system to run its value (the whole IO a, not the a-typed result returned by running it).

> * Why is in the putStr "hello world" example "Hello World" not shown?
>   (opposite to expected f x = y eval-first-then-show behaviour)

An implementation deficiency. A more complete implementation might permit you to show intermediate steps in the program+device state transformation:

    program                          || device context
    === putStr ("hello "++"world")   || <nothing here>
    --context-free reductions-->*
        putStr "hello world"         || <nothing here>
    --context-sensitive interaction-->
        return ()                    || hello world

Just before the interaction, "hello world" is part of the program, so a step-by-step implementation might show it (you couldn't show it yourself without implementation help, as the IO type is abstract), but after the interaction, the string has moved from the program to the device context in which the program is running. The string appears on some output device, and the implementation could show you return () as the final value of your program (the reduction system I was working with did so).

> * Why is in the IO (IO ()) example the inner IO () not evaluated?
>   (somewhat opposite to expected f (f x) behaviour - I personally wonder
>   if it is even sound in a category theoretical setting)

urgh, please.. categories do not need a theory about everything [ducks quickly;-]

First: in an expression of type IO (IO ()), there is not necessarily an inner IO (). Evaluating the expression, and running the result, will fail or return an IO (), but that might not even exist beforehand (that's just the good old "no String in IO String").

Second: the inner IO () returned by running the outer IO (IO ()) is not evaluated unless needed, and it won't be needed unless run. To run any IO a-typed expression, the current monadic I/O system requires it to be placed at the boundary between your functional program and the I/O context it is running in, i.e., it has to be part of the sequence of IO operations that make up the value of main. Until it gets run this way, it's just an expression that could evaluate to an IO-script that could be run.

> * ...Lots of other questions...

Better than individual answers is a simple conceptual model that allows the students to answer such questions themselves.
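[Editor's illustration, not part of the original message - a minimal sketch of the second point, with hypothetical names: the inner action returned by an outer IO (IO ()) does nothing until it is itself sequenced into main.]

```haskell
-- The inner action of an IO (IO ()) is inert until it is placed
-- in main's sequence of IO operations.
outer :: IO (IO ())
outer = do
  putStrLn "outer runs"
  return (putStrLn "inner runs")   -- built here, but not yet run

main :: IO ()
main = do
  inner <- outer   -- prints "outer runs"; binds the inner action
  inner            -- only now does "inner runs" get printed
```

Dropping the last line (`inner`) makes the program print only "outer runs": the inner IO () is bound, but never run.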
Pre-Call for Contributions -- HCA Report (November 2003 edition)
[apologies for multiple copies - experience has shown that not all Haskellers can be reached via the main Haskell list anymore]

An entry for your diaries (no other action required right now):

-- Pre-Call --
Your contributions to the November 2003 edition of the Haskell Communities and Activities Report will be invited soon.
http://www.haskell.org/communities/
Submissions are welcome between 20th and 31st of October.
--

The real Call for Contributions will follow closer to the submission window, in about two weeks. This is just an early warning. You will need to plan about 15 minutes for composing your contribution, which would be a brief, ASCII-format description of your Haskell activities (see earlier editions for plenty of examples).

Cheers,
Claus (current editor of HCA Reports)

___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
ANNOUNCE: HaRe, the Haskell Refactorer, version 0.1
Dear Haskellers,

as part of our project on Refactoring Functional Programs
http://www.cs.kent.ac.uk/projects/refactor-fp/
we are pleased to announce the availability of HaRe 0.1 (also known as HaRe 01/10/2003 ;-), a snapshot of our Haskell Refactorer prototype. You can get it via
http://www.cs.kent.ac.uk/projects/refactor-fp/hare.html

Please see the README.txt for build/use instructions and known issues, and let us know about any problems, bugs, suggestions, or additional platforms you can confirm as working: our project address at kent.ac.uk is refactor-fp (which we'd like to keep spam-free). An initial catalogue describing the refactorings implemented in HaRe (with slightly different names) is included in the doc/ directory.

Happy Refactoring!
The HaRe Team (Huiqing Li, Claus Reinke, Simon Thompson)

--
Background: Refactoring is the process of changing the structure of programs without changing their functionality, i.e., refactorings are meaning-preserving program transformations that implement design changes. For more details about refactoring, about our project, and for background on HaRe, see our project pages and the papers/presentations/catalogue/demo/etc. available there, especially our contribution to this year's Haskell Workshop.

HaRe - the Haskell Refactorer: HaRe is our prototype tool supporting a first few basic refactorings for Haskell 98 (see README.txt for known issues and limitations). It is implemented as a separate refactoring engine (on top of Programatica's Haskell frontend and Strafunski's generic traversal strategy library), with small scripting frontends that call this engine from either Vim or Emacs. The refactoring engine itself has been seen to build (with ghc-5.04.3) and run on most flavours of Windows (cygwin needed to build) and on Suns (binutils recommended to build), so we expect it to build and work on other unix-like platforms with almost no changes.
In other words, we've tried to make sure that most of you should be able to build and use HaRe from your favourite OS/editor.

Currently supported refactorings:

  removeDef       : remove an unused definition
  duplicateDef    : duplicate a definition under a new name
  liftToTopLevel  : move a local definition to top level
  liftOneLevel    : move a local definition one level up
  demote          : move a definition local to point of use
  rename          : rename an identifier
  introNewDef     : turn expression into use of new definition
  unfoldDef       : replace use of identifier by right-hand side
  addOneParameter : add parameter to definition
  rmOneParameter  : remove unused parameter from definition
  generaliseDef   : turn expression on rhs of definition into new
                    parameter of that definition

A series of screenshots illustrating some of the tasks one might want to accomplish with these refactorings can be found via the HaRe page (see above for URL).

Caveats (see also README.txt): Please keep in mind that this is a prototype, so we do not recommend using it on your production sources just yet. Just play with it to get an idea of tool-supported refactoring in Haskell, and send us your feedback and bug reports. Our goal is to develop this into a tool that many of you will find indispensable for Haskell development, and while we won't be able to follow every suggestion, we've got almost two more years in which to work towards this goal!-)

History: Functionally, this is still roughly the snapshot you've seen at the Haskell workshop, packaged up for relative ease of build/use. Indeed, interim snapshots have been available all through September, and some of you have already played with those. The earliest snapshots were somewhat buggy, but over the last weeks the software has stabilised to the extent that we are back to Bug 0 (aka: insufficient test coverage!-), and the time has come to distribute the current snapshot more widely.
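[Editor's illustration, not from the HaRe distribution - a hypothetical before/after for generaliseDef from the list above: a sub-expression on the right-hand side becomes a new parameter, and call sites supply it explicitly. Both versions are shown side by side for comparison; the refactorer would rewrite the original definition and its call sites in place.]

```haskell
-- before: (+1) is fixed inside the definition
incAll :: [Int] -> [Int]
incAll xs = map (+1) xs

-- after generalising (+1) into a new parameter f
incAll' :: (Int -> Int) -> [Int] -> [Int]
incAll' f xs = map f xs

main :: IO ()
main = print (incAll [1,2,3] == incAll' (+1) [1,2,3])  -- prints True
```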
Re: ANNOUNCE: HaRe, the Haskell Refactorer, version 0.1
> First, let me say that I'm intrigued. This looks like really neat
> functionality to have available.

Thanks; obviously, we think so, too!

> I'm curious whether you are planning (or have developed) tools to
> *detect* cases for refactoring? Occasionally, I find myself wishing for
> a tool to help me clean up a finished program, by e.g.

There are well-established connections between refactoring and software metrics (or smell detection, as the oo folks like to call it;-), and there are whole groups of tools/software engineering ideas that need porting and adaption to Haskell (change impact analysis/visualisation anyone?). Chris Ryder here has been working on a metrics and visualisation library for Haskell (but according to his own metrics, he claims it's not ready for release yet:-(
http://www.cs.kent.ac.uk/people/rpg/cr24/medina/
and although he's been moving to visualisation and other work recently, we're interested in any suggestions of what you'd like to see (no promises that we'll implement such..).

We'd certainly like to use Chris' Medina experience, and perhaps part of the library itself, to build actual tools on top of them, but may not have the resources to do this ourselves. If anyone else is working on this, please let us know - we'd like to collaborate towards an integration of such tools with HaRe.

> - removing unused items from export/import lists

Some of you might recall my using Hugs for creating export/import lists from within Vim
http://www.cs.kent.ac.uk/people/staff/cr3/toolbox/haskell/Vim/
That is functionality we'd like to integrate into HaRe, as soon as we've made it module-aware. It depends a bit on what you mean by "unused" in connection with modules/libraries (refactoring gets even trickier when you do not have access to all clients of your module), but there are certainly some things we can and want to do in this area.
> - identifying functions which are only used in one place (and thus
>   candidates for inclusion in 'where' clauses)

Again, implicit export lists are likely to interfere (if everything is exported by default, you don't know whether it's going to be used), such functions often occur in intermediate stages of program redesign (you move them to the top level so that you may then start using them elsewhere), and such functions (or even unused ones) occur in libraries, where you don't want to hide/remove them.

A general problem/feature of refactorings is that there is no normal form for program designs, and no unique orientation for refactoring transformations: depending on the situation, you might want to apply them in either direction, so if you just tried to highlight places in your code to which refactorings are applicable, you might end up highlighting most of your code.. With those caveats, a general scheme might need some more thinking, but perhaps it makes sense to be able to choose a particular refactoring/orientation and look for places it might be applied to.

> - identifying functions defined in one module, but only (mainly?) used
>   by/depending on functions in another module, and thus candidates for
>   cross-module migration.

All the caveats for modules/libraries apply here, but this is a prime example of what one would want to use a metrics/visualisation tool for: visualise the dependencies between modules and the entities they define/use to get an impression of whether the current modularisation could be improved. Have a look at the module browser example at
http://www.cs.kent.ac.uk/people/rpg/cr24/medina/examples.shtml
(you'll need an SVG plugin, and experiment with its controls) - perhaps you can help to encourage Chris to release things like that?

> - perhaps similarly, identify function transformations that reduce the
>   parameter lists (i.e. exposed function interface)

Yes, the old correspondence principle for language design (between definitions and parameters) applies to refactorings as well: most refactorings for definitions have a sensible equivalent for parameters, and vice versa.

> and so on. (Does that make sense at all? :-)

not at all!-) Thanks for the suggestions, we'll try to keep them in mind, although I hope you can see that some of them are not straightforward to pin down.

Cheers,
Claus

___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
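[Editor's illustration, not from the message - a hypothetical instance of that correspondence: removeDef drops an unused definition, and its parameter-level equivalent, rmOneParameter, drops an unused parameter. Both versions are shown side by side; the refactorer would also adjust the call sites.]

```haskell
-- before: the second parameter is never used on the right-hand side
area :: Double -> Double -> Double
area r _unused = pi * r * r

-- after rmOneParameter: the unused parameter is gone, and call
-- sites drop the corresponding argument
area' :: Double -> Double
area' r = pi * r * r

main :: IO ()
main = print (area 2 0 == area' 2)  -- prints True
```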
Re: Haskell for non-Haskell's sake
[this seemed to be flowing along nicely, but now that the thread has moved from information to organisation and meta-discussion, I'd like to add a few comments, and an invitation]

On Saturday 30 August 2003 01:39, Hal Daume III wrote:
> I'm attempting to get a sense of the topology of the Haskell community.
> Based on the Haskell Communities Activities reports, it seems that the
> large majority of people use Haskell for Haskell's sake. This bias seems
> to exist not only in the Communities Activities reports, but

The bias is entirely in what readers of this mailing list are contributing to these reports. The editor has certainly been more than willing to include other applications of Haskell. I know there are lots of other interesting things going on out there, but have so far not been able to reach these people:

- some of them read/attend none of the Haskell list, c.l.f, or Haskell Workshop, so they simply never hear about these reports *unless _you_ forward the invitations to participate*
- some read the reports and the calls for contributions, but don't think of their work as particularly interesting to a wider community, or as not substantial enough

So, I do hope that those who've answered Hal's call (or have been thinking about answering) *will* contribute to the next edition of the Haskell Communities and Activities Report! I'll be expecting your email in my mailbox in the last 2 weeks of October :-)

> also in the Haskell mailing lists and in the Haskell-related events,
> such as the Haskell Workshop.

I'm not sure about this list, but as for the Haskell workshop, this year we had (http://www.cs.uu.nl/~johanj/HaskellWorkshop/cfp03.html):

- 4 presentations on applications of Haskell, to gaming, quantum mechanics, quantum computing, web site development
- 8 presentations on programming techniques, tools, debugging, and libraries
- 3 presentations on language design issues (strict language, records, lack of principal types)

How does this relate to the bias you see?
> However, the reactions to your inquiry about use of Haskell for
> non-Haskell purposes suggest that a significant group of language
> _users_ does actually exist, though their voice is not heard too often.

Indeed. So, please let yourself be heard!-) Or if you know of someone else who does interesting Haskell stuff, ask them to talk about their work. It is the collection of all these small fragments that makes the HCA Reports useful!

> When making your contribution is spending 10 minutes writing an e-mail
> (such as this one), there's no problem making your voice heard, and it's
> nice to think you're being an active member of a very nice and helpful
> community.

Just the way the HCA Reports are intended to work (if the 10-minute email lacks any information, the editor will get back to you).

As for Haskell Applications events: IMHO, Haskell has grown beyond that (is an application more interesting for the sake of being implemented in Haskell?). The point of applications is not the language, and some of us have already presented Haskell applications at domain-specific events, which is the way to go. Where there is interaction between the application and the language (tools missing, features inadequate), or where applications are facilitated by language and tool developments, the Haskell workshop and IFL are good places to present and discuss that work.

Potential benefits of a Haskell applications event would be:
- advertising: having several application presentations in a single venue
- contacts: getting to meet other Haskell "applicators"

The main problem: real Haskell developers are reluctant to talk about their work (and sometimes business interests are in the way). The comparison to JavaOne is misleading, methinks: are there enough professional Haskell developers who can afford to/have to attend such an event with the goal of keeping up-to-date with their main tool? And would there be enough presenters to make such a visit worthwhile for the professional participants?
Perhaps an add-on to the Advanced Functional Programming summer schools might work? I guess I'd prefer a Haskell quarterly magazine, with editing, but emphasizing use over academic criteria when evaluating submissions.

Cheers,
Claus

--
http://www.haskell.org/communities/
Re: Interaction and ambiguous type variables
- use Helium at this stage, switch to full Haskell systems later?-)

- more relevant on these two lists: people have been going on about teaching Prelude/Libraries for years. I understand that GHC at least has seen a lot of work on making the Prelude replaceable recently; one good way of using that would be a teaching package, optionally to replace the standard base.

- use Hugs, but don't use (overloaded!) show at this stage?

    Prelude> :set +u
    Prelude> []
    ERROR - Cannot find "show" function for:
    *** Expression : []
    *** Of type    : [a]
    Prelude> :set -u
    Prelude> []
    []

    Main> :set +u
    Main> turn Empty
    ERROR - Cannot find "show" function for:
    *** Expression : turn Empty
    *** Of type    : Tree a
    Main> :set -u
    Main> turn Empty
    Tree_Empty
    Main> turn (Node Empty [] Empty)
    Tree_Node Tree_Empty [] Tree_Empty

- before you ask the students to interact with any Haskell system, give a quick demo in the lecture, explaining what to do with error messages, and showing examples they are likely to encounter?

> Instructor: implement a test that checks whether a list is ordered.
> Student: ordered :: (Ord a) => [a] -> Bool

Now, this is a completely different shade of grey. You ask the students to use an overloaded operator, and the explicit type declaration suggests that they are aware of the basics. At this stage, polymorphic types should be understood, and basic error messages involving overloading should be explained (and demonstrated). Otherwise, use monomorphic types and non-overloaded operators - there is nothing wrong with beginners writing

    orderedIntegerList :: [Integer] -> Bool

or

    orderedBy :: (a -> a -> Bool) -> [a] -> Bool

(with non-overloaded predicates provided by you) until they have seen more about overloading, Haskell-style, and its interaction with polymorphism.

> Let me close by saying that I think it's important to address this
> problem, because it bites students again and again and again ...
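[Editor's illustration - only the two type signatures appear in the message; the definitions below are one possible way to fill them in.]

```haskell
-- monomorphic version: no overloading involved at all
orderedIntegerList :: [Integer] -> Bool
orderedIntegerList xs = and (zipWith (<=) xs (drop 1 xs))

-- parameterised version: the comparison is an explicit,
-- non-overloaded argument supplied by the instructor
orderedBy :: (a -> a -> Bool) -> [a] -> Bool
orderedBy le xs = and (zipWith le xs (drop 1 xs))

main :: IO ()
main = print ( orderedIntegerList [1,2,3]   -- True
             , orderedBy (<=) "ba" )        -- False
```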
Just for the record: things are not at all wonderful, and the current state does bite, so any and all improvements are welcome (it would be advisable to have a toggle option to get back to normal). I just wanted to point out that there are ways around some of the problems, and that we sometimes get ahead of ourselves because the advanced implicit features are oh so convenient and second nature to ourselves. Claus ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
Re: Representing cyclic data structures efficiently in Haskell
> What is the best way to represent cyclic data structures in Haskell?

There used to be some work on direct cyclic representations at UCL:

  Dynamic Cyclic Data Structures in Lazy Functional Languages
  Chris Clack, Stuart Clayman, David Parrott
  Department of Computer Science, University College London, October 1995

The link to the project, from http://www.cs.ucl.ac.uk/staff/C.Clack/research/report94.html is broken, but the paper seems to be online at http://www.cs.ucl.ac.uk/teaching/3C11/graph.ps

If you want to go for this direct cyclic representation, the interplay with lazy memo functions is also interesting:

  J. Hughes, Lazy memo-functions, Functional Programming Languages and
  Computer Architecture, J-P. Jouannaud (ed.), Springer Verlag, LNCS 201,
  129-146, 1985.

(lazy memo-functions remember previous input-output pairs based only on pointer info, which is sufficient to write functions over cyclic structures that, instead of infinitely unrolling the cycles, produce cyclic results) (not online?-(

Claus
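[Editor's illustration, not from the cited papers - for comparison, the usual direct representation in plain Haskell is "tying the knot": cyclic structures built by self-reference, which consumers traverse lazily. The Node type is hypothetical.]

```haskell
-- an infinite (cyclic) list: one cons cell whose tail is itself
ones :: [Int]
ones = 1 : ones

-- a two-node cycle: each node points at the other
data Node = Node { label :: Char, next :: Node }

nodeA, nodeB :: Node
nodeA = Node 'a' nodeB   -- a -> b
nodeB = Node 'b' nodeA   -- b -> a  (the knot)

main :: IO ()
main = do
  print (take 4 ones)                                 -- [1,1,1,1]
  putStrLn (map label (take 4 (iterate next nodeA)))  -- "abab"
```

Note that such cycles cannot be observed directly from inside Haskell (there is no pointer equality); the Hughes paper above is about recovering exactly that kind of sharing information via lazy memo-functions.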
Re: Network/Notwork?
> > - using hFlush does *not* seem to cure the problem??
>
> That's worrying, and it perhaps indicates that there's another problem
> somewhere. I just tried a small test and hFlush does appear to do the
> right thing, so do you think you could boil down your example to
> something small that demonstrates the problem? Does it happen only on
> Windows, or Un*x too?

Windows only, of course!-) On Solaris, I never even noticed there might be a problem (it seems to work even without acknowledgment or hFlush..).

I append my current test MyNetwork module - server and client are the main functions of the respective apps, nothing else going on, so it's very small, but for modified copies of some of the Network code (for use on windows, you'll want to change to the other definition of whatsWrong and uncomment c_getLastError).

On solaris (ghc version 5.04), this seems to work as shown. On win2k (ghc version 5.04), with error reporting on, I get:

    $ ./server.exe
    [1] 1388
    tcp: 6
    $ ./client.exe huhuadsfas
    tcp: 6
    CLIENT: huhuadsfas
    WSAGetLastError: 10054
    Fail: failed
    Action: hGetLine
    Handle: {loc=socket: 140,type=duplex (read-write),binary=True,buffering=line}
    Reason: No error
    File: socket: 140
    [1]+  Exit 1    ./server.exe

(on win98, not even the tcp number would be correct, hence the hardcoded 6). The 10054 error from the hGetLine is the "Connection reset by peer" message I mentioned earlier. So it seems that on windows, when the client terminates, the connection goes down and the server fails when trying to get the message. Whereas on solaris, the message gets through anyway.

Uncommenting the hFlush in the client makes no difference whatsoever. Uncomment the acknowledgement in client and server instead, and it works like a charm (although it does require an asymmetry between the two processes - I have to know which one lives longer..).

Over to you,
Claus

--
module MyNetwork where

import System(system,getArgs)
import IO(hPutStrLn, hGetLine, hClose, hFlush, hSetBuffering
         ,BufferMode(..), IOMode(..)
         ,Handle)
import Control.Exception as Exception
import Foreign
import Foreign.C
import Network hiding (listenOn,connectTo)
import Network.BSD(getProtocolNumber,getHostByName,hostAddress)
import Network.Socket(Family(..),SocketType(..),SockAddr(..),SocketOption(..)
                     ,socket,sClose,setSocketOption,bindSocket,listen,connect
                     ,socketToHandle,iNADDR_ANY,maxListenQueue)

server :: IO ()
server = withSocketsDo $ do
  s <- listenOn $ PortNumber 9000
  loop s
  where
    loop s = do
      l <- getInput s
      putStrLn $ "SERVER: "++l
      loop s
    getInput s = do
      (h,host,portnr) <- accept s
      hSetBuffering h LineBuffering
      l <- whatsWrong $ hGetLine h
      -- hPutStrLn h "ack"
      -- hClose h -- not a good idea?
      return l

client :: IO ()
client = withSocketsDo $ do
  args <- getArgs
  h <- connectTo "localhost" $ PortNumber 9000
  hSetBuffering h LineBuffering
  let l = unwords args
  putStrLn $ "CLIENT: "++l
  hPutStrLn h l
  -- hFlush h
  -- hGetLine h -- wait for acknowledgement
  return ()

{- only for winsock
foreign import stdcall unsafe "WSAGetLastError"
  c_getLastError :: IO CInt
-}
{- -- does this exist?
foreign import ccall unsafe "getWSErrorDescr"
  c_getWSError :: CInt -> IO (Ptr CChar)
-}

whatsWrong act = act
{-
whatsWrong act =
  Exception.catch act
    (\e -> do errCode <- c_getLastError
              --perr <- c_getWSError errCode
              --err  <- peekCString perr
              putStrLn $ "WSAGetLastError: "++show errCode
              throw e)
-}

listenOn :: PortID    -- ^ Port Identifier
         -> IO Socket -- ^ Connected Socket
listenOn (PortNumber port) = do
  proto <- getProtocolNumber "tcp"
  putStrLn $ "tcp: "++show proto
  let proto = 6 -- bug in ghc's getProtocolNumber..
  bracketOnError
    (whatsWrong (socket AF_INET Stream proto))
    (sClose)
    (\sock -> do
        setSocketOption sock ReuseAddr 1
        bindSocket sock (SockAddrInet port iNADDR_ANY)
        listen sock maxListenQueue
        return sock)

bracketOnError
        :: IO a        -- ^ computation to run first ("acquire resource")
        -> (a -> IO b) -- ^ computation to run last ("release resource")
        -> (a -> IO c) -- ^ computation to run in-between
        -> IO c        -- returns the value from the in-between computation
bracketOnError before after thing =
  block (do
    a <- before
    r <- Exception.catch
           (unblock (thing a))
           (\e -> do { after a; throw e })
    return r)

connectTo :: HostName -- Hostname -
Re: ANNOUNCE: GHC vesrion 5.04.3 released
> == The (Interactive) Glasgow Haskell Compiler -- version 5.04.3 ==
>
> We are pleased to announce a new patchlevel release of the Glasgow
> Haskell Compiler (GHC), version 5.04.3. This is a bugfix-only release.
> For all the changes since 5.02.3, see the release notes:
> http://www.haskell.org/ghc/docs/latest/html/users_guide/release-5-04.html

Actually, I find it difficult to extract the sometimes drastic differences between releases from that document (just one example: does this release include last week's bugfixes in the Network module?). How do I find out about the various bugfixes between releases, to decide whether or not to upgrade (and whether or not upgrading will help with specific problems)? After all, the reason for patchlevel releases is that you've fixed some bugs, so why be so secretive about what these fixes are?-)

Cheers,
Claus
Re: permissions on ghc Windows installer?
To what extent does the windows ghc install actually depend on being in a particular place or having registry entries? Here, we'd occasionally like to use ghc on public machines which don't have it installed, and on which we have no installation permissions. Those machines do have access to network drives, however, for which we do have write permissions (though we don't have much control over the drive letter these appear under). Initial experiments (renaming installation directory after install; running ghc from a remote copy of the installation directory on a network server) at least do not fail immediately, so that may be a route. Sigbjorn also mentioned a while ago that there's a command-line way to run windows installers, with additional options.. hth, Claus Thanks, I'll keep it in mind should I decide to revisit this. My experiences of getting per-user installs to work reliably with MSIs haven't been too positive. --sigbjorn - Original Message - From: Malcolm Wallace [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, February 17, 2003 10:24 Subject: permissions on ghc Windows installer? I have been attempting to solve some outstanding problems with building hmake, nhc98, and hat, using ghc under Windows. I don't have a Windows machine of my own, but our Dept has some public-access Windows 2000 Pro machines I can use. However, when I try to use the ghc .msi automatic installer package, it reports that I do not have sufficient privileges to install ghc in my own temporary disk area. Apparently it needs administrator privileges, which I do not have (and will not be able to acquire). Are administrator privileges really required? I notice that the installer for Cygwin (i.e. not ghc) explicitly allows you to choose whether to install for all users or just for me. Would it be difficult to allow the same choice in the ghc installer? 
Regards, Malcolm ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: awaitEval in Concurrent Haskell
Actually a mild variant of Claus's proposal seems to work out quite well. Another way to avoid the problems with types is to use a multi-parameter type class. Little example attached. Glad to be of help. The need to shadow the data types is a bit annoying, but then the whole generic bit would preferably be generated anyway. Template Haskell to the rescue, or Drift?-) .. the availability of data would only be detected if the demand that prompted its evaluation was in the context of the assertion-tagged expression. Yes? Yes - same issues with sharing as in Hood. You could tag the expression before sharing, though. And in your application, you might actually prefer to have that fine distinction: just because someone else has evaluated a shared expression far enough to allow your assertion to be evaluated, that doesn't mean that this assertion will be relevant to the program run. There is still something I don't understand about your specification: the assertions take on a post-mortem character - by the time the assertion fails, the main thread might have run into trouble already (after all, its evaluation drives the assertion). So while this would work, let l = print $ length $ assert "non-empty l" (not . null) l this might not be a good idea: let l = print $ head $ assert "non-empty l" (not . null) l At best, you get both the main-thread error and the assertion-thread failure message. That's why I was asking what you intend to do with the assertions. Cheers, Claus ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
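[Editorial aside on the length/head distinction above: a minimal, purely illustrative sketch. assertL is an invented stand-in for the assertion combinator under discussion, which really runs in a concurrent, data-driven thread; here the check simply fires when the tagged value is first demanded.]

```haskell
-- Hypothetical stand-in for the lazy-assertion combinator discussed
-- above: the predicate runs when the tagged value is first demanded,
-- not in a separate data-driven thread.
assertL :: String -> (a -> Bool) -> a -> a
assertL label p x
  | p x       = x
  | otherwise = error ("assertion failed: " ++ label)

main :: IO ()
main = do
  let l = [1,2,3] :: [Int]
  -- length demands the whole spine, head only the first constructor;
  -- either way, this eager check fires before the result is printed
  print (length (assertL "non-empty l" (not . null) l))
  print (head   (assertL "non-empty l" (not . null) l))
```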
Re: awaitEval in Concurrent Haskell
Colin [ccing GHC users in case there are other enthusiasts out there] | we briefly discussed the lack in | Concurrent Haskell of any way to set up a data-driven | thread -- one that works on a data structure D without | ever forcing its evaluation, only proceeding with the | computation over D as and when the needed parts get | evaluated by some other thread. I'm not sure whether I understand what you have in mind later on, but this first part sounds so remarkably like something I've seen before, that I'll take my chances. Do you remember Andy Gill's Hood from long ago? Inside its implementation, it had a very similar problem: writing information about an observed structure to some observation trace, but without forcing premature evaluation of the structure under observation. The trick used in Observe.lhs is roughly this (here for (,)):

observe label (a,b) = unsafePerformIO $ do
  sendObservation label "(,)"
  return (observe label a, observe label b)

with some position information and strictness mangling added, and the whole nicely wrapped into a monad (see Observe.lhs for details). Nothing happens as long as the thing under observation is not inspected by its context. Then, and precisely then, the unsafePerformIO kicks in to record a piece of information and to return the outer part of the thing, wrapping its components into fresh observers. Andy used this to store observations in a list, to be processed at the end of the program run, but you can just as well send the observations during evaluation, e.g., to a concurrent thread (with the usual caveats). In particular, the sequencing of information becoming available was detailed enough to inspire my own GHood;-) With no implementation slot free, something like this might get your student's project unstuck (e.g., replace sendObservation by assert)? After all, my favourite justification for unsafePerformIO is as an extension hook in the runtime system..
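[A self-contained toy version of the trick: observePair and the label are invented for illustration; the real Observe.lhs also tracks positions, handles strictness, and wraps the components in fresh observers.]

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Hood-style observation for pairs: nothing happens until the context
-- demands the pair; then the IO action records the observation and
-- hands back the components.
observePair :: String -> (a, b) -> (a, b)
observePair label (a, b) = unsafePerformIO $ do
  putStrLn ("observed (,) at " ++ label)
  return (a, b)

main :: IO ()
main = do
  let p = observePair "demo" (1 :: Int, 'x')
  putStrLn "nothing demanded yet"   -- runs before the observation
  print (fst p)                     -- demanding p triggers the trace
```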
Sorry if your intention was something else and I'm just trying to fit a solution to the problem. Even then, you might be able to adapt the trick to your application. Cheers, Claus ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Editor Tab Expansion
Having more than 10 years experience with "whitespace does not matter" languages, the only thing that drives me crazy is the layout rule.

so in 10 years of programming, you've never written a Makefile?-)

As far as I understand it, I have 2 options: 1. Use braces and semicolons and ignore the layout rules. 2. Change the settings in all my editors so that the code looks like the Haskell compiler sees it.

3. Change the settings in all your editors to get rid of those crazy hard-tabs! Good editors support this by enabling you to replace hard-tabs by soft-tabs (you type Tab, your editor inserts the right number of spaces, for a configurable notion of "right" = you can keep your favourite tab2space number and you and the compiler see the same code). (e.g., in Vim, type :help tabstop for a discussion of alternatives, my personal favourite is 2. Set 'tabstop' and 'shiftwidth' to whatever you prefer and use 'expandtab'. This way you will always insert spaces. The formatting will never be messed up when 'tabstop' is changed. You can still enter hard tabs when needed, e.g., in Makefiles, using CTRL-V Tab; there's also a useful help to make hard-tabs visible for debugging - :help 'list').

I got into the habit of hating hard-tabs before learning Haskell - in multi-developer projects, different layout styles are one thing, but combined with hard-tabs and different tab2space settings..

Hth, Claus ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
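[For reference, the Vim settings mentioned above collected into one place. A sketch for a .vimrc: the numbers are a matter of taste, and 'softtabstop' is an extra convenience not discussed in the thread.]

```vim
" soft tabs: the Tab key inserts spaces, the compiler sees what you see
set tabstop=8       " display width of any real hard tab
set softtabstop=2   " what the Tab key inserts
set shiftwidth=2    " indent step
set expandtab       " turn typed tabs into spaces
set list listchars=tab:>-   " make stray hard tabs visible (:help 'list')
" CTRL-V <Tab> still enters a literal hard tab, e.g. for Makefiles
```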
Re: incremental linking?
Hmm, I've never heard of linking being a bottleneck. Even GHC itself links in about 3-4 seconds here. One common problem is that linking on a network filesystem takes a *lot* longer than linking objects from a local disk. It's always a good idea to keep the build tree on the local disk, even if the sources are NFS-mounted. Unfortunately, we're not talking seconds, but coffee-breaks of linking times on our Sun (yes, the stuff is in the range of a large compiler - we're fortunate enough to be able to build on rather substantial third-party packages, think haskell-in-haskell frontend distributed over unusually many modules + strategic traversal support + our own code). And yes, I was worried about NFS-mounting first, especially since linking on our Sun takes even longer than on our PCs (long breaks instead of short ones;-), but moving .hi and .o to local tmp-space didn't speed things up (then again, it's a large machine, and our disk setup is likely to be more complex than I know - I'll have to check with our admins). Alternative a: use someone else's incremental linker, e.g., Sun's ild (ghc's -pgml option appears to have its own idea about option formatting, btw) - this doesn't seem to work - should it? You'd probably want to call the incremental linker directly rather than using GHC - what exactly does it do, BTW? What files does it generate? Calling it via GHC seemed the best way to ensure that it gets everything it needs (what else would be the purpose of -pgml?). According to docs, ild just keeps more information and space in the linked object, so that on re-linking, it can (a) check for file-modification times and (b) replace and partially relink only those contributing objects that have changed. http://docs.sun.com/db/doc/802-5693/6i9edqka5?l=zha=view Alternative b: convince ghc to link objects in stages, e.g., on a per-directory basis - gnu's ld seems to support at least this kind of partial linking (-i/-r). 
Not quite as nice as a fully incremental linker, but would probably save our day.. Yes, this works fine. We use it to build the libraries for GHCi. Presumably directed via Makefiles? Could this please be automated for ghc --make? Thanks, Claus ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Template metaprogramming for Haskell article.
I am reading the article template meta-programming for haskell and am wondering if 1) this is already implemented in ghc. 2) if not when this will be released answered in section 2.1 (GHC) and 3.6.1 (Template Haskell) of the Haskell Communities Activities Report (November 2002 edition): http://www.haskell.org/communities/11-2002/html/report.html#sect2.1 http://www.haskell.org/communities/11-2002/html/report.html#sect3.6.1 Claus PS. Are there readers of ghc-users who don't read the main haskell list (I didn't want to spam all Haskell-related lists with the announcements, but keep getting the feeling that not everyone is following the main list anymore)? ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: storing to a file
There's been mention of a Binary module; .. That said, there was also a post about using plain text. I tend to agree, except for certain cases. However, that is *not* to say that you should necessarily use Show/Read. | Actually, deriving binary would be a nice thing to have in general | - even more, a way to add your own deriving things from within | Haskell, although I have no idea how such a thing would work. In this context, DrIFT should be mentioned: http://www.haskell.org/communities/11-2002/html/report.html#sect5.2.1.1 http://repetae.net/john/computer/haskell/DrIFT/ Binary is already supported, I think (what about the new Binary variants?), but adding new instance derivation rules for your own external formats shouldn't be too difficult. So nobody has to stick to Show/Read for convenience reasons if there are other arguments against (such as those listed by Hal). Claus PS The Haskell ATerm Library might also be relevant? http://www.haskell.org/pipermail/haskell/2002-May/009589.html (the jDrift mentioned there is subsumed by the upcoming Drift2.0, merging various previous versions) ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
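[As a baseline for the "plain text" option mentioned in the thread, derived Show/Read already gives a working external format - slow, and fragile across type changes, but convenient. A minimal sketch with an invented record type and file name:]

```haskell
-- Plain-text persistence via derived Show/Read; Item and the file
-- name are made up for illustration.
data Item = Item { name :: String, count :: Int }
  deriving (Show, Read, Eq)

main :: IO ()
main = do
  let xs = [Item "apple" 3, Item "pear" 5]
  writeFile "items.txt" (show xs)
  ys <- fmap read (readFile "items.txt")
  print (ys == xs)   -- True if the round-trip preserved the data
```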
Re: 1 line simple cat in Haskell
main = mapM (>>=putChar) getCharS where getCharS = getChar:getCharS

How would you suggest to neatly insert the error handling code into ?

\begin{code}
-- some suggestions for a little zoo of cats
module Main where
import IO
import Monad

main0 = interact id
main1 = getContents >>= putStr
main2 = untilEOF (getChar >>= putChar)

catchEOF io = catch io (\e -> unless (IO.isEOFError e) (ioError e))
untilEOF io = catchEOF (sequence_ $ repeat io)

main = main2
\end{code}

Claus PS. I haven't kept up to date with buffering issues, and hugs/ghci may not like this kind of code.. ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: showsPrec: cui bono?
has anybody here used in a non-trivial way the showsPrec anti-parser? Isn't the idea to make things trivial while avoiding performance penalties? Perhaps: simple pretty-printing of abstract syntax trees? I often use it to get simple debugging output for complex internal data structures (first, use deriving; then, define showsPrec; if that's still not good enough, do some real thinking..). Anyway, this reminded me of a litte old hack of mine. Only trivial use of showsPrec, but perhaps you'll like it anyway?-) http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/R.hs As with anything else in my toolbox, no warranty for nothing.. Cheers, Claus --- cut here {- Representative thingies.. A little hack to pair values with string representations of their expressions. Useful if you want to explain what map (+1) [1..4] or foldr1 (*) [1..5] do, or if you want to demonstrate the difference between foldr (+) 0 [1..4] and foldl (+) 0 [1..4] Load this module into Hugs (Hugs mode) and type in some of these examples to get an idea of what I mean. Also try map (+) [1..4] This could be extended in various directions, but I wanted to keep things simple. I'm not convinced that extra complications would be worth the effort. 
Claus Reinke
-}

default (R Integer)

data R a = R {rep:: String
             ,val:: a
             }

instance Show (R a) where
  showsPrec _ a = showString (rep a)

instance Show (R a -> R b) where
  showsPrec _ f = showString ("\\x->"++(rep (f x)))
    where x = R{rep="x",val=error "variable"}

instance Show (R a -> R b -> R c) where
  showsPrec _ f = showString ("\\x y->"++(rep (f x y)))
    where x = R{rep="x",val=error "variable"}
          y = R{rep="y",val=error "variable"}

lift1 op a = R {rep="("++(rep op)++" "++(rep a)++")"
               ,val= ( (val op) (val a) )
               }

lift2 op a b = R {rep="("++(rep op)++" "++(rep a)++" "++(rep b)++")"
                 ,val= ( (val op) (val a) (val b) )
                 }

lift2infix op a b = R {rep="("++(rep a)++" "++(rep op)++" "++(rep b)++")"
                      ,val= ( (val a) `iop` (val b) )
                      }
  where iop = val op

instance (Num a,Show a) => Num (R a) where
  (+)    = lift2infix R{rep="+",val=(+)}
  (-)    = lift2infix R{rep="-",val=(-)}
  (*)    = lift2infix R{rep="*",val=(*)}
  negate = lift1 R{rep="-",val=negate}
  fromInteger a = (\fIa->R{rep=show fIa,val=fIa}) (fromInteger a)

instance (Eq a,Num a) => Eq (R a) where
  a == b = (val a) == (val b)

instance (Ord a,Num a) => Ord (R a) where
  a <= b = (val a) <= (val b)

instance (Enum a,Num a,Show a) => Enum (R a) where
  fromEnum = fromEnum.val
  toEnum a = R{rep=show a,val=toEnum a}
  enumFrom x = map toEnum [fromEnum x..] -- missing in Hugs Prelude..

___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
ANNOUNCE: HCA Report (3rd edition, November 2002)
The many contributors and I are happy to announce that the Haskell Communities and Activities Report (3rd edition, November 2002) http://www.haskell.org/communities/ is now available from the Haskell Communities home page in several formats: in PDF (with working links and index, yet printable) or, for those who have problems with the PDF, in HTML (using John's secret weapon again) and Postscript. A big thanks here to everyone who contributed information to the report! I hope you will find it as interesting to read as we did. For those of you who haven't heard of these reports before: The first edition of the HCA Report was released in November 2001, with the goal of helping to improve the communication between the various groups, projects, and individuals working on or with Haskell. The idea of these reports is simple: Every six months, a call goes out to all of you to contribute brief summaries of your own area of work. Many of you respond (eagerly, unprompted, and well in time for the deadline;-) to the call. I then collect all these into a single report and feed it back to this very mailing list. And when we try for the next update in six months, you might want to add your own work, project, research area or group as well. So, please, put that item into your diary now: End of April 2003: target deadline for contributions to the May 2003 edition of the HCA Report Enjoy (and communicate;-)! Claus Reinke -- Computing Laboratory University of Kent at Canterbury http://www.cs.ukc.ac.uk/people/staff/cr3/ ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Could not unambiguously deduce..
I'm not actually sure whether this is a bug, but if it isn't, could someone please enlighten me about what is going on?-) [the following is a simplified version of a problem with instances generated by Drift, for Strafunski, where T would be the Drift target, C would be Term, and expl(ode) would return TermRep]

{-# OPTIONS -fglasgow-exts -fallow-overlapping-instances -fallow-undecidable-instances #-}
module Main where

data T a = D [a]

class C t where
  expl :: t -> String
  expl x = "default"

instance C String where expl s = "String"
instance C a => C [a] where expl l = "[a]"

instance (C a {- ,C [a] -} ) => C (T a) where
  expl (D xs) = expl xs

main = putStrLn $ expl "hi"

As is, both ghc and hugs reject the program, whereas both accept it with the extra constraint in the C (T a) instance.. Now, I think I can see how the right-hand-side expl could come either from the C String or from the C [a] instance - hence ghc's message:

$ ghc --make Tst.hs
c:\ghc\ghc-5.04\bin\ghc.exe: chasing modules from: Tst.hs
Compiling Main ( Tst.hs, ./Tst.o )

Tst.hs:15:
  Could not unambiguously deduce (C [a])
    from the context (C (T a), C a)
  The choice of (overlapping) instance declaration
    depends on the instantiation of `a'
  Probable fix:
    Add (C [a]) to the class or instance method `expl'
    Or add an instance declaration for (C [a])
  arising from use of `expl' at Tst.hs:15
  In the definition of `expl': expl xs

What I don't understand, however, is why adding that extra constraint helps in any way? Shouldn't the addition of new things in the context only make more options available? Why does it make some of the existing, ambiguous options go away? Confused, Claus PS. Perhaps related, but why does Hugs seem to ignore the C a constraint in the context of the original version?
$ hugs -98 Tst.hs
__   __ __  __  ____   ___      _________________________________________
||   || ||  || ||  || ||__      Hugs 98: Based on the Haskell 98 standard
||___|| ||__|| ||__||  __||     Copyright (c) 1994-2001
||---||         ___||           World Wide Web: http://haskell.org/hugs
||   ||                         Report bugs to: [EMAIL PROTECTED]
||   ||                         Version: December 2001 _________________________________________

Hugs mode: Restart with command line option +98 for Haskell 98 mode

Reading file "c:\Program Files\Hugs98\\lib\Prelude.hs":
Reading file "Tst.hs":
Type checking
ERROR Tst.hs:15 - Cannot justify constraints in instance member binding
*** Expression    : expl
*** Type          : C (T a) => T a -> String
*** Given context : C (T a)
*** Constraints   : C [a]

Prelude ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
Re: Could not unambiguously deduce..
| What I don't understand, however, is why adding that extra
| constraint helps in any way? Shouldn't the addition of new
| things in the context only make more options available? Why
| does it make some of the existing, ambiguous options go away?

With the extra constraint, the instance decl can build a C (T a) dictionary from a (C a, C [a]) dictionary, by choosing the expl method from the C [a] dictionary it is passed (ignoring the C a one). That defers the choice of which of the overlapping instance decls we are going to use. Perhaps it'll be postponed to a point at which 'a' is known, in which case the choice is easy. The point is that we don't want to have to choose between the C String and C [a] instance decls until we know enough about 'a' to choose the right one.

Yes, but isn't that an implementation problem surfacing at the language level? All the dictionaries needed to delay the decision to the point of use could also be made available when compiling the original program, no? After all, that's the reason why there's an ambiguity in the first place. Not to mention that in the case for which there is an overlap, the String instance will always be chosen as the more specific one..

Claus

| data T a = D [a]
|
| class C t where
|   expl :: t -> String
|   expl x = "default"
|
| instance C String where expl s = "String"
| instance C a => C [a] where expl l = "[a]"
|
| instance (C a {- ,C [a] -} ) => C (T a) where
|   expl (D xs) = expl xs
|
| main = putStrLn $ expl "hi"
|
| As is, both ghc and hugs reject the program, whereas
| both accept it with the extra constraint in the C (T a)
| instance.. Now, I think I can see how the right-hand-side
| expl could come either from the C String or from the C [a]
| instance - hence ghc's message:
|
| $ ghc --make Tst.hs
| c:\ghc\ghc-5.04\bin\ghc.exe: chasing modules from: Tst.hs
| Compiling Main ( Tst.hs, ./Tst.o )
|
| Tst.hs:15:
|   Could not unambiguously deduce (C [a])
|     from the context (C (T a), C a)
|   The choice of (overlapping) instance declaration
|     depends on the instantiation of `a'
|   Probable fix:
|     Add (C [a]) to the class or instance method `expl'
|     Or add an instance declaration for (C [a])
|   arising from use of `expl' at Tst.hs:15
|   In the definition of `expl': expl xs
|
| Confused,
| Claus
|
| PS. Perhaps related, but why does Hugs seem to ignore the
| C a constraint in the context of the original version?
|
| $ hugs -98 Tst.hs
| ...
| Reading file "Tst.hs":
| Type checking
| ERROR Tst.hs:15 - Cannot justify constraints in instance member binding
| *** Expression    : expl
| *** Type          : C (T a) => T a -> String
| *** Given context : C (T a)
| *** Constraints   : C [a]
|
| Prelude

___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
(no subject)
I'd argue that -package is a global option and should stay that way. The only reason you might want to disable -package for certain modules and not others is if you want to do some tricks with module shadowing - and this definitely isn't supported in GHC. You should pass the same -package to every compilation. Funny, I wasn't actually thinking of hiding -package (effectively giving it a scope) when I wrote that. AFAIC, it would be fine for GHC to collect all the -package options from the source files it wants to compile and then use the same superset of package options for all of them (*). But it would be helpful if each source collection could include the information ghc needs to compile it. So if I have sources A that happen to need -package HOpenGL and sources B that happen to need -package lang, etc., I could simply document those dependencies in the sources in a way ghc would understand. And if I have a project C using both A and B, and I need to modify anything in A or B, I could still simply let ghc --make C figure out what to do by just telling it where to find A and B. Whereas currently, I have to go back and figure out what packages A or B need to be compiled with.. (or not use ghc --make, before anyone else suggests that;-). I think -i is a global option in the same sense as -package, though. The library options -l and -L don't really matter, since they're only used at link time and you wouldn't want to put them in OPTIONS anyhow. Same here, for non-package dependencies. Claus (*) I didn't want to have global supersets of options for language extension flags, but those seem to be dynamic anyway, and often, their effects spill over to anything wanting to use the modules that use language extensions. ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
SURVEY - Haskell editing
As some of you know, our refactoring project here at UKC has just started (at long last). Of course, it will take some time before this leads to concrete artifacts in terms of detailed refactoring catalogues and prototype tools, but we would like to know a bit more about the environment into which any such refactoring tools would have to fit. After all, you might have different development habits and tools than we do;-) So it would be great if you could spare a few minutes to fill in the following survey and send it back to [EMAIL PROTECTED] (reply-to should be set?). We'll summarize to this list. We will try to keep you involved in later stages of the project as well, e.g., to ask for input on what refactorings you'd find most useful, but if you'd like to share your thoughts on this earlier, feel free to email us. Thanks for your input, Claus Refactoring Functional Programs http://www.cs.ukc.ac.uk/research/groups/tcs/fp/Refactor/ --July 2002-- SURVEY - Haskell editing 1. When developing Haskell programs, - which OS-platforms do you work on? - which Haskell implementations are you using (please indicate rough version info: whatever, latest source release, latest binary release, latest CVS, latest stable version, ..)? Hugs98 GHC nhc98 .. - to what extent do you use (which?) features outside Haskell 98? class/instance-restrictions existential types arrow notation .. - what editors, IDEs or source-manipulation tools are you using? - editors Vim Emacs .. - tag-file generators HaskTags .. - documentation generators Haddock .. - pre-processors DrIFT cpp .. - revision control manual CVS bitkeeper .. - .. 2. To what extent are your editors/IDEs Haskell-aware? - what Haskell-specific functionality do they offer? syntax highlighting pretty-printing tag-files permitting jump-to-definition syntax-aware movements (to next definition/expression/..) .. - what non-Haskell-specific editing functionality do you find most helpful when developing Haskell programs? 
matching of parentheses macro recording and playback (what do you use these for?) integration with external tools (which ones?) .. - what Haskell-specific editing functionality do you you miss most? .. - what interface options does your editor/IDE offer for integration of external tools (brief explanations or pointers to detailed documentations would be great)? coarse-grained (piping text through formatters, spell-checkers; preparation of tag files, ..) fine-grained (bi-directional access, i.e., editor can use external functionality, and external tools can use editor functionality) scripting (to weave it all together) ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: unsafePerformIO around FFI calls
I'm curious exactly what is safe and what is unsafe to wrap unsafePerformIO around when it comes to FFI calls. Here's a simple test: Could you imagine an alternative implementation of the same API in pure Haskell? (Don't consider efficiency or effort required to write the implementation, just whether it can be done.) If so, then it is ok to use unsafePerformIO and the ffi to implement the API instead. That looks simple, but is not unproblematic: - just because I can *imagine* a pure implementation of *the API I want*, that doesn't mean that the foreign implementation in question *is* pure (*the exact API it implements* might have some extra features or assumptions, such as single-threaded use). - so the test implicitly depends on a very strict interpretation of the same API, including all observable side-effects, and that isn't quite as simple as you make it. If it fails that test, it is incredibly unlikely that it is ok and a proof that it is ok is likely to be pretty complex - maybe worth a PLDI paper or some such. - just because I can't implement something in pure Haskell doesn't mean that it has to be impure, or unsafe (it might fall into the gaps of static typing, for instance). So it seems to boil down to the usual argument about unsafePerformIO, no matter whether the IO action is foreign or not: by casting from IO a to a, you *assert* that the IO part of your computation is not observable, and it is *your obligation to show* that this assertion is correct. You don't have to do a formal proof, but while attempting an informal one, you might find *side-conditions* (on the usage of your function) on which your assertion depends. The most valuable part of the proof attempts is to *identify and document*, *for your particular case*, those side-conditions. 
In other words, to become aware of exactly what is `safe' and what is `unsafe' to wrap unsafePerformIO around!-) Can your implementation defend itself against all attempts to discover that its a is really an IO a? If so, you're safe (or feel so, at least;-). If not, document the gaps in your defense (they might not arise in typical or intended use). If there are too many or too serious gaps, you're definitely unsafe. Beware of optimizing compilers or other meaning-preserving program-transformation tools. The more you want your tools to make use of the semantics of your programs, the less you want to cheat on those semantics (is your foreign implementation still pure in a multi-threaded setting?). It's not called unsafePerformIO for nothing - using it means trust me, I know what I'm doing. Do you? As for FFI-specifics, the problem seems to *find out* what the *precise API* of your foreign function is (including side-effects), and what *extra impurities* the FFI wrapping might impose. Given that the foreign function is imported as IO a, the FFI wrapping would be permitted to use IO, e.g., during marshalling. Does it? Cheers, Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
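[A minimal counter-example, with all names invented, of a definition that cannot "defend itself": the IO behind unsafePerformIO is observable because two calls at the same argument can give different answers - and, depending on optimization, the compiler may even share the two calls, which is precisely the fragility described above.]

```haskell
import Data.IORef (IORef, newIORef, modifyIORef, readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A counter smuggled out of IO; claiming tick is pure is our
-- assertion, and it is false: the side effect is observable.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

tick :: () -> Int
tick () = unsafePerformIO (modifyIORef counter (+1) >> readIORef counter)
{-# NOINLINE tick #-}

main :: IO ()
main = do
  print (tick ())  -- a pure function would return the same value
  print (tick ())  -- for the same argument; here they typically differ
```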
Re: HGL with GHC in Win32
The size problem is traceable to [Greencard's ffi code generation] I took a look at this and found that Sigbjorn had fixed it some months ago Would that be in the released Greencard or only in the cvs version? Building GDITypes with this command: green-card --target=ffi -i. --include-dir ../../green-card/lib/ghc -i . GDITypes.gc generates correct code for prim_RGB (the problem Hal Daume ran into). and for Win32Window.adjustWindowRect So the Win32 sources could now be regenerated and future builds and releases wouldn't run into these problems with Win32 or HGL anymore? --- Just as long as someone remembers to do the re-greencarding --- in cvs: hslibs/win32 before the next release, please, because that won't happen automatically anymore (but then I may be wrong about this - I don't understand the fptools Makefiles, yet;-). Great - thanks, Claus PS. I still have to figure out how to make hslibs/win32 in isolation, so I won't be able to test confirm whether this solves all the HGL/GHC/Win32 problems for a while (e.g., there was the problem of polygon creation running out of resources, see Friday's email; that could easily be related to the same ffi problem, but it would be good to check) - anyone else in a better position for testing? ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: HGL with GHC in Win32
--- Two suggestions for CVS maintainers: 1. we could make better use of the cvs.haskell.org home page: - add a link to the cvs-web interface proper - add an overview of what is where in the cvs tree (probably best generated automatically from brief description text files in each of the major cvs directories) - add a link to ghc build guide (which has more complete how to use this cvs info, and more info about the structure) - add a link to ghc commentary and - add a link to cvs.haskell.org from the top of www.haskell.org/libraries/ 2. introduce parts of hslibs as cvs modules (see cvs(5), so that one can say cd fptools; cvs co win32 and get all the relevant subdirectories and tools. The current suggestion, to do cd fptools; cvs co hslibs/win32 will only give the win32 subdirectory of hslibs, but not necessarily all files that are needed to do a make there.. --- Anyway, is there a useable win32 green card input package hanging around somewhere (the link to glasgow is dead it seems; CVS has moved the gc-sources aside and has never been very modular - is there a way for me to take the win32/gc-src/ directory from CVS and make it, without having to prepare the various other parts of fptools that fptools-Makefiles tend to depend on so merrily?-(. I have no idea how to do it. GreenCard I can do (details attached) but building Win32 for GHC is beyond me. GreenCard is no problem - there's that nice installer, and since there is no HOpenGL without GreenCard so far, I had to install GreenCard anyway;-) Unless there have been relevant changes that would force me to use the cvs version? Shouldn't all those IORefs (e.g., for the list of windows) be MVars in the GHC version? We did that for the X11 version - Ok, I tried that change - no difference. I guess the Win32 version has fallen behind. Yet in other aspects, the SOE version is ahead.. Would it be possible to resynch the various versions? 
Or, even better: once the various ffi implementations have settled down to the latest spec, and <dream>HOpenGL works with all Haskell implementations</dream>, perhaps HGL could be ported to HOpenGL - to eliminate all further portability issues (missing features under X, etc.)..

Personally, I think I'll try to stick with HOpenGL for both 3d and 2d, but for teaching purposes, there would need to be a simple interface, a la HGL, and it would have to work with Hugs as well. But having to switch interfaces when moving from hugs to ghc would keep students from doing that. So perhaps we need to keep HGL!-)

What really needs doing, though, is to introduce a single semaphore to control all access to all parts of the HGL data structures - atomically accessing each part of the data structures doesn't necessarily protect all the system invariants. I'll leave that to others:)

> > For a start, what about NoInline pragmas for global IORefs?
> Yes, that should be done.

Tried that as well - no difference (at least not at this stage, that is, but both the MVars and the pragmas look like the right thing to do overall).

Btw, you use yield before a potentially blocking call - would the latter run into the problems (default-configured) ghc has with blocking foreign calls (see multithreading support in the ghc commentary)? Probably not, because it works with Hugs, which has only non-preemptive concurrency, but I thought I'd better ask.. Or is this what you referred to with GreenCard's safecode?

> (Hmmm, maybe we should push harder on a portable language extension for defining global, mutable, monomorphic IORefs.)

The current hack kind of seems to work, yet discourages wide usage - the problem with a language extension is that no-one has suggested a good solution yet, and no-one wants to freeze a hack into the language (parameterised modules would be my personal favourite, even though that's not quite as convenient as the current hack).
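To make the suggestion concrete, here is a minimal sketch of the kind of change discussed: a global list of windows guarded by an MVar, with a NOINLINE pragma on the usual unsafePerformIO hack. This uses today's hierarchical module names, and Window is a hypothetical placeholder - the real HGL types differ.

```haskell
import Control.Concurrent.MVar
import System.IO.Unsafe (unsafePerformIO)

-- hypothetical stand-in for HGL's window type
type Window = String

-- NOINLINE keeps the compiler from duplicating the unsafePerformIO
-- expression, which would silently create several "global" variables
{-# NOINLINE windows #-}
windows :: MVar [Window]
windows = unsafePerformIO (newMVar [])

-- modifyMVar_ holds the lock for the whole update, so concurrent
-- registrations cannot lose each other's writes (unlike a readIORef
-- followed by a writeIORef)
addWindow :: Window -> IO ()
addWindow w = modifyMVar_ windows (return . (w:))
```

As the thread notes, per-variable atomicity alone doesn't protect invariants that span several variables; a single semaphore around the whole structure would be the stronger fix.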
> > But why doesn't the green card input use %safecode to eliminate that potential error source?-)
> Because I wrote the Win32 bindings for Hugs where %safecode and %code mean the same thing. Really, GreenCard should be changed so that %code is safe by default and you have to write %unsafecode to get the faster version. (That's the way it is in the ffi.)

Either change looks like a straightforward substitution?

> > And how would cross-ffi garbage-collection issues affect window parameters at startup?
> Not sure what you mean - I can't think of any new issues in GC which aren't present in Hugs.

I was just trying to understand what safecode does, and the GreenCard docs say it's about foreign code that might trigger GCs. Which is all right, but at startup (when the window size parameters are ignored) there shouldn't be any GCs, so the question was how the lack of safecode is supposed to cause the effects I see.
Re: HGL with GHC in Win32
Thanks for looking at this Claus. no problem - I'm kind of nearby, and I'm not promising anything (unless just looking at it is going to help;-). - the full-screen titlebar effect with the SOE variant suggests some window-handling incompatibility, if it wasn't for Hugs and GHC using the same graphics source code.. (is there a difference in the win32 bindings for Hugs vs GHC?) [This mail got rather long. My best guess is that yield on GHC doesn't yield as thoroughly as on Hugs and that the Win32 library isn't being greencarded with the --safe-code flag. (Either by itself would probably be fine, it's having both at once that is probably killing you.)] According to the docs, yield does as yield should, and the unix version (ghc/hgl/X) seems to work.. Btw, while looking at code with scheduling-related issues in mind, other questions appeared to me: Shouldn't all those IORefs (e.g., for the list of windows) be MVars in the GHC version? For a start, what about NoInline pragmas for global IORefs? But, at least on unix, these don't seem to be the problem.. Unless there are any other suggestions, I could give the green card --safe-code a try. But why doesn't the green card input use %safecode to eliminate that potential error source?-) And how would cross-ffi garbage-collection issues affect window parameters at startup? Anyway, is there a useable win32 green card input package hanging around somewhere (the link to glasgow is dead it seems; CVS has moved the gc-sources aside and has never been very modular - is there a way for me to take the win32/gc-src/ directory from CVS and make it, without having to prepare the various other parts of fptools that fptools-Makefiles tend to depend on so merrily?-(. Alternatively, is anyone out there using ghc's win32 binding? Presumably, that kind of problem would show up in other uses. Claus PS. Is there a way to get system call traces on windows/cygwin? 
___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Lists representations (was: What does FP do well? (was ...))
Long away and far ago (or something like that;), there was a discussion on Lists implemented as arrays rather than linked structures, during which Jerzy Karczmarczuk commented: What bothers me quite strongly is the algorithmic side of operations upon such objects. Typical iterations map- (or zip-) style: do something with the head, pass recursively to the tail, would demand intelligent arrays, with the indexing header detached from the bulk data itself. The consumed part could not be garbage collected. In a lazy language this might possibly produce a considerable amount of rubbish which otherwise would be destroyed quite fast. The concatenation of (parts of) such lists might also have very bad behaviour. Can you calm my anxiety? Jerzy Karczmarczuk The reason I wanted to reply is that I can offer one data point on this. An even longer time ago, there were the various reduction systems developed in Kiel, implementing the Kiel Reduction Language KiR, a variant of Berkling's reduction languages (the Berkling pointed to in Backus' Turing Award Lecture). KiR lacked lots of useful features Haskell has, and Haskell's implementations still lack lots of useful features KiR's had. I dearly miss those features, but that is not the topic here (I don't know whether any of the systems still install or even run, but see the Manual at http://www.informatik.uni-kiel.de/~base/ for more info). The topic here was list representations. KiR's implementations moved from interpreted graph-reduction over compiled graph-reduction with a code interpreter to compiled graph-reduction with compilation via C, all more or less with the same high-level front end. All of these represented lists as vectors (KiR was dynamically and implicitly typed, btw), and the memory was divided into an area for fixed-size descriptors pointing to each other or into the second area, the heap, for variable-sized data blocks. 
The descriptor area was reference-counted and simply reused free space (most KiR variants were call-by-value), the other area needed memory compactification when space grew fragmented. A list's elements (or pointers to their descriptors) went into a contiguous block in the heap, and the descriptors made it possible to share subsequences of elements between different lists (descriptors were large enough to hold a pointer to the start of the sequence and its length). Quite as Jerzy suspected. Supported operations included both array and list operations, append required the allocation of a new heap block and copying of *both* lists, but was provided as a primitive (the standard approach for systems that started out as interpreters: a good mix of efficient primitives and interpreted user defined constructs). As others have pointed out, this looks rather inefficient, especially for longer lists, so when we set out to make measurements for a JFP paper [1], comparing with the likes of ghc, we expected to be beaten, but hoped to be not too far away, at least with the latest via-C implementation.. Benchmarks are always difficult, but especially so between so different languages: in KiR, we could easily and correctly execute programs that in Haskell, either wouldn't even compile, or wouldn't terminate, or wouldn't show any result (with similar problems the other way round). And after adapting to the common subset of algorithms, a translation to Haskell might mean that a complex program execution might return immediately, as the compiler and runtime system lazily determined that none of it was needed for the program output (compiled Haskell programs report almost no reduction results, only explicit program output, or show-able results). 
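For illustration only - this is a Haskell sketch of the representation idea, not KiR's actual implementation: a descriptor holds an offset and a length into a shared element block, so head and tail are cheap and subsequences are shared, while append must copy both operands into a fresh block, exactly as described above.

```haskell
import Data.Array

-- a "descriptor": offset and length into a shared element block
data VList a = VList { off :: Int, len :: Int, block :: Array Int a }

fromList :: [a] -> VList a
fromList xs = VList 0 n (listArray (0, n-1) xs) where n = length xs

-- head/tail are O(1): tail just bumps the offset, sharing the block;
-- note that the consumed prefix cannot be reclaimed while any tail
-- descriptor keeps the block alive (Jerzy's garbage worry)
vhead :: VList a -> a
vhead (VList o _ b) = b ! o

vtail :: VList a -> VList a
vtail (VList o n b) = VList (o+1) (n-1) b

-- append copies *both* operands into a new block: O(n+m)
vappend :: VList a -> VList a -> VList a
vappend (VList o1 n1 b1) (VList o2 n2 b2) =
    VList 0 (n1+n2)
          (listArray (0, n1+n2-1)
                     ([b1 ! i | i <- [o1 .. o1+n1-1]] ++
                      [b2 ! i | i <- [o2 .. o2+n2-1]]))

toList :: VList a -> [a]
toList (VList o n b) = [b ! i | i <- [o .. o+n-1]]
```

The trade-off is just as the benchmark story suggests: indexing and traversal are fast and cache-friendly, but append-heavy algorithms like naive quicksort pay for the copying.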
With all these preliminaries and caveats, and the standard disclaimer that all benchmarks are useless, but interesting, the relevant benchmark is the infamous pretty quicksort for some 4000 elements (qusort in the paper - lots of finite lists, traversals, fragments and appends, just like the typical Haskell program written without any concern for efficiency; Haskell programs concerned with efficiency tend to look rather different). To our astonishment, even the code interpreting implementation (which should otherwise be in the ballpark of Hugs) outperformed ghc, hbc, and Clean on this example (call-by-value also played a role: compiled sml was in the same area as compiled KiR, but both only slightly faster than code-interpreted KiR, so data representation and primitives seemed to play the main role). This prompted us to include Augustsson's sanitized variant of quicksort (qusortbin in the paper - from the hbc libs) as well, which gave the results everyone expected (it substantially modifies the algorithm to a profile better supported by the current list representation, e.g., no appends). [and before anyone accuses me of advocating functional quicksort: the naive quicksort is useless, and even the real one isn't the best choice in many cases;-] But the moral for the current discussion: a
Re: fold on Monad?
> Suppose I have a task I want to do to each line of a file, accumulate a result and output it, .. I'd like to write something similar to
>
>   main = do res <- foldX process_line initial_value getLine
>             print res
>
> I feel this ought to be straightforward -- the structure is obviously some sort of fold, but I shouldn't have to use a list -- so I must be missing something obvious. What is it?

foldr, foldM, etc. derive a recursive computation from the recursive structure of an input list, so you have to feed one in. If you want to bypass the list, you could use IO-observations (getLine, isEOF) instead of list observations (head/tail, null):

  import IO

  foldX f c = catch (do l <- getLine
                        r <- foldX f c
                        return $ f l r)
                    (\e -> if isEOFError e then return c else ioError e)

  main = foldX (:) [] >>= print

Whether that is a real fold, or what the real fold/unfold would look like, I leave to others;-) The same goes for optimization.

Hth, Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
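For comparison, a sketch of the same fold written with an explicit end-of-input test (isEOF) instead of exception handling, using today's System.IO module name; like the catch version, it is right-associated and not tail-recursive:

```haskell
import System.IO (isEOF)

-- fold over the lines of standard input, right-associated like foldr
foldLines :: (String -> b -> b) -> b -> IO b
foldLines f c = do
  eof <- isEOF
  if eof
    then return c
    else do l <- getLine
            r <- foldLines f c
            return (f l r)

main :: IO ()
main = foldLines (:) [] >>= print
```

With (:) and [] as arguments it simply collects the input lines; an accumulating operator turns it into the process-and-accumulate loop the question asked for.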
HCA Report (May edition) - LAST CALL
Dear Haskellers, over the last weeks, more and more of you have (been;-) volunteered to provide short summaries of recent and planned activities in your area of work with or on Haskell (for details, and for the current status, see http://www.haskell.org/communities/). Those summaries, as far as they haven't reached me already (special thanks to those early contributors!-), are due in this week (rather sooner than later), and after putting everything together, the second edition of the collective report should get out to this list sometime next week. The web page shows which areas are covered, and which summaries have reached me. ** This is the FINAL CALL for CONTRIBUTIONS to this edition. ** So, please, have a last look at the web page, and check for: missing topics? If your favourite Haskell topic isn't covered yet, get in touch with me as soon as possible (preferably by sending in a summary, but at least by naming possible contacts). This also applies if your topic is listed, but no contact has come forward yet! If you've released your latest and greatest Haskell software during the last six months, and want it to be represented, NOW is the last opportunity to send in a paragraph or two. micro-reports? late-breaking news?-) If you don't have any big project/group/area to report on, you might still want to let the Haskell community know that you're there and what you are working on. Just drop me a line or two on your work, and a link to your home page. missing reports? If you've promised a report, remember to send it to me!-) missing emails? If I've asked for your help, could you please let me know whether or not you'll be able to provide a report?-( Cheers, Claus -- Haskell Communities and Activities Report (May edition) All contributions are due in this week! http://www.haskell.org/communities/ ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: HCAR (May edition) - more suggestions
> Haskell Communities and Activities Report
>   http://www.haskell.org/communities/
> The current plan is to get contributions in by the end of April, and to get the collective report out early next month (*).

I wrote "new suggestions are welcome". Several Haskellers have already contacted me about applications of Haskell (Haskell as a mere tool for someone's work or research) this week, and during these exchanges, two further interesting topics have emerged:

1. Tips, tricks & tutorials

It seems that some Haskellers have documented their own hard-won experience to help others. They have been working on web pages, short papers, tours, and tutorials touching on introductory examples of monads & co, giving guided tours and explanations of prelude, libraries & syntax, or tips about programming and resource tuning, even explaining the internals of GHC (scary;), or interpreting Hugs error messages. If you have, or know about, such a valuable resource, please send me a link and a brief description. Ultimately, all these things should be linked from the Haskell bookshelf (which is not limited to books:): http://haskell.cs.yale.edu/bookshelf/ but many resources are not yet linked in there (wasn't there some group of pragmatic programmers choosing Haskell as their language to be learned this year? And another group working on a new tutorial?), so we'll have a section on this in the next HCA Report.

2. Haskell publication overview

We already had a preview of the JFP special issue on Haskell last time, but we should make this a permanent section. So, if during the last 6 months, you've finished your Haskell-related thesis, or published your book, or if some Haskell-related conference proceedings have appeared, or you know about any other relevant publications during that period, please let me know. Btw, we don't have good coverage of Haskell research groups yet (apart from those who are covered by reports on their software releases).
And if you know Haskellers who are on other lists, but don't read the main list here, please feel free to point them to the report's home page. I'm also trying to contact folks who have released software recently, but please don't wait for me to get round to you. Just send me an email if you're willing to contribute a brief report on your great new tool! Please keep those contact offers and summaries coming!-) Claus PS. I seem to have problems getting answers from some people I've contacted about the report. I hope I'm not in people's kill-files yet, or that I'm running into your anti-spam mechanisms?-) The emails don't bounce, and there's no vacation message either, so if you were expecting me to contact you personally, but haven't heard from me, could you please get in touch? Thanks. ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
HCAR (May edition) - call for contacts and contributions
Dear fellow Haskeller, it is time again - for contributions to the second edition of the Haskell Communities and Activities Report http://www.haskell.org/communities/ The current plan is to get contributions in by the end of April, and to get the collective report out early next month (*). The general idea is to update all existing summaries, to drop any topics that didn't have any activity for two consecutive 6-month periods, and to add any new developments or topics for which no-one contributed summaries to the first edition, while trying to keep the whole under 25 pages. As you can see on the report's home page, many of last time's contributors have already volunteered to provide updates of their reports, including coverage of the major Haskell implementations and Report Addenda. But where you don't yet see contacts listed for your own subject of interest, you are welcome to volunteer, or to remind your local community/project team/mailing list/research group/etc. that they really ought to get their act together and let the Haskell community as a whole know about what they've been doing!-) A typical summary report would be between 1 and 3 paragraphs (what's it about? major topics and results since the last report? current hot topics? major goals for the next six months?) plus pointers to material for further reading (typically to a home page, or to mailing list archives, specifications and drafts, implementations, meetings, minutes,..). New suggestions for current hot topics, activities, projects, .. are welcome - especially with names and addresses of potential contacts. Two particular new suggestions I'd like to try this time are appended below. Thanks, Claus (*) Last time's experience showed that it takes an arbitrary number of weeks to write a nice summary in the last hour, so I hope that 2 weeks give enough flexibility to find a spare hour, or to coordinate the summary with any planned releases. 
-- Two new suggestions:

1. Applications: what are you using Haskell for?

The implementation mailing lists are full of people sending in bug reports and feature suggestions, stretching the implementations to their limits. Judging from the reduced examples sent in to demonstrate problems, there must be quite a few Haskell applications out there that haven't been announced anywhere (probably because Haskell is just the tool, not the focus of those projects). If you're one of those serious Haskell users, why not write a sentence or two about your application? We'd be particularly interested in your experience with the existing tools (e.g., that all-time favourite: how difficult was it to tune the resource usage to your needs, after you got your application working? Which tools/libraries were useful to you? What is missing?).

2. Project pings: anyone still working on this?

There are numerous projects out there that don't add new features to their software releases every week, but are steadily working towards longer-term goals while keeping their software releases maintained and up-to-date. The people involved often say "We don't really have to report anything new, so we won't report anything". However, most of you out there know that there is a *huge* difference between projects and software in silent maintenance mode (i.e., actively being worked on) and those that die the silent death of "been there, published that, let's move on". If you are the contact person for a project/software of the former kind, and you just want to reassure Haskellers that your stuff is still alive and kicking, send me a brief ping, and I'll try to include a list of those pings, with contact addresses (as proofs of liveness), in the upcoming report.

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: using less stack
> cpsfold f a []     = a
> cpsfold f a (x:xs) = f x a (\y -> cpsfold f y xs)
>
> and f takes a continuation, Bob's my uncle, and I have a program that runs quickly in constant space! Good. I'm curious to know from other readers whether continuations like this are the only way of solving it, though.

Actually, and quite apart from it being cumbersome to use, I've got my doubts about whether this cpsfold really does the job (is that just me missing some point?-). Also, I'm curious to know why the usual strict variant of foldl doesn't help in this case?

  foldl' f a []     = a
  foldl' f a (x:xs) = (foldl' f $! f a x) xs

or, with the recently suggested idiom for strictness, tracing and other annotations:-)

  annotation = undefined
  strict a   = seq a False

  foldl' f a l | strict a = annotation
  foldl' f a []     = a
  foldl' f a (x:xs) = foldl' f (f a x) xs

Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
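For what it's worth, the strict variant really does help on the accumulating pattern: a lazy foldl builds a million-deep chain of suspended (+) applications and typically overflows the default stack, while foldl' forces the accumulator at each step and runs in constant space (a sketch; exact limits depend on the implementation and its stack settings):

```haskell
-- strict left fold: $! forces the accumulator before recursing,
-- so no chain of suspended applications builds up
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f a []     = a
foldl' f a (x:xs) = (foldl' f $! f a x) xs

main :: IO ()
main = print (foldl' (+) 0 [1 .. 1000000 :: Integer])  -- prints 500000500000
```

Whether this suffices depends on the operator: $! only forces to weak head normal form, so an accumulator with lazy components can still hide a space leak.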
Re: using less stack
> > Actually, and quite apart from it being cumbersome to use, I've got my doubts about whether this cpsfold really does the job (is that just me missing some point?-).
> It does the job for me! In practical terms I can see it works. ..[explanation omitted]..

I didn't express myself well: I don't doubt that you solved your problem by using cpsfold, but the cpsfold alone doesn't do the trick. It just translates the implicit continuation (which uses stack space) into an explicit continuation function (which uses heap space). So there's something else going on. As you explained, having the continuation explicit makes it easier to get a handle on what happens next at every step, which is the usual reason for using CPS style. And if neither the cautious evaluate-to-weak-head-normal-form seq nor the all-out evaluate-to-normal-form deepSeq do the job for you, I can well imagine that CPS style permits you to fine-tune evaluation to the needs of your application. But as Olaf has pointed out, having that much control can be a bit of a fragile construction.

So I was just wondering about the specific use of fold in your application, and how you've redefined your operator to make use of your CPS version of fold (in order to solve the space problem). Incidentally, cpsfold processes the list in reversed order, which may or may not matter, depending on the operator passed to it:

  Hugs session for: Prelude.hs R.hs folds.hs
  Main> foldr (-) 0 [1..4]
  (1 - (2 - (3 - (4 - 0))))
  Main> foldl (-) 0 [1..4]
  ((((0 - 1) - 2) - 3) - 4)
  Main> cpsfold (\x y c -> c $ x - y) 0 [1..4]
  (4 - (3 - (2 - (1 - 0))))

Claus

--
Research Opportunity: Refactoring Functional Programs (UKC)
Closing date: Friday 22 March 2002 --- !!!
http://www.cs.ukc.ac.uk/vacancies_dir/r02-24.html

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
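The reversed association is easy to reproduce without the symbolic-numbers module from the session (R.hs is not shown in the thread); building strings instead makes the bracketing directly visible (a sketch):

```haskell
-- the cpsfold from the thread, with an explicit continuation argument
cpsfold :: (b -> a -> (a -> a) -> a) -> a -> [b] -> a
cpsfold f a []     = a
cpsfold f a (x:xs) = f x a (\y -> cpsfold f y xs)

-- show the association of (-) symbolically, as the Hugs session did
bracket :: Show b => b -> String -> String
bracket x y = "(" ++ show x ++ " - " ++ y ++ ")"

main :: IO ()
main = do
  putStrLn (foldr bracket "0" [1..4 :: Int])
    -- (1 - (2 - (3 - (4 - 0))))
  putStrLn (cpsfold (\x y c -> c (bracket x y)) "0" [1..4 :: Int])
    -- (4 - (3 - (2 - (1 - 0))))
```

The accumulator is threaded left-to-right, like foldl, but each step wraps the new element on the outside, so the result nests the other way round from foldr.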
Re: Hugs plugin, Haskell Browser
> 2. When I hear "translate to HTML" I imagine that underlined words which can be clicked to see, say, definition of function.

Sadly, most htmlizers are focused on highlighting rather than navigation. Why generate HTML pages if no-one reads them?-) Take this obscure location, for instance: http://www.haskell.org/libraries/#docu admittedly, Jan Skibinski's Haskell module browser is currently only available via archives, but it wasn't HTML-based anyway, and the other two links still work. Here's another example of text colouring that can be clicked on (few people do, so it has become quite silent on the mailing list, try the first months for more action..): http://haskell.org/mailman/listinfo/haskelldoc

Hth, Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: a universal printer for Haskell?
> You don't need meta-programming technology (reflection) to do things like generic printing. A generic programming extension of Haskell (like Generic Haskell, or derivable classes) can do the job for you.

Isn't generic programming usually based on a kind of compile-time reflection (if the argument is of this type, do this, else..)? And don't you write generic functions with the intention that the implementation will figure out the appropriate type-specific variants, i.e., you write your code at a level of abstraction that is not easily reached with the standard language tools -- a meta level, where functions at different types are the first-class objects of discourse?

I find it helpful to think of generic programming support as one way of integrating well-behaved subsets of reflection and other meta-programming techniques into Haskell. It is partly a trade-off: you get some of the features and avoid some of the problems of a fully reflective architecture. It is also a specialisation: by avoiding the full generality, the specific subset of features can be designed in a structured fashion, with the application domain in mind, making them easier to use for that domain.

Claus (another fan of reflection and meta-programming, who would like to get their advantages without their disadvantages -- the latter are more clearly visible in a pure functional language than in Lisp)

___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: syntax...(strings/interpolation/here docs)
{- Unlike my rough proposal, one should aim for a combination of (mostly)
   in-Haskell implementation and (some) pre-processing. As Thomas Nordin has
   pointed out to me in private email, Hugs (Dec 2001) already supports this
   (module lib/hugs/Quote.hs and flag +H). The real trick is to have the
   same support in all implementations.. -}

module HereDocuments where

{- :set +H -}
import Quote

text = ``
When I mentioned pre-processing, I didn't mean doing something to generate a Haskell program, I meant simple language extension (as in: syntactic sugar). It is nice that the Hugs variant of here documents is easily implemented with pre-processing, but that should be done behind the scenes.

Usually, I wouldn't make such a fuss, but here documents are really not some new and experimental feature. They're an old hat, and a very useful hat. The only question is how to integrate them into the rest of Haskell. The Lewis/Nordin suggestion implemented in Hugs looks like a good compromise, but it won't do harm to bind the sugar to an option/flag for a test period.

In the end, a stable form of here documents should be part of the language (not part of what you can do to it with whatever tools in whatever contexts), directly supported by all implementations.
''

main = putStrLn $ trim text

-- Claus

___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Postdoctoral Research Associate, Refactoring Functional Programs
Just a quick reminder that the closing date for this position, advertised here earlier, is *this Friday*, 11/01/2002. Happy New Year, Claus Applications are invited for a three year Postdoctoral Research Associate position at the University of Kent at Canterbury, to work under the direction of Professor Simon Thompson and Dr Claus Reinke on an EPSRC funded project entitled Refactoring Functional Programs. ... Please telephone the Personnel Office for further particulars on 01227 827837 (24 hours) or email: [EMAIL PROTECTED] , quoting the reference number R02/15. Text phone users please telephone 01227 823674. The closing date is Friday 11 January 2002. Informal enquires can be directed to either Simon or Claus: {S.J.Thompson,C.Reinke}@ukc.ac.uk. More details of the project and further particulars for the position can be found at: http://www.cs.ukc.ac.uk/people/staff/sjt/Refactor/ ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Where to ask questions regarding categories and datatypes
Peter Douglass writes: Hi, I have a number of questions regarding categories and datatypes. I know that many of the folk in this mailing list could answer these question, but I wonder if there is a more appropriate forum. (i.e. the question are not Haskell specific). Why don't you try the Types Forum [EMAIL PROTECTED] See also http://www.cis.upenn.edu/~bcpierce/types/ Or, if your questions are more on the categories side, there is also: Categories List The category theory mailing list, moderated by Bob Rosebrugh. http://www.mta.ca/~cat-dist/ Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Global variables
> Hello, I am interested in using global variables (in GHC). I need a variable to store a list of Integers to store temporary results. I have been reading the module MVar, but I wonder if there is an alternative way of doing it. I have already implemented my function using an auxiliary argument where I put my lists of Integers. Will the use of a global variable improve my function?

no!-) ah, well, perhaps..

As you've already got your function, using auxiliary arguments, you probably don't really need to re-write it in a less functional style. But you will have noticed that much of your code repeatedly does the same thing - passing the auxiliary around. It's good functional programming practice to abstract away repeated code (both for reuse and to get more concise code). One way to do that will lead you into monads (your function is a state transformer, transforming the state of the auxiliary list at each step), still a functional solution.

If you really, and absolutely, want and must use global, mutable variables in a functional language, you might find a recent paper by John Hughes helpful, on that very topic. See his home page: http://www.cs.chalmers.se/~rjmh/

Hth, Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
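As a sketch of the monadic route (assuming the State monad from today's mtl-style Control.Monad.State, which was not part of the libraries discussed above; sums and record are made-up example names): the auxiliary list becomes the hidden state, and the repeated argument-passing disappears into the monad's plumbing.

```haskell
-- assumes the mtl package's Control.Monad.State
-- (which also re-exports foldM from Control.Monad)
import Control.Monad.State

-- example: sum a list while recording the running totals
-- in the hidden list of Integers
sums :: [Integer] -> State [Integer] Integer
sums = foldM step 0
  where step acc x = do let acc' = acc + x
                        modify (acc':)   -- record the intermediate result
                        return acc'

-- runState makes the plumbing visible again:
main :: IO ()
main = print (runState (sums [1,2,3]) [])  -- prints (6,[6,3,1])
```

The explicit-argument version and this one compute the same thing; the monadic one just factors the threading of the auxiliary list out of every equation.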
Postdoctoral Research Associate, Refactoring Functional Programs
Postdoctoral Research Associate in Refactoring Functional Programs Applications are invited for a three year Postdoctoral Research Associate position at the University of Kent at Canterbury, to work under the direction of Professor Simon Thompson and Dr Claus Reinke on an EPSRC funded project entitled Refactoring Functional Programs. Refactoring is the process of redesigning existing code without changing its functionality. In our project we will explore refactoring for functional programs and in particular we will construct a catalogue of functional refactorings, and build a prototype tool in Haskell to support refactoring of Haskell programs. Applicants should be able to demonstrate extensive experience of programming in a functional programming language, preferably Haskell, and have good communication skills. The appointment will be made within the salary ranges of £17,278-£26,229 p.a. on the Research IA scale. Please telephone the Personnel Office for further particulars on 01227 827837 (24 hours) or email: [EMAIL PROTECTED] , quoting the reference number R02/15. Text phone users please telephone 01227 823674. The closing date is Friday 11 January 2002. Informal enquires can be directed to either Simon or Claus: {S.J.Thompson,C.Reinke}@ukc.ac.uk. More details of the project and further particulars for the position can be found at: http://www.cs.ukc.ac.uk/people/staff/sjt/Refactor/ Simon and Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Haskell Communities Survey - LAST CALL
Dear Haskellers,

over the last weeks, more and more of you have (been;-) volunteered to provide short summaries of recent and planned activities in your area of work with or on Haskell (for details, and for the current status, see http://www.haskell.org/communities/). Those summaries, as far as they haven't reached me already, are due in this week (rather sooner than later), and after putting everything together, the first edition of the collective report should get out to this list next week. The web page shows which areas are covered, and which summaries have reached me.

So, please, have a last look at the web page, and check for:

missing topics? If your favourite Haskell topic isn't covered yet, get in touch with me as soon as possible (preferably by sending in a summary, but at least by naming possible contacts).

micro-reports? late-breaking news?-) If you don't have any big project/group/area to report on, you might still want to let the Haskell community know that you're there and what you are working on. Just drop me a line or two on your work, and a link to your home page.

missing reports? If you've promised a report, remember to send it to me!-)

missing emails? If I've asked for your help, could you please let me know whether or not you'll be able to provide a report?-(

Cheers, Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Haskell Communities Survey - Second Call for Contributions
Dear Haskellers,

after the first rush of volunteers seems to have ebbed away, it is probably time for a reminder.

First, the good news: We have just about enough topics covered to convince me that it makes sense to go ahead. So the Haskell Communities page has moved to a more permanent location at http://www.haskell.org/communities/ and any further documents will appear there as well. I even have the first reports coming in already (thanks very much!). If everyone else could please send in their reports over the next two weeks, i.e., before

*** Monday, 29. October 2001 ***,

then I could try to edit everything together in the following week (modulo my own deadlines..) and put out the first version of the collective Haskell Communities Status Report early in November. Simon Marlow sent a nice example of a (more frequent) report from the FreeBSD community, which might give an idea of how a collection of brief summaries can help to get and keep an overview of a field: http://www.geocrawler.com/archives/3/159/2001/9/150/6646127/

So far, so good. The not so good news is that there are still some quite important areas uncovered, so

*** we are still looking for active Haskellers ***
*** to write (and send to me) brief summaries. ***

Below, you'll find first a list of topics I would like to see covered, then the list of topics for which we already have volunteers. If you think you can help out with information on one or more of the outstanding topics, please let me know. If you have the information, but are unsure about what would be a useful summary for the report, get in touch with me and we'll see what we can do.

Cheers, Claus

contacts still needed:

core

* Haskell Central
  changes over the last year, plans for the next few months?

* Hugs
  often feared to be dying, but kept very much alive by enthusiasts; currently, OGI and other enthusiastic volunteers are supporting it. Any ideas about the future? What about the new release (when, what)?
* nhc98
  lots of new work there, though much of it will probably be covered by Olaf's report on tracing and debugging?

!! for all implementations, it would be nice to know the
!! position and status regarding recent extensions that
!! need support to be portable, such as FFI, hierarchical
!! module namespace, portable libraries, GUI API libraries.

others

* non-sequential Haskell
  This is an important and active area, and we seem to have lots of projects there. Ideally, someone in the field could give an overview of the state of the art, but I would also be happy to get summaries from individual projects (we've got Concurrent Haskell, but nothing else yet; what about GPH, port-based distributed Haskell, ..?). [and why is there no dedicated mailing list for the collection of non-sequential Haskell projects?]

* meta-programming support for Haskell
  Tim Sheard would like to start a project on a Haskell version of MetaML. Any progress there? Meanwhile, there are a small number of Haskell parsers and pretty-printers around. But how complete are they, are they being kept up to date, what about language extensions? What about a standard AST format? What about static analysis and type checking/inference? A standard interface to symbol table information? Partial evaluation for Haskell?-) Reification/reflection (Bernard Pope and Lee Naish have done some work here in the context of declarative debugging)? Why all the extra tools, btw? Could we not have a standard interface to the parsers, type checkers, and symbol tables that exist in our Haskell implementations (as is the case for other respectable languages?-)

* lightweight modules for Haskell
  At this year's Haskell workshop, Mark Shields asked those interested in cooperating on this topic to contact him, mentioning that he was working on the topic. It would be useful to have an idea of the plans there.
Aha, I've just found a brand new paper on that; perhaps Mark could give a brief summary of it and the implementation plans?-)

* Haskell libraries collection
  Will that be covered in the report on hierarchical module namespaces, or do we need a separate report? Simon?

* FFI tools
  Manuel will cover FFI language extensions and libraries, as well as his own C->Haskell, but what about the other tools built on top of the FFI basis? What is the status of GreenCard, H/Direct & co? What about all that recent talk about new Haskell-to-Java connections?-) If anything is still (or newly) active: summaries/pointers, please!

* Documentation tools
  Earlier this year, there was some work and discussion on this. Anyone willing to summarise the results?

* Applications
  Perhaps someone at Galois Connections could summarise their recent successes and immediate plans (I've heard lots of good news from that direction recently)? Haskell in hardware specification and
Haskell Communities Survey - Call for Contacts (update)
First, a big thanks to those of you who immediately volunteered to act as contact persons for their communities. That means that five of the areas are already covered.

*** keep those contact offers coming!-) ***

*** if you can't cover a whole area, status reports for any of ***
*** the sub-areas listed can still be a good starting point ***

To avoid duplication of work, and to remind everyone which areas are still looking for contact persons, here is my current list (note that there are more topics covered than contacts;-):

Generic Programming/Generic Haskell [contact: Johan Jeuring]
Hierarchical library proposal [contact: Simon Marlow]
Happy [contact: Simon Marlow]
Concurrent Haskell [contact: Simon Marlow]
GHC [contact: Simon Peyton-Jones]

Thanks also for the positive feedback - it seems that many of you have been waiting for something in this direction. There have been a few alternative suggestions as well, all of which could be pursued if someone is willing to make a start. So I would like to pass on those suggestions to the list, with a few comments.

- Tom Pledger points out the inevitable problems of a hierarchical structure, and suggests a keyword/phrase glossary instead, with definitions and links to project summaries, tending towards a Wiki. I think Jan Sibinsky had started a nice glossary - unfortunately, he still seems to be offline? Has someone saved his work? As for Wikis, I've never been impressed with their apparent inability to self-organise, but we do have a Haskell Wiki, so everyone is welcome to prove me wrong there! Other structures for presenting the information might work (we'll see), but from my experience, it takes some active prodding and definite goals to get others to contribute. Hence the present structure for a start.

- Ketil Malde suggests a web zine, with a group of people monitoring and summarising mailing list traffic.
I'm not so sure about that specific mode (any volunteers out there?), but I do think that functional programming as a whole would profit from some kind of magazine-style publication, to bridge between the academic forums and the increasing number of (would-be) Haskell practitioners out there who have no need to conform to academic quality guidelines but would still want a quality forum more permanent and organised than mailing lists. Such a magazine would collect, edit, and archive contributions about Haskell (or other FPLs) in practice: how to design, how to solve all those problems that are not really scientific in nature, but still need to be solved. I think many academics would be happy to contribute as well, focussing on their own role as practitioners. Many of those nice explanations on the mailing lists would no longer be lost to mailing list archives, and volunteer (that word again) reporters could write about interesting projects, events, etc. That would still need some good and active editors in place (and twice a year there could be a special issue on Haskell, including the Haskell Communities report;-). So a Haskell or FP web zine is a good idea, and complementary to the report we are trying to get together now.

- Last, but certainly not least, Andrei Serjantov supports the idea of a separate mailing list for discussions of applications of Haskell, pointing out that some potential posters to the current lists are unsure whether their own questions and contributions have any place amongst those highly technical discussions about advanced language extensions or recent implementation details. AFAIC, those other contributions are welcome as well on the current lists, but a separate applications list would still make a lot of sense (I would subscribe). Just remember that anything good needs volunteers to get things started (without them, we wouldn't have the language/library reports, not to speak of implementations).
And that means you, dear reader!-) I'll be at IFL next week, but keep those contact offers coming! Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Haskell Communities Survey - Call for Contacts
Dear fellow Haskellers, the recent (and eternal) questions about GUIs and future Haskell developments are closely related to a general problem: trying to keep up to date with what is going on within the Haskell community as a whole while the breadth of interests in that community keeps expanding. For the sin of pointing out this problem at this year's Haskell workshop, I was volunteered to try and organise some way of getting summaries of what the various (sub-)communities are working on and posting the results to the main Haskell list as well as on haskell.org. Below, you'll find (I) a summary of the problem, (II) my slightly revised suggestion on what to do about it, and (III) a preliminary grouping of Haskell communities by topic (some of which do not have their own mailing lists right now, or are really dispersed over several more specific lists). What I need from you now is (a) a look at the list of topics/communities at the end of this email -- it is meant as a starting point, likely to be incomplete, so if anyone feels left out, if you can confirm some of the topics as dead, or anything else is wrong, let me know. (b) a contact person in each of the communities, willing to put together a brief, representative summary of that community's work and interests (see part II below for what would be useful). Being brief, the individual reports won't require a lot of work for someone actively interested in one of the topics. In fact, they might have to be shorter than you would like them to be - feel free to include pointers to more extensive documentation. The plan is to get this set up over the next weeks, and to give the contact persons a little bit of time to get feedback on their status reports from their communities. As soon as the individual reports start flowing into my mailbox, I'll try to edit them together. 
Optimistically, we could have the whole report ready for the main Haskell list before October is gone (famous last words..); realistically, we'll wait and see what happens. Ultimately, the aim is to have two yearly updates of the status report, covering all active Haskell communities (sorted by topic); one update in time for the yearly Haskell workshops and one in between. The first go doesn't have to cover everything; we can add or remove topics and fine-tune the level of detail once we get going. The report will

- help to keep experienced Haskellers updated
- give newcomers some orientation about current activities
- help communities to attract collaborators

I'll step back now to make room for the volunteers;-) Claus

--- THE PROBLEM:

1. The Haskell community continues to grow in size and in the variety of interests. [this is good:-]

2. Sub-communities have formed, often with their own mailing lists, to work on specific issues or just to provide foci for specialist discussion. [this is necessary to get some work done]

3. Both specialists and the "just a Haskell user" find it difficult to keep track of what is going on behind the scenes of the main list. In fact, some folks have opted out of much of the main list (the discussions were factored into the haskell-cafe list) and only follow the announcements part (the haskell list). Few people can afford spending their days reading through all of the dozens of Haskell-related mailing lists (this is not a joke: you'll find about a dozen real lists plus a similar number of cvs-watchlists hosted at haskell.org alone, and that isn't counting the lists hosted elsewhere or the communities that use small meetings to coordinate their work). [this is bad, and we can try to do something about it]

--- THE SUGGESTION:

My initial proposal at the workshop was to collect summaries from contacts on the various mailing lists every six months or so and to edit them for the main Haskell list and for a page on haskell.org.
However, having combed through the list of mailing lists I am aware of so far, I am more and more convinced that the current proliferation of very narrowly focussed and often short-lived (in terms of activity) mailing lists is part of the problem, not part of the solution. Not to mention that certain important topics, such as GUIs, do not even have their own mailing list yet.. My current suggestion is to come up with a broader topical grouping of Haskell communities (see below), and then to solicit brief summaries from contacts in those communities every six months, taking the yearly Haskell workshop as one of the reference points:

- twice a year, everyone gets a snapshot of what's happening with Haskell. In the meantime, the communities can get on with their interests without having to bother with summaries too often (it is always difficult to find
Re: The future of Haskell discussion
There was the feeling that there is not frequent enough feedback from the Task Forces (e.g., FFI Task Force, Library Task Force) to the Haskell community as a whole. Claus Reinke kindly volunteered to collect status reports of Task Forces on a 6-monthly basis and post them to the Haskell mailing list. Claus, maybe you should give the Task Forces an idea of when you expect the first status report.

Just a quick update: when I was volunteered for organising more frequent feedback from the task forces to the Haskell community as a whole, I thought of just collecting individual summaries from the existing lists. However, my current opinion on the matter is that this alone would actually be a bad idea, as there are too many lists, no lists for some important topics, some topics spread over several lists, and generally not enough useful structure for the wide range of Haskell sub-communities. So what has been holding up the call for status reports from here (apart from the usual real work:-) is my attempt to (a) gather as many of the various Haskell interest groups as I can and (b) find some way to organise them, so that I get an idea of who is out there and how the pieces might fit together. I won't even try to get everything right in the first go, so I hope to send round a first draft of a structure (based on existing info at haskell.org and elsewhere) in the next few days. The main point I'm still undecided on is at what level to ask for status reports. In my current version of the hierarchy, there are three levels, with 4-5 broad areas at the top (such as libraries, implementations, etc.), and many of the existing mailing lists or projects, such as Gtk+HS, at the most detailed level.
If our community was more organised, I would really like to see reports at the middle (e.g., what's up in terms of GUIs?-) or top level, but as it stands, I will probably need to look for anything I can get at the mailing-list level and then try to edit all those fragments into a more useful overview. For the timescale, I still think that 6-monthly reports are a sensible compromise, and the Haskell workshops are a good reference point. I'll ask for the first status reports as soon as I've got an idea of how everything might fit together, probably early next week. The second round will then take place in between Haskell workshops, and so on. It would be nice if we could cover not only the explicit task forces, but all Haskell (sub-)communities (such as the folks interested in generic programming, or in concurrent/parallel/distributed programming, functional reactive programming, etc.). That'll mean that the individual status reports themselves will have to be brief (plus pointers to more detailed documents, and instructions about how to join the communities or find archives), which should also make it easier to find people who write them;-). More later, Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Job Adverts on haskell.org
Haskell.org has a page "Job Adverts": http://www.haskell.org/jobs.html

I like the new scope. But to minimise confusion for people looking for paid jobs (which now exist), why not reflect the explanations in the first paragraph in the page structure? Two main sections should be sufficient, with the current sections falling naturally into one of the two:

A Adverts for positions involving Haskell
  - Positions in Academia
  - Positions in Industry

B Descriptions of jobs that need doing
  - new libraries/tools needed
    - medium-level GUI library needed
    - new caretakers for existing libraries/tools needed
    - ..
  - calls for contributions
    - donate your programs for testing and benchmarking
    - ..
  - calls for cooperations
    - project X..

(change order if necessary;-)

If you are working on some Haskell project and could do with some help, then advertise here (and on the Haskell mailing list). Even more importantly: If you built some Haskell software and unfortunately no longer feel able to maintain it, then please advertise here (and on the Haskell mailing list) for new people to take it over and give it a home. Thus we may alleviate the problem of the large number of Haskell tools and libraries that are half complete and no longer maintained.

Excellent ideas, especially the focus on keeping existing things in good working order by passing them on to new caretakers! Some aspects of the Haskell wish list might overlap with aspects of the suggested section B? I would suggest keeping the two linked to each other, with the wish list focusing on would-be-nice-to-have, and the task list focusing on needs-to-be-done and is-going-to-be-done. So ideas might start on the wish list, but move to the task list when they become more concrete (either there exists a concrete customer, or a project is forming with the first names of contributors already written down or work having started, or a current maintainer is looking for someone to take over).
Cheers, Claus ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Combinator library gets software prize
[oops, somehow this ended up in my drafts folder, not on the list; sorry for the delay]

First, to avoid confusion: I'm not criticising the work described. I'm just pointing out that, in my view, the major advantages of functional programming lie beyond what is used in this application (in other words, there is yet more to be gained, and we should not underestimate the role of the facilities imported from the functional host language, such as functional abstraction).

[one quibble: I don't like the idea of using floating-point numbers to represent real money without further comment. Even after avoiding representation-error surprises by using base-10 instead of binary floating point, aren't there precise standards for rounding in the financial market? Or have I missed an explanation of this in the paper?]

Am I the only one who finds the exclusive emphasis on combinator languages slightly disappointing (in fact, the article seems to equate "functional language" with "domain-specific combinator language", which is more than a bit misleading)?

What declarative approach(es) (other than combinators) are you referring to here?

Mostly lambda calculus; more generally, all approaches that deal explicitly with variables and variable binding, and the resulting languages, which are general enough that DSLs for many domains can be embedded in them. Even within combinator languages, I am not always sure whether those reporting on the paper are aware how much the DSL gains by being embedded in a full functional language, and how simply exporting the core DSL to another context would sever all those useful connections. In the present context, most of the excitement seems to focus on the contract combinators, because they bring a new approach towards order in the application domain. They can be viewed as a first-order functional language on contracts.
This core part of the DSL could be used in other modern languages as well, provided their expression sub-language is sufficiently expressive (although the syntax, e.g., new And(..) and new Give(..) in Java, or Give../Give in other contexts, would quite likely annoy the user). But the paper goes beyond that, e.g., it defines and uses some higher-order features for observables. Also, functional programmers will be tempted to use partially applied contract combinators in higher-order compositions (as a simple example, take "all = foldr and zero :: [Contract] -> Contract" -- acquiring "all contracts" acquires a list of contracts). Such uses are already slightly more difficult (and typically much more ugly) in non-functional languages, so simply providing the same basic DSL as a library embedded in a non-functional language will not necessarily give the same flexibility (and a dedicated implementation of the DSL would have to decide between supporting only simple uses of the combinators or adding some more advanced features). Of course, in addition to combinators, we also need definitions, so that we can give names to useful combinations of combinators. This comes for free if the DSL is embedded in Haskell, but if we take it out of this context, we need to start thinking about what kind of definitions we can or want to support: parameters? certainly. what types of parameters? what types of things can be defined? how are parameters passed?.. And suddenly, we're into the definition and implementation of a little (higher-order?) functional language, with support for functional abstraction (how much support?). Does the host language (or the dedicated implementation) support all the things we want in the way we expect? In any case, the core contract combinator DSL doesn't stand in isolation - we need to work with the contracts, operate on dates, evaluate options, iterate through sequences or alternatives, ..
Do we do this in functional style, or by shouting imperatives at those poor contract combinations?

The consequence (and the intention, as far as one can gather from the paper) of the limitation to combinators is that this language can and will be used mainly in non-functional languages, not inheriting all that much from a functional style of programming. The same will probably hold for any communication standards based on it.

Why is that? I'm new to the functional programming world, and haven't really struck the concept of combinator libraries elsewhere. I assumed they were largely a functional programming concept. Even though I can see how they could be implemented in imperative languages, it doesn't seem that they would be a nice fit.

That's my point. But so far, I don't see any external commitment to a functional host language for the core combinators. Of course, that impression could well be wrong.. The paper suggests the Haskell implementation as a prototype to help the development of an existing C++ implementation as well as a new implementation in OCaml (implementation language, not host language). Most positive
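The embedded-DSL points made in this thread can be sketched concretely. The toy Contract type and combinator names below are illustrative assumptions, loosely modelled on the paper's description rather than taken from the actual library:

```haskell
-- Toy contract language; all names and types here are assumptions.
data Currency = GBP | USD deriving Show

data Contract
  = Zero                  -- the empty contract
  | One Currency          -- receive one unit of a currency
  | And Contract Contract -- acquire both sub-contracts
  | Give Contract         -- reverse rights and obligations
  deriving Show

zero :: Contract
zero = Zero

one :: Currency -> Contract
one = One

and' :: Contract -> Contract -> Contract
and' = And

give :: Contract -> Contract
give = Give

-- The higher-order composition mentioned in the text:
-- acquiring "allOf cs" acquires every contract in the list.
allOf :: [Contract] -> Contract
allOf = foldr and' zero

example :: Contract
example = allOf [one GBP, give (one USD)]
```

Because the DSL is embedded in Haskell, ordinary functional abstraction (here, foldr and partial application) composes contracts for free; a stand-alone reimplementation of the core combinators would have to re-provide such facilities itself, which is exactly the point being argued above.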
Re: Combinator library gets software prize
This article is very good, and having read the conference paper earlier in the year I finished it with only one question: What's a 'quant' ... and is it good or bad to be one? "Ten years ago, Jean-Marc Eber, then a quant at Société Générale, ..." The OED has: .. So perhaps he was tall, thin and fond of wearing a cap? :-)

You might want to try http://www.dictionary.com/cgi-bin/dict.pl?term=quant ("An expert in the use of mathematics and related subjects, particularly in investment management and stock trading.") or http://www.investorwords.com/q1.htm#quant ("One who performs quantitative analyses. Or more generally, a securities analyst.")

Am I the only one who finds the exclusive emphasis on combinator languages slightly disappointing (in fact, the article seems to equate "functional language" with "domain-specific combinator language", which is more than a bit misleading)? This is a Haskell forum, not one on Backus' FP, so readers are well aware of the advantages of functional abstraction. Domain-specific languages, if embedded in Haskell, tend to inherit these advantages, even if the language in question was designed as a pure combinator language. The consequence (and the intention, as far as one can gather from the paper) of the limitation to combinators is that this language can and will be used mainly in non-functional languages, not inheriting all that much from a functional style of programming. The same will probably hold for any communication standards based on it. This application is undoubtedly a success and a good step forward, but there is more to functional languages. The paper somewhat downplays the role of embedding the combinator-based DSL in a full functional language, while also mentioning that some features gained for free in the Haskell prototype considerably complicate implementations in other languages.
I would be interested to hear more about these aspects:

- Is the strictly combinator-based approach a pragmatic necessity ("one step at a time") or is there no need for more advanced features (especially abstraction) in the application area?

- Has there been a comparison of the way the combinators of the DSL are used in Haskell and in other implementations? I would think that the use of abstraction to define more complex instruments in terms of the basic combinators should play a rather dominant role. On the other hand, I can imagine that users would want to have each complex instrument explicitly named and studied, instead of trusting large amounts of money to anonymous abstractions created on-the-fly.

- What about interactive exploration of new instruments, e.g., in a Hugs or OCaml session (as opposed to changing a C++ implementation, or a stand-alone executable generated by an OCaml compiler, for every experiment)?

Claus

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
ANNOUNCE: GHood (updated pre-release)
Ever wanted to see what your Haskell program is doing?-)

- GHood (pre-release, 11 January 2001) "Graphical Hood" -- a Java-based graphical observation event viewer, as a back-end for Andy Gill's Hood (Haskell Object Observation Debugger). Currently, GHood comes in two files: a drop-in replacement for the Hugs98 variant of Hood (only minimal changes, same interface) and a Java class file archive for the graphical viewer itself. To find the two files, please visit my Haskell corner at: http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/#GHood

- The GHood pre-release has been updated. The main change (apart from single-stepping through the observation event stream, which you might have been missing in the previous version): GHood can now be used as an applet (requires a Java 2/Swing-capable browser or a suitable Java plug-in). This means that GHood animations can now be used to enrich webpages. If you ever wanted to discuss the behaviour of Haskell programs on your webpages, you can now add applets to those pages that visualise and animate the issues you describe. If you never dared to write about program behaviour for lack of illustrations, you might now consider adding such discussions to your webpages.

Potential uses:
- educators: as part of your functional programming course webpages,
- programmers: as part of the description of a clever functional algorithm, or to document problems in a misbehaving functional program,
- GHood implementors (me;-): show examples of GHood animations online.

I would be interested to learn about example problems from practice for which GHood has been found helpful. Please let me know, too, if you create websites that use GHood as an applet.

Enjoy, Claus

-- Claus Reinke, http://www.cs.ukc.ac.uk/people/staff/cr3/ Computing Lab, University of Kent at Canterbury

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Too Strict?
Dominic,

What I can't see at the moment is how to keep what I was doing modular. I had a module Anonymize, the implementation of which I wanted to change without the user of it having to change their code. The initial implementation was a state monad which generated a new string every time it needed one, but if it was a string it had already anonymized then it looked it up in the state. I initially used a list but with 100,000+ strings it took a long time. The next implementation used FiniteMap which improved things considerably. I only had to make three changes in Anonymize and none in Main. Using MD5 is quicker still but isn't so good from the program maintenance point of view.

my first stab at the modularity issue was the version _2 in my last message. Looking back at the Anonymizable class and instances in your full program,

  type Anon a = IO a

  class Anonymizable a where
    anonymize :: a -> Anon a

  -- MyString avoids overlapping instances of Strings with the [Char]
  data MyString = MyString String deriving Show

  instance Anonymizable MyString where
    anonymize (MyString x) = do s <- digest x
                                return ((MyString . showHex') s)

  instance Anonymizable a => Anonymizable [a] where
    anonymize xs = mapM anonymize xs

the problem is in the Anonymizable instance for [a]: the mapM in anonymize constructs an IO script, consisting of some IO operation for each list element, all chained together into a monolithic whole. As IO a is an abstract type, this is a bit too restrictive to be modular: if I ever want any of the anonymized Strings, I can only get a script that anonymizes them all - before executing that script, I don't have any anonymized Strings, and after executing the script, all of them have been processed. This forestalls any attempt to interleave the anonymization with some further per-element processing.
Instead, I would prefer to have a list of IO actions, not yet chained together (after all, in Haskell, they are just data items), but that doesn't fit the current return type of anonymize. One approach would be to change the type of Anon a to [IO a], or to ignore the [a] instance and use the MyString instance only, but the longer I look at the code, the less I'm convinced that the overloading is needed at all. Unless there are other instances of Anonymizable, why not simply have a function

  anonymize :: String -> Anon String

? That would still allow you to hide the implementation decisions you mentioned (even in a separate module), provided that any extra state you need can be kept in the IO monad. One would have to write mapM anonymize explicitly where you had simply anonymize, but it would then become possible to do something else with the list of IO actions before executing them (in this case, to interleave the printing with the anonymization).

First, here is the interesting fragment with the un-overloaded anonymize:

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       a <- mapM anonymize (lines s)
       hPutStr h (unlines a)

It is now possible to import anonymize from elsewhere and do the interleaving in the code that uses anonymize:

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let action l = do { a <- anonymize l ; hPutStr h a }
       mapM_ action (lines s)

Would that work for your problem?
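To see the behavioural difference that motivates this restructuring, here is a small self-contained sketch; the counter-based anonymize is a hypothetical stand-in for the real stateful (FiniteMap/MD5) implementation:

```haskell
import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  -- Hypothetical stand-in for the real anonymize:
  -- number each string instead of hashing or table lookup.
  let anonymize s = do
        n <- readIORef ref
        writeIORef ref (n + 1)
        return (show n ++ ":" ++ s)
  -- Monolithic: mapM builds one big action, so every string is
  -- anonymized before any output happens.
  as <- mapM anonymize ["alice", "bob"]
  mapM_ putStrLn as
  -- Interleaved: each string is anonymized and printed in turn.
  mapM_ (\l -> anonymize l >>= putStrLn) ["carol", "dave"]
```

Both loops print the same kind of output here, but only the second form lets each line be processed and emitted before the next is even looked at, which is what matters for very large inputs.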
Alternatively, if some of your implementation options require initialization or cleanup, your Anonymize module could offer a function to process all lines, with a hook for per-line processing:

  processLinesWith perLineAction ls =
    do { initialize
       ; as <- mapM action ls
       ; cleanup
       ; return as }
    where action l = do { a <- anonymize l ; perLineAction a }

Then the code in the client module could simply be:

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       processLinesWith (hPutStr h) (lines s)
       return ()

Closing the loop, one could now redefine the original, overloaded anonymize to take a perLineAction, with the obvious instances for MyString and [a], but I really don't see why every function should have to be called anonymize?-)

Claus

PS The simplified code of the new variant, for observation:

  module Main(main) where

  import Observe
  import IO(openFile, hPutStr, IOMode(ReadMode,WriteMode,AppendMode))

  filename = "ldif1.txt"
  fileout  = "ldif.out"

  readAndWriteAttrVals =
    do h <- openFile fileout WriteMode
       s <- readFile filename
       let { anonymize s = return (':':s)
           ; action l   = do { a <- anonymize l ; hPutStr h a } }
       mapM_ (observe "action" action) (lines s)

  main = runO readAndWriteAttrVals
Re: Too Strict?
> Can someone help? The program below works fine with small files but when
> I try to use it on the one I need to (about 3 million lines of data) it
> produces no output. The hard disk is hammered - I assume this is the run
> time system paging. My suspicion is that the program is trying to read
> in the whole file before processing it. Is this correct? If so, how do I
> make the program lazy so that it processes a line at a time?

I was about to apply GHood to your program to see whether such a tool could help to find the problem, so I started to cut down your code. However, in the simplified version of the program, one can see what is going on without any graphical tool.. (nevertheless, observation of the simplified code with GHood confirms your suspicion immediately, and it points out the spine of the list as the problem, too, so the tool is useful!-) (*)

In effect, your anonymize comes down to a "mapM" over a list of actions applied to input lines, and all of the resulting IO-actions are placed before the single "hPutStr". So, even if the results of the individual actions in the list may not be needed until later, the whole spine of the list of lines has to be traversed before "hPutStr" can be executed, meaning that all input is read before any output is produced (and thus before any computation results are requested, blowing up memory usage).

For the problem at hand, you could simply output each line as it is processed, instead of just returning it into a list for later use (see variant _1 below). If you want to keep both the modular program structure and the explicit line-by-line IO style, you need to interleave the input and output commands somehow (perhaps similar to variant _2 below?).

Hth, Claus

(*) Please note that our web-server is being upgraded today..
(web-pages and GHood download will not be available until tomorrow, hence no URL here :-(

PS The simplified code (+ variations) with observations:

  module Main(main) where
  import Observe
  import IO(openFile, hPutStr, IOMode(ReadMode,WriteMode,AppendMode))

  filename = "ldif1.txt"
  fileout  = "ldif.out"

  readAndWriteAttrVals = do
    h <- openFile fileout WriteMode
    s <- readFile filename
    let action l = return (':':l)
    a <- mapM action (observe "input" (lines s))
    hPutStr h (unlines (observe "output" a))

  main = runO readAndWriteAttrVals

  readAndWriteAttrVals_1 = do
    h <- openFile fileout WriteMode
    s <- readFile filename
    let action_and_output l = hPutStr h (':':l)
    mapM_ (observe "output" action_and_output) (observe "input" (lines s))

  main_1 = runO readAndWriteAttrVals_1

  readAndWriteAttrVals_2 = do
    h <- openFile fileout WriteMode
    s <- readFile filename
    let { action l = return (':':l)
        ; as = map action (observe "input" (lines s))
        ; os = repeat (hPutStr h)
        }
    mapM id (observe "output" (zipWith (>>=) as os))

  main_2 = runO readAndWriteAttrVals_2

___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
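The difference between the two styles can be seen without GHood in the following self-contained sketch; prefixColon, collectThenPrint and printAsYouGo are names made up for this example:

```haskell
module Main (main) where

-- The pure per-line transformation shared by both variants.
prefixColon :: String -> String
prefixColon l = ':' : l

-- Strict variant: mapM builds the complete result list, so the whole
-- spine of the input must be traversed before anything is printed.
collectThenPrint :: [String] -> IO ()
collectThenPrint ls = do
  as <- mapM (return . prefixColon) ls
  putStr (unlines as)

-- Streaming variant: each line is printed as soon as it is processed,
-- so output appears while input is still being consumed.
printAsYouGo :: [String] -> IO ()
printAsYouGo = mapM_ (putStrLn . prefixColon)

main :: IO ()
main = printAsYouGo ["one", "two", "three"]
```

On a file read with the lazy readFile, the streaming variant runs in constant space, while the collecting variant holds the full list of results in memory first.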
ANNOUNCE: GHood -- a Graphical Hood (pre-release)
Ever wanted to see what your Haskell program is doing? Andy Gill's Hood library (http://www.haskell.org/hood/) represents a big improvement over previous uses of trace & co. It doesn't affect strictness properties, and instead of displaying debug information in the nearly incomprehensible order in which it is generated, it collects, post-processes and pretty-prints the information and displays the results after program evaluation, in a more readable form.

However, as Andy already noted in the Hood documentation, there is a lot of useful information to be gathered from the order in which (parts of) data structures are observed. Now that Hood associates individual observation events with the data structures to which they belong, thus facilitating comprehension of observations, it would be nice to find a way to visualise the observation order as well. Andy's plan was to incorporate such a feature into a textual browser add-on for Hood (in CVS, not released yet?). But for tasks in which structural context and relationships between parts dominate over details, my personal preference would be a graphical form of visualisation.

GHood is my current attempt to add such a graphical viewer to Hood. It hasn't yet reached its final form, but it is quite useful and usable already. We have played with it locally, and I can't spend too much time on this, but I would like to get some external feedback before I finalise the development. Hence this pre-release.

Currently, GHood comes in two files: a drop-in replacement for the Hugs98 variant of Hood (only minimal changes, same interface) and a Java class file archive for the graphical viewer itself.
To find the two files, please visit my Haskell corner at:

  http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/

Enjoy (and let me know what you think about it),
Claus

--
Claus Reinke, http://www.cs.ukc.ac.uk/people/staff/cr3/
Computing Lab, University of Kent at Canterbury

___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Green Card for untyped lambda calculus?
> nil :: List a
> cons :: a -> List a -> List a
> forlist :: b -> (a -> List a -> b) -> List a -> b
> ..
> The implementation I'm interested in (one without constructors) is:
>
>   nil fornil forcons = fornil
>   cons x xs fornil forcons = forcons x xs
>   forlist fornil forcons ls = ls fornil forcons
> ..
> As one might guess, this implementation relies on infinite types, which
> becomes obvious when offering, for example:
>
>   map f = forlist nil (\x xs -> cons (f x) (map f xs))

I have to admit that I misread your definitions at first, because they are very similar to the representation of data structures as their own folds. As your original question has been answered by now, it might be interesting to have a look at this alternative. In the case of lists:

  nil c n = n
  cons x xs c n = c x (xs c n)  -- pass c and n to xs
  fold c n l = l c n

The reversed order of the cons and nil replacements is just for consistency with the standard fold, but note that the replacements get passed down to the rest of the list in the definition of cons, ensuring a consistent interpretation for all constructors in a given structure. This gives you the obvious non-recursive

  map f = fold (cons.f) nil

and all those other golden foldies

  sum    = fold (+) 0
  length = fold (const (+1)) 0
  append l1 l2 = fold cons l2 l1
  ..

Instead of an explicit recursion over an unknown structure, with pattern-matching at each stage, these functions utilize the recursive structure of the list parameter and work uniformly on the whole list.

[This fold representation of lists just uses a parameterized variant of the successor function known from Church numerals. I seem to remember that Church himself used continuation-passing representations of data structures, but I don't have access to his presentation here:-( Could someone please confirm whether or not he used it for recursive structures (other than naturals) or only for pairs, products, conditionals and the like?]
To make things more visible, the following conversions are useful:

  fromlist = foldr cons nil
  tolist   = fold (:) []

Then you have, for instance:

  Main> fromlist [1..4]
  <<function>>
  Main> tolist $ fromlist [1..4]
  [1,2,3,4]
  Main> tolist $ map (+1) $ fromlist [1..4]
  [2,3,4,5]

What I don't know is whether this representation is an option for you or whether you need the "flat" pattern-matching. Your initial definitions seem to correspond not to the standard lists but rather to:

  data L a b = N | C a b

which doesn't fix the recursive structure and allows, among other things, for heterogeneous lists and trees. You'll find it hard to write some recursive functions even for this non-functional variant. And if you fix (L a) to give you something equivalent to

  data List a = N | C a (List a)

the flat functional representation is no longer the only possible choice - it represents the "sums of products" part, but leaves the recursive structure unattended. The variant given above would take care of that, but has other problems: as mentioned, the variant that represents data structures as their own folds corresponds - for lists - to Church numerals with a parameterized successor. Unfortunately, the predecessor for these numerals is known to be slightly non-obvious, so the definition of tail for the fold-style lists is left as an exercise to the reader;-)

If you absolutely need to operate on parts of your lists and want to treat each cons individually, the modified definitions of the "flat", pattern-matching-style representation given in other postings will be more convenient. But where it is applicable, the whole-structure approach supported by the "deep", fold-style representation is quite attractive.

Claus
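For the curious, the fold-style lists above can be typed in GHC by wrapping the representation in a newtype (the names FoldList, FL, runFold, mapF, sumF and tailF are inventions for this sketch, and the RankNTypes extension is assumed); tailF solves the "exercise for the reader" with the usual pairing trick behind the Church-numeral predecessor:

```haskell
{-# LANGUAGE RankNTypes #-}
module Main (main) where

-- A list represented as its own right fold.
newtype FoldList a = FL { runFold :: forall b. (a -> b -> b) -> b -> b }

nil :: FoldList a
nil = FL (\_ n -> n)

cons :: a -> FoldList a -> FoldList a
cons x xs = FL (\c n -> c x (runFold xs c n))   -- pass c and n to xs

-- Conversions to and from ordinary lists.
fromlist :: [a] -> FoldList a
fromlist = foldr cons nil

tolist :: FoldList a -> [a]
tolist l = runFold l (:) []

-- The non-recursive map and one of the golden foldies.
mapF :: (a -> b) -> FoldList a -> FoldList b
mapF f l = runFold l (cons . f) nil

sumF :: Num a => FoldList a -> a
sumF l = runFold l (+) 0

-- tail via folding to a pair of (tail so far, whole list so far).
tailF :: FoldList a -> FoldList a
tailF l = fst (runFold l step (nil, nil))
  where step x (_, whole) = (whole, cons x whole)

main :: IO ()
main = do
  print (tolist (fromlist [1 .. 4 :: Int]))              -- [1,2,3,4]
  print (tolist (mapF (+ 1) (fromlist [1 .. 4 :: Int]))) -- [2,3,4,5]
  print (sumF (fromlist [1 .. 4 :: Int]))                -- 10
  print (tolist (tailF (fromlist [1 .. 4 :: Int])))      -- [2,3,4]
```

The newtype keeps the rank-2 type out of the way of the type checker, at the cost of explicit FL/runFold wrapping that the "raw" Hugs-style definitions in the text avoid.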