Re: [Haskell] What makes a functional language functional?
Chris, I'm not sure exactly what your question is, as you are mixing up several concepts. However, you start with:

> In Haskell, we can transform: g x + f x into: f x + g x

Here you assume that the function + is commutative. Although this is probably true for all numeric types in all Haskell implementations, neither functional languages in general nor the Haskell report guarantee commutativity of +, so this assumption is dangerous.

> 0 + _|_ /= _|_ + 0
> Or, does this just become: _|_ = _|_ ?

The Haskell report probably does not say so explicitly, but in general primitive functions like + are strict in all arguments, and hence 0 + _|_ = _|_ = _|_ + 0.

Ciao,
Olaf

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
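The strictness claim above is easy to check with GHC; a minimal sketch, assuming the standard Int instance of Num (evaluate/try are used only to observe bottom safely):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- (+) on Int is strict in both arguments, so 0 + _|_ is _|_:
  l <- try (evaluate (0 + undefined :: Int)) :: IO (Either SomeException Int)
  r <- try (evaluate (undefined + 0 :: Int)) :: IO (Either SomeException Int)
  putStrLn (case (l, r) of
    (Left _, Left _) -> "both bottom"   -- 0 + _|_ = _|_ = _|_ + 0
    _                -> "not bottom")
```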
[Haskell] IFL 2007: Symposium on Implementation and Application of Functional Languages
**
Call for Papers and Participation

19th International Symposium on Implementation and Application of Functional Languages
IFL 2007
27th-29th September 2007, Freiburg, Germany
co-located with ICFP 2007
http://proglang.informatik.uni-freiburg.de/IFL2007/
**

== Submission for Draft Proceedings: 31 August 2007 ==
== Early Registration / Hotel Deadline: 1 September 2007 ==

The aim of the IFL symposium is to bring together researchers actively engaged in the implementation and application of functional and function-based programming languages. The symposium provides an open forum for researchers who wish to present and discuss new ideas and concepts, work in progress, preliminary results, etc. related primarily but not exclusively to the implementation and application of functional languages.

Topics of interest include (but are not limited to):
* language concepts
* type checking
* compilation techniques
* (abstract) interpretation
* generic programming techniques
* automatic program generation
* array processing
* concurrent/parallel programming
* heap management
* runtime profiling
* performance measurements
* debugging and tracing
* (abstract) machine architectures
* verification
* formal aspects
* tools and programming techniques

Papers on applications or tools demonstrating the suitability of novel ideas in any of the above areas, and contributions on related theoretical work, are also welcome. The change of the symposium name, adding the term application in 2004, reflects the broader scope IFL has gained over the years.

Contributions

Prospective authors are encouraged to submit papers to be published in the draft proceedings and present them at the symposium. All contributions must be written in English, conform to the Springer-Verlag LNCS series format and not exceed 16 pages. The draft proceedings will appear as a technical report. Every attendee of IFL 2007 will have the opportunity to submit a revised version of their paper for post-symposium reviewing. As in previous years, selected papers will be published by Springer-Verlag in the Lecture Notes in Computer Science (LNCS) series.

Important Dates

Submission for Draft Proceedings: 31 August 2007
Early Registration Deadline: 1 September 2007
Hotel Registration Deadline: 1 September 2007
Symposium: 27-29 September 2007
Submission for post-refereeing: 2 November 2007
Notification of acceptance/rejection: 14 December 2007
Submission of camera-ready version: 25 January 2008

Programme Committee

Peter Achten, Radboud University Nijmegen, The Netherlands
Kenichi Asai, Ochanomizu University, Japan
Manuel Chakravarty, The University of New South Wales, Australia
Olaf Chitil (chair), University of Kent, UK
Martin Erwig, Oregon State University, Oregon, USA
Marc Feeley, Université de Montréal, Canada
Martin Gasbichler, Zühlke Engineering AG, Switzerland
Kevin Hammond, University of St Andrews, Scotland
Zoltán Horváth, Eötvös Loránd University, Budapest, Hungary
John Hughes, Chalmers University of Technology, Sweden
Ken Friis Larsen, University of Copenhagen, Denmark
Rita Loogen, Philipps-Universität Marburg, Germany
Michel Mauny, ENSTA, France
Sven-Bodo Scholz, University of Hertfordshire, UK
Clara Segura, Universidad Complutense de Madrid, Spain
Tim Sheard, Portland State University, Oregon, USA
Glenn Strong, Trinity College, Dublin, Ireland
Doaitse Swierstra, Utrecht University, The Netherlands
Malcolm Wallace, The University of York, UK

Local Organisation

Markus Degen, Universität Freiburg, Germany
Peter Thiemann, Universität Freiburg, Germany
Stefan Wehr, Universität Freiburg, Germany

Further Information

http://proglang.informatik.uni-freiburg.de/IFL2007/
Removal candidates in patterns
I am very pleased to see on the Wiki a list of removal candidates, and that it includes n+k patterns and ~ patterns. I'd like to add one more pattern to this list of removal candidates: k patterns, that is, numeric literals.

Why do I want to get rid of these three kinds of patterns? Because all three caused me no end of trouble when implementing the program transformation of the Haskell tracer Hat. Hat actually still doesn't handle nested ~ patterns.

Why are these patterns so hard to implement for Hat? Surely the Haskell report gives a translation into simple core Haskell. Well, Hat does not use this translation, because it does not want to be an inefficient pattern matcher (that job is left to the compiler) but to produce a trace of the Haskell program as it is written. However, both n+k and k patterns cause calls of functions ((-), (==), etc.) that Hat has to record in its trace. Also, ~ patterns do not fit the simple rewriting semantics of the Hat trace and hence have to be recorded specially. While in the simple cases that occur in practice it is pretty straightforward to remove n+k, k and ~ patterns from a larger pattern while keeping the rest of the larger pattern intact, in the general case this is incredibly hard.

If n+k patterns are removed, there is little good use for k patterns either. Since the introduction of monadic IO, the ~ pattern is hardly used in practice either. In all the simple cases in which these three are currently used in practice, it is easy for the programmer to define their function in an alternative way. So get rid of these three, and pattern matching becomes much simpler.

Ciao,
Olaf

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime
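For concreteness, here is a sketch of the two kinds of hidden behaviour described above: a k pattern implicitly calls (==) from the overloaded Eq class, and a ~ pattern defers matching entirely (the function names are made up for illustration):

```haskell
-- k pattern: matching a literal secretly calls (==)
isZero :: Int -> Bool
isZero 0 = True
isZero _ = False

-- the report's translation into a guard, making the (==) call explicit
isZero' :: Int -> Bool
isZero' n | n == 0    = True
          | otherwise = False

-- ~ pattern: matching always succeeds; the pair is only forced on demand
lazyConst :: (Int, Int) -> Int
lazyConst ~(a, b) = 0

main :: IO ()
main = do
  print (isZero 0, isZero' 0)   -- (True,True)
  print (lazyConst undefined)   -- 0: the irrefutable pattern never forces the pair
```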
Re: Removal candidates in patterns
Duncan Coutts wrote:

> I think it's a perfectly reasonable mental model for people to believe that:
>   data Char = 'a' | 'b' | 'c' | ...
>   data Int = ... -2 | -1 | 0 | 1 | 2 | ...
> I don't see why we should remove one and not the other. Students will ask why they can pattern match on strings, characters and booleans but not numbers.

Numbers are special because they are overloaded: a numeric literal is an element of many types. That clearly distinguishes them from other literals.

> Perhaps primitive recursion on integers is misleading, but people will still write
>   foo n | n == 0    = ...
>         | otherwise = ...
> where they previously wrote
>   foo 0 = ...
>   foo n = ...
> so what have we gained except less notational convenience?

Discourage anyone from teaching primitive recursion on integers. Recursion on integers then has to be taught as a separate topic, giving an opportunity to point out the pitfalls. Sure, it doesn't prevent anyone from writing anything.

> Not all pattern matching on numeric literals is involved with recursion on integers, whereas virtually all n+k patterns are used for that purpose.

I think there are very few situations where you would use k patterns without recursion.

> So there is some distinction between the two forms. n+k patterns are a quirk of the numeric types. k patterns are regular with other types in the language.

As I said above, they are not regular, because of overloading.

> It's partly the complexity of the language and partly because our latest language spec (H98) is not the language that we all use (H98 + various extensions). I'm sure Haskell-prime will help in this area.

I hope as well that Haskell' will be the language that most people use, and some extensions are certainly required for practical use. I just want to get rid of superfluous features.

Ciao,
Olaf
Re: Proposal: Improved error message of Error in array index
Simon Marlow wrote:

> I can explain why it happens, though. The compiler has lifted the error case out into a top-level binding:
>   x = error "Error in array index"

So cost-centre stacks are produced not for the original program written by the user, but for the intermediate program after optimisation. I understand that you want to profile the optimised version, but the fact that your profile does not quite match your program can be confusing. For debugging, however, I think it is clear that you want your debugging information (cost-centre stacks) to match your program, even if that costs you some performance.

Ciao,
Olaf

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: [Haskell] line-based interactive program
Christian Maeder wrote:

> Colin Runciman wrote:
>> buffer xs = foldl const xs xs
> I don't find this easy, nor good programming practice. My interaction depends on the (subtle order of) evaluation of a pure and total function?

I would think not so much about the operational evaluation order, but about the denotational value of functions. In a non-strict language, functions always also accept partially defined arguments (including bottom, _|_). What a function returns for partially defined arguments tells you how it can be used in an interactive program. So:

buffer _|_               = _|_
buffer (a1:_|_)          = _|_
buffer (a1:a2:_|_)       = _|_
...
buffer (a1:a2:...:an:[]) = a1:a2:...:an:[]

Ciao,
Olaf
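The equations above can be observed directly; a small sketch using the quoted definition of buffer:

```haskell
-- foldl const xs xs returns xs, but only after walking the entire spine,
-- so buffer is bottom on every partially defined list
buffer :: [a] -> [a]
buffer xs = foldl const xs xs

main :: IO ()
main = do
  print (buffer [1, 2, 3 :: Int])   -- [1,2,3]: a fully defined list passes through
  -- print (take 1 (buffer (1 : undefined))) would be bottom:
  -- the fold demands the whole spine before returning anything
```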
Re: [Haskell] line-based interactive program
Andrew Pimlott wrote:

> It is one thing to embrace lazy evaluation order, and another to embrace lazy IO (implemented using unsafeInterleaveIO). As a relative newcomer to Haskell, I got the impression that the interact style was always a hack, discarded for good reason in favor of the IO monad. Is there a strong case for interact?

Why invent a new feature (the IO monad) if you can already do IO based on lazy evaluation? It is no argument against interact that all Haskell systems use unsafeInterleaveIO to implement it. Any kind of IO needs some primitive; you could just as well have interact directly as a primitive.

In fact, unsafeInterleaveIO shows up limitations of the IO monad. Without this strange primitive (what is actually unsafe about it?) you cannot simulate lazy IO within the IO monad. Another example is the previously discussed line buffering of input. If this were not a primitive in the IO monad, how would you define it in Haskell without some nasty hacks (a top-level IORef springs to mind)?

The IO monad certainly has advantages with respect to modularity. A shame that you cannot freely mix lazy IO and monadic IO in a program. I find it sad that the IO monad has been so widely accepted that nobody seems to search for better alternatives any more.

Ciao,
Olaf
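The deferral that unsafeInterleaveIO provides can be seen with a small experiment; a sketch using an IORef as a witness for when the action actually runs:

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  ref <- newIORef False
  -- the action is not run now; it runs when v is first demanded
  v <- unsafeInterleaveIO (writeIORef ref True >> return (42 :: Int))
  readIORef ref >>= print   -- False: nothing has happened yet
  print v                   -- demanding v runs the interleaved action
  readIORef ref >>= print   -- True: the side effect occurred on demand
```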
Re: Scoped type variables
> I'm not sure I understand the objection raised by Jon; the 'implicit declaration' of type variables in type signatures has never bothered me, and in fact it seems quite similar to how names for values don't have to be declared beforehand but are brought into scope by the binding (which I also have no problem with).

The binding of a variable is the declaration of the variable. In contrast, type variables are never declared in Haskell 98; they are only used. In my opinion this lack of an explicit type variable quantifier is just about acceptable, because all type variables are universally quantified and their scope is just the type in which they appear. The moment you allow wider scopes for type variables, the disadvantage of the missing quantifier becomes apparent: when you see

f :: a -> a

somewhere within an inner where clause, you do not know at all whether this means f :: forall a. a -> a, or whether the a is actually quantified somewhere outside and hence f has a far more restricted type (because Haskell does not even require you to write a type signature next to its variable binding, you have to search the *whole* module to find out).

So scoped type variables do not fit Haskell well anyway, but I think that when added they should at least be an upward compatible extension: any Haskell 98 program should still be correct. Hence I support Jon in that ghc should only give a wider scope to those type variables that have been explicitly declared with the non-Haskell 98 keyword forall.

Olaf
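In today's GHC this is the ScopedTypeVariables extension: only a type variable bound by an explicit forall scopes over the body, which is the behaviour argued for above. A sketch:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- the explicit forall brings 'a' into scope over the whole binding
f :: forall a. [a] -> [a]
f xs = ys
  where
    ys :: [a]         -- this 'a' is the one bound by the forall above,
    ys = reverse xs   -- not a fresh universally quantified variable

main :: IO ()
main = print (f [1, 2, 3 :: Int])
```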
[Haskell] Research Associate position available
Research Associate
Computing Laboratory, University of Kent at Canterbury, UK
£19,460 - £21,640 pa
Full-time and fixed term for 30 months, to start early 2005

The post is in association with the EPSRC project "A Theory of Tracing Pure Functional Programs". The overall aim of the project is to develop a semantical theory of tracing pure functional languages, eager and lazy, including: tractable formal definitions of traces, views of traces and fault location methods; and theorems proving the correctness of the methods. The theory will allow existing systems to be improved and novel systems to be built.

Candidates should have a good working knowledge of programming language semantics, in particular operational semantics for functional languages. A PhD degree in Computer Science (or a related subject) is highly desirable.

Enquiries about the project should be addressed to Dr Olaf Chitil, [EMAIL PROTECTED] (reference: R05/11), and further information can be found at http://www.cs.kent.ac.uk/people/staff/oc/.

Closing date for receipt of completed applications: 12 noon, Friday 7 January 2005.

Note that this project grows out of my previous work with Colin Runciman and Malcolm Wallace at York on the Haskell tracer Hat (http://www.haskell.org/hat). While the project itself is theoretical and more general, knowledge of Haskell would be beneficial.

Olaf
Re: Understanding strictness of ghc output
Duncan Coutts wrote:

> On Tue, 2004-06-22 at 14:17, Tomasz Zielonka wrote:
>> On Tue, Jun 22, 2004 at 01:52:44PM +0100, Malcolm Wallace wrote:
>>> Same again. Try
>>>   addHeight h E = h `seq` h
>>> which, although it looks bizarre, actually forces the evaluation of h, whilst simply returning it does not.
>> That contradicts my intuition for seq. I would read it as "h is forced before h is forced", and I would think that (h `seq` h) is equivalent to h.
> I think a better intuition is that h is forced before h is *returned*. You can return a value without that value being forced to head normal form. In fact this is the ordinary case. Values are only 'forced' when you pattern match on them (or if you use seq), and even then only when the result of the pattern match is used.

However, standard lazy evaluation will only evaluate a function application if the head normal form of the result is needed. So when addHeight is computed, its result is needed. Hence there is no point in writing h `seq` h instead of just h. This could only make a difference for some form of optimistic evaluator that stops in the middle of evaluation and turns the remaining evaluation into a closure. But would such an evaluator give the programmer control over where exactly it stops optimistic evaluation?

Olaf

-- 
OLAF CHITIL, Computing Laboratory, University of Kent,
Canterbury, Kent CT2 7NF, UK.
URL: http://www.cs.kent.ac.uk/people/staff/oc/
Tel: +44 (0)1227 824320; Fax: +44 (0)1227 762811
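The difference between returning a value and forcing it can be made concrete; a sketch, where evaluate/try are used only to observe bottom safely:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  let h = undefined :: Int
  -- h can be returned (here: stored in a list) without being forced:
  print (length [h])   -- 1
  -- but h `seq` h, like h itself, is bottom once demanded:
  r <- try (evaluate (h `seq` h)) :: IO (Either SomeException Int)
  putStrLn (case r of Left _ -> "bottom"; Right _ -> "defined")
```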
Re: interact behaves oddly if used interactively
Simon Marlow wrote:

> So, we're agreed that the presence of lazy evaluation makes implementing optimistic evaluation (and other evaluation strategies) more complicated. Personally, I find it disturbing that the presence of this family of library functions affects something as low-level as the evaluation strategy of the language. Or, to put it another way, it enforces a particular evaluation strategy on a part of the language, when the rest of Haskell makes no such restrictions.

Lazy IO doesn't make lazy evaluation more complicated ;-) Optimistic evaluation is already a complicated evaluation strategy, and in my opinion lazy IO just makes it a bit more complicated. Lazy IO does not force you to use lazy evaluation any more than other uses of non-strictness force optimistic evaluation to bail out of eager evaluation.

> I would expect, in a pure language, that I could reduce any expression in a program to HNF (or _|_) without affecting the meaning of the program. In Haskell, I can only do this so long as the expression doesn't involve any lazy I/O.

You always have to be careful with reducing an arbitrary expression in case it is _|_.

> The real problem is that lazy I/O injects side effects into the pure world of expressions. Haskell has a perfectly good system for encapsulating side effects - the IO monad. So why put these sneaky side effects into pure values?

I fear one problem is that the term "side effect" is not properly defined. Actually both getContents and interact are in the IO monad. Since the action interact is not terminated until the input stream ends, I do not see any problem there: evaluation never leaves the IO monad until termination. getContents is more strange. However, you can just define its semantics to yield a random string. When you are not careful you may certainly get something nearly random ;-)

I completely agree with Thomas Hallgren's message. I also view the IO monad as a temporary solution, and I regret that research into better lazy IO seems to have stopped.

Olaf

-- 
OLAF CHITIL, Dept. of Computer Science, The University of York,
York YO10 5DD, UK.
URL: http://www.cs.york.ac.uk/~olaf/
Tel: +44 1904 434756; Fax: +44 1904 432767

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: interact behaves oddly if used interactively
> So... you agree that getContents in its current form should really be called unsafeGetContents? Unless perhaps we redefine its semantics to either (a) yield a random string or (b) eagerly slurp the entire stream contents?

Well, I'm not sure that the semantics of getContents is currently clearly defined, and I'm not sure what the prefix 'unsafe' really means. However, I don't want to continuously disagree with you: unsafeGetContents would have been a better name.

Olaf
Re: interact behaves oddly if used interactively
Christian Maeder wrote:

> Malcolm Wallace wrote:
>> [...] Surely the name suggests that interactive behaviour is required, i.e. exactly some interleaving of input and output. The chunk-size of the interleaving should depend only on the strictness of the argument to interact.
> I'm not happy that interleaving depends on the strictness. Lazy or strict evaluation should only change the behaviour of overall termination (lazy evaluation should terminate more often).

I disagree with your point of view. Non-strictness is an essential feature of Haskell that any Haskell programmer should learn about soon. The use of an interleaving function interact helps in understanding non-strictness. It also shows that Haskell doesn't need a set of additional primitives to deal with IO (the IO monad): non-strictness can provide the basis for IO. You only need a single primitive function, interact, that connects your non-strict IO function to the external world. I do not claim that this IO model is the best for programming in the large, but you can learn a lot from it.

Surely something is also needed for endless character resources, as Tom pointed out. An interactive interact is fine for that.

Olaf
Re: interact behaves oddly if used interactively
Simon Marlow wrote:

> Malcolm Wallace writes:
>> But the whole purpose of 'interact' is to use its argument as the demanding function which drives lazy consumption of the input. It is *designed* to reveal the evaluation behaviour, by hoisting it into the I/O monad.
> This is why interact is bad, IMO: it forces you to think about the evaluation order. The evaluation order for Haskell is not part of the language definition - it is normally up to the implementation to pick a strategy. Except when you get to lazy I/O. The commonly accepted meaning for the lazy I/O operations forces the implementation to adopt a lazy evaluation strategy for values which require lazy I/O. For example, eager evaluation would be a completely valid implementation strategy for Haskell if it were not for lazy I/O.

Pardon? Haskell is a non-strict language. Using 'interact' is one of numerous situations where one takes advantage of non-strict semantics. (Keith just gave a different example.) Non-strict semantics does not prescribe the evaluation order, although usually lazy evaluation is used. I suppose you are talking about optimistic evaluation, which is a mixture of eager and lazy evaluation. That is fine and should work well with 'interact'; otherwise there is something wrong with optimistic evaluation. Certainly pure eager evaluation is not a valid evaluation strategy for Haskell.

Olaf
Re: interact behaves oddly if used interactively
Robert Ennals wrote:

>> No, optimistic evaluation does not work well with interact, because it causes the input stream to be evaluated (and therefore demanded) earlier than you would expect. This is the problem: interact exposes more than just non-strictness, it exposes laziness. In Robert Ennals' implementation of optimistic evaluation he has to fall back to lazy evaluation for lazy I/O, precisely because of this problem.
> Or to put it another way: optimistic evaluation works fine with interact. You can write programs that use interact, evaluate them optimistically, and they will behave exactly as they always did.

Good to hear that ;-)

I still do not quite agree with Simon that 'interact' exposes anything but non-strictness. Non-strictness means that

map toUpper _|_            = _|_
map toUpper ('a':_|_)      = 'A':_|_
map toUpper ('a':'b':_|_)  = 'A':'B':_|_

and 'interact (map toUpper)' is a great way to experience this property. However, you can also experience the property without 'interact', by evaluating expressions like

take 2 (map toUpper ('a':'b':undefined))

I suppose Simon finds it annoying that optimistic evaluation has to deal specially with 'interact' (or actually the primitive that really implements it). I do not find that surprising. When you define an evaluation strategy you have to define it for all language constructs, including special primitives. If many of the special primitives don't need special treatment, that is nice, but it cannot be expected in general.

Olaf
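The expression above runs as claimed: only the first two conses are demanded, so the undefined tail is never reached.

```haskell
import Data.Char (toUpper)

main :: IO ()
main =
  -- laziness stops after two elements; undefined stays untouched
  print (take 2 (map toUpper ('a' : 'b' : undefined)))
```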
Re: interact behaves oddly if used interactively
Simon Marlow wrote:

> Certainly you can observe non-strictness, that's not the point. The point is that you can also observe more than just non-strictness using interact, and I don't think that is desirable. For example:
>   interact (\xs -> let z = length xs in "Hello World\n")
> Now, Haskell is a pure language, so it shouldn't matter whether the implementation evaluates z or not, as long as it is careful not to violate the non-strict semantics by turning a terminating program into a non-terminating one. A parallel Haskell implementation might happily spawn off another thread to evaluate z, for example. An optimistic implementation might evaluate z for a fixed amount of time before continuing with the main thread of evaluation. BUT in the presence of lazy I/O this simply isn't true any more. Why? Because z is not pure; evaluating it has a side-effect. And yet it has type Int. Am I the only one who thinks this is wrong?

I don't know what everyone else thinks, but I do not see your problem. There is nothing impure about z. According to non-strict semantics we have

(\xs -> let z = length xs in "Hello World\n") _|_ = "Hello World\n"

So every implementation has to make sure that this equation holds. You know it is not enough for optimistic evaluation to 'avoid _|_' by evaluating eagerly only for a fixed amount of time. You also have to bail out if evaluation raises an exception, in particular if eager evaluation causes a black hole: re-evaluation at some later time may yield a perfectly fine value. Likewise, the presence of lazy IO forces you to test whether a value is currently available or needs to be read from some handle; the presence of lazy IO adds another sort of black hole. Yes, the presence of lazy IO makes optimistic evaluation more complicated, but I do not see that it compromises the purity of the language in any way (whatever purity is ;-).

Olaf
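The equation above holds in any Haskell implementation; a one-line check:

```haskell
main :: IO ()
main =
  -- z = length xs is never demanded, so the undefined argument is harmless
  putStr ((\xs -> let z = length xs in "Hello World\n") undefined)
```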
Re: interact behaves oddly if used interactively
Robert Ennals wrote:

> It is wrong for all the same reasons that unsafePerformIO is wrong, except that it is worse than that, because unsafePerformIO has "unsafe" in the title, and people are discouraged from using it without due care. By contrast, interact and getContents are presented as nice friendly functions that everyone should use. If I had my way, getContents would be called unsafeGetContents and interact would be called unsafeInteract. Even better would be for them both to be removed from Haskell completely :-))

I agree that getContents can easily cause complex and unwanted IO behaviour and that users should be warned about it. interact probably only makes sense as the only IO action of a program. However, they are both not unsafe in the sense that they do not change the 'standard' semantics of Haskell. unsafePerformIO even undermines the type system.

[begin rant]
I'd be far more in favour of removing polymorphic seq and strictness annotations. Their presence changes the equational semantics of the language. Their presence makes most "theorems for free" invalid.
[end rant]

> But then, if I had my way, I would abandon lazy evaluation completely and fork a strict-by-default language derived from Haskell. I have spent sufficient time analysing lazy evaluation during my PhD to become convinced that it is the wrong evaluation strategy in 90% of cases, and that in the 10% of cases where it is needed, the program would be much clearer if this were stated explicitly in the program text. Haskell is a good language - pureness is good, type classes are good, monads are good - but laziness is holding it back. [end rant]

Olaf
Re: Predicativity
Nor do I know how Hugs and nhc manage to succeed! I don't think either of them do fancy hash-consing of types. Maybe Malcolm W knows? Malcolm is on holidays. nhc98 doesn't do anything fancy with types. | Since the type system is predicative, the leftmost instance | of `copy' is instantiated to (Int -> Int)^27 where t^1 = t and | t^(n+1) = t^n -> t^n. The type explodes The reason why nhc98 and Hugs succeed is quite simple: the type checker does not do a translation into an explicitly typed calculus like GHC. Hence no data structure holds on to the final type of the leftmost `copy'. The type checker just holds onto the current type of the current expression. The type of the leftmost `copy' is initially _a -> _a (_a is an internal type variable). The type of (copy # copy) is _b -> _b when visited, the type of (copy # copy) # copy is _c -> _c and so on. No explosion at all. I also applied my type checking algorithm of principal monomorphic typings to the example. The algorithm holds onto all intermediate typings, so that the user can see them. However, the principal typing of `copy' is just {} |- a -> a, so there is no problem here either. Hence I think Ralf's example exposes a problem with using an explicitly typed calculus in a compiler. GHC should probably use hash-consing. Any type view tool based on final types (instead of principal typings like mine) faces the same problem. Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Glasgow-haskell-bugs mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
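The exponential growth that Ralf's example exploits is the classic pairing phenomenon: each application can double the size of an instantiated type while the term grows only linearly. A tiny illustration (not Ralf's original program):

```haskell
-- Each application of pairUp doubles the size of the result type:
--   pairUp True          :: (Bool, Bool)
--   pairUp (pairUp True) :: ((Bool, Bool), (Bool, Bool))
-- A compiler that records these instantiated types explicitly needs space
-- exponential in the nesting depth, unless it shares (hash-conses) the
-- repeated subtypes; inference itself only ever handles small types.
pairUp :: a -> (a, a)
pairUp x = (x, x)

big :: ((Bool, Bool), (Bool, Bool))
big = pairUp (pairUp True)
```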
Re: how to track down infinite loop?
David Roundy wrote: Any ideas for tricks to see where a program is looping indefinitely? I'm sure I can track down this bug pretty easily, but it seems like this is something one really ought to be able to do... May I answer your question by advertising the Haskell tracer Hat? http://www.cs.york.ac.uk/fp/hat/ The various trace viewing tools of Hat show you your computation in various ways and thus help you in particular to locate a fault. In your case hat-trail is the right tool. Just start hat-trail with the trace of your looping computation (you may interrupt the computation early to save time and trace space; but note that a traced computation is considerably slower). Then repeatedly press enter to see the whole virtual stack. Each stack entry shows a function with its arguments in most evaluated form. I should point out that Hat works for Haskell 98; it supports only a few libraries and no ghc language extensions. Real soon we will release a new version supporting more libraries, a few language extensions (multiparameter classes, functional dependencies) and some other improvements. Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Problem with hierarchical libraries in Hugs compared to ghc/nhc98
Ross Paterson wrote: On Fri, Mar 07, 2003 at 02:35:47PM +, Olaf Chitil wrote: The simple reason why Hugs behaves so is that when searching a module it *always* searches the current directory (* where the import was demanded *) first. Only afterwards the paths set with the -P option are searched. This is a clear flaw in Hugs (long known, but hard to fix). A workaround is to add the -X option to stop it adding the extra directory. Thank you very much for pointing me to the -X option which I had not noticed before. Why is it hard to fix if -X actually exists? Why not have -X as default? Is the current default behaviour good for anything? I still suggest putting some semantical description into the hierarchical libraries extension document, so that people like me do not jump to wrong conclusions because of the similarity with directory structures. Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Ambiguous type variable
Jorge Adriano wrote: This works fine, as expected f :: (Num a, Random a) => Int -> [a] f = randomRs (0,1) . mkStdGen If I skip the type signature, though, I get the following error messages: ... You just tripped over the infamous monomorphism restriction of Haskell: A variable that is defined like a constant (no arguments before =) is not allowed to be overloaded (have a class context), except when the type of the variable is explicitly given. Maybe it would be possible to generate a more informative type error message in such cases? Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
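For readers unfamiliar with the restriction, here is a self-contained sketch mirroring Jorge's example (it assumes the random package, which provides System.Random):

```haskell
import System.Random (Random, mkStdGen, randomRs)

-- With an explicit signature the class context is allowed:
withSig :: (Num a, Random a) => Int -> [a]
withSig = randomRs (0, 1) . mkStdGen

-- A function binding (explicit parameter) also escapes the restriction,
-- so no signature is needed here:
asFunction seed = randomRs (0, 1 :: Int) (mkStdGen seed)

-- But  noSig = randomRs (0,1) . mkStdGen  without a signature is rejected:
-- the binding may not keep its (Num a, Random a) context, and Random has
-- no default type that could resolve the resulting ambiguity.
```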
Re: do notation and (>>)
I do not understand what full laziness has to do with all this! The big question is, in the following: f = do expr1 expr2 Should expr2 be shared among different calls to f? It is clear that expr1 will, but expr2 will not be shared, using the current translation used by GHC and Hugs. Well, if a compiler implemented full laziness, then both translations (with >>= and with >>) would share expr2... I think that it is worth warning Haskell users about the potential space leak you noticed, but I don't think that it should influence the decision on do and (>>). I don't believe that it will break many programs. How many programs produce large *input independent* output, that is not already literally in the source, in a caf with a long life-time? Unfortunately I'm not even sure that your warning should be added to the Haskell report, because the report says hardly anything about sharing and space usage. I believe even a call-by-name implementation or a full laziness implementation would be fully Haskell 98 compliant. (I'm not happy about that either). Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Do-notation
I am certain that was not the intention of the authors. So one possibility is to change the second line to read do {e;stmts} = e >>= (\_ -> do {stmts}) and that is what I *propose* to do. However, James implies that in his monad (>>) has a different meaning than its usual one, and Haskell 98 allows that because (>>) is one of the class operations (not a good design choice I think). I'm quite reluctant to make the meaning of do-notation dependent on such differences. I disagree. Hugs should be changed, not the report. In my opinion (>>) is a class operation, because there may be more efficient implementations of (>>) than the default one (\x y -> x >>= \_ -> y). The do notation should profit from a more efficient implementation. I do think that James steps on very dangerous ground if his implementation of (>>) is semantically different from (\x y -> x >>= \_ -> y). This semantic equality is a law of the monad class, that users expect to hold. However, the language (implementation) cannot enforce it; in particular, compilers cannot use such laws for optimisations. The programmer has the freedom to break such laws. The situation is similar with classes such as Eq and Ord. You expect x /= y == not (x == y), but it is not enforced. Actually, the report currently doesn't say that the given default definitions are laws which are expected to hold (except where a comment says otherwise, see class Enum). I think such a statement should be added to the report. Happy Easter, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Do-notation
Ross Paterson wrote: Yes, but in that case the specific implementations are required to be denotationally equal to the default versions. And surely that was the original intention here. Section 6.3.6 of the Report needs an additional equation: m >> k = m >>= \_ -> k Then Hugs and GHC would be correct but suboptimal. I agree that this equation should be in the report, but note that there is no *requirement* that the monad laws hold for user defined instances. The report merely states that they *should* hold. Maybe we should be more clear what this word *should* means. I understand it as meaning that the term monad suggests that these laws hold, that a programmer should have very good reasons for breaking any of them and if a law is broken a big warning should be attached to the code, because any user of a monad expects it to fulfill the laws. However, because the laws cannot be checked by an implementation, no implementation is allowed to make use of them. In particular, the laws may accidentally have been broken and if an implementation makes use of the laws, then the behaviour of the resulting computation may be completely incomprehensible. (Just to make clear that I don't advocate breaking these laws; but then I'm not a fan of monads anyway...) Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
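The proposed equation can at least be checked pointwise for any concrete monad; for the list monad, for example, the specialised (>>) and the default definition agree:

```haskell
-- For the list monad, (>>) coincides with the default  m >>= \_ -> k,
-- exactly as the proposed Report equation demands:
lawHolds :: Bool
lawHolds = ([1, 2] >> "ab") == ([1, 2] >>= \_ -> "ab")
-- both sides are "abab"
```

Such pointwise tests cannot prove the law for a user-defined monad, which is precisely why no implementation may rely on it.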
Re: Haskell slicing tool?
[EMAIL PROTECTED] wrote: Are there any tools to perform program slicing on Haskell? I often find myself wanting to find all fromJusts invoked from the current function, or all functions that use a particular member of my monad's ADT. Actually I was hoping for a static tool. I'd be interested to learn for what purpose you want this kind of information and also more precisely what you would like such a static tool to do for you. Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: using less stack
with the cpsfold I get f x1 a (\y1 -> f x2 y1 (\y2 -> f x3 y2 (\y3 -> f x4 y3 (\y4 -> y4)))) so x1 and a are available immediately for f to use, and f x1 a is the outermost expression so will be evaluated first. Yes, however, if f just calls its continuation without forcing the evaluation of at least its second argument, e.g. f x y k = k (g x y) you get f x1 a (\y1 -> f x2 y1 (\y2 -> f x3 y2 (\y3 -> f x4 y3 (\y4 -> y4)))) = f x2 (g x1 a) (\y2 -> f x3 y2 (\y3 -> f x4 y3 (\y4 -> y4))) = f x3 (g x2 (g x1 a)) (\y3 -> f x4 y3 (\y4 -> y4)) = f x4 (g x3 (g x2 (g x1 a))) (\y4 -> y4) = g x4 (g x3 (g x2 (g x1 a))) like the foldr. larger x y = if x > y then x else y cpslarger x y k = if x > y then k x else k y Yes, with this definition of `cpslarger' no stack is used, because the comparison forces evaluation. With cpslarger x y k = k (larger x y) it does not work. Still, if your definition of cpslarger is natural for your application, it is a nice solution of the problem. Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
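A runnable version of the idea, with an assumed definition for cpsfold (the thread quotes only its unfoldings, not the function itself):

```haskell
-- A fold where the combining function receives an explicit continuation.
-- If f forces its second argument before calling k, results are consumed
-- as they are produced and no chain of unevaluated applications builds up.
cpsfold :: (a -> b -> (b -> b) -> b) -> b -> [a] -> b
cpsfold _ z []       = z
cpsfold f z (x : xs) = f x z (\y -> cpsfold f y xs)

-- The good version from the thread: the comparison forces both arguments.
cpslarger :: Ord a => a -> a -> (a -> a) -> a
cpslarger x y k = if x > y then k x else k y

maxOf :: [Int] -> Int
maxOf = cpsfold cpslarger 0
-- The bad version,  cpslarger' x y k = k (larger x y),  merely rebuilds
-- the foldr-like tower of unevaluated applications of `larger'.
```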
Re: n+k patterns
In any case, I propose to change Integral to Ord and Num. I agree. And nhc98 seems to actually implement this. Nonetheless I find using n+k patterns for floating point numbers pretty horrible. And it raises the question why k cannot be a rational ... But then n+k patterns are a wart anyway. Btw., in 3.17.2 Informal Semantics of Pattern Matching the end of the following sentence should be changed: Matching a non-_|_ value x against a pattern of the form n+k (where n is a variable and k is a positive integer literal) succeeds if x >= k, resulting in the binding of n to x-k, and fails if x < k. There is no guarantee that if x >= k == False then x < k == True. So the sentence should end ..., and fails otherwise. Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
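The wording matters for types where >= and < are not exact opposites (floating point NaN, for instance). Writing the desugaring of 3.17.2 out by hand, without using n+k patterns themselves:

```haskell
-- Hand-desugared version of   f (n+2) = n * n;  f _ = 0
-- following 3.17.2, with the proposed "fails otherwise" wording:
f :: Int -> Int
f x | x >= 2    = let n = x - 2 in n * n
    | otherwise = 0
-- For a Float argument that is NaN, both  x >= k  and  x < k  are False,
-- so the wording "fails if x < k" would describe neither branch.
```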
Haskell 98 report; D Specification of Derived Instances
Just before Section D.1 there is the sentence When inferring the context for the derived instances, type synonyms must be expanded out first. I don't understand it. Which type synonyms need expansion? All the u_n are type variables. Besides, this would make deriving even more horrible than it is. It would require knowledge about type synonyms. I thought that it should be possible to derive instances, only knowing the type definition with its deriving clause. Furthermore the next sentence in D reads: Free names in the declarations d are all defined in the Prelude; the qualifier `Prelude.' is implicit here. Well, `Prelude' does not necessarily refer to the builtin prelude. I suppose the sentence should actually be the same as the statement at the beginning of Chapter 3 (Expressions). Naturally this makes clear that it is actually impossible to correctly build a tool like DrIft. There is no safe way in Haskell to refer to the builtin prelude... Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
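A guess at what the Report's synonym sentence might be about (the synonym and all names here are invented): if a field type is a synonym whose expansion drops a type variable, inferring the instance context from the unexpanded field would produce a superfluous constraint.

```haskell
-- Hypothetical synonym that ignores its argument:
type Phantom a = Int

data T a = MkT (Phantom a) deriving Eq

-- Context inference over the unexpanded field type would suggest a
-- constraint involving 'a'; after expansion the field is just Int and
-- the derived instance  instance Eq (T a)  needs no context at all:
ok :: Bool
ok = MkT 1 == (MkT 1 :: T ())
```

This reading would indeed mean, as the message complains, that deriving cannot be done from the data declaration alone: the synonym definitions must be known too.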
A Haskell specific preprocessor
I faintly remember that there was once work on a Haskell specific preprocessor. Why was the work abandoned? cpp just annoyed me again. I wrote (/*) = something and cpp complained Test.hs:4: unterminated comment Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: understanding type errors - was: Compiler error using IO
Bjorn Lisper wrote: ... Is there any work being done on intelligent error messages for higher-order implicitly typed languages? One could perhaps come up with search strategies that tries to find a minimal number of program transformations (edit distance) that yields a syntactically correct and well-typed program. For instance, in the case of type errors one could try to search for the minimal number of definitions that have to be removed to make the remaining program well-typed, and then report the removed definitions as likely sources of the errors. Has anyone done this? Quite a number of people have published work on helping the programmer to locate the source of type errors in Hindley-Milner typed languages. I presented a paper about this subject at last ICFP. Mainly my paper is about a way of viewing the types of arbitrary subexpressions of a program. Note that you can only attach a type to an expression such as f x if you know the types of f and x. However, f x may appear in the definition body of f. You don't want cyclic explanations of types... I propose to have the user either browse freely through the typings of arbitrary expressions, or use a method called algorithmic debugging to locate the source of a type error. The algorithmic debugger asks the user if certain typings for certain expressions are correct according to his/her intentions. After some yes/no answers the debugger gives the location of the error. Other people have developed methods for more intelligent error messages, although not exactly in the way you propose. Have a look at the Related Work Section of my paper. You can get it from my web page (URL below). I am a bit doubtful about the practical usefulness of intelligent error messages. It is easy to create examples where a program can be made type correct by one of many rather different small transformations, all of similar complexity. 
I believe such examples do occur in practice and hence knowledge of the intended semantics is necessary to locate errors. However, with all the many proposals for better error location, there is definitely a lack of practical evaluation. Basically for each proposed method there exists only a toy implementation for a toy language, each method for a different language (and most of these implementations don't even seem to be available). I want to extend my prototype, replacing the type inference phase of nhc98, but I haven't yet had the time... Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Haskell 98 prelude bug report
The prelude contains instance Bounded Char where minBound = '\0' maxBound = '\xffff' I didn't follow all the discussion about unicode, but the maxBound seems to be wrong. It probably should be '\x10FFFF'. The exact specification could also be avoided by defining maxBound = primUnicodeMaxBound where primUnicodeMaxBound is imported from UnicodePrims. -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
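In current implementations the fix proposed here is what eventually happened; maxBound :: Char is the largest Unicode code point:

```haskell
import Data.Char (ord)

-- maxBound :: Char is U+10FFFF in GHC today, matching the proposal:
top :: Int
top = ord (maxBound :: Char)   -- 0x10FFFF = 1114111
```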
Re: Strange error in show for datatype
My claim was that forall a. Show a => T could be implemented by passing a bottom dictionary for Show. Excuse me, but Jan-Willem Maessen has already shown that this implementation can give unexpected results. A simple example: instance Show MyType where shows _ = ("element of my type" ++) Then show (undefined :: MyType) yields "element of my type" and with the suggested implementation show undefined would yield undefined The instance may look a bit weird, but in general you cannot assume that the functions of all instances of all classes are strict. In fact, some Haskell systems (e.g. nhc98) extend class Show by another method showsType :: a -> String; the function is non-strict in all instances. Anyway, I find all these suggestions about occurrence and variance of type variables rather complicated. Worse than the monomorphism restriction. Admittedly, many Haskell beginners fall into the `show []' trap. However, this is a chance for the teacher to discuss the problem (possibly before they fall into the trap). Any type system that prevents you from making some mistakes has to reject some perfectly sound programs (because the absence of mistakes is not decidable). Hopefully the type system is simple enough to enable the programmer to understand why a program is rejected. And fortunately in this special case there is a simple way around the problem (specify a type). By the way, I'm sure some months (a year?) ago there was a similar discussion about replacing dictionaries by bottom dictionaries on a Haskell mailing list or comp.lang.functional. Unfortunately I don't find it anymore. Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
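The counterexample from the message, written out so it can be run (MyType is a stand-in name):

```haskell
data MyType = MyType

-- A Show instance that never inspects its argument:
instance Show MyType where
  showsPrec _ _ = showString "element of my type"

-- Because the instance is non-strict, even an undefined value shows;
-- a bottom dictionary would instead make this diverge:
demo :: String
demo = show (undefined :: MyType)   -- "element of my type"
```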
Re: let/where
Yes, Haskell is a rather big language and unfortunately has much redundancy. It seems mostly to be a matter of taste which of `let' and `where' you prefer. Personally, I nearly always use `where' when possible (for the same reason you give); `let' only if the `where' would be too far away. However, for your example I would use `case': main = putStrLn . show $ case maybe_read word of Nothing -> DP_Unknown; Just index -> DP_Number index (Another matter of taste: I don't like `_' in identifiers, but use capital letters like the Haskell Prelude) I was disappointed to find that I don't seem to be able to write things like, main = let maybe_index = maybe_read word in putStr (show (if maybe_index == Nothing then DP_Unknown else DP_Number index) ++ "\n") where (Just index) = maybe_index No, the `let' is in the scope of the `where', but the `where' is not in the scope of the `let'. Expressions are nested and allowing mutual recursion here would make things really confusing. Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
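The Nothing/Just dispatch can also be written with the Prelude's maybe combinator; DP_Unknown, DP_Number and maybe_read are reconstructed from the quoted code, with a plausible definition of maybe_read via reads:

```haskell
data DP = DP_Unknown | DP_Number Int deriving Show

-- A plausible definition of the poster's maybe_read (assumption, not
-- given in the thread):
maybe_read :: String -> Maybe Int
maybe_read s = case reads s of
                 [(n, "")] -> Just n
                 _         -> Nothing

classify :: String -> DP
classify word = maybe DP_Unknown DP_Number (maybe_read word)
```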
Re: The future of Haskell discussion
Here is a short summary by Malcolm and me of the final discussion at the Haskell workshop: First Simon Peyton Jones stated that the Haskell'98 Report continues to be revised in small ways for correctness and readability. He will continue this until a whole month passes with no further changes, then issue a finalised new version (so send bug reports now). John Launchbury and many further people made a plea that the single biggest hindrance to the further spread of Haskell is the lack of a standard cross-platform GUI. Alas, no answer to the problem was found. There is no agreement which (existing) library could be the basis of a standard one and nobody wanted to commit himself to developing and supporting such a library. Well, Manuel Chakravarty promised to continue developing the GTK+ binding and would be happy about people helping him. (The GUI library presented at the workshop is not intended to solve the standard GUI problem.) Rather than starting work on a successor to Haskell, people generally want to see `blessed addendums' to the Report, for common agreed extensions, e.g. FFI, exceptions, concurrency, MPTC, fundeps, etc. The FFI addendum is almost ready, but any others need volunteers, not to mention agreement between designers and implementations. The idea is that a solid reference definition/documentation of each extension should exist independent of any particular compiler (but there is a lack of volunteers). It was noted that the future job of designing Haskell-2 will be made much easier by many small steps rather than one large effort. My personal comment about application letters: I think your idea of a poster session with abstracts in the proceedings is very good. In fact, anybody already had the possibility to give a 10 minute talk. But there was no special invitation to talk about applications and an abstract would certainly be useful for people not attending or future reference.
Here is a list of the 10 minute talks given at the workshop (Ralf, maybe you can put them on the workshop web page?): Simon Marlow: Haskell Libraries, The Next Generation Presentation of the hierarchical structure. Need more libraries and maintainers for them. Would like to have HaskellDoc and some standard testing method. Mark Shields: Lightweight Modules for Haskell Shortly stated that he is working on a new module system and would like every interested person to join. Wolfram Kahl: Animating Haskell by Term Graph Rewriting HOPS is a system for term graph rewriting that can also show the rewriting steps in a graphical animation. Similar to GHood but all intermediate redexes are shown. Translation of Haskell programs into HOPS by hand. Useful especially for understanding space leaks. Martin Sulzmann: TIE: A CHR-based Type Inference Engine He reformulates context simplification of Haskell classes in the constraint handling rule formalism. This gives a flexible framework for all kinds of extensions and variants of class systems. Manuel Chakravarty: A standard Foreign Function Interface for Haskell98 Basically finished (general part and C, not for Java). Solicits last comments from the general community. -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Haskell 98 Report possible errors, part one
Unfortunately both the old and the new situation are not so nice. Both don't allow a simple translation of Haskell into the Haskell kernel, e.g. you cannot translate [1..] into Prelude.enumFrom 1, because the latter may be ambiguous. The following remark at the beginning of Section 3 is misleading: Free variables and constructors used in these translations refer to entities defined by the Prelude. To avoid clutter, we use True instead of Prelude.True or map instead of Prelude.map. (Prelude.True is a qualified name as described in Section 5.3.) It implicitly suggests that a simple translation is possible. Unfortunately I don't see any simple way to regain a simple translation. Hence I just suggest to change the remark at the beginning of Section 3. Just say that all free variables and constructors refer to entities defined by the Prelude and warn that full qualification is in general not sufficient to achieve this (because the entity may not be imported and because of import .. as). Ciao, Olaf Marcin is right about this. It is inconsistent as it stands. I propose to delete the sentence The Prelude module is always available as a qualified import... in the first para of 5.6.1. The situation will then be: if you don't import Prelude explicitly, you implicitly get import Prelude if you do import Prelude explicitly, you get no implicit imports Nice and simple | 5.6.1. an implicit `import qualified Prelude' is part of | every module and names prefixed by `Prelude.' can always be | used to refer to entities in the Prelude. So what happens in | the following? | | module Test (null) where | import Prelude hiding (null) | null :: Int | null = 0 | | module Test2 where | import Test as Prelude | import Prelude hiding (null) | x :: Int | x = Prelude.null | | ghc allows that, it doesn't seem to implement the qualified | part of the implicit Prelude import.
The report is | contradictory: adding `import qualified Prelude' makes | Prelude.null ambiguous, and thus names prefixed by `Prelude.' | can't always be used to refer to entities in the Prelude. -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Haskell 98 Report possible errors, part one
The report is vainly trying to say that, regardless of what is lexically in scope, the builtin syntax refers to Prelude entities. Perhaps I should reword the offending paragraph to say: Free variables and constructors used in these translations refer to entities defined by the Prelude, regardless of what variables or constructors are actually in scope. For example, concatMap used in the translation of list comprehensions (Section 3.11) means the concatMap defined by the Prelude, regardless of whether or not concatMap or Prelude.concatMap are in scope. Would that be better? You can probably delete the first , regardless ... subsentence for better readability. Maybe you should add and refer unambiguously to the Prelude. to the end of the last sentence? Or does the report state somewhere that being in scope includes not being ambiguous? Unfortunately, as far as I see, the report does not even explain what it means for an identifier to be ambiguous. -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
Re: Negatively recursive data types
Lars Henrik Mathiesen wrote: From: Keith Wansbrough [EMAIL PROTECTED] Hi... I'm currently looking at the semantics of recursive data types. One thing that Haskell allows, but the semantics for it is very hairy, is *negatively* recursive data types. That is, data types where the recursion occurs to the left of a function arrow. For example: This example is not a very good one. Does anyone have an example of a useful data type involving negative recursion? data Fix a = F (Fix a -> a) You can find this and some related examples in Erik Meijer and Graham Hutton. Bananas in space: Extending fold and unfold to exponential types. In Proceedings of the Seventh International Conference on Functional Programming Languages and Computer Architecture (FPCA'95), pages 324-333. ACM Press, 1995. The paper also discusses some hairy details of recursive data types ;-) Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
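One concrete demonstration of the power such types buy (a standard construction, not from the message): with Fix one can write a fixpoint combinator without any recursive definition, something impossible in a language with only positive recursion.

```haskell
newtype Fix a = F (Fix a -> a)

unF :: Fix a -> (Fix a -> a)
unF (F f) = f

-- A fixpoint combinator defined without recursion; only the negative
-- occurrence of Fix in its own definition makes the self-application
-- 'unF x x' typeable.
fix' :: (a -> a) -> a
fix' f = g (F g) where g x = f (unF x x)

fact :: Integer -> Integer
fact = fix' (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```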
Re: Why is there a space leak here?
David Bakin wrote: That's a very nice visualization - exactly the kind of thing I was hoping for. I grabbed your papers and will look over them for more information, thanks very much for taking the trouble! The animations you sent me - and the ones on your page - are really nice; it would be nice to have a system like HOPS available with GHC/GHCi (I understand it is more than the visualization system, but that's a really useful part of it). (I also found out that Hood didn't help for this particular purpose - though now that I see how easy it is to use I'll be using it all the time. But it is carefully designed to show you (observe) exactly what has been evaluated at a given point in the program. Thus you can't use it (as far as I can tell) to show the data structures that are accumulating that haven't been processed yet - which is what you need to know to find a space problem.) You can use GHood, http://www.cs.ukc.ac.uk/people/staff/cr3/toolbox/haskell/GHood/ . It animates the evaluation process at a given point in the program. It doesn't show unevaluated expressions, but for most purposes you don't need to see them. Most important is to see when which subexpression is evaluated. (an older version of Hood, which is distributed with nhc, also has an animation facility) At some time we also hope to provide such an animation viewer for our Haskell tracer Hat. The trace already contains all the necessary information. Ciao, Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: An attempt at foldr/build fusion for zip
Hi Matt, I think I may have found a way to get zip and friends to fuse with *both* of their input lists. ... I have no idea what kind of code this would actually end up creating. However, that is the important point. The goal of deforestation/fusion is to optimise a program; removing data structures is not the final goal. In principle you can replace all algebraic data types by higher-order functions (at least with the second-order types that ghc allows). You just don't gain anything by doing it. I'm sorry that I don't have the time to look into your definition in detail. But basically you replace the intermediate list by higher-order functions. Your foldr2_both does about the same amount of work as foldr2. Fusion of foldr2 with both arguments should give an expression without any recursively defined function (i.e. any foldr variant). Btw: the real benefit of deforestation does not come from saving the time for constructing and destructing the intermediate list. The real benefit comes from moving the code for the construction of an element next to the code for destructing the element, which usually enables many further optimisations. A solution to the zip fusion problem is presented by John Launchbury, Sava Krstic, and Tim Sauerwein in: http://www.cse.ogi.edu/PacSoft/publications/phaseiiiq13papers/zipfusion.pdf I haven't yet looked into it in detail. Some problems with this approach are mentioned in the paper and I suppose they are the reason why the approach is not used in ghc.

\begin{code}
newtype BuildZip a b = BZ ((a -> BuildZip a b -> b) -> b)

bz :: (forall b. (a -> b -> b) -> b -> b) -> b -> BuildZip a b
bz f n = f (\x xs -> BZ (\c -> c x xs)) (BZ (\_ -> n))
{-# INLINE bz #-}

foldr2_both :: (a -> b -> c -> c) -> BuildZip a c -> BuildZip b c -> c
foldr2_both k (BZ xs) (BZ ys) =
  xs (\x xs' -> ys (\y ys' -> k x y (foldr2_both k xs' ys')))

{-# RULES
"foldr2/both" forall k n (f :: forall z. (a -> z -> z) -> z -> z)
                         (g :: forall z. (b -> z -> z) -> z -> z) .
  foldr2 k n (build f) (build g) = foldr2_both k (bz f n) (bz g n)
 #-}
\end{code}

Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
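For readers who have not seen foldr2: it is the two-list fold that underlies zip in GHC's fusion framework. The following is only a sketch (GHC's actual definitions in GHC.List differ in detail), but it shows the function whose fusion with *both* list arguments is the problem discussed above:

```haskell
-- A sketch of foldr2, the two-list analogue of foldr; it stops as soon
-- as either list runs out, just like zip.
foldr2 :: (a -> b -> c -> c) -> c -> [a] -> [b] -> c
foldr2 k z (x:xs) (y:ys) = k x y (foldr2 k z xs ys)
foldr2 _ z _      _      = z

-- zip expressed via foldr2; fusing this with builds on both sides is
-- exactly what foldr2_both above is after.
zip' :: [a] -> [b] -> [(a, b)]
zip' = foldr2 (\x y r -> (x, y) : r) []
```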
Re: Eq instance for (a,b,c,d,e) and upwards
I think that this discussion about tuples shows that there is a problem with the Haskell 98 report which should be corrected. I hope Simon accepts it as a minor bug fix. 1) The current wording of paragraph 6.1.4 is highly ambiguous. 6.1.4 Tuples Tuples are algebraic datatypes with special syntax, as defined in Section 3.8. Each tuple type has a single constructor. There is no upper bound on the size of a tuple. However, some Haskell implementations may restrict the size of tuples and limit the instances associated with larger tuples. The Prelude and libraries define tuple functions such as zip for tuples up to a size of 7. All tuples are instances of Eq, Ord, Bounded, Read, and Show. Classes defined in the libraries may also supply instances for tuple types. May the limit for the size of tuples and the limit for instances be different? Do tuples exist at least for a size up to 7? What is the purpose of the last sentence? We should decide what we want, and the paragraph should then state it clearly. 2) The arbitrary restrictions that Haskell implementations are permitted to apply basically imply that you shouldn't use tuples at all if you want your code to be portable. Maybe there isn't even a Show instance for ()? So I think the report should demand that tuples and instances for Eq, Ord, Bounded, Read and Show be implemented for at least up to a size of a given number n. 3) I propose this number n to be 7. This fits well with the fact that zip etc. are defined for tuples up to a size of 7. I agree with Martin that you should use a labelled record or at least a type of your own instead of large tuples anyway. Programs that generate code should also generate definitions for data types, if large product types are needed. Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767 ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
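For concreteness, the instances the report mandates are the obvious componentwise ones. A hand-written Eq instance for a three-component product might look like the sketch below (shown for a hypothetical MyTriple type to avoid clashing with the built-in instance for (,,); real implementations derive these):

```haskell
-- A stand-in for the built-in triple type (,,).
data MyTriple a b c = MyTriple a b c

-- Componentwise equality, as the report's derived instances behave.
instance (Eq a, Eq b, Eq c) => Eq (MyTriple a b c) where
  MyTriple x1 y1 z1 == MyTriple x2 y2 z2 =
    x1 == x2 && y1 == y2 && z1 == z2
```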
Re: Literate Programming
I agree with Mark that literate programming is messed up in Haskell. I think that it is even worse than he says and hence I don't use it anymore at all. Mark P Jones wrote: The literate programming conventions using leading `>`'s (also known as "Bird tracks" or the "inverted comment convention") go back to Orwell, ... This was the only commenting convention in Orwell. It is very interesting that using `>`'s was the *only* commenting convention in Orwell. In contrast, Haskell has `>`'s and \begin{code}...\end{code}, but also the more conventional -- and {- ... -}. As Mark pointed out so clearly, the idea of literate programming is to have program and documentation in one file and be able to generate various forms of output from it. You can do this from Haskell programs that use -- and/or {- ... -} just as well. The only slight advantages of the inverted commenting convention are that it is easier to parse and that it stresses the importance of the comments. Personally I prefer braces such as \begin{code}...\end{code} and {- ... -} for longer program/comment snippets. So I end up using mostly {- ... -}. (Markup is often required for text too; use whatever seems appropriate (and sufficiently general) for the application you have in mind.) Here is a problem: which markup language should you use? LaTeX, because you want to generate LaTeX documents? A markup language that your generator tool translates into any other language like LaTeX, HTML, ...? Cheers, Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767
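For readers who have not used literate Haskell: the two conventions under discussion look like this in a .lhs file (illustrated in comments here; the actual definition below is plain, non-literate Haskell so the snippet is self-contained):

```haskell
-- Bird-track style: code lines in a .lhs file carry a leading '>',
-- everything else is prose:
--
--   This sentence is documentation.
--   > increment :: Int -> Int
--   > increment x = x + 1
--
-- LaTeX style instead brackets code blocks explicitly:
--
--   \begin{code}
--   increment :: Int -> Int
--   increment x = x + 1
--   \end{code}
--
-- The same definition in an ordinary .hs file:
increment :: Int -> Int
increment x = x + 1
```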
Re: redundant constraint
A constraint in a data type definition does not do much. It only restricts the type of the data constructor(s), e.g. here FooType :: Foo a => a -> FooType a. Functions that use the data type FooType a cannot take any advantage of the constraint. There has been a long discussion on this issue on the Haskell 98 committee. However, in the end it was left as unsatisfactory as it is, because giving a constraint in a data type definition the semantics that you would like would require considerable changes to Haskell. John Hughes wrote a proposal for a better semantics of constraints in data type definitions: Restricted Datatypes in Haskell, http://www.md.chalmers.se/~rjmh/Papers/restricted-datatypes.ps From that paper you will see that the semantics you would like to have is not that easy to obtain. Olaf Zhanyong Wan wrote: The following piece of code was rejected by Hugs:

class Foo a where
  write :: a -> String
  write _ = "I'm a foo!"

data Foo a => FooType a = FooType a

writeFoo :: FooType a -> String
writeFoo (FooType a) = write a

ERROR "Test.hs" (line xx): Cannot justify constraints in explicitly typed binding
*** Expression: writeFoo
*** Type : FooType a -> String
*** Given context : ()
*** Constraints : Foo a

By changing the type signature for writeFoo to: writeFoo :: Foo a => FooType a -> String or just deleting the signature, I could get it to compile. What I don't understand is, since the constraint "Foo a" is redundant here (Hugs should be able to figure it out from the line "data Foo a => FooType a = FooType a"), why doesn't Hugs allow it to be omitted? After all, there could not really be any "FooType a" where a is *not* an instance of Foo. Could anybody shed a light on this? Thanks. -- Zhanyong Wan -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767
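A minimal sketch of the practical workaround the answer implies: drop the datatype context entirely (it buys nothing) and put the constraint on the functions that actually use the class methods. The Foo Int instance below is only there to make the sketch runnable; it relies on the class's default method.

```haskell
class Foo a where
  write :: a -> String
  write _ = "I'm a foo!"

-- No context on the data declaration; it would not help anyway.
data FooType a = FooType a

-- The constraint goes on the function that uses the class method.
writeFoo :: Foo a => FooType a -> String
writeFoo (FooType a) = write a

-- Hypothetical instance for demonstration, using the default method.
instance Foo Int
```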
3rd Invitation to IFL2000
This is the third and final invitation and call for contributions to the 12th INTERNATIONAL WORKSHOP on the IMPLEMENTATION of FUNCTIONAL LANGUAGES (IFL2000) to be held at RWTH Aachen Aachen, Germany Monday, Sept 4th - Thursday, Sept 7th 2000 email: [EMAIL PROTECTED] http://www-i2.informatik.rwth-aachen.de/ifl2000 /\ | Deadline for registration: Aug 1st 2000! Six days left to register | \/ This invitation is being sent out to participants of previous workshops and to other researchers who we thought might be interested. The announcement will also be sent to comp.lang.functional and other appropriate newsgroups. Please forward to anyone who you think might be interested in attending the workshop. The IFL workshops are a tradition that has lasted for over a decade. The aim of these workshops is to bring together researchers actively engaged in the implementation and application of functional programming languages to discuss new results and new directions of research. Workshop Topics --- The workshop is intended to provide an open forum for researchers who wish to present and discuss new ideas and concepts, work in progress, preliminary results etc. related primarily but not exclusively to the implementation of functional or function-based languages. A not necessarily exhaustive list of topics includes * language concepts * type checking * compilation techniques * (abstract) interpretation * automatic program generation * (abstract) machine architectures * array processing * concurrent/parallel programming and program execution * heap management * runtime profiling and performance measurements * debugging and tracing * tools and programming techniques Contributions on applications of functional programming, e.g., in teaching, or on theoretical work in any of the above areas are also welcome. 
Special Erlang Session -- The purpose of this session is to bring together researchers in the field of functional programming with industrial users of the programming language Erlang. It will feature talks and demonstrations of product examples written in Erlang. This session is sponsored by Ericsson Computer Science Laboratory and free of charge for the participants. For contributions and more information contact: Thomas Arts Computer Science Lab Phone: +46 8 7273000 ÄL2/UAB/F/R Fax: +46 8 7275775 Ericsson Utvecklings AB [EMAIL PROTECTED] Box 1505, SE-125 25 Älvsjö Sweden Contributions - All attendees are encouraged to submit papers to be published in the draft proceedings and to give presentations at the workshop. Submitted papers must be written in English and not exceed 16 pages. Papers should be submitted by email to [EMAIL PROTECTED] and conform to the LNCS format. The draft proceedings will be published as a Technical Report of the Computer Science Department. We expect to follow the lead of the last three IFL workshops in publishing a high-quality subset of contributions in the Springer LNCS series. All speakers attending the workshop will be invited to submit a paper. Papers for the LNCS proceedings will be refereed according to normal conference standards. Dates: Jun 27th - Aug 1st 2000: Registration; Aug 7th 2000: Submission for Draft Proceedings; Sept 4th - Sept 7th 2000: Workshop.

Schedule of Events
--
          | Sep 4 (Mon)  | Sep 5 (Tue)  | Sep 6 (Wed)   | Sep 7 (Thu)
----------+--------------+--------------+---------------+----------------
morning   |              | IFL sessions | IFL sessions  | IFL sessions
----------+--------------+--------------+---------------+----------------
afternoon | IFL sessions | IFL sessions | IFL sessions  | Erlang session
----------+--------------+--------------+---------------+----------------
evening   |              |              | social event/ |
          |              |              | dinner        |

Fees and Accommodation -- The workshop fee is EUR 130,- and includes lunch on all days of the workshop, a copy of the draft proceedings, and a social event.
For accommodation, we have arranged a special discount of DM 112,90 (EUR 57.72) per night for a single room and DM 127,90 (EUR 65.39) for a double room (both including breakfast) in the hotel ibis Aachen Normaluhr, in the centre of Aachen. Due to an error by the hotel, this is only available during the period Mon, 4th to Thu, 7th. If you plan to arrive earlier or leave later, you have to arrange accommodation on your own.
Re: Haskell libraries, support status, and range of applicability(was:Haskell jobs)
I have never submitted any request to the maintainers of www.haskell.org to place links to my modules. Yet, several of them are there, and that tells me that either John or Olaf does an occasional scan of messages from this list and updates the pages from time to time. The latter is exactly what I do and how 99% of the information about tools and libraries was collected. The main page of haskell.org asks for information about projects, compilers, papers, classes, or anything else, but we hardly receive any. This is why I am rather wary about putting much effort into any automatic submission and updating scheme (maybe I should put a simple form there which just forwards any information to John and me via email? ;-). And if you want to know if a library is still supported/further developed, then just click on the link and look at the web page for the library. If it talks about Haskell 1.2 and plans for 1995 you know what (not) to expect. Unfortunately many libraries and tools do not even have proper web pages to link to from haskell.org (Michal Weber, are you listening? ;-). Simon, can you tell me how I shall link to hslibs, especially each individual library? Obviously a user would also like to download a single library. Are the URLs stable? I do not want to sound too negative. I am always happy about suggestions for improving haskell.org (especially if they do not imply too much work ;-). And I completely agree that supported libraries should somehow be more visible. I didn't update haskell.org for a few months because I had to finish my thesis, move to York and start a new project here. But I will update the pages with all the stuff that accumulated in the meantime very soon. Cheers, Olaf -- OLAF CHITIL, Dept. of Computer Science, University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767
2nd Invitation to IFL2000
This is the second invitation and call for contributions to the 12th INTERNATIONAL WORKSHOP on the IMPLEMENTATION of FUNCTIONAL LANGUAGES (IFL2000) === to be held at RWTH Aachen Aachen, Germany Monday, Sept 4th - Thursday, Sept 7th 2000 email: [EMAIL PROTECTED] http://www-i2.informatik.rwth-aachen.de/ifl2000 This invitation is being sent out to participants of previous workshops and to other researchers who we thought might be interested. The announcement will also be sent to comp.lang.functional and other appropriate newsgroups. Please forward to anyone who you think might be interested in attending the workshop. The IFL workshops are a tradition that has lasted for over a decade. The aim of these workshops is to bring together researchers actively engaged in the implementation and application of functional programming languages to discuss new results and new directions of research. Workshop Topics --- The workshop is intended to provide an open forum for researchers who wish to present and discuss new ideas and concepts, work in progress, preliminary results etc. related primarily but not exclusively to the implementation of functional or function-based languages. A not necessarily exhaustive list of topics includes * language concepts * type checking * compilation techniques * (abstract) interpretation * automatic program generation * (abstract) machine architectures * array processing * concurrent/parallel programming and program execution * heap management * runtime profiling and performance measurements * debugging and tracing * tools and programming techniques Contributions on applications of functional programming, e.g., in teaching, or on theoretical work in any of the above areas are also welcome. Special Erlang Session -- The purpose of this session is to bring together researchers in the field of functional programming with industrial users on the programming language Erlang. 
It will feature talks and demonstrations of product examples written in Erlang. This session is sponsored by Ericsson Computer Science Laboratory and free of charge for the participants. For contributions and more information contact: Thomas Arts Computer Science Lab Phone: +46 8 7273000 ÄL2/UAB/F/R Fax: +46 8 7275775 Ericsson Utvecklings AB [EMAIL PROTECTED] Box 1505, SE-125 25 Älvsjö Sweden Contributions - All attendees are encouraged to submit papers to be published in the draft proceedings and to give presentations at the workshop. Submitted papers must be written in English and not exceed 16 pages. Papers should be submitted by email to [EMAIL PROTECTED] and conform to the LNCS format. The draft proceedings will be published as a Technical Report of the Computer Science Department. We expect to follow the lead of the last three IFL workshops in publishing a high-quality subset of contributions in the Springer LNCS series. All speakers attending the workshop will be invited to submit a paper. Papers for the LNCS proceedings will be refereed according to normal conference standards. Dates - Jun 27th - Aug 1st 2000: Registration; Aug 7th 2000: Submission for Draft Proceedings; Sept 4th - Sept 7th 2000: Workshop. Fees and Accommodation -- The workshop fee is EUR 130,- and includes lunch on all days of the workshop, a copy of the draft proceedings, and a social event. For accommodation, we have arranged a special discount in the hotel ibis Aachen Normaluhr, in the centre of Aachen. Previous IFL Workshops -- Lochem (1999), London (1998), St. Andrews (1997), Bonn (1996), Bastad (1995), Norwich (1994), Nijmegen (1993), Aachen (1992), Southampton (1991), Nijmegen (1990), Nijmegen (1989) Sponsors IFL2000 is generously sponsored by Ericsson Eurolab Deutschland GmbH debis Systemhaus Aachen The workshop is organised in cooperation with GI Fachgruppe 2.1.4. Location The workshop will be held in Schloss Rahe, an 18th-century castle in Aachen-Laurensberg.
The castle is now used as a business and conference site. Aachen is located in the western part of Germany, immediately at the borders to The Netherlands and Belgium. Like many other German cities, it was founded by the Romans about 2000 years ago. The Roman name Aquis Grana comes from the name of the Celtic god of water and healing Grannus and hints at the thermal springs found in Aachen. During the reign of Charlemagne, it was the capital of the German Empire. In 936, Aachen became the place of coronation for the Roman Kings of German Nation, a tradition which lasted for 600 years.
Invitation to IFL2000
This is the first invitation and call for contributions to the 12th INTERNATIONAL WORKSHOP on the IMPLEMENTATION of FUNCTIONAL LANGUAGES (IFL2000) === to be held at RWTH Aachen Aachen, Germany Monday, Sept 4th - Thursday, Sept 7th 2000 email: [EMAIL PROTECTED] http://www-i2.informatik.rwth-aachen.de/ifl2000 The IFL workshops are a tradition that has lasted for over a decade. The aim of these workshops is to bring together researchers actively engaged in the implementation and application of functional programming languages to discuss new results and new directions of research. Workshop Topics --- The workshop is intended to provide an open forum for researchers who wish to present and discuss new ideas and concepts, work in progress, preliminary results etc. related primarily but not exclusively to the implementation of functional or function-based languages. A not necessarily exhaustive list of topics includes - language concepts - type checking - compilation techniques - (abstract) interpretation - automatic program generation - (abstract) machine architectures - array processing - concurrent/parallel programming and program execution - heap management - runtime profiling and performance measurements - tools and programming techniques Contributions on applications of functional programming, e.g., in teaching, or on theoretical work in any of the above areas are also welcome. Contributions - All attendees are encouraged to submit papers to be published in the draft proceedings and to give presentations at the workshop. Submitted papers must be written in English and not exceed 16 pages. Paper submission will be possible through a web form; the details will be given at a later date. The draft proceedings will be published as a Technical Report of the Computer Science Department of RWTH Aachen. We expect to follow the lead of the three last IFL workshops in publishing a high-quality subset of contributions in the Springer LNCS series. 
All speakers attending the workshop will be invited to submit a paper. Papers for the LNCS proceedings will be refereed according to normal conference standards. Dates - Aug 1 2000: End of Registration; Aug 7 2000: Submission for Draft Proceedings; Sept 4th - Sept 7th 2000: Workshop. Accommodation and Fees -- Because industrial sponsorship is still pending, fees and accommodation are not yet fixed. We estimate that the conference fee, including everything, will be about EUR 320,-. Previous IFL Workshops -- Lochem (1999), London (1998), St. Andrews (1997), Bonn (1996), Bastad (1995), Norwich (1994), Nijmegen (1993), Aachen (1992), Southampton (1991), Nijmegen (1990), Nijmegen (1989) Location Aachen is located in the western part of Germany, immediately at the borders to The Netherlands and Belgium. Like many other German cities, it was founded by the Romans about 2000 years ago. The Roman name Aquis Grana comes from the name of the Celtic god of water and healing Grannus and hints at the thermal springs found in Aachen. During the reign of Charlemagne, it was the capital of the German Empire. In 936, Aachen became the place of coronation for the Roman Kings of German Nation, a tradition which lasted for 600 years. Today, Aachen is a medium-sized town with a beautiful historic town centre. RWTH Aachen (aka Aachen University of Technology) was founded as a Polytechnikum in 1870 with considerable support from local industry. It became a Technische Hochschule (Institute of Technology) in 1880 and was established in 1948 as Rheinisch-Westfaelische Technische Hochschule Aachen (RWTH), the Institute of Technology of the State of Nordrhein-Westfalen in the new Federal Republic of Germany. In the late 1960s, RWTH expanded into a full university by the addition of Faculties of Medicine, Humanities and Economics.

Organisation
Markus Mohnen                 Pieter Koopman
Lehrstuhl f. Informatik II    Computing Science Institute
RWTH Aachen                   Nijmegen University
Germany                       The Netherlands

Contact Person -- Markus Mohnen Lehrstuhl f. Informatik II RWTH Aachen D-52056 Aachen, Germany Phone: +49-241/80-21240 Fax: +49-241/-217 email: [EMAIL PROTECTED] Further Information --- Full details of the workshop will be provided at the following URL: http://www-i2.informatik.rwth-aachen.de/ifl2000
PhD Positions at RWTH Aachen
In the offered positions you have to do some teaching in German. Hence I didn't translate the advert into English. Informal inquiries may be made to me. In our group Haskell is used as implementation language and is a subject of research. - Am Lehrstuhl für Informatik II (Programmiersprachen und Verifikation) der Rheinisch-Westfälischen Technischen Hochschule Aachen sind baldmöglichst zwei Stellen für Wissenschaftliche Mitarbeiter/innen (Verg.-Gr. IIa BAT) zu besetzen. Die Beschäftigung ist zunächst auf 2 Jahre befristet, kann jedoch auf insgesamt 5 Jahre verlängert werden. Die Forschungsarbeiten des Lehrstuhls betreffen die Analyse von Programmiersprachen und Verteilten Systemen mit dem Ziel praktischer Anwendungen. Schwerpunkte bilden - die Modellierung und Verifikation Verteilter Systeme sowie - die Analyse, Transformation und Optimierung deklarativer und objektorientierter Programme. Dabei werden als Methoden vor allem Abstrakte Interpretation und Model-Checking eingesetzt. Zu den Aufgaben gehört die Mitarbeit in Forschung und Lehre. Es wird erwartet, daß die Gelegenheit zur Promotion zielstrebig genutzt wird. Voraussetzung für eine Einstellung ist ein sehr gut abgeschlossenes Hochschulstudium in Informatik oder einem benachbarten Fach sowie Interesse an einem der genannten Gebiete. Die Hochschule strebt eine Erhöhung des Anteils von Frauen am wissenschaftlichen Personal an. Daher werden insbesondere Frauen gebeten, sich zu bewerben. Bewerbungen geeigneter Schwerbehinderter sind erwünscht. Richten Sie Ihre Bewerbung bis zum 15. Dezember bitte an Prof. Dr. Klaus Indermark Lehrstuhl für Informatik II RWTH Aachen Ahornstr. 55 52074 Aachen Tel.: 0241/80-21201 Email: [EMAIL PROTECTED] URL: www-i2.informatik.rwth-aachen.de -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Re: View on true ad-hoc overloading.
1) Allowing operational parameters to be used. Sorry, I don't know what you mean by `operational parameters'. 2) Allow record names to be reused. Most languages have a special syntax for record names but Haskell does not. Record names are just treated as selector functions. Yes, that's very useful. And the TREX extension (extensible records) of Hugs permits you to do that and much more. 3) Allow really really clean syntax for functions with complicated parameters. Try doing this within current Haskell: array (range 1 to 10) ... array (range 1 to 10 skip 2) ... array (range 1 to 100 factor 2) ... Reuse the existing list functions:

array [1..10]
array [1,3..10]
array (takeWhile (<= 100) (iterate (*2) 1))

-- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
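The three list expressions in the reply can be checked directly (the `<=` in the last one restores a character lost in transit; the names range1/range2/range3 are mine, for illustration):

```haskell
-- Arithmetic sequences and iterate cover the proposed 'range' syntax.
range1, range2, range3 :: [Int]
range1 = [1..10]                              -- range 1 to 10
range2 = [1,3..10]                            -- range 1 to 10 skip 2
range3 = takeWhile (<= 100) (iterate (*2) 1)  -- range 1 to 100 factor 2
```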
Re: Type problem with ST monad
Sigbjorn Finne wrote: forwarding to the mailing list is restricted to off-hours only at the moment, but thought I'd suggest a solution to you before then - use a (universally quantified) pattern matching function rather than a pattern binding, i.e.,

deTIM :: TIM s a -> ST s a
deTIM (TIM m) = m

runTIM :: (forall s. TIM s a) -> Maybe a
runTIM m = runST (deTIM m)

Thank you very much. I thought I had tried this, but now I note that when I used the projection I forgot to give the type signature for runTIM... Maybe you could mention this problem and its solution in the ghc manual section about second-order types. -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
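The thread never shows the definition of TIM, so here is a self-contained sketch of the pattern under an assumed definition (TIM as a plain newtype over ST; the result type is simplified to `a` rather than the `Maybe a` of the original, which presumably came from TIM's real definition):

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST (ST, runST)

-- Hypothetical definition; the original thread does not show TIM.
newtype TIM s a = TIM (ST s a)

-- The universally quantified projection suggested in the reply:
deTIM :: TIM s a -> ST s a
deTIM (TIM m) = m

-- The signature is essential: without it, type inference cannot give
-- runTIM the rank-2 type that runST requires.
runTIM :: (forall s. TIM s a) -> a
runTIM m = runST (deTIM m)
```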
Re: ghc 4.01 solaris binary dist: bus error on Hello World!
I just fetched the ghc-4.01-sparc-sun-solaris2.tar.gz binary distribution, unpacked and configured it (./configure --prefix=/udd/lande), and tested:

16:31 /tmp/test; cat Main.hs
main = do putStrLn "Hello World\n";
16:31 /tmp/test; ghc -c Main.hs
ghc: module version changed to 1; reason: no old .hi file
16:31 /tmp/test; ghc -o Main Main.o
16:31 /tmp/test; ./Main
Bus error
16:31 /tmp/test; uname -a
SunOS ultra 5.6 Generic_105181-09 sun4u sparc SUNW,Ultra-1

Do you use gcc 2.8.1? Under Solaris 2 ghc doesn't work with that version. However, it works with gcc 2.7.2. If you have that under the name gcc-2.7.2 you can ask ghc to call this version instead of gcc by writing: ghc -pgmcgcc-2.7.2 -o Test Test.hs -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Re: Bus Error from ghc-4.01 binary on Solaris 2.6
Simon Marlow wrote: I just downloaded the binary release of ghc-4.01 for sparc-sun-solaris2. I am running SunOS 5.6 Generic_105181-11 sun4u sparc SUNW,Ultra-4. I tried to compile and run this little program

module Main where
main = putStr "Hello!\n"

and it compiles just fine. However, running the program results in a bus error. Are you using egcs as your C compiler? I'm pretty certain that egcs isn't compatible with GHC on Sparcs. If not, then we'll have to dig a bit deeper. I'd just like to say that we have the same problem with the ghc-4.01 binaries. We are running SunOS 5.5 Generic sun4m and use gcc version 2.8.1. -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Re: Reduction count as efficiency measure?
Bart Demoen wrote: Simon wrote: CSE can also make space consumption asymptotically worse. I can understand this when garbage collection is around and because of CSE, some data structures don't get cleaned up; but is it also true without gc? If you don't use a rule like

  e'[e,e]  -->  let x = e in e'[x,x]

for CSE, but limit yourself to rules like

  let x = e in e'[e]  -->  let x = e in e'[x]

then you are right that the transformed program will not allocate more closures than the old one (in fact fewer in most cases). However, the liveness of heap cells will (often) increase and hence they cannot be collected by garbage collection as early as in the original program. Just the elimination of a few common subexpressions can increase the heap residency of a program considerably. If you want to know more about why I think that CSE is unfortunately not suitable for lazy functional languages, have a look at my paper `Common Subexpressions are Uncommon in Lazy Functional Languages' at http://www-i2.informatik.rwth-aachen.de/~chitil/PUBLICATIONS/comSubsAreUncom.html Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
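A concrete toy instance of the residency problem (my example, not from the paper): once CSE shares `[1..n]`, the whole list stays live until the second fold has consumed it, so residency goes from O(1) to O(n) even though the result is unchanged.

```haskell
-- Two independently produced lists: each is generated and consumed
-- lazily, so each traversal runs in constant space.
separate :: Int -> (Int, Int)
separate n = (sum [1..n], sum (map (* 2) [1..n]))

-- The form CSE would produce: xs is shared, so it is kept alive across
-- the first fold for the benefit of the second -- same answer, O(n)
-- heap residency.
shared :: Int -> (Int, Int)
shared n = let xs = [1..n] in (sum xs, sum (map (* 2) xs))
```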
Re: MonadZero (concluded)
Philip Wadler wrote:

class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a

Why is this here? It doesn't need to be in the prelude. Just leave it for the user to define (and then the user may pick better names, like Ringad, zero, and +). -- P First, the prelude (or standard libraries) can give instances for [], Maybe and Error. More importantly, I believe that monads with plus and zero will appear in many Haskell programs. Having standard names for them makes programs written by other people much easier to understand. I'd like to oppose Erik Meijer's statement: On the other hand you can easily achieve the effect yourself using some hiding and adding a handful of definitions, which is what I will probably end up doing. An extra level of indirection can do wonders. I don't want to read these programs. ;-) However, I have to admit that I don't like the names mzero and mplus either :-( Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
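The [] and Maybe instances mentioned above are short. A sketch follows; the class and methods are renamed (MPlus, mzero', mplus') purely to avoid clashing with the MonadPlus that has since landed in Control.Monad:

```haskell
-- Renamed copy of the proposed class, to avoid the real MonadPlus.
class Monad m => MPlus m where
  mzero' :: m a
  mplus' :: m a -> m a -> m a

-- Lists: failure is the empty list, choice is concatenation.
instance MPlus [] where
  mzero' = []
  mplus' = (++)

-- Maybe: failure is Nothing, choice takes the first success.
instance MPlus Maybe where
  mzero'             = Nothing
  Nothing `mplus'` y = y
  x       `mplus'` _ = x
```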
Current state of GUI libraries for Haskell
We have developed here a large Haskell program with ghc-2.08. Now we want to extend it by a graphical user interface, preferably using Haskell as well. I am aware of Fudgets and Haggis, but it seems that their development ceased in 1996 (correct?). Obviously we are searching for a library which will still be supported for some time (we are currently moving to ghc-3.0x, because we need HDirect). I worked a bit with TKGofer and liked it. Also I have more trust in the future of an interface to TK than a library that is developed from scratch. Are there other choices? Is there a TKHaskell (under development)? Does anybody have experiences with using one of these GUI libraries? I'm sure we are not the only ones interested in this subject. Thank you for any comments, Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Re: Monomorphism
Simon L Peyton Jones wrote: What worries me is that a programmer might write this:

read2 :: (Read a, Read b) => String -> (a,b)
read2 s = let [(r1,s1)] = reads s
              [(r2,s2)] = reads s1
          in (r1,r2)

The examples of John Peterson and Simon Peyton Jones demonstrate the usefulness of the MR for non-simple pattern bindings, but not for *simple* pattern bindings, i.e. where the pattern is a single variable. In my opinion the case of simple pattern bindings is just the one where most programmers have problems with the MR, because everybody considers let f x = ... in ... and let f = \x -> ... in ... as equivalent. Hence I suggest that part (b) of rule 1 of the MR should be deleted, i.e. simple pattern bindings are just treated as function bindings. As I have said in a previous email, the recomputation issue could be handled by warnings from the compiler. Olaf PS: For me the description of the MR in the report was quite difficult to understand. The two reasons for the MR should stand out clearly and not be separated by a paragraph on the fact that non-simple pattern bindings are always monomorphic. Simon's example should become part of the report. -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
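The equivalence programmers expect (and the MR's rule 1(b) breaks) can be seen in a toy pair of bindings. Both definitions below carry explicit signatures because, without one, the simple pattern binding would be monomorphised (and then defaulted) under the MR while the function binding would be generalised as usual:

```haskell
-- A function binding is generalised as usual, with or without a signature.
doubleFun :: Num a => a -> a
doubleFun x = x * 2

-- A simple pattern binding falls under rule 1(b) of the MR; the explicit
-- signature is what keeps it polymorphic here.
doublePat :: Num a => a -> a
doublePat = \x -> x * 2
```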
Re: Extending parsing combinators
Dave Tweed wrote: Has anyone out there extended the parser combinators in Hutton and Meijer's paper on Monadic Parser Combinators to deal with a large set of operators with lots of precedence levels? Have a look at Steve Hill: Combinators for parsing expressions, J. Functional Programming 6(3):445-463, May 1996. Steve Hill defines combinators for expressions with precedences and associativities. He doesn't use monadic parser combinators but the ones described by Graham Hutton earlier. However, rewriting should not be difficult. For more references to papers on parser combinators see the bottom of page http://www-i2.informatik.rwth-aachen.de/~chitil/DiplArb/parser.html Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Proposed class system for Standard Haskell
I've just played a bit with Hugs 1.3c which implements the type system that is proposed for Standard Haskell. I came across a limitation compared to Gofer which I find quite unfortunate. As an example for the advantages of multiple parameter classes Mark Jones gives:

class Collection c a where
  empty  :: c a
  insert :: a -> c a -> c a
  ...

Having the element type of a collection as a parameter enables you to define collections whose elements are restricted, e.g.

newtype Set a = Set [a]   -- ordered list without duplicates

instance Ord a => Collection Set a where
  empty  = Set []
  insert = ...            -- implementation requires ordering

Several people (e.g. Graeme Moss on this mailing list, John Hughes on the Standard Haskell discussion list) have pointed out that it would be nice to do the same thing for monads. Well, in Gofer you can. You just write:

class MyMonad m a where
  myReturn :: a -> m a
  myBind   :: MyMonad m b => m a -> (a -> m b) -> m b

instance Ord a => MyMonad Set a where
  myReturn = \x -> Set [x]
  myBind   = bindSet

bindSet :: (Ord a, Ord b) => Set a -> (a -> Set b) -> Set b
bindSet = ...

However, Hugs 1.3c doesn't like this:

ERROR "../test.hs" (line 32): Cannot justify constraints in instance member binding
*** expression: myBind
*** type : (MyMonad Set a, MyMonad Set b) => Set a -> (a -> Set b) -> Set b
*** given context : (MyMonad Set a, MyMonad Set b)
*** constraints : Ord b

Note that neither Hugs 1.3c nor Gofer have any problems with the type of `myBind' which is (MyMonad m a, MyMonad m b) => m a -> (a -> m b) -> m b. The difference lies in the handling of instances. Vividly speaking, Hugs just type checks the body of the instance `MyMonad Set a' under the assumption that `Ord a' is given. Hence it lacks the context `Ord b' when type checking `myBind'. Gofer however recognises that every `MyMonad Set a' has an `Ord a' context. Hence the `MyMonad Set b' context of `myBind' implies the existence of `Ord b'.
Formally, the difference is that the Gofer type system has an additional predicate entailment rule: any instance definition extends predicate entailment such that the defined instance implies its own context. See Section 8.2.3 of Mark Jones' thesis for details. Mark Jones introduced this rule purely for optimisation reasons. In retrospect he considers it a bad idea, because it can break data abstraction. I agree with that. However, if the scope of the rule is restricted to the body of the instance, then it certainly does not break data abstraction. The monad example shows how useful the rule can be.

I'd like to hear your opinions,
Olaf

PS:
1. On the Standard Haskell discussion list John Hughes gave a showable and printable monad as another example.
2. Implementing the additional rule is a bit of a problem: it requires dictionaries of extensible size. This is no problem for Gofer or Hugs, because their internal language is untyped. However, GHC uses FOmega as its intermediate language. I found a simple way to encode the required form of subtyping in FOmega, but it is not `nice'.

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
problems with nofib benchmarks
I have several problems with using the nofib benchmarks 2.05 with ghc-2.08. Most of them are connected with the fact that I try to compile the benchmarks for several ways, i.e. my `build.mk' includes:

  NoFibWays = a b
  WAY_a_NAME = 02 prepared for elimination
  WAY_a_HC_OPTS = -fvia-C -O2-for-C -dcore-lint -Ofile$(MYTOP)/newoptionsO22.ghc
  WAY_b_NAME = 02 with elimination
  WAY_b_HC_OPTS = -fvia-C -O2-for-C -dcore-lint -Ofile$(MYTOP)/newoptionsO2Elim2.gh

1. `make all' says:

  ==nofib== anna_a: time to compile Utils follows...
  ..
  ==nofib== anna_a: size of Utils.a_o follows...
  ..
  Panic! Utils.a_o exists, but Utils.a_hi does not.
  exit 1

This shows that the interface files produced lack the extension `_a' for way a. The interface files for different ways should have different content; moreover, sharing one interface file between ways implies that any `make ...' recompiles everything.

2. `make runtests' tries to use files like `compress_a.stdout', which do not exist. Here the way extension should not be used.

3. Several `*.stderr' files seem to have a superfluous newline at the end. Only after removing this newline does `make runtests' succeed (is this machine dependent??). Examples: gamteb.stderr, hidden.stderr, pic.stderr.

4. My result for `anna' is completely different from the given `big.sum.out' (with unmodified ghc-2.08). I obtain: 1279649

5. For `make runtests' the program `symalg' requires more heap on my sparc station. It works with `-H70'.

Greetings,
Olaf

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Strange time behaviour without optimisations
I know that ghc performs hardly any optimisation if called without -O or -O2. However, is it necessary that merely adding a single function definition to a program, *without* calling this function, increases execution time by 50%? (ghc-2.08) Example:

  {-
  fromStepTo :: Int -> Int -> Int -> [Int]
  fromStepTo start step end
    | start > end = []
    | otherwise   = start : fromStepTo (start+step) step end
  -}

  all2 :: (a -> Bool) -> [a] -> Bool
  all2 p []     = True
  all2 p (x:xs) = p x && all2 p xs

  main = putStr (show (all2 even [2,4..200]))

If `fromStepTo' is commented out as above, I measure:

  64,001,320 bytes allocated in the heap
  300 bytes maximum residency (0.0%, 6 sample(s))
  52 garbage collections performed (6 major, 46 minor)

  INIT  time  0.01s  ( 0.11s elapsed)
  MUT   time  8.04s  ( 8.51s elapsed)
  GC    time  2.51s  ( 2.70s elapsed)
  Total time 10.56s  (11.32s elapsed)

However, deleting `{-' and `-}' results in:

  64,001,320 bytes allocated in the heap
  300 bytes maximum residency (0.0%, 6 sample(s))
  52 garbage collections performed (6 major, 46 minor)

  INIT  time  0.01s  ( 0.14s elapsed)
  MUT   time 12.17s  (12.49s elapsed)
  GC    time  2.44s  ( 2.68s elapsed)
  Total time 14.62s  (15.31s elapsed)

With -O both versions give the same result:

  24,000,852 bytes allocated in the heap
  232 bytes maximum residency (0.0%, 5 sample(s))
  21 garbage collections performed (5 major, 16 minor)

  INIT  time  0.02s  ( 0.07s elapsed)
  MUT   time  2.79s  ( 2.99s elapsed)
  GC    time  2.13s  ( 2.31s elapsed)
  Total time  4.94s  ( 5.37s elapsed)

Olaf

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Echoing of input functions
Section 7.1 of the Haskell Report says about the input functions getChar, getLine, getContents, interact, ...: "By default, these input functions echo to standard output. Functions in the I/O library provide full control over echoing." However, the section about module IO in the Library Report does *not* mention echoing at all.

Often no echoing is exactly what you need. The developers of Hugs have recognised this need, and hence their implementations of `getContents' and `interact' do not perform echoing, contrary to the report. Surely this non-conformance is not desirable (I came across the problem when compiling with GHC a simple program that had been developed with Hugs), and the module IO should provide functions for controlling echoing.

Olaf

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
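[One possible shape for such control, sketched with the hSetEcho/hGetEcho functions that GHC's System.IO provides today; the helper getLineNoEcho is made up for illustration, and no such functions existed in the Library Report being discussed.]

```haskell
import System.IO (hSetEcho, hGetEcho, stdin, stdout, hFlush)

-- Read a line without echoing, restoring the previous echo state afterwards.
-- hSetEcho/hGetEcho act on terminal handles; on non-terminals (pipes, files)
-- GHC treats them as no-ops.
getLineNoEcho :: IO String
getLineNoEcho = do
  old  <- hGetEcho stdin
  hSetEcho stdin False
  line <- getLine
  hSetEcho stdin old
  putStrLn ""              -- the user's Enter was not echoed either
  return line

main :: IO ()
main = do
  putStr "Password: "
  hFlush stdout
  pw <- getLineNoEcho
  putStrLn ("Read " ++ show (length pw) ++ " characters.")
```

This is the kind of per-handle control the report's "full control over echoing" promise suggests, but which module IO never delivered.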
Notes on Adding an Optimisation Pass to GHC
Hi, as a result of my experiences with adding optimisation passes to GHC I have written some notes on the things I needed to learn for doing so. They are on the Web for anybody who is interested: Adding an Optimisation Pass to the Glasgow Haskell Compiler November 4, 1997 (2nd version), 16 pages http://www-i2.informatik.rwth-aachen.de/~chitil/PUBLICATIONS/extendGHC.html (not to be confused with Will Partain's document of similar title) Cheers, Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
NoFib with gnumake
Hi Sigbjorn, Do you have a version of the nofib benchmarks with the new makefile system? Maybe also with some improvements with respect to the *.stdout files (correct newlines; no comparison of float values up to the last digit,...)? I started doing this for a few of the benchmarks, but this would really be a waste of time if you already had it. Cheers, Olaf -- OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217 URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
GHC's internals
The compiler options for changing debugging output don't work properly in ghc-2.04 pl 2:

-dppr-all gives less information than it should (e.g. no types, no identifier uniques); its output looks more like what you would expect of -dppr-user; however, even a user will probably want types.

-dppr-user produces output identical to -dppr-all, except that all identifiers are enclosed in `...', which makes the output less readable.

Also, the desugarer translates a Haskell program containing numbers, like e.g.

  module Test where
  x = 42

into a single binding Rec{ ... end Rec} of a lot of equations which, however, are not mutually recursive at all. While this isn't semantically wrong, it is quite a problem for the core transformation I am implementing, since its effects are limited by mutual recursion. Shouldn't there be an invariant stating that at any stage of the compiler the equations inside a Rec binding are mutually recursive (and hence there are as few of them as possible)?

Olaf

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
Hugs error messages
Some suggestions for improving error messages produced by Hugs:

Runtime errors could be more informative than just stating "Program error: erroneous term", e.g.

  No matching pattern in definition of function: f
  for argument: arg

Actually, warnings at compile time about incomplete patterns would be very helpful (however, I remember that Mark Jones mentions already in his Gofer implementation documentation that this doesn't fit into Gofer's design; the code generator doesn't produce messages).

The module structure should also influence runtime error messages. Inside the message "Program error: erroneous term" only unqualified names are used. This can be very confusing when the error occurs inside an imported module, the top-level module does not import some names of the erroneous term, and possibly the names are even defined differently in the top-level module.

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
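[A minimal example of the kind of incomplete pattern at issue; the type and function names are made up for illustration.]

```haskell
data Colour = Red | Green | Blue deriving Show

-- Incomplete definition: there is no equation for Blue, and Hugs gives
-- no compile-time warning about it.
name :: Colour -> String
name Red   = "red"
name Green = "green"

main :: IO ()
main = do
  putStrLn (name Red)    -- fine
  -- putStrLn (name Blue)
  -- The line above would abort with a bare "Program error" message;
  -- a message naming the function and the offending argument, as
  -- suggested, would pinpoint the problem immediately.
```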
Deriving class instances
Why is the automatic derivation of instances of some standard classes tied to data and newtype declarations? It has already happened to me several times that I needed a standard instance for a data type imported from a module that did not provide that instance and which I did not want to change (a library; GHC, which I mainly want to extend by further modules, not spread changes over 250 modules). When declaring a new data type one normally avoids deriving (currently) unneeded instances, because it costs program code (and maybe one even wants to enable the user of the module to define his own instances). Hence I suggest introducing a new top-level declaration like

  deriving typeconstructor (class1, ..., classn)

e.g.

  data Tree a = Leaf a | (Tree a) :^: (Tree a)
  ...
  deriving Tree (Eq, Ord, Show)
  deriving Tree (Read)

The current syntax would then just be an abbreviation. I'm not quite sure whether the syntax shouldn't rather be

  deriving (Tree a) (Eq, Ord, Show)

because it is an instance for the type, not the type constructor. The type variable is, however, superfluous.

Naturally, all data constructors of a data type need to be in scope to permit derivation of instances. For separate compilation a module's interface file needs to include information about the order of constructors in the type declaration; GHC already does that. I know that attaching the order of data constructors to data types is not purely declarative. However, this is already the case for the current deriving construct, and attaching it to `data' and `newtype' just makes the construct less useful.

Olaf

--
OLAF CHITIL, Lehrstuhl fuer Informatik II, RWTH Aachen, 52056 Aachen, Germany
Tel: (+49/0)241/80-21212; Fax: (+49/0)241/-217
URL: http://www-i2.informatik.rwth-aachen.de/~chitil/
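[For comparison, the workaround without such a declaration is to write the instances by hand in the importing module, which is exactly the code the proposed `deriving Tree (...)' would generate. A sketch for the Tree type from the message, covering only Eq and Show; real derived Show code would use showsPrec, so this is a simplification.]

```haskell
data Tree a = Leaf a | Tree a :^: Tree a

-- Hand-written counterpart of `deriving Tree (Eq)':
instance Eq a => Eq (Tree a) where
  Leaf x    == Leaf y      = x == y
  (l :^: r) == (l' :^: r') = l == l' && r == r'
  _         == _           = False

-- Hand-written counterpart of `deriving Tree (Show)' (simplified):
instance Show a => Show (Tree a) where
  show (Leaf x)  = "Leaf " ++ show x
  show (l :^: r) = "(" ++ show l ++ " :^: " ++ show r ++ ")"

main :: IO ()
main = print (Leaf 'a' :^: Leaf 'b')   -- (Leaf 'a' :^: Leaf 'b')
```

Repeating such boilerplate for every type and class combination a library author forgot is precisely the cost the proposed declaration would avoid.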