rewrite rules diagnostics
(Rewrite rules are very nice, by the way.)  The User's Guide says:

  * Use -ddump-simpl-stats to see what rules are being fired.
    If you add -dppr-debug you get a more detailed listing.

but this only happens if the preprocessor symbol DEBUG is defined in
simplCore/SimplMonad.lhs.  Could it be a run-time option, like the
guide says?
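For readers who haven't used rewrite rules, here is a minimal, self-contained illustration (my own sketch, not from the message above) of the kind of rule whose firings -ddump-simpl-stats is supposed to report.  Compile with `ghc -O -ddump-simpl-stats`:

```haskell
module Main where

-- An illustrative rewrite rule: fuse two map traversals into one.
-- With -O, GHC may rewrite the body of main using this rule, and
-- -ddump-simpl-stats should then list "map/map" among the rules fired
-- (subject to the DEBUG caveat discussed in the message above).
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

main :: IO ()
main = print (map (+1) (map (*2) [1, 2, 3 :: Int]))
```

Whether or not the rule fires, the program's result is the same; rules are meant to be semantics-preserving optimizations.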
RE: What's in an interface? Unnecessary recompilation again
Fair enough.  But would you like to suggest an algorithm GHC could use
to decide what to put in the .hi file?  You can't be suggesting that it
parse literal strings??

| -----Original Message-----
| From: George Russell [mailto:[EMAIL PROTECTED]]
| Sent: 14 April 2000 13:49
| To: [EMAIL PROTECTED]
| Subject: What's in an interface? Unnecessary recompilation again
|
| Changed .hi files are a nuisance because they make Make rerun all GHC
| compilations which import the relevant module.  I thought I would not
| have .hi changing today when I modified the implementation of an
| interface, but it did.  According to -hi-diffs the difference is:
|
|   1 lvl18 :: [PrelBase.Char] {-## __A 0 __U
|     (PrelBase.unpackCStringzh "execOneWayCmd ") ##-} ;
| ! 1 lvl20 :: [PrelBase.Char] {-## __A 0 __U
|     (PrelBase.unpackCStringzh "Expect.hs:441|case") ##-} ;
|   1 lvl22 :: RegularExpression.MatchResult -> PrelIOBase.IO
|     [PrelBase.Char] {-## __A 1 __S L __U (\ matchResult ::
|     RegularExpression.MatchResult -> return @ [PrelBase.Char]
|     (case matchResult of wild { RegularExpression.MatchResult ds1
|     matched ds2 ds3 -> matched })) ##-} ;
| --- 29,31 ----
|   1 lvl18 :: [PrelBase.Char] {-## __A 0 __U
|     (PrelBase.unpackCStringzh "execOneWayCmd ") ##-} ;
| ! 2 lvl20 :: [PrelBase.Char] {-## __A 0 __U
|     (PrelBase.unpackCStringzh "Expect.hs:461|case") ##-} ;
|   1 lvl22 :: RegularExpression.MatchResult -> PrelIOBase.IO
|     [PrelBase.Char] {-## __A 1 __S L __U (\ matchResult ::
|     RegularExpression.MatchResult -> return @ [PrelBase.Char]
|     (case matchResult of wild { RegularExpression.MatchResult ds1
|     matched ds2 ds3 -> matched })) ##-} ;
|
| If you ignore the lines of context etcetera, you'll see that the
| difference is only in a line number: "Expect.hs:441" rather than
| "Expect.hs:461".  I submit, m'lud, that line numbers are not something
| the importing module needs to know about, and that therefore the
| location might be better off not inlined . . .
Re: What's in an interface? Unnecessary recompilation again
Simon Peyton-Jones wrote:
> Fair enough.  But would you like to suggest an algorithm GHC could use
> to decide what to put in the .hi file?

Well, that's a big question.  As I understand it, the errant value,
lvl20, is a string representing an error message (which I suppose is to
be thrown in the event of a matching failure).  Since it should only be
used on error, surely there is no speed benefit to inlining it, and
probably there is a space penalty (since you store the string multiple
times).  In this case there is a very real disadvantage, since inlining
forces everything that imports that module to recompile.  So my
suggestion would be that values required only for errors should never
be inlined or put in a .hi file.
Inlining errors...
George Russell writes:
> Well that's a big question.  As I understand it, the errant value,
> lvl20, is a string representing an error message (which I suppose is
> to be thrown in the event of a matching failure).  Since it should
> only be used on error, surely there is no speed benefit to inlining
> it, and probably there is a space penalty (since you store the string
> multiple times).  In this case there is a very real disadvantage,
> since inlining forces everything that imports that module to
> recompile.  So my suggestion would be that values required only for
> errors should never be inlined or put in a .hi file.

I agree so wholeheartedly that I just wrote a (very!) short paper on
the subject for ICFP (having discovered to my surprise that no such
write-up existed).  It describes how to identify such expressions and
hoist them out so they don't end up getting inlined.  It's still being
refereed and is thus likely to be revised, but if you're interested in
a pre-print take a look (there are no pointers to it from elsewhere at
the moment):

@unpublished{bottomExtract,
  AUTHOR      = {Jan-Willem Maessen},
  YEAR        = {1999},
  month       = Mar,
  TITLE       = {Bottom Extraction: Factoring error handling out of
                 functional programs},
  documentURL = "http://www.csg.lcs.mit.edu/~earwig/extraction.ps",
  note        = {Submitted to ICFP 2000}
}

-Jan-Willem Maessen
RE: Binary Search Tree debugging
Hi Andrew,

| Hey all.. I was wondering if somebody might offer me some assistance
| in trying to debug some code I wrote to check whether a tree is a
| binary search tree.. For some reason it always comes back as false!
| :( Thanks much!

One of the great things about functional programming is the
opportunities that it brings to explore old problems from new angles.
The code that you wrote is an attempt to describe a test for binary
search trees as a recursive algorithm, which is the kind of
implementation that you'd expect to see in an imperative language.  As
your experience has shown, this kind of code can be hard to read, and
hard to get right.

I'd encourage you to try a different approach.  Consider the simple
question: When is a tree a binary search tree?  Answer: When its leaves
are arranged in increasing order.  Translating that idea directly to
Haskell gives:

  isBST :: Ord a => Tree a -> Bool
  isBST  = increasing . leaves

where:

  leaves     :: Tree a -> [a]
  increasing :: Ord a => [a] -> Bool

The idea here is that leaves and increasing are general purpose
functions that you can use to enumerate the leaves of a tree from left
to right, and to determine whether the values in a list are increasing,
respectively.  I'll leave the definitions to you, but it shouldn't be
too difficult: a simple two line recursive definition suffices for
leaves, while a mildly cunning one-liner will get you to increasing
(hint: think zipWith (<=)).

The whole definition is shorter and (IMHO) simpler than the code you
wrote, while at the same time providing useful functions that might
find applications elsewhere in a larger program.

I hope that you'll find my comments here interesting.  Perhaps I should
explain that I responded to your message because it reminded me of some
similar examples that got me excited when I was first learning a
functional programming language.
I'd been an imperative programmer for some time before then, but as I
looked at those examples, I began to see new ways of solving all kinds
of programming problems.  And, in fact, many of those ideas are still
useful to me today, whatever language I happen to be using.  Enjoy!

Mark
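[Editor's note: for readers who want to check their answer against Mark's hints, here is one possible completion.  It is my sketch, not Mark's code, and the Tree type (leaf-labelled, with binary forks) is an assumption, since Andrew's definition is not shown in the thread.]

```haskell
-- A hypothetical leaf-labelled tree type; Andrew's actual Tree may differ.
data Tree a = Leaf a | Fork (Tree a) (Tree a)

isBST :: Ord a => Tree a -> Bool
isBST = increasing . leaves

-- Enumerate the leaves from left to right (the "two line" definition).
leaves :: Tree a -> [a]
leaves (Leaf x)   = [x]
leaves (Fork l r) = leaves l ++ leaves r

-- The "mildly cunning one-liner": pair each element with its successor.
increasing :: Ord a => [a] -> Bool
increasing xs = and (zipWith (<=) xs (drop 1 xs))

main :: IO ()
main = do
  print (isBST (Fork (Leaf 1) (Fork (Leaf 2) (Leaf (3 :: Int)))))
  print (isBST (Fork (Leaf 2) (Leaf (1 :: Int))))
```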
RE: libraries for Integer
Hi Sergey,

| In what way the Haskell implementations may use the GMP library?
| (GNU Multi-Precision integers ?)

Hugs 98 doesn't use gmp at all.  For legal reasons (later rendered
irrelevant by changes to the Hugs license), Hugs used its own
implementation of multi-precision integers.

| And there also exist other powerful libraries for Integer and for the
| number theory.  Probably, some of them written in C.  One could
| consider exploiting them in the Haskell implementation.

I guess that H/Direct would be the best way to take advantage of these
right now.

All the best,
Mark
Re: libraries for Integer
Mark P Jones wrote:
> I guess that H/Direct would be the best way to take advantage of
> these right now.

I agree actually.  Integer only needs to be an implementation of
multiprecision arithmetic; we shouldn't tie it to GMP.  There are other
multiprecision arithmetic packages out there, for example the LIP
package included in NTL:
   http://www.shoup.net/ntl/

Of course where it so happens that GMP is used for the implementation
of big integers, there is no reason why the implementation should not
provide additional access, via H/Direct or a specially written
interface, to additional GMP functions.
Re: libraries for Integer
George Russell ([EMAIL PROTECTED]) wrote:

: I agree actually.  Integer only needs to be an implementation of
: multiprecision arithmetic; we shouldn't tie it to GMP.  There are
: other multiprecision arithmetic packages out there, for example

But it is pretty fast.

: the LIP package included in NTL
: http://www.shoup.net/ntl/

Do you have any data about comparisons with this or other packages?

Regards,

Marc van Dongen
--
Marc van Dongen, CS Dept      | phone: +353 21 903578
University College Cork, NUIC | Fax:   +353 21 903113
College Road, Cork, Ireland   | Email: [EMAIL PROTECTED]
Re: libraries for Integer
Marc van Dongen wrote:
> Do you have any data about comparisons with this or other packages?

I've just looked around Dave Rusin's page:
   http://www.math.niu.edu/~rusin/known-math/index/11YXX.html
but it doesn't seem to contain any up-to-date comparisons; in
particular not of GMP 3.  There are two things in favour of LIP:

(1) it has the magic name of "Lenstra" attached to it (Arjen in this
    case).
(2) I believe it will be better than GMP 2 for integers with thousands
    of digits or more, because it implements FFT multiplication (or
    something similar).  But I can't remember for sure.  However, if
    GMP now implements Toom-Cook, that should make a big difference
    here.

Sorry I can't be more helpful.  But there is unlikely to be a simple
answer to the question "Does LIP or GMP multiply numbers fastest?"; it
will depend on how big the numbers are, what platform you are using,
and how difficult the interface is to use.  (GMP is faster if you use
the mpn_ functions, but then you have to do all your own allocation and
only get non-negative integers.)
Re: libraries for Integer
George Russell wrote:
> (GMP is faster if you use the mpn_ functions, but then you have to do
> all your own allocation and only get non-negative integers.)

Sorry, I meant that GMP is faster if you use mpn_ than if you use the
other GMP functions, not that the mpn_ functions are faster than LIP.
Re: libraries for Integer
George Russell ([EMAIL PROTECTED]) wrote:

[...]

: Sorry I can't be more helpful.  But there is unlikely to be a simple
: answer to the question "Does LIP or GMP multiply numbers fastest?";
: it will depend on how big the numbers are, what platform you are
: using, and how difficult the interface is to use.  (GMP is faster if

Thanks anyway.

[...]

Regards,

Marc
RE: coercing newtypes
| Many of you have run across the problem with
| newtypes that, although it is very cheap to
| coerce between the newtype and the base type, it
| can be very expensive to coerce between, say,
| a list of the newtype and a list of the base type.
| Stephanie Weirich and I are working on a proposal
| for the Haskell Workshop addressing this problem,
| and we would welcome any feedback from the community.

Thanks for the draft, which I read last night.  Interesting.  I have
two main comments.

1.  Personally, I've never actually been bitten by this problem.  Have
you?  In other words, how practically important is it?  (Regardless,
it's a wart, I must agree.)

2.  More substantially, you describe your solution as lightweight, but
it involves quite a bit: multi-param type classes, automatic
derivings...  And STILL the programmer has to fiddle about with frankly
non-obvious constructions (Section 4).

So I have an alternative suggestion: provide 'cast' in the language.
That is, if e :: t then (cast e) :: s for any type s that has the same
representation as t.  I'd need to add some rules saying what it means
to 'have the same representation', but that's pretty easy.  And there's
no problem with nested types.

What made me think about this is that GHC internally has exactly such a
thing (it's called 'coerce' rather than 'cast').  Newtype constructors
are compiled into applications of 'coerce', and similarly pattern
matching.  It seems bizarre to require the *programmer* to write
complicated code (like (Apl . f . unApl)) in order to get the
*compiler* to produce the very simple code (cast t).  That's back to
front!

One could also deal with the abstraction issue: you can say that two
types have the same representation iff you could convert a value of one
type into a value of the other, *using the constructors currently in
scope*.  So if the ADT doesn't expose the constructors, you can't cast.

There is also the question of whether newtype is a good thing at all.
Maybe we'd be better off with Gofer's restricted type synonyms.

Simon
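[Editor's note: to make the cost being discussed concrete, here is a small sketch of my own (the newtype Age and helpers are invented for illustration).  Unwrapping one value is free, since the constructor is erased at compile time, but coercing a list of newtypes requires a full O(n) traversal that rebuilds every cons cell, even though the two lists have identical representations.]

```haskell
newtype Age = Age Int

-- Unwrapping a single value is free: Age has the same representation
-- as Int, and the constructor disappears in compiled code.
unAge :: Age -> Int
unAge (Age n) = n

-- But going from [Age] to [Int] forces a whole-list traversal that
-- rebuilds every cons cell, despite the representations being the same.
agesToInts :: [Age] -> [Int]
agesToInts = map unAge

main :: IO ()
main = print (agesToInts [Age 1, Age 2, Age 3])
```

(In later GHC versions, Data.Coerce.coerce, which postdates this thread, provides essentially the 'cast' Simon describes: a zero-cost coercion checked by the compiler.)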
Re: libraries for Integer
No.  It is all right.  For example,

  gcdExt 4 6 = (2,-1,1),   so   (-1)*4 + 1*6 = 2 = gcd 4 6.

Maybe you forgot about negatives?

--
Sergey Mechveliani
[EMAIL PROTECTED]
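[Editor's note: the gcdExt under discussion is not defined anywhere in the thread; a standard extended-Euclidean definition with the behaviour Sergey describes (my sketch, not his library code) is:]

```haskell
-- Extended Euclid: gcdExt a b = (g, x, y)  where  x*a + y*b == g == gcd a b.
gcdExt :: Integer -> Integer -> (Integer, Integer, Integer)
gcdExt a 0 = (a, 1, 0)
gcdExt a b = let (g, x, y) = gcdExt b (a `mod` b)
             in  (g, y, x - (a `div` b) * y)

main :: IO ()
main = print (gcdExt 4 6)   -- the coefficients may be negative, as noted
```

Tracing gcdExt 4 6 by hand gives (2,-1,1), matching the example in the message: (-1)*4 + 1*6 = 2.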
Re: libraries for Integer
Wed, 19 Apr 2000 10:19:08 +0400 (MSD), S.D.Mechveliani
[EMAIL PROTECTED] writes:

> I have an impression that Haskell-98 calls `Integral' various models
> for the domain of integer numbers.  And this is for Haskell-98'.
> While the good standard of future (I hope for Haskell-2) has, to my
> mind, to remove Integral and introduce GCDRing, FactorizationRing,
> EuclideanRing, Field - see
> http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/

Why I don't like this proposal:

- It's too complicated.

- It relies on controversial type system features, like undecidable
  instances and overlapping instances.

- It relies on type system features that are not implemented, and it's
  not clear if they can be correctly designed or implemented at all,
  like "domain conversions".

- It has many instances that should not exist because the relevant type
  does not have the class property; they return Nothing or fail,
  instead of failing to compile.

- Properties like commutativity cannot be specified in Haskell.  The
  compiler won't be able to automatically perform any optimizations
  based on commutativity.

- belongs is strange.  IMHO it should always return True for valid
  arguments, and invalid arguments should be impossible to construct if
  the validity can be checked at all.

- It tries to turn a compiled language into an interpreted language:
  FuncExpr, too much parsing (with arbitrary rules hardwired into the
  language), too many runtime checks.

- It's too complicated.

- It's not true that it's "not necessary to dig into mathematics".  I
  studied mathematics and did not have that much algebra.

- I prefer minBound to looking at an element under Just under Just
  under a tuple of osetBounds.

- It uses ugly character and string arguments that tune the behavior,
  e.g. in syzygyGens, divRem, canFr.  I like Haskell 98's divMod and
  quotRem better.

- It uses unneeded sample arguments, e.g. in toEnum, zero, primes,
  read.

- Have I said that it's too complicated?
-- __("Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/ \__/ GCS/M d- s+:-- a23 C+++$ UL++$ P+++ L++$ E- ^^ W++ N+++ o? K? w(---) O? M- V? PS-- PE++ Y? PGP+ t QRCZAK 5? X- R tv-- b+++ DI D- G+ e h! r--%++ y-
math libraries
Hi folks!

Where can I find math libraries with functions for differential and
integral calculus, statistics, linear algebra, ...?

Regards,
Sebastian
--
| Sebastian Schulz            May the source be with you!
| mailto:[EMAIL PROTECTED]
Use of irrefutable
Hi,

I have a rather naive question, being new to Haskell.  I am looking at
the Hawk Signal module, where the following definition occurs:

  lift1 f (List xs) = List $ lazyMap f xs
    where lazyMap f ~(x:xs) = f x : lazyMap f xs

Now, setting aside how the function is used in Hawk, I ran a little
experiment to see what happens when the irrefutable pattern is removed,
by calling it with:

  a = List [1, 2]
  b = lift1 (+ 1) a

Without the irrefutable pattern I get an error for a "non-exhaustive
pattern".  With it, I get an "irrefutable pattern failed".  Can someone
explain to me the advantages and disadvantages of using irrefutable
matching, including how irrefutable matching is used in general?  Why
and when it is used, etc.

Thanks,
Mike
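[Editor's note: the difference Mike observed can be seen in isolation with a pair of toy functions (my sketch, independent of Hawk).  A plain constructor pattern forces the argument as soon as the equation is selected; a ~ pattern succeeds immediately and only forces the argument when one of its bound variables is demanded.]

```haskell
-- A strict pattern: applying strictFirst forces the list immediately,
-- so strictFirst [] fails with a pattern-match error.
strictFirst :: [Int] -> Int
strictFirst (x:_) = 1

-- An irrefutable pattern: matching always "succeeds", and the list is
-- only forced if x is demanded.  Since the body never uses x,
-- lazyFirst [] evaluates to 1.
lazyFirst :: [Int] -> Int
lazyFirst ~(x:_) = 1

main :: IO ()
main = print (lazyFirst [])
```

In the Hawk lazyMap, the ~ lets each output cell be produced before the input list is inspected, which is what makes it usable on the infinite signal streams Hawk models; the price, as Mike saw, is that a failure surfaces later, as "irrefutable pattern failed", when an element is finally demanded.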