Re: ideas for compiler project
Jerzy Karczmarczuk writes:

> Steven Bevan wrote interesting numeric routines a long time ago.

Actually I did little more than transliterate the algorithm descriptions I found in a book on numerical analysis. I know next to nothing about numerical analysis, so I have no idea if they are interesting or useful. Implementing them was a pleasant diversion from what I was supposed to be doing at the time :-)
Re: parameterisable modules, the lack thereof ...
> Everyone now acknowledges that Haskell's modules are the weakest part
> of the language, and people are working on ways to fix that (yes,
> someday I hope there will be a Haskell 2).

Are any details available as to what directions this work is taking? I'm aware of some work to extend type classes, but I'm more interested in any approaches which don't necessarily require overloading (my candidate for the weakest part of the language, btw).

bevan
overspecification of bounds combined with [ | <- ]
When I was dabbling with some numerical analysis code a while ago, I noticed what appears to be an overspecification in the way "bounds" interacts with list comprehensions. Sometimes it was important that the indices be produced in order, and this is the behaviour defined in the manual (1.1, anyway). However, sometimes I just wanted to iterate over the bounds in any order (i.e. potentially in parallel), but I could not find any way to indicate this. Is the inability to indicate this a deliberate choice, an oversight, or just not considered important?

bevan
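PS to make the two cases concrete, here is a minimal sketch (the functions and examples are invented for illustration) :-

    -- Order matters: a left-to-right scan in which each result
    -- depends on the previous one.
    running_sums :: (Ix a, Num b) => Array a b -> [b]
    running_sums x = scanl1 (+) [ x!i | i <- range (bounds x) ]

    -- Order does not matter: every element is independent, so the
    -- comprehension could in principle be evaluated in any order
    -- (or in parallel), but there is no way to say so.
    scaled :: (Ix a, Num b) => Array a b -> Array a b
    scaled x = array (bounds x) [ i := 2*x!i | i <- range (bounds x) ]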
parameterisable modules, the lack thereof ...
At the risk of opening old wounds: why weren't ML/FX style modules included in Haskell? I can think of a number of possible reasons, but I'm interested in the "official" reason for leaving out such an important feature.

bevan
Re: Another import question
On Fri, 18 Dec 92 15:42:42 GMT, kh [EMAIL PROTECTED] said:
On Thu, 03 Dec 92 08:13:17 +, Simon L Peyton Jones [EMAIL PROTECTED] said:

Simon> Why do you need to drop the (..) when it turns into a "data" decl?
Simon> You only need do so if you want it to be abstract!
Simon> But "type" decls can't be abstract; the (..) reminds you of this.

I don't want reminding. I know it isn't abstract, but for the sake of the importing module I like to pretend it is.

kh> In that case, perhaps you should always use data declarations (with a
kh> dummy constructor) rather than type synonyms. Some compilers will give
kh> you better error messages this way, and a good compiler might eliminate
kh> the extra constructor anyway (depending on how good a strictness
kh> analyser it has!).

Are there any "good" compilers available? i.e. will any of the currently available compilers do the above optimisation?

bevan
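PS to make kh's suggestion concrete, here is a sketch (the module and the names are invented) :-

    module Stack(Stack, empty, push) where

    -- type Stack = [Int]         -- a synonym cannot be made abstract
    data Stack = MkStack [Int]    -- dummy constructor; it is hidden by
                                  -- the export list, so Stack is abstract

    empty :: Stack
    empty = MkStack []

    push :: Int -> Stack -> Stack
    push x (MkStack xs) = MkStack (x:xs)

The question is whether any compiler can then compile MkStack away, so that a Stack costs no more than the bare list underneath it.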
importing derived functions
Is it possible to import a type and the derived "show" function for it without having to import all the type's constructors? For example, in the following I attempt to import just Lexeme into Token, as the internal details of Lexeme should not be known to Token :-

    module Lexeme where
    data Lexeme = Symbol_Lexeme Symbol
                | Int_Lexeme
                | ...
                deriving (Eq, Text)

    module Token where
    import Lexeme(Lexeme)
    data Token = Token Lexeme ...
                 deriving (Text)

However, HBC claims that Lexeme is not an instance of Text when I compile the Token module. This seems quite fair, since I'm trying to use Lexeme as an ADT, and if it allowed the above it would break abstraction. The question is, how do I import the derived functions for "show" ... etc. so that I can derive Text for Token? I tried :-

    import Lexeme(Lexeme,shows)

and combinations thereof, but with no success. The solution I'm using at the moment is to define the function "show_lexeme" in the Lexeme module, import it into the Token module and then explicitly make Token an instance of Text, i.e.

    import Lexeme(Lexeme,show_lexeme)

    instance Text Token where
        showsPrec _ (Token l ...) = show_lexeme l . ...
        ...

However, this is tedious if Token has lots of alternatives. Anyone have a better way?

bevan
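PS here is the workaround spelled out in full, with the elided parts filled in by illustrative constructors :-

    module Lexeme(Lexeme, show_lexeme) where

    data Lexeme = Symbol_Lexeme String
                | Int_Lexeme Int

    show_lexeme :: Lexeme -> ShowS
    show_lexeme (Symbol_Lexeme s) = showString "Symbol_Lexeme " . shows s
    show_lexeme (Int_Lexeme n)    = showString "Int_Lexeme " . shows n

    module Token where

    import Lexeme(Lexeme, show_lexeme)

    data Token = Token Lexeme

    instance Text Token where
        showsPrec _ (Token l) = showString "Token " . show_lexeme l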
bogus array signature (was What's the most efficient ... etc.)
I wrote:

> In article [EMAIL PROTECTED] [EMAIL PROTECTED] writes:
>
> > Isn't the type signature you give incorrect? The parameters m and n
> > must have type a (where (a,a) is the index type of the arrays).
>
> I think you're right, but as I can't seem to get to grips with the
> numeric hierarchy (anybody have a more detailed explanation of it?), I
> write with no signatures and see what the compiler produces. In this
> case Hbc 9.998.1 produced the signature I gave. (Actually it has
> Integer instead of Int, but it was pointed out to me that Int is
> sufficient for indexing arrays.)

Well, I got it wrong :- it seems that somewhere between the .hi file and my mail buffer I managed to mangle the signature. Hbc _correctly_ derives the type as :-

    lu_decomp :: (Ix a, Num a, Fractional b, Enum a) =>
                 Array (a, a) b -> a -> a -> Array (a, a) b

which I attempted to restrict to :-

    lu_decomp :: (Fractional b) =>
                 Array (Int, Int) b -> Int -> Int -> Array (Int, Int) b

but managed to leave the two middle arguments (m and n) in their original form (i.e. "a"). You'd think I'd know by now not to post untested code, wouldn't you :-

sorry if I caused any confusion,
bevan
What's the most efficient way to build an array?
Now that I've said my piece about syntax, we can get down to some nitty-gritty details of the language ...

The following are two different ways of factoring the L and U sections needed in Crout reduction. The first version is a direct translation of some FORTRANish pseudo-code, and the second version is my attempt to build the array monolithically.

    lu_decomp :: (Fractional b) => (Array (Int, Int) b) -> a -> a -> Array (Int, Int) b

    lu_decomp_direct a m n = lu
      where
        ab  = ((m,m), (n,n))
        lu  = lu1//[(n,n-1) := a!(n,n-1),
                    (n,n)   := a!(n,n) - a!(n,n-1)*lu!(n-1,n)]
        lu1 = foldl f lu0 [m+1..n-1]
        lu0 = array ab [(m,m)   := a!(m,m),
                        (m,m+1) := a!(m,m+1)/a!(m,m)]
        f lu i = lu'
          where
            li1 = a!(i,i-1)
            li2 = a!(i,i) - li1*lu!(i-1,i)
            lu' = lu//[(i,i-1) := li1,
                       (i,i)   := li2,
                       (i,i+1) := a!(i,i+1)/li2]

    lu_decomp a m n = lu
      where
        ab = ((m,m), (n,n))
        lu = array ab (lu_elems lun)
        lu_elems = foldl f lu0 [m+1..n-1]
        lu0 v = ((m,m)   := a!(m,m)) :
                ((m,m+1) := a!(m,m+1)/a!(m,m)) : v
        lun = [(n,n-1) := a!(n,n-1),
               (n,n)   := a!(n,n) - a!(n,n-1)*lu!(n-1,n)]
        f le i = le'
          where
            li1 = a!(i,i-1)
            li2 = a!(i,i) - li1*lu!(i-1,i)
            le' v = le (((i,i-1) := li1) :
                        ((i,i)   := li2) :
                        ((i,i+1) := a!(i,i+1)/li2) : v)

My question is (and it is probably aimed at implementors): is either likely to be optimised very well? e.g. update in place for the first version, and doing away with the thunks creating the list in the second? Alternatively, can anyone suggest any methods that are likely to be more efficient? For example, I have a third version that uses a function in a similar way to one of my Choleski solutions (see a Haskell library), but there are so many guards I find it hard to believe it will be particularly quick. Any timings I've done are distorted by the fact that I only have one compiler (Hbc), excellent though it is.

BTW feel free to (publicly) pick holes in the code if you think it is "wrong" in any way, e.g. should it be foldr instead of foldl? (I think not, but I'm never too sure :-)

bevan
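PS since only three diagonals are ever defined, the whole band can also be built in a single self-referential call of "array", with no "//" updates and no explicit fold. Treat this as a sketch only; it is untested and untimed, and lu_decomp_band and d are my names :-

    lu_decomp_band a m n = lu
      where
        -- "lu" appears on both sides; a lazily, monolithically built
        -- array may refer to its own elements.
        lu = array ((m,m), (n,n))
                   ([(i,i-1) := a!(i,i-1)            | i <- [m+1..n]] ++
                    [(i,i)   := d i                  | i <- [m..n]]   ++
                    [(i,i+1) := a!(i,i+1) / lu!(i,i) | i <- [m..n-1]])
        -- d i is the diagonal element l(i,i)
        d i | i == m    = a!(m,m)
            | otherwise = a!(i,i) - a!(i,i-1) * lu!(i-1,i)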
relative precedence of ! and function application
Well, after complaining publicly that people spend too much time worrying about concrete syntax, I'm now going to do just that :-)

I've recently "discovered" arrays in Haskell and in writing a program or two wrote something like :-

    f x!i

I assumed that this would apply _f_ to the _i_th element of _x_. How wrong I was :-) It applies _f_ to the array _x_ and indexes the _i_th element of the resulting array (using the rule that function application binds most tightly of all). I'm not saying this is wrong, but I think the former interpretation is the more useful one, especially in mathematical code where this sort of operation appears many times and the latter interpretation hardly ever. This doesn't mean I'm pushing for a change, especially as breaking the "function application binds tightest" rule would be a major change. However, having to write lots of lines like "f (x!i)" under the current rules, rather than the occasional "(f x)!i" under the alternative rules, is proving taxing for this ex-FORTRAN weenie :-)

bevan
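PS a small example of the two readings (the names and values are invented) :-

    x :: Array Int Int
    x = listArray (0,2) [10,20,30]

    g :: Int -> Int                        -- works on an element
    g e = e + 1

    h :: Array Int Int -> Array Int Int    -- works on a whole array
    h y = listArray (bounds y) (reverse (elems y))

    as_parsed, as_hoped :: Int
    as_parsed = h x ! 1     -- application binds tighter than (!), so
                            -- this is (h x)!1, i.e. 20
    as_hoped  = g (x!1)     -- the reading I expected needs the
                            -- parentheses, i.e. 21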
Re: relative precedence of ! and function application
In article [EMAIL PROTECTED] [EMAIL PROTECTED] (Paul Hudak) writes:

> Array notation conventions aside, I think the simple rule that normal
> application has higher precedence than infix application is a Big Win.

So do I, and therefore I'm not seriously suggesting a change. However, emphasising the "function application binds tightest" rule might be worth a small note somewhere in any document that mentions arrays. I realise it is already in the manual, but something a little closer to the array section would be nice (maybe a footnote?).

> Perhaps the committee should have introduced special syntax for
> arrays, but that was simply not palatable to most of the members, even
> though it was for lists.

Bit of a double standard, eh? :-)

> Of course, if you really don't like the parens, you can always write
> your example as: "f $ x!i" where ($) is defined in the prelude as:
>
>     infixr 0 $
>     f $ x = f x
>
> (:-)
> -Paul

My usual solution is just to throw parens around _everything_ and make it look like Scheme :-)

bevan
dumpable AST (was tags support ...)
It appears that at least one compiler will have the option to produce a tags output from a set of Haskell files. Seeing as I'm 1/1 with questions at the moment, I thought I'd enlarge the request :-

Is any work being done on a standard abstract syntax for Haskell? Something like DIANA for Ada, or whatever they call the system used in Modula-3. If compilers could output this tree, then tools such as pretty printers, tags generators ... etc. could be separated from the compiler. If there is no consensus on what the correct AST should be, then are there plans to include implementation-dependent dumpable ASTs in the compilers currently being developed?

Stephen J. Bevan    [EMAIL PROTECTED]
tags support for Haskell/Gofer
Are there any tools available for producing a GNU Emacs readable tags file from a set of Haskell/Gofer scripts? Ideally this would be an optional output file from a compiler, as it knows where the functions are better than any kludgy awk/perl ... etc. script ever could. However, as I don't currently have a compiler installed, I'll take anything I can get (well, I draw the line at Perl :-).

I realise that with overloading and definitions spread over multiple lines there could be problems for the standard tags commands. If the normal tags commands can't cope, has anybody looked into some form of alternative/extended tags system?

ta
Stephen J. Bevan    [EMAIL PROTECTED]

PS the "d" in "kludge" is deliberate; the hacker's dictionary isn't right about everything :-)