fast IO in ghc
I tend to deal a lot with very large data files in Haskell, and my current approach to getting data out of one program and into another is writing it to a file using 'show' and then reading it in using 'read'. Unfortunately, this is very slow and produces very large files which are slow to read and write. Is there another option? I don't care about H98 compatibility, so if there's a way to somehow just dump ghc's internal representation to a file and read it back (I also don't care about x-platformness), that would be excellent. Other suggestions are welcome too :). -- Hal Daume III Computer science is no more about computers | [EMAIL PROTECTED] than astronomy is about telescopes. -Dijkstra | www.isi.edu/~hdaume ___ Glasgow-haskell-users mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
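The approach described, writing with 'show' and reading back with 'read', can be sketched as below. The helper names (writeData, readData) are mine, for illustration only, not from the post:

```haskell
-- A minimal sketch of the show/read round-trip described above.
-- Helper names (writeData, readData) are illustrative assumptions.
writeData :: Show a => FilePath -> a -> IO ()
writeData path = writeFile path . show

readData :: Read a => FilePath -> IO a
readData path = fmap read (readFile path)
```

This is simple and portable, but the textual representation is bulky and parsing it back with 'read' is slow, which is exactly the problem raised.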
Re: a survey of language popularity
Richard Uhtenwoldt [EMAIL PROTECTED] writes: Here are some Google search results that suggest how many web pages are devoted to particular languages. (Google tells you how many pages match your query.) A better survey of language popularity would include newsgroup and mailing list traffic, but no time, no time. I had a quick look at the Usenet statistics at www.ibiblio.org, but unfortunately Galeon had some problems navigating around the pages. comp.lang.perl.misc had 1624 posts per month, comp.lang.functional had 55. I guess Perl is rather popular, after all. But for some reason, c.l.f is recorded with 23K readers, compared to Perl's meagre 9500. Perhaps that's 1624 complaints about the ugly syntax? Or just a tribute to the devastating wit and intellect of functional programmers? -kzm -- If I haven't seen further, it is by standing in the footprints of giants ___ Haskell mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell
PLAN-X: Programming Language Technologies for XML
PLAN-X: PROGRAMMING LANGUAGE TECHNOLOGIES FOR XML Oct 3, 2002 Pittsburgh, PA (Co-located with PLI) CALL FOR PAPERS Submission deadline: May 1, 2002 XML has emerged as the de facto standard for data interchange on the web. The use of XML as a common format for representation, interchange, and transformation of data poses new challenges to programming languages, applications, and database systems. During the last few years, the database research community has devoted a lot of attention to XML's data representation challenges, as evidenced by the number of XML-related publications in premier database conferences and journals. In contrast, the attention devoted to XML by the programming language research community has been minimal. This is unfortunate, since the robustness of current and future programming standards and tools for XML will depend on the strength of their foundations in core programming technologies, e.g., XML parsing (parsing theory and incremental parsing), XML schemas (type systems), XPATH expressions and XSLT programs (pattern-matching languages and their optimization), XSLT debuggers (dynamic program analysis and slicing). Since XML is a new domain, core programming technologies developed in past research cannot be used unchanged; instead, novel research is required to address the unique challenges posed by XML and its use in web applications and standalone applications. This workshop aims to bring together researchers from the programming languages and XML communities, a) to foster novel research to address unique challenges being posed by XML on current and future programming technologies; b) to exchange information on early research experiences with XML-related programming systems, tools, and languages; and c) to expose the PLI community to XML technologies and the potential impact of these technologies on future software.
SUBMISSION PROCEDURE We solicit submissions on original research not previously published or currently submitted for publication elsewhere, in the form of extended abstracts. These extended abstracts should not exceed 5000 words (approximately 10 pages). Detailed submission instructions will be posted at http://www.research.avayalabs.com/user/wadler/planx by early April. PROCEEDINGS There will be no formal proceedings. An informal proceedings will be distributed at the workshop. IMPORTANT DATES Paper submission deadline: May 1 Notification of acceptance: June 21 Final papers due for informal proceedings: Sep 4 WEB PAGE: http://www.research.avayalabs.com/user/wadler/planx/ GENERAL CHAIR: Vivek Sarkar, IBM PROGRAM COMMITTEE: Allen Brown (Microsoft) Peter Buneman (Edinburgh) Sophie Cluet (Xyleme / INRIA) Mary Fernandez (AT&T Labs) Shriram Krishnamurthi (Brown) Makoto Murata (IBM Japan) Benjamin Pierce (University of Pennsylvania), co-chair Michael Schwartzbach (Aarhus) Dan Suciu (University of Washington) Philip Wadler (Avaya Labs), co-chair INVITED SPEAKER: James Clark
Haskell Reference at Zvon (alpha release)
Hello, I have put on-line an alpha version of the Haskell reference: http://zvon.org/other/haskell/Outputglobal/index.html I would appreciate your comments and suggestions. -- ** <firstName> Miloslav </firstName> <surname> Nic </surname> <mail>[EMAIL PROTECTED]</mail> <support> http://www.zvon.org </support>
Re: Lambda over types.
anatoli anatoli at yahoo wrote: This is all, of course, of purely academical interest. The notation is extremely inconvenient to do any real work. I'd rather prefer a real, language-supported lambda over types. Or... wait a minute! You did find all those problems; does it mean you tried to *use* this stuff for something? Just curious. I must start with a profuse apology, because what follows is perhaps of little relevance to the list. I also propose to re-direct the follow-ups to the Haskell Cafe. I have examined your code and then tried a few examples, some of which came from my code's regression tests. I have implemented a compile-time lambda-calculator, in a different language. I should say, in a meta-language. The output of the evaluator is a term that can then be compiled and evaluated. The lambda-calculator acts as a partial evaluator. The calculator normalizes a term in a pure untyped lambda calculus. The result is compiled and evaluated in a call-by-value lambda-calculus with constants. The Haskell typechecker (augmented with multi-parameter classes with functional dependencies, let loose) may do something similar. BTW, the meta-language lambda-evaluator code (with the regression tests) can be found at http://pobox.com/~oleg/ftp/Computation/rewriting-rule-lambda.txt The calculator is implemented in CPS, in some sort of extended lambda calculus. Therefore, the code has three kinds of lambdas: of the source language, of the transformer meta-language, and of the target language. The first and the third lambdas are spelled the same and are the same.
Re: >> and do notation
On Tue, 2 Apr 2002 10:00:37 +0200 (MET DST), John Hughes [EMAIL PROTECTED] wrote:

> > If (as a human reader of a programme) I see
> >
> >     do a <- thing1
> >        expression
> >
> > and I notice (perhaps after some modifications) that a is not present
> > in expression, then I /really/ don't want a change to
> >
> >     do thing1
> >        expression
> >
> > to change the meaning of the programme.
>
> I think the point that's being missed in this discussion is that a monad
> is an *abstract* type, and sometimes the natural equality on the abstract
> type is not the same as equality on representations. Of course, we want
> the left and right hand sides of the monad laws to have the same meaning,
> and we want >> to mean >>= \_ ->, but this meaning is really up to the
> abstract equality, not the concrete one. If we can give >> a more
> efficient implementation, which constructs a better representation than
> >>= does, that's fine, as long as the two representations mean the same
> thing.

Point taken, but I remain unconvinced. What comes out of the monad /isn't/ abstract; there's nothing to ensure that subsequent use respects the abstraction.

> To be specific, the application that kicked off this discussion was
> program generation. In this example, it's not important that >> and >>=
> generate the same *program text*, the important thing is that they
> generate equivalent programs. If >> can more easily generate a more
> efficient program, that's fine.

Is it fine, though? Since it will be possible to inspect the generated code, it's possible that a change from do {_ <- A; B} to do {A; B} can radically alter the subsequent behaviour of the programme.

> There's another example in QuickCheck, where we use a monad Gen for
> random value generation -- Gen a is a generator for random values of
> type a. Gen does not satisfy the monad laws: for example, g and
> g >>= return will usually generate different values. But viewed as
> *probability distributions* (which is how we think of them), they are
> the same. Morally, Gen is a monad.
Well, aren't there cases where generating the /same/ pseudo-random sequences is important? In such a case, making what looks like a semantically neutral change to the programme might invalidate years of stored results.

> This is all perfectly respectable, and has to do with the fact that
> Haskell is a programming language, not mathematics -- so we represent
> equivalence classes by values chosen from them. For the *language* to
> rule out constructing different representations for equivalent
> constructions, such as >> and >>=, would be unreasonable.

Here's the problem. Your argument sounds very similar to the one against type checking. That /you/ can get it right doesn't encourage me to believe that J Random Hacker isn't going to abuse the facility. It's not as if you couldn't define >>=! and >>! for something that's nearly a monad; it would just stop you using the do notation, which is, I think, fair, since Haskell provides no way of selecting the correct form of equality for do {_ <- A; B} == do {A; B}. Jón -- Jón Fairbairn [EMAIL PROTECTED] 31 Chalmers Road [EMAIL PROTECTED] Cambridge CB1 3SZ +44 1223 570179 (after 14:00 only, please!)
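For reference, the two do-forms the thread keeps contrasting desugar as follows; a sketch using the standard translation (the helper names are mine):

```haskell
-- Standard desugaring of the two do-forms discussed in this thread.
desugarBind :: Monad m => m a -> m b -> m b
desugarBind a b = a >>= \_ -> b   -- do { _ <- a; b }

desugarThen :: Monad m => m a -> m b -> m b
desugarThen a b = a >> b          -- do { a; b }
```

The default definition of (>>) is m >> k = m >>= \_ -> k, so for a law-abiding Monad instance the two agree; the dispute above is about instances that override (>>) with something only abstractly, not representationally, equal.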
Inefficient sort implementation
I am wondering about a design decision in the List module. To wit: sort, in both the H98 library report and the Hugs file List.hs, is implemented using a quadratic sort (insertion sort). Using the name sort certainly suggests that this is a good function to use for sorting. I would think it is pretty obvious that sort should be implemented using an efficient algorithm. I am thinking primarily of merge sort, which has O(n log n) worst case behavior, is stable, and has an elegant and efficient functional implementation. Certainly insertion sort is good to have around, if one is sorting data that is nearly sorted already, but I would say insertion sort is clearly not the best choice (or even a good choice) for a general-purpose sorting algorithm. Or am I missing something? = Glenn G. Chappell [EMAIL PROTECTED] Dept. of Mathematical Sciences * Ofc: (907) 474-5736 University of Alaska Fairbanks * Fax: (907) 474-5394
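The "elegant and efficient functional implementation" mentioned above is usually written along these lines; one standard formulation, not the poster's code:

```haskell
-- A stable O(n log n) merge sort, the kind of implementation the
-- post argues should back the library's `sort'.
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort front) (msort back)
  where (front, back) = splitAt (length xs `div` 2) xs

-- `merge` takes the left element on ties, which makes the sort stable.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys
```

Worst-case O(n log n) follows from splitting in half at each level; stability follows from the tie-breaking in `merge`.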
JVM-Bridge Current Status
1. Changes So Far

The following changes have been checked into CVS since the 0.1 release:

Native/
 * MacOS X support
 * Beginnings of mingw32 support
 * New Sun 1.4 JVM
 * Now installs in /usr/lib/jvm-bridge/
 * ExecuteFunction bindings now done at run-time

Haskell/
 * MacOS X support
 * 'Haskell' now a separate Autotools project installing in /usr/lib/jvm-bridge/
 * New array region access functions
 * Correct array types in autogenerated class interfaces
 * Subtype parameters in autogenerated class interfaces
 * VMLayer 'env' parameter now implicit
 * Various changes in the BasicLayer
 * New native StartExecuteFunction function called by getExecuteFunctionClass, defineCallbackClass

I recommend using GHC 5.02.2 or later. Possibly it might work with 5.02.

2. Current Work

MacOS X callback support is buggy, and I'm currently working on it. It might be a problem with ghc-5.03-13032002-MacOSX (foreign export dynamic), or it might be a bug in JVM-Bridge. Thomas Pasch [EMAIL PROTECTED] has done some work on getting JVM-Bridge to compile under mingw32. I have integrated some of his patches. I do not have a Windows machine myself. I would like to do a 0.2 release when both examples work on all three platforms.

3. New Mailing List

I have created a new mailing list: http://lists.sourceforge.net/lists/listinfo/jvm-bridge-devel If you are using JVM-Bridge I encourage you to subscribe. -- Ashley Yakeley, Seattle WA
handling errors in Happy
Is it possible to get the result of the function happyError in the main module of my program (which imports the module generated by Happy)? Thanks a lot, -- Andre
Re: sort inefficiency
Glenn G. Chappell [EMAIL PROTECTED] writes:

> I am wondering about a design decision in the List module. To wit:
> sort, in both the H98 library report and the Hugs file List.hs, is
> implemented using a quadratic sort (insertion sort). [...] Or am I
> missing something?

The Standard library specifies only the map related to the name `sort'. This map can be described, for example, via a sort-by-insertion program. And the algorithm choice is a matter of each particular implementation; an implementation has the right to change the algorithm. For example, I once tried to argue with GHC for incorporating the mergeSort algorithm for `sort'. But they have some version of quickSort which is said to be faster on average and O(n^2) in the worst case. Disliking this n^2 hazard, I overload `sort' with my_sort. - Serge Mechveliani [EMAIL PROTECTED]
Re: >> and do notation
On Tue, Apr 02, 2002 at 08:48:41PM +0100, Jon Fairbairn wrote:

> Point taken, but I remain unconvinced. What comes out of the monad
> /isn't/ abstract; there's nothing to ensure that subsequent use
> respects the abstraction.

Sure. That's the programmer's responsibility to keep track of. To me the situation seems entirely analogous to defining a '+' operation that is not associative; if the programmer wants to do it, more power to her. (In fact, the standard '+' on floating point numbers is not associative. Sometimes it matters!) These considerations are the reasons compilers are typically prohibited from taking advantage of such laws, and why the translation from the 'do' notation should be the obvious one (using '>>'). Best, Dylan Thurston
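The floating-point aside is easy to check; a quick sketch:

```haskell
-- Floating-point (+) is not associative, as noted in the post above.
left, right :: Double
left  = (0.1 + 0.2) + 0.3   -- 0.6000000000000001
right = 0.1 + (0.2 + 0.3)   -- 0.6

notAssociative :: Bool
notAssociative = left /= right
```

So a compiler that reassociated (+) on Double would change observable results, which is exactly why such "laws" cannot be assumed by the language.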
Re: Ex 9.9 in Paul Hudak's book
At 10:41 1/04/2002 -0500, Paul Hudak wrote:

> It's really not as obscure as it first seems. A fixpoint of a function
> foo is a value x such that foo x = x. The fix operator tells us one
> way (not the only way) to generate such a fixpoint:
>
>     fix foo = foo (fix foo)
>
> Note from this equation that (fix foo) is, by definition (!), a
> fixpoint of foo, since foo (fix foo) is just (fix foo).

Thanks for the explanations. I still have difficulties understanding the equation, but I thought about it a long time and here are my thoughts. I consider the following line as a function definition in Haskell: fix foo = foo (fix foo) That is, a simple rule for rewriting the term to the left of the equal sign. It is Haskell (syntactic), not mathematics (from my point of view, which is maybe wrong). Not all mathematical functions have a fixpoint. So the fix function can only be used with Haskell functions that have a fixpoint, and I think those are the ones that have a base case (for the recursive ones). So one can think of the fix function as a fixpoint finder (or just a searcher if there is no base case in the recursion). Generally speaking, if you take any mathematical function that has a fixpoint, the function fix will only lead to the fixpoint if the function converges to the fixpoint from the chosen start point. But thanks to the mechanism of lazy evaluation, the argument (fix foo) in foo (fix foo) is not evaluated before it is needed, that is, before we know we are in the inductive case. The fix function can also be viewed as a way to provide as many functions as needed -- infinitely many, in fact. I was wondering if I am right or not? Ludovic ___ Haskell-Cafe mailing list [EMAIL PROTECTED] http://www.haskell.org/mailman/listinfo/haskell-cafe
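A concrete instance may help here. With the definition from the thread, lazy evaluation lets fix tie the recursive knot; the factorial below is my illustration, not from the exchange:

```haskell
-- fix as defined in the thread: fix foo rewrites to foo (fix foo).
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial obtained as a fixpoint: the base case (n == 0) stops
-- the otherwise infinite unfolding, just as the post observes.
fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Because (fix f) is only demanded when the \rec\ argument is actually applied, the unfolding proceeds one layer at a time and terminates once the base case is reached.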
Re: newbie:: getting random Ints
[EMAIL PROTECTED] (Ketil Z. Malde) writes:

[snip OP]

> Depending on why you need random numbers, you might want to consider
> using a fixed seed, i.e. a = randoms (mkStdGen 4711) will get you a
> sequence of random numbers. Of course, it will be the same sequence
> for every run of your program, but in some cases (when you only want
> arbitrary data, and not necessarily random) this won't matter much.
> The advantage is, of course, that you get rid of the IO monad. I've
> toyed with the idea of using unsafePerformIO (at least to get the
> seed), but haven't quite dared (or needed) to yet. :-)

well, thanks to all for the help. in my case, it was not OK to have arbitrary data, i needed (pseudo-)random numbers for different runs of the program. i want to:

 - generate random Ints
 - do some arbitrary computations on them to generate a [[Int]]
 - compare each [Int] in the list with a list of [Int] known at compile time

the functions below seem to be doing what i need; i'm posting the code in case it helps other newbies get there a bit faster than i did. any constructive criticism / pointing out of errors most welcome.

code:

import Random
import IOExts (unsafePerformIO)  -- needed for unsafePerformIO below

rollDice :: IO Int
rollDice = getStdRandom (randomR (1,6))

getRandomSeed :: IO Int
getRandomSeed = do retval <- rollDice
                   return retval

getRandomSeedInt :: IO Int -> Int
getRandomSeedInt x = unsafePerformIO x

getARange :: Int -> Int -> [Int]
getARange x y = randomRs (x,y) (mkStdGen (getRandomSeedInt getRandomSeed))

getRandomInt :: Int -> Int
getRandomInt x = head (take 1 (getARange 0 x))

output:

Main> take 20 (getARange 0 10)
[5,8,1,8,2,8,9,7,1,4,6,2,5,8,6,2,10,0,7,10]
Main> take 20 (getARange 0 10)
[9,8,9,9,7,2,4,5,1,7,2,2,8,2,5,10,5,3,1,8]

thanks and regards, peter
Re: sort inefficiency
On Wed, Apr 03, 2002 at 09:35:51AM +0400, Serge D. Mechveliani wrote:

> The Standard library specifies only the map related to the name
> `sort'. This map can be described, for example, via a
> sort-by-insertion program. And the algorithm choice is a matter of
> each particular implementation. Implementation has right to change
> the algorithm.

Reading this, it occurred to me that if you're very picky the implementation probably isn't allowed to pick the algorithm: you need to assume that '<=' is actually a total order to have much leeway at all. (Suppose, e.g., that comparing two particular elements yields an exception.) It seems to me this is a problem with providing code as specification: you probably fix the details more than you want. Best, Dylan Thurston